Friday 27 October 2017

On Tegmark's Life 3.0

I've just read Max Tegmark's new book Life 3.0: Being Human in the Age of Artificial Intelligence.1 My expectations were very high after his previous book Our Mathematical Universe, and yes, his new book is also good. In Tegmark's terminology, Life 1.0 is most of biological life, whose hardware and software are both evolved; Life 2.0 is us, whose hardware is mostly evolved but who mostly create our own software through culture and learning; and Life 3.0 is future machines that design their own software and hardware. The book is about what a breakthrough in artificial intelligence (AI) might entail for humanity - and for the rest of the universe. To a large extent it covers the same ground as Nick Bostrom's 2014 book Superintelligence, but with more emphasis on cosmological perspectives and on the problem of consciousness. Beyond that, I found less novelty in Tegmark's book compared to Bostrom's than I had expected, but one difference is that while Bostrom's book is a rather demanding scholarly read, Tegmark's is more clearly directed at a broader audience, and is in fact a very pleasant and easy read.

There is of course much I could comment upon in the book, but to keep this blog post short, let me zoom in on just one detail. The book's Figure 1.2 is a very nice diagram of ways to view the possibility of a future superhuman AI, with expected goodness of the consequences (utopia vs dystopia) on the x-axis, and expected time until its arrival on the y-axis. In his discussion of the various positions, Tegmark emphasizes that "virtually nobody" expects superhuman AI to arrive within the next few years. This is what pretty much everyone in the field - including myself - says. But I've been quietly wondering for some time what the actual evidence is for the claim that the required AI breakthrough will not happen in the next few years.2 Almost simultaneously with reading Life 3.0, I read Eliezer Yudkowsky's very recent essay There’s No Fire Alarm for Artificial General Intelligence, which draws attention to the fact that all of the empirical evidence that is usually held forth in favor of the breakthrough not being imminent describes a general situation that can be expected to still hold at a time very close to the breakthrough.3 Hence the purported evidence is not very convincing.

Now, unless I misremember, Tegmark doesn't actually say in his book that he endorses the view that a breakthrough is unlikely to be imminent - he just says that this is the consensus view among AI futurologists. Perhaps this is not an accident? Perhaps he has noticed the lack of evidence for the position, but chooses not to advertise this? I can see good reasons to keep a low profile on this issue. First, when one discusses topics that may come across as weird (AI futurology clearly is such a topic), one may want to somehow signal sobriety - and saying that an AI breakthrough in the next few years is unlikely may serve as such a signal. Second, there is probably no use in creating panic, as solving the so-called Friendly AI problem seems unlikely to be doable in just a few years. Perhaps one can even make the case that these reasons ought to have compelled me not to write this blog post.

Footnotes

1) I read the Swedish translation, which I typically do not do with books written in English, but this time I happened to receive it as a birthday gift from the translators Helena Sjöstrand Svenn and Gösta Svenn. The translation struck me as highly competent.

2) An even more extreme version of this question is to ask what the evidence is that, in a version of Bostrom's (2014) notion of the treacherous turn, the superintelligent AI already exists and is merely biding its time. It was philosopher John Danaher, in his provocative 2015 paper Why AI doomsayers are like sceptical theists and why it matters, who brought attention to this matter; see my earlier blog post A disturbing parallel between AI futurology and religion.

3) Here is Yudkowsky's own summary of this evidence:
    Why do we know that AGI is decades away? In popular articles penned by heads of AI research labs and the like, there are typically three prominent reasons given:

    (A) The author does not know how to build AGI using present technology. The author does not know where to start.

    (B) The author thinks it is really very hard to do the impressive things that modern AI technology does, they have to slave long hours over a hot GPU farm tweaking hyperparameters to get it done. They think that the public does not appreciate how hard it is to get anything done right now, and is panicking prematurely because the public thinks anyone can just fire up Tensorflow and build a robotic car.

    (C) The author spends a lot of time interacting with AI systems and therefore is able to personally appreciate all the ways in which they are still stupid and lack common sense.

Monday 23 October 2017

The AI meeting in Brussels last week

I owe my readers a report from the seminar entitled "Should we fear the future? Is it rational to be optimistic about artificial intelligence?" at the European Parliament's STOA (Science and Technology Options Assessment) committee in Brussels last Thursday. In my opinion, all things considered, the event turned out OK, and it was a pleasure to meet and debate with the event's main speaker Steven Pinker as well as with co-panelists Miles Brundage and Thomas Metzinger.1 I'd just like to comment on Pinker's arguments for why we should not take seriously or publicly discuss the risk of an existential catastrophe caused by the emergence of superintelligent AGI (artificial general intelligence). His arguments essentially boil down to the following four points, which in my view fail to show what he intends.
    1. The general public already has the nuclear threat and the climate threat to worry about, and bringing up yet another global risk may overwhelm people and cause them to simply give up on the future. There may be something to this speculation, but to evaluate the argument's merit we need to consider separately the two possibilities of
      (a) apocalyptic AI risk being real, and

      (b) apocalyptic AI risk being spurious.

    In case of (b), of course we should not waste time and effort on discussing such risk, but we didn't need the overwhelming-the-public argument to understand that. Consider instead case (a). Here Pinker's recommendation is that we simply ignore a threat that may kill us all. This does not strike me as a good idea. Surviving the nuclear threat and solving the climate crisis would of course be wonderful things, but their value is severely diminished if they merely lead us into an AI apocalypse. Keeping quiet about a real risk also seems to fly straight in the face of one of Pinker's most dearly held ideas, namely that of scientific and intellectual openness, and Enlightenment values more generally. The same thing applies to the situation where we are unsure whether (a) or (b) holds - surely the approach best in line with Enlightenment values is then to openly discuss the problem and to try to work out whether the risk is real.

    2. Pinker held forth a bunch of concerns that seemed more or less copy-and-pasted from standard climate denialism discourse. These included the observation that the Millennium bug did not cause a global catastrophe, whence (or so the argument goes) no global catastrophe can be expected from a superintelligent AGI (analogously to the oft-repeated claim that since the ancient Greeks' fear that the sky would fall down turned out to be unfounded, greenhouse gas emissions cannot accelerate global warming in any dangerous way), as well as speculations about the hidden motives of those who discuss AI risk - they are probably just competing for status and research grants. This is not impressive. See also yesterday's blog post by my friend Björn Bengtsson for more on this; it is to him that I owe the (in retrospect obvious) parallel to climate denialism.

At this point, one may wonder why Pinker doesn't do the consistent thing, given these arguments, and join the climate denialism camp. He would probably respond that unlike AI risk, climate risk is backed up by solid scientific evidence. And indeed the two cases are different - the case for climate risk is considerably more solid - but the problem with Pinker's position is that he hasn't even bothered to find out what the science of AI risk says. This brings me to the next point.
    3. All the apocalyptic AI scenarios involve the AI having bad goals, which leads Pinker to reflect on why in the world anyone would program the machine with bad goals - let's just not do that! This is essentially the idea of the so-called Friendly AI project (see Yudkowsky, 2008, or Bostrom, 2014), but what Pinker does not seem to appreciate is that the project is extremely difficult. He went on to ask why in the world anyone would be so stupid as to program self-preservation at all costs into the machine, and this in fact annoyed me slightly, because it happened just 20 or so minutes after I had sketched the Omohundro-Bostrom theory for how self-preservation and various other instrumental goals are likely to emerge spontaneously (i.e., without being explicitly put into the machine by human programmers) in any sufficiently intelligent AGI. (A minimal toy illustration of this point follows after this list.)

    4. In the debate, Pinker described (as he had done several times before) the superintelligent AGI in apocalyptic scenarios as having a typically male psychology, but pointed out that it can equally well turn out to have more female characteristics (things like compassion and motherhood), in which case everything will be all right. This is just another indication of how utterly unfamiliar he is with the literature on possible superintelligent psychologies. His male-female distinction in the general context of AGIs is just barely more relevant than the question of whether the next exoplanet we discover will turn out to be male or female.
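To make the Omohundro-Bostrom point in item 3 a bit more concrete, here is a minimal toy sketch of my own (it is not taken from the debate, from Tegmark's book, or from the literature, and all names and numbers in it are made up): an agent whose only explicitly programmed goal is paperclip production nevertheless ends up preferring the action that reduces its risk of being switched off, simply because being switched off means fewer expected paperclips.

    # Toy illustration (my own, not from Omohundro or Bostrom directly): an
    # expected-utility maximizer whose only goal is paperclip production still
    # "discovers" that avoiding shutdown is instrumentally useful, since being
    # shut down means zero further production. All numbers are made up.

    HORIZON = 10          # remaining time steps (hypothetical)
    CLIPS_PER_STEP = 1.0  # paperclips produced per step while still running

    def expected_paperclips(p_shutdown_per_step):
        """Expected total paperclip production over the horizon, given a fixed
        per-step probability of being shut down before that step's output."""
        total = 0.0
        p_alive = 1.0
        for _ in range(HORIZON):
            p_alive *= (1.0 - p_shutdown_per_step)  # must survive this step
            total += p_alive * CLIPS_PER_STEP
        return total

    # Two available actions; note that neither is about self-preservation per se:
    #   "comply" - leave the off-switch alone (higher shutdown probability)
    #   "resist" - disable the off-switch first (lower shutdown probability)
    actions = {"comply": 0.30, "resist": 0.01}

    for action, p in actions.items():
        print(f"{action}: expected paperclips = {expected_paperclips(p):.2f}")

    best = max(actions, key=lambda a: expected_paperclips(actions[a]))
    print("chosen action:", best)  # prints "resist": self-preservation emerges

The point of the toy example is only that the preference for not being switched off falls out of the expected-utility calculation; nobody programmed "self-preservation at all costs" as an explicit goal, and analogous calculations lie behind the other instrumental goals (resource acquisition, goal-content integrity, and so on) in the Omohundro-Bostrom framework.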

In summary, I don't think that Pinker's arguments for why we should not talk about risks associated with an AI breakthrough hold water. On the contrary, I believe there's an extremely important discussion to be had on that topic, and I wish we had had time to delve a bit deeper into it in Brussels. Here is the video from the event.

(Or watch the video via this link, which may in some cases work better.)

Footnotes

1) My omission here of the third co-panelist Peter Bentley is on purpose; I did not enjoy his presence in the debate. In what appeared to be an attempt to compensate for the hollowness of his arguments,2 he resorted to assholery on a level that I rarely encounter in seminars and panel discussions: he expressed as much disdain as he could for opposing views, he interrupted and stole as much microphone time as he could get away with, and he made faces while other panelists were speaking.

2) After spending a disproportionate amount of his allotted time on praising his own credentials, Bentley went on to defend the idea that we can be sure that a breakthrough leading to superintelligent AGI will not happen. For this, he had basically one single argument, namely his and other AI developers' experience that all progress in the area requires hard work, that any new algorithm they invent can only solve one specific problem, and that initial success of the algorithm is always followed by a point of diminishing returns. Hence (he stressed), solving another problem always requires the hard work of inventing and implementing yet another algorithm. This line of argument conveniently overlooks the known fact (exemplified by the software of the human brain) that there do exist algorithms with a more open-ended problem-solving capacity. It is also essentially identical to item (B) in Eliezer Yudkowsky's eloquent summary, in his recent essay There's No Fire Alarm for Artificial General Intelligence, of the typical arguments held forth for the position that an AGI breakthrough is either impossible or lies far in the future. Quoting from Yudkowsky:
    Why do we know that AGI is decades away? In popular articles penned by heads of AI research labs and the like, there are typically three prominent reasons given:

    (A) The author does not know how to build AGI using present technology. The author does not know where to start.

    (B) The author thinks it is really very hard to do the impressive things that modern AI technology does, they have to slave long hours over a hot GPU farm tweaking hyperparameters to get it done. They think that the public does not appreciate how hard it is to get anything done right now, and is panicking prematurely because the public thinks anyone can just fire up Tensorflow and build a robotic car.

    (C) The author spends a lot of time interacting with AI systems and therefore is able to personally appreciate all the ways in which they are still stupid and lack common sense.

The inadequacy of these arguments lies in the observation that the same situation can be expected to hold five years prior to an AGI breakthrough, or one year, or... (as explained by Yudkowsky later in the same essay).

At this point a reader or two may perhaps be tempted to ask: if (A)-(C) are not considered sufficient evidence against the prospect of a future superintelligence, then how in the world could such a claim be falsified, and doesn't this cast doubt on whether the study of apocalyptic AI risk should count as scientific? I advise those readers to consult my earlier blog post Vulgopopperianism.

Friday 13 October 2017

More places to read about our existential risk program

No reasonably faithful reader of this blog can have failed to notice the guest researcher program entitled Existential risk to humanity currently underway at Chalmers and the University of Gothenburg. Our specially invited guest professor Anders Sandberg offers on his own blog what is in my opinion by far the best summary of what we are up to in the program, but I am also pleased about the media coverage we are getting. I have previously flagged features in DN, in P1 and in P4, and would now like to mention three more:

Thursday 5 October 2017

Videos from the existential risk workshop

The public workshop we held on September 7-8 as part of the ongoing GoCAS guest researcher program on existential risk featured many interesting talks. The talks were filmed, and we have now posted most of the videos on our own YouTube channel. They can of course be watched in any order, although to maximize the illusion of being present at the event, one might follow the list below, in which they appear in the same order as at the workshop. Enjoy!

Monday 2 October 2017

A few dates to keep track of in October

A lot is happening this month. Here are a few dates that I personally have some extra reason to keep track of: