Thursday 17 May 2018

An open letter to Google

Dear Google,

I recently became aware of Project Maven, in which you assist the Pentagon in applying cutting-edge AI technology to give military drones the ability to identify humans. And I thought this is as good a time as any to remind you of these little words of wisdom: Don't be evil.

Let me repeat:

Don't.

Be.

Evil.

With kind regards,

Olle Häggström
Concerned world citizen

Monday 14 May 2018

Two well-known arguments why an AI breakthrough is not imminent

Much of my recent writing has concerned future scenarios where an artificial intelligence (AI) breakthrough leads to a situation where we humans are no longer the smartest agents on the planet in terms of general intelligence, in which case we cannot (I argue) count on automatically remaining in control; see, e.g., Chapter 4 in my book Here Be Dragons: Science, Technology and the Future of Humanity, or my paper Remarks on artificial intelligence and rational optimism. I am aware of many popular arguments for why such a breakthrough is not around the corner but can only be expected in the far future or not at all, and while I remain open to the conclusion possibly being right, I typically find the arguments themselves at most moderately convincing.1 In this blog post I will briefly consider two such arguments, and give pointers to related and important recent developments. The first such argument is one that I've considered silly for as long as I've given any thought at all to these matters; this goes back at least to my early twenties. The second argument is perhaps more interesting, and in fact one that I've mostly been taking very seriously.

1. A computer program can never do anything creative, as all it does is to blindly execute what it has been programmed to do.

This argument is hard to take seriously, because if we do, we must also accept that a human being such as myself cannot be creative, as all I can do is to blindly execute what my genes and my environment have programmed me to do (this touches on the tedious old free will debate). Or we might actually bite that bullet and accept that humans cannot be creative, but with such a restrictive view of creativity the argument no longer works, as creativity will not be needed to outsmart us in terms of general intelligence. Anyway, the recent and intellectually crowd-sourced paper The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities offers a fascinating collection of counterexamples to the claim that computer programs cannot be creative.

2. We should distinguish between narrow AI and artificial general intelligence (AGI). Since the current AI hype is all about a bunch of narrow AI applications, it is irrelevant to AGI, and we should therefore not be taken in by it when judging whether an AGI breakthrough is on its way.

The dichotomy between narrow AI and AGI is worth emphasizing, as UC Berkeley computer scientist Michael Jordan does in his interesting recent essay Artificial Intelligence - The Revolution Hasn't Happened Yet. His essay offers a healthy dose of skepticism concerning the imminence of AGI. And while the claim that progress in narrow AI is not automatically a stepping stone towards AGI seems right, the oft-repeated stronger claim that no progress in narrow AI can serve as such a stepping stone seems unwarranted. Can we be sure that the poor guy in the cartoon on the right (borrowed from Ray Kurzweil's 2005 book; click here for a larger image) can carry on much longer in his desperate production of examples of what only humans can do? Do we really know that AGI will not eventually emerge from a sufficiently broad range of specialized AI capabilities? Can we really trust Thore Husfeldt's image2 suggesting that Machine Learning Hill is just an isolated hill rather than a slope leading up towards Mount Improbable where real AGI can be found? I must admit that my confidence in such a topography of the landscape of computer programs has been somewhat eroded by recent dramatic advances in AI applications. I have previously mentioned as an example AlphaZero's extraordinary and self-taught way of playing chess, made public in December last year. Even more striking is last week's demonstration of the Google Duplex personal assistant's ability to carry on intelligent phone conversations. Have a look:3

Footnotes

1) See Eliezer Yudkowsky's recent There’s No Fire Alarm for Artificial General Intelligence for a very interesting comment on the lack of convincing arguments for the non-imminence of an AI breakthrough.

2) The image appears some 22:30 into the video, but I really recommend watching Thore's entire talk, which is both instructive and entertaining, and which I had the privilege of hosting in Gothenburg last year.

3) See also the accompanying video exhibiting a wider range of Google AI products. I am a bit dismayed by its evangelical tone: we are told what wonderful enhancements of our lives these products offer, and there is no mention at all of possible social downsides or risks. Of course I realize that this is the way of the commercial sector, but I also think a company of Google's unique and stupendous power has a duty to rise above that narrow-minded logic. Don't be evil, goddamnit!

Thursday 3 May 2018

Recommended reading: Aaronson on Caplan on education

The American economist Bryan Caplan's The Case Against Education: Why the Education System is a Waste of Time and Money is one of the most disturbing books I have read in a long time, and it is so by virtue of presenting multifaceted empirical evidence and strong arguments for conclusions I find repugnant. Caplan's conclusions can be summarized in two main points. First, that education is on the whole less about acquiring knowledge than about signaling one's general employability (which Caplan boils down to the three main components intelligence, conscientiousness and conformity). Second, that while education is (often) profitable for the individual, it is (on the whole) unprofitable for society. The two conclusions are related in the following way: if it is true that education is mostly about signaling in order to gain a competitive edge on the labor market, then it is natural to expect that the positive payoff I get from my education is to a large extent canceled by the negative payoff (still from my education) that accrues to my potential competitors on the labor market, who end up in a weaker competitive position relative to me. If their combined negative payoff is large enough, education can go from being profitable for me to being unprofitable at the aggregate (i.e., societal) level.
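
To make the cancellation argument concrete, here is a minimal toy calculation in Python - my own illustration with made-up numbers, not something taken from Caplan's book:

    # Toy illustration of Caplan's signaling argument (made-up numbers).
    private_cost = 200_000   # hypothetical cost of my education (tuition + forgone wages)
    my_wage_gain = 300_000   # hypothetical lifetime wage premium I capture
    rivals_loss  = 250_000   # hypothetical combined wage loss of the competitors I now outrank

    private_return = my_wage_gain - private_cost                 # +100,000: worth it for me
    social_return  = my_wage_gain - rivals_loss - private_cost   # -150,000: a net loss for society

    print(f"private return: {private_return:+,}")
    print(f"social return:  {social_return:+,}")

If the degree mostly reshuffles positions in the job queue rather than creating new skills, the private and social returns can have opposite signs, which is exactly the wedge that Caplan's second conclusion rests on.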

What, then, do I find repugnant about these conclusions? Very briefly: I am a warm friend of education and have always believed in it as a road to general prosperity, but if Caplan is right that education is unprofitable at the societal level, then I am forced to revise at least the part about education as a road to general prosperity.

Somewhat less briefly, it is about the internalization of externalities, a principle from economics which (in incomplete summary) says that when an individual's actions have negative repercussions on third parties, the best outcome for the whole is achieved if the social system is corrected in such a way that the individual herself bears the costs of those repercussions. Carbon dioxide is a good example: when I drive a car and thereby emit carbon dioxide into the atmosphere, the resulting negative climate impact falls mostly on people other than myself, and the externality-internalization principle then says that my emissions ought to be taxed. A very good idea, and I am in general a friend of internalizing externalities (as a rule of thumb in public policy, though not as a sacred principle to be followed dogmatically). The problem here is that if we stick to that principle, and if Caplan's conclusions are correct, then education ought to be taxed, rather than subsidized as is the case today. Truly a repugnant thought!
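
The Pigouvian logic can likewise be spelled out in a few lines of Python - again my own toy numbers, intended only to show how the sign of the externality decides between tax and subsidy:

    # Toy Pigouvian illustration (made-up numbers): an activity is worth 120 to me,
    # costs me 100, and imposes a cost of 50 on third parties.
    private_benefit, private_cost, external_cost = 120, 100, 50

    i_do_it_untaxed = private_benefit - private_cost > 0              # True: privately profitable
    social_value    = private_benefit - private_cost - external_cost  # -30: socially unprofitable

    # Internalizing the externality: charge me the full external cost.
    tax = external_cost
    i_do_it_taxed = private_benefit - private_cost - tax > 0          # False: now I abstain

    print(i_do_it_untaxed, social_value, i_do_it_taxed)

With the sign reversed - a positive externality - the same formula yields a subsidy; if Caplan is right that education's externality is negative, it yields a tax.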

The problem for me personally is that I tenderly nurture an image of myself as rational, and if I accept a set of arguments as strong without finding at least equally strong counterarguments, then rationality demands that I accept their conclusions. This has created a tension in my head that has remained unresolved ever since I read Caplan's book a couple of months ago, and I have therefore dearly wished to find counterarguments powerful enough to sink the education-skeptical conclusions. I have not found them yet, but Scott Aaronson's ambitious review of the book is a step in the desired direction, and the best such step I have come across so far. The review is well worth reading, not least for anyone who wants to understand the main points of Caplan's book and the empirical evidence it invokes without having to read an entire book, because Aaronson gives a thorough and fair summary of these before setting about his counterarguments. For anyone with a serious intellectual interest in education, I am almost inclined to declare it mandatory to engage with Caplan's line of thought, either via Aaronson or by going directly to the book.

Monday 16 April 2018

An attack on the academic-romantic and economistic-vulgar outlooks

In October last year I contributed to a symposium entitled Vetenskaplig redlighet och oredlighet (scientific integrity and misconduct), arranged by Kungliga Vetenskaps- och Vitterhets-Samhället i Göteborg, and was afterwards asked to recast my talk in written form for publication in a volume devoted to the symposium. I have now finished my essay, which (like my talk) was given the title... The essay can be described as a 10-page distillate of the position on research ethics that permeates my 2016 book Here Be Dragons: Science, Technology and the Future of Humanity - a position that I define in contrast to the commonly held outlooks I have chosen to call the academic-romantic and the economistic-vulgar (of which I myself harbor a leaning towards the former, though here I distance myself from it). If anyone feels that the essay comes across as one long moral lecture, then... well, yes, there is probably something to that. But read it anyway!

Wednesday 11 April 2018

I am against killer robots, and I hope the Swedish Riksdag agrees

In the summer of 2015 I joined thousands of other researchers in signing an open letter calling for a moratorium on the development of AI technology for so-called autonomous weapons - or killer robots, as they are also known by a somewhat less polite term. The gravity of the issue is evident from the following passage in the letter:
    If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.
Tomorrow, Thursday, the Swedish Riksdag has the opportunity to take the corresponding stand by approving a motion from the Green Party MP Carl Schlyter that Sweden should work for an international ban on killer robots. In an op-ed in today's issue of Dagens Samhälle, Anders Sandberg, Carin Ism, Max Tegmark, Markus Anderljung and I express the hope that the Riksdag will take this small step on the road towards a safer and better world.

Friday 6 April 2018

Vetenskapsfestivalen 2018 is approaching

This year's Vetenskapsfestivalen (the Gothenburg science festival) is approaching, and the program, packed as usual, includes no fewer than three events in which I take part:
  • Artificiell intelligens - bara av godo? (Artificial intelligence - purely a good thing?). On this topic I will be in conversation with Ulrika Lindstrand and Lisa Bondesson (both from Sveriges Ingenjörer) on Wednesday 18 April at 12.00 at Chalmers, room HB3, Hörsalsvägen 10.

  • My second panel discussion that Wednesday is entitled Vita lögner och andra lögner (White lies and other lies). The other participants are Eva Staxäng (program producer at Jonsereds herrgård), Christian Lenemark (senior lecturer in literary studies at the University of Gothenburg) and Christer Borg (psychologist), and it all takes place on Wednesday 18 April at 17.30 at Pedagogen, Västra Hamngatan 25, Hus A, Kjell Härnqvistsalen.

  • On Sunday 22 April at 15.00 I speak on the topic Konkurrens eller samarbete (Competition or cooperation), which the program summarizes as follows:
      The philosophical idea behind the market economy is that if everyone is driven by self-interest, the outcome will be good for the collective and for society. But is that always true? Worrying examples turn up in areas such as climate change, arms races and artificial intelligence.1
    It takes place at Pedagogen, Västra Hamngatan 25, Hus A, room AK2 137.

Footnote

1) Due to a mix-up on the organizers' part, an earlier version of the program listed both a different title and a different summary, the latter containing a sweeping claim ("self-interested action leads to catastrophic results for the collective") that I do not wish to be associated with and, on the contrary, reject.

Friday 30 March 2018

A spectacularly uneven AI report

Earlier this week, the EU Parliament's STOA (Science and Technology Options Assessment) committee released the report "Should we fear artificial intelligence?", whose quality is so spectacularly uneven that I don't think I've ever seen anything quite like it. It builds on a seminar in Brussels in October last year, which I've reported on before on this blog. Four authors have contributed one chapter each. Of the four chapters, two are of very high quality, one is of a quality level that my modesty forbids me to comment on, and one is abysmally bad. Let me list them here, not in the order they appear in the report, but in one that gives a slightly better dramatic effect.
  • Miles Brundage: Scaling Up Humanity: The Case for Conditional Optimism about Artificial Intelligence.

    In this chapter, Brundage (a research fellow at the Future of Humanity Institute) is very clear about the distinction between conditional optimism and just plain old optimism. He's not saying that an AI breakthrough will have good consequences (that would be plain old optimism). Rather, he's saying that if it doesn't go catastrophically wrong - if it doesn't cause humanity's extinction or throw us permanently into the jaws of Moloch - then there's a chance the outcome will be very, very good (this is conditional optimism).

  • Thomas Metzinger: Towards a Global Artificial Intelligence Charter.

    Here the well-known German philosopher Thomas Metzinger lists a number of risks that come with future AI development, ranging from familiar ones concerning technological unemployment and autonomous weapons to more exotic ones arising from the possibility of constructing machines with the capacity to suffer. He emphasizes the urgent need for legislation and other government action.

  • Olle Häggström: Remarks on Artificial Intelligence and Rational Optimism.

    This text is already familiar to readers of this blog. It is my humble attempt to sketch, in a balanced way, some of the main arguments for why the wrong kind of AI breakthrough might well be an existential risk to humanity.

  • Peter Bentley: The Three Laws of Artificial Intelligence: Dispelling Common Myths.

    Bentley assigns great significance to the fact that he is an AI developer. Thus, he says, he is (unlike us co-contributors to the report) among "the people who understand AI the most: the computer scientists and engineers who spend their days building the smart solutions, applying them to new products, and testing them". Why exactly expertise in developing AI and expertise in AI futurology necessarily coincide in this way (after all, it is rarely claimed that farmers are in a privileged position to make predictions about the future of agriculture) is not explained. In any case, he claims to debunk a number of myths, in order to arrive at the position which is perhaps best expressed in the words he chose to utter at the seminar in October: superhumanly intelligent AI "is not going to emerge, that’s the point! It’s entirely irrational to even conceive that it will emerge" [video from the event, at 12:08:45]. He relies more on naked unsupported claims than on actual arguments, however. In fact, there is hardly any end to the inanity of his chapter. It is very hard to comment on at all without falling into a condescending tone, but let me nevertheless risk listing a few of its very many very weak points:

    1. Bentley pretends to speak on behalf of AI experts - in his narrow sense of what such expertise entails. But it is easy to give examples of leading AI experts who, unlike him, take AI safety and apocalyptic AI scenarios seriously, such as Stuart Russell and Murray Shanahan. AI experts are in fact highly divided on this issue, as demonstrated in surveys. Bentley really should know this, as in his chapter he actually cites one of these surveys (but quotes it in a shamelessly misleading fashion).

    2. In his desperate search for arguments to back up his central claim about the impossibility of building a superintelligent AI, Bentley waves at the so-called No Free Lunch (NFL) theorem. As I explained in my paper Intelligent design and the NFL theorems a decade ago, this result is an utter triviality, which basically says that in a world with no structure at all, no better way than brute force exists if you want to find something (a formal statement is sketched after this list). Fortunately, in a world such as ours, which has structure, the result does not apply. Basically the only thing that the result has going for it is its cool name, something that the creationist charlatan William Dembski exploited energetically to try to give the impression that biological evolution is impossible, and now Peter Bentley is attempting the analogous trick for superintelligent AI.

    3. At one point in his chapter, Bentley proclaims that "even if we could create a super-intelligence, there is no evidence that such a super-intelligent AI would ever wish to harm us". What the hell? Bentley knows about Omohundro-Bostrom theory of instrumental vs final AI goals (see my chapter in the report for a brief introduction) and how it predicts catastrophic consequences in case we fail to equip the superintelligent AI with goals that are well-aligned with human values. He knows it by virtue of having read my book Here Be Dragons (or at least he cites and quotes it), on top of which he actually heard me present the topic at the Brussels seminar in October. Perhaps he has reasons to believe Omohundro-Bostrom theory to be flawed, in which case he should explain why. Simply stating out of the blue, as he does, that no reason exists for believing that a superintelligent AI might turn against us is deeply dishonest.

    4. Bentley spends a large part of his chapter attacking the silly straw man that the mere progress of Moore's law, giving increasing access to computer power, will somehow spontaneously create superintelligent AI. Many serious thinkers speculate about an AI breakthrough, but none of them (not even Ray Kurzweil) think computer power on its own will be enough.

    5. The more advanced an AI gets, the more involved the testing step of its development will be, Bentley claims, and he goes on to argue that the amount of testing needed grows exponentially with the complexity of the situation, essentially preventing rapid development of advanced AI. His premise for this is that "partial testing is not sufficient - the intelligence must be tested on all likely permutations of the problem for its designed lifetime otherwise its capabilities may not be trustable", and to illustrate the immensity of this task he points out that if the machine's input consists of a mere 100 variables that can each take 10 values, then there are 10^100 cases to test. And for readers for whom it is not evident that 10^100 is a very large number, he writes it out in decimal. Oh please. If Bentley doesn't know that "partial testing" is what all engineering projects need to resort to, then I'm beginning to wonder what planet he comes from. Here's a piece of homework for him: calculate how many cases the developers of the latest version of Microsoft Word would have needed to test in order not to fall back on "partial testing", and how many pages would be needed for writing that number in decimal (a back-of-the-envelope sketch of how quickly such numbers explode is given after this list).

    6. Among the four contributors to the report, Bentley is alone in claiming to be able to predict the future. He just knows that superintelligent AI will not happen. Funny, then, that his own observation that "we are terrible at predicting the future, and almost without exception the predictions (even by world experts) are completely wrong" doesn't seem to instill so much as an iota of epistemic humility into his prophecy.

    7. In the final paragraph of his chapter, Bentley reveals his motivation for writing it: "Do not be fearful of AI - marvel at the persistence and skill of those human specialists who are dedicating their lives to help create it. And appreciate that AI is helping to improve our lives every day." He is simply offended! He and his colleagues work so hard on AI, they just want to make the world a better place, and along comes a bunch of other people who have the insolence to come and talk about AI risks. How dare they! Well, I've got news for Bentley: The future development of AI comes with big risks, and to see that we do not even need to invoke the kind of superintelligence breakthrough that is the topic of the present discussion. There are plenty of more down-to-earth reasons to be "fearful" of what may come out of AI. One such example, which I touch upon in my own chapter in the report, is the development of AI technology for autonomous weapons, and how to keep this technology away from the hands of terrorists.
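
    For readers curious about point 2 above: the No Free Lunch theorem that Bentley leans on is, in Wolpert and Macready's 1997 formulation, roughly the following statement (the notation below is theirs, not anything appearing in the report):

        For every pair of search algorithms $a_1$ and $a_2$ and every sample size $m$,
        $$\sum_{f} P(d_m^y \mid f, m, a_1) \;=\; \sum_{f} P(d_m^y \mid f, m, a_2),$$
        where the sum ranges over all objective functions $f\colon X \to Y$ on finite sets $X$ and $Y$,
        and $d_m^y$ denotes the sequence of cost values observed after $m$ distinct evaluations.

    Averaged uniformly over all conceivable objective functions - that is, in a world with no structure whatsoever - every algorithm, blind random search included, performs exactly the same; restrict attention to structured functions like those arising in our world, and the averaging argument, and with it the theorem, no longer applies.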
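
    And for point 5, here is a back-of-the-envelope sketch of how hopeless the demand for exhaustive testing is (the 100-variable figure is Bentley's own; the 3000-switch software example is a hypothetical of mine):

        # Bentley's own example: 100 input variables with 10 possible values each.
        exhaustive_cases = 10 ** 100
        print(f"cases to test exhaustively: {exhaustive_cases:.1e}")                          # 1.0e+100

        # Testing a billion cases per second for the age of the universe (~4e17 seconds)
        # still covers only a vanishing fraction of them.
        tested = 10 ** 9 * 4 * 10 ** 17
        print(f"fraction covered after 13.8 billion years: {tested / exhaustive_cases:.0e}")  # 4e-74

        # Any large piece of ordinary software faces the same explosion: a program with,
        # say, 3000 binary configuration switches has 2**3000 possible settings - a number
        # with over 900 decimal digits. "Partial testing" is what every engineering project,
        # AI or otherwise, has to settle for.
        print(len(str(2 ** 3000)), "decimal digits")                                          # 904 decimal digits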

A few days after the report came out, Steven Pinker tweeted that he "especially recommend[s] AI expert Peter Bentley's 'The Three Laws of Artificial Intelligence: Dispelling Common Myths' (I make similar arguments in Enlightenment Now)". I find this astonishing. Is it really possible that Pinker is that blind to the errors and shortcomings in Bentley's chapter? Is there a name for the fallacy "I like the conclusion, therefore I am willing to accept any sort of crap as arguments"?