Friday, 30 March 2018

A spectacularly uneven AI report

Earlier this week, the EU Parliament's STOA (Science and Technology Options Assessment) committee released the report "Should we fear artificial intelligence?", whose quality is so spectacularly uneven that I don't think I've ever seen anything quite like it. It builds on a seminar in Brussels in October last year, which I've reported on before on this blog. Four authors have contributed one chapter each. Of the four chapters, two are of very high quality, one is of a quality level that my modesty forbids me to comment on, and one is abysmally bad. Let me list them here, not in the order they appear in the report, but in one that gives a slightly better dramatic effect.
  • Miles Brundage: Scaling Up Humanity: The Case for Conditional Optimism about Artificial Intelligence.

    In this chapter, Brundage (a research fellow at the Future of Humanity Institute) is very clear about the distinction between conditional optimism and just plain old optimism. He's not saying that an AI breakthrough will have good consequences (that would be plain old optimism). Rather, he's saying that if it has good consequences, i.e., if it doesn't cause humanity's extinction or throw us permanently into the jaws of Moloch, then there's a chance the outcome will be very, very good (this is conditional optimism).

  • Thomas Metzinger: Towards a Global Artificial Intelligence Charter.

    Here the well-known German philosopher Thomas Metzinger lists a number of risks that come with future AI development, ranging from well-known ones concerning technological unemployment or autonomous weapons to more exotic ones arising from the possibility of constructing machines with the capacity to suffer. He emphasizes the urgent need for legislation and other government action.

  • Olle Häggström: Remarks on Artificial Intelligence and Rational Optimism.

    This text is already familiar to readers of this blog. It is my humble attempt to sketch, in a balanced way, some of the main arguments for why the wrong kind of AI breakthrough might well be an existential risk to humanity.

  • Peter Bentley: The Three Laws of Artificial Intelligence: Dispelling Common Myths.

    Bentley assigns great significance to the fact that he is an AI developer. Thus, he says, he is (unlike us co-contributors to the report) among "the people who understand AI the most: the computer scientists and engineers who spend their days building the smart solutions, applying them to new products, and testing them". Why exactly expertise in developing AI and expertise in AI futurology necessarily coincide in this way (after all, it is rarely claimed that farmers are in a privileged position to make predictions about the future of agriculture) is not explained. In any case, he claims to debunk a number of myths, in order to arrive at the position which is perhaps best expressed in the words he chose to utter at the seminar in October: superhumanly intelligent AI "is not going to emerge, that’s the point! It’s entirely irrational to even conceive that it will emerge" [video from the event, at 12:08:45]. He relies more on naked unsupported claims than on actual arguments, however. In fact, there is hardly any end to the inanity of his chapter. It is very hard to comment on at all without falling into a condescending tone, but let me nevertheless risk listing a few of its very many very weak points:

    1. Bentley pretends to speak on behalf of AI experts - in his narrow sense of what such expertise entails. But it is easy to give examples of leading AI experts who, unlike him, take AI safety and apocalyptic AI scenarios seriously, such as Stuart Russell and Murray Shanahan. AI experts are in fact highly divided on this issue, as demonstrated in surveys. Bentley really should know this, as in his chapter he actually cites one of these surveys (but quotes it in shamelessly misleading fashion).

    2. In his desperate search for arguments to back up his central claim about the impossibility of building a superintelligent AI, Bentley waves at the so-called No Free Lunch theorem. As I explained in my paper Intelligent design and the NFL theorems a decade ago, this result is an utter triviality, which basically says that in a world with no structure at all, no better way than brute force exists if you want to find something. Fortunately, in a world such as ours which has structure, the result does not apply. Basically the only thing that the result has going for it is its cool name, something that creationist charlatan William Dembski exploited energetically to try to give the impression that biological evolution is impossible, and now Peter Bentley is attempting the analogous trick for superintelligent AI.

    3. At one point in his chapter, Bentley proclaims that "even if we could create a super-intelligence, there is no evidence that such a super-intelligent AI would ever wish to harm us". What the hell? Bentley knows about Omohundro-Bostrom theory for instrumental vs final AI goals (see my chapter in the report for a brief introduction) and how it predicts catastrophic consequences in case we fail to equip the superintelligent AI with goals that are well-aligned with human values. He knows it by virtue of having read my book Here Be Dragons (or at least he cites it and quotes it), on top of which he actually heard me present the topic at the Brussels seminar in October. Perhaps he has reasons to believe Omohundro-Bostrom theory to be flawed, in which case he should explain why. Simply stating out of the blue, as he does, that no reason exists for believing that a superintelligent AI might turn against us is deeply dishonest.

    4. Bentley spends a large part of his chapter attacking the silly straw man that the mere progress of Moore's law, giving increasing access to computer power, will somehow spontaneously create superintelligent AI. Many serious thinkers speculate about an AI breakthrough, but none of them (not even Ray Kurzweil) think computer power on its own will be enough.

    5. The more advanced an AI gets, the more involved will the testing step of its development be, claims Bentley, and goes on to argue that the amount of testing needed grows exponentially with the complexity of the situation, essentially preventing rapid development of advanced AI. His premise for this is that "partial testing is not sufficient - the intelligence must be tested on all likely permutations of the problem for its designed lifetime otherwise its capabilities may not be trustable", and to illustrate the immensity of this task he points out that if the machine's input consists of a mere 100 variables that each can take 10 values, then there are 10^100 cases to test. And for readers for whom it is not evident that 10^100 is a very large number, he writes it in decimal. Oh please. If Bentley doesn't know that "partial testing" is what all engineering projects need to resort to, then I'm beginning to wonder what planet he comes from. Here's a piece of homework for him: calculate how many cases the developers of the latest version of Microsoft Word would have needed to test, in order not to fall back on "partial testing", and how many pages would be needed for writing that number in decimal.

    6. Among the four contributors to the report, Bentley is alone in claiming to be able to predict the future. He just knows that superintelligent AI will not happen. Funny, then, that not even his claim that "we are terrible at predicting the future, and almost without exception the predictions (even by world experts) are completely wrong" seems to induce so much as an iota of epistemic humility into his prophecy.

    7. In the final paragraph of his chapter, Bentley reveals his motivation for writing it: "Do not be fearful of AI - marvel at the persistence and skill of those human specialists who are dedicating their lives to help create it. And appreciate that AI is helping to improve our lives every day." He is simply offended! He and his colleagues work so hard on AI, they just want to make the world a better place, and along comes a bunch of other people who have the insolence to come and talk about AI risks. How dare they! Well, I've got news for Bentley: The future development of AI comes with big risks, and to see that we do not even need to invoke the kind of superintelligence breakthrough that is the topic of the present discussion. There are plenty of more down-to-earth reasons to be "fearful" of what may come out of AI. One such example, which I touch upon in my own chapter in the report, is the development of AI technology for autonomous weapons, and how to keep this technology away from the hands of terrorists.
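The combinatorial blow-up that Bentley invokes in point 5 above is easy to reproduce. Here is a minimal Python sketch; the numbers 100 and 10 are his, everything else is just illustration:

```python
# Bentley's scenario: 100 input variables, each taking 10 possible values.
# Exhaustive testing would mean one test case per combination of values.
variables = 100
values_per_variable = 10

exhaustive_cases = values_per_variable ** variables  # 10**100
print(exhaustive_cases)             # a 1 followed by 100 zeros
print(len(str(exhaustive_cases)))   # 101 digits when written out in decimal
```

The point of the homework exercise, of course, is that every real software project faces the same kind of blow-up, which is precisely why all of them settle for partial testing.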

A few days after the report came out, Steven Pinker tweeted that he "especially recommend[s] AI expert Peter Bentley's 'The Three Laws of Artificial Intelligence: Dispelling Common Myths' (I make similar arguments in Enlightenment Now)". I find this astonishing. Is it really possible that Pinker is that blind to the errors and shortcomings in Bentley's chapter? Is there a name for the fallacy "I like the conclusion, therefore I am willing to accept any sort of crap as arguments"?

Saturday, 17 March 2018

On greenhouse gas emissions and immigration

A group of researchers and aid workers, headed by Frank Götmark (professor of ecology at the University of Gothenburg), yesterday published an op-ed in Svenska Dagbladet under the headline Miljöskäl talar för mer begränsad invandring (environmental reasons favor more restricted immigration). In brief, their argument runs as follows.
    A country's greenhouse gas emissions (like its other environmental burdens) tend to increase with a growing population. Large immigration to a country leads to a growing population. Since it is important to keep Swedish greenhouse gas emissions down, it follows that we should restrict immigration.
There is a grain of truth in their reasoning, but I would argue that the article is presented in an unpedagogical and unclear way that brings to mind political rhetoric rather than the kind of objectivity that ought to be the guiding star for researchers taking part in public debate. This is because crucial premises of the argument are swept under the rug.

An obvious first objection for a critical reader of the article by Götmark et al. is the following. Greenhouse gas emissions are a global problem, not a Swedish one, and even if immigration to Sweden risks leading to increased greenhouse gas emissions in Sweden, it is not obvious that it leads to an increase in global emissions, since we need to include in the calculation the decrease in the countries the immigrants come from.

Götmark et al. can of course answer this. Most likely their idea is that an immigrant to Sweden typically comes from a country with lower greenhouse gas emissions. The refugee boy Rashid from Afghanistan comes from a country where CO2 emissions per capita and year are 0.7 tonnes, and arrives in one where the corresponding figure is 4.6 (figures from 2013). It is reasonable to expect that the move from Afghanistan to Sweden causes Rashid's CO2 footprint to shift away from the typical Afghan level toward the typical Swedish level, thus resulting in increased emissions.

The argument rests on Swedish per capita emissions being at an unsustainable level, and moreover at a higher level than in the countries from which most immigrants come. This is true today, but will it remain true in the future? Götmark et al. seem to take for granted that it will hold at least until 2100, for otherwise their diagram of Swedish population growth up to that year, under various assumptions about immigration volume, would be irrelevant. I myself believe (as do many others, such as Naturvårdsverket, the Swedish Environmental Protection Agency) that Sweden should reduce its emissions quickly and become carbon neutral long, long before 2100. I see no acceptable moral argument for why we could continue to permit ourselves an emission level that is globally unsustainable and higher than that of other countries. As soon as we have put that right, the Götmark argument collapses.

The authors are of course fully entitled to a different opinion from mine on this moral question, but in my view they should have expressed themselves more plainly. It could then, for instance, have sounded like this:
    We believe that we here in Sweden can permit ourselves to appropriate a high standard of living based on CO2 emissions that are higher than what is globally sustainable, and higher than in other countries. This privilege, however, we should be very careful about sharing with Rashid and other foreigners, for our planet cannot withstand everyone living as we do.

Monday, 12 March 2018

Some radio talk about Elon Musk's latest AI statement

Elon Musk appeared yesterday at a technology and culture festival in Austin, Texas. One of the things that attracted the most attention was his claim that AI (artificial intelligence) poses an even greater risk to humanity than the danger of nuclear weapons. Here is how it sounded:

Earlier today I was invited to comment on this on the radio program Nordegren & Epstein on P1 - do have a listen! The segment about Elon Musk begins about 30:15 into the program, and I enter the discussion at around 33:00.