Monday, 15 September 2014

Superintelligence odds and ends III: Political reality and second-guessing

My review of Nick Bostrom's important book Superintelligence: Paths, Dangers, Strategies appeared first in Axess 6/2014, and then, in English translation, in a blog post here on September 10. The present blog post is the third in a series of five in which I offer various additional comments on the book (here is an index page for the series).

*

A breakthrough in AI leading to a superintelligence would, as Bostrom underlines in his book, be a terribly dangerous thing. Among many other aspects and considerations, he discusses whether our chances of surviving such an event are better if technological progress in this area speeds up or is slowed down, and this turns out to be a complicated and far from straightforward issue. On balance, however, I tend to think that in most cases we're better off with slower progress towards an AI breakthrough.

Yet, in recent years I've participated in a couple of projects (with Claes Strannegård) ultimately aimed at creating an artificial general intelligence (AGI); see, e.g., this paper and this one. Am I deliberately worsening humanity's survival chances in order to do work I enjoy or to promote my academic career?

That would be bad, but I think what I'm doing is actually defensible. I might of course be deluding myself, but what I tell myself is this: The problem is not so much the speed of progress towards AGI itself, but rather the ratio between this speed and the speed at which we make concrete progress on what Bostrom calls the control problem, i.e., the problem of figuring out how to make sure that a future intelligence explosion becomes a controlled detonation with benign consequences for humanity. Even though the two papers cited in the previous paragraph show no hint of work on the control problem, I do think that in the slightly longer run it is probably on balance beneficial if, through my involvement in AI work and participation in the AI community, I improve the (currently dismally low) proportion of AI researchers caring about the control problem - both through my own head count of one, and by influencing others in the field. This is in line with a piece of advice recently offered by philosopher Nick Beckstead: "My intuition is that any negative effects from speeding up technological development in these areas are likely to be small in comparison with the positive effects from putting people in place who might be in a position to influence the technical and social context that these technologies develop in."

On p 239 of Superintelligence, Bostrom outlines an alternative argument, borrowed from Eric Drexler, that I might use to defend my involvement in AGI research:
    1. The risks of X are great.
    2. Reducing these risks will require a period of serious preparation.
    3. Serious preparation will begin only once the prospect of X is taken seriously by broad sectors of society.
    4. Broad sectors of society will take the prospect of X seriously only once a large research effort to develop X is underway.
    5. The earlier a serious research effort is initiated, the longer it will take to deliver (because it starts from a lower level of pre-existing enabling technologies).
    6. Therefore, the earlier a serious research effort is initiated, the longer the period during which serious preparation will be taking place, and the greater the reduction of the risks.
    7. Therefore, a serious research effort toward X should be initiated immediately.
Thus, in Bostrom's words, "what initially looks like a reason for going slow or stopping - the risks of X being great - ends up, on this line of thinking, as a reason for the opposite conclusion." The context in which he discusses this is the complexity of political reality, where, even if we figure out what needs to be done and go public with it, and even if our argument is watertight, we cannot take for granted that our proposal will be implemented. Any idea we have arrived at concerning the best way forward...
    ...must be embodied in the form of a concrete message, which is entered into the arena of rhetorical and political reality. There it will be ignored, misunderstood, distorted, or appropriated for various conflicting purposes; it will bounce around like a pinball, causing actions and reactions, ushering in a cascade of consequences, the upshot of which need bear no straightforward relationship to the intentions of the original sender. (p 238)
In such a "rhetorical and political reality" there may be reason to send not the message that most straightforwardly and accurately describes what's on our mind, but rather the one that we, after careful strategic deliberation, consider most likely to trigger the responses we're hoping for. The 7-step argument about technology X is an example of such second-guessing.

I feel very uneasy about this kind of strategic thinking. Here's my translation of what I wrote in a blog post in Swedish earlier this year:
    I am very aware that my statements and my actions are not always strategically optimal [and I often do this deliberately]. I am highly suspicious of too much strategic thinking in public debate, because if everyone just says what he or she considers strategically optimal to say, as opposed to offering their true opinions, then we'll eventually end up in a situation where we can no longer see what anyone actually thinks is right. To me that is a nightmare scenario.
Bostrom has similar qualms:
    There may [...] be a moral case for de-emphasizing or refraining from second-guessing moves. Trying to outwit one another looks like a zero-sum game - or negative-sum, when one considers the time and energy that would be dissipated by the practice as well as the likelihood that it would make it generally harder for anybody to discover what others truly think and to be trusted when expressing their own opinions. A full-throttled deployment of the practices of strategic communication would kill candor and leave truth bereft to fend for herself in the backstabbing night of political bogeys. (p 240)

3 comments:

  1. Mr. Haggström wrote:
    "if everyone just says what he or she considers strategically optimal to say, as opposed than offering their true opinions, then we'll eventually end up in a situation where we can no longer see what anyone actually thinks is right. To me that is a nightmare scenario.

    Wait. I think you are confusing your nightmare with our reality.
    We are surely already there?

  2. It seems to me that Swedish politics has fostered a political climate where people become unintelligible. When no one cares about truth, everyone will just be yelling at each other.

  3. Always adjust the message to the listener... As I have said before, compare it to climate change. Stopping the development of potential economic cash cows because of something that might happen... it will take some heavy lobbying. Look at climate change: we are not doing much to stop it even though we know some awful things will happen if we do not do anything, and we have also seen glimpses of what could happen if we do not act...

    It is a tough question though, and one that has made me think of myself more as a consequentialist within limits...
