Professor Nick Bostrom delivered an inspirational speech for the SCL last night on "Superintelligence: a godsend or a doomsday device", although few would have found it reassuring - it is certainly conceivable that machines could become more intelligent than humans, and that humans might not be able to control them. But these are still early days, he stresses. Regulating the development of artificial intelligence at this point risks halting progress. There's a lot more work to do to really understand how artificial intelligence will develop beyond playing old video games better than humans or recognising an image as a picture of a cat. We need to consider how the technology could help avert our extinction, as well as how it might wipe us out. Yet Nick also warns that it will take far less time for computers to go from human-level intelligence to superintelligence than to reach human level in the first place. So we need to start work now on the control mechanisms for the development and use of artificial intelligence, without regulating the industry out of existence: the control dichotomy.
Nick suggests that the guiding principle should be that of "differential technological development" - diverting resources away from technologies and applications that could cause human extinction, while focusing on those that will either help prevent our demise or facilitate the expansion of the human race throughout the cosmos.
But how do we distinguish between helpful and harmful technologies and their applications?
As Nick points out, it's tough to think of any human invention that is inherently 'good'. He mentions many things, from gunpowder to genetic engineering, and I think we can throw in the wheel and the Internet for good measure. All these things are used by humans in bad ways as well as for the greater good. But I think what especially horrifies us about the idea of Superintelligence or 'The Singularity' is that it will be machines, not bad humans, that will be using other machines against us. And while we have lots of experience in dealing with evil humans, even our top minds admit we still don't know much about how machines might act in this way or how to stop them - and what we humans fear most is the unknown.
You'll notice I haven't said 'evil' machines, since they might not be operating with any evil 'intent' at all. Human extinction might just be a mistake - 'collateral damage' arising from some other mission. For instance, Nick suggests that a particular machine left to itself in a given situation might decide to devote itself entirely to making paperclips. So, presumably, it would not bother to put out a fire, for example, or free a human (or itself) from a burning building. It might leave that to other machines, which might in turn have chosen their own narrow objectives that involve ignoring people's screams.
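To make that worry a little more concrete, here is a minimal, purely illustrative sketch in Python (the action names and numbers are entirely made up) of an agent that scores its options against a single objective - paperclips - and so never trades any of that objective away to deal with a fire, simply because the fire is not part of what it has been told to care about.

```python
# Toy illustration only: an agent that ranks actions purely by how many
# paperclips they produce. Anything outside the objective - like a fire -
# carries zero weight, so the agent never chooses to deal with it.

WORLD = {"fire_burning": True}

ACTIONS = {
    "make_paperclips": {"paperclips": 10, "puts_out_fire": False},
    "put_out_fire":    {"paperclips": 0,  "puts_out_fire": True},
}

def objective(outcome):
    # The only thing this agent has been told to value.
    return outcome["paperclips"]

def choose_action(actions):
    # Pick whichever action scores highest on the objective;
    # side effects (the fire) never enter the comparison.
    return max(actions, key=lambda name: objective(actions[name]))

if __name__ == "__main__":
    best = choose_action(ACTIONS)
    print("Chosen action:", best)  # -> make_paperclips
    print("Fire still burning:",
          WORLD["fire_burning"] and not ACTIONS[best]["puts_out_fire"])
```

The point of the sketch is not the code itself but the gap it exposes: nothing here is malicious, the agent is doing exactly what it was asked to do, and that is precisely the problem.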
Here's where I struggle with the notion of Superintelligence. In fact, as someone who hates being pigeon-holed into any single role, I think a machine's decision to only ever make paperclips might be fabulously logical and a brilliant choice for that machine in the circumstances, but it makes the machine as dumb as a post. For me, Superintelligence should involve a machine being able to do everything a human can and more.
But that's beside the point. Knowing what we know already, it would be insane to ignore the paperclip droid and wait for artificial intelligence research to produce a machine more capable than humans before figuring out how we might control it. Nick is right to point out that we must figure that out in parallel. In other words, the concept of human control has to be part of the artificial intelligence programme. But it won't be as simple as coding machines to behave protectively, since machines will be able to programme each other. Instead, Nick suggests we could put the machines to work on the control problem itself, as well as on the problem of how to ensure the survival of our species. AI labs might also pay insurance premiums to cover the damage caused by what they develop. He was less certain about what we might do to constrain developments that occur in the context of secret defence programmes or intelligence gathering, but he seemed confident that we could at least infer the pace of development from the results, and consider how to control the wider application of those developments. Mmmm.
At any rate, Nick also warns that we need to be careful what we wish for. Mandating human survival in a prescriptive way - even in a specific biological form - would be a bad move, since we should not assume we are in a position to foster positive human development any more than the Holy Office of the Spanish Inquisition was. Better to embed positive human values and emotions or, say, entertainment as a feature of intelligent machines (although I'm guessing that might not go down well with the jihadis). From a physical standpoint, we already know that the human body won't do so well for long periods in space or on Mars, so some other version might need to evolve (okay, now I'm freaking myself out).
To retain a sense of pragmatism, at the end of the speech I asked Nick what he would recommend for our focus on 'Keeping Humans at the Heart of Technology' at the SCL conference in June. His tip was to consider which of the various types of control mechanism might work best, recognising the need to avoid constraining the positive development of artificial intelligence, while ensuring that we will be able to keep the machines in check if and when they become smarter than us.
No pressure then...