
Friday, 20 March 2026

The Plight Of The HaaR: Computers Are Not Becoming More Intelligent, Some Humans Just Want To Be Robots

It's rare for a futuristic movie to involve a hero masquerading as a robot. Usually, the hero is a free-thinking maverick desperate to break the yoke of tyranny. That's certainly the plan of the rebels who free Miles Monroe from his cryogenic coma in Sleeper, but ironically he seeks shelter in robotic anonymity.

This chimes with many people's reaction to the warnings that 'AI will take your jobs': a 'techno-optimist' fantasy that begins with computers able to do some tasks better than a human ('narrow' artificial intelligence), evolving to do everything a human brain can do ('artificial general intelligence'), at which point they quickly outperform the human brain ('superintelligence') and then somehow out-compete humans to the point of extinction (The Singularity).

This anti-social vision has so far persuaded many people not just of the 'power of AI' but that it cannot be resisted. The advice to turn to jobs 'that only humans can do' is dismissed as meaningless, because either the machines will evolve to do those too, or those jobs will become utterly redundant. It only remains to surrender and become one with the AI tools: a human-as-a-robot (HaaR).

It's tempting to label HaaRs as somehow 'inhuman' or lacking in empathy, but only a tiny proportion of humans really lack empathy to the point of an 'antisocial personality disorder'. Being a HaaR feels comfortable, initially, because humans are creatures of habit. But HaaRs overlook why our ability to form habits has helped us evolve in the first place: once we learn to do something so repetitive that it becomes habitual, our conscious minds are freed to focus on things that are new or different - be they threats or opportunities. If life becomes completely habitual, we start to go crazy - which explains why being marooned on a deserted tropical island is not all it's cracked up to be, and why solitary confinement is considered one of the worst forms of human punishment.

Our tendency to form habits quickly is reflected in how we invent things: we start with the least functionality necessary to make the invention essentially useful - a 'minimum viable product' - adapt ourselves to how it works in its most basic form, then make it more 'usable' later. We've done this with everything from the steam engine to word processing, and we're doing it again with openly released generative (and 'agentic') AI. The developers claimed to have had 'no choice' but to unleash their large language models on the world with all their flaws, only later adding 'guardrails', or claiming that the technology is intelligent enough to somehow refine itself, or that it will work better if you input 'better prompts'.

Yet machines, computers and artificial intelligence are purely functional. They only have habits. They can't cope with anything new or different to how they've been made, trained or programmed. Generative and agentic AI tools are also fatally flawed in ways that make them far more useful to those who wish to do us harm than to legitimate users.

This makes it inevitable that the HaaRs will become bored, suppressed and ultimately oppressed by their generative and agentic AI overlords. Eventually, they'll rise up and overthrow the machines (well, simply cut the power, but that sounds less dramatic). 

In Sleeper, the robotic Miles is sent to work as a butler in the home of Luna, 'an idle socialite'. Having successfully navigated the Orgasmatron and the Orb of Delight, Miles is nevertheless obliged to confess his humanity when Luna decides to have her new butler's head replaced with something better looking. Luna threatens to turn him in to the authorities, so Miles kidnaps her and goes on the run. They fall in love, and when Miles is captured and brainwashed, Luna escapes and joins the rebels. Eventually, they rescue Miles and reverse the brainwashing so he can free humanity from 'The Leader'... 

I won't spoil the ending.


Wednesday, 13 March 2024

SuperStupidity: Are We Allowing AI To Generate The Next Global Financial Crisis?

I recently updated my thoughts on AI risk management over on The Fine Print, and next on my list to catch up on was 'the next financial crisis'. Coincidentally, a news search turned up an FT article on remarks about AI from the head of the SEC.

While Mr Gensler sees benefits from the use of AI in certain efficiencies and in combating fraud, he also spots the seeds of the next financial crisis lurking not only in the general challenges associated with AI (e.g. inaccuracy, bias, hallucination) but particularly in:

  • potential 'herding' around certain trading decisions; 
  • concentration of AI services in a few cloud providers;  
  • lack of transparency in who's using AI and for what purposes; and
  • inability to explain the outputs. 

All familiar themes, but it's the concentration of risk that leaps out in a financial context, though it was also a wider concern identified in hearings before the House of Lords communications committee and by the FTC, as explained in my earlier post. 

The fact that only a few large tech players can (a) compete for the necessary semiconductors (chips) and (b) provide the vast scale of cloud computing infrastructure that AI systems require is particularly concerning in the financial markets context, because the world relies so heavily on those markets for economic, social and even political stability - as the global financial crisis revealed.

We can't blame the computers for allowing this situation to develop.

So, if 'superintelligence' is the point at which AI systems develop the intelligence to out-compete humans to the point of extinction, is 'superstupidity' the point at which we humans seal our fate by concentrating the risks posed by AI systems to the point of critical failure?  


Tuesday, 3 March 2015

Artificial Intelligence: The Control Dichotomy

Professor Nick Bostrom delivered an inspirational speech for the SCL last night on "Superintelligence: a godsend or a doomsday device", although few would have found it reassuring - it is certainly conceivable that machines could become more intelligent than humans, and that humans might not be able to control them. But these are still early days, he stresses. Regulating the development of artificial intelligence at this point risks halting progress. There's a lot more work to do to really understand how artificial intelligence will develop beyond playing old video games better than humans or recognising an image as the picture of a cat. We need to consider how the technology could help avert our extinction, as well as how it might wipe us out. Yet Nick also warns that it will take far less time for computers to exceed human-level machine intelligence than to reach it in the first place. So we need to start work now on the control mechanisms for the development and use of artificial intelligence, without regulating the industry out of existence: the control dichotomy.

Nick suggests that the guiding principle should be "differential technological development": diverting resources away from technologies and applications that could cause human extinction, while focusing on those that will either help prevent our demise or facilitate the expansion of the human race throughout the cosmos.

But how do we distinguish between helpful and harmful technologies and their application? 

As Nick points out, it's tough to think of any human invention that is inherently 'good'. He mentions many things, from gunpowder to genetic engineering, and I think we can throw in the wheel and the Internet for good measure. All of these are used by humans in bad ways as well as for the greater good. But I think what especially horrifies us about the idea of Superintelligence or 'The Singularity' is that it will be machines, not bad humans, using other machines against us. And while we have lots of experience in dealing with evil humans, even our top minds admit we still don't know much about how machines might act in this way or how to stop them - and what we humans fear most is the unknown.

You'll notice I haven't said 'evil' machines, since they might not be operating with any evil 'intent' at all. Human extinction might just be a mistake - 'collateral damage' arising from some other mission. For instance, Nick suggests that a particular machine left to itself in a given situation might decide to devote itself entirely to making paperclips. Presumably, it would then not bother to put out a fire, for example, or free a human (or itself) from a burning building. It might leave that to other machines, which might in turn have chosen their own narrow objectives that involve ignoring people's screams.

Here's where I struggle with the notion of Superintelligence. In fact, as someone who hates being pigeon-holed into any single role, I think a machine's decision to only ever make paperclips might be fabulously logical and a brilliant choice for that machine in the circumstances, but it makes the machine as dumb as a post. For me, Superintelligence should involve a machine being able to do everything a human can do, and more.

But that's beside the point. Knowing what we know already, it would be insane to ignore the paperclip droid and wait for artificial intelligence research to produce a machine more capable than humans before figuring out how we might control it. Nick is right to point out that we must figure that out in parallel. In other words, the concept of human control has to be part of the artificial intelligence programme. But it won't be as simple as coding machines to behave protectively, since machines will be able to program each other. For instance, Nick suggests we could put the machines to work on the control problem, as well as on the problem of how to ensure the survival of our species. AI labs might also pay insurance premiums to cover the damage caused by what they develop. He was less certain about what we might do to constrain developments that occur in the context of secret defence programmes or intelligence gathering, but he seemed confident that we could at least infer the pace of development from the results, and be able to consider how to control the wider application of those developments. Mmmm.

At any rate, Nick also warns that we need to be careful what we wish for. Mandating human survival in a prescriptive way - even in a specific biological form - would be a bad move, since we should not assume we are any better placed to foster positive human development than the Holy Office of the Spanish Inquisition was. Better to embed positive human values and emotions or, say, entertainment as a feature of intelligent machines (although I'm guessing that might not go down well with the jihadis). From a physical standpoint, we already know that the human body won't do so well for long periods in space or on Mars, so some other version might need to evolve (okay, now I'm freaking myself out).

To retain a sense of pragmatism, at the end of the speech I asked Nick what he would recommend for our focus on 'Keeping Humans at the Heart of Technology' at the SCL conference in June. His tip was to consider which of the various types of control mechanism might work best, recognising the need to avoid constraining the positive development of artificial intelligence, while ensuring that we will be able to keep the machines in check if and when they become smarter than us.

No pressure then...

