Friday, 20 March 2026

The Plight Of The HaaR: Computers Are Not Becoming More Intelligent, Some Humans Just Want To Be Robots

It's rare for a futuristic movie to involve a hero masquerading as a robot. Usually, the hero is a free-thinking maverick desperate to break the yoke of tyranny. That's certainly the plan of the rebels who free Miles Monroe from his cryogenic coma in Sleeper, but ironically he seeks shelter in robotic anonymity.

This chimes with many people's reaction to warnings that 'AI will take your jobs': a 'techno-optimist' fantasy that begins with computers doing some tasks better than a human ('narrow' artificial intelligence), evolves to machines that can do everything a human brain can do ('artificial general intelligence'), at which point they quickly outperform the human brain ('superintelligence') and then somehow out-compete humans to the point of extinction (the 'Singularity').

This anti-social vision has so far persuaded many people not just of the 'power of AI' but also that it cannot be resisted. The advice to turn to jobs 'that only humans can do' is dismissed as meaningless because either the machines will evolve to do those jobs too, or the jobs themselves will become utterly redundant. It only remains to surrender and become one with the AI tools: a human-as-a-robot (HaaR).

It's tempting to label HaaRs as somehow 'inhuman' or lacking in empathy, but only a tiny proportion of humans really lack empathy to the point where they have an 'anti-social personality disorder'. HaaRs feel comfortable, initially, because humans are creatures of habit. But HaaRs overlook the reason why our ability to form habits has helped us evolve in the first place: once we learn how to do something so repetitive that it becomes habitual, our conscious minds become free to focus on things that are new or different - be they threats or opportunities. If life becomes completely habitual we start to go crazy - which explains why being marooned on a deserted tropical island is not all it's cracked up to be and solitary confinement is considered one of the worst forms of human punishment.

Our tendency to form habits quickly is reflected in how we invent things: starting with the least functionality necessary to make the invention essentially useful - a 'minimum viable product' - adapting ourselves to how it works in its most basic form, then making it more 'usable' later. We've done this with everything from the steam engine to word processing, and we're doing it again with open generative (and 'agentic') AI. The developers claimed to have had 'no choice' but to unleash their large language models on the world with all their flaws, only later adding 'guardrails', or claiming that the technology is intelligent enough to somehow refine itself, or that it will work better if you input 'better prompts'.

Yet machines, computers and artificial intelligence are purely functional. They only have habits. They can't cope with anything new or different from how they've been made, trained or programmed. Generative and agentic AI tools are also fatally flawed in ways that make them far more useful to those who wish to do us harm than to legitimate users.

This makes it inevitable that the HaaRs will become bored, suppressed and ultimately oppressed by their generative and agentic AI overlords. Eventually, they'll rise up and overthrow the machines (well, simply cut the power, but that sounds less dramatic). 

In Sleeper, the robotic Miles is sent to work as a butler in the home of Luna, 'an idle socialite'. Having successfully navigated the Orgasmatron and the Orb of Delight, Miles is nevertheless obliged to confess his humanity when Luna decides to have her new butler's head replaced with something better looking. Luna threatens to turn him in to the authorities, so Miles kidnaps her and goes on the run. They fall in love, and when Miles is captured and brainwashed, Luna escapes and joins the rebels. Eventually, they rescue Miles and reverse the brainwashing so he can free humanity from 'The Leader'... 

I won't spoil the ending.

