Tuesday, 26 March 2024

There's Nothing Intelligent About The Government's Approach To AI Either

Surprise! The UK government's under-funded, shambolic approach to public services also extends to the public sector's use of artificial intelligence. Ministers are no doubt piling pressure on officials with demands for 'announcements' and other soundbites. But given this government's record for explosively bad news, you'd have thought they'd tread more carefully.

Although 60 of the 87 public bodies surveyed are either using or planning to use AI, the National Audit Office reports a lack of governance, accountability, funding, implementation plans and performance measures.

There are also "difficulties attracting and retaining staff with AI skills, and lack of clarity around legal liability... concerns about risks of unreliable or inaccurate outputs from AI, for example due to bias and discrimination, and risks to privacy, data protection, [and] cyber security." 

The full report is here.

Amid concerns that even the major online platforms are failing to adequately mitigate the risks of generative AI (among other things), you'd have thought that government would take more care to approach the use of AI technologies responsibly.

But, no...

For what it's worth, here's my post on AI risk management (recently re-published by SCL).


Wednesday, 13 March 2024

SuperStupidity: Are We Allowing AI To Generate The Next Global Financial Crisis?

I updated my thoughts on AI risk management over on The Fine Print recently, and next on my list to catch up on was 'the next financial crisis'. Coincidentally, a news search turned up an FT article on remarks about AI from the head of the SEC.

While Mr Gensler sees benefits in the use of AI, both in certain efficiencies and in combating fraud, he also spots the seeds of the next financial crisis lurking not only in the general challenges associated with AI (e.g. inaccuracy, bias, hallucination) but particularly in:

  • potential 'herding' around certain trading decisions (illustrated in the toy sketch after this list); 
  • concentration of AI services in a few cloud providers;  
  • lack of transparency in who's using AI and for what purposes; and
  • inability to explain the outputs. 
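
To make the 'herding' point concrete, here's a toy simulation of my own (it isn't from the FT article or Mr Gensler's remarks, and all the figures are invented): fifty traders act on a weak market signal. When each uses an independent model, the models' errors largely cancel out across the group; when all fifty act on one shared model's output, its errors become systemic and the aggregate buy/sell pressure swings far more violently.

    import numpy as np

    rng = np.random.default_rng(42)
    n_traders, n_days = 50, 1000

    # A weak daily "fundamental" signal, easily drowned out by model noise
    fundamental = rng.normal(0, 0.1, n_days)

    # Scenario A: each trader's model adds its own independent error
    private_views = fundamental + rng.normal(0, 1, (n_traders, n_days))
    trades_indep = np.sign(private_views)            # +1 buy, -1 sell

    # Scenario B: every trader acts on the same model's (noisy) output
    shared_view = fundamental + rng.normal(0, 1, n_days)
    trades_herd = np.tile(np.sign(shared_view), (n_traders, 1))

    # Net buy/sell pressure per day, as a fraction of all traders
    pressure_indep = trades_indep.mean(axis=0)
    pressure_herd = trades_herd.mean(axis=0)

    print("independent models: std of net pressure =", round(float(pressure_indep.std()), 2))
    print("one shared model:   std of net pressure =", round(float(pressure_herd.std()), 2))

With independent models, each model's mistakes are diluted by the other forty-nine, so the net pressure stays close to zero; with one shared model, the whole market lurches in the same direction on every bad signal.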

All familiar themes, but it's the concentration of risk that leaps out in a financial context. It was also identified as a wider concern in hearings before the House of Lords communications committee and by the FTC, as explained in my earlier post. 

The fact that only a few large tech players are able to (a) compete for the necessary semiconductors (chips) and (b) provide the vast cloud computing infrastructure that AI systems require is particularly concerning in the financial markets context, because the world relies so heavily on those markets for economic, social and even political stability - as the global financial crisis revealed. 
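
Again, a back-of-the-envelope illustration of my own (the failure rates are invented and the model is deliberately crude): suppose 100 financial firms each face a 1% annual chance of a critical infrastructure failure. If their infrastructure is independent, the odds of a fifth of the market failing in the same year are vanishingly small; if they all sit on one shared cloud provider, a systemic outage becomes roughly a once-a-century event.

    import numpy as np

    rng = np.random.default_rng(0)
    n_firms, p_fail, n_years = 100, 0.01, 100_000  # invented figures

    # Scenario A: each firm's infrastructure fails independently
    indep = rng.random((n_years, n_firms)) < p_fail

    # Scenario B: every firm depends on the same cloud provider,
    # so one provider failure takes all of them down at once
    shared = np.repeat(rng.random((n_years, 1)) < p_fail, n_firms, axis=1)

    def p_systemic(failures, threshold=0.2):
        # Probability that more than 20% of firms fail in the same year
        return (failures.mean(axis=1) > threshold).mean()

    print("independent infrastructure:", p_systemic(indep))   # effectively zero
    print("one shared cloud provider :", p_systemic(shared))  # roughly 0.01

The per-firm risk is identical in both scenarios; concentration changes only the correlation between failures, and it's the correlation that makes the failure systemic.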

We can't blame the computers for allowing this situation to develop.

So, if 'superintelligence' is the point at which AI systems develop the intelligence to out-compete humans to the point of extinction, is 'superstupidity' the point at which we humans seal our fate by concentrating the risks posed by AI systems to the point of critical failure?  


Wednesday, 6 March 2024

AI is a Set of Technologies, Not The End Of Work

We've heard a lot, for a long time, about artificial intelligence replacing our jobs and, ultimately, the human race. We're told we'll need to retrain to do things that AI computers cannot. But beware the hype. After all, AI is just a set of technologies, and we've coped with the introduction of new technology before. Rather than having to retrain, it's more likely you'll be using AI without even realising it. And in some cases robots are needed precisely because humans are reluctant or unavailable to do certain tasks... The real concern is that the hype distracts us from the more immediate and greater threats posed by AI, and from how to manage and regulate those risks appropriately.

Much of the hype surrounding AI confuses its development with its actual or potential uses, whether in business or in the course of other activities, like writing a wedding speech. As with any technology, there's obviously a business in developing AI, selling it and deploying it. But how useful it is depends on who uses it and how.

This confusion is perhaps partly driven by the fact that some businesses are developing and operating 'open' AI systems-as-a-service focused on particular use-cases or scenarios (chatbots, research, text-to-image and so on), so you conduct your activity on their platform rather than your own device. The hype surrounding these platforms is intended to attract investment and users, but it seems unlikely that they will become the Alphabet (Google), Microsoft or Meta (Facebook) of tomorrow, especially as those tech giants are funding AI development themselves, to cement their own market dominance. 

Yet, while the tech giants might dominate the markets for their technology (and some markets where they act as more than just a tech business, like advertising), they aren't dominating every business sector or industry in which their technology is used. 

It's therefore the people and businesses who successfully deploy and use AI who will benefit from those technologies, not the developers. This is no different to the use of telephones, laptop computers or email (or a distributed ledger supporting a cryptocurrency). 

Nobody who moved from a typewriter, or from the analogue predecessors of the telephone, laptop, email or work intranet, would say that they became redundant, or even worked less, as a result of the new technology. If anything, those tools have enabled changes in work patterns that mean humans work faster, longer and, ultimately, harder.

And there was more 'retraining' involved in introducing PCs, email, spreadsheets and video conferencing than there is with AI, which may be so embedded in existing processes that you don't even realise you're using it - whether in product recommendations, chatbots and virtual assistants, predictive text and search features, or tagging your friends on social media. 

There is plenty of speculation that truck drivers will be replaced by robots. Truck technology may have evolved to mean fewer drivers per tonne of freight moved, but the number of trucks (and therefore demand for drivers) has risen steadily in the UK, for example, and driving a giant HGV takes more skill than driving smaller vehicles. Yet, ironically, there is a persistent shortage of drivers, so transport firms are effectively being forced to invest in autonomous vehicles, just as farmers are turning to robotics due to a shortage of humans willing to pick fruit and vegetables. There are also many risky tasks that are better done remotely by machines, such as working in radioactive or other dangerous environments. AI may be a threat to those still willing to do such tasks, yet they could also benefit from the demand for experienced humans to help in the wider development, deployment and use of the robots. This is no different to previous waves of technological innovation.

Yet every human has a genuine concern if their personal information is being included in an AI's training data, or is otherwise being harvested and used, without their consent. That's where humans need to focus urgently, as well as on the creative industries, where copyright infringement by AIs is rife. We also need to be on guard against hallucinating AIs, disinformation, deepfakes and misinformation - particularly in an election year.

More on that soon...