Showing posts with label explainability.

Wednesday, 13 March 2024

SuperStupidity: Are We Allowing AI To Generate The Next Global Financial Crisis?

I recently updated my thoughts on AI risk management over on The Fine Print, and next on my list to catch up on was 'the next financial crisis'. Coincidentally, a news search turned up an FT article on remarks about AI by Gary Gensler, the head of the SEC.

While Mr Gensler sees benefits from the use of AI in certain efficiencies and in combating fraud, he also spots the seeds of the next financial crisis lurking not only in the general challenges associated with AI (e.g. inaccuracy, bias, hallucination) but particularly in:

  • potential 'herding' around certain trading decisions; 
  • concentration of AI services in a few cloud providers;  
  • lack of transparency in who's using AI and for what purposes; and
  • inability to explain the outputs. 

These are all familiar themes, but it's the concentration of risk that leaps out in a financial context. That was also a wider concern identified in hearings before the House of Lords communications committee and by the FTC, as explained in my earlier post.

The fact that only a few large tech players are able to (a) compete for the necessary semiconductors (chips) and (b) provide the vast scale of cloud computing infrastructure that AI systems require is particularly concerning in the financial markets context because the world relies so heavily on those markets for economic, social and even political stability - as the global financial crisis revealed. 

We can't blame the computers for allowing this situation to develop.

So, if 'superintelligence' is the point at which AI systems develop the intelligence to out-compete humans to the point of extinction, is 'superstupidity' the point at which we humans seal our fate by concentrating the risks posed by AI systems to the point of critical failure?  


Thursday, 24 October 2019

Do You Know You Are Using And/Or Exposed To Artificial Intelligence?

Amid the news that the Department for Work and Pensions is using artificial intelligence to decide benefits claims, a third of UK financial firms recently told the Bank of England and Financial Conduct Authority that they were not using 'machine learning' (a subset of AI) in their businesses. That's pretty concerning, given the kind of AI we all use without realising, and the fact that anyone wrongly denied benefits will need short term finance.

That response also raises the question of whether those firms know how their many stakeholders are using AI (wittingly or not). If their suppliers, intermediaries, customers, investors, competitors and the government are using AI, how does that affect their own strengths, weaknesses, opportunities and threats? And how does that in turn affect their stakeholders? Also worrying is the finding that smaller, fintech firms seem to believe that machine learning is of no use to them.

Given the problems with AI explained below, it's important for everyone to consider whether and how they rely on, or are otherwise exposed to, the two thirds of financial firms that are actually using AI. Hyping the benefits without understanding the shortcomings will harm our trust in AI where it could be helpful.

What is AI?

The term "AI" embraces a collection of technologies and applications, with machine learning usually featuring at some point:
  • narrow artificial intelligence
  • machine learning
  • artificial neural networks 
  • deep learning networks 
  • automation
  • robotics
  • autonomous vehicles, aircraft and vessels
  • image and facial recognition
  • speech and acoustic recognition
  • personalisation
  • Big Data analytics
At the recent SCL event I chaired in Dublin, Professor Barry O'Sullivan explained that, while AI technologies themselves may be complex, the concepts are simple. In essence, machine learning differs from traditional computer programming in that:
  • traditionally, we load a software application and data into a computer, and run the data through the application to produce a result (e.g. how much I spent on coffee last year);
  • machine learning involves feeding data and desired outputs into one or more computers or computing networks that are designed to write the programme themselves (e.g. you feed in data on crimes/criminals together with whether those people re-offended, with the aim of producing a programme that will predict whether a given person will re-offend). In this sense, data is used to ‘train’ the computer to write and adapt the programme, and it is that programme which constitutes the "artificial intelligence" (see the sketch after this list).
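To make the contrast concrete, here is a minimal Python sketch. It is my own toy example using scikit-learn, with invented data and an invented helper (coffee_spend), not anything from the post or the surveys it discusses. The first block is traditional programming: we write the rule and run data through it. The second block supplies example inputs and known outcomes, and the learning algorithm derives the rule itself as the model's weights.

```python
from sklearn.linear_model import LogisticRegression

# Traditional programming: we write the rule ourselves and run data through it.
def coffee_spend(purchases):
    """Sum the amounts of purchases tagged as coffee."""
    return sum(amount for item, amount in purchases if item == "coffee")

print(coffee_spend([("coffee", 3.0), ("tea", 2.5), ("coffee", 3.5)]))  # 6.5

# Machine learning: we supply example inputs and known outcomes, and
# training 'writes' the rule as the learned weights of the model.
# Invented features per person: [prior offences, age at release]
X = [[0, 45], [5, 19], [1, 38], [7, 22], [0, 60], [4, 25]]
y = [0, 1, 0, 1, 0, 1]  # 1 = re-offended, 0 = did not

model = LogisticRegression().fit(X, y)  # the data 'trains' the programme
print(model.predict([[2, 30]]))         # apply the learned rule to a new case
```

The point is that nothing in the second block states the rule explicitly: it lives inside the trained model, which is exactly why explainability becomes a problem later.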
What is AI used for?

AI is typically used for three kinds of task (each sketched in code after this list):

  1. Clustering: putting items of data into new groups (discovering patterns);
  2. Classifying: putting a new observation into a pre-defined category based on a set of 'training data'; or
  3. Predicting: assessing relationships among many factors to assess risk or potential relating to particular conditions (e.g. creditworthiness).
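Here is a hedged sketch of those three uses, again in Python with scikit-learn and entirely made-up data (the class names are real scikit-learn APIs; the data and labels are invented for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))        # invented observations
labels = (X[:, 0] > 0).astype(int)   # invented 'ground truth' categories

# 1. Clustering: discover new groups with no labels supplied.
groups = KMeans(n_clusters=3, n_init=10).fit_predict(X)

# 2. Classifying: place a new observation into a pre-defined category
#    learned from labelled training data.
clf = KNeighborsClassifier().fit(X, labels)
print(clf.predict([[0.5, -0.2]]))        # category for a new observation

# 3. Predicting: score risk/potential as a probability (e.g. creditworthiness).
risk = LogisticRegression().fit(X, labels)
print(risk.predict_proba([[0.5, -0.2]])) # [P(class 0), P(class 1)]
```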
The Challenges with AI

The primary concerns about AI relate to:
  1. cost/benefit - $50m in electricity to teach an AI to beat a human being at Go; hundreds of attempts to get a robot to do a backflip, yet it still can't open a door;
  2. dependence on data quantity, quality, timeliness and availability;
  3. lack of understanding - AI is better than humans at some tasks (narrow AI), but general AI (on a par with humans) and superintelligence (better than humans at everything) remain the stuff of science fiction. The AI that can predict 79% of European Court of Human Rights judgments doesn't know any law; it just counts how often words appear alone, in pairs or in fours;
  4. inaccuracy - no AI is 100% accurate;
  5. lack of explainability - machine learning involves the computer adapting the programme in response to data, so it may react differently to the same data added later, based on what it has 'learned' in the meantime; 
  6. the inability to remove both selection bias and prediction bias - adding a calibration layer to adjust the mean prediction only fixes the symptoms, not the cause, and makes the system dependent on the prediction bias and the calibration layer remaining up to date and aligned over time (see the toy illustration after this list); 
  7. the challenges associated with the reliability of evidence and how to resolve disputes arising from its use; and
  8. a long list of legal issues - made harder to manage because lawyers aren't typically engaged in development and deployment.
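On point 6, here is a toy numeric illustration (mine, not from the post or any real system) of why a calibration layer only masks prediction bias: the offset makes the means match today, but silently mis-corrects once the underlying model changes.

```python
import numpy as np

actual   = np.array([0.10, 0.12, 0.11, 0.09])  # observed outcome rates
raw_pred = np.array([0.20, 0.22, 0.21, 0.19])  # biased model scores

offset = raw_pred.mean() - actual.mean()       # the 'calibration layer'
print((raw_pred - offset).mean(), actual.mean())  # means now match: symptom fixed

# Later the model is retrained and is no longer biased, but the stale
# offset still applies - it now drags predictions well below reality.
new_raw = np.array([0.11, 0.12, 0.10, 0.11])   # roughly unbiased scores
print((new_raw - offset).mean())               # ~0.005: the cause was never fixed
```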

This means the use of AI cannot be ignored. We have to be careful to understand the shortcomings and avoid hyping the benefits if we are to ensure trust in AI. That means challenging its use where the consequences of selection bias or false positives/negatives are fatal or otherwise unacceptable, such as denying fundamental rights or compensation for loss.

Being realistic about AI and its shortcomings also has implications for how it is regulated. Rather than risk an effective ban on AI by regulating it according to the hype, regulation should instead focus on certifying AI's development and transparency in ways that enable us to understand its shortcomings to aid in our decision about where it can be developed and deployed appropriately.

