Amid the news that the Department for Work and Pensions is using artificial intelligence to decide benefits claims, a third of UK financial firms recently told the Bank of England and Financial Conduct Authority that they were not using 'machine learning' (a subset of AI) in their businesses. That's pretty concerning, given the kind of AI we all use without realising it, and the fact that anyone wrongly denied benefits will need short-term finance. The response also raises the question of whether those firms know how their many stakeholders are using AI (wittingly or not). If their suppliers, intermediaries, customers, investors, competitors and the government are using AI, how does that affect their own strengths, weaknesses, opportunities and threats? And how does that in turn affect their stakeholders? Also worrying is the finding that smaller fintech firms seem to believe that machine learning is of no use to them. And given the problems with AI explained below, it's important for everyone to consider whether and how they rely on, or are otherwise exposed to, the two thirds of financial firms who are actually using AI. Hyping the benefits without understanding the shortcomings will harm our trust in AI where it could be helpful.
What is AI?
The term "AI" embraces a collection of technologies and applications, with machine learning usually featuring at some point:
- narrow artificial intelligence
- machine learning
- artificial neural networks
- deep learning networks
- automation
- robotics
- autonomous vehicles, aircraft and vessels
- image and facial recognition
- speech and acoustic recognition
- personalisation
- Big Data analytics
At the recent SCL event I chaired in Dublin, Professor Barry O'Sullivan explained that AI technologies themselves may be complex, but the concepts are simple. In essence, machine learning differs from traditional computer programming in that:
- traditionally, we load a software application and data into a computer, and run the data through the application to produce a result (e.g. how much I spent on coffee last year);
- machine learning, by contrast, involves feeding the data and desired outputs into one or more computers or computing networks that are designed to write the programme themselves (e.g. you feed in data on crimes/criminals and whether those people re-offended, with the object of producing a programme that will predict whether a given person will re-offend). In this sense, data is used to 'train' the computer to write and adapt the programme, which constitutes the "artificial intelligence".
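The contrast can be sketched in a few lines of Python. Everything here is invented for illustration: the coffee figures, the single numeric 'risk' feature and the toy threshold learner stand in for the far richer data and models real systems use.

```python
# Traditional programming: a human writes the rule, the computer applies it.
coffee_purchases = [2.50, 3.00, 2.75, 3.20]   # hypothetical receipts
total_spent = sum(coffee_purchases)           # the rule is written by us

# Machine learning: we supply examples (inputs plus known outcomes) and the
# computer derives the rule. This toy 'learner' picks the cut-off on a single
# invented feature that misclassifies the fewest training examples.
training_data = [
    (1, 0), (2, 0), (3, 0),   # (feature value, did not re-offend)
    (7, 1), (8, 1), (9, 1),   # (feature value, re-offended)
]

def learn_threshold(examples):
    """Return the threshold that best separates the two outcome classes."""
    best_t, best_errors = None, len(examples) + 1
    for t in sorted(x for x, _ in examples):
        errors = sum((x >= t) != bool(y) for x, y in examples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

threshold = learn_threshold(training_data)

def predict(x):
    """The 'programme' the data produced, rather than a human."""
    return int(x >= threshold)
```

The point of the sketch is that in the second half no human wrote the decision rule; the threshold came out of the training data, which is also why bad or biased data produces a bad or biased programme.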
What is AI used for?
AI is used for:
- Clustering: putting items of data into new groups (discovering patterns);
- Classifying: putting a new observation into a pre-defined category based on a set of 'training data'; or
- Predicting: assessing relationships among many factors to assess risk or potential relating to particular conditions (e.g. creditworthiness).
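A minimal sketch of those three tasks in plain Python, using invented one-dimensional data (production systems use libraries such as scikit-learn and many features, but the ideas are the same):

```python
def cluster(values, k=2, rounds=10):
    """Clustering: discover k groups in unlabelled data (bare-bones k-means)."""
    centres = sorted(values)[:k]  # naive initialisation
    for _ in range(rounds):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centres[i]))
            groups[nearest].append(v)
        # move each centre to the mean of its group
        centres = [sum(g) / len(g) if g else centres[i]
                   for i, g in enumerate(groups)]
    return groups

def classify(value, training_data):
    """Classifying: give a new observation the label of its nearest
    training example (1-nearest-neighbour on pre-labelled data)."""
    return min(training_data, key=lambda pair: abs(value - pair[0]))[1]

def predict(x, xs, ys):
    """Predicting: fit a least-squares line to past (xs, ys) pairs and
    extrapolate a score, e.g. a creditworthiness estimate, for a new x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    return my + slope * (x - mx)
```

For example, `cluster([1, 2, 10, 11])` discovers the two groups with no labels supplied, whereas `classify` and `predict` both depend entirely on the labelled training data they are given.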
The Challenges with AI
The primary concerns about AI relate to:
- cost/benefit - $50m in electricity to teach an AI to beat a human being at Go, hundreds of attempts to get a robot to do a backflip, but it can't open a door;
- dependence on data quantity, quality, timeliness and availability;
- lack of understanding - AI is better at some tasks than humans (narrow AI), but general AI (equal to humans) and superintelligence (better than humans at everything) are the stuff of science fiction. The AI that can predict 79% of European Court judgments doesn't know any law; it just counts how often words appear alone or in short sequences (n-grams);
- inaccuracy - no AI is 100% accurate;
- lack of explainability - machine learning involves the computer adapting the programme in response to data, and it might react differently to the same data added later, based on what it has 'learned' in the meantime;
- the inability to remove both selection bias and prediction bias - adding a calibration layer to adjust the mean prediction only fixes the symptoms, not the cause, and makes the system dependent on both the prediction bias and calibration layer remaining up to date/aligned over time;
- the challenges associated with the reliability of evidence and how to resolve disputes arising from its use; and
- a long list of legal issues, despite which lawyers aren't typically engaged in development and deployment.
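On the court-judgment point above: models of that kind typically work on n-gram counts, i.e. tallies of how often word sequences occur, with no notion of legal meaning. A minimal sketch of that feature extraction (the sentence is invented):

```python
from collections import Counter

def ngram_counts(text, n):
    """Count how often each sequence of n consecutive words appears."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

judgment = "the court finds the applicant rights were violated"
unigrams = ngram_counts(judgment, 1)   # single words
bigrams = ngram_counts(judgment, 2)    # word pairs
```

These counts are all the model sees: it learns which tallies correlate with past outcomes, which is very different from understanding the law being applied.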
This means the use of AI cannot be ignored. We have to be careful to understand the shortcomings and avoid hyping the benefits if we are to ensure trust in AI. That means challenging its use where the consequences of selection bias or false positives/negatives are fatal or otherwise unacceptable, such as denying fundamental rights or compensation for loss.
Instead, artificial neural networks and deep learning are better used to automate decision-making where "the level of accuracy only needs to be 'tolerable' for commercial parties [who are] interested only in the financial consequences... than for individuals concerned with issues touching on fundamental rights."
Being realistic about AI and its shortcomings also has implications for how it is regulated. Rather than risk an effective ban on AI by regulating it according to the hype, regulation should instead focus on certifying AI's development and transparency in ways that enable us to understand its shortcomings to aid in our decision about where it can be developed and deployed appropriately.