Showing posts with label bias. Show all posts

Tuesday, 26 March 2024

There's Nothing Intelligent About The Government's Approach To AI Either

Surprise! The UK government's under-funded, shambolic approach to public services also extends to the public sector's use of artificial intelligence. Ministers are no doubt piling the pressure on officials with demands for 'announcements' and other soundbites. But amid concerns that even major online platforms are failing to adequately mitigate the risks - not to mention this government's record for explosively bad news - you'd have thought they'd tread more carefully.

Despite 60 of the 87 public bodies surveyed either using or planning to use AI, the National Audit Office reports a lack of governance, accountability, funding, implementation plans and performance measures.

There are also "difficulties attracting and retaining staff with AI skills, and lack of clarity around legal liability... concerns about risks of unreliable or inaccurate outputs from AI, for example due to bias and discrimination, and risks to privacy, data protection, [and] cyber security." 

The full report is here.

Given those wider concerns about the major online platforms failing to adequately mitigate the risks of generative AI (among other things), you'd have thought that government would be more concerned to approach its own use of AI technologies responsibly.

But, no...

For what it's worth, here's my post on AI risk management (recently re-published by SCL).


Thursday, 24 October 2019

Do You Know You Are Using And/Or Exposed To Artificial Intelligence?

Amid the news that the Department for Work and Pensions is using artificial intelligence to decide benefits claims, a third of UK financial firms recently told the Bank of England and Financial Conduct Authority that they were not using 'machine learning' (a subset of AI) in their businesses. That's pretty concerning, given the kind of AI we all use without realising it, and the fact that anyone wrongly denied benefits will need short-term finance.

That response also raises the question of whether those firms know how their many stakeholders are using AI (wittingly or not). If their suppliers, intermediaries, customers, investors, competitors and the government are using AI, how does that affect their own strengths, weaknesses, opportunities and threats? And how does that in turn affect their stakeholders? No city on Earth is ready for the disruptive effects of artificial intelligence.

Also worrying is the finding that smaller fintech firms seem to believe that machine learning is of no use to them. Given the problems with AI explained below, it's important for everyone to consider whether and how they rely on, or are otherwise exposed to, the two thirds of financial firms who are actually using AI. Hyping the benefits without understanding the shortcomings will harm trust in AI where it could be helpful.

What is AI?

The term "AI" embraces a collection of technologies and applications, with machine learning usually featuring at some point:
  • narrow artificial intelligence
  • machine learning
  • artificial neural networks 
  • deep learning networks 
  • automation
  • robotics
  • autonomous vehicles, aircraft and vessels
  • image and facial recognition
  • speech and acoustic recognition
  • personalisation
  • Big Data analytics
At the recent SCL event I chaired in Dublin, Professor Barry O'Sullivan explained that AI technologies themselves may be complex, but the concepts are simple. In essence, machine learning differs from traditional computer programming in that:
  • traditionally, we load a software application and data into a computer, and run the data through the application to produce a result (e.g. how much I spent on coffee last year);
  • while machine learning involves feeding the data and desired outputs into one or more computers or computing networks that are designed to write the programme (e.g. you feed in data on crimes/criminals and the output of whether those people re-offended, with the object of producing a programme that will predict whether a given person will re-offend). In this sense, data is used to ‘train’ the computer to write and adapt the programme, which constitutes the "artificial intelligence".
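The contrast above can be sketched in a few lines of Python. This is a minimal, purely illustrative example with invented data: the hand-written coffee function stands in for traditional programming, while the 'training' step stands in for machine learning, searching for the prior-offence threshold that best separates those who re-offended from those who did not. Real machine learning uses far richer models, but the shape is the same: data plus known outcomes go in, and the rule comes out.

```python
# Traditional programming: we write the rule, then run data through it.
def coffee_total(purchases):
    """Hand-written program: sum last year's coffee spend."""
    return sum(amount for item, amount in purchases if item == "coffee")

purchases = [("coffee", 3.50), ("tea", 2.00), ("coffee", 3.00)]
print(coffee_total(purchases))  # 6.5

# Machine learning: we supply data plus known outcomes, and the training
# step produces the rule. Here 'training' just searches for the threshold
# on prior offences that makes the fewest mistakes on the training data.
def train_threshold(priors, reoffended):
    best_t, best_errors = 0, len(priors)
    for t in range(max(priors) + 2):
        errors = sum((p >= t) != r for p, r in zip(priors, reoffended))
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t  # the 'learned program' is this threshold

priors     = [0, 1, 1, 4, 5, 7]                     # hypothetical prior offences
reoffended = [False, False, False, True, True, True]
t = train_threshold(priors, reoffended)
predict = lambda p: p >= t                          # rule written by the data
print(t, predict(6))
```

The point of the toy: nobody typed the rule `p >= 2` into the second program; the training data did. That is also why the learned rule inherits whatever bias the data carries.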
What is AI used for?

AI is used for:

  1. Clustering: putting items of data into new groups (discovering patterns);
  2. Classifying: putting a new observation into a pre-defined category based on a set of 'training data'; or
  3. Predicting: assessing relationships among many factors to assess risk or potential relating to particular conditions (e.g. creditworthiness).
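The three tasks can be illustrated with deliberately tiny one-dimensional sketches, all with made-up numbers: a two-centroid k-means loop for clustering, nearest-neighbour lookup for classifying, and a least-squares line for predicting. Production systems use libraries and many more dimensions, but the mechanics are these.

```python
def cluster(points, c1, c2, rounds=10):
    """Clustering: discover two groups with a minimal k-means loop."""
    for _ in range(rounds):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)  # move the centroids
    return sorted(g1), sorted(g2)

def classify(x, training_data):
    """Classifying: give x the label of its nearest training example."""
    return min(training_data, key=lambda pair: abs(pair[0] - x))[1]

def predict(x, xs, ys):
    """Predicting: fit y = a*x + b by least squares, then score a new x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / \
        sum((xi - mx) ** 2 for xi in xs)
    return a * x + (my - a * mx)

# Clustering: spending amounts fall into two natural groups.
print(cluster([1, 2, 3, 10, 11, 12], c1=0.0, c2=20.0))
# Classifying: label a new applicant against labelled training data.
print(classify(8, [(2, "high"), (5, "high"), (9, "low"), (12, "low")]))
# Predicting: estimate a (hypothetical) credit score from income.
print(predict(6, xs=[1, 2, 3, 4], ys=[10, 20, 30, 40]))
```

Note how classifying needs labelled training data while clustering does not; that distinction drives much of the data-quality concern discussed below.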
The Challenges with AI

The primary concerns about AI relate to:
  1. cost/benefit - $50m in electricity to teach an AI to beat a human being at Go, hundreds of attempts to get a robot to do a backflip, but it can't open a door;
  2. dependence on data quantity, quality, timeliness and availability;
  3. lack of understanding - AI is better at some tasks than humans (narrow AI), but general AI (matching humans) and superintelligence (better than humans at everything) are the stuff of science fiction. The AI that can predict 79% of European Court judgments doesn't know any law; it just counts how often words appear singly, in pairs or in fours;
  4. inaccuracy - no AI is 100% accurate;
  5. lack of explainability - machine learning involves the computer adapting the programme in response to data, and it might react differently to the same data added later, based on what it has 'learned' in the meantime; 
  6. the inability to remove both selection bias and prediction bias - adding a calibration layer to adjust the mean prediction only fixes the symptoms, not the cause, and makes the system dependent on both the prediction bias and calibration layer remaining up to date/aligned over time; 
  7. the challenges associated with the reliability of evidence and how to resolve disputes arising from its use; and
  8. a long list of legal issues, with lawyers not typically engaged in development and deployment.
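The calibration point deserves a concrete illustration. In this hypothetical sketch, a model over-predicts risk for one group only (a selection-bias effect), and a calibration layer shifts every score so the overall mean matches the truth. The headline number now looks right, but the group-level bias is untouched: one group is newly under-predicted and the other still over-predicted, exactly the "fixing the symptoms, not the cause" problem.

```python
# Hypothetical scores: the model over-predicts risk for group B only,
# while group A is scored fairly.
truth  = {"A": [0.2, 0.4, 0.6], "B": [0.2, 0.4, 0.6]}
scores = {"A": [0.2, 0.4, 0.6], "B": [0.5, 0.7, 0.9]}  # B shifted up by 0.3

def mean(xs):
    return sum(xs) / len(xs)

# Calibration layer: shift every prediction so the overall means match.
offset = mean(scores["A"] + scores["B"]) - mean(truth["A"] + truth["B"])
calibrated = {g: [s - offset for s in ss] for g, ss in scores.items()}

# The overall mean now 'looks' correct...
print(round(mean(calibrated["A"] + calibrated["B"]), 3))
# ...but group A is now under-predicted and group B still over-predicted:
print([round(s - t, 2) for s, t in zip(calibrated["A"], truth["A"])])
print([round(s - t, 2) for s, t in zip(calibrated["B"], truth["B"])])
```

If group B's underlying shift later changes while the fixed offset stays in place, the system gets worse still, which is the point about the prediction bias and the calibration layer having to stay aligned over time.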

This means the use of AI cannot be ignored. We have to be careful to understand the shortcomings and avoid hyping the benefits if we are to ensure trust in AI. That means challenging its use where the consequences of selection bias or false positives/negatives are fatal or otherwise unacceptable, such as denying fundamental rights or compensation for loss.

Being realistic about AI and its shortcomings also has implications for how it is regulated. Rather than risk an effective ban on AI by regulating it according to the hype, regulation should instead focus on certifying AI's development and transparency in ways that let us understand its shortcomings, so we can decide where it can appropriately be developed and deployed.


Wednesday, 14 November 2018

Easy As 123: Politicising The BBC

The BBC's Brexit logo speaks volumes

The reason that the BBC finds itself 'politicised' in this way is not because the BBC is 'biased' - at least not in the sense of simply taking one side in any given debate.  It's down to how the BBC frames its coverage of major events in the first place.

The BBC seems to take its editorial course from what the UK government (in this case) has decided to do. It then seeks to maintain 'balance' by covering all sides in the debate about how the UK should do what the government has decided, leaving behind the question whether the UK should be doing the thing at all.

Viewing the whole Brexit scenario through the BBC's lens, therefore, means that the numerous investigations into collusion between Leave campaigns, where their funding came from and how they abused people's personal data become irrelevant to the BBC's Brexit coverage. So, too, are marches to secure a 'People's Vote'. Because those things relate to whether the UK should leave the EU, not how the UK should go about leaving - even when stopping the process remains an option.

This is not to say the BBC completely ignores the Electoral Commission fines, Information Commissioner fines and the launch of investigations by the National Crime Agency, the Metropolitan Police and the Financial Conduct Authority into the affairs of Mr Banks and various other members of the Leave Campaign and Brexit community - not to mention all the lies, distortion and gaslighting that was involved. But the BBC treats these as historic issues related to the EU referendum, electoral reform, how personal data might be abused in elections more generally and, perhaps, the role of truth in politics. From the BBC's standpoint, they shouldn't form part of its Brexit coverage because they don't relate to how the UK leaves the EU. 

This is appalling for at least four reasons. 

First, it becomes really easy for the UK government to "get the BBC on side" and use its vast resources as the government's own public address system when attempting something that is likely to prove hugely complex and controversial. The government simply has to decide to do it: invade Iraq, trigger Article 50 without a plan for how to leave the EU, ignore the Good Friday Agreement...

Secondly, the BBC's editorial choice minimises dissent by removing the oxygen of publicity from those who are sceptical or critical of the government's decision; and diverting it to those who are broadly supportive of the outcome, even if they wish to quibble over how the government achieves that goal. This allows the likes of Andrew Neil to treat the diligent efforts of Carole Cadwalladr and other investigative journalists as irrelevant, at best.

Thirdly, by moving the focus away from how the government made its decision in the first place, the BBC's emphasis risks burying evidence of corruption and so on. The end has justified the means. This encourages the likes of Andrew Neil to declare that continuing to investigate evidence of corruption and other criminality in relation to those means is somehow 'mad'.

Finally, the BBC's approach means that its reputation (and licence-fee payers' investment in that reputation) is horribly exposed to the downside of major events - or the reversal of the government's decision. The bigger the downside, or the more significant the reversal, the greater the damage to the BBC's reputation. 

What should the BBC do?

Avoid setting its editorial policy to simply accord with what the UK government (in this example) wants to do - even if that is, or is presented as, "the will of the people". 

The BBC's role should simply be to educate "the people" about the options, their implications and consequences of decisions taken. This is not about being able to say "I told you so" - it's about the BBC re-establishing and maintaining its role as an apolitical, trusted source of news and information, so that the people aren't so easily hoodwinked.

