
Tuesday, 26 March 2024

There's Nothing Intelligent About The Government's Approach To AI Either

Surprise! The UK government's under-funded, shambolic approach to public services also extends to the public sector's use of artificial intelligence. Ministers are no doubt piling the pressure on officials with demands for 'announcements' and other soundbites. But amid concerns that even major online platforms are failing to adequately mitigate the risks - not to mention this government's record for explosively bad news - you'd have thought they'd tread more carefully.

Despite 60 of the 87 public bodies it surveyed either using or planning to use AI, the National Audit Office reports a lack of governance, accountability, funding, implementation plans and performance measures.

There are also "difficulties attracting and retaining staff with AI skills, and lack of clarity around legal liability... concerns about risks of unreliable or inaccurate outputs from AI, for example due to bias and discrimination, and risks to privacy, data protection, [and] cyber security." 

The full report is here.

Given that even the major online platforms are failing to adequately mitigate the risks of generative AI (among other things), you'd have thought the government would take more care to approach its own use of AI technologies responsibly.

But, no...

For what it's worth, here's my post on AI risk management (recently re-published by SCL).


Monday, 10 July 2023

Machine Unlearning: The Death Knell for Artificial General Intelligence?

[Image: Dall-E and toppng.com]

Just as AI systems seem to be creating a world of their own through various 'hallucinations', Google has announced a competition, running until mid-September, to help develop ways for AI systems to unlearn by removing "the influence of a specific subset of training examples — the "forget set" — from a trained model." Google accepts that in some cases it's possible to infer that an individual's personal data was used to train an AI model even after that data has been deleted, so unlearning is key to allowing individuals to exercise their rights to be forgotten, to object to or restrict processing, or to rectify errors under EU and UK privacy regulation. But what does machine unlearning mean for the 'holy grail' of artificial general intelligence?

Unlearning is intended to be a cost-effective alternative to completely retraining the AI model from scratch with the "forget set" removed from the training dataset. The idea is to remove certain data and its 'influence' while preserving the model's accuracy and fairness, and its ability to generalize to held-out examples of what the model can achieve.
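
By way of illustration only, here is a rough sketch (in PyTorch, with a toy model and made-up data) of one common baseline for approximate unlearning: rather than retraining from scratch, the trained model is briefly fine-tuned to keep its loss low on the retained data while pushing its loss up on the forget set. This is not the method the Google challenge prescribes, and every name, size and hyperparameter below is an assumption for the sake of illustration.

```python
# Illustrative sketch only: a simple "gradient ascent on the forget set"
# baseline for approximate unlearning, NOT the method used in Google's
# challenge. The model, data and hyperparameters are all made up.
import torch
from torch import nn

torch.manual_seed(0)

# Stand-in for an already-trained classifier and its training data.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
retain_x, retain_y = torch.randn(256, 20), torch.randint(0, 2, (256,))  # data to keep
forget_x, forget_y = torch.randn(32, 20), torch.randint(0, 2, (32,))    # the "forget set"

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

for _ in range(50):  # far fewer steps than retraining from scratch
    opt.zero_grad()
    retain_loss = loss_fn(model(retain_x), retain_y)  # descend: preserve accuracy
    forget_loss = loss_fn(model(forget_x), forget_y)  # ascend: erase influence
    (retain_loss - 0.1 * forget_loss).backward()
    opt.step()
```

The cost saving is the whole point: a handful of fine-tuning steps rather than repeating the original training run, while (ideally) leaving performance on the retained and held-out data intact.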

A problem with approaches to 'machine unlearning' to date has been inconsistency in the measures for evaluating their effectiveness, making comparisons impracticable. 

By standardizing the evaluation metrics, Google hopes to identify the strengths and weaknesses of different algorithms and spark broader work on this aspect of AI.

As part of the challenge, Google will offer a set of data, some of which must be forgotten if unlearning is successful: the unlearned model should contain no traces of the forgotten examples, so that 'membership inference attacks' (MIAs) would be unable to infer that any of them formed part of the original training dataset.
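
The crudest form of such an attack simply compares the model's loss on the supposedly forgotten examples with its loss on data it never saw: if the two distributions are indistinguishable, the attacker cannot tell which examples were members of the original training set. The snippet below continues the toy sketch above (reusing its model and forget set) and is purely illustrative; it is not how the challenge itself scores submissions.

```python
# Illustrative loss-threshold membership inference attack (MIA), continuing
# the toy sketch above (reuses `model`, `forget_x`, `forget_y`). Real MIAs,
# and the challenge's own evaluation, are considerably more sophisticated.
import torch

per_example = torch.nn.CrossEntropyLoss(reduction="none")

with torch.no_grad():
    unseen_x, unseen_y = torch.randn(32, 20), torch.randint(0, 2, (32,))  # never trained on
    forget_losses = per_example(model(forget_x), forget_y)
    unseen_losses = per_example(model(unseen_x), unseen_y)

# Crude attack: guess "was in the training set" whenever the loss is below
# the median loss on genuinely unseen data. A hit rate near 0.5 means the
# forgotten examples look no different from unseen ones.
threshold = unseen_losses.median()
hit_rate = (forget_losses < threshold).float().mean().item()
print(f"MIA hit rate on the forget set: {hit_rate:.2f}")
```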

Perhaps unlike the problem of hallucination or fabrication (from which humans also suffer), the advent of 'machine unlearning' provides another reason why 'artificial general intelligence' - a computer's ability to replicate human intelligence - will remain elusive: humans forget far less cleanly, often forgetting things only to recall them later, or proving unable to recall events, or aspects of them, that we witnessed firsthand or were 'supposed' to remember (like an accident, a birthday or a wedding anniversary).

