I updated my thoughts on AI risk management over on The Fine Print recently, and next on my list to catch up on was 'the next financial crisis'. Coincidentally, a news search surfaced an FT article on remarks about AI from the head of the SEC.
While Mr Gensler sees benefits in the use of AI, such as certain efficiencies and help in combating fraud, he also spots the seeds of the next financial crisis lurking not only in the general challenges associated with AI (e.g. inaccuracy, bias, hallucination) but particularly in:
- potential 'herding' around certain trading decisions;
- concentration of AI services in a few cloud providers;
- lack of transparency in who's using AI and for what purposes; and
- inability to explain the outputs.
These are all familiar themes, but it's the concentration of risk that leaps out in a financial context - though it was also identified as a wider concern in hearings before the House of Lords communications committee and by the FTC, as explained in my earlier post.
Only a few large tech players are able to (a) compete for the necessary semiconductors (chips) and (b) provide the vast scale of cloud computing infrastructure that AI systems require. That is particularly concerning in the financial markets context, because the world relies so heavily on those markets for economic, social and even political stability - as the global financial crisis revealed.
We can't blame the computers for allowing this situation to develop.
So, if 'superintelligence' is the point at which AI systems develop the intelligence to out-compete humans to the point of extinction, is 'superstupidity' the point at which we humans seal our fate by concentrating the risks posed by AI systems to the point of critical failure?