Readers will be familiar with my view that we consumers tend to be loyal to 'facilitators' who focus on solving our problems, rather than 'institutions' who solve their own problems at our expense. Previously trusted service providers can also lose their facilitator status, and I'd argue that Facebook has already done so (owing to privacy, electoral and extremist content scandals) and Google is firmly headed in that direction (through behaviour incurring massive EU fines). Yet, despite announcements designed to suggest increasing transparency, it seems BigTech is actively resisting independent human oversight and the perceived battle between computers and the human race is far from over...
Part of the problem is that 'BigTech' firms still operate as agents of retailers and other organisations who pay them vast amounts of money to exploit our personal data and target advertising at us, rather than as our agents for the purpose of finding what we need or want while shielding us against exploitation. In fact, this is the year when digital advertising spend will exceed spending on the old analogue 'meat space' channels.
Combine that exploitative role with rogue artificial intelligence (AI) and you have a highly toxic reputational cocktail - particularly because AI based on machine learning is seemingly beyond human investigation and control.
For instance, Amazon found that an AI programme used for its own recruiting purposes was systematically biased (reportedly against female candidates), but could not figure out what was going wrong or how to fix it, so simply had to shut the thing down. Alarmingly, that suggests other AI programmes already notorious for bias, such as those used for 'predictive policing', are also beyond fixing and should be shut down...
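To make that concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of outcome-based audit that can expose bias in a 'black box' model without explaining it. The toy model, the sample data and the use of the US 'four-fifths rule' threshold are my own assumptions for illustration, not Amazon's or any police force's actual system. The point is that disparate outcomes can be measured from the outside even when nobody can say why the model behaves that way - which is precisely why 'shut it down' may end up being the only fix.

```python
# Illustrative sketch only: a black-box bias audit that compares outcomes
# across groups. The model, data and 0.8 threshold (the US "four-fifths
# rule") are assumptions for the sake of the example.

from collections import defaultdict

def score_candidate(candidate):
    """Stand-in for an opaque ML model: returns True if 'hired'."""
    # Deliberately biased toy rule, so the audit has something to find.
    return candidate["years_experience"] >= 3 and candidate["gender"] == "M"

candidates = [
    {"gender": "M", "years_experience": 5},
    {"gender": "M", "years_experience": 2},
    {"gender": "F", "years_experience": 6},
    {"gender": "F", "years_experience": 4},
    {"gender": "M", "years_experience": 7},
    {"gender": "F", "years_experience": 3},
]

# Tally outcomes per group without ever looking inside the model.
selected = defaultdict(int)
total = defaultdict(int)
for c in candidates:
    total[c["gender"]] += 1
    if score_candidate(c):
        selected[c["gender"]] += 1

rates = {group: selected[group] / total[group] for group in total}
print("Selection rates by group:", rates)

# Disparate-impact ratio: lowest group's selection rate over the highest's.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f} "
      f"({'fails' if ratio < 0.8 else 'passes'} the four-fifths rule)")
```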
Many BigTech firms are appointing 'ethics boards' to try to stop their AI programmes heading in inappropriate directions. The trouble is that there is doubt about what data scientists might view as inappropriate (the very doubt that drove the appointment of ethics boards in the first place); that these boards are generally toothless, since only CEOs and main boards can decide the actual course of development; and that they tend to be populated by industry insiders who sit on each other's ethics boards.
It is unclear, for example, whether the recommendations of the ethics committee overseeing West Midlands Police's 'predictive policing' algorithm will be followed. Meanwhile, 14 other UK police forces are known to be using such AI programmes...
Another worrying trend is for AI firms to prevent investors from voting on the company's plans, using "dual class" share structures that leave voting control with the founders rather than shareholders. Lyft is the latest to hit the news, but other offenders include Alphabet (Google), Blue Apron and Facebook, while Snap and Pinterest give shareholders zero control. Those firms might argue that stock prices are a check in themselves. But the stock market and investor greed are notorious for driving short-term decisions aimed only at maximising profits, and even giant regulatory fines are subject to appeal and can take a long time to be reflected in share prices. Voting power, on the other hand, is more qualitative and not simply a function of market forces - and the fact that it is being resisted tells you it's a promising tool for controlling BigTech.
Regulation will also be important, since fines for regulatory breaches are a source of public-sector revenue that can be used to clean up the industry's mess and to send signals to management, investors, competitors and so on. I'm not suggesting that regulatory initiatives like the UK Brexidiot ToryKIP government's heavily ironic "Online Harms" initiative are right in detail or approach, but BigTech certainly cannot keep abdicating responsibility for the consequences and other 'externalities' associated with its services and approach. There has to be legal accountability - and grave consequences - for failing to ensure that AI and the firms themselves are subject to human control.