Tuesday, 30 April 2019

Is BigTech Still Battling The Entire Human Race, Or Just Some Of Us?

Readers will be familiar with my view that we consumers tend to be loyal to 'facilitators' who focus on solving our problems, rather than 'institutions' who solve their own problems at our expense. Previously trusted service providers can also lose their facilitator status, and I'd argue that Facebook has already done so (owing to privacy, electoral and extremist content scandals) and Google is firmly headed in that direction (through behaviour incurring massive EU fines). Yet, despite announcements designed to suggest increasing transparency, it seems BigTech is actively resisting independent human oversight and the perceived battle between computers and the human race is far from over...

Part of the problem is that 'BigTech' firms still operate as agents of retailers and other organisations who pay them vast amounts of money to exploit our personal data by targeting advertising at us, rather than as our agents for finding what we need or want while shielding us from exploitation. In fact, this is the year when digital advertising spend will exceed spending on the old analogue 'meat space' channels.

Combine that exploitative role with rogue artificial intelligence (AI) and you have a highly toxic reputational cocktail - particularly because AI based on machine learning is seemingly beyond human investigation and control. 

For instance, Amazon found that an AI programme used for its own recruiting purposes was terribly biased, but could not figure out what was going wrong or how to fix it, so had to simply shut the thing down. Alarmingly, that suggests other AI programmes already notorious for bias, such as those used for 'predictive policing', are also beyond fixing and should be shut down...

Many BigTech firms are appointing 'ethics boards' to try to stop their AI programmes heading in inappropriate directions. Trouble is, not only is there doubt about what data scientists might view as inappropriate (which drove the appointment of ethics boards in the first place), but these boards are also generally toothless (only CEOs and main boards can decide the actual course of development), and they tend to be populated by industry insiders who sit on each other's ethics boards.

It is unclear, for example, whether the recommendations of the ethics committee overseeing the West Midlands police 'predictive policing' algorithm will be followed. Meanwhile, 14 other UK police forces are known to be using such AI programmes...

Another worrying trend is for AI firms to prevent investors voting on the company's plans, using "dual class" share structures that leave voting control with the founders rather than shareholders. Lyft is the latest to hit the news, but other offenders include Alphabet (Google), Blue Apron and Facebook, while Snap and Pinterest give shareholders zero control. Those firms might argue that stock prices are a check in themselves. But the stock market and investor greed are notorious for driving short-term decisions aimed only at maximising profits, and even giant regulatory fines are subject to appeal and can take a long time to be reflected in share prices. Voting power, on the other hand, is more qualitative and not simply a function of market forces - and the fact that it is being resisted tells you it's a promising tool for controlling BigTech.

Regulation will also be important, since fines for regulatory breaches are a source of public-sector revenue that can be used to clean up the industry's mess and to send signals to management, investors, competitors and so on. I'm not suggesting that regulatory initiatives like the UK Brexidiot ToryKIP government's heavily ironic "Online Harms" initiative are right in detail or approach, but BigTech certainly cannot keep abdicating responsibility for the consequences and other 'externalities' associated with its services. There has to be legal accountability - and grave consequences - for failing to ensure that AI and the firms themselves are subject to human control.

I guess the real question might be: which humans? 

Monday, 29 April 2019

Are England & Wales Ready For A Hard Border With Scotland By 2023?

With Brexit madness in full flow the case for a hard border with Scotland by 2023 is also gathering momentum. Here's why...

If Brexit proceeds, the UK government believes the British economy will under-perform to the tune of about £15bn a year in lost government tax receipts, meaning it will need to borrow more and more to maintain current spending. Even if you believe in unicorns, it's therefore likely that extreme pressure on public spending across the UK will mean declining public services and increasing misery for many.

Against this backdrop, the economic concerns raised during the first Scottish independence referendum seem less troubling. After all, Scotland (population 5.4m) is larger than seven EU member states, and even if its economy is more precarious than those of other small EU members, it might prefer the protection of the world's largest trade bloc to a flat-lining UK. This could also mean that qualms about accepting the Euro would fall away.

At any rate, Scotland now intends to hold a second independence referendum by 2021. As Brexit's impact and uncertainty worsen, bruised Scots will be more likely to vote both for independence from the UK and for membership of the EU. The original 55:45 margin against independence could therefore easily reverse.

Of course, the sensible option is to revoke the Article 50 notice and stop all this nonsense entirely, but British politicians are too scared of the fascists for that...
