Showing posts with label Singularity.

Tuesday, 27 February 2024

Defending Humanity Against The Techno-Optimists

I've been involved in tech since the mid-90s, have experienced the rise and burst of many 'bubbles', and have been writing about Silicon Valley's war on the human race since 2014. But the latest battles involving crypto and AI are proving to be especially dangerous. A cult of 'techno-optimism' has arisen, with a 'manifesto' asserting the dominance of their own self-interest, backed by a well-funded 'political action committee' making targeted political donations. Laws and lawsuits are pending, but humanity has to play a lot harder on defence... To chart a safe route, we must prioritise the public interest, and align technology with widely shared human values rather than the self-interest of a few tech enthusiasts, no matter how wealthy they are.

As Michael Lewis illustrated in The New New Thing, Silicon Valley has always had its share of people eager to get rich flogging a 'minimum viable product' that leaves awkward 'externalities' for others to deal with. Twenty-five years on, we are still wrestling with disinformation and other harmful content that flows from social media platforms, for example, never mind the 'dark web'.

Regardless of the potential downsides, the 'Techno-optimist manifesto' seeks to elevate and enshrine the get-rich-quick-at-others'-expense approach in a set of beliefs or 'creed' with technology as a 'god':

"Technology is the glory of human ambition and achievement, the spearhead of progress, and the realization of our potential." a16z

The techno-optimist creed commands followers to view the world only in terms of individual self-interest, to a point verging on malignant narcissism:

"We believe markets do not require people to be perfect, or even well intentioned – which is good, because, have you met people? Adam Smith: “It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest. We address ourselves not to their humanity but to their self-love, and never talk to them of our own necessities, but of their advantages.” a16z

In other words, techno-optimists aren't interested in humanity, good intentions or benevolence. They are only self-interested and believe that you and everyone else is, too. It's you against them, and them against you. In this way, the techno-optimists absolve themselves of any responsibility to care about other humans, because other humans are merely self-interested and technology is the pinnacle of everyone's self-interest. 

The cult only needs to focus on building new tech. 

The only remaining question relating to other humans is whether your self-interest is aligned with the techno-optimist's chosen technology. If not, you lose - as we'll see when it comes to their use of your cryptoassets or your copyright work or personal data where it is gathered among the training data they need to develop AI systems...

You might well ask if there are any constraints at all on the techno-optimists' ambition, and I would suggest only money, tech resources and the competing demands of other techno-optimists.

They claim not to be against regulation, so long as it doesn't throttle their unrestrained ambition or 'kill' their pet technology. To safeguard their self-interest, the techno-optimists are actively funding politicians who are aligned with their self-interest and support their technology, and attacking those who are not... with a dose of nationalism for good measure:

“If a candidate supports an optimistic technology-enabled future, we are for them. If they want to choke off important technologies, we are against them,” wrote Ben Horowitz, one of [a16z's] founders, in a Dec. 14 post, adding: “Every penny we donate will go to support like-minded candidates and oppose candidates who aim to kill America’s advanced technological future.” Cointelegraph

"Fairshake, a political action committee [PAC] supported by Coinbase and a16z, has a $73 million war chest to oppose anti-crypto candidates and support those in favor of digital assets... Fairshake describes itself as supporting candidates “committed to securing the United States as the home to innovators building the next generation of the internet.” Cointelegraph

Nationalistic claims are typical of such libertarian causes (Trump's "Make America Great Again") and invite unfortunate comparisons with European politics of the 1930s, as George Orwell pointed out in his Notes on Nationalism in 1945:

Nationalism is not to be confused with patriotism... two different and even opposing ideas are involved. By ‘patriotism’ I mean devotion to a particular place and a particular way of life, which one believes to be the best in the world but has no wish to force on other people. Patriotism is of its nature defensive, both militarily and culturally. Nationalism, on the other hand, is inseparable from the desire for power... 

A nationalist is one who thinks solely, or mainly, in terms of competitive prestige. He may be a positive or a negative nationalist — that is, he may use his mental energy either in boosting or in denigrating — but at any rate his thoughts always turn on victories, defeats, triumphs and humiliations. He sees history, especially contemporary history, as the endless rise and decline of great power units, and every event that happens seems to him a demonstration that his own side is on the upgrade and some hated rival is on the downgrade. 

But finally, it is important not to confuse nationalism with mere worship of success. The nationalist does not go on the principle of simply ganging up with the strongest side. On the contrary, having picked his side, he persuades himself that it is the strongest, and is able to stick to his belief even when the facts are overwhelmingly against him. Nationalism is power-hunger tempered by self-deception. Every nationalist is capable of the most flagrant dishonesty, but he is also — since he is conscious of serving something bigger than himself — unshakeably certain of being in the right..."

Yet in 2014, Google's executive chairman at the time, Eric Schmidt, 'warned' us that humans can only avoid the much vaunted Singularity - where computers out-compete humans to the point of extinction - by finding things that 'only humans can do and are really good at'. Ironically, by dedicating themselves utterly to the god of technology, the techno-optimists are actually asserting the 'self-interest' of machines!

Of course, technology is not inherently good or bad. That depends on its human creators, deployers and users. There's a long list of problems in the techno-optimist manifesto which they claim technology itself has 'solved' but self-evidently has not, either because the technology was useless without human involvement or because the problems persist.

And what of their latest creatures: crypto and AI?

While 'blockchain' or distributed ledger technology does have some decent use-cases, the one that gets the techno-optimists most excited is using crypto-tokens as either a crypto-currency or some other form of tradeable crypto-asset. They insist that the technology is so distinct that it must not be subject to existing securities laws. Yet they use the terminology of existing regulated markets to describe roles in the crypto markets that are really only corruptions of their 'real world' counterparts. Markets for cryptocurrencies and cryptoassets are riddled with examples of fraud and market manipulation that were long ago prohibited in the regulated markets. A supposedly distributed means of exchange without human intervention is actually heavily facilitated by human-directed intermediaries, some of which claim to operate like their real world equivalents that safeguard their customers' funds, while actually doing the opposite. The shining example of all these problems, and the numerous conflicts with the participating techno-optimists' self-interest, is the FTX scandal. And there are many others.

As for AI, again there are decent systems and use-cases, but the development of some AI systems relies on huge sets of 'training data' that would be prohibitively expensive to come by, were they not simply 'scraped' from the internet, regardless of copyright or privacy concerns: the technological equivalent of toxic waste. The creators of several of these 'open' AI systems defend their activity on techno-optimist grounds. Midjourney founder David Holz has admitted that his company did not receive consent for the hundreds of millions of images used to train its AI image generator, outraging photographers and artists; and OpenAI blithely explained in its submission to a UK House of Lords committee:

“Because copyright today covers virtually every sort of human expression – including blogposts, photographs, forum posts, scraps of software code, and government documents – it would be impossible to train today’s leading AI models without using copyrighted materials.”

So, there we were in 2014 being warned to be creative, but it turns out that the techno-optimists believe that your self-interest and the rights that protect your work can simply be overridden by their 'divine' self-interest. 

Needless to say, many humans are not taking this lying down (even if some of their governments and institutions are).

In January 2023, illustrators sued Midjourney Inc, DeviantArt Inc (DreamUp), and Stability A.I. Ltd (Stable Diffusion), claiming these text-to-image AI systems are “21st-century collage tools that violate the rights of millions of artists.”  A spreadsheet submitted as evidence allegedly lists thousands of artists whose images the startup's AI picture generator "can successfully mimic or imitate." 

The New York Times has sued OpenAI and Microsoft for copying and using millions of its copyright works to free-ride on its investment in journalism, building 'substitutive' products without permission or payment.

Getty Images has also filed a claim that Stability AI ‘unlawfully’ scraped millions of images from its website. 

Numerous other lawsuits are pending; and legislative measures have either been passed (as in the EU and China) or regulators have been taking action under existing law (as the Federal Trade Commission has been doing in the US). 

Meanwhile, the right-wing UK government has effectively sided with the techno-optimists by leaving it to 90 regulatory authorities to try to assess the impact of AI in their sectors, and has even cancelled plans for guidance on AI copyright licensing that copyright owners had requested.

As the Finance Innovation Lab (of which I’m a Senior Fellow) has pointed out, the AI governance debate is dominated by those most likely to profit from more AI - and the voices of those who may be most negatively impacted are being ignored. Government needs to bring industry, researchers and civil society together, and find ways to include the perspectives of the wider public. To chart a safe route forward, it is essential that we prioritize the public interest, and align technology with societal values rather than the self-interest of the techno-optimists. 

Commercially speaking, however, there's also the point that consumers tend to reward businesses that act as 'facilitators' (who solve our problems) rather than 'institutions' (who solve their own problems at our expense). Of course, businesses can start out in one category and end up in another... The techno-optimists' commitment to their own self-interest (if recognised by consumers) should place them immediately in the second category.


Monday, 10 September 2018

The Irony Or The Ecstasy? The UK Centre For Data Ethics And Innovation

You would be forgiven for uttering a very long string of properly ripe expletives on learning that the current UK government has the cheek to create a "Centre for Data Ethics and Innovation"!  Personally, I think they've missed a trick with the name. With a little more thought, the acronym could've been "DECEIT" - and maybe in some languages it would be - so let's go with that.

You might say that it's better to have, rather than not have, an 'independent' policy body focused on the use of data and "artificial intelligence", even if it's set up by a government controlled by those who masterminded and/or benefited from the most egregious abuse of data ethics in UK history.

Or you might be relieved by the notion that it's easier for the dominant political party of the day to control the ethical agenda and ensure the results achieve "the best possible outcomes" if the source of policy on data ethics is centralised, especially within a body being hastily set up on the back of a quick and dirty consultation paper released into the febrile, Brexit-dominated summer period before any aspirational statutory governance controls are in place.

At any rate, we should all note that:
"[DECEIT], in dialogue with government, will need to carefully prioritise and scope the specific projects within its work programme. This should include an assessment of the value generated by the project, in terms of impact on innovation and public trust in the ethical use of data and AI, the rationale for [DECEIT] doing the work (relative to other organisations, inside or outside Government) and urgency of the work, for example in terms of current concerns amongst the public or business."
...
"In formulating its advice, the Centre will also seek to understand and take into consideration the plurality of views held by the public about the way in which data and AI should be governed. Where these views diverge, as is often the case with any new technology, the Centre will not be able to make recommendations that will satisfy everyone. Instead, it will be guided by the need to take ethically justified positions mindful of public opinion and respecting dissenting views. As part of this process it will seek to clearly articulate the complexities and trade offs involved in any recommendation."
Political point of view is absolutely critical here. This UK government does not accept that the Leave campaign or Cambridge Analytica etc did anything 'wrong' with people's data. Senior Brexiteers dismiss the illegality that resulted in fines by the Electoral Commission and further investigation by the ICO and the police as merely politically motivated 'allegations' by do-good Remainers. Ministers have dismissed their own "promises" (which others have called "fake news", outright lies and distortion) as merely "a series of possibilities". There is no contrition. Instead, the emerging majority of people who want Brexit to be subjected to a binding vote by the electorate are regarded as ignoring "public opinion" or "the will of the people" somehow enshrined forever in a single advisory referendum in 2016; and as therefore expressing merely "dissenting views".

Against this gaslit version of reality, the creation of DECEIT is chilling.

Meanwhile, you might ask why there needs to be a separate silo for "Data Ethics and Innovation" when we have the Alan Turing Institute and at least a dozen other bodies, as well as the Information Commissioner, Electoral Commission and the police. Surely the responsibility for maintaining ethical behaviour and regulatory compliance is already firmly embedded in their DNA?

I did wonder at the time of its formation whether the ATI was really human-centric and never received an answer. And it's somewhat worrying that the ATI has responded to the consultation with the statement "We agree on the need for a government institution to devote attention to ethics". To be fair, however, one can read that statement as dripping with irony. Elsewhere, too, the ATI's response has the air of being written by someone with clenched teeth, wondering if the government really knows what it's doing in this area, any more than it knows how to successfully exit the EU:
We would encourage clarity around which of these roles and objectives the Centre will be primarily or solely responsible for delivering (and in these cases, to justify the centralisation of these functions), and which will be undertaken alongside other organisations.
... We would encourage more clarity around the Centre’s definitions of AI and emerging technologies, as this will help clarify the areas that the Centre will focus on.
Reinterpreting some of the ATI's other concerns a little more bluntly yields further evidence that the ATI smells the same rat that I do:
  • DECEIT will have such a broad agenda and so many stakeholders to consider that you wonder if it will have adequate resources, and would simply soak up resources from other stakeholders without actually achieving anything [conspiracy theorists: insert inference of Tory intent here, to starve the other stakeholders into submission];
  • the summary of "pressing issues in this field" misses key issues around the accountability and auditability of algorithms, the adequacy of consent in context and whether small innovative players will be able to bear inevitable regulations;
  • also omitted from the consultation paper are the key themes of privacy, identity, transparency in data collection/use and data sharing (all of which are the subject of ongoing investigation by the ICO, the police and others in relation to the Leave campaign);
  • the ATI's suggested "priority projects" imply its concern at the lack of traction in identifying accountability and liability for clearly unethical algorithms;
  • powers given to DECEIT should reinforce its independence and "make its abolition or undermining politically difficult";
  • DECEIT's activities and recommendations should be public;
  • how will the "dialogue with government" be managed to avoid DECEIT being captured by the government of the day?
  • how will "trade offs", "public opinion" and "dissenting views" be defined and handled (see my concerns above)?
I could add to this list concerns about the government's paternalistic outlook instead of a human-centric view of data and technology that goes beyond merely 'privacy by design'. The human condition, not Big Tech/Finance/Politics/Government etc, must benefit from advances in technology.

At any rate, given its parentage, I'm afraid that I shall "remain" utterly sceptical of the need for DECEIT, its machinations and output - unless and until it consistently demonstrates its independence, good sense, not to mention its ethics.

Tuesday, 19 September 2017

BigTech Must Reassure Us It's Human

Recent issues concerning the purchase of lethal materials online, "fake news" and secure messaging highlight a growing tension between artificial intelligence and human safety. To continue their unbridled growth, the tech giants will have to reassure society that they are human, solving human problems, rather than machines solving their own problems at humans' expense. While innovation necessarily moves ahead of the law and regulation, developments in artificial intelligence should be shaped more by humane and ethical considerations, rather than outsourcing these to government or treating them as secondary considerations.

In the latest demonstration of this concern, Channel 4 researchers were able to assemble a 'shopping basket' of potentially lethal bomb ingredients on Amazon, partly relying on Amazon's own suggestion features or 'algorithms' ("Frequently bought together” and “Customers who bought this item also bought...”), which even suggested adding ball-bearings. This follows the phenomenon that emerged during the Brexit referendum and US Presidential election whereby purveyors of 'fake news' received advertising revenue from Facebook while targeting gullible voters.

Neither business is keen to proactively monitor or police its services for fear of conceding an obligation to do so and rendering itself liable for not doing so where the monitoring fails.

Channel 4 quoted Amazon as merely saying that:
"all products must adhere to their selling guidelines and all UK laws. [We] will work closely with police and law enforcement agencies should they need [us] to assist investigations." [update 20.09.17: Amazon is reported to have responded the next day to say that it is reviewing its website to ensure the products “are presented in an appropriate manner”.]
Amazon makes a valid point. After all, the same products can be bought off-line, yet unlike an offline cash purchase in a walk-in store, if they are bought on Amazon there is likely to be a digital 'audit trail' showing who bought what and where it was delivered. Indeed, it's conceivable that Amazon had alerted the authorities to the nature of the items in Channel 4 researchers' shopping basket and the authorities may have allowed the session to run as part of a potential 'sting' operation. It is perhaps understandable that neither Amazon nor the authorities would want to explain that publicly, but it would be comforting to know this is the case. Channel 4 is also somewhat disingenuous in suggesting this is an Amazon problem, when less well-resourced services or other areas of the Internet (the 'dark web') may well offer easier opportunities to purchase the relevant products with less opportunity for detection.

At any rate, the main difference, of course, is that no one from an offline store is likely to help you find missing ingredients to make a potentially lethal device (unless they're already part of a terror cell or perhaps an undercover operative) - and this helpfulness is the key to Amazon's enormous success as a retail platform. It's possible, however, that a helpful employee might unwittingly show a terrorist where things are, and Amazon might equally argue that its algorithms don't "know" what they are suggesting. But, perhaps precisely because of the 'promise' of the algorithms themselves, there is a sense that they should not be vulnerable to abuse in this way.
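To make the point about algorithmic 'blindness' concrete, here is a minimal sketch in Python of how a "customers who bought this item also bought" feature can be built from nothing more than co-occurrence counts over past baskets. This is emphatically not Amazon's actual system, and the item names and data are invented; the point is simply that such a recommender has no representation of what the items are, or of what combining them might enable.

from collections import Counter
from itertools import combinations

# Toy purchase history: each basket is just a set of item identifiers.
past_baskets = [
    {"item_a", "item_b", "item_c"},
    {"item_a", "item_c"},
    {"item_b", "item_c", "item_d"},
]

# Count how often each pair of items appears in the same basket.
co_counts = Counter()
for basket in past_baskets:
    for pair in combinations(sorted(basket), 2):
        co_counts[pair] += 1

def frequently_bought_with(item, top_n=3):
    # Rank other items purely by co-occurrence; the code has no idea
    # what the items are or what they could be combined to make.
    scores = Counter()
    for (a, b), count in co_counts.items():
        if item == a:
            scores[b] += count
        elif item == b:
            scores[a] += count
    return [other for other, _ in scores.most_common(top_n)]

print(frequently_bought_with("item_a"))   # e.g. ['item_c', 'item_b']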

Similarly, in the case of Facebook, the social network service has become a raging success because it is specifically designed to facilitate the exchange of information that generates passionate connections amongst like-minded people far more readily than, say, the owner of a bar or other social hang-out or a newspaper or other form of traditional media. Equally, however, Facebook might argue that the helpful algorithms aren't actually "aware" of the content that is being shared, despite use of key words etc. Meanwhile, WhatsApp seems to have declined to provide a terrorist's final message because it could not 'read' it (although the authorities seem to have magically accessed it anyway...).

Just as we and the online platform owners have derived enormous benefit from the added dimensions to their services, however, we are beginning to consider that those dimensions should bring some additional responsibilities - whether merely moral or legal - possibly on both users and service providers/developers.

In many ways the so-called 'tech giants' - Apple, Amazon, Alphabet (Google), Facebook and others - still seem like challengers who need protection. That's why they received early tax breaks and exemptions from liability similar to those for public telecommunications carriers who can't actually "see" or "hear" the content in the data they carry. 

But while it's right that the law should follow commerce, intervening only when necessary and in a proportionate way to the size and scale of the problem, the size and reach of these platforms and the sheer pace of innovation is making it very hard for policymakers and legislators to catch up - especially as they tend to have wider responsibilities and get distracted by changes in government and issues like Brexit.  The technological waves seem to be coming faster and colliding more and more with the 'real world' through drones and driverless cars, for example. 

The question is whether these innovations are creating consequences that the service providers themselves should actively address, or at least help address, rather than ignore as 'externalities' that government, other service providers or society must simply cope with.

The tech giants are themselves struggling to understand and manage the scale and consequences of their success, and the relentless competition to attract the best talent and the race to push the boundaries of 'artificial intelligence' sometimes present as a declaration of war on the human race. Even the government/university endowed Alan Turing Institute seems to consider the law and ethics as somehow separate from the practice of data science. Maybe algorithms should be developed and tested further before being released, or be coded to report suspicious activity (to the extent they might not already). Perhaps more thought and planning should be devoted to retraining commercial van and truck drivers before driverless vehicles do to them what the sudden closure of British coal mines did to the miners and their communities (and what the closure of steel mills has done since!).

In any event, the current approach to governance of algorithms and other technological leaps forward has to change if the 'bigtech' service providers are to retain their mantle as 'facilitators' who help us solve our problems, rather than 'institutions' who just solve their own problems at their customers' expense. They and their data scientists have to remember that they are human, solving human problems, not machines solving their own problems at humans' expense.

[update 20.09.17 - It was very encouraging to see Channel 4 report last night that Amazon had promptly responded more positively to researchers' discovery that automated suggestion features were suggesting potentially lethal combinations of products; and is working to ensure that products are "presented in an appropriate manner". The challenge, however, is to be proactive. After all, they have control over the data and the algorithms. What they might lack is data on why certain combinations of products might be harmful in a wider context or scenario.]


Monday, 30 November 2015

Better Services For SMEs: Follow The Data

I was at a 'parliamentary roundtable' on Tuesday on the perennial topic of small business banking reform. A more official report will be forthcoming, but I thought I'd record a few thoughts in the meantime (on a Chatham House basis). 

It still seems to surprise some people that small businesses represent 95% of the UK's 5.4m businesses - 75% of which are sole traders - and that they account for 60% of private employment, most new jobs and about half the UK's turnover. So-called 'Big Business' is just the tip of the iceberg, since only they have the marketing and lobbying resources to be seen above the waves. As a result small businesses have long been a blind spot for the UK government - until very recently - and the impact has gone way beyond poor access to funding. It includes slow payment of invoices, the absence of customer protection when dealing with big business and lack of alternatives to litigation to resolve disputes.

What's changed?

A combination of financial crisis, better technology and access to data has exposed more of the problems surrounding SMEs - and made it possible to start doing something about them. And it's clear that legislators are prepared to act when they are faced with such data. The EU Late Payments Directive aimed to eliminate slow payments. The UK has created the British Business Bank to improve access to finance, as well as a mandatory process for banks to refer declined loan applications to alternative finance providers and improved access to SME credit data to make it easier for new lenders to independently assess SME creditworthiness. The crowdfunding boom has also been encouraged by the UK government, and has produced many new forms of non-bank finance for SMEs, including equity for start-ups, debentures for long term project funding, more flexible invoice trading and peer-to-peer loans for commercial property and working capital.  Last week, the FCA launched a discussion paper on broadening its consumer protection regime to include more SMEs.

Yet most of these initiatives are still to fully take effect; and listening to Tuesday's session on the latest issues made it clear there is a long way to go before the financial system allocates the right resources to the invisible majority of the private sector.

A key thread running through most areas of complaint seems to be a lack of transparency - ready access to data. This seems to be both a root cause of a lot of problems as well as the reason so many proposed solutions end up making little impact. But the huge numbers and diversity of SMEs present the kind of complexity that only data scientists can help us resolve if we are to address the whole iceberg, rather than just the tip. That's surely one job for the newly launched Alan Turing Institute, for example, although readers will know of my fear that it seems more aligned with institutions than the poor old sole trader, let alone the consumer. So maybe SMEs need their own 'Chief Data Scientist' to champion their plight?

The latest specific concerns discussed were as follows:
  • the recent findings and remedies proposed by the Competition and Markets Authority into business current accounts are widely considered to be weak and unlikely to be effective - try searching the word "data" in the report to see how often there was too little available. The report still feels like the tip of an iceberg rather than a complete picture of the market and its problems;
  • austerity imperatives seem to be the main driver for off-loading RBS into private shareholder ownership - the bank pleading to be left to its own devices (not what it suddenly announced to the Chancellor in 2008!) - and trying to kill-off any further discussion of using its systems as a platform for a network of smaller regionally-focused banks (as in Germany);
  • the financial infrastructure for SMEs appears not to be geographically diverse - it doesn't yet mirror the Chancellor's "Northern Powerhouse" policy, for instance - despite calls for bank transparency on geographical accessibility, a US-style "Community Reinvestment Act" and clear reporting on lending to SMEs by individual banks (rather than the Bank of England's summary reporting). There's a sense that we should see some kind of financial devolution to match political devolution, albeit one that still enables local finance to leverage national resources and economies of scale. Technology should help here, as we are tending to use the internet and mobile apps quite locally, despite their global potential;
  • Some believe that SMEs need to take more responsibility for actively managing their finances, including seeking out alternatives and switching; while others believe that financial welfare should be like a utility - somehow pumped to everyone like water or gas, I assume - indeed regional alternative energy companies were touted as possible platforms for expanding access to regional financial services. My own view is that humans are unlikely to become more financially capable, so financial and other services supplied in complex scenarios need to be made simpler and more accessible - we should be relying less on advertising and more on hard data and personalised apps in such instances.
  • Meanwhile, SMEs are said to lack a genuine, high profile champion whose role it is to ensure that the financial system generally is properly supportive of them. This may seem a little unfair to the Business Bank, various trade bodies and government departments, but it's also hard for any one of these bodies to oversee the whole fragmented picture. As I suggested above, however, I wonder whether a 'data champion' could be helpful to the various stakeholders in identifying and resolving problems, rather than a single person being expected to act as a small business finance tsar. 
 In other words, we should follow the data, not the money...


Thursday, 15 October 2015

The Alan Turing Institute: Human-centric?

A slightly dispiriting day at The Alan Turing Institute 'Financial Summit', yesterday, I'm afraid to say. 

The ATI itself represents a grand vision and stunning organisational achievement - to act as a forum for focusing Britain's data scientists on the great problems of the world. Naturally, this leaves it open to attempts at 'capture' by all the usual vested interests, and its broad remit means that it must reflect the usual struggle between individuals and organisations and between 'facilitators', who exist to solve their customers' problems, and 'institutions', who exist to solve their own problems at their customers' expense.

And of course, it's the institutions that have most of the money - not to mention the data problems - so I can see, too, why the ATI advertises its purpose to institutions as "the convener of a multidisciplinary approach to the development of 'big data' and algorithms". It's also true that there are global and social issues that transcend the individual and are valid targets for data scientists in combination with other specialists. 

But it was concerning that an apparently neutral event should seem predicated on a supplier-led vision of what is right for humans, rather than actually engineering from the human outward - to enable a world in which you control what you buy and from whom by reference to the data you generate, rather than by approximating you to a model or profile. Similarly, it was troubling to see a heavy emphasis in the research suggestions on how to enable big businesses to better employ the data science community in improving their ability to crunch data on customers for commercial exploitation.  

To be fair, there were warning signs posted for the assembled throng of banks, insurers and investment managers - in the FCA's presentation on its dedication to competition through its Innovation Hub; a presentation on the nature and value of privacy itself; and salutary lessons from a pioneer of loyalty programmes on the 'bear traps' of customer rejection on privacy grounds and consumers' desire for increasing control over the commercial use of our data. The director's slides also featured the work of Danezis and others on privacy-friendly smart metering and a reference to the need to be human-centric.  

But inverting the institutional narrative to a truly human-centric one would transform the supplier's data challenge into one of organising its product data to be found by consumers' machines that are searching open databases for solutions based on actual behaviour - open data spiders, as it were - rather than sifting through ever larger datasets in search of the 'more predictive' customer profile to determine how it spends (or wastes) its marketing budget.

Personally, I don't find much inspiration in the goal of enabling banks, insurers and other financial institutions to unite the data in their legacy systems to improve the 'predictive' nature of the various models they deploy, whether for wholesale or retail exploitation, and I'm sure delegates faced with such missions are mulling career changes. Indeed, one delegate lightened the mood with a reference to 'Conway's Law' (that interoperability failures in software within a business simply reflect the disjointed structure of the organisation itself). But it was clear that financial institutions would rather leave this as an IT problem than re-align their various silos and business processes to reflect their customers' end-to-end activities. There is also a continuing failure to recognise that most financial services are but a small step in the supply chain, after all. I mean, consider the financial services implications of using distributed ledgers to power the entertainment industry, for example... 

When queried after the event as to whose role it was to provide the 'voice of the customer', the response was that the ATI does not see itself as representing consumers' or citizens' interests in particular. That much is clear. But if it is to be just a neutral 'convenor' then nor should the ATI allow itself to be positioned as representing the suppliers in their use and development of 'big data' tools - certainly not with £42m of taxpayer funding. 

At any rate, in my view, the interests of human beings cannot simply be left to a few of the disciplines that the ATI aims to convene alongside the data scientists - such as regulators, lawyers, compliance folk or identity providers. The ATI itself must be human-centric if we are to keep humans at the heart of technology.


Tuesday, 3 March 2015

Artificial Intelligence: The Control Dichotomy

Professor Nick Bostrom delivered an inspirational speech for the SCL last night on "Superintelligence: a godsend or a doomsday device", although few would have found it reassuring - it is certainly conceivable that machines could become more intelligent than humans and that humans might not be able to control them. But these are still early days, he stresses. Regulating the development of artificial intelligence at this point risks halting progress. There's a lot more work to do to really understand how artificial intelligence will develop beyond playing old video games better than humans or recognising an image as the picture of a cat. We need to consider how the technology could help avert our extinction, as well as how it might wipe us out. Yet Nick also warns that it will take far less time for computers to exceed human-level machine intelligence than to reach it in the first place. So we need to start work on the control mechanisms for the development and use of artificial intelligence, without regulating the industry out of existence: the control dichotomy.

Nick suggests that the guiding principle should be that of "differential technological development" - diverting resources away from technologies and their application which could cause human extinction while focusing on those which will either help prevent our demise or will facilitate the expansion of the human race throughout the cosmos.

But how do we distinguish between helpful and harmful technologies and their application? 

As Nick points out, it's tough to think of any human invention that is inherently 'good'. He mentions many things, from gun powder to genetic engineering, and I think we can throw in the wheel and the Internet for good measure. All these things are used by humans in bad ways as well as for the greater good. But I think what horrifies us especially about the idea of Superintelligence or 'The Singularity' is that it will be machines, not bad humans, who will be using other machines against us. And while we have lots of experience in dealing with evil humans, even our top minds admit we still don't know much about how machines might act in this way or how to stop them - and what we humans fear most is the unknown. 

You'll notice I haven't said 'evil' machines, since they might not be operating with any evil 'intent' at all. Human extinction might just be a mistake - 'collateral damage' arising from some other mission. For instance, Nick suggests that a particular machine left to itself in a given situation might decide to devote itself entirely to making paperclips. So, presumably, it would not bother to put out a fire, for example, or free a human (or itself) from a burning building. It might leave that to other machines, who might in turn have chosen their own narrow objective that involves ignoring people's screams.

Here's where I struggle with the notion of Superintelligence. In fact, as someone who hates being pigeon-holed into any single role, I think a machine's decision to only ever make paperclips might be fabulously logical and a brilliant choice for that machine in the circumstances, but it makes the machine as dumb as a post. For me, Superintelligence should involve a machine being able to do everything a human can, and more.

But that's beside the point. Knowing what we know already, it would be insane to ignore the paperclip droid and wait for artificial intelligence to develop a machine more capable than humans before figuring out how we might control it. Nick is right to point out that we must figure that out in parallel. In other words, the concept of human control has to be part of the artificial intelligence programme. But it won't be as simple as coding machines to behave protectively, since machines will be able to programme each other. For instance, Nick suggests we could put the machines to work on the control problem, as well as on the problem of how to ensure the survival of our species. AI labs might also pay insurance premiums to cover the damage caused by what they develop. He was less certain about what we might do to constrain developments that occur in the context of secret defence programmes or intelligence gathering, but he seemed confident that we could at least infer the pace of development from the results, and be able to consider how to control the wider application of those developments. Mmmm.

At any rate, Nick also warns that we need to be careful what we wish for. Mandating human survival in a prescriptive way - even a specific biological form - would be a bad move, since we should not assume we are in a position to foster positive human development any more than the Holy Office of the Spanish Inquisition was. Better to embed positive human values and emotions or, say, entertainment as a feature of intelligent machines (although I'm guessing that might not go down well with the jihadis). From a physical standpoint, we already know that the human body won't do so well for long periods in space or on Mars, so some other version might need to evolve (okay, now I'm freaking myself out).

To retain a sense of pragmatism, at the end of the speech I asked Nick what he would recommend for our focus on 'Keeping Humans at the Heart of Technology' at the SCL conference in June. His tip was to consider which of the various types of control mechanism might work best, recognising the need to avoid constraining the positive development of artificial intelligence, while ensuring that we will be able to keep the machines in check if and when they become smarter than us.

No pressure then...


Thursday, 30 January 2014

P2P Goes Cloud-to-Cloud


In Part 2 of my response to Google's 'computers vs people' meme, I explained that humans can win the war for economic control of their data by transacting on peer-to-peer marketplaces. That's because the P2P platforms don't derive their revenue primarily by using their users' data as bait to attract advertising revenue. Instead, they enable many participants to transact directly with each other in return for relatively small payments towards the platforms' direct operational costs, leaving the lion's share of each transaction with the parties on either side. This post covers some technological developments which move the P2P front line deep into Big Data territory.

Perhaps the ultimate way to avoid Big Data's free ride on the ad revenue derived from your data is to cut your reliance on the World Wide Web itself. After all, the Web is just the 'human-readable' network of visible data that sits on the Internet - just one of many other uses. As I've mentioned previously, having your own pet 'open data spider' that gathers information based on your data without disclosing it would transform the advertiser's challenge from using Big Data tools to target you with their advertising, to enabling their product data to be found by your spider as and when you need it.
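As a purely illustrative sketch of that 'open data spider' idea (the listings, field names and thresholds below are all invented), the spider amounts to an agent that keeps your requirements on your own device and filters suppliers' openly published product data locally, so nothing about you is disclosed to anyone:

# Your requirements stay on your device; only open product data is read.
my_needs = {"max_price": 500, "min_battery_hours": 10}

# In practice this would be fetched from suppliers' openly published data feeds.
open_product_listings = [
    {"name": "laptop_x", "price": 450, "battery_hours": 12},
    {"name": "laptop_y", "price": 700, "battery_hours": 14},
    {"name": "laptop_z", "price": 480, "battery_hours": 8},
]

def spider(listings, needs):
    # Evaluate the match locally, so the supplier never learns your needs.
    return [p for p in listings
            if p["price"] <= needs["max_price"]
            and p["battery_hours"] >= needs["min_battery_hours"]]

print(spider(open_product_listings, my_needs))   # [{'name': 'laptop_x', ...}]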

But that would not necessarily solve the problems that arise where your data has to be shared.

Fortunately, all but the most hardcore privacy lobbyists have finally moved beyond debating the meaning of "privacy" and "identity" to realise two important things. First, 'personal data' (data that identifies you, either on its own or in combination with other data) is just one type of user-related data we should be concerned about controlling in a Big Data world. Second, it's critical to our very survival that we share as much data about ourselves as possible to the right recipient in the right context. The focus is now firmly on the root cause of all the noise: lack of personal control over our own data. 

Perhaps the leading exponents of this turnaround have been those involved in the Privacy by Design initiative. As explained in their latest report, they've become convinced by a range of pragmatic commercial and technological developments which together produce a 'personal data ecosystem' with you at the centre. You are now able to store your data in various 'personal cloud' services. 'Semantic data interchange' enables your privacy preferences to be attached to your data in machine-readable form so that machines can process it accordingly. Contractually binding 'trust frameworks' ensure data portability between personal clouds, and enable you to quickly grant others restricted access to a subset of your data for a set time and revoke permission at will. The advent of multiple 'persistent accountable pseudonyms' supports your different identities and expectations of privacy in different contexts, allowing for a lawful degree of anonymity yet making your identity ascertainable for contractual purposes. You can also anonymise your own data before sharing it, or stipulate anonymity in the privacy preferences attached to it, so your data can be processed in the aggregate for your own benefit and/or that of society.
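For illustration only, here is a hypothetical sketch in Python of what attaching machine-readable preferences to a piece of personal data might look like, with a time-limited grant that can be revoked at will. The class and field names are my own invention, not the API of any actual personal cloud or trust framework.

from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Preference:
    purpose: str                 # e.g. "energy-tariff-comparison"
    anonymise: bool              # may the data only be processed in aggregate?
    expires_at: datetime         # access lapses automatically
    compensation: str = "none"   # e.g. hard currency, loyalty points or cost savings
    revoked: bool = False

@dataclass
class PersonalRecord:
    owner: str
    payload: dict
    grants: dict = field(default_factory=dict)   # recipient -> Preference

    def grant(self, recipient, purpose, days, anonymise=False):
        # Give a named recipient access for a stated purpose and period.
        self.grants[recipient] = Preference(
            purpose, anonymise,
            datetime.now(timezone.utc) + timedelta(days=days))

    def revoke(self, recipient):
        # Withdraw permission at will.
        if recipient in self.grants:
            self.grants[recipient].revoked = True

    def may_process(self, recipient, purpose):
        # A recipient's machine checks this before touching the payload.
        p = self.grants.get(recipient)
        return (p is not None and not p.revoked
                and p.purpose == purpose
                and datetime.now(timezone.utc) < p.expires_at)

record = PersonalRecord("alice", {"annual_kwh": 3200})
record.grant("switching-service", "energy-tariff-comparison", days=30)
print(record.may_process("switching-service", "energy-tariff-comparison"))  # True
record.revoke("switching-service")
print(record.may_process("switching-service", "energy-tariff-comparison"))  # False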

All that's missing is a focus on determining the right value in each context. I mean, it should be a simple matter to attach a condition to your data that you are to be paid a certain amount of value whenever Big Data processes it. But 'how much'? And are you to be 'paid' in hard currency, loyalty points or cost savings?   

The ability to put a value on your data in any scenario is not as far away as you might think. The Privacy by Design report notes that the personal data ecosystem (PDE) is "explicitly architected as a network of peer-to-peer connectivity over private personal channels that avoid both information silos and unnecessary “middlemen” between interactions."

Sound familiar?

As explained in the previous post, P2P marketplaces already enable you to balance your privacy and commercial interests by setting a value on your data that is appropriate to the specific context. Your account on each platform - whether it's eBay or Zopa or one of many others - is effectively a 'personal cloud' through which you interact with other users' personal clouds to sell/buy stuff or lend/borrow money on service terms that leave most of the transaction value with you and the other participants.

The wider developments in semantic data interchange, trust frameworks etc., that are noted in the Privacy by Design report enable these clouds or marketplaces to be linked with other personal clouds, either directly or through 'personal information managers', as envisaged in the Midata programme.

Ultimately, we could use one or two personal information managers to host and control access to our data and derive income from the use of that data by transacting on different P2P platforms dedicated to discrete activities. Not only would this make it simpler to understand and verify whether the use of our data is appropriate in each context, but it would also enable us to diversify our sources of value - a concept that is just as important in the data world as it is in financial services. You don't want all your data and income streams (eggs) in the one cloud (basket).

The Privacy by Design report claims that "all these advancements mean that Big Privacy will produce a paradigm shift in privacy from an "organisation-centric" to a balanced model which is far more user-centric".

I agree, but would add a cautionary note.

In the context of the 'computers vs people' meme, I'm concerned by references in the report to "cloud-based autonomous agents that can cooperate to help people make even more effective data sharing decisions". Has Privacy by Design been unwittingly captured by the Singularity folk?

I don't think so. Such 'cloud-based agents' are ultimately a product of human design and control. Whether the technologists at the Singularity University choose to believe it or not, humans are in fact dictating each successive wave of automation. 

At any rate, we should take advantage of technology to keep things personal rather than submit to the Big Data machines.


Wednesday, 29 January 2014

Humans Win In The P2P Economy

There's been a lot of heat rising from Google executive chairman Eric Schmidt's recent assertions about a "race between computers and people" that obliges people to avoid jobs that machines can do. Initially, I suggested this was somewhat disingenuous, given the belief amongst the Silicon Valley elite that machines will achieve the 'Singularity', a state of autonomous superintelligence at which point they will outcompete humans to the point of extinction. Merely pushing people into a narrower and narrower range of 'creative' jobs only furthers that cause, since their creative output attracts the vast advertising revenues Big Data needs to build ever smarter machines.

But I also suggested there's an antidote, and today I want to focus more on that.

Not all Internet platforms finance themselves primarily by using free content as bait for advertising revenue. Since eBay enabled the first person-to-person auction in 1995, the 'P2P' model has spread to music and file sharing, voice and data communications, payments, donations, savings, loans, investments and so on. There are now too many such platforms to list. Even political campaigning has become a person-to-person proposition. In Japan a person can offer to care for another person's elderly parents in his city, if someone else will care for his own parents in another.

Like their meat-space counterparts - the 'mutual society' and the 'co-operative' - online P2P platforms enable people to transact and communicate directly with each other in return for relatively small payments towards the platforms' direct operational costs of facilitating the connection. The P2P model vastly limits the need for advertising, since the platform either enables participants to find each other or automatically matches and connects them using the data the participants enter. Through central service terms, each participant agrees with the others how the platform works and how their data is to be used. Typically, every participant has their own data account in which they can view their transaction history. Some platforms will allow that data to be downloaded, along with all the transaction data on the platform, and this is to be encouraged. Low charges make this a high volume business, like Big Data, but platform operators are able to achieve profitability without commanding the lion's share of the margin in each transaction. This helps explain why eBay is solidly profitable but has a lower market capitalisation than, say, Facebook or Google. It's a leaner intermediary - a facilitator rather than institution. That Wall Street attaches a lower value to a comparatively democratic and sustainable business model tells you all you need to know about Wall Street.

Google and Facebook might argue they are a kind of P2P platform. But aside from a few services, like App sales, they don't directly facilitate the negotiation and conclusion of transactions, so they cannot justify a transaction fee. Perhaps they might say they own the web pages and the servers or virtual 'land' on which their advertising is displayed. But that doesn't ring true. They provide the tools for users to create web pages, but if users did not build them there would be no facade on which to display ads, and no one to look at them. Besides, the supply of creative tools is a one-off, while users supply limitless amounts of data in return. Meanwhile, the advertising revenue that was once merely enough to sustain the Big Data ecosystem now dwarfs the value derived by all participants except the platform operators themselves. Any essence of mutuality - and humanity - has been lost in exactly the same way that banks grew from their mutual origins to capture more and more of the 'spread' between savings and loans. And just as banks now allocate most of the money they create to add financial assets to their balance sheets, rather than financing the productive economy, the Big Data platforms are investing in more ways to capitalise on free user data to lure advertising spend, rather than figuring out new ways to leave most of the value with their users.

Dealing with people and businesses over P2P platforms is a good way to use your own data to claw some of that value back.


