
Tuesday, 26 March 2024

There's Nothing Intelligent About The Government's Approach To AI Either

Surprise! The UK government's under-funded, shambolic approach to public services also extends to the public sector's use of artificial intelligence. Ministers are no doubt piling the pressure on officials with demands for 'announcements' and other soundbites. But amid concerns that even major online platforms are failing to adequately mitigate the risks - not to mention this government's record for explosively bad news - you'd have thought they'd tread more carefully.

Despite 60 of the 87 public bodies surveyed either using or planning to use AI, the National Audit Office reports a lack of governance, accountability, funding, implementation plans and performance measures.

There are also "difficulties attracting and retaining staff with AI skills, and lack of clarity around legal liability... concerns about risks of unreliable or inaccurate outputs from AI, for example due to bias and discrimination, and risks to privacy, data protection, [and] cyber security." 

The full report is here.

Given that even the major online platforms are failing to adequately mitigate the risks of generative AI (among other things), you'd have thought that government would be all the more concerned to approach its own use of AI technologies responsibly.

But, no...

For what it's worth, here's my post on AI risk management (recently re-published by SCL).


Monday, 10 September 2018

The Irony Or The Ecstasy? The UK Centre For Data Ethics And Innovation

You would be forgiven for uttering a very long string of properly ripe expletives on learning that the current UK government has the cheek to create a "Centre for Data Ethics and Innovation"!  Personally, I think they've missed a trick with the name. With a little more thought, the acronym could've been "DECEIT" - and maybe in some languages it would be - so let's go with that.

You might say that it's better to have, rather than not have, an 'independent' policy body focused on the use of data and "artificial intelligence", even if it's set up by a government controlled by those who masterminded and/or benefited from the most egregious abuse of data ethics in UK history.

Or you might be relieved by the notion that it's easier for the dominant political party of the day to control the ethical agenda and achieve "the best possible outcomes" if the source of policy on data ethics is centralised, especially within a body being hastily set up on the back of a quick and dirty consultation paper released into the febrile, Brexit-dominated summer period, before any aspirational statutory governance controls are in place.

At any rate, we should all note that:
"[DECEIT], in dialogue with government, will need to carefully prioritise and scope the specific projects within its work programme. This should include an assessment of the value generated by the project, in terms of impact on innovation and public trust in the ethical use of data and AI, the rationale for [DECEIT] doing the work (relative to other organisations, inside or outside Government) and urgency of the work, for example in terms of current concerns amongst the public or business."
...
"In formulating its advice, the Centre will also seek to understand and take into consideration the plurality of views held by the public about the way in which data and AI should be governed. Where these views diverge, as is often the case with any new technology, the Centre will not be able to make recommendations that will satisfy everyone. Instead, it will be guided by the need to take ethically justified positions mindful of public opinion and respecting dissenting views. As part of this process it will seek to clearly articulate the complexities and trade offs involved in any recommendation."
Political point of view is absolutely critical here. This UK government does not accept that the Leave campaign or Cambridge Analytica etc. did anything 'wrong' with people's data. Senior Brexiteers say the illegality resulting in fines by the Electoral Commission, and further investigation by the ICO and the police, are merely politically motivated 'allegations' by do-good Remainers. Ministers have dismissed their own "promises" (which others have called "fake news", outright lies and distortion) as merely "a series of possibilities". There is no contrition. Instead, the emerging majority of people who want Brexit to be subjected to a binding vote by the electorate are regarded as ignoring "public opinion" or "the will of the people" - somehow enshrined forever in a single advisory referendum in 2016 - and as therefore expressing merely "dissenting views".

Against this gaslit version of reality, the creation of DECEIT is chilling.

Meanwhile, you might ask why there needs to be a separate silo for "Data Ethics and Innovation" when we have the Alan Turing Institute and at least a dozen other bodies, as well as the Information Commissioner, Electoral Commission and the police. Surely responsibility for maintaining ethical behaviour and regulatory compliance is already firmly embedded in their DNA?

I did wonder at the time of its formation whether the ATI was really human-centric, and never received an answer. And it's somewhat worrying that the ATI has responded to the consultation with the statement "We agree on the need for a government institution to devote attention to ethics". To be fair, however, one can read that statement as dripping with irony. Elsewhere, too, the ATI's response has the air of being written by someone with clenched teeth, wondering if the government really knows what it's doing in this area any more than it knows how to successfully exit the EU:
We would encourage clarity around which of these roles and objectives the Centre will be primarily or solely responsible for delivering (and in these cases, to justify the centralisation of these functions), and which will be undertaken alongside other organisations.
... We would encourage more clarity around the Centre’s definitions of AI and emerging technologies, as this will help clarify the areas that the Centre will focus on.
Reinterpreting some of the ATI's other concerns a little more bluntly yields further evidence that the ATI smells the same rat that I do:
  • DECEIT will have such a broad agenda and so many stakeholders to consider that you wonder if it will have adequate resources, and would simply soak up resources from other stakeholders without actually achieving anything [conspiracy theorists: insert inference of Tory intent here, to starve the other stakeholders into submission];
  • the summary of "pressing issues in this field" misses key issues around the accountability and auditability of algorithms, the adequacy of consent in context and whether small innovative players will be able to bear the inevitable regulations;
  • also omitted from the consultation paper are the key themes of privacy, identity, transparency in data collection/use and data sharing (all of which are the subject of ongoing investigation by the ICO, the police and others in relation to the Leave campaign);
  • the ATI's suggested "priority projects" imply its concern at the lack of traction in identifying accountability and liability for clearly unethical algorithms;
  • powers given to DECEIT should reinforce its independence and "make its abolition or undermining politically difficult";
  • DECEIT's activities and recommendations should be public;
  • how will the "dialogue with government" be managed to avoid DECEIT being captured by the government of the day?
  • how will "trade offs", "public opinion" and "dissenting views" be defined and handled (see my concerns above)?
I could add to this list the government's paternalistic outlook, rather than a human-centric view of data and technology that goes beyond mere 'privacy by design'. The human condition, not Big Tech/Finance/Politics/Government etc., must benefit from advances in technology.

At any rate, given its parentage, I'm afraid that I shall "remain" utterly sceptical of the need for DECEIT, its machinations and output - unless and until it consistently demonstrates its independence and good sense, not to mention its ethics.

Thursday, 24 May 2018

If You Need Consent To Process My Personal Data, The Answer Is No

... there are plenty of reasons for businesses and public sector bodies to process the data they hold about you, without needing your consent. These are where the processing is necessary for:
  • performing a contract with you, or to take steps at your request before agreeing a contract; 
  • complying with their own legal obligation(s); 
  • protecting your or another person's vital interests (to save your life, basically);
  • performing a task in the public interest or in the exercise of their official authority; 
  • their 'legitimate interests' (or someone else's), except where those interests are overridden by your own interests or fundamental rights and freedoms, which require the protection of your personal data.
The General Data Protection Regulation lists other non-consent grounds that apply where your personal data is more sensitive: where it relates to criminal convictions and offences or related security measures; where it reveals racial or ethnic origin, political opinions, religious or philosophical beliefs or trade union membership; where it is genetic or biometric data processed for the purpose of uniquely identifying you; or where it concerns health or your sex life or sexual orientation. National parliaments can add other grounds in local laws.
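To make the structure concrete, here's a minimal sketch of that decision in Python. It is purely illustrative and not legal advice: the flags, names and ordering are my own paraphrase of Article 6 GDPR, not an official test.

    # Illustrative paraphrase of the GDPR Article 6 lawful bases.
    # Flags, names and ordering are this sketch's own, not an official test.
    from dataclasses import dataclass

    @dataclass
    class ProcessingContext:
        necessary_for_contract: bool = False
        required_by_law: bool = False
        protects_vital_interests: bool = False
        public_task_or_official_authority: bool = False
        legitimate_interest: bool = False
        overridden_by_data_subject_rights: bool = False

    def lawful_basis(ctx: ProcessingContext) -> str:
        """Return the first applicable non-consent ground, else fall back to consent."""
        if ctx.necessary_for_contract:
            return "contract - Art 6(1)(b)"
        if ctx.required_by_law:
            return "legal obligation - Art 6(1)(c)"
        if ctx.protects_vital_interests:
            return "vital interests - Art 6(1)(d)"
        if ctx.public_task_or_official_authority:
            return "public task - Art 6(1)(e)"
        if ctx.legitimate_interest and not ctx.overridden_by_data_subject_rights:
            return "legitimate interests - Art 6(1)(f)"
        return "consent required - Art 6(1)(a) - and the answer may well be no"

    # e.g. a retailer fulfilling your order needs no consent:
    print(lawful_basis(ProcessingContext(necessary_for_contract=True)))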

These non-consent grounds for processing are all pretty reasonable - and fairly broad. So, if you don't have the right to process my personal data on one of those grounds, why would I want you doing so?

This would seem to herald a new era in which the Big Data behavioural profiling/targeting/advertising model begins to decline, in favour of personal apps (or open data spiders) that act as your agent and go looking for items in retailers' systems as you need them, without giving away your personal data unless or until it is necessary to do so...


Tuesday, 19 September 2017

BigTech Must Reassure Us It's Human

Recent issues concerning the purchase of lethal materials online, "fake news" and secure messaging highlight a growing tension between artificial intelligence and human safety. To continue their unbridled growth, the tech giants will have to reassure society that they are human, solving human problems, rather than machines solving their own problems at humans' expense. While innovation necessarily moves ahead of the law and regulation, developments in artificial intelligence should be shaped more by humane and ethical considerations, rather than outsourcing these to government or treating them as secondary considerations.

In the latest demonstration of this concern, Channel 4 researchers were able to assemble a 'shopping basket' of potentially lethal bomb ingredients on Amazon, partly relying on Amazon's own suggestion features or 'algorithms' ("Frequently bought together” and “Customers who bought this item also bought...”), which even suggested adding ball-bearings. This follows the phenomenon that emerged during the Brexit referendum and US Presidential election whereby purveyors of 'fake news' received advertising revenue from Facebook while targeting gullible voters.
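For what it's worth, a "frequently bought together" feature can be as crude as counting co-purchases across past orders. Here's a minimal, hypothetical sketch in Python - the basket data, threshold and deny-list are all invented for illustration, and Amazon's actual systems are not public - showing both the suggestion logic and how easily a flagged-combination guard could be layered on top:

    # Hypothetical co-purchase recommender with a flagged-pair guard.
    # Basket data, threshold and deny-list are invented for illustration.
    from collections import Counter
    from itertools import combinations

    orders = [
        {"hydrogen peroxide", "ball bearings", "wiring"},
        {"hydrogen peroxide", "ball bearings"},
        {"garden fertiliser", "weedkiller"},
    ]

    # Count how often each (alphabetically sorted) pair appears in one basket.
    co_purchases = Counter()
    for basket in orders:
        for pair in combinations(sorted(basket), 2):
            co_purchases[pair] += 1

    FLAGGED_PAIRS = {("ball bearings", "hydrogen peroxide")}  # invented deny-list

    def frequently_bought_with(item, min_count=2):
        suggestions = []
        for (a, b), count in co_purchases.items():
            if item in (a, b) and count >= min_count:
                if (a, b) in FLAGGED_PAIRS:
                    # Suppress the suggestion and report for human review instead.
                    print(f"review: suspicious combination {(a, b)} seen {count}x")
                    continue
                suggestions.append(b if item == a else a)
        return suggestions

    print(frequently_bought_with("hydrogen peroxide"))  # guard suppresses the pairing

Nothing in such a counter 'knows' what it is suggesting, which is precisely why any guard has to be designed in rather than hoped for.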

Neither business is keen to proactively monitor or police its services for fear of conceding an obligation to do so and rendering itself liable for not doing so where the monitoring fails.

Channel 4 quoted Amazon as merely saying that:
"all products must adhere to their selling guidelines and all UK laws. [We] will work closely with police and law enforcement agencies should they need [us] to assist investigations." [update 20.09.17: Amazon is reported to have responded the next day to say that it is reviewing its website to ensure the products “are presented in an appropriate manner”.]
Amazon makes a valid point. After all, the same products can be bought off-line, yet unlike an offline cash purchase in a walk-in store, if they are bought on Amazon there is likely to be a digital 'audit trail' showing who bought what and where it was delivered. Indeed, it's conceivable that Amazon had alerted the authorities to the nature of the items in Channel 4 researchers' shopping basket and the authorities may have allowed the session to run as part of a potential 'sting' operation. It is perhaps understandable that neither Amazon nor the authorities would want to explain that publicly, but it would be comforting to know this is the case. Channel 4 is also somewhat disingenuous in suggesting this is an Amazon problem, when less well-resourced services or other areas of the Internet (the 'dark web') may well offer easier opportunities to purchase the relevant products with less opportunity for detection.

At any rate, the main difference, of course, is that no one from an offline store is likely to help you find missing ingredients to make a potentially lethal device (unless they're already part of a terror cell or perhaps an undercover operative) - and this helpfulness is the key to Amazon's enormous success as a retail platform. It's possible, however, that a helpful employee might unwittingly show a terrorist where things are, and Amazon might equally argue that its algorithms don't "know" what they are suggesting. But whatever the 'promise' of the algorithms themselves, there is a sense that they should not be vulnerable to abuse in this way.

Similarly, in the case of Facebook, the social network service has become a raging success because it is specifically designed to facilitate the exchange of information that generates passionate connections amongst like-minded people far more readily than, say, the owner of a bar or other social hang-out or a newspaper or other form of traditional media. Equally, however, Facebook might argue that the helpful algorithms aren't actually "aware" of the content that is being shared, despite use of key words etc. Meanwhile, WhatsApp seems to have declined to provide a terrorist's final message because it could not 'read' it (although the authorities seem to have magically accessed it anyway...).

Just as we and the online platform owners have derived enormous benefit from the added dimensions to their services, however, we are beginning to consider that those dimensions should bring some additional responsibilities - whether merely moral or legal - possibly on both users and service providers/developers.

In many ways the so-called 'tech giants' - Apple, Amazon, Alphabet (Google), Facebook and others - still seem like challengers who need protection. That's why they received early tax breaks and exemptions from liability similar to those for public telecommunications carriers who can't actually "see" or "hear" the content in the data they carry. 

But while it's right that the law should follow commerce, intervening only when necessary and in a way proportionate to the size and scale of the problem, the size and reach of these platforms and the sheer pace of innovation are making it very hard for policymakers and legislators to catch up - especially as they tend to have wider responsibilities and get distracted by changes in government and issues like Brexit. The technological waves seem to be coming faster and colliding more and more with the 'real world' - through drones and driverless cars, for example.

The question is whether these innovations are creating consequences that the service providers themselves should actively address, or at least help address, rather than ignore as 'externalities' that government, other service providers or society must simply cope with.

The tech giants are themselves struggling to understand and manage the scale and consequences of their success, and the relentless competition to attract the best talent and the race to push the boundaries of 'artificial intelligence' sometimes present as a declaration of war on the human race. Even the government/university-endowed Alan Turing Institute seems to consider law and ethics as somehow separate from the practice of data science. Maybe algorithms should be developed and tested further before being released, or be coded to report suspicious activity (to the extent they aren't already). Perhaps more thought and planning should be devoted to retraining commercial van and truck drivers before driverless vehicles do to them what the sudden closure of British coal mines did to the miners and their communities (and what the closure of steel mills has done since!).

In any event, the current approach to governance of algorithms and other technological leaps forward has to change if the 'bigtech' service providers are to retain their mantle as 'facilitators' who help us solve our problems, rather than 'institutions' who just solve their own problems at their customers' expense. They and their data scientists have to remember that they are human, solving human problems, not machines solving their own problems at humans' expense.

[update 20.09.17 - It was very encouraging to see Channel 4 report last night that Amazon had promptly responded more positively to researchers' discovery that automated suggestion features were suggesting potentially lethal combinations of products; and is working to ensure that products are "presented in an appropriate manner". The challenge, however, is to be proactive. After all, they have control over the data and the algorithms. What they might lack is data on why certain combinations of products might be harmful in a wider context or scenario.]


Friday, 13 May 2016

European Privacy Regulators Now Not Happy With US #PrivacyShield

It all seemed to be going so smoothly for US policy-makers when the gathering of the EU's privacy regulators (the Article 29 Working Party) issued a draft review of the US Privacy Shield in February. But the final report means the champagne will remain in the bottle for some time yet.

Basically, the regulators found that the Privacy Shield is hard to read, unclear, inconsistently worded and inconsistent with the new General Data Protection Regulation; that it does not provide equivalent protection; that it makes it too hard for foreigners to get redress; that the proposed Ombudsman will be neither independent nor adequately resourced; and that it does "not exclude massive and indiscriminate collection of personal data originating from the EU"!

Meanwhile, data transfers from the EU to the US are still okay to take place under the existing data transfer mechanisms ('standard model clauses' and 'Binding Corporate Rules').

Pity Mr Schrems, who managed to overturn the 'Safe Harbor' but has left us even less protected than before!


Monday, 30 November 2015

Better Services For SMEs: Follow The Data

I was at a 'parliamentary roundtable' on Tuesday on the perennial topic of small business banking reform. A more official report will be forthcoming, but I thought I'd record a few thoughts in the meantime (on a Chatham House basis). 

It still seems to surprise some people that small businesses represent 95% of the UK's 5.4m businesses - 75% of which are sole traders - and that they account for 60% of private employment, most new jobs and about half the UK's turnover. So-called 'Big Business' is just the tip of the iceberg, since only they have the marketing and lobbying resources to be seen above the waves. As a result small businesses have long been a blind spot for the UK government - until very recently - and the impact has gone way beyond poor access to funding. It includes slow payment of invoices, the absence of customer protection when dealing with big business and lack of alternatives to litigation to resolve disputes.

What's changed?

A combination of financial crisis, better technology and access to data has exposed more of the problems surrounding SMEs - and made it possible to start doing something about them. And it's clear that legislators are prepared to act when they are faced with such data. The EU Late Payments Directive aimed to eliminate slow payments. The UK has created the British Business Bank to improve access to finance, as well as a mandatory process for banks to refer declined loan applications to alternative finance providers and improved access to SME credit data to make it easier for new lenders to independently assess SME creditworthiness. The crowdfunding boom has also been encouraged by the UK government, and has produced many new forms of non-bank finance for SMEs, including equity for start-ups, debentures for long term project funding, more flexible invoice trading and peer-to-peer loans for commercial property and working capital.  Last week, the FCA launched a discussion paper on broadening its consumer protection regime to include more SMEs.

Yet most of these initiatives are still to fully take effect; and listening to Tuesday's session on the latest issues made it clear there is a long way to go before the financial system allocates the right resources to the invisible majority of the private sector.

A key thread running through most areas of complaint seems to be a lack of transparency - of ready access to data. This seems to be both a root cause of a lot of problems and the reason so many proposed solutions end up making little impact. But the huge number and diversity of SMEs presents the kind of complexity that only data scientists can help us resolve if we are to address the whole iceberg, rather than just the tip. That's surely one job for the newly launched Alan Turing Institute, for example, although readers will know of my fear that it seems more aligned with institutions than with the poor old sole trader, let alone the consumer. So maybe SMEs need their own 'Chief Data Scientist' to champion their plight?

The latest specific concerns discussed were as follows:
  • the recent findings and remedies proposed by the Competition and Markets Authority into business current accounts are widely considered to be weak and unlikely to be effective - try searching the word "data" in the report to see how often there was too little available. The report still feels like the tip of an iceberg rather than a complete picture of the market and its problems;
  • austerity imperatives seem to be the main driver for off-loading RBS into private shareholder ownership - the bank pleading to be left to its own devices (not what it suddenly announced to the Chancellor in 2008!) - and trying to kill-off any further discussion of using its systems as a platform for a network of smaller regionally-focused banks (as in Germany);
  • the financial infrastructure for SMEs appears not to be geographically diverse - it doesn't yet mirror the Chancellor's "Northern Powerhouse" policy, for instance - despite calls for bank transparency on geographical accessibility, a US-style "Community Reinvestment Act" and clear reporting on lending to SMEs by individual banks (rather than the Bank of England's summary reporting). There's a sense that we should see some kind of financial devolution to match political devolution, albeit one that still enables local finance to leverage national resources and economies of scale. Technology should help here, as we are tending to use the internet and mobile apps quite locally, despite their global potential;
  • Some believe that SMEs need to take more responsibility for actively managing their finances, including seeking out alternatives and switching; while others believe that financial welfare should be like a utility - somehow pumped to everyone like water or gas, I assume - indeed regional alternative energy companies were touted as possible platforms for expanding access to regional financial services. My own view is that humans are unlikely to become more financially capable, so financial and other services supplied in complex scenarios need to be made simpler and more accessible - we should be relying less on advertising and more on hard data and personalised apps in such instances.
  • Meanwhile, SMEs are said to lack a genuine, high-profile champion whose role it is to ensure that the financial system generally is properly supportive of them. This may seem a little unfair to the Business Bank, various trade bodies and government departments, but it's also hard for any one of these bodies to oversee the whole fragmented picture. As I suggested above, however, I wonder whether a 'data champion' could be more helpful to the various stakeholders in identifying and resolving problems than a single person being expected to act as a small business finance tsar.
 In other words, we should follow the data, not the money...


Thursday, 15 October 2015

The Alan Turing Institute: Human-centric?

A slightly dispiriting day at The Alan Turing Institute 'Financial Summit', yesterday, I'm afraid to say. 

The ATI itself represents a grand vision and stunning organisational achievement - to act as a forum for focusing Britain's data scientists on the great problems of the world. Naturally, this leaves it open to attempts at 'capture' by all the usual vested interests, and its broad remit means that it must reflect the usual struggle between individuals and organisations, and between 'facilitators', who exist to solve their customers' problems, and 'institutions', who exist to solve their own problems at their customers' expense.

And of course, it's the institutions that have most of the money - not to mention the data problems - so I can see, too, why the ATI advertises its purpose to institutions as "the convener of a multidisciplinary approach to the development of 'big data' and algorithms". It's true, also, that there are global and social issues that transcend the individual and are valid targets for data scientists in combination with other specialists.

But it was concerning that an apparently neutral event should seem predicated on a supplier-led vision of what is right for humans, rather than actually engineering from the human outward - to enable a world in which you control what you buy and from whom, by reference to the data you generate, rather than by approximating you to a model or profile. Similarly, it was troubling to see a heavy emphasis in the research suggestions on how to enable big businesses to better employ the data science community in improving their ability to crunch data on customers for commercial exploitation.

To be fair, there were warning signs posted for the assembled throng of banks, insurers and investment managers - in the FCA's presentation on its dedication to competition through its Innovation Hub; a presentation on the nature and value of privacy itself; and salutary lessons from a pioneer of loyalty programmes on the 'bear traps' of customer rejection on privacy grounds and consumers' desire for increasing control over the commercial use of our data. The director's slides also featured the work of Danezis and others on privacy-friendly smart metering and a reference to the need to be human-centric.  

But inverting the institutional narrative to a truly human-centric one would transform the supplier's data challenge into one of organising its product data to be found by consumers' machines, which search open databases for solutions based on actual behaviour - open data spiders, as it were - rather than sifting through ever larger datasets in search of the 'more predictive' customer profile to determine how it wastes, sorry, spends its marketing budget.

Personally, I don't find much inspiration in the goal of enabling banks, insurers and other financial institutions to unite the data in their legacy systems to improve the 'predictive' nature of the various models they deploy, whether for wholesale or retail exploitation, and I'm sure delegates faced with such missions are mulling career changes. Indeed, one delegate lightened the mood with a reference to 'Conway's Law' (that interoperability failures in software within a business simply reflect the disjointed structure of the organisation itself). But it was clear that financial institutions would rather leave this as an IT problem than re-align their various silos and business processes to reflect their customers' end-to-end activities. There is also a continuing failure to recognise that most financial services are, after all, but a small step in the supply chain. I mean, consider the financial services implications of using distributed ledgers to power the entertainment industry, for example...

When queried after the event as to whose role it was to provide the 'voice of the customer', the response was that the ATI does not see itself as representing consumers' or citizens' interests in particular. That much is clear. But if it is to be just a neutral 'convener' then nor should the ATI allow itself to be positioned as representing the suppliers in their use and development of 'big data' tools - certainly not with £42m of taxpayer funding.

At any rate, in my view, the interests of human beings cannot simply be left to a few of the disciplines that the ATI aims to convene alongside the data scientists - such as regulators, lawyers, compliance folk or identity providers. The ATI itself must be human-centric if we are to keep humans at the heart of technology.


Friday, 19 June 2015

Video Did Not Kill The Radio Star

There were so many highlights from the past few days at the SCL's Technology Law Futures Conference on 'Keeping Humans at the Heart of Technology' (available online soon), but a favourite quote is that "video did not kill the radio star" (with apologies to the Buggles).

A fuller report will no doubt be available shortly. In the meantime, we should all be thinking about the responsible development of 'artificial narrow intelligence' in the context of social care, driverless cars and other autonomous vehicles (not to mention surveillance and warfare, to the extent we can influence that!).

If we can insist on adequate transparency and appropriate rules governing risk and liability in the context of these types of projects, then maybe we'll be in good shape to deal with 'artificial general intelligence' when that comes, as well as the potential for artificial superintelligence beyond... [drum roll]... The Singularity.  

Or we could simply fade away as the machines take over.

It's up to us.

For now.


Sunday, 29 March 2015

Is There Really A Single EU Market?

Some sobering figures from the European Commission for single market fantasists, sorry, enthusiasts (as if Greece wasn't sobering enough).

EU cross-border services account for 4% of all online services, as opposed to national services within the US (57%) and in each of the EU member states (39%). 

15% of EU consumers bought online from other member states, compared to 44% who bought online nationally, with online content seeing double-digit growth.

Only 7% of SMEs sell online across EU borders - and it costs an average of €9,000 to adapt their processes to local law in order to do so. 

The cost/price of delivery is (obviously) cited as a major problem, as well as differing VAT arrangements. But suggested solutions seem to ignore these and other key barriers to cross-border retail that have been cited in previous market studies, such as lack of marketing strategy, preference for national brands, language barriers and local employment law challenges. Presumably, that's because the Commission can do little to address such fundamental practicalities. Instead, they want to focus on:
  • stronger data protection rules;
  • broadband/4G roll-out;
  • use of 'Big Data' analytics; and
  • better digital skills amongst citizens and e-government by default.
The sense of futility that permeates such reports by Eurocrats only emphasises the fact that the law follows commerce; it doesn't catalyse markets.  

Yet, ironically, in areas where commercial and consumer pressure to enable cross-border activity is emerging, such as crowdfunding and crypto-technology, we find European institutions taking an unduly restrictive approach.

When will they simply get out of the way?


Tuesday, 3 March 2015

Artificial Intelligence: The Control Dichotomy

Professor Nick Bostrom delivered an inspirational speech for the SCL last night on "Superintelligence: a godsend or a doomsday device", although few would have found it reassuring - it is certainly conceivable that machines could become more intelligent than humans and that humans might not be able to control them. But these are still early days, he stresses. Regulating the development of artificial intelligence at this point risks halting progress. There's a lot more work to do to really understand how artificial intelligence will develop beyond playing old video games better than humans or recognising an image as the picture of a cat. We need to consider how the technology could help avert our extinction, as well as how it might wipe us out. Yet Nick also warns that it will take far less time for computers to exceed human-level intelligence than to reach it in the first place. So we need to start work on the control mechanisms for the development and use of artificial intelligence, without regulating the industry out of existence: the control dichotomy.

Nick suggests that the guiding principle should be that of "differential technological development" - diverting resources away from technologies and their application which could cause human extinction while focusing on those which will either help prevent our demise or will facilitate the expansion of the human race throughout the cosmos.

But how do we distinguish between helpful and harmful technologies and their application? 

As Nick points out, it's tough to think of any human invention that is inherently 'good'. He mentions many things, from gunpowder to genetic engineering, and I think we can throw in the wheel and the Internet for good measure. All these things are used by humans in bad ways as well as for the greater good. But I think what horrifies us especially about the idea of Superintelligence or 'The Singularity' is that it will be machines, not bad humans, who will be using other machines against us. And while we have lots of experience in dealing with evil humans, even our top minds admit we still don't know much about how machines might act in this way or how to stop them - and what we humans fear most is the unknown.

You'll notice I haven't said 'evil' machines, since they might not be operating with any evil 'intent' at all. Human extinction might just be a mistake - 'collateral damage' arising from some other mission. For instance, Nick suggests that a particular machine left to itself in a given situation might decide to devote itself entirely to making paperclips. So, presumably, it would not bother to put out a fire, for example, or free a human (or itself) from a burning building. It might leave that to other machines, who might in turn have chosen their own narrow objective that involves ignoring people's screams.

Here's where I struggle with the notion of Superintelligence. In fact, as someone who hates being pigeon-holed into any single role, I think a machine's decision to only ever make paperclips might be fabulously logical and a brilliant choice for that machine in the circumstances, but it makes the machine as dumb as a post. For me, Superintelligence should involve a machine being able to do everything a human can, and more.

But that's beside the point. Knowing what we know already, it would be insane to ignore the paperclip droid and wait for artificial intelligence to develop a machine more capable than humans before figuring out how we might control it. Nick is right to point out that we must figure that out in parallel. In other words, the concept of human control has to be part of the artificial intelligence programme. But it won't be as simple as coding machines to behave protectively, since machines will be able to programme each other. For instance, Nick suggests we could put the machines to work on the control problem, as well as on the problem of how to ensure the survival of our species. AI labs might also pay insurance premiums to cover the damage caused by what they develop. He was less certain about what we might do to constrain developments that occur in the context of secret defence programmes or intelligence gathering, but he seemed confident that we could at least infer the pace of development from the results, and be able to consider how to control the wider application of those developments. Mmmm.

At any rate, Nick also warns that we need to be careful what we wish for. Mandating human survival in a prescriptive way - even in a specific biological form - would be a bad move, since we should not assume we are in a position to foster positive human development any more than the Holy Office of the Spanish Inquisition was. Better to embed positive human values and emotions or, say, entertainment as a feature of intelligent machines (although I'm guessing that might not go down well with the jihadis). From a physical standpoint, we already know that the human body won't do well for long periods in space or on Mars, so some other version might need to evolve (okay, now I'm freaking myself out).

To retain a sense of pragmatism, at the end of the speech I asked Nick what he would recommend for our focus on 'Keeping Humans at the Heart of Technology' at the SCL conference in June. His tip was to consider which of the various types of control mechanism might work best, recognising the need to avoid constraining the positive development of artificial intelligence, while ensuring that we will be able to keep the machines in check if and when they become smarter than us.

No pressure then...


Tuesday, 17 February 2015

Will Machines Out-Compete Humans To The Point of Extinction?

I've been a bit absent from these pages of late, partly through pulling together SCL's Technology Law Futures Conference in June on 'how to keep humans at the heart of technology'. As I've explained on the SCL site, the conference is part of SCL's effort to focus attention on that question all year, starting with a speech by Oxford University's Professor Nick Bostrom on 2 March: "Superintelligence: a godsend or a doomsday device?"

In other words, last year was when the threat of "The Singularity" really broke into the mainstream, while this year we are trying to shift the focus onto how we avert that outcome in practical terms. 

My own book on how we can achieve control over our own data is still ping-ponging between agent and publishers, but will hopefully find a home before another year is out - unless, of course, the machines have other ideas... 


Tuesday, 21 October 2014

A Developer's Guide to Privacy and Fairness?

Over the past few months I've noticed a range of different articles expressing privacy concerns about mobile apps, wearable devices and internet-enabled things, like smart TVs and bathroom scales ("the Internet of Things") on the one hand; and initiatives like 'Midata' to help you create your own 'personal data ecosystem', on the other. But regulation aimed at unfair trading is also relevant in this context, as are the various security requirements being proposed at EU level in relation to payments and 'cybersecurity' more generally. Official guidance in these areas is often broad but not comprehensive, as in the summary of privacy rules given in the context of Midata. It would be great to see a more concerted effort to draw all the guidance together. I have suggested this to the SCL. In the meantime, this overview explains briefly where to find guidance on meeting privacy and fairness requirements when using apps and other devices for consumer marketing purposes.

Note: as a developer, it's worth reading such guidance as if you were a consumer, to understand the regulatory intent. As a consumer it's worth reading guidance aimed at firms, since that gives you a better insight into how things actually work 'behind the scenes'.

The Information Commissioner has plenty of practical guidance on privacy in the context of cookies, mobile applications and data sharing (and other guidance by sector or activity).

The Advertising Codes are important sources of information on how systems are supposed to behave in a marketing context.

PhonepayPlus has issued guidance on the use of premium rate numbers.

The Office of Fair Trading had plenty of guidance on how to comply with consumer protection regulation, which is now hosted by the Competition and Markets Authority, including principles for online and app-based games.

The OFT's guidance on what's appropriate in a consumer credit context, such as debt collection, is now in the FCA's consumer credit rules, and the FCA also recently consulted on updates to its guidance on financial promotions in the social media.

Firms seeking FCA authorisation often have to provide a lot of detail on their IT systems and governance in the process. The proposed new EU directive on payment services will broaden the range of regulated services and go into considerable detail on data security. In fact, security standards will be produced by the European Banking Authority, just to add to the confusion.

Knowing where consumers can complain is a guide to other regulators who may be interested in how your application works. There is an overview of UK consumer complaints channels here. There are specific complaints bodies for sectors, such as energy, financial services and telecoms, as well as for activities, like advertising and processing personal data.

However, you should be aware that the Data Protection Act gives businesses separate rights to process your personal data in the following circumstances:
  • for the performance of a contract to which you are a party, or for taking of steps at your request with a view to entering into a contract;
  • for compliance with any legal obligation, other than an obligation imposed by contract;
  • in order to protect your vital interests;
  • either for the exercise of a function conferred on a business by law or for the exercise of any other functions of a public nature exercised in the public interest;
  • for the purposes of legitimate interests pursued by a business or by someone else to whom the data are disclosed, except where that processing is unwarranted by reason of prejudice to your rights and freedoms or legitimate interests.
Public sector bodies also have certain rights to use your data, which I haven't covered here. However, it's important to mention the ID Assurance Programme run by the Government Digital Service team, which has issued useful guidance on ID assurance. And the Connected Digital Economy Catapult, which builds platforms for SMEs, is due to develop a code of practice on consumer protection.


Tuesday, 16 September 2014

Google Switches To Defence In Its War On The Human Race

Nine months after Google's Chairman, Eric Schmidt, used his speech at Davos to declare war on the human race, he and the other  Big Data commanders find themselves very much on the defensive.

"I was suprised it turned this quickly," Mr Schmidt is quoted as saying of the political tide, after his European smarm offensive in June failed to avert calls for Google to be broken-up.

The trouble is that Big Data funds itself by selling the opportunity to find humans and present advertising to them. Even the craze in wearable devices is all about geolocating the wearer (and potentially their companion(s)) for advertising purposes. Ideally, you'll buy a watch or pair of glasses that will keep you reading ads and search results while on the move, but a wristband that tells your 'friends' what you're doing and where will do just nicely. Maybe one day you'll even go for the driverless car, so you can watch ads instead of the road.

As I mentioned in January, the advertising revenue that initially helped fund the transition from the analogue/paper world now dwarfs the value we actually get from Big Data and the Web. Mutuality - and humanity - is being sacrificed in the Big Data rush to sell you tat. Oh, and in the quest for The Singularity, when the high priests of SillyCon Valley believe that machines will achieve their own superintelligence and outcompete humans to extinction. Yes, really. 

In the same way that banks have grown from their mutual origins to suit themselves at our expense - keeping most of the 'spread' between savings and loans to suit themselves - Big Data platforms are primarily focused on how to leverage the data you generate ("Your Data") without rewarding you for the privilege.  GCHQ and the NHS are playing pretty much the same game.

But not all digital platforms finance themselves by using Your Data as bait for advertising revenue. Since eBay enabled the first person-to-person retail auction in 1995, that model has spread to create marketplaces in music, travel, communications, payments, donations, loans, investments and personal transport. The marketplace operators thrive by enabling many participants to use their own data to transact directly with each other in return for relatively small fees, leaving the lion's share of each transaction with the parties on either side. 

The marketplace model also reveals that most of our daily transactions could be carried out between our machines. After all, they are much better at crunching all the data than we are. They are in the best position to combine our own transaction data, open public data and commercial product information to recommend the right car, mobile phone tariff or insurance products, without disclosing our identity to every advertiser in the process. And why couldn't they arrange it so that you switch to the cheapest phone or energy tariff each day, or switch car insurers depending on the time of day or where you're driving?
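As a sketch of the idea - entirely hypothetical: the tariff figures, the usage data and the notion of an open tariff feed are all invented for illustration - such an agent needs little more than your own meter data and a machine-readable price list:

    # Hypothetical 'switching agent': compare your own usage against openly
    # published tariffs and pick the cheapest. All figures are invented.

    open_tariffs = [  # imagine this fetched daily from an open data feed
        {"supplier": "A", "standing_charge": 0.25, "unit_rate": 0.30},
        {"supplier": "B", "standing_charge": 0.40, "unit_rate": 0.27},
    ]

    my_daily_usage_kwh = 10.0  # from your own smart meter data

    def daily_cost(tariff):
        return tariff["standing_charge"] + tariff["unit_rate"] * my_daily_usage_kwh

    best = min(open_tariffs, key=daily_cost)
    print(f"cheapest today: supplier {best['supplier']} at "
          f"£{daily_cost(best):.2f}/day")
    # A real agent would then trigger the switch via the supplier's systems,
    # without disclosing your identity to any advertiser along the way.

The point is that the comparison runs over your own data, on your side, rather than over a profile of you held on an advertiser's side.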

True, the platforms that enable you to leverage your own data more privately haven't yet attracted investors to the same extent as Big Data. eBay is solidly profitable and doesn't depend on substantial advertising revenues for its existence, yet it has a lower market capitalisation than Facebook or Google. It should come as no surprise to you that Wall Street and the world of high finance attaches a lower value to democratic and sustainable business models that don't suit a short term, institutional view of the world. But the financial news of 2014 must show institutional investors that we humans doubt whether Big Data has our best interests at heart. So the stock market value of marketplace operators may yet exceed that of the Big Data boys.

That's not to say that the whole Big Data movement has been a wasted experiment - it has just strayed from the path of simply digitising our daily experiences to trying to exploit them. Much of their technology could be re-aligned to empower you as an individual user, rather than treat you like a farm animal for the benefit of advertisers. 

Neither should we underestimate the Big Data giants' ability to reinvent themselves for the better. They are well-funded and more responsive to customers than banks and other institutions which have lost their way.

And it would be good to know they're working to sustain the human race, rather than kill it off.


Friday, 4 July 2014

Eurocrats Need A Reality Check

The Society for Computers and Law was recently entertained on the topic of trust in Big Data and the Cloud, by Paul Nemitz, European Commission Director of Fundamental Rights and Union Citizenship (in the Directorate-General for Justice). Both immigration and data protection feature among the main responsibilities of his Directorate, so you can imagine Paul is a very busy man right now, and it was very kind of him to take the time to speak.

Right, so that's the polite bit out of the way ;-)

Paul was keen to challenge the Brits in the audience to be more pragmatic in their attitude to the European Union. He believes the UK is among those who engage with the EU irresponsibly on the basis that "everything that comes out of Brussels is shite". Instead, he says British officials, lawyers and academics should be focused pragmatically on how to engage positively to achieve better European policy and regulation.

Of course, it's an old rhetorical trick to characterise your opponent's views as overly simplistic, boorish and stupid. Paul knows that the UK's opposition to red tape is based on more serious and fundamental differences than simply declaring everything from Brussels as 'shite', as discussed below. But as a Commission official, he's not able to enter into debates over the fundamental principles of the EU. It's his job to be a 'Believer' and get on with building the vision. He must take it on faith that the European Union is a single market, rather than a loose collection of disparate nations held together by red tape and political ambition.

It suits some EU member states to accept that same article of faith, but not all, and the people in the streets certainly don't think that way - consumers have been worryingly slow to purchase across borders, for example. And the recent election results revealed that a huge proportion of the electorate remain to be convinced that EU governance is wholly worthwhile. 

In these circumstances, the UK's rather sceptical view of what comes out of Brussels is quite broadly representative, and the attempt to draw a line in the sand over the imposition of a fervent unionist as head of the Commission was completely understandable. It's also pragmatic. If the EUrophiles were humble enough to accept that the single market is still an ambition, they too would realise it's unwise to be seen to force the issue. People have to be brought along on the journey, and maybe the UK is a good indicator of how far they are being left behind.

To back his claim that the UK's attitude is simply boorish, Paul points to a 'typical' lack of empirical evidence for resisting provisions in the General Data Protection Regulation requiring large firms to appoint a data protection officer and to facilitate fee-free 'data subject access requests'. He says these things work well in other EU member states already, and haven't driven anyone out of business. And against the UK's charge that the European Commission is needlessly committed to ever-increasing levels of privacy regulation, Paul points to surveys that show ever-increasing levels of concern amongst EU citizens about commercial and governmental intrusion into their private lives, as well as recent judgments from the European Court of Justice and the US Supreme Court curbing commercial and governmental intrusion into these areas (ironic, given that one of the ECJ's decisions was to declare Europe's own Data Retention Directive invalid).

Again, he's missing a sensible, pragmatic point. The UK's reaction is telling him that when huge swathes of the population question the very existence of the EU, it's wiser to stick to the essential foundations and building blocks, rather than snowing people with confetti about day-to-day compliance issues.

However, I'm glad to say that Paul was able to explain how the European Commission is working on some important foundations, such as getting standing for foreigners to take action to protect themselves in the US courts; and preventing indiscriminate mass collection of the personal data of EU citizens by any government or corporation, inside or outside the EU. Those two things are very important to building trust in governments, as well as Big Data, and are the sort of fundamental constitutional changes that citizens would find extremely difficult to achieve solely through the democratic process - though the European Commission has climbed on the bandwagon of public opinion (or Merkel's personal outrage), rather than initiated pressure to achieve these outcomes in its own right.

I also think Paul is right to point out that businesses are wrong in the view that personal data is 'the currency of the future' or 'oil in the wheels of commerce'.  Money is fungible - we view one note as the same as another - and, similarly, oil is just a commodity. So the data related to money and oil are hardly very sensitive and can be dealt with through economic regulation. But people, and the data about them and their personal affairs, come with more fundamental rights that can't simply be dealt with in economic terms. It's important that citizens have a right of action against governments and corporations to protect their interests (though I think the Google Spain decision was wrong).

But Paul overstates the 'synergies' between EU regulation, trust and innovation. He is stretching too far when he says that vigorous regulatory protection is essential to the creation of trust between people and their governments and the corporations they deal with. As evidence for this, he claims that the UK's Financial Conduct Authority is doling out the largest fines in the EU for the abuse of people's personal data, and asserts that this has built trust in the UK financial services market. From there, Paul leaps to the conclusion that similarly vigorous regulatory attention is somehow one of the necessary pre-conditions to the creation of commercial trust generally. He then leaps again to the notion that commercial trust driven by regulation is a pre-condition for innovation because, "There is no trust in start-ups," he says.

This is all nonsense.

Here Paul seems to be looking at the world through the lens of his own area of responsibility rather than from a consumer standpoint. Very few of the FCA's fines have anything to do with abuse of customer data, and its fines are puny compared to those of US regulators in any event. And in survey after survey, we've seen that the providers of retail financial services are generally among the least trusted retail organisations in the UK and Europe. Enforcement processes also tend to be slow, resulting in fines for activity that ceased years before, and depriving consumers of the opportunity to stop dealing with firms at the time of the wrongdoing. So, relative to consumers' perception of other industries, complex financial regulation and allegedly vigorous enforcement action has been no help at all.

It's also strange for Paul to suggest that "there is no trust in start-ups" without the backing of regulation, given the vast number of start-ups that have achieved mass consumer adoption absent effective regulation - certainly across borders. Unless, of course, Paul still considers Google, Facebook, Twitter etc. to be 'start-ups', which would be weird. This ignores the fact that, love 'em or hate 'em, such businesses have been far more responsive to consumer/citizen pressure in changing their terms and policies than the European Commission or national legislators have been in altering their own laws. Indeed, such businesses have even been relied upon by governments to enforce their consumer agreements to shut down activities that national governments have been powerless to stop.

Paul's view of start-ups appears to reflect the continental civil law notion that citizens cannot undertake an activity unless the law permits it; while in the common law world 'the law follows commerce' - in the UK and Ireland (and the US, Canada, Australia etc.) we can act unless the law prevents it. The havoc that arises from these opposing viewpoints - and the differing approaches to interpreting legislation - should not be underestimated. In fairness, the UK needlessly creates a rod for its own citizens' backs by 'gold-plating' EU laws (transposing them more or less verbatim). The national version is then interpreted literally. We would be far better off adopting the purposive interpretation of EU laws and implementing them according to their intended effect. This may mean a bit more friction with the Commission on the detail of implementation, but the French don't seem to mind frequent trips to the European Court when the Commission objects, and meanwhile their citizens don't labour under unduly restrictive interpretations of EU laws.

None of this is to say that I disagree with Paul's claim that strong individual rights, and regulation to protect them, are consistent with making money and healthy innovation. But I reach this conclusion by a different route, starting from the premise that retail goods and services must ultimately solve consumers' problems, rather than be designed to solve suppliers' problems at consumers' expense. Strong individual rights are only one feature of a consumer's legitimate day-to-day requirements, not all of which can be legislated for. Co-regulation, self-regulation and responsible, adaptable terms of service are all part of the mix.

Of course, regulation can be helpful in preserving or boosting trust where it is already present - as can be seen in the development of privacy law amidst the rise of social media services (and in the context of peer-to-peer lending and crowd-investment, for example). But regulation can't create trust from scratch, any more than Parliament can start businesses.

If only the Eurocrats would recognise these realities and limit their attention to areas where government action is essential, I'm sure they would find more favour with pragmatists everywhere.


Wednesday, 14 May 2014

Google Spain Case Raises More Questions Than It Answers

I'm an enthusiastic supporter of greater control over your data. But I'm really struggling with the European Court of Justice ruling that you can stop a search engine linking to something lawfully published about you in your local newspaper's online archive.

The case in question concerned the appearance of someone's name in a local Spanish newspaper announcement for a real-estate auction connected with proceedings to recover social security debts 16 years ago. The individual concerned (openly named in the judgment, ironically) claimed that the proceedings had been "fully resolved for a number of years and that reference to them was now entirely irrelevant." He failed to obtain an order banning the newspaper from carrying the item in its online archive, but succeeded in getting Google Spain to remove any links to it.

But surely if it was lawful for the local newspaper to have published the item of data - and it remains okay for it to publish the data via its website - then it should be okay to allow someone to find it?

I mean, why stop at gagging Google's local site? Why not make local libraries cut tiny holes in their microfiche records?

On this point, the ECJ cited problems where multiple jurisdictions are involved, even though this was a purely Spanish scenario:
"Given the ease with which information published on a website can be replicated on other sites and the fact that the persons responsible for its publication are not always subject to European Union legislation, effective and complete protection of data users [subjects?] could not be achieved if the latter had to obtain first or in parallel the erasure of the information relating to them from the publishers of websites."
But how could removing links to an item from a national search engine achieve "effective and complete protection" of the data subject when the same items are lawfully available via a national newspaper's online archive anyway? Surely a national problem such as this has to be dealt with at source, or not at all?

Another key issue is that the ECJ didn't seem to weigh up all the possible public interests against the particular individual's rights to 'respect for private life' and 'protection of personal data'. 

Surely, for example, there was some public interest in the publication of the notices of auction complained about, such as achieving a fair price for property being sold to pay a debt to the state? Perhaps if the requirement to publish such notices had been abolished, you could make a case for requiring the deletion of the old ones. But, absent its abolition, I'm not sure you can say it's "entirely irrelevant" that someone was mentioned in such a notice, even if that were years ago.

And is there not a public interest in being able to more readily find published material via search engines? Consider the huge variety of research processes that must now rely on search engines, from journalistic research, to employment checks, to official background checks. What holes will now emerge in such research processes? Will records be kept of all the links that search engines were told to remove? If so, where will those records be kept? Who will be allowed to access them? Aren't researchers now on notice that they should check individual newspaper archives for data that search engines aren't allowed to let you find? How many won't bother when they really should?
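To make those record-keeping questions concrete, here is a minimal sketch of what a single delisting record might have to contain. Every field name here is my own invention, purely for illustration - the judgment specifies no such format and no search engine is known to use this structure:

    # A purely hypothetical sketch of a delisting record - field names are
    # invented for illustration; the judgment specifies no such format.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DelistingRecord:
        removed_url: str      # the link the search engine must no longer return
        requester_name: str   # the data subject who complained
        jurisdiction: str     # where the delisting applies, e.g. "ES"
        removal_date: date    # when the link stopped appearing in results
        legal_basis: str      # e.g. the Google Spain ruling

    record = DelistingRecord(
        removed_url="https://example-newspaper.es/archive/auction-notice",
        requester_name="(the data subject)",
        jurisdiction="ES",
        removal_date=date(2014, 5, 13),
        legal_basis="ECJ Case C-131/12",
    )
    print(record)

Note the bind: any such record is itself personal data about the very matter the data subject wanted forgotten, so whoever keeps it inherits exactly the access and retention questions posed above.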

The problems with the judgment don't end there, as is demonstrated by the tortuous path the ECJ took to reach its result (explained here). 

All of this underlines the need for careful policy thought and regulatory clarity around these issues, rather than the celebratory gunfire heard in some quarters. This judgment raises more questions than it answers.

 

Friday, 24 January 2014

Google Declares War On The Human Race

Google's executive chairman, Eric Schmidt, finally admitted yesterday something that the likes of Jaron Lanier have been warning us about for some years now: he believes there's actually a race between computers and people. In fact, many among the Silicon Valley elite fervently believe in something called The Singularity. They even have a university dedicated to achieving it.

The Singularity refers to an alleged moment when machines develop their own, independent 'superintelligence' and outcompete humans to the point of extinction. Basically, humans create machines and robots, harvest the world's data until a vast proportion of it is in the machines, and those machines start making their own machines, and so on until they become autonomous. Stuart Armstrong reckons "there's an 80% probability that the singularity will occur between 2017 and 2112".

If you follow the logic, we humans will never know whether the Singularity has actually happened. So belief in it is an act of faith. In other words, the Singularity is a religion.

Lots of horrific things have been done in the name of one religion or another. But what sets this one apart is that the believers are, by definition, actively working to eliminate the human race.

So Schmidt is being a little disingenuous when he says "It's a race between computers and people - and people need to win", since he works with a bunch of people who believe the computers will definitely win, and maybe quite soon. The longer quote on FT.com suggests he added:
“I am clearly on that side [without saying which side, exactly]. In this fight, it is very important that we find the things that humans are really good at.”
Well, until extinction, anyway.

Of course, the Singularity idea breaks down on a number of levels. For example, it's only a human belief that machines will achieve superintelligence. If machines were to get so smart, how would we know what they might think or do? They'd have their own ideas (one of which might be to look after their pet data sources, but more on that shortly). And there's no accounting for 'soul' or 'free will' or any of the things we regard as human, though perhaps the zealots believe those things are superfluous and the machines won't need them to evolve beyond us. Finally, this is all in the heads of the Silicon Valley elite...

Anyhow, Schmidt suggests we have to find alternatives to what machines can do - the things that only humans are really good at. He says:
"As more routine tasks are automated, this will lead to much more part-time work in caring and creative industries. The classic 9-5 job will be redefined." 
Which is intended to draw our attention away from the trick that Google and others in the Big Data world are relying on to power up their beloved machines and stuff them full of enough data to go rogue.

By offering some stupid humans 'free' services that suck in lots of data, Big Data can charge other stupid humans for advertising to them. That way, the machines hoover up all the humans' money and data at the same time.

This works just fine until the humans start insisting on receiving genuine value for their data.

Which is happening right now in so many ways that I'm in the process of writing a book about it. 

Because it turns out humans aren't that dumb after all. We are perfectly happy to let the Silicon Valley elite build cool stuff and charge users nothing for it. Up to a point. And in the case of the Big Data platforms, we've reached that point. Now it's payback time.

So don't panic. The human race is not about to go out of fashion - at least not the way Big Data is planning. Just start demanding real value for the use of your data, wherever it's being collected, stored or used. And look out for the many services that are evolving to help you do that.

You never know, but if you get a royalty of some kind every time Google touches your data, you may not need that 9 to 5 job after all... And, no, the irony is not lost on me that I am writing this into the Google machine ;-)
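Just to show the shape of that arithmetic - and nothing more, since every figure below is an invented assumption and nobody pays data royalties at any established rate today:

    # Back-of-envelope sketch: could data royalties replace a salary?
    # Both inputs are invented assumptions, purely for illustration.
    royalty_per_use_gbp = 0.0001   # assumed payment each time your data is used
    uses_per_day = 500_000         # assumed daily uses across ad auctions etc.

    annual_income = royalty_per_use_gbp * uses_per_day * 365
    print(f"Annual royalty income: £{annual_income:,.0f}")  # £18,250 on these figures

The result is wildly sensitive to both assumptions, which is rather the point: until there's a market price for a 'use' of personal data, nobody can say what that royalty would be worth.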


Image from Wikipedia