
Monday, 10 September 2018

The Irony Or The Ecstasy? The UK Centre For Data Ethics And Innovation

You would be forgiven for uttering a very long string of properly ripe expletives on learning that the current UK government has the cheek to create a "Centre for Data Ethics and Innovation"!  Personally, I think they've missed a trick with the name. With a little more thought, the acronym could've been "DECEIT" - and maybe in some languages it would be - so let's go with that.

You might say that it's better to have, rather than not have, an 'independent' policy body focused on the use of data and "artificial intelligence", even if it's set up by a government controlled by those who masterminded and/or benefited from the most egregious abuse of data ethics in UK history.

Or you might be relieved by the notion that it's easier for the dominant political party of the day to control the ethical agenda and achieve "the best possible outcomes" if the source of policy on data ethics is centralised, especially within a body being hastily set up on the back of a quick and dirty consultation paper released into the febrile, Brexit-dominated summer period before any aspirational statutory governance controls are in place.

At any rate, we should all note that:
"[DECEIT], in dialogue with government, will need to carefully prioritise and scope the specific projects within its work programme. This should include an assessment of the value generated by the project, in terms of impact on innovation and public trust in the ethical use of data and AI, the rationale for [DECEIT] doing the work (relative to other organisations, inside or outside Government) and urgency of the work, for example in terms of current concerns amongst the public or business."
...
"In formulating its advice, the Centre will also seek to understand and take into consideration the plurality of views held by the public about the way in which data and AI should be governed. Where these views diverge, as is often the case with any new technology, the Centre will not be able to make recommendations that will satisfy everyone. Instead, it will be guided by the need to take ethically justified positions mindful of public opinion and respecting dissenting views. As part of this process it will seek to clearly articulate the complexities and trade offs involved in any recommendation."
Political point of view is absolutely critical here. This UK government does not accept that the Leave campaign or Cambridge Analytica etc. did anything 'wrong' with people's data. Senior Brexiteers dismiss the illegality that resulted in fines by the Electoral Commission, and further investigation by the ICO and the police, as merely politically motivated 'allegations' by do-good Remainers. Ministers have dismissed their own "promises" (which others have called "fake news", outright lies and distortion) as merely "a series of possibilities". There is no contrition. Instead, the emerging majority of people who want Brexit to be subjected to a binding vote by the electorate are regarded as ignoring "public opinion" or "the will of the people", somehow enshrined forever in a single advisory referendum in 2016, and as therefore expressing merely "dissenting views".

Against this gaslit version of reality, the creation of DECEIT is chilling.

Meanwhile, you might ask why there needs to be a separate silo for "Data Ethics and Innovation" when we have the Alan Turing Institute and at least a dozen other bodies, as well as the Information Commissioner, Electoral Commission and the police. Surely responsibility for maintaining ethical behaviour and regulatory compliance is already firmly embedded in their DNA?

I did wonder at the time of its formation whether the ATI was really human-centric and never received an answer. And it's somewhat worrying that the ATI has responded to the consultation with the statement "We agree on the need for a government institution to devote attention to ethics". To be fair, however, one can read that statement as dripping with irony. Elsewhere, too, the ATI's response has the air of being written by someone with clenched teeth wondering if the government really knows what it's doing in this area, any more than it knows how to successfully exit the EU:
We would encourage clarity around which of these roles and objectives the Centre will be primarily or solely responsible for delivering (and in these cases, to justify the centralisation of these functions), and which will be undertaken alongside other organisations.
... We would encourage more clarity around the Centre’s definitions of AI and emerging technologies, as this will help clarify the areas that the Centre will focus on.
Reinterpreting some of the ATI's other concerns a little more bluntly yields further evidence that the ATI smells the same rat that I do:
  • DECEIT will have such a broad agenda and so many stakeholders to consider that you wonder whether it will have adequate resources, or will simply soak up resources from other stakeholders without actually achieving anything [conspiracy theorists: insert inference of Tory intent here, to starve the other stakeholders into submission];
  • the summary of "pressing issues in this field" misses key issues around the accountability and auditability of algorithms, the adequacy of consent in context and whether small innovative players will be able to bear the inevitable regulatory burden;
  • also omitted from the consultation paper are the key themes of privacy, identity, transparency in data collection/use and data sharing (all of which are the subject of ongoing investigation by the ICO, the police and others in relation to the Leave campaign);
  • the ATI's suggested "priority projects" imply its concern at the lack of traction in identifying accountability and liability for clearly unethical algorithms;
  • powers given to DECEIT should reinforce its independence and "make its abolition or undermining politically difficult";
  • DECEIT's activities and recommendations should be public;
  • how will the "dialogue with government" be managed to avoid DECEIT being captured by the government of the day?
  • how will "trade offs", "public opinion" and "dissenting views" be defined and handled (see my concerns above)?
I could add to this list concerns about the government's paternalistic outlook instead of a human-centric view of data and technology that goes beyond merely 'privacy by design'. The human condition, not Big Tech/Finance/Politics/Government etc, must benefit from advances in technology.

At any rate, given its parentage, I'm afraid that I shall "remain" utterly sceptical of the need for DECEIT, its machinations and its output - unless and until it consistently demonstrates its independence and good sense, not to mention its ethics.

Tuesday, 19 September 2017

BigTech Must Reassure Us It's Human

Recent issues concerning the purchase of lethal materials online, "fake news" and secure messaging highlight a growing tension between artificial intelligence and human safety. To continue their unbridled growth, the tech giants will have to reassure society that they are human, solving human problems, rather than machines solving their own problems at humans' expense. While innovation necessarily moves ahead of the law and regulation, developments in artificial intelligence should be shaped by humane and ethical considerations, rather than those considerations being outsourced to government or treated as secondary.

In the latest demonstration of this concern, Channel 4 researchers were able to assemble a 'shopping basket' of potentially lethal bomb ingredients on Amazon, partly relying on Amazon's own suggestion features or 'algorithms' ("Frequently bought together" and "Customers who bought this item also bought..."), which even suggested adding ball-bearings. This follows the phenomenon that emerged during the Brexit referendum and US Presidential election, whereby purveyors of 'fake news' received advertising revenue from Facebook while targeting gullible voters.

Neither business is keen to proactively monitor or police its services for fear of conceding an obligation to do so and rendering itself liable for not doing so where the monitoring fails.

Channel 4 quoted Amazon as merely saying that:
"all products must adhere to their selling guidelines and all UK laws. [We] will work closely with police and law enforcement agencies should they need [us] to assist investigations." [update 20.09.17: Amazon is reported to have responded the next day to say that it is reviewing its website to ensure the products “are presented in an appropriate manner”.]
Amazon makes a valid point. After all, the same products can be bought off-line, yet unlike an offline cash purchase in a walk-in store, if they are bought on Amazon there is likely to be a digital 'audit trail' showing who bought what and where it was delivered. Indeed, it's conceivable that Amazon had alerted the authorities to the nature of the items in Channel 4 researchers' shopping basket and the authorities may have allowed the session to run as part of a potential 'sting' operation. It is perhaps understandable that neither Amazon nor the authorities would want to explain that publicly, but it would be comforting to know this is the case. Channel 4 is also somewhat disingenuous in suggesting this is an Amazon problem, when less well-resourced services or other areas of the Internet (the 'dark web') may well offer easier opportunities to purchase the relevant products with less opportunity for detection.

At any rate, the main difference, of course, is that no one from an offline store is likely to help you find missing ingredients to make a potentially lethal device (unless they're already part of a terror cell or perhaps an undercover operative) - and this is the key to Amazon's enormous success as a retail platform. It's possible, however, that a helpful employee might unwittingly show a terrorist where things are, and Amazon might equally argue that its algorithms don't "know" what they are suggesting. But, perhaps because of the 'promise' of the algorithms themselves, there is a sense that they should not be vulnerable to abuse in this way.
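
As an aside, the content-blindness of such features is easy to illustrate. Below is a minimal sketch of how a "frequently bought together" suggestion can work, assuming a simple co-purchase count; the item names and order data are invented for illustration, and nothing in the logic ever inspects what the products actually are.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical order history: each basket is just a set of product IDs.
orders = [
    {"item_A", "item_B", "item_C"},
    {"item_A", "item_B"},
    {"item_B", "item_C"},
]

# Count how often each pair of items appears in the same basket.
co_counts = defaultdict(Counter)
for basket in orders:
    for a, b in combinations(sorted(basket), 2):
        co_counts[a][b] += 1
        co_counts[b][a] += 1

def frequently_bought_together(item, n=2):
    """Suggest the items most often bought alongside `item`.

    The function sees only opaque IDs and co-purchase frequency -
    the sense in which such an algorithm doesn't 'know' what it is
    suggesting."""
    return [other for other, _ in co_counts[item].most_common(n)]

print(frequently_bought_together("item_A"))  # ['item_B', 'item_C']
```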

Similarly, in the case of Facebook, the social network service has become a raging success because it is specifically designed to facilitate the exchange of information that generates passionate connections amongst like-minded people far more readily than, say, the owner of a bar or other social hang-out or a newspaper or other form of traditional media. Equally, however, Facebook might argue that the helpful algorithms aren't actually "aware" of the content that is being shared, despite use of key words etc. Meanwhile, WhatsApp seems to have declined to provide a terrorist's final message because it could not 'read' it (although the authorities seem to have magically accessed it anyway...).

Just as we and the online platform owners have derived enormous benefit from the added dimensions to their services, however, we are beginning to consider that those dimensions should bring some additional responsibilities - whether merely moral or legal - possibly on both users and service providers/developers.

In many ways the so-called 'tech giants' - Apple, Amazon, Alphabet (Google), Facebook and others - still seem like challengers who need protection. That's why they received early tax breaks and exemptions from liability similar to those for public telecommunications carriers who can't actually "see" or "hear" the content in the data they carry. 

But while it's right that the law should follow commerce, intervening only when necessary and in a way proportionate to the size and scale of the problem, the size and reach of these platforms and the sheer pace of innovation are making it very hard for policymakers and legislators to catch up - especially as they tend to have wider responsibilities and get distracted by changes in government and issues like Brexit. The technological waves seem to be coming faster and colliding more and more with the 'real world' through drones and driverless cars, for example.

The question is whether these innovations are creating consequences that the service providers themselves should actively address, or at least help address, rather than ignore as 'externalities' that government, other service providers or society must simply cope with.

The tech giants are themselves struggling to understand and manage the scale and consequences of their success, and the relentless competition to attract the best talent and the race to push the boundaries of 'artificial intelligence' sometimes presents as a declaration of war on the human race. Even the government/university endowed Alan Turing Institute seems to consider the law and ethics as somehow separate from the practice of data science. Maybe algorithms should be developed and tested further before being released, or be coded to report suspicious activity (to the extent they might not already).  Perhaps more thought and planning should be devoted to retraining commercial van and truck drivers before driverless vehicles do to them what the sudden closure of British coal mines did to the miners and their communities (and what the closure of steel mills has done since!).

In any event, the current approach to governance of algorithms and other technological leaps forward has to change if the 'bigtech' service providers are to retain their mantle as 'facilitators' who help us solve our problems, rather than 'institutions' who just solve their own problems at their customers' expense. They and their data scientists have to remember that they are human, solving human problems, not machines solving their own problems at humans' expense.

[update 20.09.17 - It was very encouraging to see Channel 4 report last night that Amazon had promptly responded more positively to researchers' discovery that automated suggestion features were suggesting potentially lethal combinations of products; and is working to ensure that products are "presented in an appropriate manner". The challenge, however, is to be proactive. After all, they have control over the data and the algorithms. What they might lack is data on why certain combinations of products might be harmful in a wider context or scenario.]


Monday, 30 November 2015

Better Services For SMEs: Follow The Data

I was at a 'parliamentary roundtable' on Tuesday on the perennial topic of small business banking reform. A more official report will be forthcoming, but I thought I'd record a few thoughts in the meantime (on a Chatham House basis). 

It still seems to surprise some people that small businesses represent 95% of the UK's 5.4m businesses - 75% of which are sole traders - and that they account for 60% of private employment, most new jobs and about half the UK's turnover. So-called 'Big Business' is just the tip of the iceberg, since only they have the marketing and lobbying resources to be seen above the waves. As a result small businesses have long been a blind spot for the UK government - until very recently - and the impact has gone way beyond poor access to funding. It includes slow payment of invoices, the absence of customer protection when dealing with big business and lack of alternatives to litigation to resolve disputes.

What's changed?

A combination of financial crisis, better technology and access to data has exposed more of the problems surrounding SMEs - and made it possible to start doing something about them. And it's clear that legislators are prepared to act when they are faced with such data. The EU Late Payments Directive aimed to eliminate slow payments. The UK has created the British Business Bank to improve access to finance, as well as a mandatory process for banks to refer declined loan applications to alternative finance providers and improved access to SME credit data to make it easier for new lenders to independently assess SME creditworthiness. The crowdfunding boom has also been encouraged by the UK government, and has produced many new forms of non-bank finance for SMEs, including equity for start-ups, debentures for long term project funding, more flexible invoice trading and peer-to-peer loans for commercial property and working capital.  Last week, the FCA launched a discussion paper on broadening its consumer protection regime to include more SMEs.

Yet most of these initiatives are still to fully take effect; and listening to Tuesday's session on the latest issues made it clear there is a long way to go before the financial system allocates the right resources to the invisible majority of the private sector.

A key thread running through most areas of complaint seems to be a lack of transparency - ready access to data. This seems to be both a root cause of many of the problems and the reason so many proposed solutions end up making little impact. But the huge numbers and diversity of SMEs present the kind of complexity that only data scientists can help us resolve if we are to address the whole iceberg, rather than just the tip. That's surely one job for the newly launched Alan Turing Institute, for example, although readers will know of my fear that it seems more aligned with institutions than the poor old sole trader, let alone the consumer. So maybe SMEs need their own 'Chief Data Scientist' to champion their plight?

The latest specific concerns discussed were as follows:
  • the recent findings and remedies from the Competition and Markets Authority's inquiry into business current accounts are widely considered to be weak and unlikely to be effective - try searching for the word "data" in the report to see how often there was too little available. The report still feels like the tip of an iceberg rather than a complete picture of the market and its problems;
  • austerity imperatives seem to be the main driver for off-loading RBS into private shareholder ownership - the bank pleading to be left to its own devices (not what it suddenly announced to the Chancellor in 2008!) - and trying to kill off any further discussion of using its systems as a platform for a network of smaller regionally-focused banks (as in Germany);
  • the financial infrastructure for SMEs appears not to be geographically diverse - it doesn't yet mirror the Chancellor's "Northern Powerhouse" policy, for instance - despite calls for bank transparency on geographical accessibility, a US-style "Community Reinvestment Act" and clear reporting on lending to SMEs by individual banks (rather than the Bank of England's summary reporting). There's a sense that we should see some kind of financial devolution to match political devolution, albeit one that still enables local finance to leverage national resources and economies of scale. Technology should help here, as we are tending to use the internet and mobile apps quite locally, despite their global potential;
  • Some believe that SMEs need to take more responsibility for actively managing their finances, including seeking out alternatives and switching; while others believe that financial welfare should be like a utility - somehow pumped to everyone like water or gas, I assume - indeed regional alternative energy companies were touted as possible platforms for expanding access to regional financial services. My own view is that humans are unlikely to become more financially capable, so financial and other services supplied in complex scenarios need to be made simpler and more accessible - we should be relying less on advertising and more on hard data and personalised apps in such instances.
  • Meanwhile, SMEs are said to lack a genuine, high profile champion whose role it is to ensure that the financial system generally is properly supportive of them. This may seem a little unfair to the Business Bank, various trade bodies and government departments, but it's also hard for any one of these bodies to oversee the whole fragmented picture. As I suggested above, however, I wonder whether a 'data champion' could be helpful to the various stakeholders in identifying and resolving problems, rather than a single person being expected to act as a small business finance tsar.
 In other words, we should follow the data, not the money...


Thursday, 15 October 2015

The Alan Turing Institute: Human-centric?

A slightly dispiriting day at The Alan Turing Institute 'Financial Summit', yesterday, I'm afraid to say. 

The ATI itself represents a grand vision and stunning organisational achievement - to act as a forum for focusing Britain's data scientists on the great problems of the world. Naturally, this leaves it open to attempts at 'capture' by all the usual vested interests, and its broad remit means that it must reflect the usual struggle between individuals and organisations, and between 'facilitators', who exist to solve their customers' problems, and 'institutions', who exist to solve their own problems at their customers' expense.

And of course, it's the institutions that have most of the money - not to mention the data problems - so I can see, too, why the ATI advertises its purpose to institutions as "the convener of a multidisciplinary approach to the development of 'big data' and algorithms". It's also true that there are global and social issues that transcend the individual and are valid targets for data scientists in combination with other specialists.

But it was concerning that an apparently neutral event should seem predicated on a supplier-led vision of what is right for humans, rather than actually engineering from the human outward - to enable a world in which you control what you buy and from whom by reference to the data you generate, rather than by approximating you to a model or profile. Similarly, it was troubling to see a heavy emphasis in the research suggestions on how to enable big businesses to better employ the data science community in improving their ability to crunch data on customers for commercial exploitation.

To be fair, there were warning signs posted for the assembled throng of banks, insurers and investment managers - in the FCA's presentation on its dedication to competition through its Innovation Hub; a presentation on the nature and value of privacy itself; and salutary lessons from a pioneer of loyalty programmes on the 'bear traps' of customer rejection on privacy grounds and consumers' desire for increasing control over the commercial use of their data. The director's slides also featured the work of Danezis and others on privacy-friendly smart metering and a reference to the need to be human-centric.

But inverting the institutional narrative to a truly human-centric one would transform the supplier's data challenge into one of organising its product data to be found by consumers' machines searching open databases for solutions based on actual behaviour - open data spiders, as it were - rather than sifting through ever larger datasets in search of the 'more predictive' customer profile to determine how it spends (or rather wastes) its marketing budget.
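
To make that inversion concrete, here is a hypothetical sketch of such an 'open data spider', assuming - purely for illustration - that suppliers published open, machine-readable product feeds for consumers' machines to search; the supplier names, fields and figures are all invented.

```python
import json

# Hypothetical: in practice these feeds would be fetched from suppliers'
# open product-data endpoints; they are inlined here for illustration.
published_feeds = [
    json.dumps({"supplier": "EnergyCo", "tariff": "fixed", "price_per_kwh": 0.14}),
    json.dumps({"supplier": "PowerLtd", "tariff": "variable", "price_per_kwh": 0.12}),
]

# The consumer's machine starts from actual behaviour - their own usage
# data - not from a supplier's model or profile of them.
my_annual_kwh = 3100

def best_offer(feeds, annual_kwh):
    """Search open product data for the cheapest match: the spider works
    outward from the human, rather than the supplier sifting ever larger
    datasets for a 'more predictive' customer profile."""
    offers = [json.loads(f) for f in feeds]
    return min(offers, key=lambda o: o["price_per_kwh"] * annual_kwh)

print(best_offer(published_feeds, my_annual_kwh)["supplier"])  # PowerLtd
```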

Personally, I don't find much inspiration in the goal of enabling banks, insurers and other financial institutions to unite the data in their legacy systems to improve the 'predictive' nature of the various models they deploy, whether for wholesale or retail exploitation, and I'm sure delegates faced with such missions are mulling career changes. Indeed, one delegate lightened the mood with a reference to 'Conway's Law' (that interoperability failures in software within a business simply reflect the disjointed structure of the organisation itself). But it was clear that financial institutions would rather leave this as an IT problem than re-align their various silos and business processes to reflect their customers' end-to-end activities. There is also a continuing failure to recognise that most financial services are but a small step in the supply chain, after all. I mean, consider the financial services implications of using distributed ledgers to power the entertainment industry, for example...

When queried after the event as to whose role it was to provide the 'voice of the customer', the response was that the ATI does not see itself as representing consumers' or citizens' interests in particular. That much is clear. But if it is to be just a neutral 'convenor' then nor should the ATI allow itself to be positioned as representing the suppliers in their use and development of 'big data' tools - certainly not with £42m of taxpayer funding. 

At any rate, in my view, the interests of human beings cannot simply be left to a few of the disciplines that the ATI aims to convene alongside the data scientists - such as regulators, lawyers, compliance folk or identity providers. The ATI itself must be human-centric if we are to keep humans at the heart of technology.


Tuesday, 3 March 2015

Artificial Intelligence: The Control Dichotomy

Professor Nick Bostrom delivered an inspirational speech for the SCL last night on "Superintelligence: a godsend or a doomsday device", although few would have found it reassuring - it is certainly conceivable that machines could become more intelligent than humans and that humans might not be able to control them. But these are still early days, he stresses. Regulating the development of artificial intelligence at this point risks halting progress. There's a lot more work to do to really understand how artificial intelligence will develop beyond playing old video games better than humans or recognising an image as the picture of a cat. We need to consider how the technology could help avert our extinction, as well as how it might wipe us out. Yet Nick also warns that it will take far less time for computers to exceed human-level intelligence than it will take them to reach it in the first place. So we need to start work on the control mechanisms for the development and use of artificial intelligence, without regulating the industry out of existence: the control dichotomy.

Nick suggests that the guiding principle should be that of "differential technological development" - diverting resources away from technologies and their application which could cause human extinction while focusing on those which will either help prevent our demise or will facilitate the expansion of the human race throughout the cosmos.

But how do we distinguish between helpful and harmful technologies and their application? 

As Nick points out, it's tough to think of any human invention that is inherently 'good'. He mentions many things, from gunpowder to genetic engineering, and I think we can throw in the wheel and the Internet for good measure. All these things are used by humans in bad ways as well as for the greater good. But I think what horrifies us especially about the idea of Superintelligence or 'The Singularity' is that it will be machines, not bad humans, who will be using other machines against us. And while we have lots of experience in dealing with evil humans, even our top minds admit we still don't know much about how machines might act in this way or how to stop them - and what we humans fear most is the unknown.

You'll notice I haven't said 'evil' machines, since they might not be operating with any evil 'intent' at all. Human extinction might just be a mistake - 'collateral damage' arising from some other mission. For instance, Nick suggests that a particular machine left to itself in a given situation might decide to devote itself entirely to making paperclips. So, presumably, it would not bother to put out a fire, for example, or free a human (or itself) from a burning building. It might leave that to other machines, who might in turn have chosen their own narrow objective that involves ignoring people's screams.
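
The point is easy to make concrete. Here is a toy sketch, under deliberately artificial assumptions, of an agent whose objective counts only paperclips: the fire never enters its decision, not out of malice but because it carries zero weight.

```python
# Toy world state: the agent can make a paperclip or put out a fire.
state = {"paperclips": 0, "building_on_fire": True}

def objective(s):
    # The machine's entire notion of 'good': the paperclip count.
    # Fires, screams and trapped humans simply have no weight here.
    return s["paperclips"]

def act(s):
    """Pick whichever action scores best under the objective."""
    options = {
        "make_paperclip": {**s, "paperclips": s["paperclips"] + 1},
        "extinguish_fire": {**s, "building_on_fire": False},
    }
    return max(options, key=lambda a: objective(options[a]))

print(act(state))  # 'make_paperclip' - not evil, just narrowly optimal
```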

Here's where I struggle with the notion of Superintelligence. In fact, as someone who hates being pigeon-holed into any single role, I think a machine's decision to only ever make paperclips might be fabulously logical and a brilliant choice for that machine in the circumstances, but it makes the machine as dumb as a post. For me, Superintelligence should involve a machine being able to do everything a human can and more.

But that's beside the point. Knowing what we know already, it would be insane to ignore the paperclip droid and wait for artificial intelligence to develop a machine more capable than humans before figuring out how we might control it. Nick is right to point out that we must figure that out in parallel. In other words, the concept of human control has to be part of the artificial intelligence programme. But it won't be as simple as coding machines to behave protectively, since machines will be able to programme each other. For instance, Nick suggests we could put the machines to work on the control problem, as well as on the problem of how to ensure the survival of our species. AI labs might also pay insurance premiums to cover the damage caused by what they develop. He was less certain about what we might do to constrain developments that occur in the context of secret defence programmes or intelligence gathering, but he seemed confident that we could at least infer the pace of development from the results, and be able to consider how to control the wider application of those developments. Mmmm.

At any rate, Nick also warns that we need to be careful what we wish for. Mandating human survival in a prescriptive way - even in a specific biological form - would be a bad move, since we should not assume we are in a position to foster positive human development any more than the Holy Office of the Spanish Inquisition was. Better to embed positive human values and emotions or, say, entertainment as a feature of intelligent machines (although I'm guessing that might not go down well with the jihadis). From a physical standpoint, we already know that the human body won't do so well for long periods in space or on Mars, so some other version might need to evolve (okay, now I'm freaking myself out).

To retain a sense of pragmatism, at the end of the speech I asked Nick what he would recommend for our focus on 'Keeping Humans at the Heart of Technology' at the SCL conference in June. His tip was to consider which of the various types of control mechanism might work best, recognising the need to avoid constraining the positive development of artificial intelligence, while ensuring that we will be able to keep the machines in check if and when they become smarter than us.

No pressure then...


Tuesday, 17 February 2015

Will Machines Out-Compete Humans To The Point of Extinction?

I've been a bit absent from these pages of late, partly pulling together SCL's Technology Law Futures Conference in June on 'how to keep humans at the heart of technology'. As I've explained on the SCL site, the conference is part of SCL's effort to focus attention on that question all year, starting with a speech by Oxford University's Professor Nick Bostrom on 2 March: "Superintelligence: a godsend or doomsday device".

In other words, last year was when the threat of "The Singularity" really broke into the mainstream, while this year we are trying to shift the focus onto how we avert that outcome in practical terms. 

My own book on how we can achieve control over our own data is still ping-ponging between agent and publishers, but will hopefully find a home before another year is out - unless, of course, the machines have other ideas... 


Wednesday, 16 April 2014

Twitter Gnip Shows Why Social Media Should Share Revenue With Users

Source: Financial Times
Like Google's declaration of war on the human race, the news that Twitter will buy Gnip illustrates why social media platforms should share their Big Data revenue with users. Indeed, they would seem to have no choice if they are to survive in the longer term.

Gnip's CEO claims that:
"We have delivered more than 2.3 trillion Tweets to customers in 42 countries who use those Tweets to provide insights to a multitude of industries including business intelligence, marketing, finance, professional services, and public relations."
And that's not all. Gnip also has "complete access" to data from many other social media platforms, including WordPress, the blogging platform, and more restricted access to data from other platforms, such as Facebook, YouTube and Google+. 

Quite whether users consent to all that is an issue we'll return to in another post shortly. 

Meanwhile, Twitter suggests that Gnip's current activities have "only begun to scratch the surface" of what it could offer its Big Data customers in the future. Yet, from a user's perspective, Twitter has barely changed since Gnip began its data-mining activities. So are users receiving enough 'value' for their participation to keep them interested?

The social media operators would argue that their platforms would never have been built were it not for the opportunity to one day make a profit from users' activity on those platforms. And it may look like the features have not changed much since launch, but part of the value to users is the popularity with other users and it costs a lot to keep each social media platform working as the number of users grows. Each platform also has to keep up with changes to other platforms so users can continue to share links, photos and so on. That means platforms tend to lose a lot of money for quite a long time, as the FT's comparison chart shows. 

But analysing the value to users gets murky when you consider that the social media platforms are already paid to target ads and other information at users based on their behaviour, and that the cost of that type of Big Data activity is reflected in the prices of the goods and services being advertised.

And it doesn't seem right to include the cost of buying and operating a separate Big Data analytics business, like Gnip, in the user's value equation if the user doesn't directly experience any benefit. After all, that analytics business will charge corporate customers good money for the information it supplies, and the cost of that will also be reflected in the price of goods and services to consumers. 

In other words, social media's reliance on revenue from targeted advertising and other types of Big Data activity means that social media services aren't really 'free' at all. Their costs are baked into the price of consumer goods and services, just like the cost of advertising in the traditional commercial media.

And if it's true that the likes of Gnip are only just scratching the surface of the Big Data opportunities, then the revenues available to social media platforms from crunching their users' data seem likely to far exceed the value of the platform features to users. 

Yet user participation is what drives the social media revenues in the first place (not to mention users' consent to the use of their personal data). The social media platforms aren't publishing their own content like the traditional media, just facilitating interaction, so there's also far less justification for keeping all the revenue on that score. And it seems easier to switch social media platforms than, say, subscription TV providers. 

So the social media platforms would seem to have no choice but to offer users a share of their Big Data revenue streams if their ecosystems are to be sustainable.


Friday, 24 January 2014

Google Declares War On The Human Race

Google's executive chairman, Eric Schmidt, finally admitted yesterday something that the likes of Jaron Lanier have been warning us about for some years now: he believes there's actually a race between computers and people. In fact, many among the Silicon Valley elite fervently believe in something called The Singularity. They even have a university dedicated to achieving it.

The Singularity refers to an alleged moment when machines develop their own, independent 'superintelligence' and outcompete humans to the point of extinction. Basically, humans create machines and robots, harvest the world's data until a vast proportion of it is in the machines, and those machines start making their own machines, and so on, until they become autonomous. Stuart Armstrong reckons "there's an 80% probability that the singularity will occur between 2017 and 2112".

If you follow the logic, we humans will never know if the Singularity actually happened. So belief in it is an act of faith. In other words, Singularity is a religion.

Lots of horrific things have been done in the name of one religion or another. But what sets this one apart is that the believers are, by definition, actively working to eliminate the human race.

So Schmidt is being a little disingenuous when he says "It's a race between computers and people - and people need to win," since he works with a bunch of people who believe the computers will definitely win, and maybe quite soon. The longer quote on FT.com suggests he added:
“I am clearly on that side [without saying which side, exactly]. In this fight, it is very important that we find the things that humans are really good at.”
Well, until extinction, anyway.

Of course, the Singularity idea breaks down on a number of levels. For example, it's only a human belief that machines will achieve superintelligence. If machines were to get so smart, how would we know what they might think or do? They'd have their own ideas (one of which might be to look after their pet data sources, but more on that shortly). And there's no accounting for 'soul' or 'free will' or any of the things we regard as human, though perhaps the zealots believe those things are superfluous and the machines won't need them to evolve beyond us. Finally, this is all in the heads of the Silicon Valley elite...

Anyhow, Schmidt suggests we have to find alternatives to what machines can do and only humans are really good at. He says:
"As more routine tasks are automated, this will lead to much more part-time work in caring and creative industries. The classic 9-5 job will be redefined." 
Which is intended to focus our attention away from the trick that Google and others in the Big Data world are relying on to power up their beloved machines and stuff them full of enough data to go rogue.

By offering some stupid humans 'free' services that suck in lots of data, Big Data can charge other stupid humans for advertising to them. That way, the machines hoover up all the humans' money and data at the same time.

This works just fine until the humans start insisting on receiving genuine value for their data.

Which is happening right now in so many ways that I'm in the process of writing a book about it. 

Because it turns out humans aren't that dumb after all. We are perfectly happy to let the Silicon Valley elite build cool stuff and charge users nothing for it. Up to a point. And in the case of the Big Data platforms, we've reached that point. Now it's payback time.

So don't panic. The human race is not about to go out of fashion - at least not the way Big Data is planning. Just start demanding real value for the use of your data, wherever it's being collected, stored or used. And look out for the many services that are evolving to help you do that.

You never know, but if you get a royalty of some kind every time Google touches your data, you may not need that 9 to 5 job after all... And, no, the irony is not lost on me that I am writing this into the Google machine ;-)


Image from Wikipedia

Wednesday, 4 December 2013

UK Government: Gamble All You Like, But Don't Invest

You've got to wonder about priorities at the Department for Culture, Media and Sport. They allowed UK bookmakers to take £46 billion in bets through betting machines last year - not to mention the bingo and lotteries freely advertised on TV - while computer games companies complained they can't offer shares to fans who crowdfund games development.

Consider this from today's Telegraph:
  • Britons gambled £46 billion on betting terminals last year, an increase of almost 50% in four years.
  • Gamblers lose up to £100 every 20 seconds on the fixed odds machines.
  • 588,000 under-18s were stopped when they tried to enter a betting shop last year, six times as many as in 2009, and 27,000 people were challenged once they had placed a bet.
  • Bookmakers made profits of £1.55 billion from the terminals between April 2012 and March 2013.
Meanwhile, even though the FCA has said that ordinary folk will be able to invest to fund the development of a computer game, for example, they must first certify that they will not invest more than 10% of their 'net investible portfolio' and either seek financial advice or satisfy an "appropriateness test". That's because they say investing is risky for consumers... 

Compared to what?!

Image from RoehamptonStudent.com

Thursday, 28 November 2013

Do TV Advertising Rules Limit Economic Growth?

There has been plenty of research into the alleged effect of TV sex and violence on human behaviour, but how does TV adversely impact our economic behaviour? 

This issue was recently highlighted by the FCA's proposed new rules on crowdfunding. Left in isolation, the current restrictions on financial promotions suggest the State would prefer us to play bingo or buy lottery tickets than invest the same small amounts in funding the growth of each other's businesses. 

The FCA is right to point out the risks of investing in start-ups, but it should compare those risks to the risks consumers face when putting their money into other products that are more freely advertised.

We rely on small businesses for over half of all new jobs and a third of private sector turnover. Yet, those small businesses struggle for funding while over half of the UK's adults engage in regulated gambling that is designed to cost consumers far more than they 'win'.

It may be true that over half of business start-ups fail within 3 years, but they still employ at least one person in the meantime. And maybe more of those businesses would survive if we lent them some of our bingo money, or bought their shares with at least some of the money we chuck away on the ponies. Better that the money goes in wages, and the goods and services that small businesses typically buy, rather than simply to line the pockets of the bookies - and you have the chance of a decent return on small business loans, or on shares if you happen to invest in the businesses that succeed in the longer term.

No doubt someone will raise the moral panic about 'good causes' being starved of lottery money if we don't allow the promotion of that form of gambling. But I'm not talking about any ban on advertising lottery or bingo etc., just a relaxation of rules on the promotion of productive financial instruments (though it would be more efficient to simply donate a third of your lottery money directly to good causes on a crowdfunding platform than to wait for it to filter through the books of a lottery operator).

Ads for apparently 'safe' bank savings products are not helpful here, since savings rates are low and banks are not focused on lending to small businesses. We have over £200bn sitting passively in low interest bank deposits, yet banks' savings rates are below the rate of inflation, and banks only lend £1 in every £10 to SMEs. The Financial Services Compensation Scheme might protect your deposit if the bank goes under, but that's another cost that consumers end up paying for, and it won't protect the value of those deposits against inflation. Stocks and shares ISAs and pensions are similarly 'passive' investments in financial assets, rather than productive ones.

The highly restrictive approach to financial promotions has neither prevented financial scandals nor created a sound financial system - two of many reasons why people have resorted to lending directly to each other, or investing directly in each others' projects and businesses. So why not allow these new alternatives to be promoted more openly - at least to the same extent as riskier, non-productive activities like playing bingo or buying lottery tickets?

We need to move away from rules that dictate what we can do with our money, to rules that enable a fully informed choice from amongst all the options. 

At any rate, the State should certainly not create a situation where the money-related messages which the average TV viewer receives do not include investing directly in the productive economy.


Image from RoehamptonStudent.com.