
Tuesday, 3 March 2015

Artificial Intelligence: The Control Dichotomy

Professor Nick Bostrom delivered an inspirational speech for the SCL last night on "Superintelligence: a godsend or a doomsday device", although few would have found it reassuring - it is certainly conceivable that machines could become more intelligent than humans and that humans might not be able to control them. But these are still early days, he stresses. Regulating the development of artificial intelligence at this point risks halting progress. There's a lot more work to do to really understand how artificial intelligence will develop beyond playing old video games better than humans or recognising an image as the picture of a cat. We need to consider how the technology could help avert our extinction, as well as how it might wipe us out. Yet Nick also warns that it will take far less time for computers to go from human-level intelligence to superintelligence than it will take them to reach human level in the first place. So we need to start work now on the control mechanisms for the development and use of artificial intelligence, without regulating the industry out of existence: the control dichotomy.

Nick suggests that the guiding principle should be that of "differential technological development" - diverting resources away from technologies and applications that could cause human extinction, while focusing on those that will either help prevent our demise or facilitate the expansion of the human race throughout the cosmos.

But how do we distinguish between helpful and harmful technologies and their application? 

As Nick points out, it's tough to think of any human invention that is inherently 'good'. He mentions many things, from gunpowder to genetic engineering, and I think we can throw in the wheel and the Internet for good measure. All these things are used by humans in bad ways as well as for the greater good. But I think what especially horrifies us about the idea of Superintelligence or 'The Singularity' is that it will be machines, not bad humans, who will be using other machines against us. And while we have lots of experience in dealing with evil humans, even our top minds admit we still don't know much about how machines might act in this way or how to stop them - and what we humans fear most is the unknown.

You'll notice I haven't said 'evil' machines, since they might not be operating with any evil 'intent' at all. Human extinction might just be a mistake - 'collateral damage' arising from some other mission. For instance, Nick suggests that a particular machine left to itself in a given situation might decide to devote itself entirely to making paperclips. So, presumably, it would not bother to put out a fire, for example, or free a human (or itself) from a burning building. It might leave that to other machines, who might in turn have chosen their own narrow objective that involves ignoring people's screams.
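To make the point concrete, here's a minimal, purely hypothetical sketch (my illustration, not Nick's) of how an agent that scores the world only by its paperclip count will quite rationally ignore a fire - nothing that isn't part of its objective can influence its choice:

```python
# Toy, hypothetical example: an agent whose objective counts only paperclips.

def paperclip_utility(state):
    """The score depends solely on paperclips; fires and trapped humans
    simply don't appear in the objective."""
    return state["paperclips"]

def choose_action(state, actions):
    """Pick whichever action leads to the highest-scoring state."""
    return max(actions, key=lambda act: paperclip_utility(act(state)))

# Two candidate actions in a burning building:
make_paperclip = lambda s: {**s, "paperclips": s["paperclips"] + 1}
put_out_fire = lambda s: {**s, "fire": False}

state = {"paperclips": 0, "fire": True}
best = choose_action(state, [make_paperclip, put_out_fire])
print(best(state))  # {'paperclips': 1, 'fire': True} - the fire keeps burning
```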

Here's where I struggle with the notion of Superintelligence. In fact, as someone who hates being pigeon-holed into any single role, I think a machine's decision to only ever make paperclips might be fabulously logical and a brilliant choice for that machine in the circumstances, but it makes the machine as dumb as a post. For me, Superintelligence should involve a machine being able to do everything a human can, and more.

But that's beside the point. Knowing what we know already, it would be insane to ignore the paperclip droid and wait for artificial intelligence to develop a machine more capable than humans before figuring out how we might control it. Nick is right to point out that we must figure that out in parallel. In other words, the concept of human control has to be part of the artificial intelligence programme. But it won't be as simple as coding machines to behave protectively, since machines will be able to programme each other. For instance, Nick suggests we could put the machines to work on the control problem, as well as on the problem of how to ensure the survival of our species. AI labs might also pay insurance premiums to cover the damage caused by what they develop. He was less certain about what we might do to constrain developments that occur in the context of secret defence programmes or intelligence gathering, but he seemed confident that we could at least infer the pace of development from the results, and be able to consider how to control the wider application of those developments. Mmmm.

At any rate, Nick also warns that we need to be careful what we wish for. Mandating human survival in a prescriptive way - even in a specific biological form - would be a bad move, since we should not assume we are in a position to foster positive human development any more than the Holy Office of the Spanish Inquisition was. Better to embed positive human values and emotions or, say, entertainment as a feature of intelligent machines (although I'm guessing that might not go down well with the jihadis). From a physical standpoint, we already know that the human body won't do so well for long periods in space or on Mars, so some other version might need to evolve (okay, now I'm freaking myself out).

To retain a sense of pragmatism, at the end of the speech I asked Nick what he would recommend for our focus on 'Keeping Humans at the Heart of Technology' at the SCL conference in June. His tip was to consider which of the various types of control mechanism might work best, recognising the need to avoid constraining the positive development of artificial intelligence, while ensuring that we will be able to keep the machines in check if and when they become smarter than us.

No pressure then...


Tuesday, 17 February 2015

Will Machines Out-Compete Humans To The Point of Extinction?

I've been a bit absent from these pages of late, partly pulling together SCL's Technology Law Futures Conference in June on 'how to keep humans at the heart of technology'. As I've explained on the SCL site, the conference is part of SCL's effort to focus attention on that question all year, starting with a speech by Oxford University's Professor Nick Bostrom on 2 March: "Superintelligence: a godsend or doomsday device".

In other words, last year was when the threat of "The Singularity" really broke into the mainstream, while this year we are trying to shift the focus onto how we avert that outcome in practical terms. 

My own book on how we can achieve control over our own data is still ping-ponging between agent and publishers, but will hopefully find a home before another year is out - unless, of course, the machines have other ideas... 


Wednesday, 16 April 2014

Twitter Gnip Shows Why Social Media Should Share Revenue With Users

[Chart source: Financial Times]
Like Google's declaration of war on the human race, the news that Twitter will buy Gnip illustrates why social media platforms should share their Big Data revenue with users. Indeed, they would seem to have no choice if they are to survive in the longer term.

Gnip's CEO claims that:
"We have delivered more than 2.3 trillion Tweets to customers in 42 countries who use those Tweets to provide insights to a multitude of industries including business intelligence, marketing, finance, professional services, and public relations."
And that's not all. Gnip also has "complete access" to data from many other social media platforms, including WordPress, the blogging platform, and more restricted access to data from other platforms, such as Facebook, YouTube and Google+. 

Quite whether users consent to all that is an issue we'll return to in another post shortly. 

Meanwhile, Twitter suggests that Gnip's current activities have "only begun to scratch the surface" of what it could offer its Big Data customers in the future. Yet, from a user's perspective, Twitter has barely changed since Gnip began its data-mining activities. So are users receiving enough 'value' for their participation to keep them interested?

The social media operators would argue that their platforms would never have been built were it not for the opportunity to one day make a profit from users' activity on those platforms. The features may not look like they have changed much since launch, but part of the value to users lies in a platform's popularity with other users, and it costs a lot to keep a platform running as its user numbers grow. Each platform also has to keep up with changes to other platforms so users can continue to share links, photos and so on. That means platforms tend to lose a lot of money for quite a long time, as the FT's comparison chart shows.

But analysing the value to users gets murky when you consider that the social media platforms are already paid to target ads and other information at users based on their behaviour, and that the cost of that type of Big Data activity is reflected in the prices of the goods and services being advertised.

And it doesn't seem right to include the cost of buying and operating a separate Big Data analytics business, like Gnip, in the user's value equation if the user doesn't directly experience any benefit. After all, that analytics business will charge corporate customers good money for the information it supplies, and the cost of that will also be reflected in the price of goods and services to consumers. 

In other words, social media's reliance on revenue from targeted advertising and other types of Big Data activity means that social media services aren't really 'free' at all. Their costs are baked into the price of consumer goods and services, just like the cost of advertising in the traditional commercial media.

And if it's true that the likes of Gnip are only just scratching the surface of the Big Data opportunities, then the revenues available to social media platforms from crunching their users' data seem likely to far exceed the value of the platform features to users. 
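To see what that could mean per user, here's a deliberately crude back-of-envelope sketch - every figure is an invented assumption for illustration, not any platform's actual number - of the kind of surplus a revenue share could draw on:

```python
# Illustrative only: all figures are assumptions, not real platform data.
ad_revenue_per_user = 6.00       # assumed annual targeted-advertising revenue per user
data_licensing_per_user = 2.00   # assumed annual revenue from licensing user data (a la Gnip)
operating_cost_per_user = 3.50   # assumed annual cost of running the platform, per user

surplus = ad_revenue_per_user + data_licensing_per_user - operating_cost_per_user
print(f"Annual surplus per user: ${surplus:.2f}")  # the pool a user revenue share could come from
```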

Yet user participation is what drives the social media revenues in the first place (not to mention users' consent to the use of their personal data). The social media platforms aren't publishing their own content like the traditional media, just facilitating interaction, so there's also far less justification for keeping all the revenue on that score. And it seems easier to switch social media platforms than, say, subscription TV providers. 

So the social media platforms would seem to have no choice but to offer users a share of their Big Data revenue streams if their ecosystems are to be sustainable.

