
Tuesday, 28 January 2025

Open Agentic AI And True Personalisation

Sixteen years on from my initial posts about a personal assistant that could privately find and buy stuff for you on the web, we now have 'open agentic AI'. But are you really any closer to the automated selection and purchase of your own personalised products without needlessly surrendering your privacy or otherwise becoming the victim? Should this latest iteration of open generative AI be autonomously making and executing decisions on your behalf?

What is Agentic AI?

An 'agentic' AI is an evolution of generative AI beyond a chatbot. It receives your data and relies on pattern matching to generate, select and execute one of a number of potential pre-programmed actions without human guidance, then 'learns' from the result (as NVIDIA, the leading AI chip maker, explains). 
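To make that concrete, here's a minimal sketch (in Python) of the receive-match-select-execute-'learn' loop described above. Everything in it - the action list, the pattern matcher, the feedback score - is an invented placeholder for illustration, not any vendor's actual architecture:

```python
import random

# Sketch of the agentic loop: receive data, match it to a pattern,
# pick one of a number of pre-programmed actions, execute it, then
# 'learn' by adjusting a score based on the observed result.
ACTIONS = ["search_products", "compare_prices", "place_order", "ask_user"]
PATTERNS = ["buy_intent", "browse_intent"]

# The 'learning' is just a table of scores per (pattern, action) pair.
weights = {(p, a): 0.0 for p in PATTERNS for a in ACTIONS}

def classify(user_data: str) -> str:
    """Crude pattern matching, standing in for a model's classification."""
    return "buy_intent" if "buy" in user_data.lower() else "browse_intent"

def execute(action: str) -> float:
    """Stub for executing the action; returns a feedback score
    (e.g. the user's approval, or a demand for a refund)."""
    return random.uniform(-1.0, 1.0)

def agent_step(user_data: str) -> None:
    pattern = classify(user_data)
    # Select the highest-scoring action for the matched pattern...
    action = max(ACTIONS, key=lambda a: weights[(pattern, a)])
    # ...execute it without human guidance, then nudge its weight.
    weights[(pattern, action)] += 0.1 * execute(action)

for _ in range(100):
    agent_step("please buy me a rail ticket")
```

A real system would put a large language model behind `classify` and far more consequential actions behind `execute`, but the shape of the loop is the same.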

A 'virtual assistant' that can find, buy and play music, for example, is a form of agentic AI (since it uses AI in its processes), but the ambition involves a wider range of tasks and more process automation and autonomy (if not end-to-end). 

You'll see a sleight-of-hand in the marketing language (like NVIDIA's) as developers start projecting 'perception', 'understanding' and 'reasoning' onto their agentic AIs, but computers don't actually do any of those human things.

It's certainly a compelling idea to apply this to automating various highly complex, tedious consumer 'workflows' that have lots of different parameters - like buying a car, perhaps (or booking a bloody rail ticket in the UK!). 

Wearing my legal hat, I also see myriad interesting challenges (which I'd be delighted to discuss, of course!), some of which are mentioned here, but not all...

Some challenges

The main problem with using an 'agentic AI' in a consumer context is the involvement of a large language model and generative AI where there is a significant (e.g. economic, medical and/or legal) consequence for the user, as opposed to a chatbot or information-only scenario (though that can also be problematic). Currently, the household or device-based virtual assistants are carrying out fairly mundane tasks, and you could probably get a refund if one bought you the wrong song, for example, if that really bothered you. Buying the wrong car would likely be a different matter.

There may also be confusion about the concept of 'agency' here. The word 'agentic' is used to mean that the AI has 'agency' in the sense that it can operate without human guidance. Such an AI is not necessarily anyone's legal 'agent' (more on that below) and is trained on generic training data (subject to privacy and copyright consents/licensing), much of which these days is itself synthetic - generated by another AI. So agentic AIs are not hosted exclusively by or on behalf of the specific consumer, and do not cater to a single end-customer's personalised needs in terms of the data they hold/process or how they deal with suppliers. An agentic AI does not 'know' or 'understand' anyone, let alone you.

Of course, that is consistent with how consumer markets work: products have generally been developed to suit the supplier's requirements in terms of profitability and so on, rather than any individual customer's needs. Having assembled what the supplier believes to be a profitable product by reference to an ideal customer profile in a given context, the supplier's systems and marketing/advertising arrangements seek out customers for the product who are 'scored' on the extent to which they fit that 'profile' and context. This also preserves 'information asymmetry' in favour of the supplier, who knows far more about its product and customers than you know about the supplier or the product. In an insurance context, for example, that will mean an ideal customer will pay a high premium but find it unnecessary, too hard or impossible to make a claim on the policy. For a loan, the lender will be looking for a higher risk customer who will end up paying more in additional interest and default fees than lower risk customers. But all this is only probabilistic, since human physiology may be 'normally distributed' but human behaviour is not.

So using an agentic AI in this context would not improve your position or relationship with your suppliers, particularly if the supplier is the owner/operator of the agentic AI. The fact that OpenAI has offered its 'Operator' agentic AI to its 'pro' customers (who already pay a subscription of $200 a month!) raises the question whether OpenAI really intends to rock this boat, or whether Operator is really a platform for suppliers, as Facebook and Google search are in the online advertising world.

It's also an open question - and a matter for contract or regulation - as to whether the AI is anyone's legal 'agent' (which it could be if the AI were deployed by an actual agent or fiduciary of the customer, such as a consumer credit broker). 

Generative AIs also carry a set of inherent risks. Not only do they fail to 'understand' data, but to a greater or lesser degree they are also inaccurate, biased and randomly hallucinate rubbish (not to mention the enormous costs in energy/water, capital and computing; the opportunity cost of diverting such resources from other service/infrastructure requirements; and the other 'externalities' or socioeconomic consequences that are being ignored and not factored into soaring Big Tech stock prices - a bubble likely to burst soon). It may also not be possible to explain how the AI arrives at its conclusions (or, in the case of an agentic AI, why it selected a particular product, or executed a specific task, rather than another). Simply overlaying a right to human intervention by either customer or supplier would not guarantee a better outcome on these issues (due to the lack of explainability, in particular): a human should be able to explain why and how the AI's decision was reached and be able to re-take the decision. And, unfortunately, we are seeing less and less improvement in each of these inherent risk areas with each new version of generative AI.

All this means that agentic AI should not be used to fully automate decisions or choices that have any significant impact on an individual consumer (such as buying a car or obtaining a loan or a pension product).  

An Alternative... Your Own Personal Agent

What feels like a century ago, in 2009, I wondered whether the 'semantic web' would spell the end of price comparison websites. I was tired of seeing their expensive TV ads - paid for out of the intermediary's huge share of the gross price of the product. I thought: "If suppliers would only publish their product data in semantic format, a 'widget' on my own computer could scan their datafeeds and identify the product that's right for me, based on my personal profile and other parameters I specify". 

By 2013, I was calling that 'widget' an Open Data Spider and attempted to explain it further in an article for SCL on the wider tech themes of Midata, Open Data and Big Data tools (and elsewhere with the concept of 'monetising you'). I thought then - and still think now - that: 

"a combination of Midata, Open Data and Big Data tools seems likely to liberate us from the tyranny of the 'customer profile' and reputational 'scores', and allow us instead to establish direct connections with trusted products and suppliers based on much deeper knowledge of our own circumstances."

Personalised assistants are evolving to some degree, in the form of 'personal [online] data stores' (like MyDex or Solid); as well as 'digital wallets' or payment apps that sit on smartphones and other digital devices and can be used to store transaction data, tickets, boarding passes and other evidence of actual purchases. The former are being integrated in specific scenarios like recruitment and healthcare; while the latter tend to be usable only within checkout processes. None seems to be playing a more extensive role in pre-evaluating your personal requirements, then seeking, selecting and purchasing a suitable product for you from a range of potential suppliers (as opposed to a product that a supplier has created for its version of an 'ideal' customer that you seem to fit to some degree). 

Whether the providers of existing personal data stores and digital wallets will be prepared to extend their functionality to include more process automation for consumers may also depend on the willingness of suppliers to surrender some of their information advantage and adapt their systems (or AIs) to respond to and adapt products according to actual consumer requests/demand.

Equally, the digital 'gatekeepers' such as search providers and social media platforms will want to protect their own advertising revenue and other fees currently paid by suppliers who rely on them for targeting 'ideal' customers. Whether they can 'switch sides' to act for consumers and preserve/grow this revenue flow remains to be seen.

Overall, if I were a betting man, I'd wager that open agentic AI won't really change the fundamental relationship between suppliers, intermediaries and consumers, and that consumers will remain the targets (victims) for whatever suppliers and intermediaries dream up for them next...

I'd love to be corrected!



Monday, 10 September 2018

The Irony Or The Ecstasy? The UK Centre For Data Ethics And Innovation

You would be forgiven for uttering a very long string of properly ripe expletives on learning that the current UK government has the cheek to create a "Centre for Data Ethics and Innovation"!  Personally, I think they've missed a trick with the name. With a little more thought, the acronym could've been "DECEIT" - and maybe in some languages it would be - so let's go with that.

You might say that it's better to have, rather than not have, an 'independent' policy body focused on the use of data and "artificial intelligence", even if it's set up by a government controlled by those who masterminded and/or benefited from the most egregious abuse of data ethics in UK history.

Or you might be relieved by the notion that it's easier for the dominant political party of the day to control the ethical agenda and achieve "the best possible outcomes" if the source of policy on data ethics is centralised, especially within a body being hastily set up on the back of a quick and dirty consultation paper released into the febrile, Brexit-dominated summer period before any aspirational statutory governance controls are in place.

At any rate, we should all note that:
"[DECEIT], in dialogue with government, will need to carefully prioritise and scope the specific projects within its work programme. This should include an assessment of the value generated by the project, in terms of impact on innovation and public trust in the ethical use of data and AI, the rationale for [DECEIT] doing the work (relative to other organisations, inside or outside Government) and urgency of the work, for example in terms of current concerns amongst the public or business."
...
"In formulating its advice, the Centre will also seek to understand and take into consideration the plurality of views held by the public about the way in which data and AI should be governed. Where these views diverge, as is often the case with any new technology, the Centre will not be able to make recommendations that will satisfy everyone. Instead, it will be guided by the need to take ethically justified positions mindful of public opinion and respecting dissenting views. As part of this process it will seek to clearly articulate the complexities and trade offs involved in any recommendation."
Political point of view is absolutely critical here. This UK government does not accept that the Leave campaign or Cambridge Analytica etc. did anything 'wrong' with people's data. Senior Brexiteers say the illegality resulting in fines by the Electoral Commission and further investigation by the ICO and the police are merely politically motivated 'allegations' by do-good Remainers. Ministers have dismissed their own "promises" (which others have called "fake news", outright lies and distortion) as merely "a series of possibilities". There is no contrition. Instead, the emerging majority of people who want Brexit to be subjected to a binding vote by the electorate are regarded as ignoring "public opinion" or "the will of the people" - somehow enshrined forever in a single advisory referendum in 2016 - and as therefore expressing merely "dissenting views".

Against this gaslit version of reality, the creation of DECEIT is chilling.

Meanwhile, you might ask why there needs to be a separate silo for "Data Ethics and Innovation" when we have the Alan Turing Institute and at least a dozen other bodies, as well as the Information Commissioner, the Electoral Commission and the police. Surely responsibility for maintaining ethical behaviour and regulatory compliance is already firmly embedded in their DNA?

I did wonder at the time of its formation whether the ATI was really human-centric, and never received an answer. And it's somewhat worrying that the ATI has responded to the consultation with the statement "We agree on the need for a government institution to devote attention to ethics". To be fair, however, one can read that statement as dripping with irony. Elsewhere, too, the ATI's response has the air of being written by someone with clenched teeth, wondering if the government really knows what it's doing in this area, any more than it knows how to successfully exit the EU:
We would encourage clarity around which of these roles and objectives the Centre will be primarily or solely responsible for delivering (and in these cases, to justify the centralisation of these functions), and which will be undertaken alongside other organisations.
... We would encourage more clarity around the Centre’s definitions of AI and emerging technologies, as this will help clarify the areas that the Centre will focus on.
Reinterpreting some of the ATI's other concerns a little more bluntly yields further evidence that the ATI smells the same rat that I do:
  • DECEIT will have such a broad agenda and so many stakeholders to consider that you wonder if it will have adequate resources, and would simply soak up resources from other stakeholders without actually achieving anything [conspiracy theorists: insert inference of Tory intent here, to starve the other stakeholders into submission];
  • the summary of "pressing issues in this field" misses key issues around the accountability and auditability of algorithms, the adequacy of consent in context and whether small innovative players will be able to bear the inevitable regulation;
  • also omitted from the consultation paper are the key themes of privacy, identity, transparency in data collection/use and data sharing (all of which are the subject of ongoing investigation by the ICO, the police and others in relation to the Leave campaign);
  • the ATI's suggested "priority projects" imply its concern at the lack of traction in identifying accountability and liability for clearly unethical algorithms;
  • powers given to DECEIT should reinforce its independence and "make its abolition or undermining politically difficult";
  • DECEIT's activities and recommendations should be public;
  • how will the "dialogue with government" be managed to avoid DECEIT being captured by the government of the day?
  • how will "trade offs", "public opinion" and "dissenting views" be defined and handled (see my concerns above)?
I could add to this list concerns about the government's paternalistic outlook instead of a human-centric view of data and technology that goes beyond merely 'privacy by design'. The human condition, not Big Tech/Finance/Politics/Government etc, must benefit from advances in technology.

At any rate, given its parentage, I'm afraid that I shall "remain" utterly sceptical of the need for DECEIT, its machinations and its output - unless and until it consistently demonstrates its independence and good sense, not to mention ethics.

Tuesday, 19 September 2017

BigTech Must Reassure Us It's Human

Recent issues concerning the purchase of lethal materials online, "fake news" and secure messaging highlight a growing tension between artificial intelligence and human safety. To continue their unbridled growth, the tech giants will have to reassure society that they are human, solving human problems, rather than machines solving their own problems at humans' expense. While innovation necessarily moves ahead of the law and regulation, developments in artificial intelligence should be shaped by humane and ethical considerations, rather than these being outsourced to government or treated as secondary.

In the latest demonstration of this concern, Channel 4 researchers were able to assemble a 'shopping basket' of potentially lethal bomb ingredients on Amazon, partly relying on Amazon's own suggestion features or 'algorithms' ("Frequently bought together" and "Customers who bought this item also bought..."), which even suggested adding ball-bearings. This follows the phenomenon that emerged during the Brexit referendum and US Presidential election whereby purveyors of 'fake news' received advertising revenue from Facebook while targeting gullible voters.

Neither business is keen to proactively monitor or police its services for fear of conceding an obligation to do so and rendering itself liable for not doing so where the monitoring fails.

Channel 4 quoted Amazon as merely saying that:
"all products must adhere to their selling guidelines and all UK laws. [We] will work closely with police and law enforcement agencies should they need [us] to assist investigations." [update 20.09.17: Amazon is reported to have responded the next day to say that it is reviewing its website to ensure the products “are presented in an appropriate manner”.]
Amazon makes a valid point. After all, the same products can be bought off-line, yet unlike an offline cash purchase in a walk-in store, if they are bought on Amazon there is likely to be a digital 'audit trail' showing who bought what and where it was delivered. Indeed, it's conceivable that Amazon had alerted the authorities to the nature of the items in Channel 4 researchers' shopping basket and the authorities may have allowed the session to run as part of a potential 'sting' operation. It is perhaps understandable that neither Amazon nor the authorities would want to explain that publicly, but it would be comforting to know this is the case. Channel 4 is also somewhat disingenuous in suggesting this is an Amazon problem, when less well-resourced services or other areas of the Internet (the 'dark web') may well offer easier opportunities to purchase the relevant products with less opportunity for detection.

At any rate, the main difference, of course, is that no one from an offline store is likely to help you find missing ingredients to make a potentially lethal device (unless they're already part of a terror cell or perhaps an undercover operative) - and this helpfulness is the key to Amazon's enormous success as a retail platform. It's possible, however, that a helpful employee might unwittingly show a terrorist where things are, and Amazon might equally argue that its algorithms don't "know" what they are suggesting. But, perhaps because of the implied 'promise' of the algorithms themselves, there is a sense that they should not be vulnerable to abuse in this way.

Similarly, in the case of Facebook, the social network service has become a raging success because it is specifically designed to facilitate the exchange of information that generates passionate connections amongst like-minded people far more readily than, say, the owner of a bar or other social hang-out or a newspaper or other form of traditional media. Equally, however, Facebook might argue that the helpful algorithms aren't actually "aware" of the content that is being shared, despite use of key words etc. Meanwhile, WhatsApp seems to have declined to provide a terrorist's final message because it could not 'read' it (although the authorities seem to have magically accessed it anyway...).

Just as we and the online platform owners have derived enormous benefit from the added dimensions to their services, however, we are beginning to consider that those dimensions should bring some additional responsibilities - whether merely moral or legal - possibly on both users and service providers/developers.

In many ways the so-called 'tech giants' - Apple, Amazon, Alphabet (Google), Facebook and others - still seem like challengers who need protection. That's why they received early tax breaks and exemptions from liability similar to those for public telecommunications carriers who can't actually "see" or "hear" the content in the data they carry. 

But while it's right that the law should follow commerce, intervening only when necessary and in a way proportionate to the size and scale of the problem, the size and reach of these platforms and the sheer pace of innovation are making it very hard for policymakers and legislators to catch up - especially as they tend to have wider responsibilities and get distracted by changes in government and issues like Brexit. The technological waves seem to be coming faster and colliding more and more with the 'real world' through drones and driverless cars, for example.

The question is whether these innovations are creating consequences that the service providers themselves should actively address, or at least help address, rather than ignore as 'externalities' that government, other service providers or society must simply cope with.

The tech giants are themselves struggling to understand and manage the scale and consequences of their success, and the relentless competition to attract the best talent and the race to push the boundaries of 'artificial intelligence' sometimes presents as a declaration of war on the human race. Even the government/university endowed Alan Turing Institute seems to consider the law and ethics as somehow separate from the practice of data science. Maybe algorithms should be developed and tested further before being released, or be coded to report suspicious activity (to the extent they might not already).  Perhaps more thought and planning should be devoted to retraining commercial van and truck drivers before driverless vehicles do to them what the sudden closure of British coal mines did to the miners and their communities (and what the closure of steel mills has done since!).

In any event, the current approach to governance of algorithms and other technological leaps forward has to change if the 'bigtech' service providers are to retain their mantle as 'facilitators' who help us solve our problems, rather than 'institutions' who just solve their own problems at their customers' expense. They and their data scientists have to remember that they are human, solving human problems, not machines solving their own problems at humans' expense.

[update 20.09.17 - It was very encouraging to see Channel 4 report last night that Amazon had promptly responded more positively to researchers' discovery that automated suggestion features were suggesting potentially lethal combinations of products; and is working to ensure that products are "presented in an appropriate manner". The challenge, however, is to be proactive. After all, they have control over the data and the algorithms. What they might lack is data on why certain combinations of products might be harmful in a wider context or scenario.]


Thursday, 15 October 2015

The Alan Turing Institute: Human-centric?

A slightly dispiriting day at The Alan Turing Institute 'Financial Summit', yesterday, I'm afraid to say. 

The ATI itself represents a grand vision and stunning organisational achievement - to act as a forum for focusing Britain's data scientists on the great problems of the world. Naturally, this leaves it open to attempts at 'capture' by all the usual vested interests, and its broad remit means that it must reflect the usual struggle between individuals and organisations, and between 'facilitators', who exist to solve their customers' problems, and 'institutions', who exist to solve their own problems at their customers' expense.

And of course, it's the institutions that have most of the money - not to mention the data problems - so I can see, too, why the ATI advertises its purpose to institutions as "the convener of a multidisciplinary approach to the development of 'big data' and algorithms". It's also true that there are global and social issues that transcend the individual and are valid targets for data scientists in combination with other specialists.

But it was concerning that an apparently neutral event should seem predicated on a supplier-led vision of what is right for humans, rather than actually engineering from the human outward - to enable a world in which you control what you buy and from whom by reference to the data you generate, rather than by being approximated to a model or profile. Similarly, it was troubling to see a heavy emphasis in the research suggestions on how to enable big businesses to better employ the data science community in improving their ability to crunch data on customers for commercial exploitation.

To be fair, there were warning signs posted for the assembled throng of banks, insurers and investment managers - in the FCA's presentation on its dedication to competition through its Innovation Hub; a presentation on the nature and value of privacy itself; and salutary lessons from a pioneer of loyalty programmes on the 'bear traps' of customer rejection on privacy grounds and consumers' desire for increasing control over the commercial use of our data. The director's slides also featured the work of Danezis and others on privacy-friendly smart metering and a reference to the need to be human-centric.  

But inverting the institutional narrative to a truly human-centric one would transform the supplier's data challenge into one of organising its product data to be found by consumers' machines searching open databases for solutions based on actual behaviour - open data spiders, as it were - rather than sifting through ever larger datasets in search of the 'more predictive' customer profile to determine how it wastes, sorry, spends its marketing budget.

Personally, I don't find much inspiration in the goal of enabling banks, insurers and other financial institutions to unite the data in their legacy systems to improve the 'predictive' nature of the various models they deploy, whether for wholesale or retail exploitation, and I'm sure delegates faced with such missions are mulling career changes. Indeed, one delegate lightened the mood with a reference to 'Conway's Law' (that interoperability failures in software within a business simply reflect the disjointed structure of the organisation itself). But it was clear that financial institutions would rather leave this as an IT problem than re-align their various silos and business processes to reflect their customers' end-to-end activities. There is also a continuing failure to recognise that most financial services are but a small step in a wider supply chain. I mean, consider the financial services implications of using distributed ledgers to power the entertainment industry, for example...

When queried after the event as to whose role it was to provide the 'voice of the customer', the response was that the ATI does not see itself as representing consumers' or citizens' interests in particular. That much is clear. But if it is to be just a neutral 'convenor' then nor should the ATI allow itself to be positioned as representing the suppliers in their use and development of 'big data' tools - certainly not with £42m of taxpayer funding. 

At any rate, in my view, the interests of human beings cannot simply be left to a few of the disciplines that the ATI aims to convene alongside the data scientists - such as regulators, lawyers, compliance folk or identity providers. The ATI itself must be human-centric if we are to keep humans at the heart of technology.


Tuesday, 17 February 2015

Will Machines Out-Compete Humans To The Point of Extinction?

I've been a bit absent from these pages of late, partly pulling together SCL's Technology Law Futures Conference in June on 'how to keep humans at the heart of technology'. As I've explained on the SCL site, the conference is part of SCL's effort to focus attention on that question all year, starting with a speech by Oxford University's Professor Nick Bostrom on 2 March: "Superintelligence: a godsend or doomsday device?"

In other words, last year was when the threat of "The Singularity" really broke into the mainstream, while this year we are trying to shift the focus onto how we avert that outcome in practical terms. 

My own book on how we can achieve control over our own data is still ping-ponging between agent and publishers, but will hopefully find a home before another year is out - unless, of course, the machines have other ideas... 


Wednesday, 16 April 2014

Twitter Gnip Shows Why Social Media Should Share Revenue With Users

[Chart comparing social media platforms' losses over time. Source: Financial Times]
Like Google's declaration of war on the human race, the news that Twitter will buy Gnip illustrates why social media platforms should share their Big Data revenue with users. Indeed, they would seem to have no choice if they are to survive in the longer term.

Gnip's CEO claims that:
"We have delivered more than 2.3 trillion Tweets to customers in 42 countries who use those Tweets to provide insights to a multitude of industries including business intelligence, marketing, finance, professional services, and public relations."
And that's not all. Gnip also has "complete access" to data from many other social media platforms, including WordPress, the blogging platform, and more restricted access to data from other platforms, such as Facebook, YouTube and Google+. 

Quite whether users consent to all that is an issue we'll return to in another post shortly. 

Meanwhile, Twitter suggests that Gnip's current activities have "only begun to scratch the surface" of what it could offer its Big Data customers in the future. Yet, from a user's perspective, Twitter has barely changed since Gnip began its data-mining activities. So are users receiving enough 'value' for their participation to keep them interested?

The social media operators would argue that their platforms would never have been built were it not for the opportunity to one day make a profit from users' activity on those platforms. And it may look like the features have not changed much since launch, but part of the value to users is the popularity with other users and it costs a lot to keep each social media platform working as the number of users grows. Each platform also has to keep up with changes to other platforms so users can continue to share links, photos and so on. That means platforms tend to lose a lot of money for quite a long time, as the FT's comparison chart shows. 

But analysing the value to users gets murky when you consider that the social media are already paid to target ads and other information at users based on their behaviour, and that the cost of that type of Big Data activity is reflected in the prices of the goods and services being advertised.

And it doesn't seem right to include the cost of buying and operating a separate Big Data analytics business, like Gnip, in the user's value equation if the user doesn't directly experience any benefit. After all, that analytics business will charge corporate customers good money for the information it supplies, and the cost of that will also be reflected in the price of goods and services to consumers. 

In other words, social media's reliance on revenue from targeted advertising and other types of Big Data activity means that social media services aren't really 'free' at all. Their costs are baked into the price of consumer goods and services, just like the cost of advertising in the traditional commercial media.

And if it's true that the likes of Gnip are only just scratching the surface of the Big Data opportunities, then the revenues available to social media platforms from crunching their users' data seem likely to far exceed the value of the platform features to users. 

Yet user participation is what drives the social media revenues in the first place (not to mention users' consent to the use of their personal data). The social media platforms aren't publishing their own content like the traditional media, just facilitating interaction, so there's also far less justification for keeping all the revenue on that score. And it seems easier to switch social media platforms than, say, subscription TV providers. 

So the social media platforms would seem to have no choice but to offer users a share of their Big Data revenue streams if their ecosystems are to be sustainable.


Thursday, 30 January 2014

P2P Goes Cloud-to-Cloud


In Part 2 of my response to Google's 'computers vs people' meme, I explained that humans can win the war for economic control of their data by transacting on peer-to-peer marketplaces. That's because the P2P platforms don't derive their revenue primarily by using their users' data as bait to attract advertising revenue. Instead, they enable many participants to transact directly with each other in return for relatively small payments towards the platforms' direct operational costs, leaving the lion's share of each transaction with the parties on either side. This post covers some technological developments which move the P2P front line deep into Big Data territory.

Perhaps the ultimate way to avoid Big Data's free ride on the ad revenue derived from your data is to cut your reliance on the World Wide Web itself. After all, the Web is just the 'human-readable' network of visible data that sits on the Internet - just one of the Internet's many uses. As I've mentioned previously, having your own pet 'open data spider' that gathers information based on your data without disclosing it would transform the advertiser's challenge from using Big Data tools to target you with their advertising, to enabling their product data to be found by your spider as and when you need it.

But that would not necessarily solve the problems that arise where your data has to be shared.

Fortunately, all but the most hardcore privacy lobbyists have finally moved beyond debating the meaning of "privacy" and "identity" to realise two important things. First, 'personal data' (data that identifies you, either on its own or in combination with other data) is just one type of user-related data we should be concerned about controlling in a Big Data world. Second, it's critical to our very survival that we share as much data about ourselves as possible with the right recipient in the right context. The focus is now firmly on the root cause of all the noise: lack of personal control over our own data.

Perhaps the leading exponents of this turnaround have been those involved in the Privacy by Design initiative. As explained in their latest report, they've become convinced by a range of pragmatic commercial and technological developments which together produce a 'personal data ecosystem' with you at the centre. You are now able to store your data in various 'personal cloud' services. 'Semantic data interchange' enables your privacy preferences to be attached to your data in machine-readable form so that machines can process it accordingly. Contractually binding 'trust frameworks' ensure data portability between personal clouds, and enable you to quickly grant others restricted access to a subset of your data for a set time and revoke permission at will. The advent of multiple 'persistent accountable pseudonyms' supports your different identities and expectations of privacy in different contexts, allowing for a lawful degree of anonymity yet making your identity ascertainable for contractual purposes. You can also anonymise your own data before sharing it, or stipulate anonymity in the privacy preferences attached to it, so your data can be processed in the aggregate for your own benefit and/or that of society.
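As a toy illustration of those mechanics - machine-readable preferences travelling with the data, plus a time-limited, revocable access grant - consider the sketch below. The vocabulary is invented; a real scheme would use an agreed standard under a trust framework:

```python
from datetime import datetime, timedelta, timezone

# Data that travels with machine-readable usage preferences.
payload = {
    "data": {"postcode_area": "EC2", "annual_mileage": 9000},
    "preferences": {
        "allow_aggregation": True,    # may be processed in the aggregate
        "allow_profiling": False,     # no individual profiling
        "pseudonym": "pcw-7f3a",      # persistent accountable pseudonym
        "price_per_use": 0.02,        # value condition (see next paragraph)
    },
}

# A registry of time-limited grants, revocable at will.
grants = {}

def grant_access(recipient: str, days: int) -> None:
    grants[recipient] = datetime.now(timezone.utc) + timedelta(days=days)

def revoke(recipient: str) -> None:
    grants.pop(recipient, None)

def may_process(recipient: str) -> bool:
    expiry = grants.get(recipient)
    return expiry is not None and datetime.now(timezone.utc) < expiry

grant_access("insurer-a", days=30)
assert may_process("insurer-a")
revoke("insurer-a")
assert not may_process("insurer-a")
```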

All that's missing is a focus on determining the right value in each context. I mean, it should be a simple matter to attach a condition to your data that you are to be paid a certain amount of value whenever Big Data processes it. But 'how much'? And are you to be 'paid' in hard currency, loyalty points or cost savings?   

The ability to put a value on your data in any scenario is not as far away as you might think. The Privacy by Design report notes that the personal data ecosystem (PDE) is "explicitly architected as a network of peer-to-peer connectivity over private personal channels that avoid both information silos and unnecessary “middlemen” between interactions."

Sound familiar?

As explained in the previous post, P2P marketplaces already enable you to balance your privacy and commercial interests by setting a value on your data that is appropriate to the specific context. Your account on each platform - whether it's eBay or Zopa or one of many others - is effectively a 'personal cloud' through which you interact with other users' personal clouds to sell/buy stuff or lend/borrow money on service terms that leave most of the transaction value with you and the other participants.

The wider developments in semantic data interchange, trust frameworks etc., that are noted in the Privacy by Design report enable these clouds or marketplaces to be linked with other personal clouds, either directly or through the 'personal information managers' envisaged in the Midata programme.

Ultimately, we could use one or two personal information managers to host and control access to our data and derive income from the use of that data by transacting on different P2P platforms dedicated to discrete activities. Not only would this make it simpler to understand and verify whether the use of our data is appropriate in each context, but it would also enable us to diversify our sources of value - a concept that is just as important in the data world as it is in financial services. You don't want all your data and income streams (eggs) in the one cloud (basket).

The Privacy by Design report claims that "all these advancements mean that Big Privacy will produce a paradigm shift in privacy from an "organisation-centric" to a balanced model which is far more user-centric".

I agree, but would add a cautionary note.

In the context of the 'computers vs people' meme, I'm concerned by references in the report to "cloud-based autonomous agents that can cooperate to help people make even more effective data sharing decisions". Has Privacy by Design been unwittingly captured by the Singularity folk?

I don't think so. Such 'cloud-based agents' are ultimately a product of human design and control. Whether the technologists at the Singularity University choose to believe it or not, humans are in fact dictating each successive wave of automation. 

At any rate, we should take advantage of technology to keep things personal rather than submit to the Big Data machines.


Wednesday, 29 January 2014

Humans Win In The P2P Economy

There's been a lot of heat rising from Google CEO Eric Schmidt's recent assertions about a "race between computers and people" that obliges people to avoid jobs that machines can do. Initially, I suggested this was somewhat disingenuous, given the belief amongst the Silicon Valley elite that machines will achieve the 'Singularity', a state of autonomous superintelligence, at which point they will outcompete humans to the point of extinction. Merely pushing people into a narrower and narrower range of 'creative' jobs only furthers that cause, since their creative output attracts the vast advertising revenues Big Data needs to build ever smarter machines.

But I also suggested there's an antidote, and today I want to focus more on that.

Not all Internet platforms finance themselves primarily by using free content as bait for advertising revenue. Since eBay enabled the first person-to-person auction in 1995, the 'P2P' model has spread to music and file sharing, voice and data communications, payments, donations, savings, loans, investments and so on. There are now too many such platforms to list. Even political campaigning has become a person-to-person proposition. In Japan a person can offer to care for another person's elderly parents in his city, if someone else will care for his own parents in another.

Like their meat-space counterparts - the 'mutual society' and the 'co-operative' - online P2P platforms enable people to transact and communicate directly with each other in return for relatively small payments towards the platforms' direct operational costs of facilitating the connection. The P2P model vastly limits the need for advertising, since the platform either enables participants to find each other or automatically matches and connects them using the data the participants enter. Through central service terms, each participant agrees with the others how the platform works and how their data is to be used. Typically, every participant has their own data account in which they can view their transaction history. Some platforms will allow that data to be downloaded, along with all the transaction data on the platform, and this is to be encouraged. Low charges make this a high volume business, like Big Data, but platform operators are able to achieve profitability without commanding the lion's share of the margin in each transaction. This helps explain why eBay is solidly profitable but has a lower market capitalisation than, say, Facebook or Google. It's a leaner intermediary - a facilitator rather than institution. That Wall Street attaches a lower value to a comparatively democratic and sustainable business model tells you all you need to know about Wall Street.

Google and Facebook might argue they are a kind of P2P platform. But aside from a few services, like App sales, they don't directly facilitate the negotiation and conclusion of transactions, so they cannot justify a transaction fee. Perhaps they might say they own the web pages and the servers or virtual 'land' on which their advertising is displayed. But that doesn't ring true. They provide the tools for users to create web pages, but if users did not build them there would be no facade on which to display ads, and no one to look at them. Besides, the supply of creative tools is a one-off, while users supply limitless amounts of data in return. Meanwhile, the advertising revenue that was once merely enough to sustain the Big Data ecosystem now dwarfs the value derived by all participants except the platform operators themselves. Any essence of mutuality - and humanity - has been lost in exactly the same way that banks grew from their mutual origins to capture more and more of the 'spread' between savings and loans. And just as banks now allocate most of the money they create to add financial assets to their balance sheets, rather than financing the productive economy, the Big Data platforms are investing in more ways to capitalise on free user data to lure advertising spend, rather than figuring out new ways to leave most of the value with their users.

Dealing with people and businesses over P2P platforms is a good way to use your own data to claw some of that value back.



Friday, 24 January 2014

Google Declares War On The Human Race

Google's CEO, Eric Schmidt, finally admitted yesterday something that the likes of Jaron Lanier have been warning us about for some years now: he believes there's actually a race between computers and people. In fact, many among the Silicon Valley elite fervently believe in something called The Singularity. They even have a university dedicated to achieving it.

The Singularity refers to an alleged moment when machines develop their own, independent 'superintelligence' and outcompete humans to the point of extinction. Basically, humans create machines and robots, and harvest the world's data until a vast proportion of it is in the machines; those machines start making their own machines, and so on, until they become autonomous. Stuart Armstrong reckons "there's an 80% probability that the singularity will occur between 2017 and 2112".

If you follow the logic, we humans will never know if the Singularity actually happened. So belief in it is an act of faith. In other words, Singularity is a religion.

Lots of horrific things have been done in the name of one religion or another. But what sets this one apart is that the believers are, by definition, actively working to eliminate the human race.

So Schmidt is being a little disingenuous when he says "It's a race between computers and people - and people need to win," since he works with a bunch of people who believe the computers will definitely win, and maybe quite soon. The longer quote on FT.com suggests he added:
“I am clearly on that side [without saying which side, exactly]. In this fight, it is very important that we find the things that humans are really good at.”
Well, until extinction, anyway.

Of course, the Singularity idea breaks down on a number of levels. For example, it's only a human belief that machines will achieve superintelligence. If machines were to get so smart, how would we know what they might think or do? They'd have their own ideas (one of which might be to look after their pet data sources, but more on that shortly). And there's no accounting for 'soul' or 'free will' or any of the things we regard as human, though perhaps the zealots believe those things are superfluous and the machines won't need them to evolve beyond us. Finally, this is all in the heads of the Silicon Valley elite...

Anyhow, Schmidt suggests we have to find alternatives to what machines can do - things only humans are really good at. He says:
"As more routine tasks are automated, this will lead to much more part-time work in caring and creative industries. The classic 9-5 job will be redefined." 
Which is intended to focus our attention away from the trick that Google and others in the Big Data world are relying on to power up their beloved machines and stuff them full of enough data to go rogue. 
By offering some stupid humans 'free' services that suck in lots of data, Big Data can charge other stupid humans for advertising to them. That way, the machines hoover up all the humans' money and data at the same time.

This works just fine until the humans start insisting on receiving genuine value for their data.

Which is happening right now in so many ways that I'm in the process of writing a book about it. 

Because it turns out humans aren't that dumb after all. We are perfectly happy to let the Silicon Valley elite build cool stuff and charge users nothing for it. Up to a point. And in the case of the Big Data platforms, we've reached that point. Now it's payback time.

So don't panic. The human race is not about to go out of fashion - at least not the way Big Data is planning. Just start demanding real value for the use of your data, wherever it's being collected, stored or used. And look out for the many services that are evolving to help you do that.

You never know, but if you get a royalty of some kind every time Google touches your data, you may not need that 9 to 5 job after all... And, no, the irony is not lost on me that I am writing this into the Google machine ;-)



Tuesday, 12 March 2013

Monetizing You

Jaron Lanier, the computer scientist and writer, has been busy explaining that we need to reward each person for the data they reveal or post on the Internet, otherwise it will become unsustainable as an ecosystem. This idea perhaps chimes with the EU's requirement for much more explicit consent to 'cookies', futile as it has proved to be so far. Could we see the advent of paid-for marketing cookies, or will technology evolve to get rid of this problem entirely?

To date, the debate about the future of the Internet has largely been driven by investors, who have insisted that online businesses generate short to medium term profits. Fearful of killing off a nascent commercial channel, most Web 2.0 giants have clustered around the advertising model, making their services 'free' to the consumer, and leaving advertisers to pass on the cost of marketing, as happens offline. Others have adopted the 'Freemium' model, in which perhaps only 10% of customers are relied upon to pay for extra functionality and so on, thereby subsidising a free ride for the rest. Indeed, Jakob Nielsen has estimated that:
"In most online communities, 90% of users are lurkers who never contribute, 9% of users contribute a little, and 1% of users account for almost all the action."
But since the dawn of television, in particular, advertisers and ad agencies have been on a quest to figure out who sees their advertisements and how to target their ads ever more accurately. The open nature of Internet technology and the advent of the 'cookie' has made it easier to follow users from site to site, targeting advertising at them along the way, and some internet service providers have gone so far as to filter traffic as it passes through their services. 

Now it's one thing for advertisers to rely on data from your TV set, but people are understandably less comfortable about being followed around all day and having their every preference logged. Jaron Lanier says participants should be paid for that privilege, and I have to agree. Unless participants feel they are being compensated directly enough, they will stop participating online - to the extent they have the choice. This is not just a matter of persuading the 1% to remain high content contributors, or the 10% to actually pay for stuff in the Freemium context. The other 90 to 99% also need some recompense for agreeing to reveal evidence of their behaviour. Great service may be enough in some cases, but otherwise people will surely want a fairly direct economic return for disclosing their location etc., whether in the form of cash or some other sufficiently direct economic benefit, like genuine discounts or cost savings.

While a legal solution seems rather unlikely, the battleground is at least becoming more defined as the European Commission struggles to hold the perimeter of its (unduly broad) General Data Protection Regulation. Unfortunately, the media appears to be naively positioning this as simply big business against the individual, equating BT's interests with those of US retailers. But this fails to recognise that old world institutions, like the telecommunications companies and traditional media empires have the knives out for the revenue streams enjoyed by facilitators like Google, Facebook and others. Yet it's also important that the interests of consumers are represented in a balanced way, rather than by unduly paranoid consumer advocates or European Commission officials zealously toting their single market fantasy. The European Parliament isn't exactly the best candidate to stand in for the pragmatic consumer...

However, for any problem on the Internet it's always worth looking for an acceptable technological solution before a legal one.

The first is the addition of payment features within online games and other applications. In-app payments are increasingly common, but there is a lot more scope to make it easier to charge for content. It's not enough to be able to host advertisements on blogs or email. We need low-friction features to enable direct payment without erecting pay walls, like the TipJar on Vimeo. Others, like HonestyBoxx, are enabling people to monetize their advice by adding an application to their blogs and personal websites. But now that we've been forced to endure all these 'cookie' consent boxes that hover around European web sites, why not add payment functionality to sweeten the deal?

Of course, all these solutions suffer from being human-readable, and I believe that the Internet will evolve to remove the issue altogether. As I explain in my recent article for SCL, once product data is published in machine-readable format, the marketing challenge won't be to find customers, but to enable products to be found by customers' machines as required. In a 'Linked Data' world, our computers won't need to disclose anything about us. Our own personal 'spiders' can run around the Internet collecting and analysing openly available data and reporting their findings to us personally. As a result: 
 "a combination of Midata, Open Data and Big Data tools seems likely to liberate us from the tyranny of the 'customer profile' and reputational 'scores', and allow us instead to establish direct connections with trusted products and suppliers based on much deeper knowledge of our own circumstances."


Monday, 8 October 2012

Google, Amazon and The Shape of SME Finance

In November 2007 it seemed clear that facilitators like Google and Amazon would capitalise on their alignment with their customers' day-to-day activities to disrupt banking. Both of these giants already have e-money licences in Europe (I helped Amazon apply for its own), and the latest foray is into trade finance. Google will offer a line of credit for AdWords advertising spend, while it appears Amazon will lend to selected small businesses against their projected sales over the Christmas season. 

While these services may be offered initially in the US, where there are lots of small business funding options, bear in mind that just four UK banks control 90% of the small business finance market and are lending less and less to small firms. And while some UK banks enable some merchants to obtain cash advances against their card receivables, it's not exactly a core activity.

The competition alone should be welcome, yet the critical feature of both the Google and Amazon services is that they are seamlessly intertwined with customer behaviour. Both businesses could have decided to launch free-standing, me-too banking services (like the UK supermarkets), but they have not done so. No doubt they also intend to attract new customers with the latest services, but only by showing that they support what small businesses want to do - namely, sell their own goods and services across a staggering array of markets and demographics.

And by patiently facilitating their customers' activities, neither Google nor Amazon needs to incentivise staff to sell services to people who don't need them, as banks have done.


Monday, 15 March 2010

Supermeercats


These guys have a real job to do in Barcelona

Thursday, 29 October 2009

Do Price Comparison Sites Increase Premiums?

Hypothesis: the rising cost of car insurance is actually driven by marketing costs.

Gross premiums have increased 14% in the past year, and the car insurance industry would have us believe this is driven by the 'rising cost of personal injury claims and fraud'. But that would suggest net premiums (the proportion of the gross premium that actually covers the insured event) are being priced wrongly, which I find difficult to believe. Actuaries must be fairly good at predicting accident rates, deaths and injuries and so on by now. And these actually appear to be in decline, according to Office for National Statistics research published in April 2009:
"The total number of deaths in road accidents fell by 7 per cent to 2,946 in 2007 from 3,172 in 2006. However, the number of fatalities has remained fairly constant over the last ten years...

The total number of road casualties of all severities fell by 4 per cent between 2006 and 2007 to approximately 248,000 in Great Britain. This compares with an annual average of approximately 320,000 for the years 1994-98.

The decline in the casualty rate, which takes into account the volume of traffic on the roads, has been much steeper. In 1967 there were 199 casualties per 100 million vehicle kilometres. By 2007 this had declined to 48 per 100 million vehicle kilometres."
So I wonder if there's a variable cost in there that's proving difficult to control. A chief culprit might be marketing. We're certainly being inundated with TV advertisements for insurance price comparison sites, so I wonder if that is in turn being paid for by higher 'costs per click' associated with advertising on those sites? While some insurers are refusing to list their products on the price comparison sites, they may still face competition for key search terms, for example.
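To see how quickly click costs could eat into a premium, here's a back-of-envelope calculation. Every figure below is an assumption for the sake of the arithmetic, not market data:

```python
# Back-of-envelope sketch of the hypothesis: bid-up click costs
# feeding through into gross premiums. All figures are assumed.
cost_per_click = 4.00     # assumed CPC for a contested insurance keyword
clicks_per_quote = 10     # assumed visitors needed per completed quote
quotes_per_sale = 5       # assumed quotes per policy actually bought

acquisition_cost = cost_per_click * clicks_per_quote * quotes_per_sale
print(f"Acquisition cost per policy: £{acquisition_cost:.2f}")  # £200.00

net_premium = 300.00      # assumed actuarial (risk-based) premium
gross_premium = net_premium + acquisition_cost
print(f"Marketing share of gross premium: "
      f"{acquisition_cost / gross_premium:.0%}")                # 40%
```

On those invented numbers, marketing would account for two-fifths of the gross premium - plenty of room to swamp any rise in the cost of claims.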

Needless to say, I haven't yet been able to find much public analysis on this, but there is a helpful post on analysing the efficiency of cost per click by The Catalyst. It's easy to over-spend, and the competition may force you to.
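
To make the over-spend point concrete, here is a minimal sketch of the break-even arithmetic facing a search advertiser - every figure is invented for illustration, not drawn from this post or The Catalyst's analysis:

# Break-even cost-per-click arithmetic for a search advertiser.
# All figures are hypothetical, for illustration only.

conversion_rate = 0.02   # assumed share of clicks that become policyholders
premium = 450.00         # assumed gross annual premium per policy (GBP)
margin = 0.10            # assumed share of premium available for marketing

# The most the advertiser can pay per click before acquisition costs
# consume the whole marketing allowance on each policy sold:
break_even_cpc = conversion_rate * premium * margin
print(f"Break-even CPC: £{break_even_cpc:.2f}")   # £0.90

# If rivals bid the search term past that level, every click is
# loss-making unless the shortfall is recovered elsewhere.
actual_cpc = 1.50        # assumed market rate under heavy competition
print(f"Loss per click: £{actual_cpc - break_even_cpc:.2f}")   # £0.60

On numbers like these, an insurer bid past its break-even point either stops advertising or recovers the shortfall somewhere, and the gross premium is the obvious place.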

If my hypothesis is right, the case is building for semantic web applications to enable people's computers to find deals simply by interrogating insurers' computers - without all the expensive advertising noise.
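
For a sense of what that might look like, here is a minimal sketch of a buyer's agent interrogating insurers' computers directly - the endpoints, request fields and response fields are all hypothetical, since no such standard exists:

# A buyer's agent querying insurers' machine-readable quote endpoints
# directly, with no advertising intermediary. Endpoints and field
# names are invented for illustration.
import json
import urllib.request

INSURERS = [
    "https://quotes.example-insurer-a.co.uk/motor",
    "https://quotes.example-insurer-b.co.uk/motor",
]

risk_profile = {"postcode": "SW1A 1AA", "vehicle": "Ford Focus", "driver_age": 40}

quotes = []
for endpoint in INSURERS:
    request = urllib.request.Request(
        endpoint,
        data=json.dumps(risk_profile).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(request, timeout=5) as response:
            quotes.append(json.load(response))
    except OSError:
        continue  # skip insurers whose computers don't answer

# Choose on price alone - no brand advertising involved.
if quotes:
    best = min(quotes, key=lambda quote: quote["annual_premium"])
    print(best["insurer"], best["annual_premium"])

The selection happens machine-to-machine, so nobody is bidding up search terms or buying TV spots to win the click - which is precisely the cost my hypothesis says is being loaded onto premiums.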

Wednesday, 30 September 2009

Market Research and Social Media

Today I presented again on 'Behavioural Targeting of Online Advertising', this time at the 5th Annual Online Research Conference in London. Not that I advise any clients in the area, but I've tried to keep up to date in light of the whole Phorm controversy.

Unfortunately I couldn't stay for the day, but I did catch the morning.

I enjoyed Mark Earls' presentation on the changing relationships between people and organisations, and the role of market researchers as mediators who can help everyone adjust to the new reality. It was also interesting that he picked up on the useful role that the tons of publicly available data can play, and that reminded me of Hans Rosling's excellent presentation on that subject at TED:



Mike Hall of Verve tried to define a new medium called the 'online brand community'. There was no time for questions, but this seems to assume the brand is at the centre of things, and I wonder what Mike would say about the research value of comments people publish in the complete absence of the brand. In distilling the essence of community into 6 'rules', Mike also said that 'participation is the oxygen of the community'. But surely the 'oxygen' is whatever induces participation. And it's too simplistic to state as another rule that people participate online to obtain information. Some want to broadcast, others to listen. As the guys from InSites Consulting reported, people tweet to chat socially, 'show off' a rare URL, upload photos, or because they're curious, want a laugh or to be made to wonder. I guess that information is at the heart of all those things, but there's far more to it.

I'm sure the afternoon was just as thought-provoking. Definitely an event to keep an eye out for next year.

Monday, 7 September 2009

New Firms Best At Leveraging Social Media?

A hat tip to Mark Nepstad for pointing out Chris Perry's article on the challenge for any established business trying to leverage the social media. Just as the military potential of the aeroplane was not fully realised until the challenge was eventually handed over by the Army to a newly created Air Force, Chris suggests that marketing teams need to be re-engineered in order for businesses to realise the potential afforded by a phenomenon as 'revolutionary' as the social media.

But this misses the wood for the trees.

The rise of the Air Force and the success of Google, eBay, Amazon etc. illustrate that leveraging horizontal technological innovations is not achieved by shuffling the deckchairs in the marketing department of established organisations, but by forging new and separate businesses.

That leaves the old guard with the challenge of engaging with the upstarts in order to leverage the upstarts' greater success with the new technology. Time Warner (AOL), NewsCorp (MySpace) and even eBay (Skype) have famously demonstrated that acquiring one of these new firms doesn't necessarily result in successful engagement. So it seems that established businesses should both encourage new businesses to flourish around significant new horizontal innovations, and focus on co-operating with them to serve their customers, rather than on outright ownership. Some, including the Wharton Business School, have called this 'coopetition'.

Figuring out how to compete by co-operating shouldn't necessarily entail wholesale reorganisation, especially when deep knowledge of the capabilities and shortcomings of your own business is key to knowing what's needed from the other party. Indeed it might be more beneficial to give managers and staff 'permission' to admit their organisation's shortcomings and figure out where they need help to adequately serve their customers, rather than to drive the organisation through complex wholesale change programmes.

At any rate, the scale of the challenge posed by horizontal technological shifts may at least partly explain why the average lifespan of a major western corporation is 40-50 years...


Friday, 24 July 2009

Conspicuous Thrift - Your 5 New Consumer Tactics

Hat-tip to none other than EU Commissioner Meglena Kuneva, whose recent post alerted me to the report of the McKinsey Global Institute on our highly effective responses to the credit crunch, at least as consumers.

McKinsey reports that a third of the decline in Eurozone GDP in Q4 2008 was the result of reduced consumer spending. That's particularly significant because consumer spending was the single largest factor fuelling GDP growth during 2002-07.

They say our spending is driven by confidence, incomes, wealth, the availability of credit and cost of living. All these currently point downward, though the cost of living is obviously less of a problem as prices fall in line with declining demand.

The McKinsey report suggests our savings are a function of the type of items we target for savings, the type of consumers we are (discussed below) and the tactic we choose (replace items only when needed, control spending, do-it-yourself, seek value and shop smarter).

McKinsey also say that a "cheap is chic" refrain has inspired the 'do it yourself' tactic. This is more confirmation that the Age of Conspicuous Thrift has dawned. It may also be shorthand for moral or ethical choices - 'green' sentiment and the 'counter-Veblen' effect ("preferences for goods increasing as their price falls, over and above the traditional supply and demand effect, due to conspicuous thrift amongst some consumers"). Do The Green Thing, for example, lists seven tactics for leading a greener life:
"1. You get from A to B without any C when you Walk The Walk
2. It’s delicious but it causes more CO2 than cars so go Easy On The Meat
3. Resist the urge to buy the latest and Stick With What You Got
4. Turn down the central heating and turn up the Human Heat
5. The art of wasting nothing and using up everything: All-Consuming
6. Instead of jetting your way around the world, Stay Grounded
7. Don’t leave it on or even put it on, Plug Out"
What type of items are we targeting for the most savings?

McKinsey say that of all spending categories, eating out has taken the biggest hit. Interestingly, however, electronic gadgets rank 13th on average - we'd rather have an iPhone than a fancy meal. But if incomes drop by 20%, gadgets rise to the sixth most likely category to be cut. The least likely area to target for savings is insurance - perhaps reflecting higher anxiety about what the future holds.

What type of consumer are you? What are your tactics?

The report finds there are 4 consumer types: "party's over", "domestic downsizers", "food scrooges" and "basic bargainers". Most people are "party's over" types, and these have the largest impact on all spending (especially eating out, clothes, booze and fags). The other consumer types cut back more selectively using specific tactics. For example, "domestic downsizers" target equipment (cars, electronics and home furnishings) by 'replacing only when needed' and make 45% of their total savings on holidays, simply by controlling their spending. Electronic goods tend to fall prey to people engaged in 'controlled spending' and 'shopping smarter'.

What's the message to businesses?

Retailers will have to think carefully about how their offerings may fall victim to these tactics. McKinsey suggests helping people to budget, pricing competitively and transparently, highlighting usage costs, incentivising the purchase of replacements, reducing wasteful packaging and promoting home use/assembly of products. Those selling consumer equipment and holidays in the UK would do well to diversify into insurance, utilities, education or groceries.

I was in a chain store today, and the cashier left her station to take me to the shelves where I could get a 2-for-one item and add something else to cut my overall bill. I was stunned. But in a good way. I'll go back there.

Friday, 1 May 2009

Phoul-Mouthing the Phoul-Mouthers Who Phoul-Mouth Etc

I'm very much looking forward to a balanced, impartial, rational presentation of my balanced, impartial and rational - and very much personal - views on behavioural targeting or interest-based advertising at the SCL Information Governance Conference on 12 May.

In the meantime, I would only observe that this site is a nice illustration of the implications I discussed a few months back, of trying to build a brand that is perceived as an institution, rather than trying to build one that is perceived as a facilitator.

Tuesday, 7 April 2009

Phorm Town Meeting


By the end of Phorm's "2nd Town Hall Meeting" it became obvious that the company is still trying to launch a product with both hands tied behind its back.

Its structure means that Phorm's online behavioural advertising service will only be successful if internet service providers implement it, then successfully market it to individual users, advertisers and web site owners. At that point, the company says, advertisers will experience less wastage in advertising spend, content owners will find it easier to monetise content, web site owners can charge more for space, and end-users will see more relevant ads as they browse.

Exactly what this means in commercial terms is naturally unclear. And Phorm rightly points out that it would be wrong for it to release the details of ISPs' trials or take-up incentives likely to be offered to ISPs' customers, at least until the ISPs are good.. and... ready..... to...... launch....... After 7 years of development, Phorm says it has learned to be patient - a revolution in the internet space.

It seems fairly pointless to have public meetings to talk about offering "choice" when you have no product in the market and the meat of your proposition is under wraps for commercial or regulatory reasons. Nevertheless, Phorm took the opportunity to engage in further damage limitation on the privacy front and to set the commercial context for its service with a rundown on the online advertising market.

All the legal points have been made on the privacy front, and don't bear repeating here - though I'll summarise them at the SCL's Information Governance conference. Phorm seems to think they've all gone away, or will be made to go away by launch.

Network opt-out was mentioned. Network opt-in is preferred, as is a way to block the service altogether, so that I don't need to store either their opt-in or opt-out cookies. Having to choose whether to store Phorm's opt-in or opt-out cookies is only a choice about how you use Phorm's service, not a choice between using it and not using it. Phorm says the current cookie practices are less transparent than its own service will be. From a user standpoint, that doesn't deal with the point that I can choose not to visit certain sites, and to clear their cookies selectively, but I can't as readily avoid Phorm's service - or choose to use it on some sites and not others - if it's being run at the ISP level. That "choice" doesn't feel very personalised at all, and personalisation is at the heart of how the web is developing.

Phorm asks why the likes of [Google and Facebook] don't have "town meetings" to explain their privacy policies and settings, but I can't think of a venue big enough - and of course they do constantly explain and respond to privacy queries from their massive, global communities in a very public way, online, where everyone can participate.
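
To illustrate why storing an opt-out cookie is still a form of participation, here is a minimal sketch of the general opt-out-cookie mechanism - the cookie name and the behaviour in each branch are assumptions for illustration, not Phorm's published design:

# The opt-out-cookie paradox, in miniature. The cookie name and the
# behaviour attached to each value are hypothetical.

def service_decision(cookies: dict) -> str:
    choice = cookies.get("phorm_choice")  # hypothetical cookie name
    if choice == "in":
        return "profile browsing and target ads"
    if choice == "out":
        return "pass traffic through without profiling"
    # No cookie looks exactly like a brand-new user, so what happens
    # here is whatever default the deployment chooses.
    return "apply the deployment's default"

print(service_decision({"phorm_choice": "out"}))  # respected, for now
print(service_decision({}))  # after clearing cookies, the choice is gone

Whichever branch runs, the page requests have already traversed the ISP-level system; the cookie only governs what that system does with them - which is exactly why it's a choice about how you use the service, not whether you use it.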

Phorm also appears to be creating some kind of moral panic by saying that it is part of the solution to preserving the humble newspaper - not to mention journalistic integrity. Shock, horror: journalists are apparently being asked to insert certain keywords in their stories to help attract the right traffic to their newspaper's online ads. Apparently, if Phorm were implemented and used by [everybody], content publishers would not [have to] do this. But the newspapers I read from time to time don't seem all that averse to coupling themes and stories with advertising in their offline manifestations, so it's hardly the end of the world as we know it. And I don't see how newspapers can escape people's desire to see their content unbundled any more than the record companies could. Their challenge is to keep innovating, as Eric Schmidt told US newspapers yesterday.

Phorm also suggests that the major ad service operators (Google, Facebook et al) aren't entitled to their current or growing flows of advertising revenues. The market will no doubt decide, but this suggestion ignores how those companies finance their own core businesses, which millions and millions of people clearly find very compelling - apparently more so than limited bundles of "news". It also ignores the importance of search and online communities for newspapers' content, not to mention ad deals.

Ultimately, comparisons with Google and Facebook highlight the fact that Phorm is not a bottom-up phenomenon. It's something that will only happen if big telecoms providers say so, and that collides with the Web 2.0 ethos. This, coupled with the Orwellian privacy issues - whether real or perceived - makes Phorm's marketing job very much harder.

