Sixteen years on from my initial posts on the subject of a personal assistant that can privately find and buy stuff for you on the web, we now have 'open agentic AI'. But are you really any closer to the automated selection and purchase of your own personalised products without needlessly surrendering your privacy or otherwise becoming the victim? Should this latest iteration of open generative AI be autonomously making and executing decisions on your behalf?
What is Agentic AI?
An 'agentic' AI is an evolution of generative AI beyond a chatbot. It receives your data and relies on pattern matching to generate, select and execute one of a number of potential pre-programmed actions without human guidance, then 'learns' from the result (as NVIDIA, the leading AI chip maker, explains).
A 'virtual assistant' that can find, buy and play music, for example, is a form of agentic AI (since it uses AI in its processes), but the ambition is a much wider range of tasks, with more process automation and autonomy (if not full end-to-end autonomy).
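To make that loop concrete, here is a minimal, purely illustrative sketch in Python of the 'receive data, select a pre-programmed action, execute it, learn from the result' cycle described above. The action names, the selection step and the feedback signal are all hypothetical placeholders rather than any vendor's actual implementation (a real system would put an LLM or planner where the selection happens).

```python
import random

# Hypothetical pre-programmed actions the agent can choose between.
ACTIONS = ["search_catalogue", "compare_prices", "place_order", "do_nothing"]

# Crude 'learned' preference weights, one per action (illustrative only).
weights = {action: 1.0 for action in ACTIONS}

def select_action(observation: str) -> str:
    """Pick one of the pre-programmed actions, weighted by past feedback.
    A real agent would use an LLM or planner here; this just samples by weight."""
    actions = list(weights)
    return random.choices(actions, weights=[weights[a] for a in actions])[0]

def execute(action: str, observation: str) -> float:
    """Pretend to carry out the action and return a success/failure signal.
    In a real agent this would call external tools, APIs or websites."""
    print(f"Executing {action!r} in response to {observation!r}")
    return random.uniform(-1.0, 1.0)  # stand-in feedback from the environment

def learn(action: str, reward: float) -> None:
    """'Learn' by nudging the chosen action's weight up or down."""
    weights[action] = max(0.1, weights[action] + 0.1 * reward)

# The agentic loop: observe -> decide -> act -> learn, with no human in the loop.
for observation in ["user asked to play a song", "user asked to buy a ticket"]:
    chosen = select_action(observation)
    feedback = execute(chosen, observation)
    learn(chosen, feedback)
```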
You'll see a sleight-of-hand in the marketing language (like NVIDIA's) as developers start projecting 'perception', 'understanding' and 'reasoning' onto their agentic AIs, but computers don't actually do any of those human things.
It's certainly a compelling idea to apply this to automating various highly complex, tedious consumer 'workflows' that have lots of different parameters - like buying a car, perhaps (or booking a bloody rail ticket in the UK!).
Wearing my legal hat, I also see myriad interesting challenges (which I'd be delighted to discuss, of course!), some of which are mentioned here, but not all...
Some challenges
The main problem with using an 'agentic AI' in a consumer context is the involvement of a large language model and generative AI where there is a significant (e.g. economic, medical and/or legal) consequence for the user, as opposed to a chatbot or information-only scenario (though those can also be problematic). Currently, household or device-based virtual assistants are carrying out fairly mundane tasks, and you could probably get a refund if one bought you the wrong song, for example, if that really bothered you. Buying the wrong car would likely be a different matter.
There may also be confusion about the concept of 'agency' here. The word 'agentic' is used to mean that the AI has 'agency' in the sense that it can operate without human guidance. That AI is not necessarily anyone's legal 'agent' (more below), and it is trained on generic training data (subject to privacy and copyright consents/licensing), which these days is often itself synthetic, i.e. generated by another AI. So agentic AIs are not hosted exclusively by or on behalf of a specific consumer, and they do not cater to a single end-customer's personalised needs in terms of the data they hold/process or how they deal with suppliers. An agentic AI does not 'know' or 'understand' anyone, let alone you.
Of course, that is consistent with how consumer markets work: products have generally been developed to suit the supplier's requirements in terms of profitability and so on, rather than any individual customer's needs. Having assembled what the supplier believes to be a profitable product by reference to an ideal customer profile in a given context, the supplier's systems and marketing/advertising arrangements seek out customers for the product who are 'scored' on the extent to which they fit that 'profile' and context. This also preserves 'information asymmetry' in favour of the supplier, who knows far more about its product and customers than you know about the supplier or the product. In an insurance context, for example, that will mean an ideal customer will pay a high premium but find it unnecessary, too hard or impossible to make a claim on the policy. For a loan, the lender will be looking for a higher risk customer who will end up paying more in additional interest and default fees than lower risk customers. But all this is only probabilistic, since human physiology may be 'normally distributed' but human behaviour is not.
So using an agentic AI in this context would not improve your position or relationship with your suppliers, particularly if the supplier is the owner/operator of the agentic AI. The fact that OpenAI has offered its 'Operator' agentic AI to its Pro customers (who already pay a subscription of $200 a month!) raises the question of whether OpenAI really intends to rock this boat, or whether it's really a platform for suppliers, much as Facebook and Google search are in the online advertising world.
It's also an open question - and a matter for contract or regulation - as to whether the AI is anyone's legal 'agent' (which it could be if the AI were deployed by an actual agent or fiduciary of the customer, such as a consumer credit broker).
Generative AIs also have a set of inherent risks. Not only do they fail to 'understand' data, but to a greater or lesser degree they are also inaccurate, biased and randomly hallucinate rubbish (not to mention the enormous costs in energy/water, capital and computing; the opportunity cost of diverting such resources from other service/infrastructure requirements; and the other 'externalities' or socioeconomic consequences that are being ignored and not factored into soaring Big Tech stock prices - a bubble likely to burst soon). It may also not be possible to explain how the AI arrives at its conclusions (or, in the case of an agentic AI, why it selected a particular product or executed a specific task rather than another). Simply overlaying a right to human intervention by either customer or supplier would not guarantee a better outcome on these issues (due to the lack of explainability, in particular): a human should be able to explain why and how the AI's decision was reached and be able to re-take the decision. And, unfortunately, we are seeing less and less improvement in each of these inherent risk areas with each new version of generative AI.
All this means that agentic AI should not be used to fully automate decisions or choices that have any significant impact on an individual consumer (such as buying a car or obtaining a loan or a pension product).
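By way of illustration only, that constraint could be expressed as a simple 'gate' that refuses to let the agent execute anything above a (hypothetical) impact threshold and hands the decision back to a human instead. As noted above, such a gate does not by itself cure the explainability problem, but it does at least stop the AI autonomously executing significant transactions. The threshold and fields below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    estimated_cost: float  # in pounds; a crude proxy for 'significance', illustrative only

# Hypothetical threshold above which a human must take (and own) the decision.
SIGNIFICANT_IMPACT_GBP = 100.0

def requires_human_decision(action: ProposedAction) -> bool:
    """Flag any proposed action whose consequences are 'significant' for the consumer."""
    return action.estimated_cost >= SIGNIFICANT_IMPACT_GBP

def handle(action: ProposedAction) -> None:
    if requires_human_decision(action):
        # Hand the decision back to the consumer, with whatever explanation the AI can give.
        print(f"HUMAN DECISION NEEDED: {action.description} (£{action.estimated_cost:.2f})")
    else:
        print(f"Auto-executing low-impact task: {action.description}")

handle(ProposedAction("re-purchase last week's playlist track", 0.99))
handle(ProposedAction("purchase a used car", 8500.00))
```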
An Alternative... Your Own Personal Agent
What feels like a century ago, in 2009, I wondered whether the 'semantic web' would spell the end of price comparison websites. I was tired of seeing their expensive TV ads - paid for out of the intermediary's huge share of the gross price of the product. I thought: "If suppliers would only publish their product data in semantic format, a 'widget' on my own computer could scan their datafeeds and identify the product that's right for me, based on my personal profile and other parameters I specify".
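In rough terms, the 'widget' I had in mind might look something like the Python sketch below. It assumes, purely hypothetically, that each supplier publishes a machine-readable product feed at a known URL, and it does all the scanning and selection locally, against a profile that never leaves my own machine. The feed URLs, field names and selection rule are made up for illustration.

```python
import json
from urllib.request import urlopen

# Hypothetical supplier feeds publishing products in a simple machine-readable format.
SUPPLIER_FEEDS = [
    "https://example-insurer-a.test/products.json",
    "https://example-insurer-b.test/products.json",
]

# My personal profile and parameters stay on my own machine.
MY_PROFILE = {"max_price": 300.0, "must_cover": {"theft", "flood"}}

def fetch_products(feed_url: str) -> list[dict]:
    """Fetch and parse one supplier's product feed (the schema is assumed)."""
    with urlopen(feed_url) as response:
        return json.load(response)

def suits_me(product: dict) -> bool:
    """Filter on my own parameters rather than the supplier's 'ideal customer' profile."""
    return (product.get("price", float("inf")) <= MY_PROFILE["max_price"]
            and MY_PROFILE["must_cover"] <= set(product.get("covers", [])))

def best_product() -> dict | None:
    candidates = []
    for feed in SUPPLIER_FEEDS:
        try:
            candidates.extend(p for p in fetch_products(feed) if suits_me(p))
        except OSError:
            continue  # skip suppliers whose feeds are unavailable
    # Cheapest product that meets *my* requirements, not the supplier's score of me.
    return min(candidates, key=lambda p: p["price"], default=None)

if __name__ == "__main__":
    print(best_product())
```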
By 2013, I was calling that 'widget' an Open Data Spider and attempted to explain it further in an article for SCL on the wider tech themes of Midata, Open Data and Big Data tools (and elsewhere with the concept of 'monetising you'). I thought then - and still think now - that:
"a combination of Midata, Open Data and Big Data tools seems likely to liberate us from the tyranny of the 'customer profile' and reputational 'scores', and allow us instead to establish direct connections with trusted products and suppliers based on much deeper knowledge of our own circumstances."
Personalised assistants are evolving to some degree, in the form of 'personal [online] data stores' (like MyDex or Solid), as well as 'digital wallets' or payment apps that sit on smartphones and other digital devices and can be used to store transaction data, tickets, boarding passes and other evidence of actual purchases. The former are being integrated into specific scenarios like recruitment and healthcare, while the latter tend to be usable only within checkout processes. None seems to be playing a more extensive role in pre-evaluating your personal requirements, then seeking, selecting and purchasing a suitable product for you from a range of potential suppliers (as opposed to a product that a supplier has created for its version of an 'ideal' customer that you seem to fit to some degree).
Whether the providers of existing personal data stores and digital wallets will be prepared to extend their functionality to include more process automation for consumers may also depend on the willingness of suppliers to surrender some of their information advantage and adapt their systems (or AIs) to respond to and adapt products according to actual consumer requests/demand.
Equally, the digital 'gatekeepers' such as search providers and social media platforms will want to protect their own advertising revenue and other fees currently paid by suppliers who rely on them for targeting 'ideal' customers. Whether they can 'switch sides' to act for consumers and preserve/grow this revenue flow remains to be seen.
Overall, if I were a betting man, I'd wager that open agentic AI won't really change the fundamental relationship between suppliers, intermediaries and consumers, and that consumers will remain the targets (victims) for whatever suppliers and intermediaries dream up for them next...
I'd love to be corrected!