Showing posts with label peer-to-peer finance.

Monday, 10 September 2018

The Irony Or The Ecstasy? The UK Centre For Data Ethics And Innovation

You would be forgiven for uttering a very long string of properly ripe expletives on learning that the current UK government has the cheek to create a "Centre for Data Ethics and Innovation"!  Personally, I think they've missed a trick with the name. With a little more thought, the acronym could've been "DECEIT" - and maybe in some languages it would be - so let's go with that.

You might say that it's better to have, rather than not have, an 'independent' policy body focused on the use of data and "artificial intelligence", even if it's set up by a government controlled by those who masterminded and/or benefited from the most egregious abuse of data ethics in UK history.

Or you might be relieved by the notion that it's easier for the dominant political party of the day to control the ethical agenda and achieve "the best possible outcomes" if the source of policy on data ethics is centralised, especially within a body being hastily set up on the back of a quick and dirty consultation paper released into the febrile, Brexit-dominated summer period before any aspirational statutory governance controls are in place.

At any rate, we should all note that:
"[DECEIT], in dialogue with government, will need to carefully prioritise and scope the specific projects within its work programme. This should include an assessment of the value generated by the project, in terms of impact on innovation and public trust in the ethical use of data and AI, the rationale for [DECEIT] doing the work (relative to other organisations, inside or outside Government) and urgency of the work, for example in terms of current concerns amongst the public or business."
...
"In formulating its advice, the Centre will also seek to understand and take into consideration the plurality of views held by the public about the way in which data and AI should be governed. Where these views diverge, as is often the case with any new technology, the Centre will not be able to make recommendations that will satisfy everyone. Instead, it will be guided by the need to take ethically justified positions mindful of public opinion and respecting dissenting views. As part of this process it will seek to clearly articulate the complexities and trade offs involved in any recommendation."
Political point of view is absolutely critical here. This UK government does not accept that the Leave campaign or Cambridge Analytica etc. did anything 'wrong' with people's data. Senior Brexiteers say the illegality that resulted in fines by the Electoral Commission, and further investigation by the ICO and the police, amounts merely to politically motivated 'allegations' by do-good Remainers. Ministers have dismissed their own "promises" (which others have called "fake news", outright lies and distortion) as merely "a series of possibilities". There is no contrition. Instead, the emerging majority of people who want Brexit to be subjected to a binding vote by the electorate are regarded as ignoring "public opinion" or "the will of the people", somehow enshrined forever in a single advisory referendum in 2016, and as therefore expressing merely "dissenting views".

Against this gaslit version of reality, the creation of DECEIT is chilling.

Meanwhile, you might ask why there needs to be a separate silo for "Data Ethics and Innovation" when we have the Alan Turing Institute and at least a dozen other bodies, as well as the Information Commissioner, Electoral Commission and the police. Surely responsibility for maintaining ethical behaviour and regulatory compliance is already firmly embedded in their DNA?

I did wonder at the time of its formation whether the ATI was really human-centric and never received an answer. And it's somewhat worrying that the ATI has responded to the consultation with the statement "We agree on the need for a government institution to devote attention to ethics". To be fair, however, one can read that statement as dripping with irony. Elsewhere, too, the ATI's response has the air of being written by someone with clenched teeth, wondering if the government really knows what it's doing in this area any more than it knows how to successfully exit the EU:
We would encourage clarity around which of these roles and objectives the Centre will be primarily or solely responsible for delivering (and in these cases, to justify the centralisation of these functions), and which will be undertaken alongside other organisations.
... We would encourage more clarity around the Centre’s definitions of AI and emerging technologies, as this will help clarify the areas that the Centre will focus on.
Reinterpreting some of the ATI's other concerns a little more bluntly yields further evidence that the ATI smells the same rat that I do:
  • DECEIT will have such a broad agenda and so many stakeholders to consider that you wonder whether it will have adequate resources, or would simply soak up resources from other stakeholders without actually achieving anything [conspiracy theorists: insert inference of Tory intent here, to starve the other stakeholders into submission];
  • the summary of "pressing issues in this field" misses key issues around the accountability and auditability of algorithms, the adequacy of consent in context and whether small, innovative players will be able to bear the inevitable regulation;
  • also omitted from the consultation paper are the key themes of privacy, identity, transparency in data collection/use and data sharing (all of which are the subject of ongoing investigation by the ICO, the police and others in relation to the Leave campaign);
  • the ATI's suggested "priority projects" imply its concern at the lack of traction in identifying accountability and liability for clearly unethical algorithms;
  • powers given to DECEIT should reinforce its independence and "make its abolition or undermining politically difficult";
  • DECEIT's activities and recommendations should be public;
  • how will the "dialogue with government" be managed to avoid DECEIT being captured by the government of the day?
  • how will "trade offs", "public opinion" and "dissenting views" be defined and handled (see my concerns above)?
I could add to this list a concern about the government's paternalistic outlook, rather than a human-centric view of data and technology that goes beyond mere 'privacy by design'. The human condition, not Big Tech/Finance/Politics/Government etc., must benefit from advances in technology.

At any rate, given its parentage, I'm afraid that I shall "remain" utterly sceptical of the need for DECEIT, its machinations and its output - unless and until it consistently demonstrates its independence and good sense, not to mention its ethics.

Tuesday, 19 September 2017

BigTech Must Reassure Us It's Human

Recent issues concerning the purchase of lethal materials online, "fake news" and secure messaging highlight a growing tension between artificial intelligence and human safety. To continue their unbridled growth, the tech giants will have to reassure society that they are human, solving human problems, rather than machines solving their own problems at humans' expense. While innovation necessarily moves ahead of the law and regulation, developments in artificial intelligence should be shaped by humane and ethical considerations, rather than those considerations being outsourced to government or treated as secondary.

In the latest demonstration of this concern, Channel 4 researchers were able to assemble a 'shopping basket' of potentially lethal bomb ingredients on Amazon, partly relying on Amazon's own suggestion features or 'algorithms' ("Frequently bought together” and “Customers who bought this item also bought...”), which even suggested adding ball-bearings. This follows the phenomenon that emerged during the Brexit referendum and US Presidential election whereby purveyors of 'fake news' received advertising revenue from Facebook while targeting gullible voters.
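
For what it's worth, the 'algorithms' involved are not mysterious. Here's a deliberately minimal sketch (in Python, with invented shopping data) of the kind of co-purchase counting that features like "Frequently bought together" are generally understood to rest on - Amazon's real systems are vastly more sophisticated, but the essential point survives: the logic surfaces statistical co-occurrence and has no notion of what the items are actually for.

    from collections import Counter
    from itertools import combinations

    # Invented purchase history: each basket is the set of items bought in one order.
    baskets = [
        {"kettle", "descaler", "teabags"},
        {"kettle", "teabags"},
        {"teabags", "mugs"},
    ]

    # Count how often each pair of items appears in the same basket.
    pair_counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(basket), 2):
            pair_counts[pair] += 1

    def frequently_bought_with(item, top_n=3):
        """Items most often co-purchased with `item`, by raw co-occurrence count."""
        scores = Counter()
        for (a, b), count in pair_counts.items():
            if a == item:
                scores[b] += count
            elif b == item:
                scores[a] += count
        return [other for other, _ in scores.most_common(top_n)]

    print(frequently_bought_with("kettle"))  # e.g. ['teabags', 'descaler']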

Neither business is keen to proactively monitor or police its services for fear of conceding an obligation to do so and rendering itself liable for not doing so where the monitoring fails.

Channel 4 quoted Amazon as merely saying that:
"all products must adhere to their selling guidelines and all UK laws. [We] will work closely with police and law enforcement agencies should they need [us] to assist investigations." [update 20.09.17: Amazon is reported to have responded the next day to say that it is reviewing its website to ensure the products “are presented in an appropriate manner”.]
Amazon makes a valid point. After all, the same products can be bought off-line, yet unlike an offline cash purchase in a walk-in store, if they are bought on Amazon there is likely to be a digital 'audit trail' showing who bought what and where it was delivered. Indeed, it's conceivable that Amazon had alerted the authorities to the nature of the items in Channel 4 researchers' shopping basket and the authorities may have allowed the session to run as part of a potential 'sting' operation. It is perhaps understandable that neither Amazon nor the authorities would want to explain that publicly, but it would be comforting to know this is the case. Channel 4 is also somewhat disingenuous in suggesting this is an Amazon problem, when less well-resourced services or other areas of the Internet (the 'dark web') may well offer easier opportunities to purchase the relevant products with less opportunity for detection.

At any rate, the main difference, of course, is that no one from an offline store is likely to help you find missing ingredients to make a potentially lethal device (unless they're already part of a terror cell or perhaps an undercover operative) - and this is the key to Amazon's enormous success as a retail platform. It's possible, however, that a helpful employee might unwittingly show a terrorist where things are, and Amazon might equally argue that its algorithms don't "know" what they are suggesting. But, perhaps because of the 'promise' of the algorithms themselves, there is a sense that they should not be vulnerable to abuse in this way.

Similarly, in the case of Facebook, the social network has become a raging success because it is specifically designed to facilitate the exchange of information that generates passionate connections amongst like-minded people far more readily than, say, the owner of a bar or other social hang-out, or a newspaper or other form of traditional media. Equally, however, Facebook might argue that its helpful algorithms aren't actually "aware" of the content that is being shared, despite the use of keywords etc. Meanwhile, WhatsApp seems to have declined to provide a terrorist's final message because it could not 'read' it (although the authorities seem to have magically accessed it anyway...).

Just as we and the online platform owners have derived enormous benefit from the added dimensions to their services, however, we are beginning to consider that those dimensions should bring some additional responsibilities - whether merely moral or legal - possibly on both users and service providers/developers.

In many ways the so-called 'tech giants' - Apple, Amazon, Alphabet (Google), Facebook and others - still seem like challengers who need protection. That's why they received early tax breaks and exemptions from liability similar to those for public telecommunications carriers who can't actually "see" or "hear" the content in the data they carry. 

But while it's right that the law should follow commerce, intervening only when necessary and in a way proportionate to the size and scale of the problem, the size and reach of these platforms and the sheer pace of innovation are making it very hard for policymakers and legislators to catch up - especially as they tend to have wider responsibilities and get distracted by changes in government and issues like Brexit. The technological waves seem to be coming faster and colliding more and more with the 'real world' through drones and driverless cars, for example.

The question is whether these innovations are creating consequences that the service providers themselves should actively address, or at least help address, rather than ignore as 'externalities' that government, other service providers or society must simply cope with.

The tech giants are themselves struggling to understand and manage the scale and consequences of their success, and the relentless competition to attract the best talent and the race to push the boundaries of 'artificial intelligence' sometimes present as a declaration of war on the human race. Even the government/university-endowed Alan Turing Institute seems to consider law and ethics as somehow separate from the practice of data science. Maybe algorithms should be developed and tested further before being released, or be coded to report suspicious activity (to the extent they might not already). Perhaps more thought and planning should be devoted to retraining commercial van and truck drivers before driverless vehicles do to them what the sudden closure of British coal mines did to the miners and their communities (and what the closure of steel mills has done since!).

In any event, the current approach to governance of algorithms and other technological leaps forward has to change if the 'bigtech' service providers are to retain their mantle as 'facilitators' who help us solve our problems, rather than 'institutions' who just solve their own problems at their customers' expense. They and their data scientists have to remember that they are human, solving human problems, not machines solving their own problems at humans' expense.

[update 20.09.17 - It was very encouraging to see Channel 4 report last night that Amazon had promptly responded more positively to researchers' discovery that automated suggestion features were suggesting potentially lethal combinations of products; and is working to ensure that products are "presented in an appropriate manner". The challenge, however, is to be proactive. After all, they have control over the data and the algorithms. What they might lack is data on why certain combinations of products might be harmful in a wider context or scenario.]


Thursday, 15 October 2015

The Alan Turing Institute: Human-centric?

A slightly dispiriting day at The Alan Turing Institute 'Financial Summit', yesterday, I'm afraid to say. 

The ATI itself represents a grand vision and stunning organisational achievement - to act as a forum for focusing Britain's data scientists on the great problems of the world. Naturally, this leaves it open to attempts at 'capture' by all the usual vested interests, and its broad remit means that it must reflect the usual struggle between individuals and organisations, and between 'facilitators', who exist to solve their customers' problems, and 'institutions', who exist to solve their own problems at their customers' expense.

And of course, it's the institutions that have most of the money - not to mention the data problems - so I can see, too, why the ATI advertises its purpose to institutions as "the convener of a multidisciplinary approach to the development of 'big data' and algorithms". It's also true that there are global and social issues that transcend the individual and are valid targets for data scientists in combination with other specialists.

But it was concerning that an apparently neutral event should seem predicated on a supplier-led vision of what is right for humans, rather than actually engineering from the human outward - to enable a world in which you control what you buy and from whom by reference to the data you generate, rather than by being approximated to a model or profile. Similarly, it was troubling to see a heavy emphasis in the research suggestions on how to enable big businesses to better employ the data science community in improving their ability to crunch data on customers for commercial exploitation.

To be fair, there were warning signs posted for the assembled throng of banks, insurers and investment managers - in the FCA's presentation on its dedication to competition through its Innovation Hub; a presentation on the nature and value of privacy itself; and salutary lessons from a pioneer of loyalty programmes on the 'bear traps' of customer rejection on privacy grounds and consumers' desire for increasing control over the commercial use of their data. The director's slides also featured the work of Danezis and others on privacy-friendly smart metering and a reference to the need to be human-centric.

But inverting the institutional narrative to a truly human-centric one would transform the supplier's data challenge into one of organising its product data to be found by consumers' machines that search open databases for solutions based on actual behaviour - open data spiders, as it were - rather than sifting through ever larger datasets in search of the 'more predictive' customer profile to determine how it spends (or wastes) its marketing budget.

Personally, I don't find much inspiration in the goal of enabling banks, insurers and other financial institutions to unite the data in their legacy systems to improve the 'predictive' nature of the various models they deploy, whether for wholesale or retail exploitation, and I'm sure delegates faced with such missions are mulling career changes. Indeed, one delegate lightened the mood with a reference to 'Conway's Law' (that interoperability failures in software within a business simply reflect the disjointed structure of the organisation itself). But it was clear that financial institutions would rather leave this as an IT problem than re-align their various silos and business processes to reflect their customers' end-to-end activities. There is also a continuing failure to recognise that most financial services are but a small step in a wider supply chain. I mean, consider the financial services implications of using distributed ledgers to power the entertainment industry, for example...

When queried after the event as to whose role it was to provide the 'voice of the customer', the response was that the ATI does not see itself as representing consumers' or citizens' interests in particular. That much is clear. But if it is to be just a neutral 'convenor' then nor should the ATI allow itself to be positioned as representing the suppliers in their use and development of 'big data' tools - certainly not with £42m of taxpayer funding. 

At any rate, in my view, the interests of human beings cannot simply be left to a few of the disciplines that the ATI aims to convene alongside the data scientists - such as regulators, lawyers, compliance folk or identity providers. The ATI itself must be human-centric if we are to keep humans at the heart of technology.


Tuesday, 3 March 2015

Artificial Intelligence: The Control Dichotomy

Professor Nick Bostrom delivered an inspirational speech for the SCL last night on "Superintelligence: a godsend or a doomsday device", although few would have found it reassuring - it is certainly conceivable that machines could become more intelligent than humans and that humans might not be able to control them. But these are still early days, he stresses. Regulating the development of artificial intelligence at this point risks halting progress. There's a lot more work to do to really understand how artificial intelligence will develop beyond playing old video games better than humans or recognising an image as the picture of a cat. We need to consider how the technology could help avert our extinction, as well as how it might wipe us out. Yet Nick also warns that it will take far less time for computers to exceed human-level machine intelligence than to reach it in the first place. So we need to start work on the control mechanisms for the development and use of artificial intelligence, without regulating the industry out of existence: the control dichotomy.

Nick suggests that the guiding principle should be that of "differential technological development" - diverting resources away from technologies and their application which could cause human extinction while focusing on those which will either help prevent our demise or will facilitate the expansion of the human race throughout the cosmos.

But how do we distinguish between helpful and harmful technologies and their application? 

As Nick points out, it's tough to think of any human invention that is inherently 'good'. He mentions many things, from gunpowder to genetic engineering, and I think we can throw in the wheel and the Internet for good measure. All these things are used by humans in bad ways as well as for the greater good. But I think what horrifies us especially about the idea of Superintelligence or 'The Singularity' is that it will be machines, not bad humans, who will be using other machines against us. And while we have lots of experience in dealing with evil humans, even our top minds admit we still don't know much about how machines might act in this way or how to stop them - and what we humans fear most is the unknown.

You'll notice I haven't said 'evil' machines, since they might not be operating with any evil 'intent' at all. Human extinction might just be a mistake - 'collateral damage' arising from some other mission. For instance, Nick suggests that a particular machine left to itself in a given situation might decide to devote itself entirely to making paperclips. So, presumably, it would not bother to put out a fire, for example, or free a human (or itself) from a burning building. It might leave that to other machines, who might in turn have chosen their own narrow objective that involves ignoring people's screams.

Here's where I struggle with the notion of Superintelligence. In fact, as someone who hates being pigeon-holed into any single role, I think a machine's decision to only ever make paperclips might be fabulously logical and a brilliant choice for that machine in the circumstances, but it makes the machine as dumb as a post. For me, Superintelligence should involve a machine being able to do everything a human can and more.

But that's beside the point. Knowing what we know already, it would be insane to ignore the paperclip droid and wait for artificial intelligence to develop a machine more capable than humans before figuring out how we might control it. Nick is right to point out that we must figure that out in parallel. In other words, the concept of human control has to be part of the artificial intelligence programme. But it won't be as simple as coding machines to behave protectively, since machines will be able to program each other. For instance, Nick suggests we could put the machines to work on the control problem, as well as on the problem of how to ensure the survival of our species. AI labs might also pay insurance premiums to cover the damage caused by what they develop. He was less certain about what we might do to constrain developments that occur in the context of secret defence programmes or intelligence gathering, but he seemed confident that we could at least infer the pace of development from the results, and be able to consider how to control the wider application of those developments. Mmmm.

At any rate, Nick also warns that we need to be careful what we wish for. Mandating human survival in a prescriptive way - even in a specific biological form - would be a bad move, since we should not assume we are in a position to foster positive human development any more than the Holy Office of the Spanish Inquisition was. Better to embed positive human values and emotions or, say, entertainment as a feature of intelligent machines (although I'm guessing that might not go down well with the jihadis). From a physical standpoint, we already know that the human body won't do so well for long periods in space or on Mars, so some other version might need to evolve (okay, now I'm freaking myself out).

To retain a sense of pragmatism, at the end of the speech I asked Nick what he would recommend for our focus on 'Keeping Humans at the Heart of Technology' at the SCL conference in June. His tip was to consider which of the various types of control mechanism might work best, recognising the need to avoid constraining the positive development of artificial intelligence, while ensuring that we will be able to keep the machines in check if and when they become smarter than us.

No pressure then...


Tuesday, 17 February 2015

Will Machines Out-Compete Humans To The Point of Extinction?

I've been a bit absent from these pages of late, partly pulling together SCL's Technology Law Futures Conference in June on 'how to keep humans at the heart of technology'. As I've explained on the SCL site, the conference is part of SCL's effort to focus attention on that question all year, starting with a speech by Oxford University's Professor Nick Bostrom on 2 March: "Superintelligence: a godsend or doomsday device".

In other words, last year was when the threat of "The Singularity" really broke into the mainstream, while this year we are trying to shift the focus onto how we avert that outcome in practical terms. 

My own book on how we can achieve control over our own data is still ping-ponging between agent and publishers, but will hopefully find a home before another year is out - unless, of course, the machines have other ideas... 


Wednesday, 12 March 2014

Thoughts On The Potential For P2P Insurance

Some interesting discussions these past few weeks about the potential for innovation and 'disruption' in the insurance markets. As ever, there are stark differences between areas that industry players see as ripe for innovation/disruption and the opportunities outsiders see...

A significant source of this disconnect - and a great source of opportunity for outsiders - is the tendency for established institutions to view the market through the narrow lens of their own existing products and activities, rather than from the customer's standpoint. To really solve a customer's problem, a supplier has to understand the end-to-end activity in which that customer is engaged; and has to consider that it might need to collaborate with other suppliers in the process.

For instance, as a consumer of car insurance, it's important to understand that you don't simply drive your car. You drive it from A to B in the course of some other activity. Is it a one-off journey, or a commute? Does it involve city streets, motorways and/or rural roads? What time of day is it? Are the road conditions always the same, often wet or sometimes extreme? Why couldn't I switch insurers, policies and/or premiums as these variables change? Could my car be covered by household insurance while parked at home? The answer hardly requires advanced telematics.

Another problem for insurers is their preoccupation with managing short term financial performance within regulatory capital requirements. This favours cost-reduction at the expense of more strategic, long term business development. In fact some insurers may be better off admitting they are simply running-off their existing book. [Update on 26 March: FT coverage of RSA's rights issue underlines this point - it's all about cost-cutting and disposals, to which CEOs have tied some nice incentives].

At any rate, this tells me that insurers will end up reacting to changing demand, rather than reinventing insurance in any substantial way.

The same goes for the insurance industry's attitude to Big Data. While large insurers are quite sophisticated exponents of Big Data, the industry is merely dedicating itself to persuading customers to disclose more and more personal data about themselves for use in marketing extra products, reducing fraud or improving claims-handling.

This ignores the evolution of personal information management services that go in search of products that are right for you personally. Insurers argue that's what happens on price comparison sites already, and the Cheap Energy Club takes that a step further. But we have not yet seen the truly personal 'open data spider' that some of us have been dreaming about. In that machine-readable future, the challenge for insurers won't be to find customers, but to be able to instantly formulate policies in response to customer devices directly peppering their systems with requests for tailored cover.
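
To make that concrete, here's a purely hypothetical sketch of such an exchange: a consumer's device describes the actual journey it wants covered, and an insurer-side stub prices it on the spot. Every field name and figure below is invented for illustration - no real insurer's systems work this way (yet).

    # A purely hypothetical, machine-readable request for cover that a consumer's
    # device might send to many insurers at once; every field name is invented.
    cover_request = {
        "activity": "commute",
        "road_types": ["city", "motorway"],
        "time_of_day": "07:30",
        "duration_minutes": 45,
        "vehicle_parked_at_home_overnight": True,
        "cover_required": ["third_party", "personal_injury"],
    }

    def quote(request):
        """Insurer-side stub: price cover for a single journey from the request.
        The base rate and motorway loading are invented figures, for illustration only."""
        base_rate_per_minute = 0.02  # pounds
        loading = 1.5 if "motorway" in request["road_types"] else 1.0
        return round(base_rate_per_minute * request["duration_minutes"] * loading, 2)

    print(quote(cover_request))  # 1.35 - cover for this one commute, priced on the fly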

To be fair, there are also plenty of mistaken assumptions by outsiders about how insurance actually works (or doesn't) today, and which elements of the value/supply chain are ripe for improvement or disintermediation. For instance, people forget the key role of reinsurers and reinsurance brokers in diffusing the risk of loss across many sources of capital.

So before disrupting today's insurance markets, it's worth pausing briefly to understand the nature of insurance and how the markets operate.

In layman's terms, insurance is a way for you (the 'insured') to transfer to someone else (an 'insurer') the risk of loss, in return for payment (a 'premium'). 

But it's not quite that simple. In legal terms, that 'risk of loss' translates into 'a defined event, the occurrence of which is uncertain and adverse to the interests of the recipient'. The practice of pooling risks also lies at the heart of modern insurance, such that premiums paid for insuring lower risks are used to fund payouts on higher risks. This of course presents a significant moral hazard, and the scandals involving payment protection insurance and so-called 'identity theft' insurance illustrate how the industry has tended to seek out customers who don't actually face a genuine risk that is adverse to their interests and/or would never be able to make a claim (even if they were aware they'd bought the insurance).

Which brings us to the main problem with insurance markets today - they are highly complex and heavily intermediated, often by players who have little or no interest in seeing that a genuine risk is insured appropriately.

Modern insurance can be traced to the need to insure property against the risk of fire after the Great Fire of London (and some might say little has changed since then in the way non-retail insurance business is transacted!). The need to spread the exposure to other risks of loss has created markets around certain other types of events, businesses and property. Reinsurance markets have developed to enable insurers to insure themselves against the risks they underwrite. In each of these markets, the distribution, marketing and sale of insurance is heavily intermediated by brokers and others who take their own cut from the gross premium that you pay (the net premium being what the insurer receives in return for underwriting the risk). Insurers must also invest their premiums in order to help fund payouts and ensure they have enough capital to cover their exposures. So there are strong links between global markets for insurance and other financial products, which brings with it hidden costs and fees, the risk of re-concentrating risk in surprising places and exposure to global financial crises...

Rolling all of these issues together, it seems to me that the real purpose of an insurance business is to find people who genuinely face adverse consequences from specific events, the occurrence of which is uncertain, and then to diffuse that risk across as many different sources of capital as possible, as efficiently as possible.

Some would say that this amounts to concentrating the risk of loss, since those who don't genuinely need insurance would be excluded (but allowed to buy it if they genuinely do just want it for 'peace of mind').  But that only means we should cease pooling risk and find another way to spread it, such as the peer-to-peer marketplace model that is at work in many other industries.

Peer-to-peer insurance would involve the operator of an electronic platform enabling direct insurance contracts between each insured and many investors (whether traditional insurers or not), each of whom would receive a small portion of the overall premium yet only have to pay out small sums in the event of loss. In this way, the risk of loss could be diffused amongst many investors who would only provide insurance as part of a widely diversified portfolio.  In common with the impact of the P2P model in other industries, removing all the middlemen would cut the margin between net and gross premium to a transparent fee for running the platform, leaving the lion's share of the difference with the market participants.
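
As a back-of-the-envelope illustration of that diffusion (all figures invented), consider a single policy whose gross premium is split across a hundred investors after a transparent platform fee:

    # A toy sketch of the 'pure' P2P insurance idea described above: one insured,
    # many investors, a transparent platform fee, and the risk of loss diffused
    # pro rata. All figures are invented for illustration.

    gross_premium = 500.00      # what the insured pays
    platform_fee_rate = 0.05    # the platform's transparent cut
    investors = 100             # each takes an equal 1% share of the risk

    net_premium = gross_premium * (1 - platform_fee_rate)
    premium_per_investor = net_premium / investors

    sum_insured = 20_000.00
    exposure_per_investor = sum_insured / investors

    print(f"Each investor receives £{premium_per_investor:.2f} of premium")
    print(f"...and is on risk for at most £{exposure_per_investor:.2f} on this policy")

An investor holding similarly small slices of thousands of unrelated policies would achieve much the same diversification that pooling delivers today - but without the pool, and without the chain of intermediaries each taking a cut.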

There are some interesting examples that are headed in this direction. Friendsurance, for example, goes part of the way by enabling a crowd of people to fund the excess on each of their insurance policies. I'm also aware of jFloat (yet to launch), which some have suggested is an application of the P2P model. But I understand that it will still involve pooling risk on a kind of mutual basis, whereas I'm talking more about a 'pure' P2P model.

Presumably, this is not what today's insurers, brokers, reinsurers, reinsurance brokers and other established industry participants want to hear. But they too could benefit in the longer term (if they can afford to think that far ahead) by setting up their own platforms or contributing their own capital and expertise.

It's okay, everyone, I'm not holding my breath...

Friday, 24 January 2014

Google Declares War On The Human Race

Google's executive chairman, Eric Schmidt, finally admitted yesterday something that the likes of Jaron Lanier have been warning us about for some years now: he believes there's actually a race between computers and people. In fact, many among the Silicon Valley elite fervently believe in something called The Singularity. They even have a university dedicated to achieving it.

The Singularity refers to an alleged moment when machines develop their own, independent 'superintelligence' and outcompete humans to the point of extinction. Basically, humans create machines and robots, harvest the world's data until a vast proportion of it is in the machines, and those machines start making their own machines, and so on, until they become autonomous. Stuart Armstrong reckons "there's an 80% probability that the singularity will occur between 2017 and 2112".

If you follow the logic, we humans will never know if the Singularity actually happened. So belief in it is an act of faith. In other words, Singularity is a religion.

Lots of horrific things have been done in the name of one religion or another. But what sets this one apart is that the believers are, by definition, actively working to eliminate the human race.

So Schmidt is being a little disingenuous when he says "It's a race between computers and people - and people need to win," since he works with a bunch of people who believe the computers will definitely win, and maybe quite soon. The longer quote on FT.com suggests he added:
“I am clearly on that side [without saying which side, exactly]. In this fight, it is very important that we find the things that humans are really good at.”
Well, until extinction, anyway.

Of course, the Singularity idea breaks down on a number of levels. For example, it's only a human belief that machines will achieve superintelligence. If machines were to get so smart, how would we know what they might think or do? They'd have their own ideas (one of which might be to look after their pet data sources, but more on that shortly). And there's no accounting for 'soul' or 'free will' or any of the things we regard as human, though perhaps the zealots believe those things are superfluous and the machines won't need them to evolve beyond us. Finally, this is all in the heads of the Silicon Valley elite...

Anyhow, Schmidt suggests we have to find alternatives to what machines can do - the things that only humans are really good at. He says:
"As more routine tasks are automated, this will lead to much more part-time work in caring and creative industries. The classic 9-5 job will be redefined." 
Which is intended to focus our attention away from the trick that Google and others in the Big Data world are relying on to power up their beloved machines and stuff them full of enough data to go rogue.

By offering some stupid humans 'free' services that suck in lots of data, Big Data can charge other stupid humans for advertising to them. That way, the machines hoover up all the humans' money and data at the same time.

This works just fine until the humans start insisting on receiving genuine value for their data.

Which is happening right now in so many ways that I'm in the process of writing a book about it. 

Because it turns out humans aren't that dumb after all. We are perfectly happy to let the Silicon Valley elite build cool stuff and charge users nothing for it. Up to a point. And in the case of the Big Data platforms, we've reached that point. Now it's payback time.

So don't panic. The human race is not about to go out of fashion - at least not the way Big Data is planning. Just start demanding real value for the use of your data, wherever it's being collected, stored or used. And look out for the many services that are evolving to help you do that.

You never know, but if you get a royalty of some kind every time Google touches your data, you may not need that 9 to 5 job after all... And, no, the irony is not lost on me that I am writing this into the Google machine ;-)



Wednesday, 12 June 2013

A Directory of Crowdfunding Directories?

Crowdfunding directories are becoming useful, given the wide variety of potential models, specific geographic and other constraints, and the rapidly increasing numbers of new platforms opening up new niches. 

Each directory seems to take a slightly different tack or favour certain types of platform, so it will be interesting to see which 'prevail' and why, and whether they represent a source of customers. 

For instance, Nesta recently launched Crowdingin.com, which aims to list information on platforms open to fundraising from individuals and businesses in the UK. 

Directories with a broader focus include AllStreet, Crowdfund Insider, and Crowdsourcing. The Canadian NCFA has its own nationally-oriented directory.  

Of course, trade body membership lists are also important, particularly where regulation is still evolving and the trade body has a published set of rules that members have committed to follow, e.g. the P2PFA, UKCFA.

By all means suggest any others you have found useful (and why)... At this rate, we'll need a directory of directories!


Friday, 25 January 2013

More Sunlight Needed On Perverse Tax Incentives

Our continuing economic woes seem to reveal a UK Treasury that has lost touch with the fundamental tax and regulatory problems in the UK economy and is unwilling to engage openly and proactively on how to resolve them.

Not only did the Treasury lose any grip it had on the financial system when it mattered most during the last decade, but the rocky passage of the Financial Services Bill and the need to create a joint parliamentary Commission on Banking Standards also reveal that any such grip remains elusive. This, coupled with the UK's bizarrely complicated system of stealth taxes and incentives, demonstrates the urgent need for more transparency and openness in how the Treasury is going about the task of addressing our economic issues.

The latest example comes with the news that the government might revisit the bizarre decision to delay the revaluation of business rates, which are still based on the higher rental values of 2008. The task of setting business rates every five years lies buried in the Valuation Office Agency, an 'executive agency' of HM Revenue and Customs within HM Treasury. So it's nicely insulated from anyone who might complain about the impact of the rather occasional exercise of its responsibility. Instead, businesses have complained to Vince Cable, over at Business Innovation and Skills, and he's bravely (insanely?) promised to do what he can. However, the hermetically sealed nature of civil service silos means the Valuation Office Agency can safely ignore the issue.

Anyone else afflicted by perverse public sector tax issues faces the same problem. 

UK-based retailers are wasting their time by complaining they are disadvantaged compared to international businesses that are better able to minimise their tax liabilities. Not only is this a welcome distraction from the bigger issue of how the public sector wastes money (which the Cabinet Office has been left to address), but the Treasury hides behind BIS, no doubt laughing off the complaints as an example of businesses not understanding how the arcane world of taxation really works. The trouble is the Treasury doesn't understand how that world really works either. Nobody does. That was the whole point of Gordon Brown's stealth approach to taxation. But this should be no excuse for the department that's supposed to be in charge. The Treasury needs to take responsibility for understanding and explaining how it all works, including the unintended consequences.

Similarly, the Treasury needs to take responsibility for the fact that the UK's small businesses face a funding gap of £26bn - £52bn over the next 5 years. Here, again, BIS has had to act as a human shield, even threatening to launch its own 'bank'. Yet HMT has allowed four major banks to get away with controlling 90% of the small business finance market while only dedicating 10% of the credit they issue to productive firms. This, despite the fact that small businesses represent 99.9% of all UK enterprises, are responsible for 60% of private sector employment and are a critical factor in the UK's economic growth, which has slipped into reverse yet again. Meanwhile, the Treasury continues to resist allowing a broader range of assets to qualify for the ISA scheme, which currently incentivises workers to concentrate their savings into low yield deposits with the same banks that are turning away from small business lending just when it's needed most.

More sunlight please!

Wednesday, 7 November 2012

Rise Of The Facilitators: Big Society Capital

Last night, at a ResPublica event, I heard Nick O'Donohoe, CEO of Big Society Capital, outline a pragmatic vision for a social investment market in the UK. Critically, BSC's role is not to hand out £600m in cash to well-intentioned social entrepreneurs. Instead, it's focused on creating the capability for deprived communities to identify, manage and finance projects that will have a mainly social impact, but with the expectation of some financial return. 

Let's say you want to introduce 'makerspaces' for local people with expertise in operating machinery to invent stuff and make individual items to order. It seems reasonable to believe this could help regenerate some industrial towns. Consider the adventures of Chris Anderson, who recently announced his departure as editor of Wired to run a drone manufacturing business he built as a hobby, as described in his latest book.

How would you make it happen? How would you establish the feasibility of such a project, identify the right equipment, locate an appropriate building, obtain any necessary planning permission and so on? 

This takes time and expertise, not to mention seed money. Numerous intermediaries must be available to help entrepreneurs co-ordinate and finance their project locally. It can't be done by Big Society Capital from its offices in Fleet Street. It can't be done by civil servants from Westminster, or even by the local council. This has to be a distributed effort all around the country, leveraging online resources where that makes sense. Such intermediaries - or facilitators - will include social banks, active social investors, professional and other support businesses, as well as platforms that enable funds to flow directly from people with cash to social entrepreneurs. The role of Big Society Capital is to invest in the development of a strong network of these social investment intermediaries.

But maybe we shouldn't be too definitive about what is 'social'. I think this approach will be truly successful when facilitators and entrepreneurs aren't necessarily conscious of the fact that the positive social impact of their activities is far greater than the scale of their financial results. To this end, we should factor into all our corporate and project objectives an obligation to take responsibility for somehow improving the community to which the corporation or project relates. In this way, all businesses would have an overlapping social purpose as well as a financial one. 

Similarly, financial services need to support this broader responsibility. Of course it's critical that investors know exactly whether they are donating money, receiving interest payments or getting a share in a company. But if I'm putting £20 directly into any project, my customer experience shouldn't be different depending on whether I'm offered a ticket to a concert, interest at 3% per annum or 2 shares in the project operating company - in fact the same project should be able to offer me all three, seamlessly. That's the sentiment behind efforts to proportionately regulate peer-to-peer finance. All types of enterprise should be able to offer all kinds of instruments over a proportionately regulated digital platform, within an ISA.

Now that would generate some serious big society capital.

Wednesday, 31 October 2012

Kickstarter's Kick In The Butt For UK Banks

The news that Kickstarter, a US rewards-based crowdfunding operator, has opened a dedicated UK platform is hugely encouraging for anyone concerned about our banking problems.

No doubt Kickstarter is responding to demand from the UK-based entrepreneurs and their supporters who were already using the US platform. But it's also a big bet on the future of alternative finance in the UK, and Kickstarter's expansion will mean a lot of focus on the different ways that people can directly fund other people's personal finances, projects and businesses.

The term 'crowdfunding' first gained currency to describe US 'rewards-based' peer-to-peer platforms like ArtistShare and Kickstarter, and similar platforms already operate in the UK (e.g. Peoplefund.it, Crowdfunder and those mentioned here). These platforms are designed to raise money for small budget projects via the internet without infringing laws that control the offer of 'securities' to the public. Entrepreneurs can post 'pitches' seeking donations, and may offer a 'reward' of some kind in return.

Other peer-to-peer finance platforms enable markets for personal loans and small business loans - called 'person-to-person lending' or 'peer-to-peer lending'. Examples include Zopa, Ratesetter and Funding Circle in the UK, Comunitae in Spain and IsePankur in Estonia which just announced that anyone from the EEA and Switzerland can lend to Estonian borrowers.

The peer-to-peer model has also been adapted to fund charities or not-for-profit projects, which is known as 'social finance' (e.g. Buzzbnk); and to enable many people to fund tiny local businesses in developing countries - referred to as 'micro-finance' (e.g. Kiva, MyC4).

Finally, the peer-to-peer model is being developed to enable direct investments in return for shares and more complex loan arrangements (debentures). This has proved impossible to date in the US, where even Lending Club and Prosper have had to register their peer-to-peer lending platforms with the Securities and Exchange Commission. But in the UK, Crowdcube and, more recently, Seedrs and BankToTheFuture appear to have found ways through the regulatory maze to enable the crowd to invest in the shares of start-up companies. Abundance Generation enables funding for alternative energy. Kantox enables people to switch foreign currency and Platform Black enables the sale of trade invoices. CrowdBnk, Trillion Fund and CrowdMission say they're coming soon.

There are signs that the regulatory maze will become much easier to navigate. Both the US and UK governments have recognised that more needs to be done to encourage the growth of these alternative forms of finance. 

The US passed the JOBS Act to provide ways to enable crowd investment in securities. And against a backdrop of proposed legislative changes in the UK, the government has praised self-regulation by the industry and set up a working group to assess the need for changes to the legal framework. That working group includes representatives from the Office of Fair Trading, the Department for Business, Innovation and Skills, HM Treasury, the Financial Services Authority and the Cabinet Office. The Department for Culture, Media and Sport is also interested in the potential for peer-to-peer finance to fund the development of arts and entertainment.

The European Commission is also taking an interest in this field, and a regulatory summit is being planned for early December to bring together industry leaders, EU/UK policy-makers and regulatory officials to discuss proportionate regulation to encourage the responsible growth of peer-to-peer finance.

Kickstarter has made a pretty solid bet.


Friday, 21 September 2012

UK Takes Joined-up Regulatory Approach To P2P Finance

The UK government has announced a cross-departmental working group to support the sustainable development of peer-to-peer finance (aka 'crowdfunding'), as part of its latest response to the Red Tape Challenge. 

The composition of this working group is testimony to the broad policy implications and opportunities posed by this new financial model for consumers and small businesses. The list of members includes the Office of Fair Trading, the Department for Business, Innovation and Skills, HM Treasury, the Financial Services Authority and the Cabinet Office. However, it is known that the Department for Culture, Media and Sport is also very interested in the potential for peer-to-peer finance to fund the development of the arts and entertainment industry.

Specifically, that working group will "monitor the appropriateness of the current regulatory regime for peer-to-peer platforms" and take the lead in engaging with the peer-to-peer finance industry.  In the meantime, the government wishes to encourage continued self-regulatory efforts by the Peer-to-Peer Finance Association to address common operational risks, and to engage with policy-makers and regulators.

Other aspects of the government's response to the 'disruptive business models' Challenger Businesses Red Tape Challenge are discussed here.

Thursday, 13 September 2012

Credit Drives Growth (Not Interest Rates)


Thanks to IPPR and The Finance Innovation Lab for an invigorating seminar on bank reform this morning. I've noted some of the highlights below, but in summary: Chris Hewett gave a great overview of the range of proposals; Richard Werner debunked the myth that interest rates drive economic growth and explained why the Bank of England must guide bank credit away from speculation and into productive firms; and Baroness Susan Kramer explained the work being done in Parliament.

Chris's 'policy map' in particular is worth studying (zoom out of his presentation to find it). It reveals the ideas that are merely 'a glint in the eye', those that are attracting support and those that are being fought over by stakeholders in a way that is likely to produce change in the near term.

Richard showed that interest rates do not drive economic growth. Rather, they lag changes in economic growth by as much as a year. So it's a myth that lowering interest rates will increase economic growth, or that raising them will slow growth. Instead, the evidence proves that those in charge of monetary policy merely react to a slowing economy by lowering interest rates, and react to a growing economy by raising them. In other words, economic growth drives the setting of interest rates not the other way around (so GDP growth and interest rates are positively correlated, not negatively correlated as many people suggest).
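
Richard's point is really one about the direction and timing of the correlation. A rough way to test it for yourself - sketched below with invented quarterly figures; in practice you would use, say, ONS growth data and the Bank of England base rate - is to compute the correlation between growth and the policy rate at different lags and see where it peaks:

    # An illustrative check of the claim that interest rates follow GDP growth rather
    # than lead it: compute the correlation at different lags and see where it peaks.
    # The quarterly series below are invented, purely for illustration.
    import numpy as np

    gdp_growth = np.array([2.1, 2.4, 2.8, 3.0, 2.6, 1.9, 1.2, 0.8, 1.1, 1.6, 2.0, 2.3])
    base_rate  = np.array([4.0, 4.0, 4.25, 4.5, 4.75, 4.75, 4.5, 4.0, 3.5, 3.25, 3.25, 3.5])

    def lagged_corr(x, y, lag):
        """Correlation between x and y, with y shifted so that it lags x by `lag` quarters."""
        if lag == 0:
            return np.corrcoef(x, y)[0, 1]
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]

    for lag in range(5):
        print(lag, round(lagged_corr(gdp_growth, base_rate, lag), 2))
    # If Richard is right, the correlation is positive and strongest at a lag of a
    # few quarters - rates react to growth, they don't drive it.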

So the current low Bank of England base rate merely reflects the current economic malaise, and changing it one way or the other won't drive economic growth (GDP). Mortgage rates are already much higher, anyway, and it may be doubted whether banks would pass on any rise to savers.

In fact, Richard observed that the only driver of growth in GDP is bank credit that is used for productive investment. Credit used for consumption merely raises inflation, and credit used to buy financial assets (which don't count towards GDP) merely drives up non-GDP asset prices.

Richard explained the importance of recognising that we derive 97% of our money supply from banks extending credit. They 'create' money every time they make a loan. But here's the killer: only about 10% of credit created by UK banks actually goes to productive firms. The rest of the credit created is used by investment banks, hedge funds, private equity and so on to speculate on non-GDP assets.
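
The mechanics behind that 97% figure are counter-intuitive enough to be worth spelling out. When a bank extends a loan it doesn't hand over someone else's savings; it books a new asset (the loan) and credits a matching deposit to the borrower. The toy balance sheet below (invented figures) shows deposits - i.e. money - growing by exactly the amount lent:

    # A minimal, textbook-style illustration of the point that banks 'create' money
    # when they lend: the new loan is booked as an asset and a matching deposit is
    # credited to the borrower, so deposits (money) grow by the loan amount.
    # Figures are invented.

    bank = {
        "assets": {"reserves": 100.0, "loans": 0.0},
        "liabilities": {"deposits": 100.0},
    }

    def make_loan(bank, amount):
        """Extend credit: book a loan asset and credit the borrower's deposit account."""
        bank["assets"]["loans"] += amount
        bank["liabilities"]["deposits"] += amount

    make_loan(bank, 50.0)
    print(bank["liabilities"]["deposits"])  # 150.0 - new money, created by the act of lending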

In addition, the risk-weightings under bank capital rules discourage banks from lending to small firms (as I've also mentioned before), effectively encouraging lending to fund speculative property deals - even though the overall risk profile of loans to small businesses is lower than lending for speculative purposes, and in spite of the fact that small firms represent 99.9% of all enterprises and are responsible for 60% of private sector employment.

Richard explained that Project Merlin and the more recent efforts by the Treasury to shame banks into lending to productive firms all fail because the banks can afford to ignore the Treasury. But central banks have been successful in guiding credit to the right sectors previously, because the banks rely on the faith of the central bank to stay in business. The IMF has previously discouraged the use of this so-called "window guidance" because it has been abused in certain countries (e.g. to aid speculators or political cronies). But a transparent programme could work. A longer term alternative is to create new banks that never lend for speculative purposes - in Germany, for example, 70% of banks (about 2000 of them) only lend locally. Spain had a similar system, but then required its local 'cajas' to lend nationally, with devastating effects.

Finally, Richard said that the banks could lend more to productive firms and still meet their capital requirements. But they need to lower the bar to obtaining credit (which German banks have commonly done during a downturn) and to incentivise staff for making productive loans. Currently, it's easier for bankers to earn bonuses for supporting speculative activity.

Baroness Kramer explained that Parliament is focused on four main aspects of the financial crisis: the market failure to provide bank credit to productive small firms; capital/cost barriers to launching new banks; encouraging peer-to-peer finance platforms; and ensuring that the Financial Services Bill and the up-coming Banking Bill are fit for purpose. 

Susan said that the Joint Parliamentary Committee on Banking Standards should have the membership and resources to get to the root cause of market failures and make improvements to fix them. While the evidence of market failure is clear, more evidence of the underlying problems and causes is very much welcome (even after the deadlines for submissions have expired). There is a belief amongst some in the House of Lords that the same regulator should be responsible for addressing market failure, as well as enterprise risk and market abuse, because they are all linked. The FDIC in the US provides an example of how this can work.

Proposals to reduce capital/costs that prevent the launch of new banks include reduced capital requirements for local banks that won't be systemic; and the regulation of a common banking platform that takes care of most operational risks, so that small banks could simply 'plug-in'. Susan observed that credit unions only cover about 2% of the borrowing population, so are not a replacement for new, local institutions.

Baroness Kramer has led the way in proposing amendments to the Financial Services Bill to proportionately regulate peer-to-peer finance. In the course of discussing those proposals, it appears that the Treasury has conceded that there is already a provision in the Financial Services Bill that could enable such regulation. However that still leaves the job of agreeing the detailed secondary legislation (and any further enabling legislation) required, so the industry should keep up the pressure in that regard.

Finally, Susan praised the white paper that underpins the Banking Bill as containing 'pretty good' language on enabling new entrants to the banking industry. However, it is going to be important for everyone to be vigilant in ensuring the spirit of this is captured in the provisions of the Bill.




Monday, 23 July 2012

Let's Play Piggy-In-The-Middle

On Saturday I got my Level 1 rugby coaching ticket, which means I can teach kids up to 12 to play rugby. I also get to wear a knee support for a few days, but that's another story. 

These days rugby is more about 'social and personal development' and getting the skills right than 'winning' or even 'participating'. That's because the International Rugby Board is not only keen to distinguish the game from the many ugly aspects of football but is also listening to the harshest sports critics of all: children. So we value teamwork, respect, enjoyment, discipline and sportsmanship. 'Drills' have been replaced by 'small-sided games' built on age-old playground games which are active, purposeful, enjoyable and safe (and naturally promote balance, co-ordination, agility and speed). Coaches have responsibility for developing the 'whole person', with the emphasis on competence, confidence, connection, character and creativity. Gone are the old touch-line tirades. Now we ask the players to explain what's working and what isn't - and why.

And it works.
 
So I got to thinking: if an institution as entrenched as the good ol' Rugby Football Union can change this much from the grassroots, why can't some of our other institutions?
 
I mean, the various peer-to-peer platforms have simply re-imagined retail banking as just a giant game of piggy-in-the-middle: savers, get your cash directly to those who need it without the 'piggy' bank intercepting it. Go!


Tuesday, 3 July 2012

What Bank Customers Want And Why They Don't Get It

There's a lot of talk about 'restoring responsibility to banking', and 'returning banking to its sober Quaker roots', 'removing the casino culture', 'getting banks lending again' and so on.

But all these soundbites are focused on the activity of banking.

Nobody, except banks, engages in "banking". We may use a bank's service, but only in the context of a much wider activity, such as buying a house or a birthday present on the way to a party, or getting clothing made in China to sell over here. "Banking" is only what banks think customers are doing, because banks only view the world through the lens of their own products and not customers' activities. Which is why bank products are inherently designed to make money for banks and not to benefit customers (and why they fiddled 'Liebor').

All bank customers want is what the customers of any supplier want - solutions to their own day-to-day problems rather than those of the supplier.

But if you believe banks are capable of aligning with their customers' activities any time soon - you're flogging a dead horse.

Back in 2009 and again in 2010, there was a lot of discussion about whether social media would ever play a role in consumer finance. Typically, the banks claimed that using Twitter to communicate with customers was a publicity stunt - making the almighty assumption that Twitter is somehow divisible from the vast entanglement of services that make up the social media. They said online peer-to-peer finance platforms wouldn't scale. In 2011 that sort of discussion wasn't repeated. Why? Because even banks realised it was rubbish - just as it proved to be in the markets for retail services, music, entertainment, travel, politics, newspapers, television and so on, where online 'facilitators' have hacked great chunks out of the market shares of once dominant, sleepy, 'traditional' institutions that were not aligned with their customers' day-to-day activities.


Why not? Comparisons with the rise of facilitators in other retail markets are not apt, because there was no regulatory regime that protected high street stores from online competition. There were no tax incentives to persuade consumers it's safer to buy their music on CDs rather than download it. No compensation scheme for advertisers who don't get the return they want on their advertising spend in the newspaper or on TV, leaving online advertisers to fend for themselves. No taxpayer guarantee that allows high street electronics retailers to spend whatever it takes to maintain market share against online marketplaces.

Banks, on the other hand, rejoice in all that protection against innovation and competition.

In these circumstances, it is unrealistic to assume that new business models will thrive without some alteration to the regulatory framework.

