Sunday, 29 March 2015

Who Is Late In Paying Our #SMEs £41bn?!

In an attempt to eradicate the approximately £41bn in late payments owed to small businesses, the government has proposed that, from April 2016, large listed companies will have to report twice-yearly on:
  • their standard payment terms;
  • average time taken to pay; 
  • the proportion of invoices paid within 30 days, 31-60 days and beyond agreed terms; 
  • amount of late payment interest owed/paid; 
  • incentives charged to join/remain on preferred supplier lists; 
  • dispute resolution processes; 
  • the availability of e-invoicing, supply chain finance and preferred supplier lists; and 
  • membership of a Payment Code.
A copy of the simple but effective sample report is attached to the government's announcement.
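
To make the shape of that report concrete, here is a minimal sketch of the proposed disclosures as a Python data structure. The field names and types are my own illustration of the bullet points above, not the government's template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PaymentPracticesReport:
    """One twice-yearly filing; fields mirror the proposed disclosures above (illustrative names)."""
    company: str
    period_start: str                       # ISO date, e.g. "2016-04-01"
    period_end: str                         # ISO date, e.g. "2016-09-30"
    standard_payment_terms_days: int        # the company's standard payment terms
    average_days_to_pay: float              # average time taken to pay
    pct_paid_within_30_days: float          # % of invoices paid within 30 days
    pct_paid_31_to_60_days: float           # % paid within 31-60 days
    pct_paid_beyond_agreed_terms: float     # % paid beyond agreed terms
    late_payment_interest_owed_gbp: float   # late payment interest owed
    late_payment_interest_paid_gbp: float   # late payment interest paid
    supplier_list_fees_charged_gbp: float   # incentives charged to join/remain on preferred supplier lists
    dispute_resolution_process: str         # description of the dispute resolution process
    offers_e_invoicing: bool
    offers_supply_chain_finance: bool
    operates_preferred_supplier_list: bool
    payment_code_memberships: List[str] = field(default_factory=list)

# A simple check an analyst or journalist might run over published filings:
def pays_late(report: PaymentPracticesReport, threshold_pct: float = 10.0) -> bool:
    """Flag any filer paying more than threshold_pct of invoices beyond agreed terms."""
    return report.pct_paid_beyond_agreed_terms > threshold_pct
```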

Not only should this data result in the naming and shaming of late payers, but it should also further define and foster growth in the market for discounting these invoices, to help fund the growth of the affected SMEs.

Monday, 23 March 2015

8 Financial Services Policy Requests - Election Edition

If you've been lumped with the job of writing your party's General Election Manifesto, here are 8 financial policies to simply drag and drop:

1. Remove the need for FCA credit-broking authorisation just to introduce borrowers whose finance arrangements will be 'exempt agreements' anyway - it makes no sense at all;

2. Remove the need for businesses that lend to consumers or small businesses via peer-to-peer lending platforms to be authorised by the FCA - again, it makes no sense, because the platform operator is already responsible for ensuring that the borrower gets the right documentation and so on; an alternative would be to allow such lenders to go through a quick and simple registration process;

3. Remove the requirement for individuals who wish to invest on crowd-investment platforms to certify that they are investing no more than 10% of their 'net investible portfolio' and to either pass an 'appropriateness test' or confirm that they are receiving advice - it's a disproportionately complex series of hoops compared to the simplicity of the investment opportunities and the typical amounts at stake;

4. Focus on the issues raised in this submission to the Competition and Markets Authority on competition in retail banking, particularly around encouraging a more diverse range of financial business models;

5. Re-classify P2P loans as a standard pension product, rather than a non-standard product - the administrative burden related to non-standard products is disproportionately high for such a simple instrument as a loan;

6. Reduce the processing time for EIS/SEIS approvals to 2 to 3 weeks, rather than months - investors won't wait forever;

7. Reduce the approval time for FCA authorisation for FinTech businesses from 6 months to 6 weeks; alternatively, introduce a 'small firms registration' option with a process for moving to full authorisation over time, so that firms can begin trading within 6 weeks of application, rather than having to spend 3 months fully documenting their business plans, only to then wait 6 to 12 months before being able to trade - otherwise entrepreneurs and investors will stop entering this space;

8. Proportionately regulate invoice discounting to confirm the basis on which multiple ordinary retail investors can fund the discounting of a single invoice - it's a rapidly growing source of SME funding that is simple for investors to understand, and investors' money is at risk only for short periods of time.


Tuesday, 3 March 2015

Artificial Intelligence: The Control Dichotomy

Professor Nick Bostrom delivered an inspirational speech for the SCL last night on "Superintelligence: a godsend or a doomsday device", although few would have found it reassuring - it is certainly conceivable that machines could become more intelligent than humans and that humans might not be able to control them. But these are still early days, he stressed. Regulating the development of artificial intelligence at this point risks halting progress. There's a lot more work to do to really understand how artificial intelligence will develop beyond playing old video games better than humans or recognising an image as the picture of a cat. We need to consider how the technology could help avert our extinction, as well as how it might wipe us out. Yet Nick also warned that it will take far less time for computers to exceed human-level machine intelligence than to reach it in the first place. So we need to start work on the control mechanisms for the development and use of artificial intelligence, without regulating the industry out of existence: the control dichotomy.

Nick suggests that the guiding principle should be that of "differential technological development" - diverting resources away from technologies and applications that could cause human extinction, while focusing on those that will either help prevent our demise or facilitate the expansion of the human race throughout the cosmos.

But how do we distinguish between helpful and harmful technologies and their application? 

As Nick points out, it's tough to think of any human invention that is inherently 'good'. He mentions many things, from gunpowder to genetic engineering, and I think we can throw in the wheel and the Internet for good measure. All these things are used by humans in bad ways as well as for the greater good. But I think what horrifies us especially about the idea of Superintelligence or 'The Singularity' is that it will be machines, not bad humans, that will be using other machines against us. And while we have lots of experience in dealing with evil humans, even our top minds admit we still don't know much about how machines might act in this way or how to stop them - and what we humans fear most is the unknown.

You'll notice I haven't said 'evil' machines, since they might not be operating with any evil 'intent' at all. Human extinction might just be a mistake - 'collateral damage' arising from some other mission. For instance, Nick suggests that a particular machine left to itself in a given situation might decide to devote itself entirely to making paperclips. So, presumably, it would not bother to put out a fire, for example, or free a human (or itself) from a burning building. It might leave that to other machines, which might in turn have chosen their own narrow objective that involves ignoring people's screams.

Here's where I struggle with the notion of Superintelligence. In fact, as someone who hates being pigeon-holed into any single role, I think a machine's decision to only ever make paperclips might be fabulously logical and a brilliant choice for that machine in the circumstances, but it makes the machine as dumb as a post. For me, Superintelligence should involve a machine being able to do everything a human can, and more.

But that's beside the point. Knowing what we know already, it would be insane to ignore the paperclip droid and wait for artificial intelligence to develop a machine more capable than humans before figuring out how we might control it. Nick is right to point out that we must figure that out in parallel. In other words, the concept of human control has to be part of the artificial intelligence programme. But it won't be as simple as coding machines to behave protectively, since machines will be able to programme each other. For instance, Nick suggests we could put the machines to work on the control problem, as well as on the problem of how to ensure the survival of our species. AI labs might also pay insurance premiums to cover the damage caused by what they develop. He was less certain about what we might do to constrain developments that occur in the context of secret defence programmes or intelligence gathering, but he seemed confident that we could at least infer the pace of development from the results, and be able to consider how to control the wider application of those developments. Mmmm.

At any rate, Nick also warns that we need to be careful what we wish for. Mandating human survival in a prescriptive way - even in a specific biological form - would be a bad move, since we should not assume we are any better placed to foster positive human development than the Holy Office of the Spanish Inquisition was. Better to embed positive human values and emotions or, say, entertainment as a feature of intelligent machines (although I'm guessing that might not go down well with the jihadis). From a physical standpoint, we already know that the human body won't do so well for long periods in space or on Mars, so some other version might need to evolve (okay, now I'm freaking myself out).

To retain a sense of pragmatism, at the end of the speech I asked Nick what he would recommend for our focus on 'Keeping Humans at the Heart of Technology' at the SCL conference in June. His tip was to consider which of the various types of control mechanism might work best, recognising the need to avoid constraining the positive development of artificial intelligence, while ensuring that we will be able to keep the machines in check if and when they become smarter than us.

No pressure then...


Tuesday, 17 February 2015

Will Machines Out-Compete Humans To The Point of Extinction?

I've been a bit absent from these pages of late, partly pulling together SCL's Technology Law Futures Conference in June on 'how to keep humans at the heart of technology'. As I've explained on the SCL site, the conference is part of SCL's effort to focus attention on that question all year, starting with a speech by Oxford University's Professor Nick Bostrom on 2 March: "Superintelligence: a godsend or doomsday device".

In other words, last year was when the threat of "The Singularity" really broke into the mainstream, while this year we are trying to shift the focus onto how we avert that outcome in practical terms. 

My own book on how we can achieve control over our own data is still ping-ponging between agent and publishers, but will hopefully find a home before another year is out - unless, of course, the machines have other ideas... 


Monday, 2 February 2015

Humans, Not Big Data, Must Benefit From Opening Up The Banks

One of the 'good news' items in last year's Autumn Statement was the endorsement of an Open Data Institute report on how innovators could make better use of bank data on our behalf (instead of on the banks' behalf). Last week, the Treasury followed up with a call for evidence "on how best to deliver an open standard for application programming interfaces (APIs) in UK banking and... whether more open data in banking could benefit consumers."

This is important, because enabling our own machines to start crunching our financial data on our behalf is key to humans winning the race against Big Data.

'Open data' involves enabling public access to data by connecting data sets using uniform resource identifiers ('Linked Data') and publishing data about them in machine-readable formats. While the government and other public institutions have done a good job of opening up public data sets so far, our private institutions are lagging behind, as discussed here. That's because they benefit from making sure we know less than them about our own requirements - 'information asymmetry'.
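
As a concrete (and entirely invented) illustration of what 'machine-readable' and 'Linked Data' mean here: each entity and property gets a URI, so independently published data sets can be joined without prior coordination. A minimal sketch in Python, JSON-LD style:

```python
import json

# A bank product record published as Linked Data. Every entity and property
# is identified by a URI, so a third party can connect this record to other
# data sets that use the same identifiers.
# All URIs below are invented for illustration.
record = {
    "@context": {
        "name": "http://schema.org/name",
        "provider": "http://schema.org/provider",
        "feesAndCharges": "https://vocab.example.org/banking/feesAndCharges",
    },
    "@id": "https://data.example-bank.co.uk/products/personal-current-account",
    "name": "Personal Current Account",
    "provider": {"@id": "https://data.example-bank.co.uk/"},
    "feesAndCharges": "https://data.example-bank.co.uk/products/personal-current-account/fees",
}

# 'Machine-readable' simply means any consumer's software can parse it:
print(json.dumps(record, indent=2))
```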

The Midata initiative was the first attempt to restore the balance in favour of consumers, targeting energy utilities, telecoms providers and banks. Now the energy companies face being opened up like so many tin cans by smart metering, while the telecoms market looks like it will go from bad to worse. Banks have proved a tougher nut to crack. Yes, some people can download their current account data via internet banking. But in addition to consumers, there are 4.5 million small businesses out there that need a hell of a lot more than that. So far, it's taken regulatory action just to start the process of improving access to small business credit data and making a market in rejected small business loan applications. Now the Treasury is trying to force the pace on the technology necessary to support all that.

The ODI's recommendations for opening up banking are: 
  • banks should agree an open API standard to support third party access to bank data - basically firms that can help you make sense of the data (but Big Data firms will try to persuade you to share it with them too);
  • independent guidance should be provided on technology, security and data protection standards that banks can adopt to ensure data sharing meets all legal requirements; 
  • an industry-wide approach should be established to vet third party software applications and publish a list of vetted applications as open data - this would allow visibility of firms that are acting on consumers' behalf and those who are not;
  • standard data on Personal Current Account terms and conditions should be published by banks as open data; and 
  • credit data should be made available as open data.
My main concern is that requiring agreement on 'standards' as a precondition for opening up banking will enable the banks to delay the whole process by a decade - as they did with Faster Payments. As was recommended by the security working group in the Midata initiative, I would prefer to see banks required to immediately make their data available to each consumer/SME in whatever open format the banks choose, while adhering to common data security protocols, and leave it to the open data community to figure out how to re-format and display whatever rubbish they dish up.
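
To illustrate why the re-formatting problem is the easy part (all file formats and field names below are invented), here is a minimal sketch of community code normalising two banks' incompatible exports into one common transaction schema:

```python
import csv
import io

# Two hypothetical banks exporting the same facts in different shapes.
BANK_A_CSV = "Date,Description,Amount\n2015-01-05,COFFEE SHOP,-2.50\n"
BANK_B_CSV = "posted;details;debit;credit\n05/01/2015;Coffee shop;2.50;\n"

def normalise_bank_a(raw):
    """Bank A already uses ISO dates and signed amounts."""
    rows = csv.DictReader(io.StringIO(raw))
    return [{"date": r["Date"], "description": r["Description"],
             "amount": float(r["Amount"])} for r in rows]

def normalise_bank_b(raw):
    """Bank B uses DD/MM/YYYY dates and separate debit/credit columns."""
    out = []
    for r in csv.DictReader(io.StringIO(raw), delimiter=";"):
        day, month, year = r["posted"].split("/")
        amount = -float(r["debit"]) if r["debit"] else float(r["credit"])
        out.append({"date": f"{year}-{month}-{day}",
                    "description": r["details"], "amount": amount})
    return out

# One common schema, whatever the banks dish up:
print(normalise_bank_a(BANK_A_CSV) + normalise_bank_b(BANK_B_CSV))
```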

Apart from complaints by the banks, Treasury officials expect to hear positive contributions from consumer groups, other financial services providers, financial technology firms and app and software designers.

Let's not disappoint them. This is a good opportunity to ensure the government clears the way for innovation that puts you and me at the heart of financial services, without mistakenly creating further barriers in the process.

