Sunday 29 March 2015

Is There Really A Single EU Market?

Some sobering figures from the European Commission for single market enthusiasts (as if Greece wasn't sobering enough).

EU cross-border services account for just 4% of all online services, compared with national services within the US (57%) and within individual EU member states (39%).

Only 15% of EU consumers bought online from other member states, compared with 44% who bought online domestically, even as online content sees double-digit growth.

Only 7% of SMEs sell online across EU borders - and it costs an average of €9,000 to adapt their processes to local law in order to do so. 

The cost of delivery is (obviously) cited as a major problem, as are differing VAT arrangements. But the suggested solutions seem to ignore these and other key barriers to cross-border retail that have been cited in previous market studies, such as the lack of a marketing strategy, preference for national brands, language barriers and local employment law challenges. Presumably, that's because the Commission can do little to address such fundamental practicalities. Instead, it wants to focus on:
  • stronger data protection rules;
  • broadband/4G roll-out;
  • use of 'Big Data' analytics; and
  • better digital skills amongst citizens and e-government by default.
The sense of futility that permeates such reports by Eurocrats only emphasises the fact that the law follows commerce; it doesn't catalyse markets.  

Yet, ironically, in areas where commercial and consumer pressure to enable cross-border activity is emerging, such as crowdfunding and crypto-technology, we find European institutions taking an unduly restrictive approach.

When will they simply get out of the way?


Who Is Late In Paying Our #SMEs £41bn?!

In an attempt to eradicate the approximately £41bn in late payments owed to small businesses, the government has proposed that, from April 2016, large listed companies will have to report twice-yearly on:
  • their standard payment terms;
  • average time taken to pay; 
  • the proportion of invoices paid within 30 days, 31-60 days and beyond agreed terms; 
  • amount of late payment interest owed/paid; 
  • incentives required to join/remain on preferred supplier lists; 
  • dispute resolution processes; 
  • the availability of e-invoicing, supply chain finance and preferred supplier lists; and 
  • membership of a Payment Code.
A copy of the simple but effective sample report is attached to the government's announcement.

Not only should this data result in the naming and shaming of late payers, but it should also help to define and foster the market for discounting these invoices, providing funding for the growth of the affected SMEs.

Monday 23 March 2015

8 Financial Services Policy Requests - Election Edition

If you've been lumped with the job of writing your party's General Election Manifesto, here are 8 financial policies to simply drag and drop:

1. Remove the need for FCA credit-broking authorisation just to introduce borrowers whose finance arrangements will be 'exempt agreements' anyway - it makes no sense at all;

2. Remove the need for businesses that lend to consumers or small businesses on peer-to-peer lending platforms to be authorised by the FCA - again, it makes no sense, because the platform operator already has the responsibility to ensure the borrower gets the right documentation and so on; an alternative would be to allow such lenders to go through a quick and simple registration process;

3. Remove the requirement for individuals who wish to invest on crowd-investment platforms to certify that they are investing no more than 10% of their 'net investible portfolio' and either to pass an 'appropriateness test' or to be receiving advice - it's a disproportionately complex series of hoops compared to the simplicity of the investment opportunities and the typical amounts at stake;

4. Focus on the issues raised in this submission to the Competition and Markets Authority on competition in retail banking, particularly around encouraging a more diverse range of financial business models;

5. Re-classify P2P loans as a standard pension product, rather than a non-standard product - the administrative burden related to non-standard products is disproportionately high for such a simple instrument as a loan;

6.  Reduce the processing time for EIS/SEIS approvals to 2 to 3 weeks, rather than months - investors won't wait forever;

7. Reduce the approval time for FCA authorisation of FinTech businesses from 6 months to 6 weeks; alternatively, introduce a 'small firms registration' option with a process for moving to full authorisation over time, so that firms can begin trading within 6 weeks of application, rather than having to spend 3 months fully documenting their business plans, only to then wait 6 to 12 months before being able to trade - otherwise entrepreneurs and investors will stop entering this space;

8. Proportionately regulate invoice discounting to confirm the basis on which multiple ordinary retail investors can fund the discounting of a single invoice - it's a rapidly growing source of SME funding, simple for investors to understand and their money is only at risk for short periods of time.


Tuesday 3 March 2015

Artificial Intelligence: The Control Dichotomy

Professor Nick Bostrom delivered an inspirational speech for the SCL last night on "Superintelligence: a godsend or a doomsday device", although few would have found it reassuring - it is certainly conceivable that machines could become more intelligent than humans and that humans might not be able to control them. But these are still early days, he stresses. Regulating the development of artificial intelligence at this point risks halting progress. There's a lot more work to do to really understand how artificial intelligence will develop beyond playing old video games better than humans or recognising an image as the picture of a cat. We need to consider how the technology could help avert our extinction, as well as how it might wipe us out. Yet Nick also warns that it will take computers far less time to go from human-level intelligence to exceeding it than to reach human level in the first place. So we need to start work on the control mechanisms for the development and use of artificial intelligence, without regulating the industry out of existence: the control dichotomy.

Nick suggests that the guiding principle should be that of "differential technological development" - diverting resources away from technologies and their application which could cause human extinction while focusing on those which will either help prevent our demise or will facilitate the expansion of the human race throughout the cosmos.

But how do we distinguish between helpful and harmful technologies and their application? 

As Nick points out, it's tough to think of any human invention that is inherently 'good'. He mentions many things, from gunpowder to genetic engineering, and I think we can throw in the wheel and the Internet for good measure. All these things are used by humans in bad ways as well as for the greater good. But I think what horrifies us especially about the idea of Superintelligence or 'The Singularity' is that it will be machines, not bad humans, that will be using other machines against us. And while we have lots of experience in dealing with evil humans, even our top minds admit we still don't know much about how machines might act in this way or how to stop them - and what we humans fear most is the unknown.

You'll notice I haven't said 'evil' machines, since they might not be operating with any evil 'intent' at all. Human extinction might just be a mistake - 'collateral damage' arising from some other mission. For instance, Nick suggests that a particular machine left to itself in a given situation might decide to devote itself entirely to making paperclips. So, presumably, it would not bother to put out a fire, for example, or free a human (or itself) from a burning building. It might leave that to other machines, which might in turn have chosen their own narrow objective that involves ignoring people's screams.
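To make that misalignment point concrete, here is a purely illustrative toy sketch (my own, not Nick's) of how a narrow objective blinds an agent to everything the objective doesn't measure; every name and value in it is hypothetical.

```python
# Toy sketch of a misaligned objective: the reward counts only paperclips,
# so anything the objective doesn't measure (a fire, a trapped human) is
# invisible to the agent's decision-making. Purely illustrative.

def paperclip_reward(state):
    """The narrow objective: score a state solely by its paperclip count."""
    return state["paperclips"]

def make_paperclips(state):
    """Action: produce one more paperclip; the fire keeps burning."""
    return {**state, "paperclips": state["paperclips"] + 1}

def put_out_fire(state):
    """Action: extinguish the fire; produces no paperclips."""
    return {**state, "fire": False}

def choose_action(state, actions):
    """Pick whichever action maximises the narrow objective."""
    return max(actions, key=lambda act: paperclip_reward(act(state)))

state = {"paperclips": 0, "fire": True}
chosen = choose_action(state, [make_paperclips, put_out_fire])
print(chosen.__name__)  # -> make_paperclips: the fire never enters the calculation
```

The point of the sketch is simply that the agent isn't malicious; the fire is ignored because it never appears in the objective being maximised.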

Here's where I struggle with the notion of Superintelligence. In fact, as someone who hates being pigeon-holed into any single role, I think a machine's decision to only ever make paperclips might be fabulously logical and a brilliant choice for that machine in the circumstances, but it makes the machine as dumb as a post. For me, Superintelligence should involve a machine being able to do everything a human can, and more.

But that's beside the point. Knowing what we know already, it would be insane to ignore the paperclip droid and wait until artificial intelligence has produced a machine more capable than humans before figuring out how we might control it. Nick is right to point out that we must figure that out in parallel. In other words, the concept of human control has to be part of the artificial intelligence programme. But it won't be as simple as coding machines to behave protectively, since machines will be able to programme each other. For instance, Nick suggests we could put the machines to work on the control problem, as well as on the problem of how to ensure the survival of our species. AI labs might also pay insurance premiums to cover the damage caused by what they develop. He was less certain about what we might do to constrain developments that occur in the context of secret defence programmes or intelligence gathering, but he seemed confident that we could at least infer the pace of development from the results, and be able to consider how to control the wider application of those developments. Mmmm.

At any rate, Nick also warns that we need to be careful what we wish for. Mandating human survival in a prescriptive way - even in a specific biological form - would be a bad move, since we should not assume we are any better placed to foster positive human development than the Holy Office of the Spanish Inquisition was. Better to embed positive human values and emotions or, say, entertainment as a feature of intelligent machines (although I'm guessing that might not go down well with the jihadis). From a physical standpoint, we already know that the human body won't do so well for long periods in space or on Mars, so some other version might need to evolve (okay, now I'm freaking myself out).

To retain a sense of pragmatism, at the end of the speech I asked Nick what he would recommend for our focus on 'Keeping Humans at the Heart of Technology' at the SCL conference in June. His tip was to consider which of the various types of control mechanism might work best, recognising the need to avoid constraining the positive development of artificial intelligence, while ensuring that we will be able to keep the machines in check if and when they become smarter than us.

No pressure then...

