Wednesday, 20 May 2009

Private Sheriffs in Cyberspace, Counter-Regulation

Last night I attended the lecture by Professor Jonathan Zittrain on "The Future of the Internet: Private Sheriffs in Cyberspace", organised by the SCL in collaboration with the Oxford Internet Institute. Jonathan is a Professor at Harvard Law School, Co-Founder and Faculty Director of the Berkman Center for Internet & Society, a great intellect and a fabulous speaker.

As the title suggests, Jonathan was highlighting the role of private rule-makers in the development of Internet-based services. Helpfully, he suggested a quadrant diagram on which you can place rule-making in any scenario. On the vertical axis, one considers whether rules are decided "top-down" by a dictator or small group of individuals, or evolve bottom-up amongst all interested participants. On the horizontal axis, one considers whether the rules are handed down and enforced via a single hierarchy or via a polyarchy of different people or agencies. I've re-drawn it here for the purposes of discussion, and hope Jonathan doesn't mind:

[Figure: the rule-making quadrant - rules decided top-down or bottom-up on the vertical axis, enforced via a single hierarchy or a polyarchy on the horizontal axis]

You can plot various examples on the chart, with a totalitarian regime being in the upper left corner, and Wikipedia being in the lower right.
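Purely by way of illustration (this is my own sketch, not Jonathan's), the two axes can be treated as coordinates, so that any rule-making arrangement has a position on the chart:

    # A toy rendering of the rule-making quadrant (my own illustration,
    # not Jonathan's diagram). Coordinates run from 0 to 1 on each axis.
    from dataclasses import dataclass

    @dataclass
    class RuleMaker:
        name: str
        top_down: float   # 1.0 = rules decided top-down, 0.0 = bottom-up
        hierarchy: float  # 1.0 = enforced via a single hierarchy, 0.0 = polyarchy

    EXAMPLES = [
        RuleMaker("Totalitarian regime", top_down=1.0, hierarchy=1.0),  # upper left
        RuleMaker("Wikipedia", top_down=0.1, hierarchy=0.1),            # lower right
    ]

    for rm in EXAMPLES:
        vertical = "top-down" if rm.top_down >= 0.5 else "bottom-up"
        horizontal = "single hierarchy" if rm.hierarchy >= 0.5 else "polyarchy"
        print(f"{rm.name}: {vertical} rules, {horizontal} enforcement")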

Interestingly, Jonathan suggests that the likes of Google, Apple and Facebook are top-down rule-makers, because their site terms and policies are all decided by the company rather than the users of their services, albeit those companies tend to be very responsive to bottom-up pressure. He cites the exclusion of certain lawful, though potentially offensive, applications from the iPhone and Facebook platforms as examples of decisions that might be inconsistent with previous decisions, and that might not be deemed constitutional had they been taken by a public body. He queries whether, in time, these might result in some alternative form of regulation and considers what that might entail.

My sense is that this scenario is not quite so clear cut, since the evolution of services or platforms provided by those companies (read iPhone apps in the case of Apple) seems primarily based on user participation, feedback and complaint, rather than board or departmental decision-making. I'm not even sure that, when push comes to shove, those companies necessarily triumph. There are significant instances where - to their enduring credit - each of those companies backed down and modified services and terms in the face of widespread user vitriol.

However, it is true that in general terms, at least before push comes to shove, such firms are the 'sheriff' of their own platforms. And it is conceivable that there could be a substantial gap in time, and a significant amount of individual consumer detriment - mild or otherwise - before any arbitrary, inconsistent or harmful exercise of corporate discretion is corrected by some kind of mass user "action". But of course this phenomenon occurs all the time, even in highly regulated businesses - retail financial services, for example, as Financial Ombudsman statistics demonstrate. Offline retailers and distributors also decide not to distribute certain products at their own whim, or due to informal pressure from particular interest groups.

So the responsiveness of a service provider to its users, and the legality of its behaviour, do not seem to be a function of how that service provider or its services are regulated. But is users' trust or faith in the provider a function of the type of regulation that applies to the service?

Jonathan looks at various models for keeping the private sheriffs honest, e.g. vicarious liability for harmful material of which the service provider is on notice (see PanGloss), public law constraints on municipal authorities and 'due process' requirements. But, crucially, he points out that when users start to feel powerless they look to top-down bodies for help - i.e. towards the top left of the quadrant - when perhaps the online world is demonstrating there are more trustworthy solutions to the lower left and right. To the lower left, Jonathan cites adherence to the robots.txt exclusion standard, whereby researchers effectively agree not to interrogate certain parts of web publishers' domains. To the lower right, he cites the broad editorial body of interested participants in Wikipedia. Either solution might be safer than entrusting control to, say, government institutions that think nothing of bending or breaking the law under the guise of detecting crime, or the vague notion of "national security".
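For the technically minded, the robots.txt convention Jonathan mentions works roughly like this: a publisher posts a plain-text file at the root of its domain listing paths it would rather crawlers left alone, and well-behaved crawlers check it before fetching anything. A minimal sketch using Python's standard-library robotparser (the domain, paths and bot name below are placeholders):

    # Minimal sketch of a crawler honouring a publisher's robots.txt
    # before fetching a page. example.com and the paths are placeholders.
    from urllib.robotparser import RobotFileParser

    robots = RobotFileParser()
    robots.set_url("https://example.com/robots.txt")
    robots.read()  # fetch and parse the publisher's exclusion rules

    page = "https://example.com/members-only/archive.html"
    if robots.can_fetch("ExampleResearchBot", page):
        print("Allowed to fetch:", page)
    else:
        print("Publisher has asked crawlers to keep out of:", page)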

And here's the crux of the problem. When does a trusted service provider suddenly cease to be trusted to make and enforce its own rules?

To me, this seems to be answered by whether the service provider is perceived to be acting in its own interests or those of its users - or when it loses its "human effect", as I think Jonathan put it in answer to a different question. Here, the Wikipedia example is an interesting one. As Jonathan noted, there is a constant preoccupation amongst the Wikipedia editorial community about what Wikipedia is and what it means to be a Wikipedian. This has also been touched on in the context of brands striving to be facilitators rather than institutions. Is this human element necessary for rule-makers and service providers to preserve users' trust in them?

As I've mentioned previously in a wider context, the rise of Web 2.0 facilitators that have enabled us to seize control of many of our own retail, political and other personal experiences has been accompanied by a plunge in our faith in our society's institutions. Are they causally related, or inter-related?

In this context, it is interesting to consider a shining example of a service provider and rule-maker that has utterly lost its way, and our respect: the UK's own House of Commons. Weeks of attention to MPs' excessive expense claims - widely viewed as a proxy for their attitude to the taxpayer generally - have forced the nation's legislators to reconsider how they themselves should be governed. And it's worth noting that much of that attention has been brought to bear via the Internet. Ironically, and in line with Jonathan's observation about where we look when we feel powerless, the MPs are looking to the upper left of the quadrant in suggesting yet another Quango as an external regulator of their activities - a so-called "Parliamentary Standards Authority". That such a body needs to exist raises huge questions about the ethics of the body it is supposed to supervise.

But who on earth should comprise the members of such an authority? How could it bring about a positive change in the attitude of MPs to us, their constituents?

Which brings us to the notion that the private sheriffs of cyberspace may have a lot to teach their 'real world' counterparts about what it means to act in the interests of their users in order to retain their trust. This is a notion I explored in an article for the SCL in May 2006, entitled "Counter-regulation" - a term I used to describe situations in which the law requires offline businesses to deliver the benefits of successful online business models. So, to borrow from Jonathan, perhaps MPs should be looking to the lower left and right of the rule-making quadrant for an alternative regulatory solution that could begin to restore a human element and raise the level of our faith in Parliament. And maybe our suspicion of Quangos as merely a means of rewarding government supporters with a nice cushy job would also be eased if the Quango in question comprised a very large, active group of UK taxpayers.
