- Following two years of backlash against the tech giants, Mark Zuckerberg has written an article calling for regulation
- Digital platforms distanced themselves from editing content to secure the immunity granted by laws from the mid-1990s. Those laws are out of date today and we need new rules. However, setting them is not straightforward
- Social media and digital platforms operate at huge scale. Regulating them will require balancing effective moderation against the risk of censorship
- Stakeholders – governments and privacy advocates – have conflicting requests (content regulation vs user privacy) that are difficult to reconcile
- Given that the burden of regulation entrenches an incumbent’s position, Facebook likely doesn’t fear it. Perhaps, in order to position the business for the internet’s next era, Zuckerberg is asking the various stakeholders across different countries for a unified set of demands.
The backlash against the Tech giants has hardly been out of the headlines in the past two years. Google has been fined twice by the European regulator, Facebook has appeared in front of the US Senate to explain its involvement with Cambridge Analytica and the European Parliament has passed a copyright directive, making internet platforms liable for content their users upload.
That tumultuous period culminated last month in Mark Zuckerberg calling for regulation in an article for the Washington Post. But why does Facebook want regulation?
The debate
A year ago, I argued that most of the issues raised by various stakeholders can be summarised into three categories: a) Anti-competitive practices, b) Ownership of content, and c) Compromised user privacy.
While the issue of anti-competitive acquisitions is straightforward to fix with greater scrutiny, the last two issues are intertwined and in conflict with each other.
These issues arise not only because these digital platforms are maximising their economic returns, but also because of laws written back in the mid-1990s to sustain the growth of the internet. In the US, Section 230 of the Communications Decency Act of 1996 states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Because that protection applied only if the service provider neither acted as the publisher nor played any editorial role, the platforms responded to the incentives and remained neutral towards user-generated content. Similar laws were passed elsewhere. This allowed the advertising-based business model for the internet to scale.
Given how the internet has developed since these laws were established, the rules are out of date, and recent events have demonstrated that they can be abused by ‘bad’ actors – whether state-sponsored groups, extremists or individuals.
So how do we set new rules? There are two issues that need to be addressed:
The issue of scale
Let’s tackle this one first. Facebook and YouTube each reach roughly two billion users, and every second a huge amount of content is added to the vast libraries they already host. In the case of YouTube, 400 hours of content is uploaded every minute. The challenge for the tech giants is how to moderate such a huge amount of content, and whether doing so is humanly possible.
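Some rough arithmetic illustrates the point. The sketch below is a back-of-envelope calculation, not a figure from any platform: it takes the 400-hours-per-minute number above and assumes, purely for illustration, that a moderator reviews footage at normal playback speed for a full eight-hour shift.

```python
# Back-of-envelope: can YouTube's upload volume be reviewed by humans alone?
# Assumptions (illustrative only): moderators watch at normal playback speed
# and spend an entire 8-hour shift doing nothing but reviewing footage.

UPLOAD_HOURS_PER_MINUTE = 400   # figure cited in the text
SHIFT_HOURS = 8                 # assumed review capacity per moderator per day

hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24  # 576,000 hours/day
moderators_needed = hours_uploaded_per_day / SHIFT_HOURS    # 72,000 people

print(f"New video per day: {hours_uploaded_per_day:,.0f} hours")
print(f"Moderators needed just to watch it once: {moderators_needed:,.0f}")
```

Even under these generous assumptions, simply watching one day’s uploads once would occupy a workforce of around 72,000 people, before any judgement about the content is even made.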
If moderation is automated instead, how do we strike a balance that ensures the tech giants are not censoring free speech or legitimate debate on topics deemed offensive?
While automation, alongside human judgement, will be key to the solution, it is unlikely to be error-free. A lasting solution will therefore not be available imminently, as it will take several iterations to get the balance right.
Conflicting requests
Content regulation and user privacy are difficult to reconcile simultaneously. While governments increasingly want the digital platforms to be responsible for the content they host, users want more privacy and more control over their data (i.e. their content on these platforms). How do you satisfy both?
Governments have long wanted a degree of control over the content being shared on social platforms. Their answer is to make the platforms responsible for the content they host, with failure to comply punishable by fines for the company and/or imprisonment of its executives. At the same time, users and privacy groups have voiced concerns about the control these platforms have over user data, and want to liberate it. Regulators, for their part, want more competition and to break the platforms’ monopoly over user data.

However, if and when that data is liberated, it could be exposed to misuse, as happened in the Cambridge Analytica scandal, or become more likely to be hacked and end up in the hands of ‘bad’ actors. And if the data is encrypted to ensure it ends up only in the right hands, control over it is potentially lost, which is not what governments want.
So, where do we go from here?
Given the breadth and reach of these platforms today, the tech giants now have a responsibility to distinguish between good and bad user-generated content. However, doing so carries the risk of errors of judgement and the censoring of free speech.
For a wide-reaching and mature internet, we need new rules, and governments need to think long and hard before writing them, as those rules will shape the internet of the future. In most industries, regulatory burden entrenches the market position of incumbents and makes it onerous for new entrants to compete. Governments therefore need to balance the ‘red tape’ against the level of competition in the industry and the consumer surplus they want to preserve.

So why does Zuckerberg want regulation? He knows the internet needs new rules, and he knows the trade-offs involved in satisfying everyone. Anticipating the issues that will surface further down the road, he is asking the different stakeholders for a unified set of requirements – demands that only become more complex once multiple governments are involved. Standardised regulation is probably in everyone’s interest: governments, consumers, new entrants, and Facebook.