In a widely praised and generally excellent speech last week, Sacha Baron Cohen, AKA Borat, AKA Ali G, launched a scathing attack on what he named the “Silicon Six” – six American billionaires who have made their substantial wealth from social media and data companies, primarily Facebook, Twitter, and Google. Baron Cohen argues these companies have changed the information environment for the worse, with real consequences for democratic processes and practices.
In his speech, Baron Cohen argues for a regulatory and moral response: for companies to be held accountable by regulators, and to hold themselves accountable, for the material they allow to be shared online. The most striking comparison he draws is between Nazi-era Germany and today, noting that Hitler himself could have bought an ad for the Final Solution on Facebook.
Baron Cohen is probably right: Hitler could have bought such an ad and targeted it to heighten anti-Semitic fervour in Germany, or perhaps even to influence policymakers across the English Channel wavering in their resolution to address the looming horrors of the Third Reich. It is difficult to argue with Baron Cohen’s sentiment, and he is far from the first to express it.
The links he draws between dangerous speech, false speech, microtargeting, and old hatreds rendered new by social media demonstrate the breadth of the problem, as many others have done before him. It simply means more when the comments come from someone who has made a career out of treading the fine line between offence and comedy.
But what does it mean to call for the regulation of information online in this way? Baron Cohen was speaking to an American audience, at the Anti-Defamation League. In his speech, he recognised the US Constitutional framework which underpins much of the discourse around the regulation of social media in that country, at least. The weighty responsibility of the First Amendment and the narratives of free speech it drives have become political bullets in recent years, and shape much of the debate about regulation of hate speech in the US.
Those same frameworks don’t exist elsewhere. In Europe, for example, limits on speech were regularly enforced before the social media explosion, and have been since. This speaks to the problem of social media and disinformation for all of us. It’s a globalised phenomenon, but the US is arguably a compromised player, shaped not only by its own political and legal framework, but by the special role tech companies, not least the Silicon Six, play in the US economy.
Despite the best protestations of many in the current crop of Democratic hopefuls, it is highly unlikely that big tech in the US, at least, is going to be regulated in any meaningful way, especially in the context of speech. Speech is too integral to American politics, and tech is too integral to the American economy.
Perhaps the response, then, is not to focus on the definition of speech, but on the microtargeting which allows advertising to target individuals most susceptible to its messaging, removing the commercial incentives and facility for malicious voices, rather than policing those voices directly. The radicalisation (and stupidity) resulting from the business model Baron Cohen identifies can often be a result of microtargeting, which allows eyeballs to be traded for cash in data economies and narrative wars.
Shaping the rules around microtargeting is not subject to speech protections in the same way, and does not require corporations to make a political (and moral) judgement about individual messages. Indeed, efforts afoot outside the US, especially in Europe, show that the discussion is possible, even if regulation is still less than effective.
But these efforts are only possible where certain conditions exist: market power and political will. In Europe, that political will is born of a particular conception of individual rights, especially privacy. These conditions are particular, not general, and will not address the global scourge of rumour and disinformation. It is unlikely that Myanmar, for example, will be able anytime soon to implement meaningful regulations against hate speech or microtargeting.
But lessons can be learned from the experience of other effective regulations. India, for example, as Facebook’s largest market, has been able to influence the trial and implementation of new measures to stop the spread of disinformation and rumour on WhatsApp, some of which have now been rolled out worldwide. Market power, especially where it is largely untapped, is a stick against disinformation that can be wielded more effectively than fines, or than fantasy regulation in small, saturated, or compromised markets.
Confidence, and lessons, may be drawn from the leadership of giant and still developing markets, including India, Thailand, and Indonesia, in implementing a variety of regulatory measures, including recent discussions at the Organisation for Economic Cooperation and Development concerning measures to tax social media giants more fairly.
Relatedly, Kazakhstan remains very firmly a developing market for social media. There may be (some) hope for Borat yet, though very little for his Los Angeles–dwelling alter ego.