Unlike most other companies, growth and user engagement – not revenue, profit or even market capitalisation – have been the driving metrics for Mark Zuckerberg since the beginning. Any attempt to limit Facebook’s reach would have to contend with those core values.
But we currently lack any methods to do so.
If all the external forces of regulation have failed, some say, maybe we can count on a “greenwashing” operation intended to assure regulators that the company can police itself: the Facebook Oversight Board. But this collection of civic leaders, all of whom seem sincere in their commitment to improving Facebook, is unable to actually do anything about the core problems of Facebook.
The board only considers decisions to remove or retain content and accounts, as if those decisions were the reason Facebook threatens democracy and human rights around the world. The board pays no attention to algorithmic amplification of content.
It does not concern itself with linguistic bias or limitations within the company. It does not question the commitment to growth and engagement. It does not examine the problems with Facebook’s commitment to artificial intelligence or virtual reality.
The more seriously we take the impotent Oversight Board, the less seriously we are likely to take Facebook as a whole.
The Oversight Board is mostly useless and “self-regulation” is an oxymoron. Yet for some reason, many smart people continue to take it seriously, allowing Facebook itself to structure the public debate and avoid real accountability.
What about us? We are the three billion, after all. What if every Facebook user decided to be a better person, to think harder, to know more, to be kinder, more patient and more tolerant? Well, we’ve been working on improving humanity for at least 2,000 years, and it’s not going that well.
There is no reason to believe, even with “media education” or “media literacy” efforts aimed at young people in a few wealthy countries, that we can count on human improvement – especially when Facebook is designed to exploit our tendency to favour the shallow, emotional and extreme expressions that our better angels eschew.
Facebook was designed for better animals than humans. It was designed for beings that don’t hate, exploit, harass, or terrorise each other – like golden retrievers. But we humans are nasty beasts. So, we have to regulate and design our technologies to correct for our weaknesses. The challenge is figuring out how.
First, we must recognise that the threat of Facebook is not in some marginal aspect of its products or even in the nature of the content it distributes. It’s in those core values that Zuckerberg has embedded in every aspect of his company: a commitment to unrelenting growth and engagement. It’s enabled by the pervasive surveillance that Facebook exploits to target advertisements and content.
That means we can’t organise a political movement around the mere fact that Donald Trump exploited Facebook to his benefit in 2016, or that he was tossed off Facebook in 2021, or even that Facebook contributed directly to the mass expulsion and murder of the Rohingya people in Myanmar.
We can’t rally people around the idea that Facebook is dominant and coercive in the online advertising market around the world. We can’t explain the nuances of Section 230 and expect any sort of consensus on what to do about it (or even if reforming the law would make a difference to Facebook). None of that is sufficient.
Facebook is dangerous because of the collective impact of 3 billion people being surveilled constantly, then having their social connections, cultural stimuli, and political awareness managed by predictive algorithms that are biased toward constant, increasing, immersive engagement. The problem is not that some crank or president is popular on Facebook in one corner of the world. The problem with Facebook is Facebook.
Facebook is likely to be this powerful, perhaps even more powerful, for many decades. So while we strive to live better with it (and with each other), we must all spend the next few years imagining a more radical reform program. We must strike at the root of Facebook – and, while we are at it, Google. More specifically, there is one recent regulatory intervention, modest though it is, that could serve as a good first step.
In 2018, the European Union began insisting that all companies that collect data respect certain basic rights of citizens. The resulting General Data Protection Regulation grants users some autonomy over the data we generate and insists on minimal transparency when that data is used. While enforcement has been spotty, and the most visible sign of the GDPR has been the extra consent notices we must click through to accept terms, the law offers some potential to limit the power of big data vacuums like Facebook and Google.
It should be studied closely, strengthened, and spread around the world. If the US Congress – and the parliaments of Canada, Australia, and India – would take citizens’ data rights more seriously than they do content regulation, there might be some hope.
Beyond the GDPR, an even more radical and useful approach would be to throttle Facebook’s (or any company’s) ability to track everything we do and say, and limit the ways it can use our data to influence our social connections and political activities. We could limit the reach and power of Facebook without infringing speech rights. We could make Facebook matter less.
Imagine if we kept our focus on how Facebook actually works and why it’s as rich and powerful as it is. If we did that, instead of fluttering our attention to the latest example of bad content flowing across the platform and reaching some small fraction of users, we might have a chance. As Marshall McLuhan taught us 56 years ago, it’s the medium, not the message, that ultimately matters.
- A Wired report