
Was Twitter Right To Have Booted Trump?


Francis Fukuyama

Stanford University

January 19th, 2021
I don't believe that the current model of civil society groups pressuring platforms to "do the right thing" is a sustainable response to the political harms they cause. The problem is their underlying power to artificially amplify or silence political speech, which is akin to the oligopolistic power of the three major TV networks back in the 1950s and 60s. In addition, these companies control what you see and hear through non-transparent algorithms meant to enhance their bottom lines. I liken this power to leaving a loaded gun sitting on the table in front of you: you may trust the person sitting opposite (Jack Dorsey, Mark Zuckerberg) not to deliberately pick up the gun and shoot you, but no democracy can rely on the good intentions of existing power holders to protect itself in the long run.
When the three TV networks occupied a position similar to that of today's platforms, the nation saw fit to regulate them through the FCC's Fairness Doctrine, which met sustained conservative opposition and was eventually rescinded in 1987. A new Fairness Doctrine is inconceivable now, given the country's higher degree of polarization.
Besides government regulation, several other routes to reducing platform power have been proposed, including antitrust actions, encouraging platform switching and competition through data portability, and using privacy law to limit the platforms' ability to exploit user data (something Europe's GDPR theoretically does already). I believe that each of these approaches has major weaknesses that will prevent it from seriously reducing platform power.
Over the past year, I have been leading a Stanford Working Group on Platform Scale, and we have put out a proposal to address the political harms problem through the creation of a competitive layer of what we call "middleware" companies. Middleware would stand between the platforms and users, allowing users to control their own feeds. The platforms would in effect be made to outsource content curation to a much more diverse set of firms. For example, a consortium of universities could direct students and faculty to certified middleware that steers them toward credible information sources.
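The middleware idea can be sketched in code. This is a purely illustrative toy, not anything specified by the Working Group report: all names (`Post`, `source_rating`, `credible_sources_policy`, `apply_middleware`) are hypothetical. The point is the division of labor it shows: the platform supplies a raw feed, while a user-chosen third-party policy decides what gets shown and in what order.

```python
# Hypothetical sketch of the middleware layer. The platform exposes the raw
# feed; a user-selected middleware policy performs curation. All names and
# scores here are invented for illustration.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Post:
    author: str
    text: str
    source_rating: float  # credibility score assigned by the middleware provider


# A curation policy is any user-chosen function that filters/reorders a feed.
CurationPolicy = Callable[[List[Post]], List[Post]]


def credible_sources_policy(feed: List[Post]) -> List[Post]:
    """Example policy a university consortium might certify: keep only posts
    from sources rated above a threshold, ranked by credibility."""
    kept = [p for p in feed if p.source_rating >= 0.7]
    return sorted(kept, key=lambda p: p.source_rating, reverse=True)


def apply_middleware(raw_feed: List[Post], policy: CurationPolicy) -> List[Post]:
    """The platform outsources curation: it hands over the raw feed, and the
    user's chosen middleware decides what appears."""
    return policy(raw_feed)


feed = [
    Post("wire_service", "Election results certified.", 0.9),
    Post("anon_account", "Shocking conspiracy revealed!", 0.2),
]
curated = apply_middleware(feed, credible_sources_policy)
print([p.author for p in curated])  # -> ['wire_service']
```

Because the policy is a pluggable function chosen by the user rather than by the platform, swapping providers changes the curation without touching the platform's data layer, which is the competitive dynamic the proposal aims for.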
The usual criticism of this approach is that it would reinforce filter bubbles, since middleware would help filter out content that users didn't want to see. The objective of public policy, however, should not be to stamp out fake news and conspiracy theories. The First Amendment protects the right of people to say what they want. What we want to avoid is amplification or silencing of voices on a scale that could conceivably change the result of an election.
Many issues remain to be worked out if middleware is to succeed; the initiative would likely require both a new regulator and a sustainable business model. But it seems like the best available path forward.
Our Working Group report that goes through all these issues at great length is available here.