Jonathan Zittrain (YC ’91) is one of the world’s leading authorities on Internet regulation. He is the co-founder and director of Harvard’s Berkman Klein Center for Internet & Society, the George Bemis Professor of International Law at Harvard Law School, a professor at the Harvard Kennedy School of Government, a professor of computer science at the Harvard School of Engineering and Applied Sciences, and director of the Harvard Law School Library. Zittrain has worked with the US federal government as a Distinguished Scholar-in-Residence at the Federal Communications Commission, whose Open Internet Advisory Committee he chaired, and as a panel member of the National Security Agency Advisory Board. He has also served on the Board of Directors for the Electronic Frontier Foundation and the Board of Advisors for Scientific American. Zittrain’s work is frequently published by popular outlets like the New York Times, the Washington Post, the Atlantic, and the World Economic Forum.
The Politic: The regulatory elephant in the room right now for all sectors –– not just tech –– is the Biden transition. The latest personnel scoop is that Tim Wu [who coined the phrase net neutrality and has advocated for aggressively enforcing antitrust laws against big technology companies] might join the National Economic Council.
Jonathan Zittrain: Tim was a student in the very first class I co-taught, with Larry Lessig, on cyberlaw! He was, and is, a deeply creative and grounded thinker on all things technology.
In what ways do you expect a Biden administration’s approach to digital governance to be different from the prior administration’s?
I expect that the Biden administration will get into the intricacies and difficulties of digital governance that the Trump administration just never touched –– because by its own account it was not so interested in technocratic policy across the board.
The primary distinctive impulse of the Trump administration was to seek revenge on the president’s enemies. How did that translate into tech? Well, it meant approving certain mergers and denying others in pursuit of a tech policy personalized to the president’s grievances.
I don’t know how much Biden enters with a slant one way or the other, except to some extent if there’s anything adjacent to lenders or credit card companies given his Delaware connections and how many of those companies are in Delaware.
Certainly, if he’s bringing on Tim Wu, he’s thinking about a tech policy that is perhaps foregrounding user interests and aggressive around competition.
What are some specific policy areas where that foregrounding might show up?
It’s surely going to show up on antitrust, where the government may go beyond weighing in on mergers and formally launch or continue investigations into a company and its relationship to other companies and to its users.
There’s a ton of discretion within the Department of Justice around which targets of interest the department wants to pursue. I don’t know if anybody can reliably predict yet if the White House will set an agenda, or if it will just put people into the Justice Department and tell them to pursue cases independently as they might in other areas of enforcement.
It will be interesting, too, to see the priorities that the Federal Communications Commission sets as it becomes a majority Democratic-appointee body. The resignation of the [Trump-appointed] chairman Ajit Pai was not to be taken for granted since he did have time left in his term as a member of the FCC, even if he would no longer chair. Looking back to the Obama FCC, if you think there might be continuity of policy there, you might expect action on things such as price gouging on prison phone calls, net neutrality, and narrowing or crossing the digital divide. Broadband deployment initiatives have certainly become much more innately understandable during the pandemic: If you can’t communicate with others through the way we’re communicating right now, you’re left behind.
Of course, another elephant in the room is CDA 230 [Section 230 of the Communications Decency Act, which provides Internet companies with immunity from some forms of liability associated with third party content on their platforms]. What are the responsibilities of social media platforms around misinformation, harassment, and how accountable might they be with their newsfeed ranking and algorithms?
Is Section 230 an area where you think there might be bipartisan consensus –– since it’s an issue that has rankled both parties?
Well, I think there is a superficial emerging bipartisan consensus that nobody likes the status quo. I mean, you have candidate Biden and incumbent Trump both on the record saying they think 230 should be repealed or at least substantially pared back. Trump at times tweeted, “Repeal 230!” But they’re coming from very different angles.
As I understand it, Biden’s position on social media liability is that the platforms should be investing more money and effort in attending to what people see –– in particular, misinformation on their platforms. That’s the Biden starting point. He sees CDA 230 as an obstacle to that because it makes the platforms not legally responsible for what they curate (or choose not to).
The Trump starting point was that the platforms are discriminating against conservatives and it’s CDA 230 that permits that kind of discrimination to take place. If 230 were gone, the supposition is that the platforms would be incentivized to back off, not to invest more in moderating but to invest less, since under the law if you moderate less, you are less responsible for what appears. The thinking goes that without 230, platforms would behave much like Verizon, which isn’t responsible for the content of every phone call, or a bookseller, which has less exposure to liability than a newspaper because it doesn’t, and isn’t expected to, review each book on its shelves.
That’s a weird policy bank shot — many pieces have to fall into place for that to be the case and I’m not sure they would.
But it is interesting that you have both sides saying they want to get rid of 230 because they anticipate opposite reactions by the companies. They can’t both be right.
You mentioned antitrust, too. We’re used to thinking about antitrust in terms of competition and monopoly pricing power. To what extent do you think about breaking up big tech in those traditional terms versus in terms of governance issues that are more unique to tech –– like privacy, misinformation, and democracy?
Great question. I’m firmly in the second category, because these aren’t regular products in defined categories, like orange juice, for which we simply don’t want to see price gouging or an absence of competition over price and quality for consumers.
That’s not the problem with tech. Many of the services in question are free. That means that whatever harms we’re considering –– privacy, misinformation, or something else, and it’s important to know which –– addressing them is probably going to call for creative doctrines and remedies.
I’m not sure that just yanking Instagram or WhatsApp back out from Facebook is particularly meaningful. Rather, we might think of ways to structure access to third-party infrastructure on behalf of consumer welfare. Again, unlike other products, what exactly tech’s “orange juice” is –– it’s very malleable. Facebook is a product containing potentially many products: a search engine that searches Facebook; a special kind of search engine that presents stuff to you even if you search nothing –– you show up to Facebook, enter nothing in any search box, and it still gives you results; it’s also supporting the storage and presentation of all those links and comments and smiley faces. Those functions could be separated from one another. You could imagine companies separate from Facebook whose only job is to search that content, and they might present it in entirely different ways and let people pick their own windows into the cauldron of soup that is Facebook content.
How separable those things are isn’t obvious because the technology is invented. It’s not like orange juice.
Another development that the Biden administration will have to contend with is that the US –– and by extension, the West –– is no longer a unipolar locus for anything, including digital innovation. There was a lot of attention paid under the Trump administration to the national security implications of China’s technological advancement, but what about the implications for systems of values and governance? What does the future of the Internet look like if it includes a more expansive role for China?
I confess I’ve never really understood the more narrowly defined problem of China and Huawei and 5G, for example.
If it is a problem, it’s so pervasive that I’m not even sure where to get started, because things may be designed in Cupertino, California, but then fabricated elsewhere. If you’re worried about literal intentional defects in supply chain hardware to facilitate surveillance or some kind of attack later, whatever brand is stuck on the outside of the box is irrelevant to how dangerous the stuff inside the box is. If you’re feeling paranoid, that’s a big problem that is not specific to 5G. To treat it merely as a question of whether people should purchase Huawei products is like closing one window of a very well-ventilated house and thinking you’re done with your heating problems.
More broadly, it’s been interesting for me to see around the world how governments are trying to achieve their own ends –– for example, promoting their specific regime story –– and making it so that dissident voices can’t be heard.
It used to be that the voices were the voices and the governments would just try to censor at the network level. For example, blocking the website for MIT from China because there’s a webpage run by an MIT student organization with stuff that the government doesn’t want people to see.
That evolved over time into governments creating licensing and rules for companies hosting social interactions like chat rooms, so that companies must carry out censorship along government principles. It’s then left to those private companies to invest in censorship, on pain of losing their licenses if they don’t do it well.
And then the landscape has further evolved into governments as semi-hidden or fully-hidden players among the voices. In the echo chamber of a social network, if you say something and it’s met with backlash against you, that uprising can in fact be orchestrated by one person hired by a government. That can make expressions of a view –– done earnestly by someone participating in a social network –– feel very painful.
That’s a big enough problem that it probably has to be addressed government-to-government. That kind of propaganda should be the topic of treaties and understandings, much the way harassment of diplomats is: if the other country keeps it up, there will be sanctions. Better understanding the scope of the problem and trying to create an organic space where people can interact –– returning us to the more organic Internet of 2009 –– should be pursued with our statecraft.
One other front in the technological competition between the US and China –– and something that has been a focus of your own work –– is artificial intelligence. Thinking broadly about AI, what excites you about AI and what aspects of it raise the most governance concerns for you?
What excites me is a prospect that scares a lot of people: people should ideally spend the bulk of their time doing what moves them, rather than doing work they would starve without.
In that sense, it can be quite positive if mechanization, including embedded AI, can mean that any occupation can be done by a machine if no one is interested in doing it. Look at it as the introduction of meaningful leisure for people and meaningful work, which is work undertaken by choice, rather than solely by economic duress.
Part of what scares people about this — and, to be sure, me — is an aspect of the same thing, which is that one of the last resorts of protest against a government that’s unresponsive to its citizenry is the general strike, where if people stop working, your economy shuts down, your government shuts down, the leaders can’t get the whiskey they want. That’s a form of pressure. If the gears of the economy are being turned with the flip of a switch, there’s nothing to withhold anymore as a citizen. At an extreme, that could be an issue.
And of course, there are the disruptions that could occur if a citizen is deemed to have nothing to offer that’s worthy of payment because machines are doing all the work. If you have no other means, culturally and economically, of supporting that person –– even though there is a society of plenty, because machines are doing all the work at zero wage –– then you have a problem of people being dispossessed, because you haven’t updated the notion of a market, which is required only when there’s work that has to be done.
That starts to pull in things around universal basic income and what it means to live in a world of plenty and how to distribute that plenty among people.
That of course hasn’t even touched the many issues around AI and bias that are now very much in the public sphere.
Another area is that today’s AI, machine learning in particular, is capable of giving us answers with no explanations. What does it mean to take an answer and run with it without backfilling an understanding of why it might be an effective answer? You don’t know the boundaries of when an answer might stop being effective, or whether there might have been alternatives that aren’t as costly or unfair. I call this “intellectual debt.” I’d hate for AI advances to be a reason to stop thinking collectively.
There’s one last question that I’d like to ask since we’re a political journal interested in journalism. Australia just passed a law that requires platforms like Facebook and Google to compensate media companies for their journalism. To what extent do you see that as a model for the US and other countries?
I think the resolution is probably not a model, although it’s TBD. It looks like a nearly complete win for Facebook occasioned by Facebook cutting off all Australian news preemptively to say, “We can make life very hard for you, are you sure you want this?”
I say this as somebody who was skeptical 10 years or so ago, when Spanish publishers and newspapers felt like Google News was costing them a lot of money and yet they couldn’t help but want to be featured in Google News. You know, Google’s answer was, “Hey, if this isn’t working for you, just say the word and we’ll cut you off.” That ended that.
I found myself not very sympathetic to the publishers’ position because it would be catastrophic to an open web –– which is already under pressure –– to make it so that linking to something could cost the linker money.
But we know that well-researched news is both valuable and costly. It’s valuable in a way that orange juice isn’t valuable. It’s the lifeblood of a civic society, especially when produced under the principles of fair journalism that professionals subscribe to, rather than just as a product. It has to be paid for somehow, and that’s a problem. It’s everybody’s problem, not just the newspapers’ problem. If it turns out that aggregating all that stuff is extremely lucrative to the platforms, but none of that money finds its way back to the newspapers, that’s a problem for everybody to try to figure out how to solve. It’s a failure of our imagination, and a lack of attention by the companies, that we’re just left with these fitful government actions attempting a one-size-fits-all solution.