On February 8, 1996, from the World Economic Forum in Davos, Switzerland, an American rock lyricist and state legislator’s son named John Perry Barlow issued a “Declaration of the Independence of Cyberspace.”
Back home in the United States, Congress had recently passed the Communications Decency Act, in which the largely elderly and analog legislature asserted its power over the still-nascent Internet. Barlow's declaration told lawmakers to keep out. He believed the Internet was a wholly new kind of place, one that Congress had no right to govern. Evoking language from the country's original Declaration of Independence, he wrote:
Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders. Do not think that you can build it, as though it were a public construction project. You cannot. It is an act of nature and it grows itself through our collective actions.
Despite Barlow’s complaint, government regulation has shaped the Internet in profound ways. The Internet itself was the product of a military research program, and government decisions made it publicly available in the first place. Since then, public policy (or the lack thereof) has affected users’ daily experience of social media, as well as the fortunes of companies that reap profits from it.
Even in 1996, Barlow need not have been so worried. The following year the US Supreme Court struck down much of the Communications Decency Act for violating the First Amendment of the Constitution. One part of the law that remained standing has since become famous. It is known simply as Section 230. This provision protects online platforms from being sued for most of the content that users post, freeing platform owners to moderate as they see fit. Without this law, social media as we know it might not have been possible. Internet companies would have had to behave more like offline newspapers and television stations, carefully vetting everything they publish. Section 230 meant that, at least in certain ways, the Internet could play by its own rules, just as Barlow had hoped.
Through the very law that he objected to, the Internet got a kind of declaration of independence. But that happened because of, not despite, government policy.
This chapter will explore how social media has been and continues to be shaped by the often invisible effects of legal regulation—in increasingly diverse ways around the world.
Section 1: What media regulation does
Whenever media has had influence over cultural and political life, governments have sought to regulate how media circulates. Pre-digital governments established rules about the production of manuscripts and required licenses for printing presses. The US news industry benefited early on from a postal subsidy, which reduced costs for delivery, along with the relatively expansive free speech allowed under the First Amendment. As telegraph lines enabled instantaneous communication across long distances, governments attempted to challenge the growing power of companies that controlled the lines.
The rise of radio and television came with new sets of rules about technological standards, political speech, and acceptable content. For instance, between 1949 and 1987 in the United States, the “fairness doctrine” required broadcasters to include diverse viewpoints on controversial social issues. Federal law prohibited content deemed obscene from public airwaves but was more permissive with private cable networks.
Some countries have regulated by creating high-quality, government-funded public media that retained independence from politicians in power. The best-known example is the British Broadcasting Corporation, or BBC, which was founded in 1922. This kind of public media can put pressure on private companies to meet higher standards of quality.
Meanwhile, governments have used antitrust law to prevent companies from becoming too powerful, and media companies have often been targets of antitrust action. For instance, US regulators used antitrust enforcement to break up the Bell telephone system in 1982, followed by actions against Microsoft in the 1990s and Google in the 2020s. Some scholars have argued more recently that antitrust law should have been used more for social media, such as by preventing Meta from acquiring Instagram and WhatsApp.
Media regulation enables and constrains the media available to us in various ways. It limits what content is allowed, keeps corporate power in check, imposes technological standards, and establishes government-funded media outright. For many media consumers and producers, the role of regulation may not be obvious in their everyday lives if they never learn to see it, but it shapes their media lives nonetheless.
Section 2: How regulation shaped early social media
As online social media became increasingly mainstream in the 2000s, many of the major platforms were based in the United States, such as Facebook, Twitter, and YouTube. The US government regulated the new social media with a light touch compared to broadcast media. Leading politicians and Silicon Valley entrepreneurs believed that important decisions should be left to the free market rather than to government. Ideas like the “independence of cyberspace” were influential. But regulation continued having far-reaching effects.
Section 230 became, in the words of legal scholar Jeff Kosseff, “the twenty-six words that created the internet.” Because of this law, content moderation was largely in the hands of platform companies, except in the most violent or abusive cases. Another legal scholar, Kate Klonick, argues that platforms became “the new governors,” usurping government’s traditional regulatory role. Section 230 has been widely criticized, both for giving platforms too much power to censor user posts and for not requiring them to censor more. But many experts also fear that without Section 230 the whole social media industry would be in peril.
One significant limit on Section 230's protection of platform companies was the 1998 Digital Millennium Copyright Act, or DMCA. If you have ever experienced an automated removal after posting someone else's copyrighted material, like a song or video clip, this is why. The DMCA established a "notice and takedown" system: platforms avoid liability for users' copyright infringement only if they promptly remove infringing material when rights holders complain. It also encouraged platforms to develop ways to pay copyright holders for the use of content that they own. The DMCA was a severe blow to the once-widespread belief that digital technology made copyright obsolete, that "information wants to be free," as the early Internet advocate Stewart Brand once claimed.
Another important kind of regulation that has shaped social media might not seem relevant to media at all: the regulation of finance. Most major social media platforms grew thanks to a kind of financing called venture capital. This practice involves large, risky investments in companies that are expected to grow very quickly and take over entire markets. Changes to financial regulation in the 1970s and 1980s brought vast sums of money into venture capital, setting the stage for the growth-centered, all-or-nothing culture that dominates social media platforms today.
Finally, the regulation of work has also affected social media. In the early days of online communities it became a practice to compensate users with discounts for serving as moderators of chat rooms and forums. But labor regulators became concerned that this amounted to a violation of labor law, since the compensation typically did not reach the minimum wage and failed to include other benefits that employees are entitled to. Subsequently it became a norm that moderators of online communities are expected to serve as volunteers. More recently, the influencer economy has provided new ways for users to monetize their communities, such as through third-party sponsorships or revenue-sharing with the platforms.
As the social media economy has matured, leading companies have shifted from preferring to ignore government to spending many millions of dollars every year on influencing government. Much of this spending seeks to block changes that the industry sees as threatening.
Although US law has already shaped social media profoundly, the Internet has generated less ambitious regulation than radio and television did. At the same time, countries that have become dependent on US-based platforms are increasingly seeking ways to regulate their citizens’ online lives more, as they see fit.
Section 3: A deepening “splinternet” around the world
While media regulation usually takes place in the context of particular national governments, it is also an international concern. During the mass media era, for instance, people in many countries viewed the influx of Hollywood productions as a threat to local culture; some actively restricted the circulation of Hollywood films to encourage homegrown entertainment industries. A 1980 United Nations report, Many Voices, One World, provided an international framework for policies to protect what came to be called communication rights—the rights of all people to express their cultures and beliefs, and to access the media necessary to be heard. This framework involved seeking a balance between the free flow of information and the ability for local communities to have collective control over their media ecosystems.
Similar concerns have returned in the age of the Internet. With most social-media platforms headquartered in just a few countries, critics have begun describing the online economy as digital colonialism: a situation where a wealthy few can dominate others politically, economically, and culturally. Just as earlier forms of colonialism extracted valuable raw materials from poorer countries, the argument goes, digital platforms make money from the social interactions of people whose societies do not control or benefit from those platforms. When the whistleblower Edward Snowden revealed the extent of US spying on the major platforms in 2013, both allied and adversary countries sought ways to protect their secrets and their citizens.
The Internet is increasingly becoming what some have called a splinternet. A growing number of governments are asserting their sovereignty over digital space, resulting in an Internet that is more fragmented in how users experience it. The splinternet is producing social networks that are more diverse and more accountable to local societies, but it also raises the danger of greater censorship. This section will review some of the ways that regulation is deepening the splinternet.
One type of regulation seeks consumer protection. This approach has been the focus of lawmakers in the European Union. The EU's 2016 General Data Protection Regulation (GDPR), for instance, imposed a variety of rules on tech platforms for the management of their users' data. The GDPR requires foreign companies to keep data about EU citizens in the EU, and it expects platforms to obtain permission from users for all data collected. While the GDPR was mainly designed to protect EU residents, it has shaped platforms' behavior globally. (Whenever you get a pop-up on a website asking you to let it track you with cookies, that might be because of the GDPR.) In 2022, the EU passed another sweeping regulation, the Digital Services Act, which seeks to reduce the spread of disinformation online. In these ways, European governments have become more active regulators of US-based social media giants than the US government has generally been.
Another strategy for regulation involves forms of taxation, which attempt to capture some of the value that foreign platforms generate from local economies. In Australia and Canada, governments have passed requirements that social media companies compensate news organizations when users post their content. Meanwhile, countries such as Uganda and Lebanon have tried (and ultimately failed) to impose taxes on their own citizens for using social media platforms. While the Australian and Canadian laws were attempts to fund domestic news production, the user taxes were widely seen as a form of censorship, discouraging communication that could lead to unrest.
Censorship has taken various forms across the emerging splinternet. Countries such as India and Iran have used Internet blackouts—simply shutting down the Internet for certain periods of time—to inhibit protest movements. India's government has also used the employees of foreign social media companies as leverage; if companies do not comply with the government's requests for the removal of content or accounts, the government has threatened local employees with arrest. In addition to blocking online speech, many countries have developed armies of humans and bots to flood networks with pro-government or distracting content, drowning out dissent.
Perhaps the strongest force driving the splinternet is China's Great Firewall. This combination of technical and legal barriers has produced a Chinese Internet partially cut off from what most of the world can see. Major US platforms such as Facebook, Instagram, and X, as well as Google and Wikipedia, are not available on the Chinese Internet. But the Great Firewall is not merely a tool of censorship, as Western critics tend to describe it. By preventing foreign access to its growing consumer market, China has developed the only major platform economy that poses a significant challenge to US-based companies. Chinese platforms such as WeChat have developed features that US platforms have sought—often in vain—to copy. Regulators in China have also developed rules for ensuring certain privacy protections, limiting corporate power, and overseeing the algorithms that choose what users encounter. China has also exported its Great Firewall techniques to other countries seeking more control over their Internet, such as Iran and Russia.
When TikTok became the first Chinese-owned social media platform to gain mass adoption in the US market, the US got a taste of its own medicine—a taste of what others had long been calling digital colonialism. Elected officials began to worry about how the popular app could become an asset for the Chinese government in a conflict, whether through its data on US users or its ability to manipulate flows of information. States and the federal government began restricting use of the app among government employees and, in some cases, entire populations. Although the US has often championed an open, unfettered Internet, it is revealing that its leaders have reacted in ways similar to those of other countries when faced with similar threats.
Section 4: Major tradeoffs of media regulation
Regulating social media always involves tradeoffs. A tradeoff occurs when no course of action is perfect and decision-makers must find a balance among competing values. This chapter ends with a few of the major tradeoffs that seem to arise with every attempt to regulate social media.
Free speech vs. safety. Free speech sounds good in theory, but it can be less so in practice. One person's free speech can result in harm to other people, making them feel less comfortable speaking the truth as they see it. At the same time, efforts to help users feel safe through rule-making can dampen the liveliness of an online community. Online networks are powerful precisely because they are less constrained than earlier forms of media. But truly free speech is only possible within the context of some constraints. To experience some of these tradeoffs for yourself, try playing the free game Trust & Safety Tycoon.
Free enterprise vs. democratic oversight. The Internet as we know it arose through both government investment and private entrepreneurship. Startup companies have built the most successful social media platforms, exhibiting a kind of creativity and risk-taking that governments often lack. This is partly why Section 230 put a lot of trust in platform companies to moderate content based on the pressures they get from the market. Yet when startups become successful, their appetite for risk can have enormous consequences for society, mental health, the economy, and politics. Managing potential risks is why the GDPR sought to strengthen government control over platforms, especially foreign ones. Regulators seek to find the right balance—to foster a vibrant market of ideas and products while ensuring that the market plays by rules determined through democratic processes.
Local control vs. global standards. Part of the Internet’s early promise was to bring people together across borders. In countries that have historically restricted free speech, social media platforms based abroad can give people a way to say what they otherwise could not. But as the advocates of communication rights might remind us, global flows of information can threaten local cultures. Governments have increasingly sought to assert local values and priorities over those of the major global platforms. But human rights advocates also worry that the quest for local control will result in losing the opportunity for a more free, interconnected world.
No genuine tradeoff is easy; it wouldn’t be a tradeoff if it were. The major challenges of regulation involve trying to find balance in dilemmas where the answer is not obvious. Societies will have different ideas about what that balance should look like—and those differences are splintering the Internet into many pieces.
Regulation has always shaped, and continues to shape, the social media we experience. The effects of regulation can be hard to see. But learning to see regulation helps us recognize that the media ecosystems around us are the result of choices—choices with both intended and unintended consequences. To see the regulatory choices made in the past can help us see more clearly how our choices today might affect the future.