Social media regulation in the United States: Past, present, and future
Social media regulation is hotly debated, and many believe it is long overdue. Given its associations with everything from increased political polarization to war, social media has its share of issues. In this rundown, we explore the past, present, and future of its regulation in the U.S.

Since the explosion of Myspace (and later Facebook) in the early 21st century, social media has shaped, informed, and influenced world events and public discourse on a massive scale. Websites and applications centered on communication between users—platforms like Twitter, Tumblr, and Instagram—have proven unique in the issues that arise from their use. Issues like disinformation campaigns, incitements to violence, and violations of privacy rights have created new challenges for governments seeking to regulate commercial activity in the public interest.

As the consequences of problematic content and user activity increase in severity—many argue, for example, that the January 6th attack on the U.S. Capitol in 2021 developed out of a protest that was heavily encouraged by then-president Donald Trump through Twitter—so too do the calls for social media regulation.

In this article, we will take a look at the history of U.S. social media regulation and its current implementation, as well as future measures that governments may take to regulate the companies involved.

How does the U.S. government regulate online content?

The federal level

In the United States, the Communications Decency Act of 1996 helped lay the groundwork for federal regulation of obscene, indecent, and illegal content online. Though courts struck down parts of the act in the years following its passage, Section 230, which generally protects website platforms from liability for content that third parties post to their sites, continues to inform the legal discussion regarding social media regulation.

Section 230 says that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Other language in the section shields service providers from civil liability when they act “in good faith” to restrict access to content they consider “obscene, lewd… or otherwise objectionable.”

There have been many calls for Congress to expand or amend Section 230 with regard to social media providers in light of major public events over the past decade. For example, allegations of Russian interference in the 2016 U.S. presidential election have sparked debate over whether companies like Facebook and Twitter should bear more liability than they currently do for disinformation uploaded to their sites.

Since the Federal Communications Commission lacks the authority to discipline these companies, however, Section 230 doesn’t have much of an effect on the current social media ecosystem. Similarly, the Federal Trade Commission mostly regulates social media in terms of the truthfulness of material claims that companies make using it. Like the FCC, the FTC has no mandate to punish violators. Instead, it collects complaints and uses them to build a case against a company, sometimes with the help of the Justice Department.

In practice, then, the U.S. federal government is limited in its ability to regulate social media. Though members of Congress have introduced measures to overhaul policies like Section 230 and hold companies more accountable for problematic content—measures like the Platform Accountability and Consumer Transparency (PACT) Act, introduced by Senators Brian Schatz and John Thune in 2021—there is no mandate currently in place for federal regulators to directly police social media activity.

The state level

As the U.S. federal government has failed, in the eyes of many, to adequately regulate social media giants, certain states have acted on their own to hold these service providers more accountable. The vision of “accountability,” however, differs across the American political spectrum. For example, Florida Governor Ron DeSantis proposed a bill in 2021 that would fine companies for “knowingly de-platforming” political candidates, likely in response to Twitter’s suspension of former president Trump after the January 6th attack.

In Maryland, state politicians passed legislation that taxes the revenue from digital advertisements sold by big tech companies like Facebook, Google, and Amazon. The tax, inspired by European policies like the French tax on digital revenue and the Austrian tax on digital advertising, is the first of its kind in the United States in that it applies solely to the revenue a company receives from digital advertising. It is expected to generate as much as $250 million in state revenue.
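
To make the mechanics concrete, here is a minimal Python sketch of how a tiered digital advertising tax of this kind works. The thresholds and rates below are illustrative placeholders, not Maryland’s statutory schedule; what matters is the structure, in which the applicable rate depends on a company’s global revenue while the tax base is only its in-state digital ad revenue.

# Hypothetical tiered digital-ad-tax calculator. Thresholds and rates
# are placeholders for illustration, not Maryland's actual schedule.
# (global_revenue_threshold, rate) pairs, checked from highest to lowest.
RATE_TIERS = [
    (15_000_000_000, 0.100),
    (5_000_000_000, 0.075),
    (1_000_000_000, 0.050),
    (100_000_000, 0.025),
]

def digital_ad_tax(global_revenue: float, in_state_ad_revenue: float) -> float:
    """Return the tax owed on in-state digital advertising revenue."""
    for threshold, rate in RATE_TIERS:
        if global_revenue >= threshold:
            return in_state_ad_revenue * rate
    return 0.0  # below the lowest threshold, no tax is owed

# Example: $2B in global revenue falls in the 5% tier, so $50M of
# in-state digital ad revenue would owe $2.5M under this schedule.
print(digital_ad_tax(2_000_000_000, 50_000_000))  # 2500000.0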

Virginia’s Consumer Data Protection Act, signed into law by Governor Ralph Northam in March 2021, allows state residents to opt out of having their data collected and sold by social media companies, to see what data companies have collected about them, and to correct or delete that information. The law will be enforced by the state’s attorney general; it does not allow individuals to sue companies on their own for violations.

The Virginia legislation, widely viewed as the more industry-friendly of the two, follows the passage of a similar measure in California: the California Privacy Rights Act, approved by voters in November 2020. The California law affords similar protections to residents regarding the collection of their personal information. It also differs from the Virginia legislation in that it creates a new state agency, the California Privacy Protection Agency, with the power to take enforcement action against companies that violate the policy.

The future of social media regulation

The struggle for data

“Social Media and Democracy: The State of the Field and Prospects for Reform,” published by Cambridge University Press in August 2020, attempts to break down the debate over social media regulation. In the book, contributor Tim Hwang argues that public policy researchers need far more access to relevant data from social media providers than they currently receive. Facebook, for example, has historically elected not to release much of its data, instead either publishing its own internal analyses or working closely with academics on company-approved research.

The European Union’s General Data Protection Regulation (GDPR) provides an important and influential example of a large-scale governmental attempt to regulate access to social media data. The law is intended to give users more control over their data while making tech companies’ data collection more transparent. Hwang writes, however, that its constraints on public access to individual user data have proven somewhat inscrutable.

GDPR is predicated on a set of rights granted to social media users, among them the right to easily give and withdraw consent, the right to data access, the right to be forgotten, and the right to data portability. In the interest of minimizing the amount of data collected, GDPR defines six lawful bases under which a company can process personal data. It rigorously defines consent as one of those bases and requires service providers to keep documentary evidence of that consent.
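
As an illustration of the record-keeping side of that requirement, here is a minimal Python sketch of a consent record a provider might retain as documentary evidence. The field names and structure are hypothetical; GDPR requires providers to be able to demonstrate consent but prescribes no particular format.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical consent record: GDPR obliges providers to demonstrate
# that consent was given (and to honor its withdrawal), but does not
# prescribe a format for that evidence.
@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                         # e.g., "personalized advertising"
    lawful_basis: str                    # one of GDPR's six lawful bases
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def withdraw(self) -> None:
        # Withdrawing consent must be as easy as giving it; the record
        # is kept (not deleted) as evidence of the consent history.
        self.withdrawn_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

# Usage: log a user's consent, then honor a later withdrawal request.
record = ConsentRecord(
    user_id="user-123",
    purpose="personalized advertising",
    lawful_basis="consent",
    granted_at=datetime.now(timezone.utc),
)
record.withdraw()
assert not record.active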

Hwang argues that since each E.U. member state has implemented the law in a different way, allowing different exceptions for academics and researchers, E.U. policymakers need to go beyond merely clarifying these exceptions. He writes that the research community needs a “clearly defined safe harbor or a research pathway sanctioned by the European Commission,” one that protects user privacy but allows vetted researchers total access to company data. In exchange for this full surrender of information, companies could be offered legal immunity for granting access.

What does the future hold?

Governments will continue to find themselves facing new regulatory challenges as social media shapes the flow of information around the world. In the United States, tech companies are generally left alone to self-regulate problematic activity, but as the consequences of this activity have become more severe, states have stepped in to compensate for a perceived lack of action from the federal government.

Legislation passed in states like Florida, Maryland, and Virginia in 2020 and 2021 suggests that this trend will continue for the foreseeable future. However, large-scale regulation in the European Union and measures like those proposed in the Senate to overhaul Section 230 provide evidence that serious federal social media regulation could be just around the corner for U.S. citizens.

Regardless of who enacts it, it can be argued that effective regulatory policy regarding social media will only be possible through extensive public policy research and unfettered access to data from technology companies. In the massive and multilayered web of interests that govern the social media regulation debate, only time will tell whether U.S. government actors are able to effectively police problematic activity and protect citizens from social media’s most dangerous potential.

Written by the Ascend Editorial Team
Jordan Milian is a writer covering government regulation and occupational licensing for Ascend, with a professional background in journalism and marketing.