Introduction: The Enduring Principle in a Transformed World
The right to freedom of expression, long hailed as a cornerstone of democratic society and a bulwark against tyranny, has entered its most transformative and turbulent era. Enshrined in foundational documents from the United States Bill of Rights to the Universal Declaration of Human Rights, this principle was forged in an age of print, pamphlets, and public orators. Its core tenets were designed to constrain the power of the state, ensuring that governments could not silence dissent or impose an official orthodoxy. The digital revolution, however, has not created a new right to free expression but has fundamentally and irrevocably altered the ecosystem in which it operates. This transformation, characterized by the meteoric rise of privately-owned social media platforms as the de facto public square, has created unprecedented challenges for legacy legal frameworks designed primarily to regulate state power. It forces a global re-evaluation of foundational principles, necessary limits, and the very locus of control over public discourse, revealing a stark and deepening divergence between the American and European legal traditions.
This report will navigate the central conflicts that define the contemporary landscape of free expression. It will explore the enduring tension between individual liberty and the prevention of collective harm—a conflict magnified to an extraordinary degree by the speed, scale, and algorithmic nature of online communication. The analysis will dissect the critical shift in power dynamics from a traditional, vertical relationship between the citizen and the state to a complex, triangular one involving the citizen, the state, and the platform. In this new configuration, corporate actors, through their terms of service, content moderation practices, and algorithmic designs, wield an immense and often unaccountable influence over public discourse, acting as a new form of private governance. The contemporary debate is thus framed as a clash of legal and philosophical worldviews, contrasting the robust, almost absolutist American emphasis on a “marketplace of ideas” with the European focus on balancing rights with responsibilities to protect human dignity and social cohesion. The very affordances of social media—its immediacy, reach, and capacity for virality—pose a profound challenge to traditional conceptions of free expression, compelling a necessary and urgent reconsideration of long-held assumptions about how speech is regulated and by whom.
At the heart of this modern dilemma lies a profound paradox of digital empowerment. The same technologies that have radically democratized speech by lowering the costs of content creation and global distribution have also created new, highly centralized points of control and enabled novel, scalable forms of harm, such as the algorithmic amplification of hate speech and the industrial-scale dissemination of disinformation. The initial promise of the internet was one of decentralization, offering a means to “route around” the traditional gatekeepers of the mass media and giving a voice to the voiceless. This was widely perceived as a pure and unalloyed expansion of free speech. However, the architecture of the digital world did not evolve into a distributed utopia. Instead, it has consolidated around a handful of “Very Large Online Platforms” (VLOPs) that dominate the flow of information. These platforms are not the neutral conduits or common carriers they are sometimes portrayed to be; they are privately owned, for-profit corporations that actively curate, moderate, and amplify content to maximize user engagement and, by extension, advertising revenue. Their business models, optimized for attention, have been shown to favor emotionally charged, sensationalist, and polarizing content, as these are the most effective at capturing and retaining user engagement. This dynamic creates a fertile ground for the spread of harmful content. Consequently, the very technology that “freed” speech from the old gatekeepers has simultaneously erected new ones—gatekeepers who are more powerful, less transparent, and whose moderation decisions and algorithmic designs constitute the primary form of speech regulation for billions of people, operating largely outside traditional constitutional frameworks. This has created a regulatory vacuum that both the United States and the European Union are now attempting to fill, albeit through fundamentally different and often conflicting legal philosophies.
Philosophical and Historical Foundations of Free Expression
The contemporary struggles over online speech are deeply rooted in a rich history of philosophical and political thought that championed expression as a fundamental human liberty. Understanding this intellectual heritage is essential to appreciating the principles that animate modern legal frameworks and the ways in which the digital age challenges their foundational assumptions.
The Enlightenment Heritage: From Censorship to Liberty
The modern, principled defense of freedom of expression can be traced to the intellectual ferment of 17th-century Europe, where it emerged in close concert with the demand for religious toleration. Political theorist John Locke was a pivotal figure, arguing that matters of conscience and belief were outside the legitimate purview of the state. This idea—that the government should not compel belief—laid the groundwork for the broader principle that it should not compel or prohibit expression.
This nascent ideal found its most powerful early articulation in John Milton’s 1644 polemic, Areopagitica. Written in opposition to a parliamentary ordinance requiring the prior licensing of all printed materials, Milton’s tract was a passionate argument against censorship, or “prior restraint.” He famously argued that truth is not a fragile entity that needs protection from falsehood; rather, it is strengthened and ultimately prevails through open confrontation. “Let her and Falsehood grapple,” Milton wrote, “who ever knew Truth put to the worse, in a free and open encounter?” This introduced a profoundly influential, audience-centered justification for free speech: the idea that citizens have a duty to actively engage with a wide range of ideas, even erroneous or heretical ones, to discern the truth for themselves. For Milton, passive deference to authority was a dereliction of both civic and religious responsibility, as truth unexamined sickens into a “muddy pool of conformity and tradition”.
These Enlightenment ideals were gradually codified into law. England’s Bill of Rights of 1689 established the principle of freedom of speech within Parliament, shielding legislators from prosecution for their debates—a concept known as parliamentary privilege. This principle was a direct reaction to the Crown’s attempts to punish dissenting voices. Across the continent, similar movements took root. Sweden enacted one of the world’s first freedom of the press acts in 1766, largely abolishing censorship and establishing public access to official records. The French Revolution produced the Declaration of the Rights of Man and of the Citizen in 1789, which explicitly affirmed in Article 11 that “The free communication of ideas and of opinions is one of the most precious rights of man”. These documents, along with their American counterparts, marked a historic shift away from the presumption of state control over expression toward a presumption of liberty.
John Stuart Mill’s Enduring Influence: On Liberty
Perhaps the most comprehensive and enduring philosophical defense of free expression comes from John Stuart Mill’s 1859 essay, On Liberty. Mill sought to establish “one very simple principle” to govern the relationship between society and the individual: the Harm Principle. This principle holds that “the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others”. An individual’s own good, whether physical or moral, is not a sufficient warrant for societal interference. Mill clarified that this standard was not based on abstract natural rights but on the principle of utility—the greatest good for the greatest number. He also noted its limitations, arguing that it applied only to mature, “civilized” adults, and not to children or “barbarian nations” who might benefit from a period of benevolent despotism.
Within this framework, Mill mounted a powerful, three-pronged defense of what he termed the “liberty of thought and discussion,” which forms the intellectual bedrock of the modern “marketplace of ideas” concept. His argument is structured to address all possibilities regarding the truth-value of a silenced opinion:
- The Silenced Opinion May Be True. Mill’s first and most straightforward point is an argument from human fallibility. To silence any opinion is to assume one’s own infallibility, a position no rational person can hold. History is replete with examples of opinions once deemed certain that were later proven false, and ideas once condemned as heretical that are now accepted as truth. By refusing to hear a dissenting opinion because one is sure it is false, society robs itself of the “opportunity of exchanging error for truth”.
- The Silenced Opinion May Contain a Portion of the Truth. Mill recognized that complex issues are rarely black and white. More often than not, the prevailing or orthodox view contains only part of the truth. It is only through the “collision of adverse opinions” that the remainder of the truth has any chance of being supplied. Suppressing dissent, therefore, ensures that society remains in a state of partial understanding, clinging to an incomplete and potentially distorted version of the truth.
- Even if the Silenced Opinion is Wholly False, It Is Valuable. This is Mill’s most radical and perhaps most important argument. He contended that even if a prevailing opinion is the whole truth, it is essential that it be “vigorously and earnestly contested.” Otherwise, it will be held not as a living truth but as a “dead dogma”—a prejudice inherited without a full understanding of its rational grounds. Without the challenge posed by falsehood, the very meaning of the truth is in danger of being lost. It is the “collision with error” that produces a “clearer perception and livelier impression of truth”. For Mill, constant questioning and debate are necessary to ensure that beliefs do not become mere socially accepted customs, devoid of intellectual force.
Core Justifications for Free Speech in Legal Theory
Mill’s arguments, along with those of his predecessors, have been synthesized in modern legal theory into three primary justifications for the robust protection of freedom of expression. These justifications are not mutually exclusive and often work in concert to support the principle in different contexts.
- The Pursuit of Truth (The “Marketplace of Ideas”). This theory, most famously articulated in Justice Oliver Wendell Holmes Jr.’s dissenting opinion in Abrams v. United States (1919), posits that “the best test of truth is the power of the thought to get itself accepted in the competition of the market”. It is a skeptical view, doubtful of any individual’s or government’s ability to definitively know the truth, and thus argues that the best course is to allow all ideas to compete freely. The belief is that, over time, true ideas will prevail over false ones through a process of public reason and debate. This is a fundamentally listener-centered theory, focused on the benefit the audience reaps from a vibrant and uninhibited public discourse.
- Democratic Self-Governance. This justification holds that freedom of expression is an indispensable condition for a functioning democracy. For citizens to participate meaningfully in decision-making, hold their elected officials accountable, and make informed choices at the ballot box, they must have access to a wide range of information and be free to discuss governmental affairs without fear of punishment. This theory sees free speech as a structural necessity for popular sovereignty, ensuring social stability by allowing for the peaceful discussion and compromise of differences rather than their violent suppression.
- Individual Self-Fulfillment and Autonomy. This view posits that freedom of expression is an intrinsic human good, essential for individual self-realization. The act of forming one’s own beliefs and expressing them to others is seen as a fundamental aspect of defining and developing one’s “self”. From this perspective, restricting speech is not just a political harm but a personal one, infringing upon an individual’s liberty and their ability to achieve their full potential as a human being.
The classical “marketplace of ideas” metaphor, as conceived by thinkers like Milton and Mill, operates on a set of crucial, yet increasingly tenuous, assumptions. It presupposes a forum for rational discourse where truth, through its inherent strength, will eventually prevail over falsehood on its own merits. However, the modern digital environment, dominated by social media platforms, fundamentally disrupts these presuppositions. The digital “market” is not a neutral field of competition but an actively engineered ecosystem where virality often triumphs over veracity.

Mill’s own argument was predicated on the idea that a free and unfettered search for knowledge would ultimately allow truth to rise to the surface. This implies a process of reasoned deliberation and good-faith argumentation. The contemporary digital ecosystem, however, is not primarily designed for this purpose. Social media platforms are engineered to maximize user engagement for commercial profit. Their algorithms are finely tuned to identify and promote content that is emotionally provocative, sensationalist, and polarizing, because such content is exceptionally effective at capturing and holding human attention. This system is systematically exploited by a range of actors—from state-sponsored propagandists to domestic conspiracy theorists—who employ bots, fake accounts, and sophisticated micro-targeting techniques to manipulate public discourse and inject disinformation into the information bloodstream. Mill himself presciently warned that “men are not more zealous for truth than they often are for error, and a sufficient application of legal or even social penalties will generally succeed in stopping the propagation of either”. In the digital age, the “social penalty” is replaced by the algorithmic reward for outrage.

Therefore, the digital “marketplace” is not the level playing field envisioned by classical liberal theory. It is heavily skewed by bad-faith actors and governed by algorithms that optimize not for truth, but for engagement. This reality suggests that a purely unregulated application of the marketplace theory online may lead not to the discovery of truth, but to the widespread entrenchment of falsehood, the erosion of a shared factual basis necessary for democratic governance, and the very “tyranny of the prevailing opinion” that Mill so feared. This profound disconnect between the theory’s assumptions and the reality of the digital public sphere provides the core philosophical justification for the modern regulatory efforts, particularly in Europe, that seek to impose a degree of order and responsibility on this new and chaotic market.
Codification and Divergence: Key Legal Frameworks
The philosophical principles of free expression have been codified in numerous national constitutions and international human rights instruments. While sharing a common heritage, these legal frameworks have evolved along divergent paths, establishing different standards for protection and permissible limitations. The three most influential models—the international standard set by Article 19, the robustly protective U.S. First Amendment, and the balanced European approach in ECHR Article 10—reveal the fundamental differences in how societies weigh the value of speech against other societal interests.
The International Standard: Article 19
The modern international consensus on freedom of expression was first articulated in Article 19 of the Universal Declaration of Human Rights (UDHR), adopted by the United Nations General Assembly in 1948 in the aftermath of World War II. It states, in clear and expansive terms:
“Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”
The UDHR, while profoundly influential, is a declaration of principles rather than a binding treaty. These rights were given binding legal force in the International Covenant on Civil and Political Rights (ICCPR), adopted in 1966, which entered into force in 1976. Article 19 of the ICCPR largely mirrors the UDHR but introduces a crucial and detailed framework for permissible restrictions. Paragraph 1 protects the right to hold opinions “without interference,” an absolute right that cannot be limited. Paragraph 2 protects the right to freedom of expression, including the freedom to “seek, receive and impart information and ideas of all kinds”.
The critical addition comes in Paragraph 3, which states that the exercise of this right “carries with it special duties and responsibilities” and may therefore be subject to certain restrictions. However, these restrictions are strictly circumscribed: they must be “provided by law” and be “necessary” for one of two legitimate aims:
- (a) For respect of the rights or reputations of others;
- (b) For the protection of national security or of public order (ordre public), or of public health or morals.
This structure establishes a global standard that, while robustly protecting expression, explicitly acknowledges that the right is not absolute and must be balanced against other fundamental rights and societal needs.
Furthermore, the ICCPR contains another provision, Article 20, which stands in direct and significant tension with the U.S. constitutional approach. Article 20(2) goes beyond permitting restrictions and actively requires states to prohibit certain forms of extreme speech:
“Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law.”
This provision reflects a global consensus, born from the horrors of the 20th century, that certain types of speech are so pernicious and destructive to the foundations of a pluralistic society that they fall outside the realm of legitimate expression and must be legally proscribed. The existence of Article 20 creates an affirmative duty on states that is fundamentally at odds with the negative-liberty framework of the U.S. First Amendment.
The American Exception: The First Amendment
Ratified in 1791, the First Amendment to the U.S. Constitution provides what is arguably the world’s most protective legal guarantee for free speech. Its text is a model of simplicity and force:
“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”
The drafting process, led by James Madison, involved several iterations, but the final text reflects a core commitment to limiting the power of the federal government. This principle was tested early in the nation’s history by the Sedition Act of 1798, which criminalized “false, scandalous, and malicious” writings against the government. Although the Supreme Court never ruled on its constitutionality before it expired, the Act was widely condemned and its rejection helped to crystallize a uniquely American understanding of free expression: that the ultimate “censorial power is in the people over the government, and not in the government over the people”.
Over two centuries of jurisprudence, the Supreme Court has interpreted the First Amendment’s protections expansively. Through the Fourteenth Amendment, these protections have been applied not just to the federal government but to state and local governments as well. The Court has consistently held that the amendment protects not only ideas that are popular or conventional but also those that are deeply offensive, illogical, immoral, and even hateful. This robust protection is grounded in the philosophical belief that it is not the proper role of the government to act as an arbiter of truth or to shield individuals from ideas and opinions they find disagreeable or abhorrent. While not absolute—certain narrow categories of speech such as incitement to imminent lawless action, true threats, and defamation receive no protection—the default position in American law is that speech is protected unless the government can meet an exceptionally high burden to justify its restriction.
The European Model: Article 10 of the ECHR
The European Convention on Human Rights (ECHR), adopted in 1950, provides a third, distinct model for protecting free expression. Article 10 is structured in two parts, reflecting a philosophy of balanced rights.
Paragraph 1 establishes the right in broad terms, similar to the UDHR:
“Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”
Paragraph 2, however, immediately qualifies this right, establishing a framework for limitations that is more explicit and detailed than that in the ICCPR:
“The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.”
This two-part structure is the defining feature of the European approach. The jurisprudence of the European Court of Human Rights (ECtHR) has developed a rigorous “triple test” for evaluating any state interference with expression under Paragraph 2. To be permissible, a restriction must be:
- Prescribed by law: The restriction must have a clear and foreseeable basis in domestic law.
- Pursuing a legitimate aim: It must serve one of the specific aims listed in Paragraph 2.
- Necessary in a democratic society: This is the most critical element. The interference must correspond to a “pressing social need” and be “proportionate” to the legitimate aim pursued.
This framework necessitates a constant balancing act, weighing the importance of the expression against the harm it may cause. While the ECtHR, in its landmark ruling in Handyside v. The United Kingdom (1976), famously stated that Article 10 protects not only ideas that are “favourably received” but also those that “offend, shock or disturb the State or any sector of the population,” this principle is always weighed against the extensive list of potential justifications for restriction in Paragraph 2. This inherent balancing makes the European model fundamentally more flexible—and more permissive of regulation, particularly concerning hate speech—than its American counterpart.
Comparative Analysis of Free Speech Frameworks
To crystallize the fundamental differences between these legal instruments, the following table provides a side-by-side comparison of their core tenets. This structured overview highlights the textual and philosophical divergences that drive the contrasting legal outcomes discussed throughout this report, particularly in the contentious realms of hate speech and platform regulation.
| Legal Instrument | Core Text | Historical/Philosophical Basis | Scope of Protection | Permissible Restrictions | Approach to Hate Speech |
| --- | --- | --- | --- | --- | --- |
| U.S. First Amendment | “Congress shall make no law… abridging the freedom of speech…” | Reaction to colonial-era suppression; Enlightenment ideals of popular sovereignty; Mill’s “marketplace of ideas.” | Extremely broad, protecting political, artistic, and commercial speech, including offensive and hateful ideas. | Very narrow, judicially created categories: incitement to imminent lawless action, true threats, defamation, obscenity, etc. | Generally protected unless it falls into an unprotected category like “true threats” or “incitement.” No “hate speech” exception. |
| ICCPR (Arts. 19 & 20) | “Everyone shall have the right to freedom of expression…” but this right “carries with it special duties and responsibilities.” | Post-WWII effort to establish universal human rights norms; a global compromise between different legal traditions. | Broad protection for seeking, receiving, and imparting information and ideas of all kinds, through any media. | Restrictions must be provided by law and necessary for rights/reputations of others, national security, public order, public health, or morals. | Article 20 requires states to prohibit by law any advocacy of national, racial, or religious hatred that constitutes incitement. |
| ECHR (Art. 10) | “Everyone has the right to freedom of expression,” but its exercise “carries with it duties and responsibilities.” | Post-WWII reaction to totalitarianism; commitment to democracy, pluralism, and tolerance, balanced with social responsibility. | Broadly protects ideas that “offend, shock or disturb,” but is explicitly qualified by the duties of the speaker. | Extensive list of legitimate aims for restrictions, which must be prescribed by law and “necessary in a democratic society” (proportionate). | Permissible to restrict and punish hate speech to protect the rights of others and prevent disorder; a balancing test is applied. |
The Digital Disruption: Social Media and the New Public Square
The advent of the internet and the subsequent rise of social media platforms have catalyzed the most significant transformation in the landscape of free expression since the invention of the printing press. This digital disruption has challenged long-standing legal doctrines, blurred traditional distinctions between media forms, and created a new global infrastructure for speech governed by a complex interplay of national laws, corporate policies, and algorithmic systems.
From Print to Pixels: The Legal Evolution Across Media
Historically, U.S. courts have grappled with how to apply the enduring principles of the First Amendment to each new communication technology. The process has often been one of cautious adaptation. Initially, in Mutual Film Corp. v. Industrial Commission of Ohio (1915), the Supreme Court refused to grant First Amendment protection to motion pictures, viewing them as mere “business, pure and simple” with a “capacity for evil”. This decision legitimized a widespread regime of film censorship that lasted for decades. It was not until Joseph Burstyn, Inc. v. Wilson (1952) that the Court reversed course, recognizing that movies are a “significant medium for the communication of ideas” and thus fall under the protection of the First Amendment.
As technology evolved, the Court continued to extend First Amendment principles, recognizing a right to receive information as a corollary to the right to speak. This principle was pivotal in cases involving radio, television, and eventually, the internet. The culminating moment for digital speech came in the 1997 landmark case, Reno v. American Civil Liberties Union. In a sweeping decision, the Supreme Court struck down provisions of the Communications Decency Act of 1996 that sought to criminalize the transmission of “indecent” or “patently offensive” material to minors online. The Court unequivocally declared that speech on the internet is entitled to the highest level of First Amendment protection, akin to that afforded to the print press. It recognized the internet as a “vast democratic forum” and found “no basis for qualifying the level of First Amendment scrutiny that should be applied” to online communication. This decision established a foundational legal principle for the digital age: government attempts to regulate online content based on its substance would be subject to the most rigorous constitutional review.
The “Modern Public Square”: A Flawed Analogy?
In subsequent decisions, most notably Packingham v. North Carolina (2017), the Supreme Court famously described social media platforms as the “modern public square”. This metaphor powerfully captures the functional role these platforms now play in society as the primary venues for public discourse, political debate, and social organization. However, as a legal analogy, it is deeply flawed and has created significant conceptual confusion.
The core problem with the analogy lies in the distinction between public and private property. A traditional public square—a park or a town common—is government-owned property. As such, it is considered a “traditional public forum” where the government’s ability to restrict speech is severely limited by the First Amendment. In contrast, social media platforms like Facebook, X (formerly Twitter), and YouTube are privately owned and operated for-profit corporations. They are not state actors, and therefore, the First Amendment’s constraints on government censorship do not directly apply to their content moderation decisions. This crucial distinction means that a platform can remove content, suspend users, or enforce its terms of service in ways that would be unconstitutional if done by a government entity.
This has led to legal and academic debates invoking the “state action” doctrine and the legacy of the “company town” case, Marsh v. Alabama (1946). In Marsh, the Supreme Court held that a privately owned town that performed all the functions of a traditional municipality could not prohibit the distribution of religious literature on its sidewalks, effectively treating it as a state actor for First Amendment purposes. Proponents of applying this logic to social media argue that when a private entity controls a space that has become the functional equivalent of a public forum, it should be subject to some constitutional constraints to protect the expressive rights of its users. However, courts have been highly reluctant to extend this doctrine to online platforms, leaving their content moderation practices largely governed by corporate policy rather than constitutional law.
The Intermediary’s Dilemma: Platform vs. Publisher
The debate over how to regulate online intermediaries is often framed, particularly in political discourse, as a choice between treating them as “platforms” or “publishers.” This distinction, however, is largely a red herring with no basis in U.S. law. The term “platform” does not appear in the relevant statute, Section 230, and the law was specifically designed to render the distinction legally irrelevant.
To understand why, one must look at the legal landscape for defamation liability before Section 230 was enacted. Common law traditionally distinguished between a “publisher” (like a newspaper) and a “distributor” (like a bookstore or newsstand). A publisher could be held strictly liable for any defamatory content it printed, even if it was written by a third party (e.g., in a letter to the editor). A distributor, on the other hand, was only liable if it knew or had reason to know that the material it was distributing was defamatory.
In the early days of the internet, courts struggled to apply this framework. In Cubby, Inc. v. CompuServe Inc. (1991), a court found that the online service provider CompuServe was a mere distributor of content on its forums and, lacking knowledge of the defamatory material, could not be held liable. However, in Stratton Oakmont, Inc. v. Prodigy Services Co. (1995), a court reached the opposite conclusion. Because Prodigy actively moderated its forums and used screening software, the court deemed it a publisher and held it liable for defamatory user posts. This pair of rulings created a perverse incentive: online services that tried to be responsible and moderate harmful content would be treated as publishers and face greater legal liability, while those that adopted a hands-off, “anything goes” approach would be protected as distributors. This “moderator’s dilemma” threatened to either stifle the growth of online communities or turn them into unmoderated cesspools of harmful content.
Legislating the Internet: A Transatlantic Chasm
It was in direct response to this dilemma that the U.S. Congress and, decades later, the European Union developed their foundational regulatory frameworks for the internet. Their approaches, however, could not be more different, reflecting the deep philosophical and historical divergences in their respective legal traditions.
Section 230 of the Communications Decency Act (U.S.)
Enacted in 1996, Section 230 of the Communications Decency Act was designed to solve the moderator’s dilemma and foster the growth of the nascent internet. It is often called “the twenty-six words that created the internet” and consists of two key provisions:
- Section 230(c)(1): This is the core immunity provision. It states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. This language effectively eliminates the publisher/distributor distinction for online intermediaries, granting them broad immunity from a wide range of state-law claims (like defamation) arising from third-party content.
- Section 230(c)(2): This is the “Good Samaritan” provision. It provides a separate immunity for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”. This provision was intended to empower platforms to moderate content without fear of being sued for their moderation decisions.
The judicial interpretation of Section 230 has been exceptionally broad. The landmark 1997 case Zeran v. AOL established that the immunity under (c)(1) is expansive, protecting platforms from liability even when they have been notified of harmful or defamatory content and fail to remove it. This interpretation has provided a formidable legal shield for online platforms, allowing them to host vast amounts of user-generated content without the constant threat of litigation.
In recent years, however, Section 230 has become the subject of intense political debate. Critics from across the political spectrum argue that this broad immunity shields “Big Tech” from accountability for the real-world harms facilitated on their platforms, including the spread of disinformation, hate speech, terrorist content, and material harmful to children. Conversely, defenders of the law argue that weakening or repealing it would have a catastrophic chilling effect on online speech. Faced with the threat of liability, platforms would likely engage in massive, overly cautious censorship, removing any content that could be remotely controversial, thereby threatening the open and diverse nature of the internet.
The Digital Services Act (DSA) (E.U.)
The European Union has taken a fundamentally different path. Instead of granting broad immunity, the Digital Services Act (DSA), which became fully applicable in early 2024, imposes a comprehensive set of affirmative obligations on online intermediaries, creating a co-regulatory framework designed to enhance user safety and platform accountability. The DSA operates on a tiered system, with the most stringent rules applying to VLOPs and Very Large Online Search Engines (VLOSEs) that have more than 45 million monthly active users in the EU.
Key provisions of the DSA include:
- Notice-and-Action System: The DSA codifies a “notice-and-action” mechanism. While platforms are not required to proactively monitor all content, they are obliged to act “expeditiously” to remove or disable access to content once it is flagged as illegal under either EU or national law.
- Systemic Risk Mitigation: VLOPs and VLOSEs are required to conduct annual risk assessments to identify and analyze systemic risks stemming from their services. These risks include the dissemination of illegal content, negative effects on fundamental rights (including freedom of expression), and the spread of disinformation that impacts civic discourse, electoral processes, or public security. Based on these assessments, they must implement reasonable and effective mitigation measures.
- Transparency and Accountability: The DSA mandates a high degree of transparency. Platforms must provide clear explanations in their terms of service about their content moderation policies, including the use of algorithmic decision-making. They are also subject to independent audits and must grant vetted researchers access to platform data to study systemic risks.
- User Rights: The Act establishes significant new rights for users. When a platform removes content or suspends an account, it must provide the user with a clear “statement of reasons” for the decision. Users have the right to appeal these decisions through an internal complaint-handling system and also have the right to select a certified out-of-court dispute settlement body to resolve the conflict.
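To make these obligations concrete, the following is a minimal sketch, in Python, of a “statement of reasons” record of the kind the DSA requires platforms to issue when they act against content or accounts. The field names, types, and redress labels are hypothetical illustrations of the concepts described above, not the statutory text or any platform’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical data model loosely mirroring the DSA's "statement of reasons"
# obligation. All field names are illustrative, not statutory language.

@dataclass
class StatementOfReasons:
    decision_id: str
    content_id: str
    action: str                   # e.g. "removal", "visibility_restriction", "suspension"
    facts_and_circumstances: str  # what the decision relied on
    ground: str                   # "illegal_content" or "terms_of_service"
    basis: str                    # statute invoked, or the ToS clause breached
    detected_automatically: bool  # was the content flagged by automated means?
    decided_automatically: bool   # was the decision itself automated?
    # The DSA guarantees several avenues of redress; labels here are informal.
    redress_options: list[str] = field(default_factory=lambda: [
        "internal_complaint_handling",
        "out_of_court_dispute_settlement",
        "judicial_redress",
    ])
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def notify_user(s: StatementOfReasons) -> dict:
    """Package the statement for delivery to the affected user."""
    return {
        "decision": s.decision_id,
        "action": s.action,
        "why": s.facts_and_circumstances,
        "ground": f"{s.ground}: {s.basis}",
        "automation": {
            "flagged_by_ai": s.detected_automatically,
            "decided_by_ai": s.decided_automatically,
        },
        "appeal_routes": s.redress_options,
    }
```

The structural point of the sketch is that, under the DSA, every moderation decision must carry its own justification and its own appeal path, a due-process layer that the American Section 230 regime leaves entirely to platform discretion.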
The DSA represents a paradigm shift from the American hands-off approach to a model of co-regulation that treats large platforms as having public responsibilities that must be legally enforced.
The stark contrast between the U.S. and E.U. regulatory models is not merely a matter of academic interest; it has profound global implications. The E.U.’s Digital Services Act, with its stringent obligations and the threat of massive financial penalties—up to 6% of a company’s global annual turnover—is creating a powerful de facto global standard for content moderation. This phenomenon, often termed the “Brussels Effect,” occurs when multinational corporations, in order to avoid the complexity and cost of maintaining different standards for different markets, choose to adopt the strictest regulation as their global baseline. Given that most major social media platforms are U.S.-based but have a significant user base in the E.U., they have a powerful financial incentive to align their worldwide terms of service and content moderation practices with the DSA’s requirements. This effectively means that European speech norms, which are more restrictive regarding issues like hate speech and disinformation, are being “exported” and are increasingly governing what American users can say and see online, regardless of the broader protections afforded by the First Amendment. This creates a direct and unavoidable conflict with the American legal tradition and with recent U.S. state-level legislation, such as laws passed in Texas and Florida, that attempt to prohibit platforms from engaging in the very kind of viewpoint-based moderation that the DSA implicitly encourages. This sets the stage for major legal and geopolitical clashes over the fundamental principles that should govern online speech in a globalized world.
The Challenge of Hateful Speech
Among the most contentious and divisive issues in the digital era is the regulation of “hate speech.” The ability of social media to amplify hateful ideologies and target marginalized groups has brought the conflict between protecting free expression and preventing harm into its sharpest focus. Here, the philosophical and legal chasm between the United States and Europe is at its widest, reflecting fundamentally different conceptions of the role of the state and the purpose of speech protections.
Defining the Indefinable
A central challenge in regulating hate speech is the lack of a universally agreed-upon legal definition. Under international human rights law, “hate speech” is not a formal legal term of art; rather, it is a concept used to describe a broad category of expression that is deeply pejorative and threatens social peace. The United Nations has adopted a working definition for its own strategic purposes, describing hate speech as:
“…any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender, or other identity factor.”
This definition is intentionally broad and is meant to guide policy, not to serve as a precise legal standard for prohibition. The Council of Europe offers a similar definition, encompassing “all forms of expression which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance”. It is crucial to distinguish these broad conceptual definitions from the much narrower category of speech that international law requires states to prohibit: “incitement to discrimination, hostility or violence,” as stipulated in Article 20 of the ICCPR. Much of what is commonly called “hate speech” may be offensive and promote intolerance but may not rise to the high threshold of incitement. It is in this gray area that the different legal systems diverge most dramatically.
A Tale of Two Systems: Contrasting U.S. and European Approaches
The legal treatment of hate speech in the United States and Europe represents two starkly different models, each rooted in its unique historical experience and constitutional philosophy.
United States: The Primacy of Free Expression
In the United States, there is a broad and firmly established legal consensus: “hate speech” is, for the most part, constitutionally protected speech. The U.S. Supreme Court has repeatedly and emphatically rejected the idea that speech can be punished simply because it is offensive or expresses hateful ideas. This principle is not a recent development but has been a cornerstone of modern First Amendment jurisprudence.
Two landmark cases are particularly illustrative. In R.A.V. v. City of St. Paul (1992), the Court unanimously struck down a local ordinance that prohibited the display of symbols, such as a burning cross, that one knows or has reason to know “arouses anger, alarm or resentment in others on the basis of race, color, creed, religion or gender.” The Court ruled that the ordinance was unconstitutional because it was a form of “viewpoint discrimination.” While the government could ban “fighting words” generally, it could not selectively ban only those fighting words that conveyed a particular hateful message. More recently, in Matal v. Tam (2017), the Court reaffirmed this principle, striking down a federal law that prohibited the registration of “disparaging” trademarks. Writing for the Court, Justice Samuel Alito declared, “Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate’”.
Under U.S. law, therefore, hateful expression can only be restricted if it falls into one of the narrowly defined categories of unprotected speech. For example, if hate speech constitutes a “true threat” of violence against a specific individual or group, it can be prosecuted. Similarly, if it is “directed to inciting or producing imminent lawless action and is likely to incite or produce such action,” it meets the high threshold for incitement established in Brandenburg v. Ohio (1969) and loses its constitutional protection. Absent these specific elements, however, speech remains protected, no matter how vile its content.
Europe: Balancing Expression with Dignity and Public Order
The European approach is fundamentally different. Shaped by the continent’s 20th-century history of totalitarianism and genocide, which were fueled by state-sponsored hate propaganda, European law explicitly permits the restriction of hate speech to protect the rights of others and maintain social order. This is not seen as an exception to the right of free expression but as an inherent part of it, as reflected in Article 10(2) of the ECHR, which states that the right carries with it “duties and responsibilities”.
The jurisprudence of the European Court of Human Rights (ECtHR) consistently reflects this balancing act. The Court has regularly upheld convictions for speech that incites hatred or denies historical atrocities. In cases like Garaudy v. France (2003), the Court found that Holocaust denial was an abuse of the right to free expression aimed at the destruction of the fundamental rights of others and was therefore not protected by Article 10.
A particularly important recent development in the Court’s case law is the establishment of a positive obligation on states to protect individuals from hate speech. The landmark case of Beizaras and Levickas v. Lithuania (2020) is a powerful example. The case involved a gay couple who posted a photo of themselves kissing on Facebook, which was met with hundreds of violently homophobic and threatening online comments. The Lithuanian authorities refused to open an investigation. The ECtHR ruled unanimously that this failure constituted a violation of the Convention. The Court held that the authorities had a positive obligation under Article 8 (right to private life) and Article 14 (prohibition of discrimination) to effectively investigate whether the comments constituted incitement to hatred and violence. It found that the authorities’ failure to do so was due to a “discriminatory state of mind” and denied the applicants an effective remedy. This judgment sends a strong message that, in Europe, states cannot remain passive in the face of online hate; they have an affirmative duty to apply criminal law to protect targeted individuals and groups.
The fundamental divergence between the U.S. and European legal systems on the issue of hate speech is not merely a product of different legal texts. It stems from a deeper philosophical disagreement over the primary value that the law of free expression is meant to protect. The American system prioritizes a conception of “liberty” that is heavily speaker-centric. Its main purpose is to safeguard the autonomy of the individual speaker from government interference, even if that speech causes profound emotional or dignitary harm to others. The underlying belief, articulated by Justice Louis Brandeis, is that the proper remedy for bad speech is “more speech, not enforced silence”. The system places its faith in the “marketplace of ideas” to eventually sort truth from falsehood and trusts the citizenry to reject hateful ideologies on their own. The core value is individual liberty from state coercion.
In contrast, the European human rights system is more victim-centric and society-centric, prioritizing the value of “dignity”. It recognizes that speech is not just an abstract exchange of ideas but a social act with real-world consequences. It acknowledges that hate speech can inflict deep psychological harm, dehumanize individuals, and poison the social environment, thereby undermining the very foundations of a democratic society built on “pluralism, tolerance and broadmindedness”. From this perspective, protecting the dignity of all members of society is a prerequisite for a functioning democracy, and this may justify placing limits on the liberty of those who would use speech to attack that dignity. This philosophical difference is embedded in the legal language itself. The U.S. First Amendment speaks in negative terms—”Congress shall make no law… abridging”—a constraint on government power. The ECHR speaks in affirmative terms of rights carrying “duties and responsibilities,” a concept that inherently implies the need to balance the rights of the speaker against the rights and well-being of the community. The transatlantic debate over hate speech, therefore, is not just a legal dispute; it is a proxy for a deeper conflict between two distinct liberal traditions: one that elevates liberty as the paramount value, and one that views dignity as a co-equal, and at times overriding, concern.
Intersecting Rights and Competing Values in the Digital Realm
The exercise of freedom of expression online does not occur in a vacuum. It constantly intersects with other fundamental rights and legal values, creating complex and often contentious balancing acts. The digital environment has amplified these intersections, particularly in the areas of privacy, reputation, and intellectual property, forcing courts and legislatures to navigate competing claims in a rapidly evolving technological landscape.
Speech vs. Privacy: Two Sides of the Same Coin
The rights to freedom of expression and privacy are often portrayed as being in conflict, but they are more accurately understood as interdependent and mutually reinforcing. As one analysis puts it, they are “two sides of the same coin,” with each being an essential prerequisite for the enjoyment of the other. The ability to form and express one’s thoughts, particularly on sensitive political, religious, or personal matters, requires an autonomous, private space free from the chilling effect of surveillance.
In the digital age, this interdependence has become critically important. Nearly every online action—from a web search to a social media post to a private message—is an act of expression that generates a digital trace. The pervasive collection of this data by both corporations and governments can have a profound chilling effect on speech. When individuals know their online activities are being monitored, tracked, and analyzed, they are less likely to explore controversial ideas, communicate with dissenting groups, or speak out against powerful interests. This is especially true for journalists relying on confidential sources, human rights defenders documenting abuses, and members of marginalized communities who may face discrimination or physical danger if their identities or associations are exposed. The lack of robust privacy protections online directly threatens the confidence required for the free exercise of speech.
However, conflicts undeniably arise. The right to impart information can clash with an individual’s right to keep personal information private. The phenomenon of “doxing”—the malicious online publication of an individual’s private information, such as their home address or phone number—is a clear example where speech is used as a tool to violate privacy and incite harassment. Similarly, the “right to be forgotten” or the right to erasure, recognized in European data protection law and under discussion elsewhere, creates a direct tension between an individual’s desire to control their digital past and the public’s right to access information, as well as the preservation of a complete historical record. Striking the right balance between these competing interests—protecting individuals from unwarranted intrusions while preserving the free flow of information—is one of the most significant challenges for digital rights in the 21st century.
Speech vs. Reputation: Defamation in the Digital Age
Defamation—the communication of a false statement of fact that harms an individual’s reputation—is a long-established category of speech that receives no First Amendment protection in the United States. The digital age, however, has radically altered the landscape of reputational harm. Falsehoods can now be disseminated to a global audience instantaneously, with the potential to cause devastating and permanent damage to an individual’s personal and professional life.
While victims of defamation can, in theory, sue the original speaker, the legal framework in the U.S. makes it nearly impossible to hold the online platforms that host and amplify this content accountable. This is a direct consequence of Section 230 of the Communications Decency Act. As discussed previously, Section 230 provides near-blanket immunity to “interactive computer services” from liability for content created by third parties. This means that a social media platform, a review site, or an online forum cannot be successfully sued for defamation based on a user’s post, even if the platform is made aware that the post is false and defamatory and refuses to take it down.
This broad immunity was intended to protect online services from being buried under an avalanche of litigation that would stifle the growth of the internet. However, a major consequence has been to leave many victims of online defamation with little effective recourse. Suing the often anonymous or judgment-proof individuals who post the defamatory material is frequently impractical, and the platforms that provide the megaphone for these attacks are legally shielded. This has created a significant imbalance, where the legal tools to protect reputation have been severely weakened in the very environment where reputational attacks are most easily and damagingly launched.
Speech vs. Intellectual Property: Fair Use Online
The intersection of free expression and copyright law presents another critical area of tension and balance. Copyright law grants creators exclusive rights over their work, which can be seen as a restriction on the speech of others who might wish to use that work. To prevent copyright from becoming a tool of censorship and to ensure it aligns with the First Amendment’s goal of promoting the “Progress of Science and useful Arts,” U.S. law contains the vital doctrine of “fair use”.
Fair use is a legal doctrine that permits the unlicensed use of copyright-protected works in certain circumstances. Section 107 of the Copyright Act explicitly lists purposes such as “criticism, comment, news reporting, teaching…, scholarship, or research” as examples of potential fair uses. The doctrine is intentionally flexible; instead of a rigid set of rules, courts apply a four-factor balancing test to determine whether a specific use is fair:
- The purpose and character of the use, including whether it is for commercial or non-profit educational purposes. A key consideration here is whether the use is “transformative”—that is, whether it adds new expression or meaning to the original work, such as in a parody or a critique.
- The nature of the copyrighted work. Use of factual works is more likely to be considered fair than use of highly creative, fictional works.
- The amount and substantiality of the portion used in relation to the copyrighted work as a whole. Using a small, necessary portion is more likely to be fair than using the entire work.
- The effect of the use upon the potential market for or value of the copyrighted work. This is often considered the most important factor, as it examines whether the new use serves as a substitute for the original, thereby harming its commercial value.
In the digital age, fair use has become an essential safeguard for online expression. It is the legal foundation that allows for the creation of memes, video essays, remixes, and other forms of transformative digital culture that quote, critique, and comment on existing media. Without the flexibility of the fair use doctrine, copyright law could be wielded to silence critics, shut down commentary, and stifle the vibrant, participatory culture of the internet. It serves as a crucial “escape valve” that ensures copyright protects creativity without unduly burdening the freedom of expression.
The Algorithmic Age and the Future of Expression
The challenges to free expression in the digital era are now entering a new and more complex phase, driven by the increasing sophistication of algorithms and artificial intelligence. These technologies are not merely tools for communication; they are actively shaping the information environment, curating what we see, influencing what we believe, and even generating new forms of speech. This algorithmic age presents novel threats to democratic discourse and forces a re-evaluation of legal frameworks that were designed for a world of human authors and editors.
The Architecture of Disinformation: Algorithmic Amplification
A fundamental misunderstanding of social media is to view platforms as neutral hosts or passive conduits for user content. In reality, they are highly active curators. Their core business model relies on maximizing user engagement, and the primary tool for achieving this is the recommendation algorithm. These complex systems analyze user data to predict what content will be most likely to capture and hold an individual’s attention, and then prioritize that content in their news feeds and recommendation streams.
While not inherently malicious, this engagement-based model has a systematic and well-documented side effect: it tends to amplify the most extreme, sensationalist, and emotionally charged content, as this is what most reliably provokes a reaction. This creates an architecture that is highly susceptible to the spread of disinformation, conspiracy theories, and extremist propaganda. False and inflammatory content often spreads faster and wider than factual, nuanced information precisely because it is designed to trigger strong emotional responses like anger and fear, which are powerful drivers of engagement. This phenomenon of “algorithmic radicalization” can create “echo chambers” and “filter bubbles” that reinforce existing biases and pull users toward more extreme ideological positions.
The consequences for democratic processes are profound. Both foreign and domestic actors now systematically exploit these algorithmic systems to manipulate public opinion and undermine election integrity. By creating and disseminating divisive content through networks of fake accounts and bots, they can hijack the platforms’ own amplification mechanisms to inject propaganda and disinformation into the public discourse on an industrial scale. This weaponization of social media algorithms poses a direct threat to the shared factual basis upon which informed self-governance depends.
The Rise of Artificial Intelligence: New Frontiers of Speech
The rapid development of generative artificial intelligence (AI) has opened up another frontier of challenges for free expression. These technologies raise novel legal and philosophical questions about the nature of speech and authorship. While it is a settled principle that technologies themselves do not possess rights, the use of AI for expressive purposes by humans clearly implicates the First Amendment.
AI is having a dual impact on the speech landscape. On one hand, it is being increasingly deployed for automated content moderation. Platforms use AI systems to scan and remove vast quantities of content that violate their policies, such as spam, hate speech, or terrorist propaganda. While necessary for operating at scale, these systems are often context-blind and prone to error, leading to the wrongful removal of legitimate and important speech—a form of automated censorship that lacks transparency and due process.
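A deliberately naive sketch illustrates the failure mode. The blocked-term list and example posts are invented placeholders, and real systems use trained classifiers rather than keyword matching, but the underlying problem is the same in kind: the filter matches surface features while remaining blind to communicative intent.

```python
# Deliberately naive keyword filter (hypothetical term list) showing the
# context-blindness problem: surface matching cannot distinguish abuse
# from reporting, quotation, or counter-speech about the abuse.
BLOCKED_PHRASES = {"terrorist propaganda"}  # invented placeholder list

def naive_moderate(post: str) -> str:
    text = post.lower()
    if any(phrase in text for phrase in BLOCKED_PHRASES):
        return "REMOVE"
    return "ALLOW"

recruitment = "Join our cause and spread terrorist propaganda worldwide."
journalism = "Our investigation traces how terrorist propaganda spreads online."
print(naive_moderate(recruitment))  # REMOVE (the intended catch)
print(naive_moderate(journalism))   # REMOVE (wrongful takedown of reporting)
```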
On the other hand, generative AI is now a powerful tool for creating content. This has enabled new forms of creativity, but it has also supercharged the production of disinformation. The most prominent threat is the rise of “deepfakes”—highly realistic but entirely fabricated audio and video content. Maliciously deployed, deepfakes can be used to create convincing but false evidence of a political candidate saying or doing something they never did, to impersonate officials, or to generate non-consensual pornographic material for harassment. This technology makes it easier than ever to pass off fake events as real and, perhaps more insidiously, to dismiss real events as fake—a phenomenon known as the “liar’s dividend”. The potential for AI-generated disinformation to disrupt elections and erode public trust is a threat of the highest order.
Navigating the Future: Regulatory and Policy Horizons
The legal and policy response to these technological challenges is still in its early stages, with significant activity and legal battles underway on both sides of the Atlantic.
In the United States, the judiciary and legislature are grappling with how to apply existing laws to these new realities. The Supreme Court’s 2023-2024 term was marked by several significant cases involving social media. In Moody v. NetChoice, LLC, the Court addressed the constitutionality of state laws in Florida and Texas that seek to limit platforms’ content moderation choices, raising fundamental questions about the editorial rights of private platforms. In Murthy v. Missouri, the Court considered whether the federal government’s efforts to persuade platforms to remove COVID-19 misinformation constituted unconstitutional “jawboning” or coercion. These cases highlight the ongoing struggle to define the boundaries of permissible government influence and platform discretion. Meanwhile, the debate over reforming Section 230 continues, with a particular focus on whether its immunity should extend to a platform’s own algorithmic amplification of harmful content—a question the Supreme Court sidestepped in Gonzalez v. Google but which remains a central point of contention.
In the European Union, the regulatory path is clearer and more assertive. The implementation and enforcement of the Digital Services Act (DSA) and the strengthened 2022 Code of Practice on Disinformation represent a comprehensive attempt to create a binding legal framework for platform accountability. The DSA’s requirements for systemic risk assessment directly target the problems of algorithmic amplification and the spread of disinformation and illegal hate speech. The EU model is predicated on the idea that very large platforms have a societal responsibility that must be enforced through regulation, including transparency mandates, independent audits, and the empowerment of users and researchers.
The path forward is fraught with complexity. A rights-respecting digital future will likely require a multi-stakeholder approach that moves beyond the simplistic binaries of total immunity versus government censorship. Key principles for future regulation and policy should include a strong emphasis on algorithmic transparency, allowing independent researchers and the public to understand how information is being prioritized and disseminated; robust procedural fairness in content moderation, giving users meaningful rights to appeal and redress; and the promotion of user agency and control over their own information feeds. The challenge is to craft policies that can mitigate the undeniable harms of the digital age without sacrificing the fundamental principles of free and open discourse that are essential to democratic life.
Conclusion: Reaffirming Principles for a New Era
The digital revolution has brought the global community to a critical inflection point, creating a profound schism in the shared understanding and legal treatment of freedom of expression. The foundational principles, forged in an era of state-centric power, are now being tested by a new paradigm of privately controlled, algorithmically mediated public squares. This report has traced the contours of this transformation, revealing a stark divergence between two major democratic legal traditions. The United States continues to adhere to a robust, liberty-focused framework designed primarily to constrain the state, a position that leaves vast and largely unregulated power over public discourse in the hands of a few private technology companies. The European Union, in contrast, is constructing an ambitious and comprehensive regulatory regime that imposes public duties and responsibilities on these same private actors, prioritizing the protection of individual dignity and social cohesion. As of now, neither approach has proven fully adequate to the multifaceted challenges of algorithmic amplification, AI-driven disinformation, and the scalable spread of online hate.
The path forward requires a move beyond the simplistic and often unhelpful binaries that have come to dominate the debate: liberty versus safety, state control versus platform immunity, censorship versus a “free-for-all.” A sustainable and rights-respecting digital future will depend on the development of a more nuanced, multi-stakeholder approach. This approach must be grounded in principles of transparency, procedural fairness, and user empowerment. Greater transparency into the design and operation of algorithmic systems is not a panacea, but it is an essential prerequisite for accountability, allowing independent researchers, civil society, and the public to scrutinize the forces that shape the information environment. Meaningful due process in content moderation—including clear notice, the right to a genuine appeal, and access to independent redress mechanisms—is necessary to protect individual users from arbitrary or biased enforcement of opaque platform rules. Ultimately, empowering users with greater control over their own data and their information consumption is crucial to restoring a degree of individual agency in an ecosystem designed for passive engagement.
The core principles articulated by thinkers like John Stuart Mill—the relentless pursuit of truth, the intrinsic value of dissent, and the danger of “dead dogmas”—remain as vital today as they were in the 19th century. However, their application must be thoughtfully and courageously adapted to an information ecosystem where the “marketplace of ideas” is no longer a neutral, abstract forum but an actively engineered and commercially driven environment. The enduring challenge is to preserve the immense, emancipatory potential of digital communication—its ability to connect, inform, and empower—while simultaneously building the resilience and safeguards needed to mitigate its unprecedented capacity for harm. This requires not the abandonment of our foundational principles of free expression, but their intelligent and resolute reapplication to a world the framers of these essential rights could never have imagined.
References
- Akee, R., Jones, M. R., & Porter, S. R. (2021). Race, and the Politics of COVID-19. Springer.
- ARTICLE 19. (n.d.). ARTICLE 19: Defending expression since 1987. Retrieved from https://stories.article19.org/article-19-defending-expression-since-1987/index.html
- Balkin, J. M. (2004). How Rights Change: Freedom of Speech in the Digital Era. Sydney Law Review, 26(1). Retrieved from https://openyls.law.yale.edu/bitstream/handle/20.500.13051/1734/How_Rights_Change_Freedom_of_Speech_in_the_Digital_Era.pdf?sequence=2
- Barrett, A. C. (2024). Lindke v. Freed. Supreme Court of the United States.
- Blasi, V. (2025). Is John Stuart Mill’s On Liberty Obsolete? Daedalus.
- Bipartisan Policy Center. (n.d.). Section 230 and the Future of Online Platforms. Retrieved from https://bipartisanpolicy.org/blog/section-230-online-platforms/
- Brennan Center for Justice. (2019). Free Speech and the Regulation of Social Media Content. Retrieved from https://www.brennancenter.org/sites/default/files/analysis/First_Amendment_Principles_2019-FINAL_Interactive_O0JA9oV.pdf
- Brookings Institution. (2020). Disinformation, social media, and foreign interference: What can go wrong in the 2020 elections? Retrieved from https://www.brookings.edu/events/disinformation-social-media-and-foreign-interference-what-can-go-wrong-in-the-2020-elections/
- Brookings Institution. (2022). Misinformation is eroding the public’s confidence in democracy. Retrieved from https://www.brookings.edu/articles/misinformation-is-eroding-the-publics-confidence-in-democracy/
- Brookings Institution. (2022). EU Code of Practice on Disinformation. Retrieved from https://www.brookings.edu/articles/eu-code-of-practice-on-disinformation/
- Brookings Institution. (2023). Interpreting the ambiguities of Section 230. Retrieved from https://www.brookings.edu/articles/interpreting-the-ambiguities-of-section-230/
- Brookings Institution. (2024). Examining the intersection of data privacy and civil rights. Retrieved from https://www.brookings.edu/articles/examining-the-intersection-of-data-privacy-and-civil-rights/
- Carlton Fields. (2025). Top First Amendment Cases of the 2024-2025 Supreme Court Term. Retrieved from https://www.carltonfields.com/insights/publications/2025/top-first-amendment-cases-of-the-2024-2025-supreme-court-term
- Carnegie Council for Ethics in International Affairs. (n.d.). Speech. Retrieved from https://www.carnegiecouncil.org/explore-engage/classroom-resources/worksheets-and-excerpts-on-history-and-government/speech
- Columbia Journalism Review. (n.d.). What should we do about the algorithmic amplification of disinformation? Retrieved from https://www.cjr.org/the_media_today/what-should-we-do-about-the-algorithmic-amplification-of-disinformation.php
- Chambers and Partners. (n.d.). User Content Moderation Under the Digital Services Act: 10 Key Takeaways. Retrieved from https://chambers.com/articles/user-content-moderation-under-the-digital-services-act-10-key-takeaways-2
- Citron, D. K., & Penney, J. (2025). Empowering Speech by Moderating It. Daedalus.
- Claiming Human Rights. (n.d.). UDHR Article 19. Retrieved from http://www.claiminghumanrights.org/udhr_article_19.html
- Columbia Global Freedom of Expression. (n.d.). Theoretical Foundations. Retrieved from https://teaching.globalfreedomofexpression.columbia.edu/node/3
- Columbia Global Freedom of Expression. (n.d.). Artificial Intelligence and Freedom of Expression. Retrieved from https://teaching.globalfreedomofexpression.columbia.edu/index.php/resources/artificial-intelligence-and-freedom-expression
- Congressional Research Service. (2022). False Speech and the First Amendment: Constitutional Limits on Regulating Misinformation. IF12180. Retrieved from https://www.congress.gov/crs-product/IF12180
- Congressional Research Service. (2023). Section 230: An Overview. R46751. Retrieved from https://www.congress.gov/crs-product/R46751
- Congressional Research Service. (2024). Section 230: An Overview. IF12584. Retrieved from https://www.congress.gov/crs-product/IF12584
- Council of Europe. (n.d.). Hate speech. Retrieved from https://www.coe.int/en/web/freedom-expression/hate-speech
- Council on Foreign Relations. (n.d.). Defending America From Foreign Election Interference. Retrieved from https://www.cfr.org/report/defending-america-foreign-election-interference
- Crowell & Moring LLP. (n.d.). Section 230 Reform: What Websites Need to Know Now. Retrieved from https://www.crowell.com/en/insights/client-alerts/section-230-reform-what-websites-need-to-know-now
- Cybersecurity & Infrastructure Security Agency. (n.d.). Election Security. Retrieved from https://www.cisa.gov/topics/election-security
- Digital Media Law Project. (2025). Fair Use. Retrieved from http://www.dmlp.org/legal-guide/fair-use
- Dynamis LLP. (n.d.). Section 230 Immunity: Navigating the Shifting Landscape. Retrieved from https://www.dynamisllp.com/knowledge/section-230-immunity-changes
- Electronic Frontier Foundation. (2020). Publisher or Platform? It Doesn’t Matter. Retrieved from https://www.eff.org/deeplinks/2020/12/publisher-or-platform-it-doesnt-matter
- Electronic Frontier Foundation. (n.d.). CDA 230: Legal Cases. Retrieved from https://www.eff.org/issues/cda230/legal
- Electronic Frontier Foundation. (n.d.). Bloggers’ FAQ on Section 230 Protections. Retrieved from https://www.eff.org/issues/bloggers/legal/liability/230
- Equality and Human Rights Commission. (2021). Article 10: Freedom of expression. Retrieved from https://www.equalityhumanrights.com/human-rights/human-rights-act/article-10-freedom-expression
- European Commission. (n.d.). The 2022 Code of Practice on Disinformation. Retrieved from https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation
- European Commission. (n.d.). A strengthened EU Code of Practice on Disinformation. Retrieved from https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/new-push-european-democracy/protecting-democracy/strengthened-eu-code-practice-disinformation_en
- European Commission. (n.d.). How the Digital Services Act enhances transparency online. Retrieved from https://digital-strategy.ec.europa.eu/en/policies/dsa-impact-platforms
- European Commission. (n.d.). Out-of-court dispute settlement under the DSA. Retrieved from https://digital-strategy.ec.europa.eu/en/policies/dsa-out-court-dispute-settlement
- European Commission. (2025). Commission refers five Member States to the Court of Justice of the European Union for failing to fully implement the Digital Services Act. IP/25/1081. Retrieved from https://ec.europa.eu/commission/presscorner/detail/en/ip_25_1081
- European Court of Human Rights. (n.d.). European Convention on Human Rights. Retrieved from https://www.echr.coe.int/documents/d/echr/convention_ENG
- European Court of Human Rights. (2022). Guide on Article 10 of the European Convention on Human Rights: Freedom of expression. Retrieved from https://rm.coe.int/guide-on-article-10-freedom-of-expression-eng/native/1680ad61d6
- European Court of Human Rights. (n.d.). Freedom of Expression. Retrieved from https://www.coe.int/en/web/human-rights-convention/expression1
- European Parliament. (2024). Mis- and disinformation on social media and related risks to election integrity. EPRS_ATA(2024)767150. Retrieved from https://www.europarl.europa.eu/thinktank/en/document/EPRS_ATA(2024)767150
- European Parliament. (2025). Hate speech: Comparing the US and EU approaches. EPRS_BRI(2025)772890. Retrieved from https://www.europarl.europa.eu/RegData/etudes/BRIE/2025/772890/EPRS_BRI(2025)772890_EN.pdf
- European Union Agency for Fundamental Rights. (n.d.). European Convention on Human Rights – Article 10. Retrieved from https://fra.europa.eu/en/law-reference/european-convention-human-rights-article-10
- European Union Agency for Fundamental Rights. (n.d.). Article 10 – Freedom of thought, conscience and religion. Retrieved from https://fra.europa.eu/en/eu-charter/article/10-freedom-thought-conscience-and-religion?page=1
- Federal Election Commission. (2019). Symposium: Digital Disinformation and the Threat to Democracy. Retrieved from https://www.fec.gov/about/leadership-and-structure/ellen-l-weintraub/symposium-digital-disinformation-and-threat-democracy-information-integrity-2020-elections/
- Foundation for Individual Rights and Expression (FIRE). (n.d.). History of Free Speech. Retrieved from https://www.thefire.org/history-free-speech
- Foundation for Individual Rights and Expression (FIRE). (n.d.). John Stuart Mill’s Enduring Arguments for Free Speech. Retrieved from https://www.thefire.org/research-learn/john-stuart-mills-enduring-arguments-free-speech
- Foundation for Individual Rights and Expression (FIRE). (n.d.). Unprotected Speech Synopsis. Retrieved from https://www.thefire.org/research-learn/unprotected-speech-synopsis
- Foundation for Individual Rights and Expression (FIRE). (n.d.). Artificial Intelligence, Free Speech, and the First Amendment. Retrieved from https://www.thefire.org/research-learn/artificial-intelligence-free-speech-and-first-amendment
- Freedom House. (2019). Digital Election Interference. Retrieved from https://freedomhouse.org/report/freedom-on-the-net/2019/the-crisis-of-social-media/digital-election-interference
- Freedom House. (2023). The Repressive Power of Artificial Intelligence. Retrieved from https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence
- Freedom Online Coalition. (n.d.). Joint Statement on Information Integrity Online and Elections. Retrieved from https://freedomonlinecoalition.com/joint-statement-on-information-integrity-online-and-elections/
- Future of Free Speech Project. (n.d.). Hate Speech Case Database. Retrieved from https://futurefreespeech.org/hate-speech-case-database/
- Future of Free Speech Project. (n.d.). Homepage. Retrieved from https://futurefreespeech.org/
- Garvey, K. (2022). Weak AI Has Free Speech Rights. Fordham Law Review, 91(3), 953-988.
- Georgetown University Free Speech Project. (n.d.). Social Media: The New Public Square? Retrieved from https://freespeechproject.georgetown.edu/social-media-the-new-public-square/
- GetStream.io. (n.d.). The Digital Services Act (DSA) and Content Moderation Requirements. Retrieved from https://getstream.io/blog/dsa-moderation-requirements/
- The Guardian. (2025). Listening to the Law review. Retrieved from https://www.theguardian.com/books/2025/sep/21/listening-to-the-law-review-amy-coney-barrett
- Harvard Law Review. (2018). Section 230 as First Amendment Rule. Harvard Law Review, 131, 2027.
- Howe, A. (2024, June 26). Justices side with Biden over government’s influence on social media content moderation. SCOTUSblog. Retrieved from https://www.scotusblog.com/2024/06/justices-side-with-biden-over-governments-influence-on-social-media-content-moderation/
- ILGA-Europe. (2020, January 14). ILGA-Europe welcomes landmark judgment on online hate speech. Retrieved from https://www.ilga-europe.org/press-release/ilga-europe-welcomes-landmark-judgment-online-hate-speech/
- Information Technology and Innovation Foundation. (2025). How the EU’s Content Moderation Regulation Harms U.S. Tech Competitiveness. Retrieved from https://itif.org/publications/2025/05/14/eu-content-moderation-regulation/
- International Bar Association. (n.d.). Fake news. Retrieved from https://www.ibanet.org/article/0adbdb24-c0c2-4cc8-bef8-e9b172dcf12a
- Iowa State University. (n.d.). Frequently Asked Questions About the First Amendment and Free Speech. Retrieved from https://freespeech.iastate.edu/faq
- Jaffer, J. (2021, December 1). Liability for Amplification of Disinformation: A Law of Unintended Consequences? American Constitution Society.
- Jefferson University. (2021). Can Social Media Sites Be Held Accountable for Users’ Posts? Retrieved from https://www.jefferson.edu/news/2021/02/can-social-media-sites-be-held-accountable-for-users-posts.html
- Judicial Learning Center. (n.d.). Your 1st Amendment Rights. Retrieved from https://judiciallearningcenter.org/home-page/student-center/landmark-cases/your-1st-amendment-rights/
- Justia. (n.d.). Freedom of Expression: The Philosophical Basis. Retrieved from https://law.justia.com/constitution/us/amendment-01/05-the-philosophical-basis.html
- Justia. (n.d.). Supreme Court Cases by Topic: Free Speech. Retrieved from https://supreme.justia.com/cases-by-topic/free-speech/
- Kiska, T. J. (2012). Hate Speech: A Comparison of the Law in the United States and Europe. Regent University Law Review, 25(1), 103-138.
- Knight First Amendment Institute. (n.d.). Reimagining the First Amendment in the Digital Age. Arnold Ventures.
- Lehigh University. (n.d.). AI, Free Speech and the Future of Democracy. Retrieved from https://cas.lehigh.edu/articles/ai-free-speech-future-democracy
- Libertarianism.org. (2020). An Introduction to John Stuart Mill’s On Liberty. Retrieved from https://www.libertarianism.org/columns/introduction-john-stuart-mills-liberty
- Library of Congress. (n.d.). First Amendment. Constitution Annotated. Retrieved from https://constitution.congress.gov/constitution/amendment-1/
- Library of Congress. (n.d.). Historical Background on Free Speech Clause. Constitution Annotated. Retrieved from https://constitution.congress.gov/browse/essay/amdt1-7-1/ALDE_00013537/
- LitCharts. (n.d.). On Liberty Summary. Retrieved from https://www.litcharts.com/lit/on-liberty/summary
- McKeown, M. M., & Shefet, D. (2022). Hate Speech: A Comparative Analysis of the United States and Europe. In Regulating Cyber Technologies. World Scientific.
- Medium. (n.d.). Reflections on On Liberty by J.S. Mill. Retrieved from https://medium.com/@JonahofTimnath/reflections-on-on-liberty-by-j-s-mill-adb19711e1e7
- Milken Institute. (n.d.). Tech Regulation Digest: Sunsetting Section 230? The Future of Content Moderation, Ads, and AI. Retrieved from https://milkeninstitute.org/content-hub/collections/articles/tech-regulation-digest-sunsetting-section-230-future-content-moderation-ads-and-ai
- Mitchell Hamline Law Review. (2024). Social Media, The Modern Public Forum: The State Action Doctrine and Resurrection of Marsh. Retrieved from https://mhlawreview.org/article/social-media-the-modern-public-forum-the-state-action-doctrine-and-resurrection-of-marsh/
- National Archives. (n.d.). The Bill of Rights: A Transcription. Retrieved from https://www.archives.gov/founding-docs/bill-of-rights-transcript
- National Association of Attorneys General. (n.d.). The Future of Section 230: What Does It Mean for Consumers? Retrieved from https://www.naag.org/attorney-general-journal/the-future-of-section-230-what-does-it-mean-for-consumers/
- National Governors Association. (2025). Key Takeaways from the 2024-2025 U.S. Supreme Court Term: Implications for States and Territories. Retrieved from https://www.nga.org/updates/key-takeaways-from-the-2024-2025-u-s-supreme-court-term-implications-for-states-and-territories/
- Newsweek. (2024). Platform or Publisher? Social Media Can’t Be Both. Retrieved from https://www.newsweek.com/platform-publisher-social-media-cant-both-opinion-1874001
- Nowak, M. (2020). Commentary on the International Covenant on Civil and Political Rights. Cambridge University Press.
- Nyst, C. (2012). Two sides of the same coin: The right to privacy and the freedom of expression. Privacy International. Retrieved from https://privacyinternational.org/blog/1111/two-sides-same-coin-right-privacy-and-freedom-expression
- Observer Research Foundation. (n.d.). From Clicks to Chaos: How Social Media Algorithms Amplify Extremism. Retrieved from https://www.orfonline.org/expert-speak/from-clicks-to-chaos-how-social-media-algorithms-amplify-extremism
- Organization for Security and Co-operation in Europe (OSCE). (n.d.). Artificial intelligence and freedom of expression. Retrieved from https://www.osce.org/saife/essay
- Parliament of the United Kingdom. (2018). Social Media, Online Platforms and the Role of Publishers. LLN-2018-0003.
- RAND Corporation. (2021). Countering Foreign Interference in U.S. Elections. Retrieved from https://www.rand.org/pubs/research_reports/RRA704-4.html
- Reagan Presidential Library. (n.d.). Constitutional Amendments – Amendment 1 – “The Freedom of Speech”. Retrieved from https://www.reaganlibrary.gov/constitutional-amendments-amendment-1-freedom-speech
- Redish, M. H. (2022). The Digital Services Act and the Brussels Effect in Platform Content Moderation. Chicago Journal of International Law, 24(1).
- Richards, N. M., & Rotenberg, M. (2022). The Struggle Between Free Speech and Privacy. Niskanen Center. Retrieved from https://www.niskanencenter.org/struggle-free-speech-privacy/
- Schauer, F. (2021). Free Speech: A Philosophical Enquiry. Cambridge University Press.
- Social Media HQ. (n.d.). If Social Media Companies Are Publishers and Not Platforms, That Changes Everything. Retrieved from https://www.socialmediahq.com/blog/if-social-media-companies-are-publishers-and-not-platforms-that-changes-everything
- Socially Aware. (2022). A Section 230 Spotlight. Retrieved from https://www.sociallyawareblog.com/topics/part-2-a-section-230-27-years-old-and-still-in-the-spotlight-
- SparkNotes. (n.d.). On Liberty: Summary. Retrieved from https://www.sparknotes.com/philosophy/mill/section3/
- Stanford University. (n.d.). What is Fair Use? Retrieved from https://fairuse.stanford.edu/overview/fair-use/what-is-fair-use/
- Stanford University. (n.d.). Protected Speech, Discrimination, and Harassment. Retrieved from https://communitystandards.stanford.edu/resources/protected-speech-discrimination-and-harassment
- Stanger, A. (2023). Section 230 and the Digital Public Sphere. Journal of Free Speech Law.
- Strasbourg Observers. (2025). Hate Speech in the Case Law of the European Court of Human Rights. Retrieved from https://solidaritywithothers.com/hate-speech-in-the-case-law-of-the-european-court-of-human-rights/
- Supreme Court of the United States. (2025). Free Speech Coalition, Inc. v. Paxton. Retrieved from https://www.supremecourt.gov/opinions/24pdf/23-1122_3e04.pdf
- Tech Hive Advisory. (2021). The Intersection of Freedom of Expression Online and Protection of Personal Information. Retrieved from https://www.techhiveadvisory.africa/report/the-intersection-of-freedom-of-expression-online-and-protection-of-personal-information
- TechPolicy.Press. (2025). The EU’s Code of Practice on Disinformation is Now Part of the Digital Services Act. What Does It Mean? Retrieved from https://www.techpolicy.press/the-eus-code-of-practice-on-disinformation-is-now-part-of-the-digital-services-act-what-does-it-mean/
- Thomas Jefferson Center for the Protection of Freedom of Expression. (n.d.). Free Speech in the Digital Age. Retrieved from https://www.theusconstitution.org/wp-content/uploads/2020/05/Free-Speech-in-the-Digital-Age.pdf
- Ubeda de Torres, A. (2006). Freedom of Expression under the European Convention on Human Rights: A Comparison With the Inter-American System of Protection of Human Rights. Human Rights Brief, 14(1).
- UNESCO. (n.d.). Countering hate speech: What you need to know. Retrieved from https://www.unesco.org/en/countering-hate-speech/need-know
- UNHCR. (2025). Special considerations for hate speech. Retrieved from https://www.unhcr.org/handbooks/informationintegrity/understanding-challenge/special-considerations-hate-speech
- United Nations. (n.d.). Universal Declaration of Human Rights. Retrieved from https://www.un.org/en/about-us/universal-declaration-of-human-rights
- United Nations. (n.d.). Illustrated Universal Declaration of Human Rights. Retrieved from https://www.ohchr.org/en/universal-declaration-of-human-rights/illustrated-universal-declaration-human-rights
- United Nations. (n.d.). What is hate speech? Retrieved from https://www.un.org/en/hate-speech/understanding-hate-speech/what-is-hate-speech
- United Nations. (n.d.). International Human Rights Law. Retrieved from https://www.un.org/en/hate-speech/united-nations-and-hate-speech/international-human-rights-law
- United Nations Digital Library. (n.d.). General comment no. 34, Article 19, Freedoms of opinion and expression. Retrieved from https://digitallibrary.un.org/record/715606
- United Nations Office of the High Commissioner for Human Rights. (2018). Universal Declaration of Human Rights at 70: 30 Articles on 30 Articles – Article 19. Retrieved from https://www.ohchr.org/en/press-releases/2018/11/universal-declaration-human-rights-70-30-articles-30-articles-article-19
- U.S. Copyright Office. (n.d.). FAQ: Fair Use. Retrieved from https://www.copyright.gov/help/faq/faq-fairuse.html
- U.S. Courts. (n.d.). Elonis v. U.S. Retrieved from https://www.uscourts.gov/about-federal-courts/educational-resources/educational-activities/first-amendment-activities/elonis-v-us
- U.S. Department of Justice. (n.d.). Department of Justice’s Review of Section 230 of the Communications Decency Act of 1996. Retrieved from https://www.justice.gov/archives/ag/department-justice-s-review-section-230-communications-decency-act-1996
- Vanderbilt University. (2024). Protecting Free Speech in the AI Era. Retrieved from https://news.vanderbilt.edu/2024/07/16/protecting-free-speech-in-the-ai-era/
- Welch, P. (2025, March 5). Welch Reintroduces the Digital Integrity in Democracy Act [Press release]. Retrieved from https://www.welch.senate.gov/welch-reintroduces-the-digital-integrity-in-democracy-act/
- Wikipedia. (n.d.). Freedom of speech. Retrieved from https://en.wikipedia.org/wiki/Freedom_of_speech
- Wikipedia. (n.d.). On Liberty. Retrieved from https://en.wikipedia.org/wiki/On_Liberty
- Wikipedia. (n.d.). Article 10 of the European Convention on Human Rights. Retrieved from https://en.wikipedia.org/wiki/Article_10_of_the_European_Convention_on_Human_Rights
- Wikipedia. (n.d.). United States free speech exceptions. Retrieved from https://en.wikipedia.org/wiki/United_States_free_speech_exceptions
- Wikipedia. (n.d.). Fair use. Retrieved from https://en.wikipedia.org/wiki/Fair_use
- Wikipedia. (n.d.). List of European Court of Human Rights judgments. Retrieved from https://en.wikipedia.org/wiki/List_of_European_Court_of_Human_Rights_judgments
- Yale Law Journal. (n.d.). Beyond the Public Square: Imagining Digital Democracy. Retrieved from https://www.yalelawjournal.org/forum/beyond-the-public-square-imagining-digital-democracy
- Zick, T. (2019). The History of Freedom of Expression. Taylor & Francis Online, 17(1), 1-15.