Introduction: The New Constitutional Crisis
The rapid proliferation of autonomous, opaque, and statistically-driven Artificial Intelligence (AI) systems presents a fundamental challenge to the American constitutional order, which is predicated on principles of human reason, knowable intent, transparency, and individual accountability. The legal and social landscape is not merely facing a series of isolated technical puzzles but a systemic crisis that compels a re-evaluation of how constitutional rights are defined and protected in an age of automated governance. AI, once a subject of speculative fiction, now underpins critical decisions in domains ranging from criminal justice and employment to public health and the allocation of government benefits. This integration of machine learning models into the core functions of society creates a foundational conflict between two disparate modes of logic: the probabilistic, correlational reasoning of AI and the intent-based, causal reasoning that forms the bedrock of constitutional law.
The core of this tension lies in a profound “translation problem.” As Professor Aziz Huq has observed, “Many constitutional rules are focused upon intent, and it’s not clear how you think about intent when it’s a machine”. This conceptual gap destabilizes settled legal conventions that have evolved over centuries to regulate human behavior. When an algorithm denies a loan, flags a defendant as a high risk for recidivism, or amplifies certain speech while suppressing other voices, traditional legal inquiries into motive, purpose, and state of mind become fraught with ambiguity. The decision-making process is often distributed across vast datasets, complex code, and multiple human actors, making it nearly impossible to pinpoint a singular, legally cognizable “intent”. This opacity undermines the very possibility of accountability, a cornerstone of a government of laws.
The scope of this challenge is vast, touching upon the most essential guarantees of the U.S. Constitution. The First Amendment’s protections for speech and expression are tested by generative AI that can create and disseminate information—and disinformation—at an unprecedented scale, blurring the lines of authorship and liability. The Fourteenth Amendment’s promises of Due Process and Equal Protection are threatened by “black box” algorithms that make life-altering decisions without providing comprehensible notice or a meaningful opportunity to be heard, while simultaneously perpetuating and amplifying historical biases in ways that are difficult to prove under existing legal standards. AI is no longer a niche topic for technologists and legal specialists; it has become a “disruptive technology” that is fundamentally reshaping how we “work, live and even think and behave,” thereby affecting a broad spectrum of fundamental rights.
This domestic constitutional reckoning is unfolding within a global context of profound regulatory divergence. The European Union has forged ahead with its comprehensive, rights-based AI Act, a landmark piece of legislation that seeks to impose a harmonized, risk-based framework across its member states. This approach stands in stark contrast to the more fragmented, innovation-centric, and politically fluid model in the United States, which has thus far relied on a patchwork of executive orders, state-level legislation, and the application of existing, often ill-suited, laws. This report will demonstrate that these are not merely different policy choices but reflections of deeply rooted philosophical disagreements about the relationship between the state, the market, and the individual in the digital age.
This report will proceed in five parts. Section I will deconstruct the application of First Amendment principles to AI, examining the challenges to defining the “speaker,” allocating liability for harmful speech, and preserving copyright’s human authorship requirement. Section II will argue that algorithmic decision-making poses the most acute threat to the constitutional guarantees of Due Process and Equal Protection, analyzing how AI can create systemic discrimination that is nearly impossible to challenge under current legal doctrines. Section III will ground these theoretical concerns in an analysis of landmark court cases in criminal justice, employment, and the emerging battleground of “digital redlining.” Section IV will provide a detailed comparative analysis of the U.S. and EU regulatory paradigms, revealing the competing geopolitical visions they represent. Finally, Section V will explore emerging governance models, from technical solutions like “Constitutional AI” to novel liability frameworks, in the search for a legitimate and constitutionally sound path forward.
Section I: The Ghost in the Machine: AI and the First Amendment
The application of First Amendment principles to Artificial Intelligence is an area of profound legal strain. While existing doctrines provide a conceptual starting point, they are being stretched to their limits by the scale, speed, and non-human nature of generative AI. The core challenges revolve around three fundamental questions: Who is the “speaker” when content is generated by an algorithm? How can liability for harmful speech, such as defamation or incitement, be allocated when human intent is absent or obscured? And how can the intellectual property framework, specifically copyright law’s foundational requirement of human authorship, survive in an era of machine-generated creativity? The answers to these questions will not only shape the future of expression but also define the legal and economic landscape of the burgeoning AI industry.
Who Speaks? Redefining “Speech” and “Speaker”
The prevailing legal consensus, articulated by scholars such as Professor Jack Balkin, is that AI programs themselves do not possess First Amendment rights. The Constitution protects people, not technologies. Consequently, AI should not be treated as an artificial person in the same vein as a corporation, which is granted First Amendment rights as a legal fiction representing a collective of human beings. Instead, the rights-holders are the people and companies who utilize AI as a tool for expression; they are the “speakers” in the constitutional sense. Similarly, individuals have a protected right to receive and consume information, including content generated by AI.
This framework, while sound under current doctrine, faces pressure from the increasing autonomy of AI systems. Some legal theorists have begun to explore whether highly advanced AI could one day be considered a “new actor” deserving of some form of constitutional consideration, much as the Supreme Court extended free speech rights to cable operators as a new medium of communication emerged. While this remains a speculative proposition, it highlights the law’s historical capacity to adapt its concepts of personhood and rights to new technological realities. For now, however, the more immediate and solid constitutional protection lies not with the AI’s output but with its underlying architecture. It is a long-established principle that computer code itself is a form of protected speech. This provides a significant constitutional shield for the development and dissemination of AI models, independent of the specific content they generate. Any government regulation targeting the code of an AI system would likely be subject to a high level of judicial scrutiny, requiring a compelling government purpose and narrow tailoring.
The Liability Vacuum: Defamation, Deepfakes, and Disinformation
The question of accountability for harmful AI-generated speech presents one of the most vexing challenges to constitutional law. The basic principle is clear: a human cannot evade liability for unprotected speech, such as defamation or fraud, simply by using an AI to generate it. A medical provider who uses an AI to give negligent advice is still subject to malpractice liability, and an individual who prompts an AI to create a defamatory story and then publishes it as fact is still liable for defamation.
The situation becomes profoundly more complex, however, when an AI system “hallucinates”—that is, produces false and harmful content without a specific human prompt directing it to do so. For example, a user might ask a chatbot for a biography of a living person, and the AI, in its process of statistical pattern-matching, might invent a false criminal history. In this scenario, harm has occurred, but the traditional legal standards for fault, such as “actual malice” in defamation cases involving public figures, are conceptually difficult to apply. The AI program itself lacks human intentions, making it impossible to prove it acted with knowledge of falsity or reckless disregard for the truth. This creates a potential “liability vacuum,” where injurious speech is propagated, but no actor in the chain meets the requisite standard of fault under existing law.
This problem is magnified in the context of deepfakes and political disinformation. While there is no “hyper-realistic” exception to the First Amendment that would allow for the censorship of a deepfake simply because it is convincing, such content can fall into existing categories of unprotected speech when used for fraud, impersonation, or to create a false light invasion of privacy. Yet, the broader challenge of AI-generated political disinformation is formidable. Courts have consistently held that false speech, particularly in the political arena, enjoys strong First Amendment protection. This complicates efforts to regulate AI-driven campaigns designed to sway elections or incite violence through the mass production of false or misleading content, as such regulations could be struck down as impermissible content-based restrictions on speech.
The unresolved question is where liability should ultimately lie when harm occurs. Should it be with the end-user who provided the prompt, even if they did not intend the specific harmful output? Or should it be with the company that developed and hosted the AI, on the grounds that they released a powerful and unpredictable tool into the world? The courts will have to grapple with these questions, likely developing new standards for negligence or responsibility in the context of generative AI, a task that may ultimately require legislative intervention.
The Authorless Creation: AI and the Copyright Crisis
The rise of generative AI has precipitated a full-blown crisis for copyright law, centered on the doctrine’s foundational requirement of human authorship. The U.S. Copyright Office has remained steadfast in its position that copyright protection extends only to “original works of authorship” created by human beings. Consequently, works generated solely by an AI system, without sufficient human creative input, are not copyrightable and fall immediately into the public domain. This principle was affirmed in federal court in the case of Thaler v. Perlmutter, where Stephen Thaler’s attempt to register a copyright for an image he claimed was “autonomously” created by his AI was rejected.
To navigate the gray area where humans use AI as a creative tool, the Copyright Office has developed a “creative control” test. Under this standard, copyright protection is available only for the human-authored contributions to a work that incorporates AI-generated material. The key question is “the extent to which the human had creative control over the work’s expression”. In a significant recent clarification, the Office concluded that user prompts, by themselves, are generally insufficient to establish authorship. It argues that prompts function as unprotectable “ideas” or “instructions,” while the AI system itself determines the “expressive elements” of the output. This means that a prompt engineer who provides a detailed description to an image generator cannot claim copyright in the resulting image, though they might be able to copyright a creative arrangement or modification of multiple such images.
This strict adherence to the human authorship requirement has created a powerful, if perhaps unintended, economic dynamic that fuels the other major copyright battle: the use of copyrighted works as training data. Since AI companies cannot typically claim ownership over the outputs of their models, the primary commercial value of their technology resides in the models themselves—their capability to generate content. This capability is directly and inextricably linked to the volume and quality of the data on which they are trained. To build state-of-the-art models, companies must ingest and copy trillions of data points, including a vast swath of the internet’s text, images, and code, much of which is protected by copyright. This has led to a wave of high-stakes lawsuits from authors, artists, and media companies who allege mass infringement. The AI industry’s defense rests heavily on the doctrine of “fair use,” arguing that the copying is for a “transformative” purpose—training a new system—and is analogous to the precedent set in Authors Guild, Inc. v. Google, Inc., which permitted Google to scan copyrighted books to create a searchable database. This legal clash represents a fundamental conflict over the economic underpinnings of both the creative industries and the AI sector, and its resolution will have monumental consequences for the future of both.
The First Amendment’s interaction with AI thus reveals a dangerous asymmetry. On one hand, technology companies can invoke free speech principles to defend their right to develop and deploy powerful AI models, arguing that code is speech and that innovation should not be curtailed by content-based regulation. On the other hand, the very non-human nature of these systems creates a liability shield, diffusing responsibility and making it difficult to hold anyone accountable for the harmful speech the models produce. This dynamic allows for the technologically amplified proliferation of speech while simultaneously eroding the traditional mechanisms of legal and social responsibility that accompany it.
Section II: Algorithmic Justice: Due Process and Equal Protection in an Automated State
While the First Amendment implications of AI are profound, the most immediate and significant constitutional threats posed by algorithmic decision-making lie in the domains of the Fourteenth Amendment’s guarantees of Equal Protection and Due Process. In these areas, AI systems are not merely tools for expression but are increasingly becoming instruments of governance, making critical determinations about individuals’ rights, liberties, and access to essential services. These systems, often operating as opaque “black boxes,” can create and amplify systemic discrimination in ways that are nearly impossible to challenge under existing legal doctrines. Furthermore, their inherent inscrutability fundamentally undermines the procedural safeguards of notice and a meaningful hearing that are the bedrock of constitutional due process.
The Intent Standard and the Futility of Equal Protection
The core deficiency of the Equal Protection Clause (EPC) in the age of AI is the Supreme Court’s long-standing interpretation that the clause only bars intentional discrimination. To succeed on a constitutional claim of discrimination, a plaintiff must prove that a state actor possessed a discriminatory purpose. This “intent standard” is conceptually broken when applied to algorithmic systems. When an AI used for predictive policing disproportionately targets minority neighborhoods or a system for allocating public benefits disproportionately denies aid to a protected class, whose intent matters? Is it the intent of the programmers who wrote the code? The data scientists who selected the training data? The government officials who procured the system? Or the collective, latent biases embedded within the terabytes of historical data on which the model was trained? The decision-making chain is so complex and distributed that ascribing a singular, legally cognizable intent to the system is a Sisyphean task. This conceptual mismatch threatens to render the Equal Protection Clause a “dead letter” in the face of automated discrimination.
This reality has pushed legal scholars and civil rights advocates toward “disparate impact” liability as the most viable framework for addressing algorithmic bias. Found in various federal statutes like Title VII of the Civil Rights Act of 1964, disparate impact theory does not require proof of discriminatory intent. Instead, a plaintiff can establish a prima facie case by showing that a facially neutral practice—such as the use of a specific algorithm—has a disproportionately adverse effect on a protected group. The burden then shifts to the defendant to demonstrate that the practice is justified by a “business necessity” or other legitimate goal. Even then, the plaintiff can prevail by showing that a less discriminatory alternative was available. This framework is far better suited to the nature of algorithmic harm, which is often unintentional but nonetheless real and systemic. However, the disparate impact doctrine itself faces significant legal and political headwinds, with courts often interpreting it narrowly and recent executive actions seeking to limit its application in federal enforcement.
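To make the prima facie showing concrete, the sketch below computes group selection rates and the kind of adverse impact ratio associated with the EEOC's informal "four-fifths" guideline. The data, group labels, and helper functions are hypothetical illustrations, not drawn from any case, agency tool, or system discussed in this report.

```python
# Illustrative sketch: measuring disparate impact from a facially neutral
# screening algorithm. All data are hypothetical.

from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, selected_bool) tuples."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates, reference_group):
    """Ratio of each group's selection rate to the reference group's rate.
    Under the EEOC's informal four-fifths guideline, a ratio below 0.8 is
    commonly treated as preliminary evidence of disparate impact."""
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical results from an automated resume filter.
outcomes = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
         + [("group_b", True)] * 30 + [("group_b", False)] * 70

rates = selection_rates(outcomes)
ratios = adverse_impact_ratio(rates, reference_group="group_a")
print(rates)   # {'group_a': 0.6, 'group_b': 0.3}
print(ratios)  # {'group_a': 1.0, 'group_b': 0.5} -> below the 0.8 threshold
```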
A more profound challenge emerges from the fact that AI systems do not merely replicate existing societal bias; they effectively launder it. The process begins when an algorithm is trained on historical data that reflects long-standing patterns of discrimination in areas like policing, lending, or hiring. The machine learns to associate certain proxies—such as zip codes, arrest records, or gaps in employment history—with negative outcomes. These proxies, while not explicitly mentioning race or gender, are often highly correlated with them due to systemic inequality. The algorithm’s internal decision-making process is typically opaque, a “black box” even to its creators. The final output is then presented as a quantitative, data-driven “risk score” or recommendation, which carries a powerful veneer of scientific objectivity. A human decision-maker, who may be susceptible to “automation bias,” is likely to defer to this seemingly neutral assessment. The result is that a decision rooted in historical, social bias is laundered through the machine and legitimized as an impartial, technical judgment. This process of algorithmic laundering makes it exceptionally difficult for a plaintiff to meet the EPC’s intent standard and poses a formidable obstacle even in disparate impact cases.
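The proxy mechanism described above can be reproduced in miniature. In the sketch below, the decision rule never receives the protected attribute, yet a correlated, facially neutral feature (a hypothetical "neighborhood score") recreates the disparity; every value is invented purely for illustration.

```python
import random

random.seed(0)

# Hypothetical population: the protected attribute is never given to the
# decision rule, but a facially neutral proxy (a neighborhood score shaped
# by historical disinvestment) is correlated with it.
population = []
for _ in range(10_000):
    group = random.choice(["a", "b"])
    neighborhood_score = random.gauss(0.6 if group == "a" else 0.4, 0.1)
    population.append((group, neighborhood_score))

# A "neutral" decision rule that approves anyone above a fixed cutoff.
def approve(neighborhood_score, cutoff=0.5):
    return neighborhood_score >= cutoff

approval_rate = {}
for g in ("a", "b"):
    scores = [score for group, score in population if group == g]
    approval_rate[g] = sum(approve(s) for s in scores) / len(scores)

print(approval_rate)
# Roughly {'a': 0.84, 'b': 0.16}: a large disparity produced without the
# protected attribute ever appearing as an input to the decision rule.
```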
Algorithmic Affirmative Action and the Specter of Strict Scrutiny
The challenge of algorithmic fairness gives rise to a profound constitutional paradox when developers attempt to correct for bias. Many of the most effective techniques for mitigating algorithmic bias require the explicit use of protected characteristics like race or gender during the model’s training or evaluation phases. This is done to ensure that the system’s performance is equitable across different demographic groups.
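To illustrate why such mitigation depends on access to protected characteristics, the sketch below applies one simple post-processing adjustment: choosing group-specific score thresholds so that selection rates roughly equalize. The scores, groups, and target rate are hypothetical; the point is only that the correction cannot be computed without consulting the group label, which is precisely what raises the constitutional question discussed next.

```python
# Minimal sketch of a post-processing fairness adjustment using
# group-specific thresholds. All inputs are hypothetical.

def equalizing_thresholds(scored, target_rate):
    """scored: list of (group, score). Returns a per-group threshold such
    that roughly `target_rate` of each group is selected."""
    thresholds = {}
    groups = {g for g, _ in scored}
    for g in groups:
        scores = sorted((s for grp, s in scored if grp == g), reverse=True)
        k = max(1, int(len(scores) * target_rate))
        thresholds[g] = scores[k - 1]
    return thresholds

scored = [("a", 0.9), ("a", 0.8), ("a", 0.7), ("a", 0.4),
          ("b", 0.6), ("b", 0.5), ("b", 0.3), ("b", 0.2)]

thresholds = equalizing_thresholds(scored, target_rate=0.5)
print(thresholds)  # e.g. {'a': 0.8, 'b': 0.5}: different cutoffs per group

selected = [(g, s) for g, s in scored if s >= thresholds[g]]
print(selected)    # half of each group is selected, at the cost of
                   # explicitly treating the groups differently
```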
However, when a government entity develops or deploys such a “race-aware” algorithm, its use of a racial classification could trigger “strict scrutiny,” the most exacting standard of judicial review under the Equal Protection Clause. To survive strict scrutiny, the government must prove that its use of race is narrowly tailored to achieve a compelling governmental interest. This raises the troubling question of whether these fairness-enhancing techniques could be legally challenged as a form of “algorithmic affirmative action” and potentially be found unconstitutional, particularly in light of a legal framework that is increasingly skeptical of race-conscious remedies.
Legal scholars are divided on this issue. Some argue that such practices could survive strict scrutiny if they are narrowly tailored to the compelling interest of preventing the algorithm from producing discriminatory outcomes, distinguishing this corrective function from affirmative action in contexts like university admissions. Others suggest that the legal logic from government contracting cases, which allows for some consideration of past discrimination, might provide a more viable doctrinal path. This debate highlights the deep-seated tension between the technical necessity of using protected data to achieve fairness and a constitutional doctrine that is increasingly moving toward a formalistic “colorblind” interpretation.
The Black Box on Trial: The Collapse of Procedural Due Process
The use of AI in governmental decision-making systematically violates the core tenets of procedural due process. When the government seeks to deprive an individual of a protected “property” or “liberty” interest—such as public benefits, parole, or a professional license—the Due Process Clause requires, at a minimum, adequate notice and a meaningful opportunity to be heard. Algorithmic systems, by their very nature, often make compliance with these requirements impossible.
First, the requirement of adequate notice is frequently unmet. The explanations provided for algorithmic decisions, if they are provided at all, are often opaque, technical, or simply incomprehensible to the average person. They fail to provide the “ascertainable standards” that the Constitution requires, leaving individuals unable to understand the basis for the adverse action taken against them.
Second, and more fundamentally, these systems eliminate the possibility of a meaningful hearing. The inability to know the “true reasoning” behind a decision—the specific data points used, the weights assigned to them, and the correlational logic of the model—makes it impossible to mount an effective challenge or to identify and correct errors. There can be no meaningful cross-examination when the key witness is a proprietary, inscrutable algorithm. This creates what legal scholars have termed a “procedural gap,” where the focus on achieving a certain outcome (distributive justice) comes at the expense of the fairness of the process (procedural justice), thereby undermining the decision’s legitimacy. This danger is not merely theoretical. In Michigan, a flawed algorithm designed to detect unemployment insurance fraud wrongly accused tens of thousands of people, leading to devastating financial consequences; the system was later found to be wrong 93% of the time.
This systematic failure of due process creates a new and dangerous form of unaccountable state power. When a government agency can deny essential benefits, flag an individual as a high-risk threat, or make other life-altering determinations using a secret, unexplainable, and potentially flawed process, it erodes the fundamental constitutional bargain of a government of laws, not of men—or of machines. This leads to a profound crisis of legitimacy, where citizens are subjected to the dictates of automated authorities they cannot understand, appeal, or meaningfully contest, transforming the relationship between the individual and the state into one of arbitrary power.
Section III: The Constitution in Practice: Landmark Cases and Emerging Battlegrounds
The theoretical conflicts between AI and constitutional principles are no longer confined to academic journals; they are being actively litigated in courtrooms across the country. A series of landmark cases are beginning to shape the legal landscape, revealing a judiciary struggling to apply centuries-old doctrines to a complex and rapidly evolving technology. These legal battles, concentrated in the critical areas of criminal justice, employment, and access to housing and credit, provide a concrete illustration of the constitutional challenges at hand and are setting precedents that will have far-reaching consequences.
Predictive Justice and its Perils: State v. Loomis
The 2016 decision by the Wisconsin Supreme Court in State v. Loomis stands as a seminal case on the use of algorithmic risk assessments in the criminal justice system. The court upheld the use of a proprietary risk-assessment tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) in a defendant’s sentencing hearing. Eric Loomis argued that the use of the secret, proprietary algorithm violated his due process rights to be sentenced based on accurate information and to receive an individualized sentence.
The court’s reasoning in rejecting these claims reveals a fundamental misunderstanding of the technology. In response to the accuracy challenge, the court held that a defendant could verify the COMPAS score simply by reviewing the answers they provided on the input questionnaire. This analysis “misses the mark by a mile,” as it fails to address the core of the due process concern: the defendant had no access to the factors the algorithm actually considered, the weights assigned to those factors, or the underlying data used to train the model. Without this information, a meaningful challenge to the algorithm’s logic or potential biases is impossible.
The court also dismissed the individualization argument, reasoning that the COMPAS score was not the sole determining factor in the sentence and that judges retained discretion to disregard it. This conclusion overlooks the powerful effect of cognitive biases, particularly “automation bias”—the well-documented human tendency to over-rely on and grant undue authority to automated systems. A judge presented with a numerical score labeling a defendant as “high-risk” is likely to be heavily influenced by it, even if instructed to treat it as just one piece of information. The Loomis case thus perfectly encapsulates the central conflict at the heart of algorithmic justice: the collision between the public’s constitutional right to a fair and transparent legal process and a private company’s intellectual property rights. The fact that the inner workings of COMPAS are protected as a trade secret means that defendants, prosecutors, and even the judges relying on its outputs are operating in the dark, unable to scrutinize a tool that plays a role in depriving individuals of their liberty.
Holding the Vendor Accountable: Mobley v. Workday
In the realm of employment law, the ongoing class-action lawsuit Mobley v. Workday, Inc. represents a potentially groundbreaking development in the fight against algorithmic discrimination. The case alleges that Workday’s AI-powered applicant screening tools, used by thousands of companies, have a discriminatory disparate impact on applicants based on race, age, and disability. The most significant ruling in the case thus far is the federal court’s decision to allow the lawsuit to proceed not just against the employers, but directly against Workday, the AI vendor.
The court’s key finding was that Workday could potentially be held liable as an “agent” of the employers using its platform. The judge reasoned that because Workday’s software allegedly performs a traditional hiring function—screening and rejecting candidates—it is an active participant in the decision-making process. The court explicitly stated that there is no legal distinction between delegating this function to “an automated agent versus a live human one,” warning that to hold otherwise would “potentially gut anti-discrimination laws in the modern era”. This ruling is a potential game-changer. Historically, liability for discriminatory hiring practices has fallen on the employer. By opening the door to vendor liability, the Mobley case could create powerful new incentives for technology companies to proactively design and audit their systems for fairness.
Further amplifying its impact, the court certified a nationwide collective action for the age discrimination claim. This establishes the crucial precedent that a single, centralized algorithmic system can be treated as a “unified policy” that causes a disparate impact across many different employers and job positions. This procedural victory makes it much easier for plaintiffs to band together to challenge systemic bias in widely used AI platforms. If the “agent” theory of liability is ultimately upheld and adopted by other courts, it could fundamentally restructure the AI-as-a-service market. It would shift the legal and financial risk of algorithmic bias upstream to the technology’s creators, forcing them to treat fairness and transparency as core product requirements rather than as afterthoughts or the sole responsibility of their customers.
Digital Redlining: Algorithmic Bias in Housing and Credit
The principles of equal opportunity are also being eroded by the rise of “digital redlining”—the use of algorithms in ways that perpetuate and deepen historical patterns of discrimination in housing and credit. For decades, laws like the Fair Housing Act and the Equal Credit Opportunity Act have sought to combat discrimination in these critical sectors. However, opaque algorithms now threaten to reverse this progress by making discrimination more efficient, harder to detect, and cloaked in a veneer of objectivity.
In mortgage lending, studies have shown that algorithmic underwriting systems are significantly more likely to deny loans to applicants of color than to similarly qualified white applicants. In some cases, these systems have been found to charge Black and Latinx borrowers higher interest rates than their white counterparts with equivalent credit profiles. Similarly, landlords are increasingly relying on automated tenant-screening software that uses flawed or biased data, such as arrest records that never led to a conviction or eviction filings that were later dismissed. Because communities of color are disproportionately affected by over-policing and housing instability, the use of this data systematically disadvantages them in the rental market.
These systems often operate by using seemingly neutral data points as proxies for protected characteristics. An algorithm might not use race as an input, but it may learn from historical data that factors like an applicant’s zip code, their tendency to shop at certain stores, or even their grammar and spelling are correlated with repayment risk—and these factors are, in turn, often correlated with race and socioeconomic status. This allows discrimination to occur under the guise of neutral risk assessment. The common thread connecting these domains—criminal justice, employment, and housing—is the outsourcing of critical public and quasi-public functions to private, opaque, and proprietary systems. Sentencing is a core function of the state. Hiring and housing decisions are heavily regulated by public anti-discrimination laws. In all three areas, life-altering decisions are being delegated to commercial software whose creators can shield their methods from scrutiny by claiming trade secret protection. This creates an irreconcilable conflict between the constitutional and statutory demands for fairness, transparency, and accountability on one side, and the legal protections of intellectual property on the other. The result is a system where individuals are deprived of their rights by unaccountable algorithmic gatekeepers.
Section IV: Charting the Course: A Comparative Analysis of Regulatory Paradigms
As nations grapple with the constitutional and societal challenges posed by AI, two dominant regulatory paradigms have emerged, reflecting deep-seated philosophical differences about the relationship between technology, the state, and individual rights. The European Union has pioneered a comprehensive, rights-based approach, codified in its landmark AI Act. The United States, in contrast, has pursued a more fragmented, market-oriented strategy characterized by executive branch directives, sector-specific actions, and a patchwork of state laws. A comparative analysis of these two models reveals not just different policy choices, but competing visions for governing the algorithmic age.
The “Brussels Effect”: The EU’s Rights-Based AI Act
The EU AI Act represents the world’s first comprehensive legal framework for regulating artificial intelligence. At its core is a risk-based pyramid structure that categorizes AI systems into four tiers. At the top are systems posing an “unacceptable risk,” which are banned outright. This category includes technologies deemed to be a clear threat to safety and fundamental rights, such as government-run social scoring systems, AI that uses manipulative subliminal techniques, and most uses of real-time remote biometric identification in public spaces.
The next tier consists of “high-risk” AI systems, which are permitted but subject to stringent and extensive regulation. This category is broad, encompassing AI used in critical infrastructure, education, employment, law enforcement, and the administration of justice and public benefits. Before these systems can be placed on the market, their providers must conduct rigorous conformity assessments and comply with a host of obligations, including implementing robust risk management and quality management systems, ensuring high-quality data governance to mitigate bias, maintaining detailed technical documentation, and designing systems to allow for effective human oversight.
The lower tiers include “limited risk” systems, such as chatbots and deepfakes, which are subject to basic transparency obligations to ensure users know they are interacting with an AI. Systems posing “minimal risk,” which constitute the majority of AI applications, remain largely unregulated. A crucial feature of the AI Act is its extraterritorial scope; its rules apply to any provider or deployer outside the EU if the output of their AI system is used within the Union. This provision is the primary driver of the so-called “Brussels Effect,” whereby EU regulations become de facto global standards because multinational companies find it easier to adopt the strictest rules across all their operations rather than maintain different product standards for different markets.
The American Patchwork: Executive Orders and State Action
In stark contrast to the EU’s single, comprehensive law, the U.S. approach to AI regulation is best described as a “patchwork”. In the absence of federal legislation establishing broad regulatory authority, governance has been driven by a combination of presidential executive orders, the application of existing laws by federal agencies, and a growing body of state-level statutes.
This approach has also been subject to significant political volatility, with recent presidential administrations charting sharply different courses. The Biden administration’s Executive Order on Safe, Secure, and Trustworthy AI emphasized balancing innovation with robust safeguards. It focused on establishing new safety and security standards, protecting privacy, advancing equity and civil rights to combat algorithmic bias, and promoting international cooperation on AI governance. In contrast, the subsequent Trump administration’s “AI Action Plan” and related executive orders explicitly prioritize deregulation and “America’s global AI dominance”. It frames regulation as an impediment to innovation and national competitiveness, calling for the removal of regulatory barriers and seeking to ensure that AI systems are free from perceived ideological biases, such as those related to diversity, equity, and inclusion, which it has labeled “woke AI”.
With federal action limited and subject to these partisan shifts, states have begun to step into the void. States like Colorado and California have passed or are considering their own AI-specific laws, often focused on consumer protection and anti-discrimination in high-stakes decisions. While these efforts address important concerns, they are creating a fragmented and inconsistent regulatory landscape that poses significant compliance challenges for businesses operating nationwide and may ultimately spur calls for a unified federal standard.
A Tale of Two Philosophies
The divergence between the EU and U.S. models is rooted in fundamentally different regulatory philosophies. The EU’s AI Act is a clear expression of the “precautionary principle,” a governance philosophy that prioritizes the prevention of potential harm to public health, safety, and fundamental rights, even in the face of scientific uncertainty. It places the burden on innovators to prove their products are safe before they enter the market. The U.S. approach, on the other hand, reflects a long-standing tradition of “permissionless innovation,” where technological development is generally encouraged to proceed with minimal upfront regulation. Under this model, legal intervention is often reactive, occurring only after harm has been demonstrated, typically through litigation under existing tort or anti-discrimination laws.
This philosophical divide is not accidental; it is a product of deeper structural differences in the two legal systems. A primary driver of the U.S.’s fragmented approach to AI is its lack of a comprehensive federal privacy law analogous to the EU’s General Data Protection Regulation (GDPR). The EU AI Act is built upon the legal and institutional foundation of the GDPR, using data protection principles as a cornerstone for regulating AI systems that process personal data. Without this federal foundation, the U.S. is forced to rely on a combination of sector-specific privacy rules (e.g., in healthcare and finance), executive branch directives that lack the enduring force of law, and the inconsistent tapestry of state laws. This structural difference in data privacy governance is a root cause of the divergence in AI governance.
These two regulatory models represent more than just competing legal frameworks; they are competing geopolitical visions for the digital future. The “Brussels Effect” positions the EU as a global standard-setter, exporting its rights-centric values through market power. The U.S. approach, meanwhile, is explicitly framed as a strategic necessity for maintaining economic and national security leadership in a global competition with rivals like China. The outcome of this regulatory contest will have profound implications for international trade, the protection of human rights, and the global balance of technological power.
Table 1: Comparative Analysis of EU and US AI Regulatory Frameworks
Feature | European Union (AI Act) | United States (Current Approach) |
Legal Status | Comprehensive, binding, horizontal regulation directly applicable in all member states. | Fragmented; non-binding federal guidelines (Executive Orders), existing sector-specific rules, and a patchwork of state-level laws. |
Regulatory Philosophy | Precautionary principle; prioritizes fundamental rights and safety. Aims to build a “hub for trustworthy AI”. | “Permissionless innovation”; prioritizes economic competitiveness and national security. Aims to sustain “global AI dominance”. |
Risk Classification | Formal, tiered system: Unacceptable risk (banned), High-risk (heavily regulated), Limited risk (transparency), Minimal risk (unregulated). | No unified federal risk classification. Risk is addressed on a sector-by-sector basis by various agencies and through state laws. |
Key Prohibitions | Social scoring, real-time public biometric surveillance (with narrow exceptions), manipulative AI, and exploitation of vulnerabilities. | No broad federal prohibitions. Bans are limited to specific use cases or technologies at the state level. |
High-Risk Obligations | Mandatory conformity assessments, risk management systems, data governance standards, technical documentation, human oversight, and registration in an EU database. | Varies by sector and state. Executive Orders direct agencies to develop best practices and guidelines (e.g., NIST AI RMF), but compliance is often voluntary for the private sector. |
Transparency Requirements | Mandatory for limited-risk systems (e.g., users must be informed they are interacting with an AI). Generative AI must disclose that content is AI-generated. | No general federal transparency mandate. Requirements are emerging in state laws and are a focus of Executive Orders, particularly for government procurement. |
Enforcement Mechanisms | Significant fines for non-compliance, administered by national authorities and a central European AI Board. Fines can reach up to 7% of global annual turnover. | Enforcement relies on existing agency authorities (e.g., FTC, EEOC), private litigation under existing laws (e.g., anti-discrimination statutes), and state-level enforcement. No specific federal penalties for general AI misuse. |
Section V: The Search for Legitimacy: Governance Models for a Constitutional AI
As the limitations of existing legal doctrines and the divergence of regulatory philosophies become clear, the search for effective and legitimate AI governance models has intensified. This search moves beyond critique to construction, exploring novel frameworks that aim to align powerful AI systems with constitutional principles and democratic values. These emerging models range from technical solutions embedded in the code itself to new legal and social structures for oversight and accountability. A common thread unites them: the recognition that true legitimacy for algorithmic authorities cannot be achieved through technical fixes alone, but requires a deliberate effort to embed human judgment and public will into the entire AI lifecycle.
Technical Constitutions: From “Constitutional AI” to “Public Constitutional AI”
One of the most innovative technical approaches to AI alignment is “Constitutional AI,” a method pioneered by the AI company Anthropic. This technique involves training a large language model to adhere to a set of explicit, hard-coded principles laid out in a written “constitution.” The AI is trained first to critique and revise its own responses based on these principles (e.g., principles drawn from the UN Declaration of Human Rights or other ethical charters) and then to internalize these principles so that its outputs are helpful, harmless, and aligned with the constitutional framework from the outset. This represents a significant step toward making AI decision-making more transparent and accountable by grounding it in human-understandable rules.
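A heavily simplified sketch of the critique-and-revise phase appears below. The `generate` function is a placeholder standing in for calls to an actual language model, and the two principles are illustrative; Anthropic's published pipeline also includes a subsequent reinforcement-learning stage that is omitted here.

```python
# Simplified sketch of the supervised "critique and revise" phase of
# Constitutional AI. `generate` is a stand-in for a real language-model
# call; the constitution shown here is illustrative only.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful or offensive.",
    "Choose the response that most respects privacy and human rights.",
]

def generate(prompt: str) -> str:
    """Placeholder for a language-model call (e.g., an API request)."""
    return f"[model output for: {prompt[:60]}...]"

def constitutional_revision(user_prompt: str) -> dict:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique the following response against this principle.\n"
            f"Principle: {principle}\nResponse: {draft}"
        )
        draft = generate(
            f"Rewrite the response to address the critique.\n"
            f"Critique: {critique}\nOriginal response: {draft}"
        )
    # The (prompt, revised response) pairs become supervised training data,
    # so the deployed model internalizes the principles rather than
    # applying them only at inference time.
    return {"prompt": user_prompt, "revision": draft}

example = constitutional_revision("Write a profile of a named private individual.")
print(example["revision"])
```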
However, this corporate-led model has been critiqued for its significant limitations. Drawing on the work of legal scholar Gilad Abiri, two key deficits can be identified. First is the “opacity deficit”: while the overarching principles are known, the model’s reasoning in any specific instance remains a black box, failing to provide the explainability needed for true accountability. Second, and more fundamentally, is the “political community deficit”: the “constitution” is written by a small group of engineers and researchers within a private company. It lacks the democratic process of deliberation, contestation, and public consent that gives a real constitution its legitimacy. An AI governed by a corporate charter is not a democratically accountable authority.
To remedy these shortcomings, Abiri proposes a framework of “Public Constitutional AI”. This model seeks to transform the AI constitution from a private, technical solution into a public, political document. It envisions a participatory, democratic process where diverse stakeholders, including ordinary citizens, deliberate on and help draft the principles that will govern powerful AI models operating within a given jurisdiction. To ensure these principles are applied consistently and evolve with societal values, the framework also proposes the creation of “AI Courts.” These bodies would interpret the AI constitution and develop a body of “AI case law,” providing concrete examples and precedents that can be used to further refine AI training. By grounding AI governance in the public will and creating mechanisms for ongoing interpretation and adaptation, Public Constitutional AI offers a path toward imbuing automated systems with the genuine democratic legitimacy they currently lack.
The Inescapability of Human Judgment
While some seek to govern AI from the outside, others are examining the implications of using AI within the practice of law itself, particularly in the complex domain of constitutional interpretation. This inquiry has yielded a powerful insight known as the “Law of Conservation of Judgment,” a concept developed by Professors Andrew Coan and Harry Surden. Their research demonstrates that AI cannot eliminate the need for human moral and political judgment in constitutional law; it can only displace it.
When a judge uses an AI to help decide a constitutional case, the AI’s output is not an objective, neutral answer. It is the product of countless value-laden choices made, often invisibly, throughout the system’s development and deployment. The crucial normative judgments are simply shifted from the judge’s final written opinion to earlier stages of the process: the selection of the AI model, the composition of its training data, the specific framing of the legal question or prompt, and the choice of which interpretive philosophy (e.g., originalism or living constitutionalism) to instruct the AI to follow. Research shows that AI models exhibit “sycophancy,” often telling users what they seem to want to hear and reversing their legal conclusions when presented with counterarguments, which underscores their lack of independent legal reasoning.
This “conservation” principle serves as a crucial rebuttal to techno-solutionist claims that AI can resolve contentious legal debates with objective, data-driven answers. It reinforces the view that AI’s most promising role in the law is as a powerful support tool—a sophisticated research assistant that can summarize vast amounts of precedent, a critic that can “steel-man” an argument to reveal its weaknesses, or an editor that can improve the clarity of legal writing. However, it cannot replace the essential normative function of a human judge, which is to make difficult value choices and take responsibility for them. This insight has implications far beyond the courtroom. It suggests that in any complex human-AI system, the ethical and political choices are never removed, only hidden. The critical task for governance is not to ask if an AI is biased, but to ask where the human judgment has been embedded and whose values that judgment reflects.
Lessons from the Road: Liability Frameworks for Autonomous Systems
As society searches for appropriate liability models for AI, the evolving legal landscape for another complex autonomous technology—self-driving cars—offers valuable, if cautionary, lessons. The legal framework for autonomous vehicles (AVs) is shifting along with the technology’s increasing capabilities, moving along a spectrum from traditional driver negligence to corporate product liability.
For vehicles with lower levels of automation (SAE Levels 1-3), where the human driver is expected to remain attentive and ready to take control, liability for an accident generally remains with the driver under standard negligence principles. However, even at these levels, manufacturers can face liability if their marketing overstates the system’s capabilities—for example, by calling a driver-assist feature “Autopilot”—thereby misleading drivers into a false sense of security.
As vehicles achieve higher levels of autonomy (SAE Levels 4-5), where the system handles all driving tasks under certain conditions and a human driver may not be present at all, the legal consensus is that liability shifts decisively toward the manufacturer. Crashes caused by the AV system are increasingly treated as product liability cases. This can involve a negligence claim, where the plaintiff must prove a specific design or manufacturing defect, or a strict liability claim, where the manufacturer can be held liable for any harm caused by their product, regardless of fault. Some manufacturers, like Volvo, have proactively stated they will accept liability for accidents caused by their fully autonomous systems.
The AV experience demonstrates, however, that liability is rarely simple. An accident can involve a complex chain of causation, with potential fault lying not only with the car manufacturer but also with the developers of specific software components, the providers of mapping data, the technicians who performed maintenance, or even government entities responsible for road infrastructure. This complexity serves as a powerful reminder that a simple, one-size-fits-all liability framework for the vast and diverse applications of AI is unlikely to be effective. It suggests that legal responsibility will need to be carefully allocated based on the specific context of use, the level of autonomy, and the reasonable expectations of all parties involved.
Conclusion: Forging a New Social Contract for the Algorithmic Age
The integration of Artificial Intelligence into the fabric of society has brought the American constitutional order to a critical juncture. This report has detailed the profound and systemic challenges that AI poses to fundamental principles of free expression, equal protection, and due process. The core tensions are now clear: the opacity of algorithmic systems clashes with the constitutional demand for transparency and accountability; the statistical, non-human logic of AI evades a legal framework built on discerning human intent; the public’s right to a fair process is obstructed by private claims of intellectual property; and the nation’s market-driven approach to regulation is increasingly at odds with the rights-based frameworks emerging globally. These are not mere technical problems to be solved with better code, but deep political and philosophical questions about the nature of power, justice, and governance in the 21st century.
The United States must now move beyond its current fragmented and reactive posture to develop a coherent federal framework for AI governance. This framework should not be a simple imitation of the EU’s AI Act, but must be carefully adapted to the American legal tradition and constitutional context. Such a framework should be built upon four key pillars.
First, it must legislatively strengthen disparate impact liability as the primary legal tool for combating algorithmic discrimination. Congress should clarify that existing anti-discrimination statutes apply forcefully to automated systems and should streamline the evidentiary burdens on plaintiffs who, through no fault of their own, cannot peer inside the algorithmic black box.
Second, the framework must mandate meaningful transparency and explainability for high-risk systems, particularly those used by the government or in critical domains like employment, housing, and credit. This will require creating a carefully balanced exception to trade secret protection where fundamental rights are at stake, ensuring that individuals have the information necessary to understand and challenge decisions that affect their lives.
Third, it must embrace public deliberation in the governance of public-sector AI. Drawing inspiration from the “Public Constitutional AI” model, federal and state governments should create participatory mechanisms for citizens to help shape the values, principles, and ethical guardrails embedded in the algorithmic systems that serve them. Legitimacy cannot be engineered; it must be earned through democratic engagement.
Fourth, the framework must establish clear and predictable liability regimes that hold both the developers and the deployers of AI systems accountable for the harms they cause. Learning from the complexities of the autonomous vehicle space, these rules must be nuanced, allocating responsibility based on the context of use and the reasonable expectations of safety and fairness.
Ultimately, regulating AI should not be framed as a barrier to innovation but as an essential precondition for its success and social acceptance. Trust is the foundational currency of the digital age. An AI ecosystem that is perceived as unfair, unaccountable, and dangerous will inevitably face a public backlash that stifles progress more effectively than any regulation. By building a governance framework that is grounded in constitutional values, democratic legitimacy, and the rule of law, we can foster the public trust necessary for AI to achieve its immense potential for human flourishing without sacrificing the fundamental rights and dignities that define a free and just society. The task is to forge a new social contract for the algorithmic age—one that ensures this powerful new Leviathan serves the public, rather than the other way around.
References
Balkin, J. (2024). AI and the First Amendment: Q&A with Jack Balkin. Yale Law School.
Hashiguchi, M. (2024). Constitutional Rights of Artificial Intelligence. Washington Journal of Law, Technology & Arts, 19(2).
Coan, A., & Surden, H. (2025). Judgment, Technology, and the Future of Legal Interpretation: A Q&A with Professor Andrew Coan and Professor Harry Surden. University of Arizona James E. Rogers College of Law.
Dawson, A. G. (2024). Algorithmic Adjudication and Constitutional AI—The Promise of A Better AI Decision Making Future?. SMU Science and Technology Law Review, 27(1), 11.
Abiri, G. (2025). Public Constitutional AI. Georgia Law Review, 59(2).
University of Chicago Law School. (n.d.). AI and the Law.
Equal Rights Trust. (n.d.). Principles on Equality by Design in Algorithmic Decision Making.
Resetting Antidiscrimination Law in the Age of AI. (2024). Harvard Law Review, 138.
Daly, A., & Booth, P. (2024). Algorithmic justice and the procedural ‘gap’. International Journal of Law and Information Technology.
Terry, C. L. (2024). Due Process and the Algorithmic State. University of Richmond Journal of Law & Technology, 30(2).
American Bar Association. (2025). AI’s Complex Role in Criminal Law: Data, Discretion, and Due Process. GPSolo Magazine.
Columbia Science & Technology Law Review. (2024). Algorithmic Fairness and the Equal Protection Clause.
OSCE. (2020). Artificial Intelligence and Freedom of Expression.
Foundation for Individual Rights and Expression (FIRE). (n.d.). Artificial Intelligence, Free Speech, and the First Amendment.
Lehigh University College of Arts and Sciences. (n.d.). AI, Free Speech, and the Future of Democracy.
OSCE. (n.d.). Freedom of expression in the age of artificial intelligence: the risks and challenges to online speech and media freedom.
Cato Institute. (2024). Artificial Intelligence Regulation Threatens Free Expression.
Tsesis, A., & Surden, H. (2016). Siri-ously?: The First Amendment and Communicative Robots. Northwestern University Law Review, 110(5), 1169.
Coan, A., & Surden, H. (2024). AI and Constitutional Interpretation: The Law of Conservation of Judgment. Lawfare.
Rehder, J. (2022). AI as a challenge for legal regulation – the scope of application. Journal of Law, Market & Innovation, 1(2).
Congressional Research Service. (2024). Artificial Intelligence: Overview, Recent Executive Actions, and Selected Policy Issues.
IBM. (n.d.). The EU AI Act.
KPMG. (2024). Decoding the EU Artificial Intelligence Act.
European Parliament. (2024). EU AI Act: first regulation on artificial intelligence.
artificialintelligenceact.eu. (n.d.). High-Level Summary of the AI Act.
EY. (n.d.). The EU AI Act: What it means for your business.
Wiley Rein LLP. (2025). White House Launches AI Action Plan and Executive Orders.
The White House. (2025). Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure.
Squire Patton Boggs. (2025). Key Insights on President Trump’s New AI Executive Order and Policy Regulatory Implications.
Nelson Mullins. (2025). Understanding the Trump Administration’s Three New AI Executive Orders.
Seyfarth Shaw LLP. (2025). Trump Administration Releases AI Action Plan and Three Executive Orders on AI: What Employment Practitioners Need to Know.
Tondo, L. (2025). Italy first in EU to pass comprehensive law regulating AI. The Guardian.
Carta. (2024). AI regulation: A comparison of the EU AI Act and U.S. policy.
Trilligent. (n.d.). A Tale of Two Policies: The EU AI Act and the US AI Executive Order in Focus.
DLA Piper. (2023). Comparing the US AI Executive Order and the EU AI Act.
Electronic Privacy Information Center (EPIC). (n.d.). EPIC v. DOJ (Criminal Justice Algorithms).
Robbins, I. P. (2017). Injustice Ex Machina: Predictive Algorithms in Criminal Sentencing. UCLA Law Review.
Harvard Journal of Law & Technology. (2017). Algorithmic Due Process: Mistaken Accountability and Attribution in State v. Loomis.
Dressel, J. (2019). Pandora’s Algorithmic Black Box: The Challenges of Using Algorithmic Risk Assessments in Sentencing. American Criminal Law Review, 56(4).
McAfee & Taft. (2025). Litigation risks increase as more employers use AI tools in hiring decision-making process.
Quinn Emanuel Urquhart & Sullivan, LLP. (2025). When Machines Discriminate: The Rise of AI Bias Lawsuits.
Fisher Phillips. (2025). Discrimination Lawsuit Over Workday’s AI Hiring Tools Can Proceed as Class Action: 6 Things Employers Need to Know.
Seyfarth Shaw LLP. (2024). Mobley v. Workday: Court Holds AI Service Providers Could Be Directly Liable for Employment Discrimination Under “Agent” Theory.
American Bar Association. (2024). Algorithmic Decision-Making in Child Welfare Cases.
Duke Law & Technology Review. (2024). Unintentional Algorithmic Discrimination.
Brookings Institution. (2023). The legal doctrine that will be key to preventing AI discrimination.
Cohen, I. G., & Pasquale, F. (2022). The widening circle of state secrets and the future of algorithmic fairness. Patterns, 3(12).
Molnár, T. M., & Hamzic, E. (2024). Algorithmic Bias as a Core Legal Dilemma in the Age of Artificial Intelligence. Laws, 14(3), 41.
Abiri, G. (2024). Public Constitutional AI. arXiv:2406.16696.
Coan, A., & Surden, H. (2024). Artificial Intelligence and Constitutional Interpretation. University of Colorado Law Review, 96.
Katzenbach, M., & Young, M. (2024). Experimental Publics: Democracy and the Role of Publics in GenAI Evaluation. Knight First Amendment Institute at Columbia University.
National Fair Housing Alliance. (n.d.). Discriminatory Effects of Credit Scoring on Communities of Color.
Texas A&M Journal of Property Law. (2023). Digital Redlining: How the Rise of Algorithmic Lending with Artificial Intelligence Systems Threatens to Further Decades of Work to Eliminate Discrimination.
Robert F. Kennedy Human Rights. (n.d.). Bias in Code: Algorithm Discrimination in Financial Systems.
Kreisman Initiative for Housing Law & Policy at UChicago. (2024). AI is Making Housing Discrimination Easier Than Ever Before.
Women’s World Banking. (2021). Algorithmic Bias, Financial Inclusion, and Gender.
ACLU of Washington. (n.d.). How Biased Algorithms Create Barriers to Housing.
UNESCO. (n.d.). Recommendation on the Ethics of Artificial Intelligence.
Transcend. (n.d.). AI and Privacy: A Deep Dive.
ResearchGate. (2024). Ethical and Legal Implications of AI-Driven Surveillance: Balancing Security and Privacy in a Regulated Environment.
Francis, C. (2024). Navigating the Intersection of AI, Surveillance, and Privacy. United Nations Department of Economic and Social Affairs.
IAPP. (2024). Consumer Perspectives of Privacy and AI.
Love the Idea. (n.d.). The Rise of AI Surveillance: Privacy Concerns and Ethical Debates.
Congressional Research Service. (2025). Generative Artificial Intelligence and Copyright Law.
Wikipedia. (n.d.). Artificial intelligence and copyright.
Built In. (2025). AI-Generated Content and Copyright Law: What We Know.
U.S. Copyright Office. (2025). Copyright Office Releases Report on Copyrightability of AI-Generated Works.
Cooley LLP. (2024). Copyright Ownership of Generative AI Outputs Varies Around the World.
Byrd Davis Alden & Henrichson, LLP. (n.d.). Who Is Liable When a Self-Driving Car Causes a Crash?.
McCoy & Sparks, PLLC. (n.d.). Liability in Self-Driving Car Accidents: Who’s Responsible?.
Clifford Law Offices. (2025). What Drivers Need to Know About Autonomous Car Liability.
Suffolk University Journal of High Technology Law. (2025). Accountability of Autopilot: Self-Driving Cars and Liability.
MacCarthy, M. (2025). Setting the standard of liability for self-driving cars. Brookings Institution.
Villasenor, J. (2014). Products Liability and Driverless Cars: Issues and Guiding Principles for Legislation. Brookings Institution.
Global Center on Cooperative Security. (n.d.). Human Rights, Tech, and National Security.