Virtual Patchwork

Explore the fragmented AI regulatory landscape in the U.S., as federal and state laws attempt to govern AI technologies. Learn more about the challenges.


Michael Breslin and Stephen Anstey

November 7, 2024

Artificial intelligence is a continuously developing and increasingly essential technology with broad applications across sectors. Significant advancements in existing AI systems and in what the federal government terms “emerging AI technologies,” such as generative AI, have led lawmakers and regulatory agencies to focus on how best to govern AI's development and use.

Earlier this year, the European Union enacted its Artificial Intelligence Act (EU AI Act), which aims to ensure the safe and ethical development, deployment, and use of AI. The U.S. has not been able to pass a similar comprehensive law governing AI.

Absent such a comprehensive law, the U.S. federal government and various states are attempting to regulate AI through a patchwork of narrow laws and agency regulations, each addressing only certain aspects of this technology.

While these ad hoc rules will govern the use of AI and set stringent related standards and compliance requirements, this fragmented approach creates a complex web of rules that affected entities must constantly monitor, understand, and attempt to comply with.

Federal

The U.S. Congress has attempted to govern AI through legislation with mixed and limited success. At least 120 bills concerning AI are presently under consideration.

These bills address AI for military use, campaign content, consumer impacts, and financial services. But again, instead of enacting a comprehensive piece of legislation to govern AI, these bills largely target only a specific aspect or impact of the technology.

Federal agencies have likewise issued a growing body of AI guidance. Examples include the Office of Management and Budget’s Memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence; the United States Patent and Trademark Office’s Inventorship Guidance for AI-Assisted Inventions, Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence, and Guidance on Use of Artificial Intelligence-Based Tools in Practice Before the United States Patent and Trademark Office; and the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, Managing Misuse Risk for Dual-Use Foundation Models, and Secure Software Development Practices for Generative AI and Dual-Use Foundation Models.

To date, the U.S. federal government’s most impactful response to AI is the White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

The EO on AI directs over 50 federal entities, primarily agencies, to engage in more than a hundred defined actions regarding AI. These actions range from the solicitation of public input on given AI issues to the development of complex regulations. While the agency-by-agency requirements set forth in the EO on AI further exacerbated the patchwork nature of U.S. AI regulation, it remains the most complete action the federal government has taken to govern emerging AI technologies.

As AI continues to develop, the lack of comprehensive U.S. federal law governing AI is creating a chaotic regulatory landscape.

Numerous federal agencies—either directly as a result of the EO on AI or under their own inherent authority—have taken action to address and further regulate AI. Some have initiated requests for comment on the implementation of AI, its development, and its impacts, and numerous agencies are now publishing complex and stringent AI regulations and guidance.

Notably, the U.S. Department of Justice last month issued an updated Evaluation of Corporate Compliance Programs guidance that now emphasizes AI compliance and outlines the DOJ’s expectations that companies assess the potential impacts of and risks associated with “new technologies, such as AI.” Specifically, the DOJ addresses how companies intend to curb negative or unintended consequences and implement appropriate governance and compliance programs.

Federal agency guidance and regulations concerning AI will continue to develop as the technologies continue to advance. As such, and despite the complexity of following each agency’s actions in this area, companies need to know, monitor, and comply with all relevant federal regulations and guidance.

States

In the absence of comprehensive federal legislation on AI, states are passing their own laws and regulations. But, like their federal counterparts, states have not passed comprehensive AI legislation and are instead enacting rules targeting specific aspects and impacts of AI.

Some of the earliest significant state laws concerning emerging AI technologies came from Utah, Tennessee, and Colorado.

On March 13, 2024, Utah enacted the Utah Artificial Intelligence Policy Act. The Act primarily obligates individuals in certain regulated occupations to disclose when a consumer is interacting with generative AI systems.

On March 21, 2024, Tennessee’s Ensuring Likeness Voice and Image Security Act (“ELVIS Act”) was signed into law. The ELVIS Act expands the right of publicity by amending Tennessee’s Protection of Personal Rights law to now prohibit AI-based “voice” misappropriation of songwriters, performers, and music industry professionals.

On May 17, 2024, Colorado enacted SB 24-205 – Consumer Protections for Artificial Intelligence. The law primarily governs the risk of discrimination in certain AI systems and requires entities to use reasonable care to protect consumers from any known or reasonably foreseeable associated risks.

Most recently, California passed 18 laws targeting specific aspects of AI, including consumer protections, AI training data, and digital replicas. Because there is no uniformly accepted definition of AI (with both the EU AI Act and EO on AI providing their own unique definitions), one particularly relevant California law is AB 2885.

Under AB 2885, California adopts its own definition of AI: “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.”

At least one state has attempted to pass a comprehensive AI law akin to the EU AI Act.

Earlier this year, the California Legislature passed SB 1047 – the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which would have broadly governed many aspects of AI development, liability, and use. But Governor Newsom vetoed the bill on September 29, 2024.

As AI continues to evolve and impact various sectors, the lack of a comprehensive U.S. federal law on AI has led to a fragmented regulatory landscape.

While the EU’s AI Act sets a single, comprehensive, risk-based framework, the U.S. relies on a combination of executive orders, agency regulations, and state laws to govern various aspects of AI. This patchwork approach, though effective in addressing specific concerns, creates challenges for companies trying to navigate the complex web of federal and state rules.

To stay compliant and mitigate risks, businesses must closely monitor both federal agency and state-level developments and adapt to emerging guidance and regulations. As AI technology advances, this dynamic regulatory environment will require ongoing vigilance and proactive engagement.

Stephen Anstey is an attorney based in Kilpatrick’s Washington, D.C., office. His practice focuses on artificial intelligence, federal and state regulation and legislation, digital assets, distributed ledger technology, energy, and a broad spectrum of emerging technologies. He was recognized as one of the “Best Lawyers: Ones to Watch” in 2025 and the three years immediately preceding by The Best Lawyers in America®.

Michael Breslin is a partner in Kilpatrick’s Atlanta office and focuses his practice on complex commercial litigation and emerging technologies, including artificial intelligence, telecommunications, fintech innovations, payments, blockchain, and cryptocurrency. He has been recognized by Best Lawyers® in 2024 and 2025 for Technology Law. Stephen and Michael are Directors of Kilpatrick Connect, a legally focused AI consulting and advisory offering built upon Kilpatrick’s AI, legal, and industry expertise that helps clients navigate the intricate legal, regulatory and policy landscape surrounding AI.
