

April 6, 2026

Cybersecurity Law & Strategy, "When AI Gets It Wrong: Managing the Legal Risk of Hallucinations in Business Decision-Making"

Reprinted with permission from the April 2026 edition of the Cybersecurity Law & Strategy newsletter. © 2026 ALM Media Properties, LLC. Further duplication without permission is prohibited. All rights reserved.

In the past year, courts have sanctioned lawyers for citing nonexistent cases generated by artificial intelligence (AI), regulators have warned companies about unsubstantiated AI-driven claims, and businesses have begun quietly absorbing losses tied to AI-assisted decision-making. What was once theoretical is now very real.

AI is no longer experimental. Across industries, businesses are embedding generative tools into daily operations — drafting communications, summarizing contracts, analyzing data and informing strategic decisions. The efficiency gains are real and, in many cases, transformative.

But as adoption accelerates, so does a less appreciated risk: AI hallucinations, outputs that are coherent, confident — and wrong.

These errors are not anomalies. They are a predictable feature of how generative systems operate. And as businesses increasingly rely on AI-generated outputs, a critical legal question emerges: What happens when those outputs are wrong — and relied upon?

The answer under existing law is increasingly clear. The use of AI does not alter fundamental obligations of accuracy, reasonableness and accountability. The legal risk lies not in the existence of hallucinations but in the failure to govern and verify them.

A Known Risk, Not a Technical Glitch

Generative AI systems do not function like traditional databases or search engines. They do not retrieve facts in a deterministic manner. Instead, they generate outputs probabilistically, based on patterns in training data.
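
The difference is easy to see in miniature. The Python sketch below is illustrative only (the vocabulary and probabilities are invented for this example), but it shows why a probabilistic generator can return a fluent answer that a deterministic lookup would simply refuse to give:

    import random

    # Deterministic retrieval: a lookup either finds a stored fact or fails loudly.
    FACTS = {"capital of France": "Paris"}

    def retrieve(query):
        return FACTS.get(query, "NOT FOUND")

    # Probabilistic generation: sample a completion from a distribution.
    # The candidate completions and weights here are invented for illustration.
    NEXT_WORDS = {
        "The capital of France is": [("Paris", 0.90), ("Lyon", 0.07), ("Nice", 0.03)],
        "The controlling case is": [("Smith v. Jones", 0.5), ("Doe v. Acme", 0.5)],
    }

    def generate(prompt):
        completions, weights = zip(*NEXT_WORDS[prompt])
        return random.choices(completions, weights=weights)[0]

    print(retrieve("controlling case"))         # "NOT FOUND": no such fact stored
    print(generate("The controlling case is"))  # fluent citation that may not exist

A real model samples over tens of thousands of tokens rather than a toy table, but the failure mode is the same: the system always produces something, and nothing in the mechanism distinguishes a true completion from a plausible fabrication.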

That design produces fluent and persuasive responses — but also introduces a known failure mode: fabrication. Hallucinations can include nonexistent citations, incorrect factual assertions or misleading summaries presented with confidence.

Regulators have already begun to address this risk directly. The Federal Trade Commission has cautioned businesses to ensure that AI-generated claims are truthful and substantiated, regardless of how those claims are produced. Automation, in other words, is not a defense.

Where Hallucinations Create Legal Exposure

The legal exposure associated with AI hallucinations turns on how the output is used — and how much reliance is placed on it.

External Communications

Businesses increasingly rely on AI to generate marketing content, client communications and public-facing materials. In that context, hallucinations can quickly create legal risk.

Consider a company that uses AI to draft product descriptions for a cybersecurity offering. The system states that the product is “fully compliant with all federal cybersecurity standards” and “guarantees breach prevention.” Neither claim is verified. Both are inaccurate.

If those statements are published and relied upon, the company may face exposure under deceptive-trade-practices and false-advertising laws. The legal analysis does not turn on how the statements were generated but on the fact that they were made — and that they were false.

Contracting and Legal Analysis

AI is also being used to summarize contracts, identify key provisions and assist in negotiations. While efficient, these uses introduce a subtler but equally significant risk: confident misinterpretation.

Imagine a business team reviewing a vendor agreement with the assistance of an AI tool. The system summarizes the contract as containing mutual indemnification when in fact the provision is one-sided. Relying on that summary, the company proceeds without renegotiating the clause.

When a dispute arises, the error becomes clear — but too late. The resulting exposure is not caused by the AI itself but by reliance on an unverified interpretation.

Courts have already demonstrated little tolerance for such reliance. In Mata v. Avianca, Inc., No. 22-cv-1461, 2023 WL 4114965, at *2–3 (S.D.N.Y. June 22, 2023), attorneys submitted filings containing fabricated citations generated by AI. The court sanctioned counsel, emphasizing that the failure was not the use of AI but the failure to verify its output. Since then, judges have sanctioned lawyers in hundreds of cases for citing fake cases, for citing real cases that do not stand for the propositions attributed to them, and for failing to come clean when courts confronted them about the errors.

Operational and Strategic Decision-Making

Organizations are beginning to use AI to inform financial forecasting, operational planning and strategic decisions. In these contexts, hallucinations may not immediately produce external liability, but they can lead to significant internal harm — misallocated resources, flawed strategies and governance failures.

A Boardroom Scenario: AI-Enabled Fraud

Your finance team receives what appears to be a call from the CEO directing an urgent wire transfer tied to a confidential acquisition that must close right away. The voice is cloned using AI. Internal summaries — also AI-assisted — appear to corroborate the transaction timeline. The team acts quickly and processes the wire. Days later, the fraud is uncovered.

This scenario, based on real-life events, implicates more than operational error. It raises questions about:

  • Internal controls and authentication procedures
  • Cybersecurity preparedness
  • Insurance coverage under crime and cyber policies
  • Board-level oversight of emerging risks

The risk here is not simply that AI was used. It is that AI altered the reliability of signals the organization depends on.
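
Consider the control environment such a scheme would have to defeat. As a purely illustrative sketch (the dollar threshold, field names and approval rules below are hypothetical, not a prescribed standard), out-of-band verification means that no single channel, however convincing, can authorize a high-value transfer on its own:

    from dataclasses import dataclass

    CALLBACK_THRESHOLD = 50_000  # hypothetical threshold for extra checks

    @dataclass
    class WireRequest:
        amount: float
        requested_via: str       # e.g., "phone", "email"
        callback_verified: bool  # confirmed via a known-good number, not the inbound call
        dual_approval: bool      # a second authorized approver signed off

    def may_execute(req: WireRequest) -> bool:
        """A voice or email alone never authorizes a large wire."""
        if req.amount < CALLBACK_THRESHOLD:
            return True
        # Above the threshold, require verification on an independent channel
        # plus a second approver, regardless of how convincing the request was.
        return req.callback_verified and req.dual_approval

    # The cloned-CEO call fails: urgent, phone-only, no callback, no second approver.
    urgent = WireRequest(amount=2_000_000, requested_via="phone",
                         callback_verified=False, dual_approval=False)
    print(may_execute(urgent))  # False

The design point is that the control keys off the amount and the verification steps, not off how authentic the request sounds.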

Professional Services

For lawyers, consultants and financial advisers, the stakes are higher. Comment 8 to Rule 1.1 of the American Bar Association’s Model Rules of Professional Conduct makes clear that competence includes an understanding of “the benefits and risks associated with relevant technology.” And Rule 1.4’s duty of communication compounds the point: a lawyer who does not understand those risks cannot meaningfully discuss them with the client.

The emerging standard is straightforward: AI may be used as a tool, but unverified reliance on its outputs will fall below professional standards.

Existing Law and Risk Management

Despite the novelty of the technology, the legal framework governing hallucination risk already exists.

Negligence principles provide the primary lens. Courts will assess whether reliance on AI outputs was reasonable under the circumstances. That analysis will consider the importance of the decision, the availability of verification and the foreseeability of error.

Misrepresentation doctrines operate in parallel. If a business communicates false or misleading information, liability may follow, regardless of intent. From a regulatory standpoint, the source of the statement is irrelevant; the effect on the recipient — in many cases, consumers — controls.

Contract law adds another dimension. Many AI-vendor agreements disclaim accuracy and limit liability, shifting risk to the user. This means that unless they can negotiate protection up front, businesses relying on AI outputs often bear the full risk of error.

This risk-shifting naturally raises insurance issues. Whether AI-driven losses fall within traditional cyber, crime or professional-liability policies remains unsettled. Early disputes focus on the particular facts, policy terms that have rarely been tested in litigation and — coming soon — exclusions for loss or damage arising from AI use. So while insurance can be an invaluable risk management tool, its value depends on careful review of competing forms before a policy is issued, not after a loss when the insurer is asked to respond.

At a broader level, regulators and standards bodies are reinforcing the need for AI governance. The National Institute of Standards and Technology (NIST), in its AI Risk Management Framework, has emphasized that AI risks — including issues of reliability and accuracy — must be managed as part of enterprise risk.

From Efficiency Tool to Enterprise Risk

The reality is that most organizations have already adopted AI — often without formal oversight. The risk is not future adoption; it is unmanaged use.

AI does not eliminate human error. It often replicates it on a larger scale.

Errors that might once have been isolated can now be generated rapidly, scaled across an organization and disseminated externally with little friction. Organizations that treat AI as a productivity tool without corresponding governance are not becoming more efficient; they are becoming more exposed.

A Checklist for C-Suite Executives

For executive leadership, the question is not whether AI is being used — it almost certainly is. The question is whether its use, if challenged, can be defended as reasonable.

A practical checklist includes:

  • Taking an inventory of AI usage across the organization, including informal or “shadow AI” tools.
  • Identifying high-risk applications, particularly external communications, legal analysis and decision-making functions.
  • Implementing verification protocols for high-impact outputs, including fact-checking and validation procedures (a brief illustrative sketch follows this checklist).
  • Establishing human oversight with clear accountability for AI-assisted work.
  • Formalizing governance, including a written AI policy defining acceptable use.
  • Training employees on AI, beginning with appropriate use cases.
  • Reviewing vendor contracts to understand limitations and risk transfer, including indemnification and insurance.
  • Documenting decision-making processes to demonstrate reasonableness.

These steps are quickly moving from best practices to baseline expectations.
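
To make the verification-protocol item concrete, here is a minimal sketch of a risk-tiered review gate. The tiers, use-case labels and default-to-high-risk rule are hypothetical choices for illustration; a real protocol would be tailored to the organization’s actual risk profile:

    from enum import Enum

    class Risk(Enum):
        LOW = 1   # internal drafts, brainstorming
        HIGH = 2  # external claims, legal analysis, financial inputs

    # Hypothetical mapping of use cases to risk tiers.
    RISK_BY_USE = {
        "internal_brainstorm": Risk.LOW,
        "marketing_claim": Risk.HIGH,
        "contract_summary": Risk.HIGH,
    }

    def release(output: str, use_case: str, human_verified: bool) -> str:
        """Hold high-risk AI outputs until a named reviewer has verified them."""
        risk = RISK_BY_USE.get(use_case, Risk.HIGH)  # unknown uses default to HIGH
        if risk is Risk.HIGH and not human_verified:
            return "HELD: requires documented human verification before use"
        return output

    print(release("Fully compliant with all federal cybersecurity standards",
                  "marketing_claim", human_verified=False))
    # HELD: requires documented human verification before use

The design choice that matters is the default: anything not expressly classified is treated as high risk, which is also how a regulator or court is likely to view it after the fact.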

Moving Forward

AI hallucinations are not anomalies. They are a predictable feature of generative systems. The legal risk arises not from their existence but from uncritical reliance on them.

Organizations that implement AI governance, verification and accountability will be best positioned to capture the benefits of AI while managing its risks. Those that do not may find that the efficiencies AI promises are offset by legal exposure that existing law is fully equipped to impose. In other words, the dividing line won’t be who uses AI but who uses it responsibly.
 
