
Who Is Responsible When AI Makes a Mistake?

In today’s hyper-automated world, artificial intelligence (AI) doesn’t just suggest your next movie—it decides whether you get a loan, flags your bank account for fraud, or recommends changes to your investment portfolio. And while it promises speed, precision, and scale, AI also carries a new form of risk: one that isn’t easily pinned on a single person or system.


So what happens when the algorithm gets it wrong? 
Who do you blame? 

The bank that deployed the model? The engineers who built it? The data team that trained it? Or the regulators who didn’t predict this edge case?


In reality, AI is just software – it isn’t a legal “person” that can be sued.


Current law treats AI as property, so any blame usually falls on humans or companies. 


For example, one analysis notes that in the U.S. “AI systems have no legal rights or duties” and can’t be sued directly. Instead, accountability typically goes to the people around the AI: the developers who built it, the companies that own it, or the users who control it. 


But pinning down exactly who is at fault can be tricky. Unlike a human, an AI might learn on its own in unpredictable ways (a “black box”), so it’s hard to see why it erred. In practice, experts say responsibility often ends up being shared.


Why AI Breaks in the First Place


Before we rush to blame, it’s important to understand what makes AI go wrong. An AI system doesn’t have intentions or morals – it simply processes data according to its algorithms. Errors often come from bad data or misunderstood context.


For example, if an image-recognition AI is trained mostly on light-skinned faces, it may misidentify darker-skinned individuals. In one study, commercial facial-recognition programs had gender-classification error rates as low as 0.8% for light-skinned men but over 30% for dark-skinned women. That massive disparity was due to biased training data, not a willful AI.


In other words, when such bias causes a false arrest or a refusal of service, the root problem is flawed data or design.
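To make that concrete, here is a minimal sketch of the kind of per-group error-rate check that surfaces such disparities. The group labels and sample predictions are invented for illustration; they are not data from the study cited above.

```python
# Minimal sketch: compare a model's error rate across demographic groups.
# All names and sample values below are hypothetical, for illustration only.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Illustrative predictions from a hypothetical gender classifier
sample = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),   # misclassification
    ("darker-skinned female", "female", "female"),
]
print(error_rates_by_group(sample))
# {'lighter-skinned male': 0.0, 'darker-skinned female': 0.5}
```

A gap like the one printed above is the signal that the training data, not the algorithm’s intent, is skewing outcomes.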


Another issue is opacity: many AI models are “black boxes”, and even their creators may not fully understand how they reach a decision. That reduces accountability. As one medical technology writer notes, opaque AI systems “can perpetuate biases” and their “inner workings remain unexplained as a ‘black box’”, which makes it hard to trace exactly what went wrong when an AI errs.


Where the Blame Falls 


When something goes wrong with AI, assigning blame isn’t straightforward. A range of parties might be held accountable—from the users operating the system to the developers who built it, the organizations that deployed it, and even the regulators who set the rules. This complexity is especially critical in regulated industries like finance, banking, and healthcare, where errors can lead to large-scale consequences.


Users often bear initial responsibility. For instance, a doctor using an AI diagnostic tool or a trader using an algorithmic assistant may be held liable if they ignore warnings or misuse the system. Under the legal principle of vicarious liability, employers or owners can be held accountable for their agents' actions—including AI tools—especially if oversight was inadequate.


Developers and engineers can also face blame if the root cause lies in faulty design, bugs, or biased training data. Although many tech firms include liability waivers in user agreements, serious defects can still lead to legal action—especially in sectors like finance, where flawed models can cause systemic risks or regulatory breaches.


Organizations that deploy AI—like banks using automated credit risk models or fintech firms offering robo-advisors—carry significant legal exposure. If the AI causes harm, they might face product liability claims, akin to marketing a defective product. As automation increases, legal precedent may shift more responsibility onto these firms, requiring robust governance and audit trails for AI decisions.
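As a rough illustration of what such an audit trail might capture, here is a minimal Python sketch that logs each automated decision alongside the model version, inputs, output, and any human reviewer. The field names, file format, and model identifiers are assumptions made for this example, not an established compliance standard.

```python
# Hedged sketch of an audit-trail record for an automated decision.
# Field names and values are illustrative assumptions, not a regulatory format.
import json
from datetime import datetime, timezone

def log_ai_decision(model_name, model_version, inputs, output, reviewer=None):
    """Append one decision record to a JSON-lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,
        "inputs": inputs,            # what the model saw
        "output": output,            # what the model decided
        "human_reviewer": reviewer,  # who, if anyone, signed off
    }
    with open("ai_decision_audit.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a declined credit application (hypothetical values)
log_ai_decision(
    model_name="credit_risk_scorer",
    model_version="2.3.1",
    inputs={"applicant_id": "A-1024", "requested_amount": 15000},
    output={"decision": "decline", "score": 0.31},
    reviewer="analyst_042",
)
```

The point of a record like this is simple: if a decision is later challenged, the firm can show exactly which model, which version, and which inputs produced it, and whether a human was in the loop.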


Even data providers aren't immune. If incorrect or biased data leads to flawed predictions—say, an AI suggesting risky investments based on outdated market trends—the source of that data could come under scrutiny. In finance especially, where models rely heavily on third-party feeds, ensuring data accuracy is a compliance necessity.


Finally, regulators play a dual role—as enforcers and, sometimes, as targets of criticism. Unclear or outdated laws can blur accountability, making it difficult for institutions to navigate compliance. The EU’s new AI rules begin to address this: the AI Act imposes obligations on providers of high-risk systems, while the bloc’s updated product liability rules treat AI systems as “products” and impose strict liability for failures, even those emerging from software updates or autonomous learning behaviors.



So, who bears the responsibility when things go wrong? Insurers have started offering AI-error coverage for exactly this reason. And yet, we’re only scratching the surface. In one peculiar case, a court upheld a customer’s claim based on a chatbot-generated coupon, underscoring how unpredictable AI liability can be in practice. In finance, where every automated action can carry monetary and reputational impact, the need for clear accountability frameworks is no longer optional—it's urgent.


Generative AI Companies and Liability


Meanwhile, the creators of large AI models (sometimes called foundation model providers) have also been dragged into court. Major AI labs and platforms like OpenAI, Google, Microsoft, and StabilityAI are already under scrutiny. 


In the U.S., authors and publishers have sued OpenAI and Microsoft over their large language models, arguing that the training data and outputs copy copyrighted works. Artists have filed similar lawsuits against image generators (e.g., StabilityAI) over allegedly infringing use of their art. Music publishers and programmers have brought their own claims against AI makers for using copyrighted lyrics or code in training.


On the defamation front, a Georgia court recently addressed a claim that ChatGPT falsely accused a radio host of embezzlement. In Walters v. OpenAI, the judge dismissed the case, noting it was the first such lawsuit against a chatbot and that no one was actually harmed by the ChatGPT output. The court emphasized that generative AI is not human – it can’t “knowingly” lie under an actual-malice standard – so imposing strict liability on the company would be unprecedented. 


To complicate matters, the tech industry is pushing liability off onto users. Google, Microsoft, and OpenAI recently told U.S. regulators that when their models inadvertently output copyrighted content, “any resulting liability should attach to the user” who prompted it. OpenAI has even introduced a “copyright shield” for paying customers: it promises to defend and cover legal costs if a user faces a copyright suit over AI-generated material. In effect, these companies say “we’ll pay your lawyer, but you’re the one being sued.”


Bottom line: no jurisdiction has definitively decided if or when a company like OpenAI can be held strictly liable for its AI’s mistakes. Some courts and scholars have noted that generative AI outputs might be protected speech (raising First Amendment issues) or immune under Section 230, making liability even murkier. Meanwhile, policymakers are still debating the rules.


In the European Union, for example, the picture is evolving: the AI Act contains explicit obligations for foundation model providers like OpenAI and Google to document and disclose how their models are built, and a companion liability regime treats AI systems as products and exposes operators of high-risk AI to strict liability. But enforcement mechanisms and legal precedents have not yet fully crystallized.


In summary, when AI makes an error, it’s ultimately humans who pay the price. Regulators, courts, and the public will look to the organizations and people behind the AI. That means the bank, hospital, government agency, or tech firm that deployed the system – along with the engineers, managers, and executives who built and used it – will be called upon to explain what went wrong. The AI itself can’t be sued, so blame “falls on the humans,” as one commentary put it. In practice, responsibility lies with those who designed, vetted, and published the AI’s output. If a false statement or bad decision makes it into the news, a court may find that the publisher (or user) who disseminated it, rather than the developer, is the one liable.


In the end, who is responsible? In the eyes of the law, it will be the people and entities around the AI – the model’s creators, deployers, and operators – that carry legal and moral responsibility for its mistakes. These human actors must shoulder the burden of accountability when “the algorithm” fails.  













