Challenging False AI Representations Online: Legal Remedies and Practical Barriers
Introduction
On 30 September 2025, OpenAI unveiled Sora 2, its next-generation AI model for video and audio. The launch was accompanied by a mobile app, initially invite-only in the U.S. and Canada, which permits users to upload a one-time verification video of themselves and create "cameos" that insert their likeness and voice into generated scenes. According to OpenAI, users control who may employ their cameo, and the system applies visible watermarks and embedded Content Credentials (C2PA) metadata to signal provenance.
Its increased realism and integrated audio significantly lower the technical barrier to creating convincing forgeries: no longer silent or glitchy AI clips, but immersive video-and-audio fakes that may easily fool viewers. The addition of cameo likeness insertion further expands the risk: individuals may now find their face and voice seamlessly inserted into fabricated scenes with minimal technical effort. Watermarking and provenance metadata are a welcome attempt to counter misuse, but their efficacy depends on platform adoption, enforceability, and the robustness of detection systems.
As AI media advances, so must legal frameworks, enforcement, and public awareness of the real risks posed by AI impersonations.
Such fabrications can cause serious reputational, emotional, and financial harm to those depicted. The UK regulator Ofcom has warned that deepfakes can be used to "demean, defraud and disinform" (Ofcom, 'A deep dive into deepfakes that demean, defraud and disinform', 23 July 2024). Despite these threats, English law does not recognise a general image right or personality right that lets individuals control use of their likeness. Instead, victims of deepfakes must resort to a patchwork of existing civil laws, chiefly defamation, misuse of private information, data protection, passing off and malicious falsehood, to seek redress. In theory, each of these causes of action can address certain harms caused by deepfakes.
In practice, however, pursuing a claim often hinges less on the law’s merits than on identifying and suing a viable defendant and enforcing any judgment across borders. Anonymous online perpetrators or those operating overseas are notoriously difficult to hold accountable, making enforcement and recovery major challenges.
Defamation
A deepfake that falsely portrays someone in a manner that damages their reputation can give rise to a defamation claim. English defamation law applies to any communicated representation, including images or videos, that conveys a defamatory meaning. Under the Defamation Act 2013, a statement, including a deepfake, is only defamatory if its publication has caused or is likely to cause serious harm to the claimant's reputation (the threshold confirmed in Lachaux v Independent Print Ltd [2019] UKSC 27). For example, a fake video of a politician admitting to criminal wrongdoing or a business leader engaging in gross misconduct could clearly lower that person in the estimation of others and thus be defamatory. The claimant must also show that the deepfake refers to them, i.e. they are identifiable in it, that it was published to a third party, and that serious reputational harm resulted or is likely to result. If these elements are proven, the usual remedies are available: damages and an injunction to remove or restrain publication.
Defendants in a defamation case may attempt to rely on the statutory defences under the Defamation Act 2013, such as truth, honest opinion, or publication on a matter of public interest. In the context of a knowingly fabricated video or image, however, these defences will rarely apply: a deepfake by definition presents false information, not a genuinely held opinion or an accurate report in the public interest. Assuming no defence succeeds, a defamation claimant can potentially obtain a final injunction to prohibit further distribution of the deepfake and an award of damages for the injury to reputation.
Practical hurdles: While the legal test for defamation can be satisfied by a harmful deepfake, actually enforcing such a claim is difficult in practice. The culprit who created or uploaded the fake may be anonymous, hiding behind a pseudonymous account, and tech platforms will not reveal user identities without a court order. Often the perpetrator is based overseas, beyond the immediate jurisdiction of UK courts. Even if a claimant wins a judgment, collecting damages from an unknown or foreign wrongdoer can be futile.
A defamation claim might establish the wrongfulness of a deepfake and lead to a takedown injunction, but actually getting financial redress or holding the true author to account is often another matter. These enforcement problems often drive claimants toward other approaches (like privacy claims or regulatory complaints) or at least a focus on quick removal over damages.
Misuse of Private Information (Privacy)
Beyond reputation, many deepfakes also implicate privacy rights. English law protects an individual's reasonable expectation of privacy through the tort of misuse of private information. Notably, this cause of action can apply even to false or fabricated materials if they intrude upon someone's private life. Courts have recognised that deliberately fake content can still violate privacy so long as it portrays something private or sensitive about the person. For example, a non-consensual deepfake purporting to show someone in an intimate or humiliating situation, even if the scene never happened in reality, would be actionable as a misuse of private information. The law reasons that a person's right to privacy can be infringed by material that is wholly or partly untrue, if it nevertheless unjustifiably reveals, or fabricates, details of their private life.
To determine if a privacy claim succeeds, the courts apply a two-stage test:
- Private information: Did the claimant have a reasonable expectation of privacy in the content in question? For instance, deepfakes depicting someone's sexual activities, nudity, health records, or other inherently private matters will almost always meet this criterion.
- Balance of rights: If so, is that privacy interest, protected by Article 8 of the European Convention on Human Rights, outweighed in all the circumstances by the publisher's right to freedom of expression under Article 10? Unless the publisher can show a strong public interest justification, which is highly unlikely for non-consensual voyeuristic or pornographic fakes, the individual's privacy rights will prevail over free expression.
If both stages are satisfied, the individual can obtain an injunction preventing any further distribution of the deepfake and an award of damages, often measured by the distress caused to the victim. Courts have even ordered that images or videos be delivered up or deleted to ensure the harmful content is expunged once identified.
Misuse of private information is particularly well suited to deepfakes of a sexual or deeply personal nature. In those scenarios, the primary harm is the violation of one's intimacy and dignity, rather than reputational harm in the defamatory sense. Privacy claims do not require proof of serious reputational damage or financial loss; the injury to feelings and autonomy is itself recognised. However, as with defamation, the enforcement challenge looms large: anonymity and online publication can frustrate the claimant's ability to identify whom to sue or to enforce takedown orders across foreign jurisdictions. Even so, privacy law has one advantage: a claimant can sometimes proceed against the website or platform itself (for example, as a data controller under UK data protection law, or via privacy-based injunctions) more easily than under defamation law's strict intermediary immunity rules. This leads us to the next area, data protection.
Data Protection
Deepfakes almost always involve the processing of personal data without consent, opening another avenue for relief under data protection law. In the UK, a person's image, voice, or other biometric identifiers are considered personal data; indeed, facial images and voice prints can qualify as special category biometric data requiring extra protection. Creating or sharing an AI-generated video of an identifiable individual means processing that individual's personal data. Unless this processing fits within a lawful basis under the UK General Data Protection Regulation (UK GDPR), which is very unlikely if done without consent or any legitimate justification, it is unlawful. For example, taking someone's photographs from social media and using them to train a deepfake model or to fabricate a video would typically violate UK GDPR principles (no consent, no legitimate purpose) and likely breach specific provisions of the Data Protection Act 2018.
Victims of such misuse of their data can pursue two main remedies under data protection law: compensation and data removal.
- Compensation: Article 82 of the UK GDPR, read with section 168 of the Data Protection Act 2018, provides that individuals may claim compensation for material or non-material damage caused by a data protection infringement, which explicitly includes emotional distress, not just financial loss. In theory, a deepfake subject could sue the creator or platform for damages for misuse of their personal data (image/voice) without consent. In practice, however, courts in recent years have set a high bar for these claims. Following the Supreme Court's decision in Lloyd v Google LLC [2021] UKSC 50 and cases like Rolfe v Veale Wasbrough Vizards LLP [2021] EWHC 2809 (QB), trivial or purely technical data breaches that cause little or no harm are not compensated. Judges have struck out claims for de minimis breaches, emphasising that no one should recover damages for a data breach so minor that no credible distress or loss can be shown. A deepfake can certainly cause non-trivial harm, but if the primary injury is reputational or emotional, claimants often prefer defamation or privacy tort claims for damages, since awards under data protection law tend to be modest unless clear, quantifiable harm is proven.
- Data removal (erasure): Data protection law offers powerful non-monetary remedies that can aid victims. Under Article 17 of the UK GDPR (the right to erasure, or "right to be forgotten"), an individual can demand that platforms or websites erase personal data that has been processed unlawfully. A person depicted in a deepfake can file takedown requests citing this right, independent of any lawsuit. The UK's Information Commissioner's Office (ICO) can also receive complaints and potentially investigate or sanction data controllers for sharing deepfakes without consent. In reality, data protection is often used as a procedural tool: it may be easier, and cheaper, to get a platform to remove content by asserting a data privacy violation than by winning a court injunction, especially since major platforms try to comply with privacy and harassment laws to avoid regulatory trouble. Thus, while a deepfake victim might not obtain a large payout via data protection claims, they can leverage these rights to compel deletion of the offending material and even enlist regulators' help, which for many victims is the primary goal.
Passing Off and Related Intellectual Property
English law does not recognise a general property right in one's image or likeness, but famous individuals may sometimes use intellectual property laws to fight false or misleading representations. In particular, celebrities and public figures have invoked the tort of passing off when a deepfake or image is used in a commercial context that suggests a false endorsement. Passing off traditionally protects traders against misrepresentations that damage goodwill in their business. Courts have adapted it to cover false endorsements; for example, using a star's image without permission to promote a product can mislead the public into believing the star endorses the product. In Irvine v Talksport Ltd [2002] EWHC 367 (Ch) (and, on appeal, Irvine & Tidswell Ltd v Talksport Ltd [2003] EWCA Civ 423), racing driver Eddie Irvine successfully sued a radio station that had fabricated an image of him holding its branded radio, implying he endorsed the station. In Fenty v Arcadia Group Brands Ltd (t/a Topshop) [2013] EWHC 2310 (Ch), pop singer Rihanna won a passing off claim against a retailer selling T-shirts bearing her photograph without consent. The courts in those cases reiterated that English law has no image right, but will provide a remedy if the elements of passing off are proven, namely: (a) the claimant's goodwill or reputation; (b) a misrepresentation leading the public to believe the claimant endorsed or was associated with the product; and (c) resulting damage, such as loss of sales or loss of control over one's image.
By analogy, if a deepfake video shows a well-known actor or influencer seemingly promoting a brand or service they have no real connection with, that could be actionable as passing off (false endorsement). The deepfake in that scenario misleads consumers into thinking the celebrity is associated with the product, exploiting the celebrity’s goodwill. A successful claimant could obtain an injunction to stop the misuse and possibly damages, often measured by a notional licensing fee or evidence of lost business.
However, passing off is rarely a viable route for ordinary individuals who lack significant public recognition or goodwill in their persona. The law requires that the person has a substantial existing reputation such that the misrepresentation causes commercial harm. A private individual deepfaked into an advertisement would struggle to show the public was deceived in a way that affected their goodwill or business, since they have none to speak of. Likewise, other IP rights offer limited protection in these scenarios. Copyright, for example, subsists in the recording or photograph itself, not in one's face or voice. Unless the victim actually owns the copyright in a source photo or video used to create the deepfake (uncommon outside professional images), they cannot directly sue for copyright infringement. Performers do have certain rights in recordings of their performances, under Part II of the Copyright, Designs and Patents Act 1988, which might help if a deepfake incorporates part of an actual performance. But deepfakes typically generate new content imitating the person, rather than using exact excerpts of an existing protected performance. In practice, these IP avenues have significant gaps, prompting calls by groups like Equity (the actors' union) to strengthen image and voice rights in the age of AI. For now, unless the person deepfaked is a celebrity with an established personal brand, traditional IP law is usually of only peripheral help.
Malicious Falsehood and Other Torts
If a false AI-generated portrayal does not meet the strict definition of defamation, for instance where it does not clearly damage reputation but causes other harm, a claimant might consider the tort of malicious falsehood. Malicious falsehood involves the publication of a false statement, made maliciously, that causes financial loss to the claimant. Unlike defamation, this tort is not about reputational harm per se but about provable economic damage from a lie. In the deepfake context, a fabricated video showing a professional in a compromising or unethical situation might lead their clients to cut ties or cancel contracts. If the victim loses income as a direct result, but the content is not clearly defamatory, or the innuendo is too uncertain to meet defamation's test, malicious falsehood could be argued as a fallback.
To succeed, the claimant must prove two key elements:
- Malice: The defendant knew the statement was false, or made it with reckless disregard or intent to cause harm (in other words, published the fake with no honest belief in its truth).
- Special damage: The publication caused actual pecuniary loss to the claimant, usually a specific financial loss or loss of a definite opportunity. English law generally requires proof of such special damage in malicious falsehood, unless the statement falls into certain limited categories where damage is presumed by statute.
Malicious falsehood can be a useful fallback cause of action where defamatory meaning is borderline or unclear. The tort was historically used for business disparagement or false claims about products, but it can extend to personal contexts if economic loss is involved; for instance, being fired or losing customers due to a harmful deepfake rumour. The downside is the high bar for proof: malice must be shown, and actual financial loss must usually be quantified, except in the limited cases where statute (section 3 of the Defamation Act 1952) dispenses with proof of special damage. Deepfake cases might meet these elements only in egregious scenarios.
Additionally, litigation over a deepfake could potentially invoke other torts depending on the facts. Two examples include:
- Harassment: If someone is subjected to a campaign of malicious deepfakes causing serious alarm or distress, it could fall under the Protection from Harassment Act 1997, a law often used against stalkers or online abusers. Repeated or targeted publication of fake content might amount to harassment in some cases.
- Breach of confidence: If private or confidential source material such as private photos or videos was used in creating the fake, the victim might claim misuse of confidential information in addition to or instead of privacy torts. This would require showing the information had the necessary quality of confidence and was misused to the detriment of the claimant.
These are specialised routes and highly fact-dependent, so they apply only in particular circumstances. In short, English law’s toolbox has many potential instruments, from personal torts to intellectual property to cyber-harassment laws, but each has limits when confronting the novelty of AI-generated falsifications. Often, no single cause of action perfectly fits a deepfake scenario, leading lawyers to be creative in pleading multiple alternative claims.
Remedies and Procedural Tools
Regardless of which cause of action is pursued, several legal and practical remedies are common across these deepfake-related claims. Key tools include:
- Injunctive relief: Courts can issue injunctions to contain the spread of the harmful content. Given that online false images can go viral quickly, a claimant often seeks an interim injunction early in the case to compel removal of the deepfake from websites or to stop further publication. Courts in England are willing to grant interim injunctions in appropriate cases, even against unnamed defendants, provided the claimant presents strong evidence of wrongdoing (for example, clearly privacy-violating content) and there is no practical alternative to prevent irreparable harm. For instance, judges have issued injunctions targeting persons unknown when an anonymous user was posting intimate images, with orders crafted to bind anyone with notice of them. Such orders can even be served via email, social media, or other online means if that is likely to reach the culprit.
- Disclosure orders (Norwich Pharmacal): Identifying an anonymous deepfake perpetrator often requires compelling third parties to divulge information. A Norwich Pharmacal order is a court order used to unmask unknown wrongdoers by obliging an innocent third party who got mixed up in the wrongdoing to disclose relevant information. In the internet context, this is commonly directed at platforms or service providers that might have identifying data, IP addresses, account info, upload logs, for the person who posted the deepfake. Under Norwich Pharmacal principles, if a platform or ISP is likely to have such information, the court can compel them to hand it over to the claimant. Many deepfake perpetrators operate via social media or anonymous websites; a Norwich order against the host (e.g. Facebook, YouTube, a web forum) can force disclosure of the user’s identity or at least an email and IP address, which then enables formal service of legal proceedings on the real person. Courts do require the applicant to show a strong prima facie case of wrongdoing and that the third party probably has relevant data. If granted, the applicant usually must pay the third party’s reasonable costs of compliance. These orders are considered exceptional remedies, but in practice they are often the only way to pierce anonymity online.
- Damages and enforcement: All of the tort claims discussed, defamation, privacy, etc. allow for monetary compensation in principle. However, actually recovering significant damages is often difficult. Many deepfake creators are individual internet users or trolls who have few assets or hide behind jurisdictions where UK judgments cannot reach. Even a high damages award may be unenforceable if the defendant has no money or is overseas. This reality means claimants frequently prioritise non-monetary outcomes: deletion of the content, cessation of further posts, and public vindication (such as apologies or court declarations) over a lengthy fight for a large damages payout.
- Platform liability limits: A common frustration is the inability to hold social media or website platforms financially accountable for hosting deepfakes. Under UK law, intermediaries are generally protected from liability for user-generated content in most civil claims (with caveats). Under s.10 of the Defamation Act 2013, the court lacks jurisdiction to hear a defamation action against a person who was not the author, editor or publisher unless it is not reasonably practicable for the claimant to sue the author, editor or publisher. Additionally, Section 5 of the Defamation Act 2013 and the Defamation (Operators of Websites) Regulations 2013 provide a safe harbour for websites: as long as they follow a notice-and-takedown procedure upon receiving a defamation complaint, they are shielded from liability. Similarly, under Regulation 19 of the Electronic Commerce (EC Directive) Regulations 2002, online hosts are not liable for user content if they had no knowledge of its illegality and act expeditiously to remove it once notified. In practice, this means victims can force platforms to remove defamatory or privacy-violating deepfakes by sending complaints (platforms have strong incentives to comply in order to maintain their immunity), but victims usually cannot get damages from the platform itself. Notably, the recent Online Safety Act 2023 creates regulatory duties for platforms to remove certain illegal content, including deepfake pornography, and empowers Ofcom to fine companies that don't comply, but it does not give individual victims a private right to sue platforms for compensation. Platforms can be regulated into helping remove content, yet victims cannot easily make them pay civil damages under the current law.
Emerging Law and Reforms
The legal landscape around deepfakes is evolving, as lawmakers slowly react to the challenges posed by AI-generated deception. Recent developments and proposals include:
- Criminalisation of deepfake abuse: The UK government has moved to outlaw certain malicious deepfake activities. The Online Safety Act 2023 introduced a criminal offence (in force from January 2024) of sharing or threatening to share non-consensual intimate images, including deepfake images and videos. Building on the Law Commission's recommendations, the forthcoming Crime and Policing Bill 2025 is set to criminalise the creation of sexually explicit deepfakes without consent, closing a notable gap. Under the planned law, those who make such fake pornographic images ("deepfake porn") could face up to two years in prison. This will bring deepfake sexual abuse in line with other image-based sexual offences. While these are criminal measures, their introduction is significant: it may deter some would-be creators and makes it easier for victims to get police assistance. That said, these laws focus narrowly on intimate or sexual deepfake scenarios, often viewed as extensions of sexual harassment or voyeurism. Harmful deepfakes outside that context, e.g. falsely putting someone's face in a political propaganda video or a fake news story, are not directly addressed by the new offences.
- Platform oversight and online safety: The Online Safety Act 2023 imposes a duty of care on tech platforms to proactively tackle illegal content, which by definition includes any deepfake that constitutes criminal abuse (such as revenge porn or extreme harassment). Ofcom is empowered to issue codes of practice and levy substantial fines if major platforms fail to have systems for the swift removal of such material. Notably, as of September 2024, sharing intimate images without consent (which covers most sexual deepfakes) was designated a priority offence under the OSA regime. This means platforms must treat that content with the highest removal priority: they need to rapidly detect and delete it or face enforcement action from Ofcom. While the Online Safety Act does not give individuals a new way to sue platforms, it indirectly benefits victims by pressuring social media companies to be more responsive and responsible when deepfakes are reported.
- EU and international developments: Outside the UK, other jurisdictions are moving ahead with deepfake regulation, which could influence British policy over time. Notably, the EU's AI Act, a sweeping regulation set to come into effect in 2026, will impose transparency obligations on AI-generated content. The final text of the AI Act requires that AI-generated images, videos or audio that could reasonably mislead someone into thinking they are real must be clearly labelled as synthetically created, unless the content is part of art, satire or fiction. In other words, deepfakes circulated in Europe will need watermarks or notices indicating they are fake. The Act also pushes for technical measures like automated detection and traceability of AI outputs. While the UK is not directly bound by the EU AI Act, British lawmakers are closely observing these moves. So far, the UK has leaned toward voluntary codes and industry collaboration rather than hard rules, but as deepfakes proliferate, there could be pressure to implement similar transparency requirements or standards for labelling AI-generated media. There is also growing international discussion about creating new civil causes of action for deepfakes. In the United States, for example, some have proposed a federal tort of false light or new statutes specifically addressing deepfake harms, and a number of U.S. states have already enacted deepfake-specific laws, especially targeting election disinformation and pornographic deepfakes. The UK tends to reform cautiously, often relying on adapting existing common law to new problems, but the sheer scale of AI fakery may prompt more direct intervention if current laws prove inadequate.
Conclusion
False or manipulated AI-generated representations of individuals (deepfakes) present a serious challenge to the law. The harms they cause (reputational damage, privacy invasion, emotional distress, even financial loss) are exactly the kinds of injury that longstanding civil causes of action in English law are meant to remedy.
In practice, however, deepfakes expose the limits and friction in our legal system. Victims often find that obtaining meaningful relief is an uphill battle despite the array of causes of action on the books. The formidable practical barriers (anonymity of bad actors, jurisdictional hurdles across the internet, the narrow liability of intermediaries, stringent proof requirements, short limitation periods, and the high likelihood that perpetrators have no reachable assets) mean that many legal claims are never pursued or never reach a satisfying conclusion.
The most effective strategy in many cases is swift and proactive: get the content taken down fast, whether through platform reporting tools, legal injunctions, or regulatory channels, and contain its spread. Protracted litigation for damages, while theoretically possible, can be too slow to matter and too costly to justify unless the stakes are exceptionally high or a solvent defendant is available.
For now, anyone affected by a harmful deepfake should be prepared to use multiple approaches. Often the best course is a combination of swift takedown efforts, engagement with platform operators or regulators, and carefully tailored legal action focusing on the most applicable tort (with injunctive relief as a priority). Specialist legal advice is highly recommended, since the interplay between defamation, privacy, data law, and other rights can be complex. The law is evolving in response to these technological threats, but it remains a developing field.
Further Reading
- Malicious Falsehood in English Law: Principles and Recent Developments (May 2025)
- Defamation
- Misuse of private information
If you require expert legal advice, please contact us or email info@carruthers-law.co.uk.
Call us on 0151 541 2040 or 0203 846 2862.
Disclaimer: This article is provided for general information purposes only and does not constitute legal advice. Carruthers Law accepts no responsibility for any reliance placed on the contents. This article may include material from court judgments and contains public sector information licensed under the Open Justice Licence v1.0.