Website Operators' Liability
Intermediary Defamation Liability
Need urgent advice on a defamation issue? Contact our specialist team today or call 0203 846 2862 / 0151 541 2040.
Introduction
Defamation law in England and Wales provides specific protections and defences for internet intermediaries, such as website operators, social media platforms, ISPs, and other hosts of third-party content. Courts distinguish between those who originated or actively published defamatory content and those who merely facilitate or distribute it. Over recent decades, a framework of common law principles and statutory provisions has evolved to shield intermediaries from liability in many cases, especially where they play a passive role or act promptly upon notice of defamatory material.
Common Law: Publication and Notice
At common law, a defendant must be a publisher of the defamatory statement to be liable. Courts have drawn a line between primary publishers, meaning originators or editors who bear strict liability for what they publish, and secondary publishers, i.e. subordinate distributors who may have an innocent dissemination defence. Crucially, a purely passive intermediary, one with no knowing involvement in the content, may not be a publisher at all. In Bunt v Tilley [2006] EWHC 407 (QB), three ISP defendants who merely provided internet connectivity or email accounts were found not liable; the court held that an ISP which acts only as a conduit for others' postings is not a publisher absent proof of knowing participation in the publication. So long as intermediaries have no control over or knowledge of the content, they are akin to postal services or telephone companies transmitting third-party communications. In other words, to be liable for a defamatory publication a defendant must be knowingly involved in the process of publication, i.e. have some active role or awareness.
However, once an intermediary is made aware of a defamatory statement on its platform, the position changes. English courts apply the principle from Byrne v Deane [1937] 1 KB 818, a pre-internet case: if a host, upon receiving notice of a libellous posting, fails to remove it within a reasonable time, it may be inferred that the host has adopted or acquiesced in the continued publication, effectively becoming a publisher from that point forward. The Court of Appeal in Tamiz v Google Inc [2013] EWCA Civ 68 likened Google's Blogger service to a notice board under Google's control. Before notice of a defamatory comment, Google was not considered a publisher; after notice, if Google unreasonably delayed removal, it could be treated as having associated itself with the defamatory content, and thus as a publisher of it. The court in Tamiz noted that an intermediary should be allowed a reasonable time to act after notification, but if weeks pass without action, an inference may arise that the platform consents to the publication. In Tamiz, Google removed the offending comments about five weeks after being notified; the Court of Appeal held it was arguable that this delay made Google liable from some point post-notice, defeating its defences, although the claim was ultimately struck out as an abuse of process because the scope of publication and the resulting harm were minimal.
Other cases reinforce these principles. Metropolitan International Schools Ltd v Designtechnica Corp [2009] EWHC 1765 (QB) held that Google's automated search results were not publication by Google in any meaningful sense, given the lack of human input, underlining that purely automated or technical functions do not trigger liability. Conversely, Davison v Habeeb [2011] EWHC 3031 (QB) acknowledged that once Google, as owner of Blogger, had been notified of a defamatory blog post, it was of cardinal importance to act; any significant delay could make Google a publisher by acquiescence. The common law has thus developed a notice-based regime: an intermediary is usually safe from liability until it knows of the defamation; after gaining knowledge, it must promptly remove or disable the content to avoid being treated as a publisher. This stands in contrast to jurisdictions like the United States, where statute (notably Section 230 of the Communications Decency Act) confers near-total immunity on platforms for third-party content; under English law, intermediaries are expected to act once put on notice of a defamatory allegation.
Statutory Defences for Intermediaries
Section 1 Defamation Act 1996 (“Innocent Dissemination”)
This provision codifies the defence of innocent dissemination for secondary publishers. To invoke Section 1, a defendant must show all three of the following conditions are met:
- Not the author, editor or publisher: The defendant was not the author, editor, or commercial publisher of the statement; these terms are defined in the Act. In practice, this means the defendant’s role was limited to processing, distributing, or providing a platform for the content. For example, an ISP, web host, or news vendor could qualify, whereas the originator of the content or someone exercising editorial control would not.
- Took reasonable care: The defendant took reasonable care in relation to the publication. In practice this means there was no negligence on the intermediary's part, for example because it maintained and applied appropriate standards or policies to prevent defamatory publications, although this requirement can sit uneasily with the idea of a completely passive conduit.
- No knowledge or reason to believe: The defendant did not know and had no reason to believe that their actions contributed to the publication of a defamatory statement. Essentially, at the time of dissemination, they were genuinely unaware of the libel. If they had notice of the defamatory nature, or circumstances pointing to it, and continued to publish, the defence is lost.
If these conditions are satisfied, the defendant will not be held liable in defamation. Section 1 thus protects bookstores, libraries, telecom carriers, and online intermediaries who merely facilitate publication without awareness of wrongdoing. English courts have found that this statutory defence and the common law innocent dissemination doctrine operate in parallel: the statute was intended to broaden and clarify the defence, not abolish the common law. In Tamiz, Google initially satisfied Section 1 for the period before it was notified of the defamatory comments, since it was not the author/editor, had a plausible system of care, and lacked knowledge; but the Court of Appeal held that, from a certain point after notification, Google knew or had reason to believe that it was facilitating a defamatory publication, thereby disqualifying it from the Section 1(1) defence for the subsequent period. In practice, once an intermediary is put on notice and fails to remove the content within a reasonable time, it will likely be unable to prove the third requirement of Section 1(1).
Regulation 19, Electronic Commerce (EC Directive) Regulations 2002
Regulation 19 provides a hosting safe harbour that is broader in scope than defamation law alone, covering all forms of illegal content. Under Reg 19, an online service provider that merely hosts information (content stored at the direction of a user) is not liable for damages arising from that content unless the provider:
- had actual knowledge of the unlawful nature of the information, or facts from which its unlawfulness was apparent, and
- upon obtaining such knowledge, failed to act expeditiously to remove or disable access to it.
A host will be immune if it neither knows of the illegality nor turns a blind eye to it, and it promptly takes down offending material once it genuinely becomes aware of it. This EU-derived provision, which remains part of UK law, complements the defamation-specific defences. A significant aspect is that Reg 19's protection is not limited to secondary publishers; even a party deemed a publisher at common law can invoke the safe harbour, so long as the lack of actual knowledge is proven.
In practice, however, a finding that a platform actively participated in or knowingly continued a publication often implies that the platform did have knowledge, thereby defeating Reg 19 as well. The threshold of actual knowledge under Reg 19 is fairly high: it is not enough that someone alleges defamation; the illegality must be clear or established. English courts have held that an intermediary who receives a complaint disputing the truth of a statement may not yet have actual knowledge that the content is unlawful, since the question of truth may be unresolved. For example, in Davison v Habeeb the High Court found that although Google (Blogger) was put on notice of alleged defamation, the complaint was contested by the blog's author. Faced with these conflicting assertions, Google could not tell on its own whether the post was truly defamatory. The judge concluded the claimant had no realistic prospect of proving Google had actual knowledge of illegality in such circumstances. Google's forwarding of the complaint to the author, and its stance of removing content only upon a court order, meant it stayed within the safe harbour, as it had not been decisively alerted to unlawful content. Importantly, Reg 19 applies across all civil causes of action (e.g. defamation, privacy, IP infringement), providing a broad immunity for hosts, whereas Section 1 of the 1996 Act is confined to defamation and to parties not responsible for content creation. If a defamation claim against an intermediary proceeds, both Section 1 and Reg 19 will often be invoked in tandem; in practice they rise or fall together based on whether the intermediary had the requisite knowledge and whether it acted swiftly after notice.
Section 5 Defamation Act 2013 (Website Operator’s Defence)
Section 5 of the 2013 Act introduced a tailored defence specifically for operators of websites hosting user-generated content. The aim was to encourage responsible behaviour by websites and quick removal of defamatory material, while giving claimants a route to seek redress against the true author. Section 5 provides that a website operator has a defence to a defamation claim over a statement posted on the site if it can show that it was not the operator who posted the statement. The defence therefore protects only platforms hosting third-party content, not a site operator who is itself the author. However, the defence is defeated if the claimant proves all of the conditions in Section 5(3) (supplemented by the Defamation (Operators of Websites) Regulations 2013):
- Identifiability of author: It was not possible for the claimant to identify the person who actually posted the statement. If the original poster is readily identifiable and reachable, the law expects the claimant to sue that person instead of the website. Indeed, Section 5(4) provides that a claimant can "identify" the poster only if the claimant has sufficient information to bring proceedings against that person.
- Notice of complaint: The claimant gave the operator a notice of complaint about the defamatory content, containing the required information (e.g. the URL, the statements complained of, and why they are defamatory).
- Failure to respond properly: The operator failed to respond to the notice of complaint in accordance with the prescribed procedure set out in the 2013 Regulations. The Regulations outline a process whereby, upon receiving a valid notice, the operator must promptly forward the complaint to the poster (anonymised if necessary) where contact details are known or can be obtained, and then either remove the content or facilitate an exchange of contact information if the poster consents. The operator must act within tight deadlines, broadly 48 hours for each of its steps, with the poster given five days to respond; depending on the poster's response, the operator must either take down the material or, if the poster objects to removal and is willing to be identified, furnish the claimant with the poster's identity. If the operator does all that the law requires, for example by removing the post within the timeline or providing the poster's identity with consent, it keeps the Section 5 defence. If the operator ignores the notice or does not follow the required steps, it loses the Section 5 protection.
Notably, moderation of comments does not disqualify an operator from this defence: Section 5(12) specifies that the defence is not defeated by reason only of the fact that the operator moderates the statements posted by others. Section 5(11), by contrast, provides that the defence is defeated if the claimant shows the operator acted with malice in relation to the posting, for instance if the operator colluded with the author to post the defamatory material or deliberately refused to remove it out of spite. Good faith moderation is protected.
In practice, Section 5 and its Regulations create a notice and takedown/identify procedure. If the poster of the defamatory content is anonymous or unreachable, the law gives the claimant a mechanism to require the website to assist in identifying the poster or remove the content. If the website complies with this mechanism, it is shielded from liability; if it fails to comply, the shield falls and the claimant can proceed against the site.
This defence was new in 2013 and has not been heavily litigated, largely because most platforms comply with valid notices to avoid liability, but it stands as an incentive for websites to address defamation complaints promptly. For example, if an anonymous defamatory comment is posted, a claimant's Section 5 notice compels the operator either to reveal the commenter, if possible, or to take the comment down; otherwise, the operator faces potential liability. Importantly, Section 5 applies only in England and Wales and only to posts first published after the provision came into force on 1 January 2014.
Section 10 Defamation Act 2013 (Restriction on Suing Secondary Publishers)
Section 10 is a procedural bar designed to funnel claimants towards the primary wrongdoer rather than peripheral parties. It provides that the court lacks jurisdiction to hear a defamation claim against a person who is not the author, editor, or commercial publisher of the statement unless the court is satisfied that it is not reasonably practicable for the claimant to pursue the author, editor, or publisher. In simpler terms, a claimant generally cannot sue an intermediary, such as a website host, forum operator, or other secondary party, if the primary publisher, meaning the person who actually made the statement or an editor controlling its content, can be identified and sued instead. The statute reinforces the principle that liability for defamation should, where possible, be pinned on the originator of the defamatory remark, not a mere platform or facilitator.
Thus, if a defamatory post is published by an individual user, a claimant should sue that individual. Only if that route is effectively closed, e.g. the author is anonymous and cannot be identified, or perhaps is outside the jurisdiction, might the court allow a claim to proceed against the platform. Section 10 has had a significant impact: courts will routinely strike out or set aside claims against intermediaries if claimants have not demonstrated that suing the original speaker is impracticable.
A recent illustration is Wei & Ors v Long & Ors [2025] EWHC 158 (KB). The claimants sued, among others, a US-based domain name registrar which was not hosting the content but merely provided the domain name service for a website containing defamatory material. The High Court held that the registrar was not a publisher at common law, given its purely technical role in the internet infrastructure, and was not an author, editor, or publisher for the purposes of the 1996 Act's definitions. Moreover, the person who wrote and posted the defamatory statements, the first defendant, had already been identified and even had default judgment entered against them. Under Section 10, therefore, it was reasonably practicable for the claimants to pursue the actual author, and the court refused to entertain the claim against the intermediary: Mrs Justice Hill set aside service on the US registrar, applying Section 10 to bar the action against this secondary party. This outcome underlines that, as of 2025, English courts are strict in requiring claimants to go after primary publishers. Only in exceptional cases, for example if the author is unidentifiable or immune, would an intermediary remain in the frame. In practice, if an anonymous poster cannot be found, claimants might use tools like a Norwich Pharmacal order (a disclosure order) to compel a platform to provide identifying information. If that fails, and the defamed person truly has no recourse against the author, an action against the platform could be allowed, and the platform's other defences would then be tested. But Section 10 ensures such scenarios are the exception rather than the norm.
Section 10 is a jurisdictional filter: it stops a claim against a secondary defendant at the outset unless the court is satisfied that pursuing the primary publisher is not reasonably practicable. It does not provide a defence to be pleaded at trial; rather, it prevents the claim from proceeding at all. Together, Section 10 and Section 5 of the 2013 Act reflect a policy decision: intermediaries should generally not be targets of libel claims except as a last resort, and even then they have avenues to avoid liability by assisting the claimant in achieving a remedy, usually removal of the content or identification of the true culprit.
Human Rights Context and European Perspectives
While the above rules form the core of domestic law in England and Wales, the broader context includes European human rights law, which influences how courts balance reputational rights and freedom of expression online. The UK is a party to the European Convention on Human Rights (ECHR), and Article 10 (freedom of expression) and Article 8 (right to private life, including reputation) often pull in opposite directions in defamation cases. Two notable decisions of the European Court of Human Rights (ECtHR) have addressed intermediary liability:
Delfi AS v Estonia (ECtHR Grand Chamber, 2015)
The ECtHR upheld the liability of a large news website for grossly defamatory comments left by anonymous users, even though the site (Delfi) had a notice and takedown system and did remove the comments upon request, weeks after publication. Delfi was a professionally run, commercial news portal that published an article which attracted a torrent of vicious comments, including threats, against a particular individual. The domestic Estonian courts ordered Delfi to pay approximately €320 in damages for failing to prevent or promptly remove the obviously unlawful comments. The Strasbourg court found no violation of Article 10 in holding Delfi liable in those specific circumstances.
Key factors influencing the decision were:
- The extreme nature of the comments: They were not just unproven allegations but clearly unlawful hate speech and defamation.
- Delfi’s role and size as a platform: It was a major commercial news site that facilitated and financially benefited from user engagement, making it more than a passive intermediary.
- Anonymity of the commenters: The victims had no realistic means to sue the original authors, leaving the host as the only reachable entity.
- The sanction imposed: The damages award was small (approximately €320), mitigating any chilling effect on speech.
The ECtHR stressed that in such cases, requiring a large platform to take proactive steps or rapid action was justified to protect victims' rights. Importantly, Delfi does not compel UK courts to hold intermediaries liable, but it illustrates that where content is patently illegal, e.g. clear threats or hate speech, and a platform fails to act, liability can be compatible with freedom of expression. English courts, while not bound to follow ECtHR decisions in every detail, are mindful of the Article 10/Article 8 balance. Delfi's scenario was extreme, and the ECtHR itself later made clear that the judgment does not necessarily apply to smaller platforms or to cases of mere negligence. The UK has not altered its domestic law to impose a general monitoring duty on intermediaries for defamation; the approach remains notice-based. However, Delfi signals that under human rights principles, doing nothing in the face of clearly unlawful content can, in rare cases, justify holding an intermediary liable.
Pihl v Sweden (ECtHR, 2017)
This case presented a sharp contrast to Delfi. A small non-profit association in Sweden ran a blog on which an anonymous comment defamed the claimant. The blog operators had stated that they did not pre-moderate comments and urged users to be lawful. Upon being notified by the claimant, the operators removed the offending comment the next day and even posted an apology and correction shortly thereafter. The Swedish courts declined to hold the association liable for the brief appearance of the comment. The claimant then argued that this refusal breached his Article 8 right to reputation. The ECtHR disagreed and found no violation of Article 8, approving the national courts' decision not to impose liability on the intermediary.
The Court made several points:
- The blog was a small, volunteer-run platform with a limited audience.
- The defamatory comment was online for only about nine days and was taken down as soon as the hosts were aware.
- The hosts had put up clear disclaimers and guidelines indicating they did not endorse or check comments.
- They acted swiftly and responsibly once notified, including posting an apology.
In such circumstances, imposing liability on a non-profit intermediary was deemed disproportionate: it would likely chill open internet forums and discussion if hosts had to actively monitor everything. For England and Wales, this aligns neatly with the existing notice and takedown regime: an intermediary that promptly removes defamatory material on notice will rarely face liability, and indeed claimants in such scenarios may find it difficult to prove serious harm to reputation, as required by Section 1 of the Defamation Act 2013. The practical lesson is that the law encourages intermediaries to be responsive to complaints, and if they are, both domestic law and human rights law tend to shield them.
Conclusion
Mere conduits and passive hosts, like ISPs or automated services, are not considered publishers and are not liable for unknowingly facilitating defamatory content. The moment an intermediary obtains knowledge of a defamatory publication, however, it must not sit idly by. Failing to remove or block the content within a reasonable time can result in the intermediary being treated as a publisher from that point, with liability exposure unless a defence applies.
Multiple statutory defences offer protection. Section 1 of the Defamation Act 1996 gives a defence to secondary publishers who took reasonable care and had no knowledge of the defamation. Regulation 19 of the E-Commerce Regulations 2002 shields hosts from damages for unlawful content as long as they lack actual knowledge and act quickly when notified. These two defences often overlap in online defamation scenarios, encapsulating the "innocent until notified" principle. Section 5 of the Defamation Act 2013 further incentivises website operators to cooperate: a site that did not post the content can avoid liability by following the notice and takedown (or identification) procedure in the Regulations. And Section 10 of the 2013 Act serves as a gatekeeping rule, preventing suits against intermediaries where the primary author can be pursued.
The cumulative effect is that claimants are generally directed to the original speaker, and intermediaries serve more as partners in resolving online defamation, by taking down content or helping identify wrongdoers, rather than as primary litigants. Only if a platform refuses to comply or the primary author is beyond reach will an intermediary face serious risk of liability, and even then, the intermediary can still argue it lacked knowledge or otherwise met the statutory criteria for immunity.
Ready to defend your reputation? Learn more about our Defamation Services or explore further reading below.