There was cause for celebration on the day we sat down with Michael Fitzgerald, EUI Department of Law researcher, to discuss his work. For one, it was the first time the sun had peeked through the clouds over Florence after weeks of seemingly endless rain; but, much more importantly, Michael’s new article, 'Not hollowed by a Delphic frenzy: European intermediary liability from the perspective of a bad man,' had been published in the Maastricht Journal of European and Comparative Law just hours before. In this article – his first sole-authored one – Michael weaves together a nuanced legal analysis of EU law on digital platforms, its implementation by Member State courts, and the implications of the interaction between the EU courts and the European Court of Human Rights in Strasbourg.
“My argument comes out of a tradition called legal realism; it is a legal movement that is primarily interested – at least in terms of emphasis – not in the formal system of rules as they are written, but rather in what the rule does in the world when it is put into practice,” Michael explains. “So, I try to examine what is happening in the world because of the rule and how it is put into practice, rather than what the rule says.”
The rule that Michael takes particular interest in is the EU’s ‘safe harbour’ for the liability of online intermediary services, today contained in the Digital Services Act (DSA), the EU’s landmark legislation of 2022. In simple terms, the safe harbour is a rule that shields digital platforms – from social media platforms like Facebook or X to online newspapers and video-sharing sites like YouTube – from liability for illegal content posted by their users.
The safe harbour provides that, where users post illegal content on these platforms, the platforms cannot be held liable unless it is proven that they were aware of the content and had the chance both to verify its illegality and to remove it expeditiously. Only once that burden of proof has been met is the protection against liability withdrawn.
“Well, at least, that is what was assumed,” Michael explains. “But it seems that, in a lot of these cases, EU Member States have, to a certain extent, been applying their own laws or, at least, interpreting the EU law very loosely.” As an example, he points to a variety of cases concerning hate speech and incitement to violence in which Member State courts have held platforms – or intermediaries – liable without knowledge, even though this is prohibited under EU law. Michael is not the first to notice this. In fact, his article is a critical response to a 2024 publication by legal scholar Martin Husovec and a team of co-authors from the London School of Economics, which identifies the same issue in a series of high-profile Grand Chamber rulings of the European Court of Human Rights (ECtHR) in Strasbourg, such as Delfi v. Estonia (2015) and Sanchez v. France (2023).
In both the Delfi and Sanchez cases, Member State courts found the digital platforms involved liable for content posted on them. In the first case, the platform was a newspaper’s comments section filled with antisemitic hate speech from some of its readers; in the second, the wall of a Facebook page managed by politician Julien Sanchez, where Islamophobic content was posted. When both cases were brought before the ECtHR, the court upheld the Estonian and French rulings, even though the intermediaries fulfilled the necessary requirements under EU law and thus arguably should have been protected by the EU safe harbour rule.
Husovec argues that these recurrent rulings are evidence that the Strasbourg Court ‘got it wrong,’ potentially placing it ‘on a collision course’ with the DSA. Michael, conversely, invites the reader to consider whether the Court knew what it was doing and was being deliberately ‘Delphic’ – ambiguous and obscure, like the ancient oracle. His analysis breaks new ground by connecting these dots, treating the rulings as evidence of a systemic pattern of EU law fragmentation, and by enquiring into the downstream consequences of that fragmentation. Scholars’ focus, he argues, should go beyond the top level of EU legislation and high court doctrine. “The high-level legislators and courts are in Brussels, Strasbourg, and the like, and a lot of the scholarship is focused on that top level – but we need to go below,” he emphasises. “We need to look at the Member State application, and then the application of the principles by the platforms themselves, because that is what most directly affects society.”
Looking at how platforms respond to these rulings is key, Michael underlines, because on the internet today it is the private platform, not the State, that performs the role of frontline regulator. “When these platforms were emerging in the 90s, there was a realisation that we were dealing with systems of such complexity that the State itself, the police, and regulators were not going to be involved in the day-to-day suppression of illegal behaviour on these platforms,” he explains. “So, States had to rely on and incentivise the platforms to police and regulate themselves by exposing them to a certain amount of risk, and that was done via intermediary liability laws.”
As such, “it does not matter if a legislator in Brussels says ‘no liability without knowledge’; if the platform is sophisticated enough to see that this is not what is happening in practice, it will follow the law as it is actually applied rather than the law as written, and presumably attempt to protect itself from liability.”
At this point, he suddenly shrugs, all intensity abandoned in a smile that captures the ease with which legal philosophers approach doubt. “That is what we assume, because very little is actually known about it. That is precisely the conclusion of my paper: that we should have been looking at the various actors downstream, particularly the Member State courts and the platform companies themselves, and asking how they perceive the law in order to understand its effects – and we have not been.”
“It is all hypothetical, and at this point I cannot know if I am right,” he continues. “But I do not think it is a coincidence that most well-established, reputable newspapers have tended to shift away from freely accessible comment sections since the Delfi era. They may have reached the conclusion that they were more trouble than they were worth.”
The stakes for societies and States could not be higher. “Because of the nature of the internet, intermediary liability is an umbrella that covers so many of the subject matters we are concerned about: hate speech, disinformation campaigns, online harassment, cyberbullying, human trafficking, recruitment for terrorist organisations, and more,” he enumerates, emphasising the urgency of regulation. “But what happens if you over-regulate and deter political speech on mainstream platforms, or speech that is merely controversial? Does this speech really disappear? Or does it move to the fringes, and even manifest offline?”
Michael does not answer his own question, happy to leave it hanging over us in the late afternoon light. Indeed, the last sentence of his article reads ‘we may need to confront the fact that we still know next to nothing.’ But his research points scholars and policymakers in the direction they must follow as they seek answers: beyond Brussels and the written words of the law, and toward the actions of those who implement it.
Michael Fitzgerald is a researcher at the EUI Department of Law whose research interests in internet governance include the role and responsibility of the EU, as a dominant global regulator, in manifesting political rights online. Before joining the academic world, Michael dedicated nearly a decade to the creative sector, examining the role of language, literature, and myth in the formation of identity and autobiography. His article 'Not hollowed by a Delphic frenzy: European intermediary liability from the perspective of a bad man' was published on 9 April 2025 and is also available on Cadmus, the EUI Research Repository.