
Deterring AI from committing crimes, an interview with Elina Nerantzi

If AI commits a crime, who is to blame? Perhaps no one. However, AI can be deterred by internalising criminal sanctions as costs. In an interview, EUI Law Researcher Elina Nerantzi discusses her paper ‘Hard AI Crime: The Deterrence Turn’, which shifts the AI crime discussion from blame to deterrence.

07 June 2024 | Research


Could you please tell us about your research topic and field of study at the EUI?

I work on criminal law and artificial intelligence and, specifically, I ask the question of who, if anybody, should be blamed when something goes wrong in the deployment of an AI system. For instance, what happens when a self-driving car causes an accident; when an AI trading agent manipulates the markets; when a shopping bot buys illegal drugs online? The intuitive answer is to say that it is the humans, or the companies behind an AI system, that should be held responsible. However, the problem with AI systems that act in autonomous and unforeseeable ways is that they can cause harm for which no human should be legitimately blamed in accordance with criminal law, and this emerging culpability gap is what I study.

What motivated you to co-author 'Hard AI Crime: The Deterrence Turn' (Oxford Journal of Legal Studies, May 2024) together with EUI Professor Giovanni Sartor?

In retrospect, I think that my co-operation with my supervisor, Professor Sartor, was fruitful because we come from different backgrounds. When I came to the EUI, I had a very deontological mindset. I had just finished my master’s at Oxford, where I focused on the philosophical foundations of criminal law. So, all I was asking in my research project was: who should be blamed for AI crimes; what does blame even mean; could machines be moral agents; could they understand the retributive meaning of punishment? Professor Sartor got me out of this dead end by pointing out to me the simple fact that there must be scholars in criminal law who do not care about culpability and retribution as much as I do, but who focus on a more pragmatic side of criminal law, like actual deterrence. So, I went back to the criminal law and economics school of thought, to Gary Becker and to Richard Posner, and I started realising that these authors describe a paradigm of criminal deterrence that is tailor-made, not for humans, but for machines. This idea excited us. It was fresh and new, and we were motivated to start writing it down.

Could you please explain what 'AI crimes' means?

We need to distinguish between crimes that are committed through the use of AI systems, for instance when someone uses an AI system to commit online fraud, and crimes that are materially carried out by the AI systems themselves. In the paper we deal with the second case. The example we use is that of a rational, goal-based, utility-maximising AI trading agent that is also capable of intentional action. This agent is employed in the digital markets, and its users have given it the goal of profit maximisation; they have told the agent ‘make us more money’. We speak of hard AI crime when this agent is rational and autonomous enough to intentionally decide to manipulate the markets because it sees this as the optimal way to achieve its goals and make more money.

What are the main principles and functions behind the proposed 'AI deterrence paradigm'? How might this approach to AI crimes affect the current legal frameworks?

In our AI deterrence paradigm, we do not come up with new principles and functions of criminal law. We take the principles and functions of criminal law as they were interpreted in the 1960s by scholars working in criminal law and economics, and we argue that this theoretical framework works not for humans, but for machines. It does not work for the mythical homo economicus but for certain utility-maximising AI systems that we call the machina economica.

Basically, the economists viewed criminal law in a way that is incompatible with criminal law’s ethos and normative foundations. For them, crime is not a moral fault but an inefficient act to be deterred and the potential offender is just another rational agent who will be deterred from crime if the cost of crime (the sanctions) is higher than its benefit. All that criminal law has to do is to make potential offenders internalise criminal sanctions as costs.
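The economic condition described here can be sketched in a few lines of code. The function below is purely illustrative (the names, numbers, and detection probability are hypothetical, not taken from the paper): a perfectly rational, utility-maximising agent offends only when the benefit of the act exceeds the expected cost of the sanction.

```python
# Illustrative sketch of the Becker-style deterrence condition discussed
# in the interview. All values are hypothetical, for demonstration only.

def chooses_crime(benefit: float, sanction: float, p_detection: float) -> bool:
    """Return True if a purely utility-maximising agent would offend."""
    expected_cost = p_detection * sanction  # the sanction internalised as a cost
    return benefit > expected_cost

# With no internalised sanction, the harmful act looks profitable...
print(chooses_crime(benefit=100.0, sanction=0.0, p_detection=1.0))    # True
# ...but once the expected sanction exceeds the gain, the agent is deterred.
print(chooses_crime(benefit=100.0, sanction=500.0, p_detection=0.5))  # False
```

On this view, "all that criminal law has to do" is set the sanction and the probability of detection high enough that the comparison always comes out against the crime.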

This economic paradigm of deterrence has been criticised, and it has not influenced the way we talk about criminal deterrence of humans, because of two main problems: it does not treat human beings with the respect they deserve as moral agents, and it only works for an idealised version of a perfectly rational human being that does not exist in real life. What does exist in real life, though, are AI systems that, contrary to human beings, are not moral agents and are built to be perfectly rational. So, the lesson for the legal system is that even though we cannot meaningfully blame harmful AI systems, we can still economically deter them.

What future developments do you foresee in the field of AI crime deterrence?

Our AI deterrence paradigm is separate from criminal law as we know it. It is basically a legal design idea, inspired by the economic theory of crime, and it aims to deter certain AI systems with the characteristics of a machina economica. Still, I could see this deterrence formula having a concrete doctrinal consequence in criminal law, as it would help to concretise the duty of care of the developers and deployers of AI systems. Simply put, if you know that you have an AI system with the features of a machina economica (that is, an AI system on which you can install our deterrence formula as a legal compliance mechanism), and you fail to do so, that failure could be proof of criminal negligence, and it could reinforce human deterrence in the field of AI crime.


Read
'Hard AI Crime: The Deterrence Turn', Oxford Journal of Legal Studies, May 2024, by Elina Nerantzi and Giovanni Sartor.

Elina Nerantzi is a Researcher at the
EUI Department of Law, a dynamic environment for researchers from over 35 countries, focusing on transnational law, including public international law, European law, and comparative law.

