“Actus non facit reum nisi mens sit rea”
– the act is not culpable unless the mind is guilty (*)
In his annual report to the UN Human Rights Council, Christof Heyns, the Special Rapporteur on extrajudicial, summary or arbitrary executions, urged all States to declare and implement national moratoria on the production, assembly, transfer, acquisition, deployment and use of lethal autonomous robots (LARs), the so-called “killer robots”, until the ethical issues could be worked out.
According to Mr. Heyns, LARs differ from armed drones and other remotely controlled weapons systems because they have the ability to decide when to attack a target:
While drones still have a ‘human in the loop’ who takes the decision to use lethal force, LARs have on-board computers that decide who should be targeted.
He argues that the taking of any human life deserves, at a minimum, some deliberation:
War without reflection is mechanical slaughter.
He also argues that the deployment of LARs may be unacceptable because no adequate system of legal accountability can be devised for the actions of machines: robots cannot be prosecuted for war crimes.
While all this sounds well-intentioned, I wonder what we are actually talking about and what difference it makes in practical terms. When you turn to face your final destiny in front of a superb machine with a gun pointed at your head, will you find any consolation in the thought that IT is killing you consciously?
(*) Under the traditional common law, a person’s guilt or innocence depended on whether he had committed the criminal act (actus reus) and whether he had intended to commit it (mens rea): e.g. murder is the unlawful killing of a human being with malice aforethought.
Featured Image: Campaign to Stop Killer Robots