Technological advances in the field of automation have introduced the possibility of substituting humans in many activities – including killing other humans – which gives rise to regulatory challenges, especially concerning the attribution of legal responsibility. The study of how new technologies can disrupt social systems such as law, and of the subsequent adaptive responses, is an integral part of STS. The objective of the present paper is to identify, through a comparison of the legal frameworks governing driverless cars and automated weapons, the challenges of regulating situations in which the loss of human life is directly caused by a non-human agent.
The notion of automated weapons capable of selecting and engaging targets without human input – “killer robots” – is highly controversial, with civil society organizations calling for a ban that would permit automation only up to the point where meaningful human control is retained. However, this key concept is problematic, nebulous, and possibly without substance in the face of the speed of technological development. When it comes to driverless cars, even though the potential to kill is still present and hotly debated, the loss of human control is considered acceptable in many scenarios, allowing a more nuanced discussion of legal responsibility with and without the human driver.
I intend to conduct this comparison through document analysis of domestic and international legislation, regulations, and administrative documents, as well as a bibliographic review, to identify particularities and differences in the conceptualization of ‘death by algorithm’, in order to contribute to the discussion on the regulation of autonomous weapons.
Period: 20 May 2021