Topics
With this context at the forefront, we address the problem of defining ethical requirements for development and testing. Human control and responsibility are clearly required for the ethical use of autonomous weapon systems (AWS), especially lethal autonomous weapon systems (LAWS), but how should we understand these requirements? How does the diversity of human behavior affect them? What do roboticists need to establish about the behavior of their systems in order to make human control and responsibility possible in real military contexts?
Panelists
Jonathan Moreno, David & Lyn Silfen University Professor, Professor of Medical Ethics & Health Policy and of History & Sociology of Science, Penn Integrates Knowledge (PIK) Professor, Professor of Philosophy, University of Pennsylvania. “The Neuroethics of LAWS from Neural Networks to Robotics and Back Again”
Many of the ethical issues in LAWS and robotics turn on topics commonly treated in the ethics of neurotechnology, such as our understanding of the brain and nervous system, prospects for the enhancement of their capacities, and the related ethical, legal, and social issues (ELSI). Over the past few years, a global discourse on the neuroethics of LAWS has emerged with a common set of themes. In this talk I’ll review some of those ELSI issues and the global patterns in that discourse.
Lisa Miracchi Titus, Associate Professor of Philosophy and GRASP Affiliate Faculty, University of Pennsylvania. “Permissible Uncertainty and Meaningful Human Control”
I propose that we shift debates on LAWS away from questions about whether we should let robots make decisions or perform ethically assessable actions, and instead work to articulate what the human agents involved, at different stages of command, need to know in order to make responsible and ethical decisions about the deployment of such systems. Because human agents rarely have total relevant knowledge of the systems they use, I suggest we frame debates around what uncertainty is permissible for a user occupying a given role in a given physical and socio-political context. This framing lets us treat the ethical issues around LAWS as continuous with those for other weapon systems, makes comparisons between them easier, and helps robotics researchers articulate concrete design, explainability, and testing criteria that are sensitive to the complexity of the ethical issues at stake.
Ryan Gariepy, CTO, Clearpath Robotics/OTTO Motors, Board of Directors, Open Source Robotics Foundation, and Co-Chair, Canadian Robotics Council. “Why Is No One Banning Killer Robots?”
Strong cases against lethal autonomous weapon systems (LAWS) have been made from a multitude of perspectives, all of which are generally countered by appeals to the broader concept of national security. Despite years of discussion, we seem no closer to consensus. It is more important than ever to move beyond abstract debate and begin talking specifically about existing weapon systems, recent technological advances, and, above all, the system characteristics that are the causes of concern.