OUR FUTURE
But what about more advanced AI-enabled weapons? The Campaign to Stop Killer Robots lists nine key problems with lethal autonomous weapons, focusing on the lack of accountability and the inherent dehumanisation of killing that comes with it.
While this criticism is valid, a full ban on lethal autonomous weapons is unrealistic for two reasons. First, much like mines, Pandora's box has already been opened. Second, the lines between autonomous weapons, lethal autonomous weapons and killer robots are so blurred that it is difficult to distinguish between them.
Military leaders would always be able to find a loophole in the wording of a ban and sneak killer robots into service as defensive autonomous weapons. They might even do so unknowingly.
We will almost certainly see more AI-enabled weapons in the future. But this doesn’t mean we have to look the other way. More specific and nuanced prohibitions would help keep our politicians, data scientists and engineers accountable.
For example, we could ban black box AI (systems where the user has no information about the algorithm beyond its inputs and outputs), as well as unreliable AI (systems that have been poorly tested, such as in the military blockade example mentioned previously).
And you don’t have to be an expert in AI to have a view on lethal autonomous weapons. Stay aware of new military AI developments. When you read or hear about AI being used in combat, ask yourself: Is it justified? Is it preserving civilian life? If not, engage with the communities that are working to control these systems. Together, we stand a chance at preventing AI from doing more harm than good.
Jonathan Erskine is a PhD student in Interactive AI at the University of Bristol, and Miranda Mowbray is a Lecturer in Interactive AI at the same university. This commentary first appeared in The Conversation.