As the world’s militaries rush to win the AI race, organizations are raising their voices in opposition.
By Eric James Beyer, Mar 16, 2021
While on patrol in the mountains of Afghanistan in 2004, former US Army Ranger Paul Scharre spotted a girl of about five walking curiously close to his sniper team amid a small herd of goats.
It quickly became clear that she was reporting their position to the Taliban in a nearby village. As Scharre points out in Army of None: Autonomous Weapons and the Future of War, her behavior was legally that of an enemy combatant, and killing her would have been within the scope of the laws of war. Morally, it was wrong, and Scharre and his team knew it. The girl moved on, and their mission continued.
The questions he would later ask himself about that mission were unsettling ones: What if an autonomous weapon had been put in that situation? Could artificial intelligence (AI) ever distinguish the legal from the moral?
In recent years, the debate over how to ethically manage lethal autonomous weapons systems (LAWS) has come to the fore. As militaries the world over march toward an AI future, how will societies program machines with the insight needed to make complex, life-or-death decisions in volatile circumstances? Is such a thing as ethical warfare even possible?
Scharre's story is a sobering example of what getting the military AI question wrong could look like, and that question is as important as it is urgent. The idea of deadly AI systems is not exactly new, but given the pace of technological advances, concerns that were theoretical a decade ago are rapidly becoming very real. Many now wonder whether society can scale its ethics quickly enough to keep fears about these technologies from becoming reality.
Read more at https://interestingengineering.com/ethics-of-robotic-warfare-and-autonomous-weapons