Thursday, November 22, 2007

Automated Killers and the Computing Profession

Published by Noel Sharkey of the University of Sheffield

When will we realize that our artificial-intelligence and autonomous-robotics research projects have been harnessed to manufacture killing machines? This is not Terminator-style science fiction but grim reality: South Korea and Israel have both deployed armed robot border guards, while other nations—including China, India, Russia, Singapore, and the UK—increasingly use military robots. Currently, the biggest player, the US, has robots playing an integral part in its Future Combat Systems project, with spending estimated to exceed $230 billion. The US military has massive and realistic plans to develop unmanned vehicles that can strike from the air, under the sea, and on land. The US Congress set a goal in 2001 for one-third of US operational ground combat vehicles to be unmanned by 2015. More than 4,000 robots presently serve in Iraq, with others deployed in Afghanistan. The US military will spend $1.7 billion on more ground-based robots over the next five years, several of which will be armed and dangerous.
[G.K. Comment: Computer ethics is an extremely complex issue that is going to play a major role in the months and years to come, as more and more robots are deployed on the battlefield. Autonomous systems are always going to fail under specific circumstances, regardless of how intelligent they become. But who is responsible if a robot unfairly causes a casualty? This article raises many valid questions that all computer scientists should be concerned about.]
