Giving serious thought to the Laws of Robotics: Asimov’s fiction transforming into reality?

On reading an article published on 15 November in the Independent, titled “Robowar: The next generation of warfare revealed – a general’s dream, but are they also humanity’s nightmare?” by Cahal Milmo, which mentions the ongoing development of fully autonomous drones, I could not help but wonder what implications such technology would have for the international law of war.

Drone technology, already used by the US and Israeli intelligence agencies against terrorist suspects, raises many issues in the application of the Geneva Conventions. By law, a war is ‘just’ only when international law permits the use of force under the exceptions in the UN Charter, such as self-defence under Article 51. However, the jurisprudential understanding of justified killing in wartime originated in the aftermath of World War II, when war crimes were committed on battlefields by men engaged in the war. The application of legal language such as ‘combatant’ could thus logically be attributed to men in uniform, that is, soldiers.

But modern military practice and the use of political language, particularly after 9/11, make the attribution or categorization of such legal terms a significant problem; the legal language constructed in the Geneva Conventions seems to fall far behind. A ‘battlefield’ is no longer a red-coloured demarcated zone on a map, soldiers on foot are no longer required to kill an enemy, and armed conflict is no longer as simple as a war against a state or a civil war within a territory. Soldiers on foot are replaced by unmanned aircraft controlled from a base thousands of miles away, and the war is waged against non-state actors with no defined territory, making the whole globe a potential war zone. How, then, is a drone to be categorized under the existing legal language, and who could be held responsible for war crimes resulting from a drone strike? The chain of responsibility and accountability is further complicated by the authority of the intelligence agencies in these matters.

The development of fully autonomous drones would mean that no human element is involved at all, as the machine could be pre-programmed with complex algorithms to target and kill individuals. Such an advance would not only circumvent the concept of legal personality completely but would also complicate the chain of accountability to an even greater extent than the existing technology, for in a sense the robot would be not only the weapon but also the ‘combatant’. The possibility of a robot capable of that level of autonomous functioning was first envisaged in fiction by Isaac Asimov, in works such as ‘I, Robot’ and the ‘Foundation’ novels. Asimov’s fiction, however, had one very significant feature: the idea of robot ethics, created in response to the social, political and legal implications of the use of robots. Asimov’s three Laws of Robotics were an essential part of those ethics, revolving around the idea that a robot must not, by action or inaction, injure a human being.

One wonders whether Asimov’s robot ethics can have any place in tomorrow’s world, as the reality of his fiction begins to surface in new military technology. Even in Asimov’s fiction, however, a robot malfunction resulting in a death raises an interesting legal point: can a robot commit murder, under the legal definition of murder, when it is not a ‘person’? Seeing how present legal language is struggling to categorize modern military practices, the social, legal and ethical implications of a new wave of fully autonomous robots would undoubtedly be even more disconcerting.