You are about to read a very interesting debate, with implications for humanitarian, criminal and military law. Much as with self-driving cars, liability for autonomous drones is hotly debated, especially because of their advantages and widespread use.
Autonomous drones are aircraft without a human pilot aboard, also known as unmanned aerial vehicles (UAVs). Various actors could be held accountable in case something goes wrong.
In our debate, we narrowed it down to the two most interesting: the programmer, who decides what the drone does once put into use, and the engineer, who decides what the drone can do in any circumstance.
Stay tuned to see who wins!
Andrei Stoica
Stoica Andrei is a future graduate of law studies at Nicolae Titulescu University in Bucharest. He enjoys studying technology old and new, law, and foreign history and culture. He hopes to work and study in a more globalised world with new ways of doing legal work, and to play a part in helping technology integrate into everyone's day-to-day life as the future unfolds.
Oana Iulia Irimia
Oana is presently studying European and International Law at Nicolae Titulescu University in Bucharest. Passionate about ethics and the philosophy of law, she believes it is essential to combine theoretical aspects with applied law and to encourage new directions of research. She considers that a fresh perspective needs an interdisciplinary approach that positions law students and practitioners in key roles.
Opening Statement - Andrei
New weapons and new technologies have revolutionised warfare. The shifts brought by twentieth-century technology were accompanied by changes in the nature of war and the way it is seen at national and international levels. Drones are no exception, and after an analysis of their spread and the clear strategic possibilities they offer, one could certainly say that their use must be strictly assessed by specialists in international humanitarian law and human rights. For example, in 2004 only 41 states had this technology and its first prototypes, but by 2011 their number had almost doubled, and drones are used in 76 states today. Until now, only the USA, the United Kingdom and Israel are believed to have used armed drones, while the Republic of China and Iran are the only other countries with operationally deployed armed drones.
One particular issue derived from the specific nature of this kind of weapon is responsibility for the actions and consequences of the use of autonomous armed drones during armed conflicts. Given that autonomous drones are not controlled by any human being, responsibility is distributed between the engineer and the programmer, but the most significant part falls to the former. To demonstrate my position, I will invoke several arguments.
Firstly, in warfare, a truly autonomous system capable of adapting to changing circumstances, and thus possessing artificial intelligence, can be programmed to behave more ethically and cautiously on the battlefield than a human soldier. However, it has not been proven whether it is technically possible to respect the principles of distinction and proportionality at all times. Given these particular aspects, it is safe to assume that the most significant influence on the conduct of a drone is not the software, which changes rapidly with the circumstances and the need to take rapid decisions, but the original technical structure, which remains identical.
Secondly, I would like to emphasise the essential role of the engineer during the use of a particular drone. Engineers are the only ones competent to give a verdict when their product does not seem to work as expected and when it could make the wrong decisions. This is because they designed the structure of the drone and ran all the necessary simulations before implementing the software, so they are the only ones who know what flaws or errors could appear.
Thirdly, given that automated weapon systems function in a self-contained and independent manner once deployed, the only cause of any error in such a continuously evolving system is technical. Examples of such systems include automated sentry guns, sensor-fused munitions and certain anti-vehicle landmines. Although deployed by humans, such systems independently verify or detect a particular type of target and then attack. An automated sentry gun, for instance, may fire or hold fire following voice verification of a potential intruder based on a password. Their degree of precision is very high. However, some cases of automatic attacks on civilian targets have been reported. A possible explanation for this is the lack of control or the low quality of the sensors used to discriminate between combatants and the civilian population.
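The verification logic described above can be pictured as a simple decision rule. The sketch below is purely illustrative (the function, names and threshold are hypothetical, not drawn from any real weapon system); it shows how a password check and sensor confidence could gate the decision, and how a low-quality sensor reading is exactly where errors of discrimination creep in:

```python
# Illustrative sketch only: a toy model of an automated sentry gun's
# decision rule. All names and values are hypothetical.

def sentry_decision(voice_response: str, password: str,
                    sensor_confidence: float, threshold: float = 0.95) -> str:
    """Decide whether to hold fire, challenge again, or engage."""
    if voice_response == password:
        return "hold fire"          # intruder verified as friendly
    if sensor_confidence < threshold:
        return "challenge again"    # low-quality sensor data: do not engage
    return "engage"                 # failed verification, high sensor confidence

print(sentry_decision("open sesame", "open sesame", 0.99))  # hold fire
print(sentry_decision("unknown", "open sesame", 0.50))      # challenge again
```

The point of the sketch is the middle branch: if the sensors feeding `sensor_confidence` are poor, the system either stalls or, with a badly chosen threshold, engages the wrong target, which is precisely the technical flaw the argument attributes to the engineer.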
To sum up, it is important to remember that the technical structure is the only constant of a drone system in permanent change. It is of equal importance that the manufacturer is the only one who knows the possible shortcomings of a drone and has the ability to replace one weapon with another in case of uncertainty regarding the targets or the environment. Consequently, one must conclude that the role of the engineer is far more decisive than the role of the programmer in terms of responsibility for errors committed by a drone.
Opening Statement - Oana
Drone technology brings forth a series of questions that appear as fast as the technology evolves. Most of them can be answered, such as what a drone's specifications are, or how much a drone can do given its equipment.
A real question that cannot be answered at this time is: who is responsible for a drone's actions? In the military, commanders must respect the Yamashita standard, which dictates accountability for war crimes, both those committed on the ground and those committed from hundreds of kilometres away. Since military drones are flown by army pilots, this accountability applies to them as well: such drone pilots receive the same legal treatment, even if they merely fly a drone. But what about civilians who fly and build drones? How are they accountable for their creations?
Since by today’s standards we cannot hold artificial intelligence accountable, we only have the human factor as a liable party.
My first argument for the liability of programmers is that the programmer designed the drone and the program behind it to prevent civilian damage and, above all, to prevent innocent victims in armed conflict. This falls under the criminal negligence or product liability provisions that most legal systems include. While the latter might feel more apt, if a drone were forcibly used by a third party, it would prove almost impossible to try the programmer for damages, since he took all the necessary steps yet somebody else executed the unlawful action.
Retribution would most likely fall under criminal negligence: if the programmer did not carry out all the necessary research on the system, he would be prosecuted for negligence. This applies to legal persons as well, since corporations handle programming projects, and in such cases the legal person will be the one held accountable.
My second argument is that current technology operates under the banner of open-source technology, meaning anybody can develop applications, mobile and desktop devices, and operating systems. If, for example, a person developed an application that transforms a civilian drone designed for hobby photography into a dive-bomber or a spy camera, the developer, the user and the platform designer would all be held responsible, since all of them allowed such reckless technology to exist. While current legislation does not fully protect manufacturers, they can still protect themselves from these unlawful designers by imposing product censorship. If a person developed a dive-bomber application on iOS and published it on the App Store, for free, but under a different name and description, then Apple could protect itself; but if the application was developed and marketed exactly as the developer described it, then Apple is part of the problem. This underlines the need for distinct legislation for programmers and manufacturers.
To sum up, as it currently stands, programmers are liable to be prosecuted for criminal negligence or under product liability, since they have the legal, if not moral, obligation to ensure that their product meets the requested market criteria and does not blatantly violate basic human rights. As for the need for a stronger legal framework, future legal provisions must draw a distinction between who can become a software developer for drones and who can manufacture said gadgets, since in the current economy drone manufacturers are not the same as software developers. Even though they tend to work side by side, there will always be freelance software developers creating other types of applications, whether harmful or beneficial.
By now you have learned from our debate how autonomous drones work, how they are used and what can possibly go wrong. Most importantly, both sides showed you who should be liable if something does go wrong. Their arguments go far beyond the mere mechanics of classic responsibility. This is because, as stated above, the use of drones has various implications. Holding a person accountable for the use of such a device does not fit neatly into the mould of product liability, as Oana has shown. On the other hand, given the widespread implications and the changing circumstances in which a drone must function, Andrei believes that it is the engineer who can prevent damage, and that failing to do so can be a basis for liability.
In order to see how they respond to these issues, keep reading!
Rebuttal - Andrei
Even though autonomous drones are currently used for several purposes, such as domestic policing, commercial aerial surveillance, motion picture filmmaking, sports and disaster relief, their use in combat has not been made official by any state. Armed unmanned aerial vehicles were developed especially as a response to the vulnerability of remote-controlled drones to cyber-attacks.
Although drones are considered safer because they would reduce collateral damage, many incidents in Pakistan, Afghanistan and Yemen have demonstrated their inability to distinguish between combatants and non-combatants. They have killed children, participants at a wedding and dozens of other civilians in Pakistan (BBC News, 2013). In the case of autonomous armed drones, things could get much more complicated.
In response to my opponent's first argument, I would like to add that the programmer is not the first to be held responsible if the drone takes an unfortunate decision by failing to distinguish between combatants and non-combatants. This is because such a mechanism has artificial intelligence, so it was created to modify its patterns depending on the circumstances in the field. The only unchanging features will remain the structure and the capacity of the weapons. Moreover, when designing the drone, engineers should also keep in mind the characteristics of the area in order to calculate the risk of collateral damage and, in consequence, provide both lethal and non-lethal weapons to be used depending on the specific targets, minimising the risk of collateral damage.
Next, I am going to demonstrate that allowing drones to use both lethal and non-lethal weapons depending on the specific conditions is more useful. One can see this is true by looking at several examples from the best-known theatres where these drones were used. In many cases, drone pilots invoked the low quality of images to justify their inability to discriminate between a man holding a gun and one holding a shovel. Autonomous drones with sophisticated mechanisms, but without subjectivity or doubt, could target even more innocent civilians, or sick or wounded combatants. In cases where the probability of collateral victims is very high, a special programme should be used to calculate this risk and decide to use non-lethal weapons, striking a balance between utility and safety.
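The "special programme" proposed above amounts to a threshold rule over an estimated probability of collateral victims. The following sketch is a hypothetical illustration of that balance between utility and safety (the function name, threshold value and labels are mine, not taken from any actual system):

```python
# Hypothetical illustration of the proposed utility/safety balance:
# if the estimated probability of collateral victims is too high,
# the system defaults to non-lethal means. Values are illustrative only.

def select_weapon(p_collateral: float, risk_threshold: float = 0.1) -> str:
    """Return 'non-lethal' when collateral risk exceeds the threshold."""
    if not 0.0 <= p_collateral <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    return "non-lethal" if p_collateral > risk_threshold else "lethal"

print(select_weapon(0.45))  # non-lethal: crowded area, high estimated risk
print(select_weapon(0.02))  # lethal: isolated target, low estimated risk
```

Even in this toy form, the legal question reappears immediately: who sets `risk_threshold`, and who answers when the estimate of `p_collateral` is wrong?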
In response to my opponent's second argument, I must stress that product censorship will not solve the problems caused by independent programme designers because, as we can observe, more and more technicians and programmers give out relevant data, intentionally or unintentionally, which is then hunted by terrorists and other groups. In these cases, both technicians and programmers are held guilty of negligence, but especially technicians, since they offered the technical possibility of re-using the same mechanism for two different missions. In other words, the complexity of an autonomous drone mechanism is very hard to reproduce, and to make sure it is not used by a third party, a self-destruction mechanism could be integrated into it. This would be the job of the manufacturer. The application could theoretically be bought from the internet, but given that it must be compatible with an intelligent computer, it is easy to deduce that anyone could buy such a programme, but not anyone could build such a mechanism.
In conclusion, the use of autonomous drones by unauthorised persons is most likely the result of the negligence of the engineers rather than the programmers. Engineers, as I already said, are the only ones capable of taking the decision to authorise the use of a drone, because they hold most of the information needed. For this reason, they should take all possible measures to adapt their design to the specific circumstances in which the drone will be used.
Rebuttal - Oana
While autonomous drones are not fully integrated into any current arsenal, they do exist and have been under testing for quite a while. For example, the United States of America uses the new X-47B drone, which is capable of autonomous surveillance missions and could be modified to carry weapons in the future, according to the media (Huffington Post, 2013; Popsci, 2011), since this new drone is still considered a work in progress.
In response to the point about drone killings in flashpoints such as Pakistan or Afghanistan, one has to recall that those drones were not autonomous: they had a human pilot taking the decisions and a hierarchy giving the orders, even though the current drone programme of the United States is run by the Central Intelligence Agency. This means that the drones used there were simple tools, not higher forms of intelligence taking such decisions by themselves.
As a reply to the idea that a drone should carry different types of arsenal, the current legal framework in most states clearly requires that civilian drones be armed as little as possible and that any armament be non-lethal only. This is the case in Canada, which uses 'persuasive' shock or gas technology in law enforcement. As such, current human-operated drones have the option of using non-lethal weapons. Also, current civilian usage prohibits lethally armed drones, even for self-defence.
The underlying issue is the low-resolution, low-quality cameras in the current generation of drones (Predator MQ-1). They are almost decade-old technology, and as such mistakes will be made. This, coupled with gun phobia and the pre-emptive strategy that the United States of America has developed over the years, will lead to more mistakes than necessary. Autonomous drones can be used more efficiently, but carrying two kinds of armament will not be feasible, as it would cause weight problems for the drones. Current drone programmes in any state include a package of intelligence, surveillance and reconnaissance. This means that for a drone to be fully autonomous, it must have access to these packages, and its decisions must be based on a high-quality report rather than a hunch.
Lastly, I would like to respond to the argument against product censorship and faulty criminal negligence prosecution. The information given to a drone, relevant or not, is part of the intelligence of the human agency or group using it. This means that not even the intelligence gathered will always be 100% true, but current drone technology, developed after the 2011 Iranian incident, ensures that when a drone discovers it has a faulty system, it returns home on its own or abandons the mission without a human instructing it to do so. This is a better mechanism than a self-destruct device, which would be counterproductive given the cost of developing and building such a machine. Drone operating programs are software developed for a specific purpose. Military software is installed only in military computerised systems and is heavily monitored by capable personnel, which ensures that criminal law is applicable as a form of retribution. Civilian software, however, is very limited in its functions, and this is exactly why it is easily accessible. Responsibility for faulty programming still rests with the programmer or software engineer, while equipping the drone with devices falls to the manufacturer, who only provides what the market requests. It is important to understand that military or even terrorist factions buy drones and redesign certain aspects to better serve their purposes. If such a modification is substantial, the manufacturer can no longer be held responsible.
In conclusion, autonomous and semi-autonomous weapon systems are designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force and therefore they should be held responsible for the consequences that occur.
This debate took quite an interesting turn. While in the first part the opponents argued over the liability of the programmer or the engineer for their own actions in the use of a drone, in the second part both opponents took the discussion further and analysed the possibility of liability for the actions of others who use the drone and damage third parties.
I hope you enjoyed it and that it provided some food for thought.
Conclusions - Andrei
It can be clearly seen that the main clashes in the debate formed around four ideas.
Firstly, even though both the technician and the programmer should answer for the acts of autonomous drones, the technician should be held responsible for the acts of the artificial intelligence he designed, while the programmer should be held to account for the results of the software he created.
Secondly, in the case of terrorist groups that buy these drones, transform them mechanically and use certain software to change their initial purpose (perhaps originally surveillance only), we cannot hold the engineer liable for the acts of that particular drone. In this case, the manufacturer should be charged with criminal negligence for selling these products, or for lack of surveillance if they were stolen from the factories, along with those who intentionally made the changes.
Thirdly, automated weapon systems, created to independently verify and detect different targets, have various forms of control, such as voice verification. This is why the errors that have appeared in several cases of automated fire on civilians have a single explanation: technical errors, which must be fully attributed to the engineer.
Lastly, there is the possibility of equipping autonomous drones with two kinds of weapons, depending on the specific circumstances, in order to give the system the possibility of immobilising non-combatants, or combatants found very close to innocent victims. In this regard, however, the discussion remains open, as current civilian usage prohibits lethally armed drones because they present a high degree of risk.
Conclusions - Oana
To sum up the ideas behind the need to hold drone programmers, manufacturers and pilots accountable for drone usage, one has to look at technologies that were also considered harmful at their conception but soon proved to have more strengths than weaknesses. Take the internet. It has evolved since ARPANET first laid down the framework. Since then, people with the intent of causing damage have started to use the internet, and a legal framework has come into effect to stop them, with considerable success. Drone programmers should be dealt with in the same way, through existing legislation and future legal provisions that envision licensing by an aviation administration. For example, the 'Snoopy' drone, developed by security researcher Glenn Wilkinson, can fly over hundreds of people in a few minutes and steal data from their mobile phones without their knowledge, by simulating a Wi-Fi connection. This paves the way for the idea that drone operators and drone software developers should be prosecuted for unlawful usage of these gadgets.
While civilians have their share of legal provisions, military pilots fall under the principle of the chain of command, as Peter Maurer, president of the International Committee of the Red Cross, stated in 2013, and as such they will be treated as combatants even if they are miles away from the battlefield.
Autonomous drones will never be left unchecked by a human overseer, and as such that same person will be prosecuted and judged for every action taken up to that point. This is due in part to democratic values and the principles of humanity we hold from birth, and in part to public opinion, which will always be against self-aware artificial intelligence.