Comparing autonomous weapon systems to the Decepticons, the faction of malicious robots from Transformers, has become a prominent metaphor. The Decepticons dramatised the dangers of unrestrained autonomous decision-making in combat, mirroring real-world legal and ethical concerns over the use of autonomous weapon systems. Technology has undeniably become one of the defining forces of the contemporary world, and over the last decade humans have continually worked to develop and improve it, not least in weaponry. Throughout the history of war, mankind fought with traditional weapons such as gunpowder and landmines. Today, however, rapid technological change has shifted warfare away from conventional weapons toward the latest incorporation of artificial intelligence (AI), which drives the autonomous weapon system (AWS).
An autonomous weapon system (AWS) is an advanced weapon designed to select and engage targets without human intervention. A person activating an AWS cannot know specifically what or who it will strike, nor where and when the strike will occur, because autonomous weapons are triggered by software and sensors that match what the sensors detect in the area against a 'target profile.' These machines operate through a combination of algorithms, sensors, and pre-programmed parameters that together provide the ability to identify targets. The weapon fires based on a threat assessment: whether the identified object's shape, heat signature, and movement patterns meet the threat parameters. The engagement decision itself is governed by the Rules of Engagement (ROE); once the criteria are met, the system launches an attack using its designated weaponry, such as guns, lasers, or missiles.
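To make this pipeline concrete, the sketch below models the profile-matching, threat-assessment, and ROE steps just described. It is purely illustrative: every name in it (Detection, TargetProfile, meets_roe, the thresholds) is a hypothetical stand-in for explanation, not the interface of any real weapon system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """What the sensors report about an object in the area (hypothetical fields)."""
    shape: str             # classified silhouette, e.g. "vehicle" or "person"
    heat_signature: float  # relative thermal intensity, 0.0 to 1.0
    speed_mps: float       # observed movement speed, metres per second

@dataclass
class TargetProfile:
    """Pre-programmed parameters a detection is matched against."""
    shape: str
    min_heat: float
    min_speed_mps: float

def matches_profile(d: Detection, p: TargetProfile) -> bool:
    """Threat assessment: do shape, heat signature, and movement meet the parameters?"""
    return (d.shape == p.shape
            and d.heat_signature >= p.min_heat
            and d.speed_mps >= p.min_speed_mps)

def meets_roe(civilians_nearby: bool) -> bool:
    """Crude stand-in for a Rules of Engagement check; real ROE are far more complex."""
    return not civilians_nearby

def engagement_decision(d: Detection, p: TargetProfile, civilians_nearby: bool) -> str:
    """Engage only if the detection matches the profile and the ROE check passes."""
    if matches_profile(d, p) and meets_roe(civilians_nearby):
        return "ENGAGE"  # the system would launch its designated weaponry
    return "HOLD"

# Example: a fast, hot, vehicle-shaped contact.
profile = TargetProfile(shape="vehicle", min_heat=0.6, min_speed_mps=5.0)
contact = Detection(shape="vehicle", heat_signature=0.8, speed_mps=12.0)
print(engagement_decision(contact, profile, civilians_nearby=True))   # HOLD
print(engagement_decision(contact, profile, civilians_nearby=False))  # ENGAGE
```

Notably, in this sketch the entire moral weight of the decision collapses into hard-coded thresholds and a single boolean flag, which is precisely the ethical worry raised below.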
However, the autonomous weapon system raises a pressing ethical concern because of the limited human participation in life-and-death decisions. The biggest question surrounding AWS is: should we actually let machines decide who lives or dies? Undoubtedly, the lack of human judgement in AWS operations gives rise to ethical concerns. The system is perceived to lack moral judgement; a human soldier might hesitate before shooting a target out of morality and empathy, whereas a machine will not. Furthermore, the errors and collateral damage AWS could cause, such as target misidentification, reinforce the perception of AWS as a cold and unfeeling system.
In contrast to AWS, humans can weigh complex circumstances in ways that algorithms cannot replicate. An autonomous drone that fails to identify its target correctly could inflict major harm on civilians, with no clear person to hold accountable for the operation; that accountability gap remains the major grey area.
This so-called advancement in warfare, which some have glorified, has conversely prompted the global community to question its major potential to breach International Humanitarian Law (IHL), the branch of international law that aims to minimise the detrimental effects of armed conflict. In the context of utilising AWS in armed conflicts, the prevalent IHL principles of distinction, proportionality, necessity, and accountability are considered the main threshold to fulfil. These principles are outlined in Additional Protocol I (1977) to the Geneva Conventions: Article 48 emphasises the importance of distinguishing between combatants and non-combatants; Article 51(5)(b) prohibits attacks that cause excessive incidental harm to civilians and civilian objects; and Article 57 highlights precautionary measures in the conduct of military operations around civilians. These rules can be summed up in four precepts: do not attack non-combatants, attack combatants only by legal means, treat persons in your power humanely, and protect the victims.
In examining real-world applications of AWS, the recent armed conflict in Ukraine has captured global attention, as both Ukraine and Russia have continuously utilised drones powered by artificial intelligence for surveillance and precision strikes. Ukraine, a state party to the Geneva Conventions and their 1977 Additional Protocol I, has deployed innovative AI-powered drones aimed at effectively targeting military assets and infrastructure. On the other side, in December 2022 waves of Russian Shahed-136 self-detonating drones patrolled the skies autonomously before blasting themselves in colossal waves at Ukraine's power grid. As a result, the city of Odesa lost two of its energy facilities, leaving 1.5 million people without electricity and heating during the coldest months of winter. In addition, Ukraine is currently running simulations of ground-based AWS. Although these are less widely deployed than drones, ground-based systems are designed for tasks such as bomb disposal and direct contact with enemy forces in high-risk zones.
Similarly, on 29 August 2021, a U.S. drone strike ultimately failed to differentiate its target and instead killed 10 civilians, including seven children, in a residential neighborhood of Kabul, Afghanistan. This was a clear violation of the principle of distinction outlined in Article 48 of Additional Protocol I. Moreover, data released by the Bureau of Investigative Journalism indicates that from June 2004 through mid-September 2012, drone strikes killed roughly 2,562 to 3,325 people in Pakistan, of whom 474 to 881 were non-combatants, including 176 children.
Therefore, such cases underscore the ethical and legal challenges that AWS poses to IHL principles, and the breaches it has already produced. The question, then, is no longer whether AWS poses a challenge to IHL, which it undoubtedly does, but why IHL has not taken significant measures to address the use of autonomous weapon systems.
To address this question effectively: the failure of IHL to confront the use of AWS reflects a deeper issue in how the global community faces emerging technologies, especially in armed conflict. A prominent reason is that IHL frequently reacts to atrocity rather than anticipating it; international regulations such as the Geneva Conventions have historically been adopted only after large-scale suffering. Unfortunately, attempts to bring the numerous cases in which autonomous weapon systems have contravened IHL principles to justice have not been treated as events demanding immediate change. Yet waiting for an AWS "apocalypse" before acting would be irresponsible given the consequences.
Political self-interest among powerful countries such as the U.S. and Russia is another driving factor. Powerful countries often see AWS as a way to maintain their dominance in the military sector, resisting restrictions while prioritising strategic advantage over international humanitarian law principles. This is a political reality the global community must recognise: states with significant economic and military power will always take precedence over other states, or, in this case, over the law. It also helps explain why international treaties often stall even when proposed.
Given these points, it is also important to note that regulating autonomous weapon systems is more complex than regulating traditional weapons: AI develops rapidly and is hard to constrain with a robust rule. Yet complexity does not justify ignoring the problem, especially given the urgent armed conflicts currently happening. Robust rules have been applied to nuclear weapons and landmines; the same level of effort should be applied to addressing the use of AWS.
In drawing to a close, the operation of autonomous weapon systems remains in the shadows. Unlike chemical weapons or landmines, the effects of autonomous weapon systems are not directly visible to the public, which results in less outcry. Consequently, governments have had little incentive to act amid low public activism and awareness. To address this, solid global cooperation and leadership are critical to closing the gap before autonomous weapon systems redefine the rules of war on their own terms.