AI and Autonomous Lethal Systems
07/28/2023 :: Jeremy Pickett :: Become a Patron :: Buy Me a Coffee (small tip) :: @jeremy_pickett :: Discussion (FB)
Are autonomous weapons systems the future of high-risk engagements, or do they create more problems than they solve?
TL;DR: The post surveys proposals for managing the development of AI-powered autonomous weapons, which range from military robots to self-guided munitions. It notes rising concerns over accountability and ethical risk as the technology advances. Options discussed include weapons bans, international treaties, government regulation, military oversight boards, and voluntary industry guidelines. Each approach has serious shortcomings; the piece concludes that governance will require nuanced compromises between principle and pragmatism in this complex domain.
Autonomous weapons systems powered by artificial intelligence bring tremendous risks alongside potential military benefits. Policymakers are scrambling to govern these rapidly evolving technologies before deployment outraces ethical constraints. Proposed oversight models range from total bans to self-regulation, each with merits and flaws. Navigating responsibilities among tech firms, the military, and society remains deeply complex but urgent as AI enables ever more independent lethal force, from small-scale law enforcement to global cyberwarfare.
Concerns over autonomous weapons focus on diffused accountability and the loss of human judgment over life-and-death targeting. Critics point to mistakes such as accidental drone strikes, and to the risk of war crimes if such systems are deployed without care. Militaries, however, contend that AI-enabled weapons could reduce risks to their personnel. As with many emerging technologies, autonomous systems engender both promise and peril.
The argument over autonomous weapons and their regulation is complex and highly contentious. Ethicists and civil society groups have called for an outright, preemptive ban on development, hoping to head off a potentially destructive AI arms race in the way that treaties restricted chemical and biological weapons. The feasibility of such all-encompassing bans, however, is challenged by the stark reality of geopolitics. Major military powers are deeply invested in AI research, viewing it as a cornerstone of future competitive advantage, and are unlikely to forfeit potential advantages. Would they willingly give up such potentially transformative capabilities?
Take the case of the United States, Russia, and China: these nations have persistently opposed UN negotiations aimed at establishing such a ban. They hold that autonomous technology could reduce military casualties, countering the humanitarian arguments against autonomous weapons. Without the commitment of these powerful militaries, enforceable prohibitions become a daunting prospect. How could global accord on this contentious issue be achieved without the participation of these significant players?
Positive context of autonomous weapons
AI and autonomous systems have shown significant potential for saving lives and mitigating danger. Autonomous drones have been used for surveillance and reconnaissance, providing valuable information without putting human soldiers at risk. The US military has also relied on robotic systems for bomb disposal in Iraq and Afghanistan, reducing the danger to human life.
Automated defense systems like Israel's Iron Dome are another example. The system is designed to intercept and neutralize short-range rockets and artillery shells, reducing civilian casualties from such attacks. Similarly, the U.S. Aegis Combat System, an advanced, automated naval defense system, can track, identify, and engage incoming threats, providing increased protection for navy vessels.
The US Navy's Sea Hunter is an autonomous surface vessel designed to patrol the seas and track potential threats. The ship has completed successful trials and could reduce the risk of human casualties in future conflicts. In its war with Russia, Ukraine has used uncrewed vessels to strike Russian ships and sharply constrain the Russian navy's operations in the Black Sea.
Lethal autonomous weapon systems (LAWS), weapons designed to identify and engage targets without further human intervention, remain under development, and proponents argue they could reduce the risk of friendly fire incidents in future conflicts.
Some suggest a more gradual international approach. This method, akin to how nuclear or chemical arms are handled, aims to formulate norms and impose certain restrictions through treaties. However, this approach comes with its own challenges. Cyberweapons, for instance, have already skirted traditional arms control regimes. Furthermore, global consensus on AI weapon policies remains elusive. Countries tend to engage in drawn-out debates and provide ambiguous definitions surrounding significant issues like human control over autonomous targeting. Can clear, universally accepted definitions be established in such a complex and rapidly evolving field?
Moreover, treaty proposals often fall behind the pace of technological advancement, unable to predict future developments. The emergence of autonomous armed drones serves as an example, as they were operational long before the UN even began discussing restrictions on lethal autonomous weapons. With research in both civilian and military spheres advancing rapidly, comprehensive regulations often find themselves outdated by the time they are enacted. How can policy and regulation keep pace with such fast-evolving technologies?
Problematic situations with autonomous weapons
However, these technologies also have drawbacks and have led to unintended negative consequences. A well-known example is the December 2013 U.S. drone strike on a wedding convoy in Yemen, which killed a number of innocent civilians. Although the drone was remotely piloted rather than fully autonomous, the incident raised significant concerns about the accuracy and reliability of remote and automated targeting systems.
There is also the question of 'killer robots', or fully autonomous weapon systems (AWS), which, once activated, can select and engage targets without further human intervention. The alleged use of such a system in the Libyan civil war sparked controversy and debate about the necessity of meaningful human control over weapon systems. The UN has expressed serious concerns about AWS, citing risks of violations of international humanitarian and human rights law, accountability gaps, and the implications for human dignity.
The use of autonomous weapons systems could also lower the threshold for war: if countries believe they can fight without risking their own personnel, they may be more willing to enter conflicts. And the technology could spark a new arms race, as countries develop ever more sophisticated autonomous systems to maintain their military advantage.
Given these examples, it is clear that while AI and autonomous systems can play a positive role in conflict situations, there are significant ethical, legal, and humanitarian concerns that need to be carefully addressed. How can we ensure that these technologies are used responsibly and under proper oversight? Are current international laws adequate to regulate these technologies, or do we need new, specific regulations? These are crucial questions that need further exploration and discussion.
While piecemeal bans on certain applications (like AI-enabled assassination drones) may be more practical, comprehensive bans on autonomous weapons appear unlikely. Given the nature of global digital connectivity, restraint by a single state could inadvertently hand an advantage to less scrupulous regimes. For now, efforts to establish international norms and oversight face formidable coordination challenges. In such a complex landscape, what viable strategies can the global community pursue to ensure the ethical use of AI in warfare?
Domestic regulations fare little better at restricting research, given its inherently dual-use nature. Comprehensive laws rapidly become outdated as the science advances. Legislatures struggle to balance security, ethics, and innovation; courts lack technical competency on emerging technologies; and constitutional rights provide few clear limits on the military use of AI systems.
Within the US military, some advocate sworn oversight boards to enforce codes of ethical practice governing autonomous capabilities and their use in warfare. However, internal supervision still permits weaponization. Critics argue that only external civilian oversight prevents abuses, but security concerns hamper the transparency such oversight requires.
On the industry side, tech firms affirm principles for ethical AI development, but implementing enforceable guidelines and restrictions remains difficult. Engineers’ growing ethical concerns could support limits, but companies respond primarily to profit motives and competitive pressures. Their voluntary cooperation on arms control appears unlikely.
At local levels, policymakers debate measures governing law enforcement's usage of autonomous weapons, from surveillance drones to robots disabling bombs or hostage-takers. Rules-of-engagement for non-military lethal force require careful calibration between security and rights.