Artificial intelligence in armed conflicts: legal and ethical limits

Authors

Vigevano, M. R.
DOI:

https://doi.org/10.3989/arbor.2021.800002

Keywords:

Artificial Intelligence (AI), armed conflicts, international humanitarian law, targeting, ethics

Abstract

The rules of International Humanitarian Law (IHL) set limits on the use of the means and methods of combat in the conduct of hostilities. Although IHL was not originally developed with the challenges posed by Artificial Intelligence (AI) in mind, the evolution of AI, of its algorithms and of their emerging military applications now poses a challenge in the light of humanitarian standards. This challenge comprises three fundamental aspects: legal, technical and ethical.

At its current stage of development, AI allows an algorithm-based computer programme to perform certain tasks in complex and uncertain environments, often with greater accuracy than humans. Yet no technology exists that enables a machine to behave like a human being who can determine whether an action is lawful or unlawful and decide not to proceed with a programmed action, with the protection of victims as the primary objective. This is one of the dominant themes in doctrinal debates on the application of IHL to means and methods of combat involving AI-related techniques.

States must adopt verification, testing and monitoring systems as part of the process of determining and imposing limitations or prohibitions in accordance with the essential IHL principles of distinction and proportionality governing the use of weapons in international and non-international armed conflicts. Moreover, from both a legal and an ethical perspective, the human being is at the center of this issue: responsibility for the use of force cannot be transferred to weapons systems or algorithms; it remains a human responsibility.



Published

2021-08-26

How to Cite

Vigevano, M. R. (2021). Artificial intelligence in armed conflicts: legal and ethical limits. Arbor, 197(800), a600. https://doi.org/10.3989/arbor.2021.800002

Issue

Section

Articles