
WASP-HS Workshop in conjunction with the conference AI for Humanity and Society 2024

In Defense of Dignity in the Face of the Lethal Use of Artificial Intelligence

Host

Lorena De la Barrera, a00991837@tec.mx

About

“Technology is not neutral. We’re inside of what we make, and it’s inside of us. We’re living in a world of connections—and it matters which ones get made and unmade.” — Donna Haraway

Artificial Intelligence (AI) has become an integral part of our lives. From autonomous vehicles to medical diagnostics, AI systems wield immense power, and that power carries ethical responsibilities. This workshop aims to examine the ethical implications of AI deployment, particularly where it intersects with life-and-death decisions made by semi-autonomous systems.

Context and State of the Art: The rapid advancement of AI technologies necessitates critical reflection. Recent incidents, such as autonomous vehicle accidents and biased algorithmic decisions, underscore the urgency of ethical discussions. We stand at a crossroads where the choices we make today will shape the future of AI’s impact on humanity.

Objectives

I. Contextualize the Threat: We will examine a specific case study: the deployment of an autonomous weapon, the Kargu-2 drone, in Libya (see Annex A). This weapon targeted fleeing fighters, raising concerns about the lawfulness of lethal autonomous weapon systems in armed conflicts.
II. Techno-Feminist Lens: Drawing inspiration from feminist scholars, we will examine AI's gendered impact. How do biases creep into algorithms, and how can we rectify them? We will chart a techno-feminist route through these ethical dilemmas, grounded in the atrocities in Libya.
III. Logarithmic Confidence Model: Our proposed model acknowledges uncertainty. It encourages AI practitioners to embrace humility and recognize the limits of algorithmic decision-making. We will discuss its practical implementation; an illustrative sketch follows below.
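
The model itself is still a proposal, so as a discussion aid the following minimal Python sketch illustrates one plausible reading of a "logarithmic confidence" rule: classifier probabilities are mapped to log-odds, and the system abstains, deferring to a human, whenever confidence falls below a high threshold. The function names, the mapping, and the threshold value are illustrative assumptions for the workshop, not the actual proposal.

```python
import math

# Illustrative sketch only: the names, the log-odds mapping, and the 4.6
# threshold are assumptions for discussion, not the workshop's actual model.

def log_confidence(p: float) -> float:
    """Map a probability to log-odds; equal steps on this scale correspond
    to equal multiplicative changes in the odds."""
    eps = 1e-9  # guard against log(0) at p = 0 or p = 1
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

def decide(p_target: float, threshold: float = 4.6) -> str:
    """Permit an action only when the log-odds confidence clears the
    threshold; otherwise defer to a human operator.
    Log-odds of 4.6 correspond to roughly 99% probability."""
    if log_confidence(p_target) >= threshold:
        return "act"           # placeholder for any high-stakes action
    return "defer_to_human"    # default posture: humility about uncertainty

# A classifier score of 0.97 gives log-odds of about 3.48, below the 4.6
# threshold, so the system defers rather than acting autonomously.
print(decide(0.97))  # -> defer_to_human
```

The point of the sketch is the default path: the system must positively clear a high evidential bar before any autonomous action is permitted, and uncertainty resolves to human judgment rather than machine action.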

Workshop Organization

a) Group Discussion: the ethical dimensions of lethal AI, and why and how to defend humanity from lethal autonomous weapon systems (LAWS). A 20-minute introduction delivers the keynotes, principles, and rules of participation, followed by 60 minutes of discussion in small groups of 4-5 people and 20 minutes for each group to present its ideas in plenary.
b) Design Thinking Session: attendees will brainstorm ways to integrate techno-feminist principles into AI development. How can we ensure that AI systems respect human dignity? One hour of discussion in small groups of 4-5 people, after which each group presents its ideas.
c) Logarithmic Confidence Challenge: we will introduce our proposed model and invite feedback, followed by an open discussion and brainstorming for next year's workshop based on the proposals and solutions arising from this one.

Paper Submission

We encourage participants to author a paper based on their insights from the workshop. Contributions will be considered for a special issue in a reputable journal, fostering ongoing dialogue and research.

Preparation for Participants

The workshop is divided into three sections, building on prior analysis of the briefing in Annex A and the minimum list of suggested references.
Notes delivered one month before the workshop (see Annex A) will include a list of concepts and terminology sent to all participants, so that agreement on terms is reached beforehand and everyone arrives prepared and well-read for the event in Gothenburg. An online forum may be created to post these keynotes and encourage online dialogue in preparation for the session; the sharing of academic material and viewpoints will be key to the workshop's success.

Teams will be assigned 15 days before the workshop, and a small integration activity will be given to each group to encourage networking and make the most of the in-person time.

Annex A

Analyzing the Kargu case through the lens of the defense of dignity involves considering how the deployment of autonomous weapons systems (AWS) like the Kargu-2 drone impacts human dignity. The concept of human dignity is often associated with the inherent worth of individuals and the respect owed to them. In the context of AWS, concerns arise regarding the dehumanization of warfare, as machines are given the power to make life-and-death decisions, potentially without adequate ethical and moral judgment (Sharkey, 2019: https://rdcu.be/dJ0uh).

From this perspective, the use of the Kargu-2 drone in Libya can be seen as a challenge to human dignity on several fronts:

  • Devaluation of Life: The ability of the Kargu-2 to autonomously select and engage targets may lead to a perceived devaluation of human life, as the decision to use lethal force is removed from human hands.
  • Accountability: There is a question of who is held accountable for the actions of an autonomous system, which complicates the traditional understanding of moral and legal responsibility in warfare.
  • Transparency: The algorithms driving such systems are often opaque, making it difficult to ensure they align with humanitarian principles and respect for human dignity.

Applying techno-feminist principles to the development of AI systems like the Kargu-2 involves a commitment to creating inclusive, equitable technology that does not perpetuate existing power imbalances. Techno-feminism emphasizes the need for reflexivity, participation, intersectionality, and structural change in technology development (Guerra, 2023: https://feministai.pubpub.org/pub/ghxn5ka8/release/2; Tandon, 2021: https://feministai.pubpub.org/pub/practicing-feminist-principles/release/2).

In the case of the Kargu-2, this could mean:

  • Reflexivity: Ensuring that the design and deployment of AWS consider the social and ethical implications of their use, reflecting on the power dynamics they may reinforce.
  • Participation: Involving a diverse group of stakeholders, including those who might be affected by AWS, in the development process to ensure a wide range of perspectives and needs are considered.
  • Intersectionality: Recognizing and addressing the different ways in which AWS might impact various groups, particularly those who are marginalized or vulnerable.
  • Structural Change: Working towards altering the underlying social and political structures that allow for the development and use of AWS without sufficient ethical consideration.

By integrating these principles, the development of AI systems like the Kargu-2 could be steered towards outcomes that respect human dignity and promote social justice, rather than exacerbating inequalities and reducing accountability in warfare. This approach would require a fundamental rethinking of how such technologies are created and implemented, with a focus on ethical foresight and the long-term impact on society.

Minimum List of References:

  1. Adam, D. (2024). Lethal AI weapons are here: how can we control them? Nature. https://doi.org/10.1038/d41586-024-01029-0 Abstract: This paper discusses the risks associated with the integration of machine learning in autonomous weapons systems (AWS), emphasizing the potential for geopolitical instability and the negative impact on the free exchange of ideas in AI research. The author argues that AWS could lower the political cost of waging war and increase the likelihood of conflicts escalating.
  2. AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research. (n.d.). arXiv.org. Retrieved June 5, 2024, from https://arxiv.org/abs/2405.01859 Abstract: This paper discusses the risks to geopolitical stability and AI research posed by the development of lethal autonomous weapons systems (LAWS) with machine learning. It provides regulatory suggestions to mitigate these risks.
  3. Kayser, D. (2023). Why a treaty on autonomous weapons is necessary and feasible. Ethics and Information Technology, 25, 25. https://doi.org/10.1007/s10676-023-09685-y Abstract: This article discusses the international media attention on using the Kargu in Libya, an unmanned aerial vehicle with autonomous capabilities.
  4. Libya, The Use of Lethal Autonomous Weapon Systems | How does law protect in war? – Online casebook. (n.d.). casebook.icrc.org. https://casebook.icrc.org/case-study/libya-use-lethal-autonomous-weapon-systems Abstract: This case study examines the concerns about the lawfulness of deploying LAWS in armed conflicts, with a specific reference to an incident in Libya.
  5. Nasu, H. (2021, June 10). The Kargu-2 Autonomous Attack Drone: Legal & Ethical Dimensions. Lieber Institute West Point. https://lieber.westpoint.edu/kargu-2-autonomous-attack-drone-legal-ethical/ Abstract: Weapon advancements aim to distance attackers from targets, challenging the incorporation of humanitarian principles in war. Ethical debates on autonomous weapons like LAWS struggle due to the vague concept of humanity and the human drive to utilize technology. Effective arms regulation is hindered by political discord and the unpredictable potential of new technologies.

Registration

To register for the workshop, please e-mail Lorena De la Barrera at a00991837@tec.mx. Workshop registration, including submission, closes on 15 October.

Please note that in order to participate in this workshop you must also register for the conference via the event page.