Artificial Intelligence: failing better?

"All of old. Nothing else ever. Ever tried. Ever failed. No matter. Try again. Fail again. Fail better" Samuel Beckett

The vulnerabilities we are made of are layers, not labels that stick to our skin. Sometimes we have more, sometimes less, but there are always various fragilities that inhabit our humanness, starting with the mortality that cuts across all living beings. Mortality is present in us through our consciousness: we know what we are, who we are (?), where we are, where we are going and that the journey has an end. We are born with our idiosyncrasies and we will all die one day.

We inhabit a body and we are a body: fragile, inscribed in time and space, capable of thinking, acting, deliberating and deciding. We regret bad decisions that have cut off paths impossible to recover, with no possibility of sweeping the road backwards, as happens in Mia Couto's novel "Jesusalém" (1), in which Mwanito, his brother Ntunzi and the servant Zakaria Kalash sweep the shortcuts backwards: instead of clearing the paths, they scatter dust, twigs, stones and seeds over them, preventing these shortcuts from becoming roads. And so they nullify the embryo of any destiny (p. 39).

We assume or discard responsibility for what we decide and do; we learn to develop our autonomy and make use of it throughout our lives, beset by moral failings, toxic waste, and the seeds of burn-out in the society of tiredness (Byung-Chul Han).

We are unique, singular, imperfect beings in a world of information, of replicable technology capable of solving question marks, equations, problems and dilemmas through correlations between billions and billions of pieces of data. Overwhelmed by what can be done, with no time to think about whether we should do it, we walk like Philippe Petit on a thin wire strung between skyscrapers, confronted with our essence: "the high wire is an art of solitude, a way of coming to grips with one's life in the darkest, most secret corner of the self" (2); we walk between the real world, vulnerable, subject to failure, mortal, and the virtual world, which tends to be unscathed, invulnerable and largely immune to failure. A space open to better 4.0 versions of each of us, encapsulated in layer upon layer of information, at an ever greater distance from the centre of ourselves: who are we, anyway?

Algorithms, capable of reading (big) data in a (very) short space of time and producing effective results in science, in health and in the technology we inhabit and live with daily, can make two dimensions present in the text I have written so far disappear: the long road of reflection and critical thinking (without which the intertextualities woven into this text would not be the same) and the authenticity of the Self that is formed in the relationship with the Other.

Realising the risk associated with losing these two dimensions should allow us to frame ethical thinking about Artificial Intelligence (AI) from a perspective different from the one that has so far served the ethics of technologies: instead of applied ethics as a body of technical knowledge meant to avoid failures despite human factors and to guarantee the preservation of Privacy, Security and Justice, it would be important to rescue another sense of ethics.

It's worth reading, stopping, re-reading and reflecting on Boenig-Liptsin's article "Aiming at the good life in the datafied world: A co-productionist framework of ethics", which proposes applying a relational-ethics approach in the context of computing and data science, inviting us to analyse how these technologies mediate and reconfigure the relationship between the Self and the Other (4). The question we need to ask is not only whether we can remedy negative consequences for our society and for the individual through ethics (in other words, whether we can fail less in order to minimise evil); the question will also, and above all, be what good we seek to achieve through data technology and computing, taking into account what defines us as persons: an identity co-constructed in the relationship with the Other and with our own Self.

The ethical analysis of AI will thus allow us to think about how algorithms shape the relationship between each of us and other human beings, and to revisit the notions of normality, of good and evil, anticipating how these concepts are mirrored by AI systems. Instead of focussing our ethical analysis on how technologies work, it will be important to look at the dynamic process of ethical action in technological contexts: how do we preserve trust between doctor and patient in an AI-mediated healthcare context? How do we promote critical thinking in the pedagogical context mediated by ChatGPT (3) and, eventually, by virtual reality laboratories in schools and universities? The Good that we can aspire to through the application of these technologies implies the development of the identity of each of those involved, and not just a protective shield against breaches of privacy, security or justice. The technology that allows us to realise ourselves as persons, with others and for others, in just institutions (to paraphrase Paul Ricoeur's definition of ethics) (5) is the technology that doesn't prevent us from always returning to our Self, with our frailties, vulnerabilities and flaws. Thinking about the ethics of technology within this broader framework of the relationship between the Self and the Other, and between these two and the community in which they are embedded, allows us to understand that the Good does not lie in eliminating Failure, but in the way we learn to integrate it into our life of relationship: we try again, we fail again, we fail better, preserving our human essence, whose meaning will always be co-constructed.

Notes:

(1) Mia Couto. (2009). Jesusalém. Editorial Caminho.

(2) Philippe Petit. (2019). On the High Wire (trans. by Paul Auster). Orion Publishing Co.

(3) I invite readers to read the following text written by ChatGPT 3.5 (in less than a minute) on the topic of Artificial Intelligence and Failures:

"The rise of artificial intelligence (AI) has brought with it a remarkable promise of technological advances and transformations in various spheres of society. However, it is crucial to recognise that, even in the face of all its potential, AI is not without its flaws. The complex interaction between algorithms, data and the environment in which they operate can result in unexpected and sometimes damaging consequences.

The nature of AI involves learning from data and adapting to identified patterns. However, when this adaptation takes place in a dynamic and ever-changing environment, AI can encounter situations that were not foreseen or poorly understood by its algorithms. These failures can range from incorrect decisions in autonomous systems to unwanted biases in decision-making algorithms.

One of the challenges facing AI is its dependence on training data. If this data contains biases or is not representative enough, AI can perpetuate and even amplify these disparities. This is particularly worrying when AI is applied in sensitive areas such as health, criminal justice and human resources, where fairness and impartiality are essential.

Another critical aspect is the lack of complete understanding of the internal processes of some AI models. In many cases, deep learning algorithms operate as "black boxes", making it challenging to understand how certain decisions are reached. This makes it difficult to identify and correct flaws when they arise, jeopardising transparency and accountability.

Faced with these issues, it becomes imperative to develop robust strategies to mitigate AI failures. This includes implementing ethical practices from the design to the implementation of systems, carrying out regular audits to assess the fairness and safety of algorithms, and promoting an interdisciplinary approach involving experts in ethics, sociology and law.

Artificial intelligence is a powerful tool, but its responsible application requires a careful and constant approach. Understanding that flaws can occur and being willing to correct them are key steps to ensuring that AI contributes positively to society, respecting ethical principles and promoting public trust."

(4) Boenig-Liptsin, M. (2022). Aiming at the good life in the datafied world: A co-productionist framework of ethics. Big Data & Society, 9(2). https://doi.org/10.1177/20539517221139782, p. 3.

(5) Paul Ricoeur. (1992). Oneself as Another. Chicago: University of Chicago Press, p. 172.