The ethics of visual scientific communication concerns the principles that should guide the production and use of images of science and technology. In the field of visual representation of AI, however, there is a chronic poverty of genuinely scientific and technological images of artificial intelligence.
From an ethical, but also an aesthetic, ontological and immediately political point of view, the point is not to renounce visual representations of AI – retreating into a form of scientific and technological iconoclasm – but rather to conceive ‘pensive’ images and uses, to borrow a term from Jacques Rancière, or ‘agonistic’ ones, to use the language of Chantal Mouffe.
Moreover, the production of images by means of generative AI raises some important ethical and social questions.
First, such images are created on the basis of training carried out by AI systems on images available online. Stock images – available on the sites of agencies such as Getty Images and Shutterstock – are widely used as ‘nutrients’ for generative AI. These images are laden with social and cultural biases that risk being replicated in the images generative AI produces.
Second, generative AI leads to a paradoxical situation in which AI itself creates its own visual representations – its own narrative – from AI images previously created by humans and available online. On the one hand, many of these automatically and autonomously generated images risk perpetuating the same AI imagery ad libitum: smooth white robots, blue backgrounds, lines of numeric code floating in space, and so on. On the other hand, they risk giving generative AI, as well as the companies that own it, the privilege of owning not only the material means but also the symbolic or ‘imaginary’ means linked to AI.
A fundamental element to bear in mind is that working on the images and imaginaries of AI is not the same as taking a position external, or extraneous, to the AI object. Starting from the idea that AI is not only a technology but also, and perhaps above all, a social and cultural phenomenon, we can say that discussing images and imaginaries means being able to account for an important, but often underestimated, aspect of innovation processes in this field.
With respect to this problematic state of the art, the Images of Artificial Intelligence and the Ethics of Science Communication project set out to analyse the images with which we represent AI. The approaches used were: a) the philosophical analysis of concepts (mainly with regard to the ethical and political effects of these representations); b) science and technology studies (with regard to the conceptualisation of, and methods for analysing, socio-technical imaginaries); c) sociosemiotics of images and critical discourse analysis (mainly to isolate and analyse corpora of images and texts).
In summary, it is possible to distinguish three axes of discourse that have guided the development of the project:
a) Aesthetic/artistic axis: what relationship do AI images have with the concept of creation? Does the common and increasingly widespread use, in both its recreational and professional dimensions, of platforms capable of generating images on the basis of simple linguistic prompts undermine the discourse on the authorship of the image? Does art become an extension of technique? Or do art and technique, since in both cases we are talking about artefacts, have almost the same nature? After all, one could say that producing art in dialogue with a machine, e.g. by means of prompts, is not very different from a common art such as gardening, where production can be directed but not entirely domesticated.
b) Ethical/epistemological axis: the images generated by AI are interwoven with a gigantic corpus of stored, invisible data. These are non-human images that disrupt the referential topologies of ontology, of indexicality, of the imaginary, through which we have learnt to position ourselves in the apprehension and judgement of images. How do these images relate to deepfake culture? Is it useful, and even possible, to judge their veracity? Indeed, one could argue that the problem of deepfakes and fake news cannot be solved by resorting to the ontological-epistemological distinction between true and false, but rather by resorting to the ethical-political distinction between recognition and non-recognition among individuals who adhere to a certain worldview.
c) Social/semiotic/communicative axis: given that AI images are fed by a body of data, whether and how do cognitive biases act on the images that are generated in return? Is it possible to map them? Above all, is it useful to do so, given that, as the hermeneutic tradition teaches, biases are not an external element but intrinsic to our way of approaching the world? Why ask machines for a neutrality of judgement that we ourselves are unable to provide?
Given these various questions, the fundamental question identified at the end of the project is: which visual communication can be said to be more effective in the case of AI – communication that aspires to be true (i.e. to refer to the thing itself), or communication that is able to engage different audiences in the debate around AI? This question implies another of no small importance: how far should, and can, we go in involving the public in this debate, as well as in other techno-scientific debates? What should be done, for instance, with the various deniers and conspiracy theorists: should they be considered part of this agonistic arena of debate, or outside its limits, since they turn agonism into a form of antagonism (to take up the distinction proposed by Mouffe)? In short, how should we respond to the contemporary knowledge crisis: by taking refuge in a scientistic and positivist attitude, or by reaping the fruits of the social critique of technoscientific practices?
Some outcomes of the research:
§ the images of AI are not only an aesthetic question, but also an ontological and, above all, an ethical-political one;
§ ontologically, they reveal something about the intrinsic non-representability of AI;
§ politically, they exert an anaesthetising effect on the broader debate surrounding AI;
§ the main solution is not simply to create more transparent (and therefore reliable) images, but rather to encourage the production of agonistic images.
Participants
PI
Alberto Romele
Graziano Lingua
Contributors
Rémy Demichelis
Alessandra Scotti
Department involved
Department of Philosophy and Educational Sciences
Type
Seal of Excellence ROMA_PNRR_YOUR_SOE_22_01 – Images of Artificial Intelligence and the Ethics of Science Communication
Partner
Sorbonne Nouvelle University
Funder
University of Turin
Total contribution obtained
150,000 euro
Period of activity
19/12/2022-19/12/2024
Project duration
24 months
Research products:
Books
Romele, A. (2024). Digital Habitus: A Critique of the Imaginaries of AI. Routledge.
[Review by Alexei Grinbaum published in Philosophy & Technology, https://www.pdcnet.org/techne/content/techne_2023_0027_0003_0405_0410]
Demichelis, R. (2024). L’intelligence artificielle, ses biais et les nôtres. Pourquoi la machine réveille nos démons. Faubourg.
Articles
Romele, A. (2024). ‘Éthique de l’intelligence artificielle’ comme signifiant flottant: Considérations théoriques et analyse critique des discours de presse. Interfaces numériques, 13(1). https://doi.org/10.25965/interfaces-numeriques.5229.
Romele, A. (2022). Images of Artificial Intelligence: A Blind Spot in AI Ethics. Philosophy & Technology, 35, 4.
Romele, A., & Severo, M. (2023). Que veulent les images de l’IA ? Une exploration de la communication scientifique visuelle de l’intelligence artificielle. Sociétés & Représentations, 55, 179–201.
Furia, P., & Romele, A. (2024). Rappresentazioni della città: immagini di stock e anestetizzazione del paesaggio. Rivista di estetica, 85.
Romele, A., & Lingua, G. (2024). Latour e la filosofia della tecnologia. Aut aut, 402.
Romele, A., & Severo, M. (submitted). How Effective Are Depictions of AI? Ethical Reflections from an Experimental Study in Science Communication. AI & Society.
Book Chapters
Romele, A. (2024). The AI Imaginary: AI, Ethics, and Communication. In D. J. Gunkel (Ed.), Handbook on the Ethics of Artificial Intelligence (pp. 262–273). Edward Elgar Publishing. https://doi.org/10.4337/9781803926728.00023.
Romele, A. (2024). Emaginary; or Why the Essence of (Digital) Technology Is by No Means Entirely Technological. In G. Wellner, G. Dierckxsens, & M. Arienti (Eds.), The Philosophy of Imagination: Technology, Art and Ethics. Bloomsbury.
Romele, A., & Rodighiero, D. (2023). Fragmentation du visage et datafication du monde: réflexions à partir d’une analyse qualitative et quantitative des images de stock d’intelligence artificielle. In G. T. Giuliana, & M. Leone (Eds.), Sémiotique du visage futur (pp. 103-127). Roma: Aracne.
Conferences
BIAS annual seminar (Banques d’Images et Algorithmisation de la culture viSuelle), Sorbonne Nouvelle University / Paris 1 Panthéon-Sorbonne University (2024) (presentation video: https://vimeo.com/915619894).
International conference Éthique des images de l’IA : des images de stock à l’IA générative, Maison de la recherche, Sorbonne Nouvelle University, 14–15 October 2024.
International conference Forme e crisi dei saperi. A partire da Bruno Latour, University of Turin, 4–5 November 2024.
Dissemination
Romele, A. (2024). Interview for Édunum, the newsletter on educational digital technology of the French Ministry of National Education, https://eduscol.education.fr/document/61099/download (pp. 65–68).
Romele, A. (2024). Interview for the newspaper La Croix, https://www.la-croix.com/france/ethique-du-numerique-pourquoi-cette-expression-ne-fait-pas-l-unanimite-20240528.
Romele, A. (2022). Interview for La revue des médias (the magazine of the INA, the French audiovisual archives), https://larevuedesmedias.ina.fr/photos-images-intelligence-artificielle-covid-climat-presse-journaux-sujets-abstraits.