30 June 2025
The afterlife of art: Can AI bring back the dead?

Image: © The Next Rembrandt / ING / Microsoft / TU Delft
A few weeks ago, as I was wandering through the quiet, softly lit halls of the Villa Vauban, I rediscovered Jean-Pierre Beckius. Though he is far from unknown, his work is broader, more delicate, and more radiant than often assumed. It struck me then: an artist dies twice — once biologically, and once more when their work ceases to be seen, questioned, or remembered.
For the past several months, I have been studying the ethics of artificial intelligence. And that is where two thoughts collided: can we — and should we — revive a vanished artistic voice using AI? What if a machine were asked to continue a body of work a dead artist had left unfinished?
Today, just a few lines of code are enough to generate images ‘in the style of’ any deceased artist. As an experiment — not one we will pursue here — a tool like Midjourney could easily produce a ‘new’ Beckius painting synthesised from digitised works, critical reviews, and biographical data.
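To give a sense of how short those few lines really are, here is a minimal sketch using the open-source diffusers library and a publicly released Stable Diffusion model rather than Midjourney (which is driven by prompts, not code); the model name and the prompt are illustrative assumptions, not a recipe for the Beckius experiment described above.

    # Sketch only: text-to-image generation "in the style of" a painter,
    # using an open model. Requires a GPU plus the diffusers and torch packages.
    import torch
    from diffusers import StableDiffusionPipeline

    # Illustrative model choice; any comparable open text-to-image model would do.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = ("a quiet river landscape, oil on canvas, "
              "in the style of an early twentieth-century Luxembourgish painter")
    image = pipe(prompt).images[0]  # a single generated PIL image
    image.save("pastiche.png")

A handful of lines is enough to produce a pastiche, which is precisely why the questions that follow matter.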
Should we open the door to this possibility? Refuse it on principle? Or define clear boundaries?
What AI can already do: real-world cases
While the public became widely aware of artificial intelligence’s potential in 2022, with the release of ChatGPT by OpenAI, these technologies have been used in the arts for several years now. ChatGPT belongs to a family of models known as LLMs (Large Language Models), capable of generating text, engaging in natural language dialogue, and in some cases, producing images or music. These tools rely on massive datasets and neural networks trained to predict, complete, or imitate coherent sequences.
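To make "trained to predict, complete, or imitate coherent sequences" concrete, here is a minimal sketch in Python using the open-source Hugging Face transformers library and a small public model (GPT-2); both are illustrative stand-ins for the far larger systems mentioned above, chosen only because they run in a few lines.

    # Sketch: a small language model continues a prompt by repeatedly
    # predicting a plausible next token. Requires the transformers package.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    result = generator(
        "The painter returned to the harbour at dusk,",
        max_new_tokens=30,
        num_return_sequences=1,
    )
    print(result[0]["generated_text"])

The output is simply a statistically plausible continuation of the prompt; scaled up by several orders of magnitude, the same principle underlies the tools discussed in this article.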
Long before this technology became mainstream, several artistic projects had already explored AI’s capacity to imitate, extend, or reinvent the work of past figures. Here are some notable examples:
The Next Rembrandt (2016, Netherlands): an algorithm trained on 346 works by the Dutch master generated a new, original painting — authored by no one.

Beethoven X: The AI Project (2020, Germany): researchers attempted to ‘complete’ Beethoven’s unfinished 10th symphony using an AI fed with his compositions and sketches.

Dalí Lives (2019, USA): a deepfake of Salvador Dalí interacts with visitors at the Salvador Dalí Museum in Florida — complete with voice, expressions, and gestures. The illusion is striking.

These projects make one thing clear: technically, we are already there. What matters now is how we choose to put this power to use.
Ethics at the heart of the debate
Creating a posthumous work using generative AI raises a fundamental question: Who is the author? The original artist, whose style is extrapolated? The developer of the model? The person who wrote the prompt?
For Finola Finn, a postdoctoral researcher at the Luxembourg Centre for Contemporary and Digital History (C²DH), these are not hypothetical questions. A specialist in the epistemological implications of AI in historical and creative practice, she co-developed, together with Donal Khosrowi and Elinor Bell-Clark, the Collective-Centered Creation framework, a model for credit attribution in AI-generated works.
She stresses that one of the central challenges lies in the notion of creative intention — a difficult concept to apply to a system without consciousness or autonomy:
‘It is hard to think of a system without consciousness or autonomy as having “creative intention,” since many believe intention requires mental states or free will. But even without a mind, we believe AI systems can exert significant control over the form of the works they generate — for example, by giving their outputs a certain style or arrangement that is not explicitly prompted. While this differs from intention, this kind of control is still crucial to consider when trying to understand how an output came about, and who played a role in shaping it.’
Another sensitive issue is the role of the human user. At what point can we say they are the author of an AI-generated work?
‘It really depends,’ says Finn. ‘People interact with generative AI in very different ways. Some use quick, generic prompts. Others spend hours refining them and iterating through hundreds of versions. If someone simply types in “beautiful landscape” and is happy with a variety of loosely related images, it is doubtful they are an author in any meaningful sense. But if someone shows a high degree of control, originality, and independence in their prompting — and their input clearly shapes the final output — then they have a strong claim to authorship. Still, they might not be the only author. Other agents, like the artists whose works were scraped to train the AI, may have also left a recognisable mark that warrants co-authorship.’
As Finn notes, the very concept of authorship is under pressure:
‘We should not water it down too much. So many of our practices — how we assign praise, blame, or responsibility — depend on who we see as the author. These systems break down if authorship becomes too vague. We must remember that it is up to us, as a society, to define what we mean by “author,” and we can push back against attempts to redefine it too radically.’
Memory, mediation, and misuse
At the C²DH, issues of memory, technology, and transmission are at the heart of several research initiatives. Frédéric Clavert, Assistant Professor of Contemporary European History, is part of a working group studying how individuals use technologies — especially generative AI — to engage with the past.
He points to the Dimensions in Testimony project by the USC Shoah Foundation in Los Angeles (https://sfi.usc.edu/dit), which allows visitors to interact with pre-recorded testimonies of Holocaust survivors. For Clavert, this represents a respectful form of mediation: each response was filmed in advance by the individual concerned. However, he warns: if we cross into automated generation, creating new answers using AI, we risk flattening or even betraying deeply individual experiences.
In a world increasingly marked by polarisation and misinformation, staying faithful to the voices of the past is essential. Imagining a similar setup to preserve the testimonies of young Luxembourgers forcibly conscripted during World War II — the Malgré-nous — could serve as a valuable bulwark against forgetting, provided transparency and integrity are ensured.
Clavert also raises the issue of anthropomorphism: our tendency to attribute human qualities to machines. While this may ease social acceptance and even bring benefits, in healthcare for example, it blurs important lines in the realms of art and memory. We may believe we are speaking to a sentient being when, in fact, we are engaging with an illusion.
Between tribute and transgression
Two recent cases illustrate these tensions.
The first concerns Alain Dorval, the iconic French voice of Sylvester Stallone, who passed away in December 2023. For the upcoming film Armor (set for 2025), AI was used to recreate his voice. While consent for a trial version was reportedly given, it came from his daughter, Aurore Bergé, currently France’s Minister for Gender Equality. The situation caused an uproar among voice actors. Under the banner ‘Touche pas à ma VF,’ several artists protested in May 2025, denouncing what they saw as a degrading and threatening precedent for their profession. Dorval, they claim, would never have approved. Around 15,000 jobs could be at risk. The minister later clarified that no final version had been validated — but the damage was done. In this case, technology seemed to move faster than ethics or regulation.
Listen to the full France Culture report here:
https://www.radiofrance.fr/franceculture/podcasts/un-monde-connecte/la-voix-francaise-de-sylvester-stallone-ressuscitee-par-l-ia-sous-fond-de-polemique-1137320
Another revealing example: Jianwei Xun, a fictional Hong Kong philosopher, praised in intellectual circles for a profound and original book. In reality, Xun didn’t exist. He was the creation of an Italian philosopher, invented through extensive dialogue with generative AI tools. The experiment was revealed in April 2025 by the journal Le Grand Continent, which had first published a fake interview and then a critical reflection on the entire project.
Read the full analysis and context here:
https://legrandcontinent.eu/fr/2025/04/04/qui-est-vraiment-jianwei-xun-une-conversation-avec-jianwei-xun/
What’s next?
Imagining a ‘Beckius 2025’ could be a fruitful endeavour — if the process is transparent, and if experts and rights holders are involved. Rejecting AI outright would be a mistake. But embracing it blindly would be another.
Can AI pay tribute without betrayal? Enrich memory without overwhelming it?
It is up to us — the living — to decide whether we simply want to generate images, sounds, and texts… or truly engage with the forgotten voices of our artistic past.