Deep Fictionality as a Mechanism of Memetic Manipulation and Artistic Subversion

From Dreams to Hallucinations, from Research to Corporations

Since the advent of the new AI Spring, the cultural sphere has undergone radical transformations, prompting a reevaluation of the potential disruptions neural networks (a term we prefer over the more widely used, ambiguously defined, and frequently misapplied “artificial intelligence”) can cause in the artistic domain, cultural industries, legal frameworks, and systemic theories concerning authorship, creativity, and the perception of reality in projected imagery. With the rise of AI in cultural production—ranging from Google DeepDream’s surreal, multicolored, dog-filled imagery to photorealistic synthetic media indistinguishable from documentary photography—the boundary between reality and fabrication has become increasingly blurred. Today, with access to a simple visual or audio sample, generative networks can convincingly replicate a person’s image, voice, or behavior, enabling not only entertainment or creative experimentation but also deception. This large-scale deployment of AI-generated content has significantly amplified the dynamics of the post-truth era. In this article, we examine several case studies that illustrate how AI-generated memes have been used not only to amuse or mock online communities of “netizens” but also to manipulate public opinion and influence discourse in digital spaces. In contrast to these instances of manipulation, we also present examples of constructive artistic practices: the approach of an international feminist collective, the work of three Slovak female artists engaged with AI, a writer exploring autofiction through creative applications of neural networks, a generated film, and a postdigital artwork expressing algorithmically generated futurological predictions. These projects foster critical reflection and offer a counter-narrative to the misuse of generative technologies.
By providing examples of artistic practices with generative neural networks, mostly from the Central European visual arts scene, we aim to emphasize the importance of a critical artistic practice that challenges the depoliticized aesthetics of computational image-making, revealing the “deep fictionality” that the underlying generative systems enable.

Fig. 1. Balenciaga Pope, www.reddit.com/r/midjourney/comments/120vhdc/the_pope_drip/

The rise of AI-driven memetic culture has permeated visual media, turning viral memes into topics of everyday conversation. A notable example is the viral AI-generated image popularly referred to as the Balenciaga Pope—a depiction of Pope Francis wearing a white winter puffer jacket and a silver, bejeweled crucifix (fig. 1). Although visual inconsistencies (such as his distorted right hand holding an amorphous bottle) clearly indicate the image was artificially generated, many users initially mistook it for a real photograph. The image was created by Reddit user u/trippy_art_special via MidJourney and originally posted on March 25, 2023, under the title “Pope Drip”;[1] the post has since been deleted from that account. It quickly went viral, reaching television broadcasts,[2] journalistic outlets, and social media platforms, sparking widespread discussion on the challenges of distinguishing real from fake in the age of synthetic media.[3]

Just two days later, Pope Francis addressed the issue in his remarks at the Minerva Dialogues, an annual high-level conference organized by the Vatican’s Dicastery for Education and Culture, bringing together scientists and experts. He emphasized the ethical imperative of AI development, stating: “I would therefore encourage you, in your deliberations, to make the intrinsic dignity of every man and woman the key criterion in evaluating emerging technologies; these will prove ethically sound to the extent that they help respect that dignity and increase its expression at every level of human life.”[4]

While this case exposed how easily fabricated content can now be produced and circulated, many women and girls had long voiced concerns about the harms of AI-generated deepfake pornography—well before such widely publicized incidents. Luba Kassova addresses this issue in her article “Tech bros need to realise deepfake porn ruins lives—and the law has to catch up,” arguing that “Firms feeding off this abuse should pay for the harm they cause.”[5] To meaningfully address the issue at scale, she advocates for effective systemic interventions: “We urgently need support services for survivors and effective response systems to block and remove nonconsensual sexual deepfakes.”[6]

Nonconsensual deepfake pornography—of which an estimated ninety-nine percent targets women—lays bare the stark gendered inequalities amplified by generative AI. What is frequently referred to by technologists as “bias” in AI systems is in fact a reflection of deeper, pre-existing social inequalities. These stem from biased data collection processes, unequal representation among developers and data scientists, and the concentration of power within male-dominated tech corporations. As Mike Zajko points out in his paper on the role of sociology in addressing social inequality in generative systems: “Algorithmic harms, racist and sexist robots, ubiquitous surveillance and behavioral manipulation have now all become well-established as public issues.”[7]

Although issues of digital inequality—rooted in patriarchy and sexism—have long been present in public discourse, substantive legal interventions only began emerging recently. At the regulatory level, the European Union introduced the AI Act, on which political agreement was reached in December 2023 and which was formally adopted in 2024. This legislation constitutes a regulatory framework for artificial intelligence that employs a risk-based approach, systematically classifying AI systems according to varying levels of risk and delineating corresponding obligations for both developers and users. Article 50(4) specifically addresses deepfakes, stating: “Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated.”[8] While this measure mandates transparency for synthetic media, it does not explicitly confront the issue of nonconsensual sexual deepfakes or offer protections for those targeted by them.

Fig. 2. Dominika Čupková, AIXcuseMe: I Am Sorry for Being a Long Girl, I Am, dominikacupkova.com/post/635911867244625920/aixcuseme-i-am-sorry-for-being-a-long-girl-i-am

Beyond the legal sphere, community-driven efforts offer a powerful alternative approach to addressing digital inequality and promoting fairness in AI. One such initiative is AIxDesign, a female-led collective that champions diversity and critical engagement with technology. As stated on their website: “We host events, run community-led research projects, and make public resources to democratize AI literacy and critical discourse, practice critical & creative thinking, and research and prototype new ways of being in relation to AI and technology.”[9] This non-profit group exemplifies an inter- and transdisciplinary practice aligned with the principles of diversity and centered on inclusivity and equity in AI development. It promotes a diversity-based approach in education and creative thinking, fostering a debate about the positive uses of AI through participatory projects, workshops, community building, research, open collaboration, and the sharing of information and tools. One of its team members, Slovak artist Dominika Čupková, uses AI in her solo project to interrogate gendered communication norms. Her visual artistic project AIxcuse.me[10] (fig. 2) critiques the phenomenon of excessive female apologizing. It presents AI-generated continuations of the phrase “I am sorry for being,” derived from a Twitter thread by @NihachuEatsCats,[11] where predictive keyboards were used to complete the sentence. Absurd outputs such as “I am sorry for being a long girl” humorously highlight the irrationality of constant self-apology. Displayed as colorful blocks with randomly generated fonts and styles, the work’s visual playfulness underscores a deeper critical message.

Under the Glass and From the Mirror

One of the more chilling examples of AI’s uncanny visual impact emerged in a viral series of images depicting seemingly ordinary people sitting in their living rooms. On closer inspection, however, the coffee table at the center of the scene was not made of wood or glass—but appeared to contain a human corpse encased in resin. In some versions, the individuals’ faces were also distorted, adding to the unsettling effect. These AI-generated images, nicknamed Grandma Coffee Table, elicited a range of online reactions[12]—from genuine confusion to dark humor. Comments like “Where in the world is this legal?” or “Was she deceased before being encased?” demonstrate how convincingly AI can fabricate darkly grotesque scenarios. Others took a more sardonic tone: “Speak up, Grandma, I can’t hear ya,” or critically pointed out the absurdity of such creations: “Stop telling the AIs our fucked-in-the-head fantasies and ideas!”

While millions were intrigued by the viral debate surrounding the nature and authenticity of these images in September 2023, the case also raised aesthetic questions. It revealed how AI tools are reshaping photographic genres, particularly family photography, by inserting an uncanny twist. German photographer Boris Eldagsen refers to his neural-network-assisted creative practice as “promptography” rather than photography, emphasizing the centrality of textual prompts in the generative process. The term itself was coined by Peruvian artist Christian Vinces in an online discussion with Eldagsen,[13] whose own work often mimics the aesthetic of mid-twentieth-century black-and-white photography.

Fig. 3. Gabriela Zigová–Zuzana Sabová, Mum’s Altar, 2023. Photo authors’ archive.

An interesting turn toward family photography digested by generative neural networks was undertaken by Slovak artists Gabriela Zigová and Zuzana Sabová in their joint exhibition Toto som už niekde videla (I’ve Seen This Somewhere Before), held in September 2023 at For maat Gallery in Trenčín, Slovakia (curated by Lucia Gavulová). The project used MidJourney to generate visuals based on a combined personal archive of childhood and adolescent photographs from both artists. The resulting images—depicting girls who resemble both creators—evoke shared memories while clearly exhibiting synthetic features, such as a disembodied arm or missing fingers. The installation, titled Mum’s Altar (fig. 3), prompts viewers to question authorship, memory, and identity in the age of AI. Whose mother is addressed in this altar of synthetic memory? By blurring real and imagined youth through generative tools, the work invites us to reconsider how collective memory and AI intersect—just as large language models rely on a vast dataset of humanity’s shared textual consciousness. Besides this, it also questions the phenomenology of our gaze at images of girls who are not real but are closely based on real youthful events of two women.

From Provocation to Dissemination

The meme economy that has emerged alongside the proliferation of AI tools thrives on a unique fusion of humor, satire, provocation, absurdity, and aesthetic cues drawn from internet culture. As one commentator has noted, “Such loose creation and dissemination of memes should also encourage Congress and other lawmakers to consider some real investments in AI literacy for everyday people to understand the consequences of what they share online.”[14] This economy has flourished especially in politically charged environments, where memes operate as both cultural commentary and instruments of persuasion. During the U.S. presidential elections in November 2024, both campaigns disseminated memes mocking their opponents. Besides this, countless memes were derived directly from candidates’ public statements—such as Donald Trump’s notorious remark, “They’re eating the dogs. They’re eating the cats. They’re eating the pets.”

Importantly, the use of memes has not diminished with the conclusion of campaigns. In fact, meme-based political communication has persisted—particularly in the case of Donald Trump. One especially notable instance was the circulation of a deepfake portrait depicting Trump in papal robes, which he posted on his Truth Social profile. This image, shared shortly after the death of Pope Francis, was reshared by the official White House account. Accompanied by Trump’s public statement—“I would like to be the Pope”—the post was supported by affirming comments from several Republican senators.

While the earlier papal-themed meme referenced in this article—the so-called Balenciaga Pope—was primarily ironic and visually exaggerated, Trump’s deepfake portrait demonstrates the inverse.[15] Here, the underlying message of unthinkable megalomania is not parody but self-representation, and it is the public reaction (a flood of parody memes) that reintroduces irony and satire into the discourse. This example highlights a key point: the meaning of synthetic imagery often cannot be reliably determined without understanding its context, authorship, and reception.

In the Slovak context, contemporary meme culture has similarly engaged in political critique, most notably targeting Martina Šimkovičová, Slovakia’s current Minister of Culture, who is frequently referred to as the “Minister of Nonculture.” A former television news presenter and candidate chosen by the Slovak National Party, Šimkovičová has gained notoriety for her nationalist, anti-LGBTIQ, and anti-diversity stances. Her tenure has been marked by sweeping and controversial personnel changes in key national institutions such as the Slovak National Gallery, Slovak National Museum, and Slovak National Theatre, along with significant funding cuts to independent cultural initiatives and other steps devastating to Slovak culture.

Fig. 4. Simkovicova Meme, https://www.tiktok.com/discover/simkovicova-meme?lang=en

Rather than advocating for the protection and development of Slovakia’s diverse cultural landscape, Šimkovičová has used her position to promote an exclusionary and regressive cultural policy. Her antagonism toward critical journalists, non-collaborative artists, and dissenters—both online and offline—has further fueled public criticism. Notable examples include her use of absurd criminal complaints against visual artist Ilona Németh, a student protest organizer, and writer Michal Hvorecký. All these actions have made her a frequent target of ridicule in Slovak meme culture. A TikTok account titled “Šimkovičová Meme”[16] is dedicated entirely to mocking her public appearances and visual presence, serving as a digital archive of satirical resistance. (fig. 4)

The Technical Image of the Present and Deep Fictionality

Within the framework of his media theory, Vilém Flusser would undoubtedly classify images produced by generative AI—whether in the form of pop-cultural memes or digital artworks—as “technical images.” Unlike traditional images, “technical images,” according to Flusser, do not possess the same “magical” capacity to mediate the world to humans. Instead, they are derived from texts—specifically scientific or technical texts—which serve as the foundation for the apparatuses that enable their production.

When Flusser first advanced these claims in relation to photography as a paradigmatic example of the technical image, his thought was marked by a fascinating eccentricity, at times bordering on the hermetic (especially in his use of the concept of magic), and often intellectually demanding in both its reception and interpretation. However, when these same concepts are applied to contemporary AI-generated imagery, Flusser’s framework appears strikingly prescient—almost self-evident. Consider the following passage from Towards a Philosophy of Photography:[17] “Human beings forget they created the images (…) they are no longer able to decode them (…): Imagination has turned into hallucination.”[18] A few pages later, Flusser elaborates that technical images “are metacodes of texts which, as is yet to be shown, signify texts, not the world out there. The imagination that produces them involves the ability to transcode concepts from texts into images; when we observe them, we see concepts—encoded in a new way—of the world out there.”[19]

These reflections apply with striking accuracy to the creation of images via generative neural networks—though perhaps in a reversed or inverted form. Flusser posits that texts originate from images and serve to break down and interpret them. However, in the case of generative neural networks, the process begins with a textual prompt that initiates the production of a synthetic image. This prompt is not only a metacode of the eventual image (and, more abstractly, of the mental image that led the creator to formulate the prompt), but also functions as a protocode—a generative seed from which the image emerges.

The resulting synthetic image seeks to explain, interpret, and concretize its textual “parent” through visual code. In this sense, AI-generated images can be viewed as technical images turned inside out, further distancing them from any empirical reality. Their fictionality is not merely surface-level; it is structurally embedded and intensified, evolving into what we propose to call deep fictionality.

Indeed, even the most current theories of (synthetic) images do not associate image generation outputs with the concept of reference. In Artificial Aesthetics,[20] Lev Manovich links reference exclusively to manually created images. He does so in a chapter that traces the historical evolution of image-making techniques and their relation to reality. Among other points on this timeline, he associates photography with the principle of recording, and computer-generated 3D graphics with simulation. In contrast, for synthetic images, the key term is neither representation nor simulation, but prediction—that is, the generation of images based on the analysis of patterns in large datasets, which are then used to anticipate the style and form of a new image. In doing so, this predictive process reproduces the inherent flaws and biases embedded in the societies from which the data is derived, thereby perpetuating a self-reinforcing cycle of bias.

Yet it is important to recall a basic fact: new images are generated based on old ones, with the goal of maximum resemblance—ideally to the point of being indistinguishable. The fictionality of synthetic images lies in recombination—not in the creation of original fictions, but in collage-like assemblages of older elements that themselves retain ties to reality; and consequently, they may also carry forward the inherent biases present in the data upon which they were trained. From these elements grows, metaphorically speaking, a network of threads that connects the synthetic image to the world—albeit in a highly mediated and distanced way—making such images ideal tools for producing convincing deep fakes. Thus, to completely sever generated images from the issue of representing reality is hardly feasible. Generated images simulate the representation of reality; they do not present themselves as alternative realities (though they certainly can when prompted to do so). In the case of deep fakes, on the contrary, they aim to appear as the most faithful and persuasive representation of reality imaginable.

Whenever the fictionality of a work of art becomes problematic—manifesting in conspicuous glitches or inconsistencies—we tend to speak of metafiction: fiction that is ready for (self)reflection. The cases under discussion here, however, are of a different nature. Artistic communication is inherently bound to fictionality, but when generative AI enters the process as an active agent, fictionality is multiplied or deepened. It becomes not only a property of the final product, but also embedded within the creative process itself. (Indeed, the very term “artificial intelligence” functions both as a metaphor and a fiction.)

Creation thus proceeds as a dialogue between a human and a machine that simulates mastery of human language and cognition—a creative process as a dialogue with a fictional mind. Yet we cannot categorize this situation as metafictional, because it lacks the typical glitch or rupture that would invite (self)reflection. Everything unfolds smoothly on the surface of fiction (it can only be convicted of its imperfection by contextual examination, not by the morphology of the image itself). We are not dealing with metafiction, but rather with multi-fictionality, or fiction squared—better yet, deep fictionality. Deep fictionality, however, lacks inherent (self)reflexivity, and thus cannot, on its own, address the problem of deep fakes or the broader confusion of visual languages.

Neuroscience continues to provide evidence that perceiving fictional situations—whether visual or literary—helps prepare us to navigate analogous situations in real life.[21] The concept of embodied simulation, supported by the discovery of mirror neurons, suggests that when we observe an action, emotion, or sensation in others, similar neural pathways are activated in our brains as if we were experiencing the event ourselves. This mechanism is considered a fundamental way of understanding others’ mental states. In the context of art, this means that when we perceive an artwork, we may feel empathy toward its creators (this idea may be somewhat utopian from a humanities and social science perspective, but let’s leave that aside for now). The brain uses fictional worlds and narratives as a kind of “simulator” for social situations and emotional responses.[22]

Let us set aside, for now, the question of what the artist experiences when using generative neural networks to create an artwork—and whether that experience can even be meaningfully reconstructed. Instead, in the face of increasingly sophisticated deep fakes, we should consider reversing the foundational position of neuroscience and ask: If fiction prepares us for life—who prepares us for fiction?

Can life itself do that? That is doubtful—judging by the trusting reactions of thousands of people beneath various deep fake images on social media.

While this is not meant to assign explicit tasks to art or artists, it nonetheless constitutes a major challenge for art: emerging from fictionality yet possessing the capacity to reflect upon it, to expose the ways fictional images of reality seek to persuade—and to deceive.

We believe that now, more than ever, subversive artistic actions and strategies are needed—ones that foster deeper and more critical engagement with the deep fictionality saturating the media space, and that call for its reflection. A striking example of such critical intervention into synthetic visual culture is a project by photographer Miles Astray, carried out last year.

It began with a photograph titled FLAMINGONE,[23] which captured a flamingo in a bizarre, seemingly headless position as it curled its neck under its belly to scratch itself. Astray took the photo using a traditional Nikon camera and did not digitally manipulate the scene in any way. The image gained a subversive function only when Astray submitted it to the 1839 Awards—in a category reserved for photographs generated with the help of artificial intelligence.

Because of the flamingo’s strange posture, the photograph seamlessly fit into the context of synthetic images, where such anatomical anomalies (e.g., incorrect numbers of fingers) are to be expected. In this way, Astray reversed the actions of those authors who in recent years have submitted synthetic works to competitions intended for human-made art. As Astray himself explained, his intention was to prove that “human-made content has not lost its relevance, that Mother Nature and her human interpreters can still beat the machine, and that creativity and emotion are more than just a string of digits.”[24]

The intervention gained additional significance when the competition results were announced. FLAMINGONE was awarded third place in the AI-generated photography category—judged by a jury comprising experts from prestigious institutions such as Christie’s and the Centre Pompidou. In the public vote (the People’s Vote), Astray’s image even took first place. Thus, people who believed they were voting for a synthetic hallucination had, in fact, chosen a realistic depiction of nature. This outcome confirms findings from recent empirical studies, which show that non-professional audiences tend to prefer non-generated artifacts — at least when they are informed of their origins.[25] Astray’s project suggests that this preference may persist even when the origin is not disclosed.

For our purposes, however, this case primarily exemplifies a photographic fiction posing as synthetic deep fiction—presenting seemingly obvious signs of synthetic generation (e.g., bizarre anatomy) as entirely unreliable indicators. In doing so, it highlights the current state of visual culture as one of confusion of visual languages.

Fig. 5. Grégory Chatonsky, Haven (1971–1973), 2024, https://chatonsky.net/film/

This paradox of contemporary technical images has become the foundation of the concept of “fiction without narrative,” which the French-Canadian artist Grégory Chatonsky advances in his works. He embodied this idea most notably in his film Haven (1971–1973) from 2024, which was entirely generated using AI (script, text, voice, music, image). The core of this concept lies in the deconstruction of the deep fictionality of the generated artifact. In this particular case, it manifests in the way the generated film does not convey a cinematic narrative—even though many of its elements evoke and suggest one—but instead consists solely of fragments that construct the idea of a city (its streets, buildings, interiors, and even sounds that evoke it), a city that clearly has never existed, despite the film’s subtitle promising to recount its history from 1971 to 1973. (fig. 5)

When human figures appear in the film, they are captured in almost motionless poses, perhaps suggesting some form of ritualistic behavior. However, the behavior of these characters does not allow the viewer to form a coherent sense of what is happening in the depicted space, let alone of its historical development. Chatonsky creates a film from which the viewer would naturally expect a narrative—yet none is present, because there is no narrator, no actual subject who could or would have a reason to tell a story.

In this respect, Haven is a highly effective intervention into the domain of deep fiction: the generated image—indeed, every Flusserian technical image—struggles with subjectivity, even as it attempts to simulate it. Chatonsky deliberately and ostentatiously distances himself from this simulation; he leaves fiction as fiction and does not attach synthetic narration to it, as such narration would already imply a non-existent and therefore false human identity. In doing so, he creates a fictive, generated world that appears abandoned—devoid of human stories. This “abandonment of the human” is inherent to all deep fiction content, but it is only in the mode of artistic (self-)reflexive communication that it truly comes to the surface.

In the literary and artistic world, the emergence of deep fictionality has given rise to entire creative strategies, one such strategy being the emergence of a new genre: deep fake autofiction. This development is particularly associated with the work of K. Allado-McDowell, one notable example being the novella Amor Cringe (2022).[26] Autofiction, by its very nature, plays with the boundary between autobiography and fictional narrative. When neural networks enter this creative process as an additional agent involved in generating the text, the complexity of fictionality increases. As Stephanie Catani argues in her aptly titled study Halluzinierte Autorschaft (Hallucinated Authorship),[27] in the case of deep fake autofiction —though the same applies to deep fictionality more broadly—the implicit hermeneutic contract between author and reader is disrupted. This occurs because the reader can no longer be certain who the other party to the contract is: a human, a machine, or some hybrid of the two—and if the latter, of what kind?

Deep fakes, however, strive to maintain the illusion that this contract remains fully intact. It may be precisely our growing familiarity with artistic strategies such as deep fake autofiction that enables us to resist treating our implicit contracts with deep fiction content as eternal and unquestionable—and to be able to terminate them when necessary.

Beyond legislative measures and mutual support within creative communities, another potential response to this “Babylonian” confusion of images lies in subversive, artistically staged scenarios of distrust toward technical images—or even towards anything that presents itself as authentic, including texts.

Distrust, of course, is not a solution; it is more of a conceptual impasse. However, such artistic strategies offer a productive paradox: by staging distrust, they may help accelerate the sensory and cognitive adaptation of humans to new forms of technical images. These images do not strike fear in us in the way early cinema audiences reportedly panicked at the sight of a train apparently rushing towards them from the screen. Yet they are similarly unfamiliar, posing a novel perceptual challenge—one for which we must still develop new skills. Among these is the skill of metareading[28] of images: a form of extended perception that allows us to engage critically with generated images and to navigate the world of deep fictionality.

A useful perspective on this concept in the visual arts is articulated in András Cséfalvay’s postdigital work Fundamental Mythopoetry, which engages with the concept of deep fiction through the presentation of a 3D-printed, nearly extinct pterosaur that delivers predictions about possible human futures. This mechanized, “speaking” dinosaur vocalizes messages generated by neural networks in real time, each session offering a single randomly “dreamed up” utterance to the audience. The installation comprises a large dome equipped with lighting, a monitor displaying text, speakers, and a 3D-printed pterosaur in a rock. The notion of deep fictionality operates on multiple levels within this work: first, in attributing the capacity for human-like speech and prophetic insight to a non-human, prehistoric creature; second, in allowing AI-generated content to stand in as speculative discourse about the future of humanity. The juxtaposition of a dying dinosaur—an emblem of extinction—with algorithmically generated foresight in real time concerning the fate of humankind encourages viewers to interpret the figure as a dystopian oracle or “misfortuneteller,” whose utterances constitute a kind of final address in anticipation of transformative change. The deep fictionality enacted here functions as both a metaphorical and technological warning against civilizational decline. Despite the artificiality of its components—a synthetic voice, a digitally modeled animal, and algorithmic text—the work retains a fundamentally existential dimension, projecting a futurological vision that ultimately reflects back on the human condition. (figs. 6 and 7)

Fig. 6. András Cséfalvay, Fundamental Mythopoetry. Photo Marek Jančúch

The existential dimension of deep fictionality in visual arts becomes particularly resonant when considered alongside an observation by Allison Parrish that “to some extent, any human endeavour based on data will function primarily as a mirror that shows us little more than our own faces.”[29] In both Grégory Chatonsky’s Haven and András Cséfalvay’s postdigital installation, generative technologies do not simply construct fictional worlds, but rather project algorithmic reflections of human imaginaries and anxieties. The “speaking” pterosaur, voicing neural network–generated prophecies, exemplifies this mirroring effect: although ostensibly nonhuman, it articulates fears and speculations that are deeply anthropocentric. Similarly, the absence of a narrative subject in Haven does not negate meaning, but instead reflects the structural impossibility of locating authentic subjectivity within machinic systems trained on human cultural data. In both cases, deep fiction operates not as a departure from the human, but as its algorithmic reconfiguration—generating content that appears otherworldly while remaining bound to the contours of human discourse. By selecting the examples analyzed and interpreted in this article, we also wanted to show that the principle of deep fictionality knows no strict boundaries; it is used but also artistically subverted by the creators of cultural artifacts with equal intensity in Central and Western Europe and overseas. Parrish’s insight thus sharpens the critical stakes of these works: what we encounter in these deep fictions is a recursive echo chamber where the human is simultaneously multiplied, abstracted, and estranged.


Acknowledgement

The paper was written as part of the grant Generating Czech Poetry in an Educational and Multimedia Environment, supported by the Technology Agency of the Czech Republic.

Zuzana Husárová is a Slovak poet, researcher, and scholar specializing in electronic literature, digital media, and intermedia performance. She is an Associate Professor of Digital Media at the Academy of Fine Arts and Design in Bratislava and an editor for the gender-focused magazine Glosolália. She has published four poetry books—liminal, lucent, amoeba, and Hypomnemata—as well as Výsledky vzniku (winner of a national poetry prize), a book generated by the neural network Liza Gennart and co-created with Ľubomír Panák, and a bilingual Slovak-German collection Hyper (translated by Martina Lisa). She has co-edited two collective monographs on electronic literature research. Her theoretical book The Culture of Neural Networks, co-authored with Karel Piorecký, received the N. Katherine Hayles Award for Criticism in Electronic Literature.

Karel Piorecký is a literary scholar and researcher in the field of new media and artificial intelligence. He works in the Department of 20th‑Century and Contemporary Literature at the Institute of Czech Literature of the Czech Academy of Sciences. His expertise lies in the history of modern Czech poetry and the interplay between literature and new media. He has published the monographs Czech Poetry in the Postmodern Situation (2011) and Czech Literature and New Media (2016). In collaboration with Zuzana Husárová, he co‑authored the monograph The Culture of Neural Networks: Synthetic Literature and Art in (Not Only) the Czech and Slovak Context (2024).


[1] “The Pope Drip,” Reddit, March 28, 2023, www.reddit.com/r/midjourney/comments/120vhdc/the_pope_drip/

[2] Simon Ellery, “Fake Photos of Pope Francis in a Puffer Jacket Go Viral, Highlighting the Power and Peril of AI,” CBS News, March 23, 2023, www.cbsnews.com/news/pope-francis-puffer-jacket-fake-photos-deepfake-power-peril-of-ai/

[3] Billy Perrigo, “How to Spot an AI-Generated Image Like the ‘Balenciaga Pope’,” TIME, March 28, 2023, time.com/6266606/how-to-spot-deepfake-pope/

[4] Deborah Castellano Lubov, “Pope Francis Urges Ethical Use of Artificial Intelligence,” Vatican News, March 27, 2023, www.vaticannews.va/en/pope/news/2023-03/pope-francis-minerva-dialogues-technology-artificial-intelligence.html

[5] Luba Kassova, “Tech Bros Need to Realise Deepfake Porn Ruins Lives – and the Law Has to Catch Up,” The Guardian, March 1, 2024, www.theguardian.com/global-development/2024/mar/01/tech-bros-nonconsensual-sexual-deepfakes-videos-porn-law-taylor-swift

[6] Ibid.

[7] Mike Zajko, “Artificial Intelligence, Algorithms, and Social Inequality: Sociological Contributions to Contemporary Debates,” Sociology Compass 16, no. 3 (2022), https://doi.org/10.1111/soc4.12962

[8] EU Artificial Intelligence Act, “Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems,” www.artificialintelligenceact.eu/article/50/

[9] AIxDESIGN, www.aixdesign.co/

[10] Dominika Čupková, “AIXcuseMe: I Am Sorry for Being a Long Girl, I Am,” dominikacupkova.com/post/635911867244625920/aixcuseme-i-am-sorry-for-being-a-long-girl-i-am

[11] Aiahgraphy, “Viral Moment,” X, October 3, 2021, x.com/aiahgraphy/status/1306447628335370241

[12] “Viral Photo of Grandmother Coffee Table Sparks Online Speculation and Humorous Discussions,” The Economic Times, June 27, 2023, economictimes.indiatimes.com/news/new-updates/viral-photo-of-grandmother-coffee-table-sparks-online-speculation-and-humorous-discussions/articleshow/101306420.cms?from=mdr

[13] Manuel Charr, “What Is Promptography and Should Museums Exhibit It?” MuseumNext, May 11, 2025, www.museumnext.com/article/what-is-promptography-and-should-museums-exhibit-it/

[14] Nicol Turner Lee and Isabella Panico Hernández, “AI Memes: Election Disinformation Manifested Through Satire,” Brookings, October 3, 2024, www.brookings.edu/articles/ai-memes-election-disinformation-manifested-through-satire/

[15] Max Matza, “Trump criticised after posting AI image of himself as Pope,” BBC News, May 4, 2025, https://www.bbc.com/news/articles/cdrg8zkz8d0o

[16] For more on this, see https://www.tiktok.com/discover/simkovicova-meme?lang=en

[17] Vilém Flusser, Towards a Philosophy of Photography (London: Reaktion Books, 2000).

[18] Flusser, Towards a Philosophy of Photography, 10.

[19] Flusser, Towards a Philosophy of Photography, 15.

[20] Emanuele Arielli and Lev Manovich, Artificial Aesthetics (2024), https://manovich.net/index.php/projects/artificial-aesthetics

[21] Diana Tamir, Andrew Bricker, David Dodell-Feder, and Jason Mitchell, “Reading Fiction and Reading Minds: The Role of Simulation in the Default Network,” Social Cognitive and Affective Neuroscience 11, no. 2 (February 2016): 215–24, https://doi.org/10.1093/scan/nsv114; Elise N. Good and Katharine Schaab, “The Biological Influence of Stories & The Importance of Reading Fiction,” The Kennesaw Journal of Undergraduate Research 9, no. 1 (2022), https://doi.org/10.62915/2474-4921.1234

[22] Sana Hashemi Nasl, “Mirror Neurons and Embodied Simulation in Visual Art: Aesthetics and Art Therapy” (MS thesis, Universidade de Lisboa, 2015)

[23] Jacqui Palumbo, “An ‘unreal’ flamingo image won an AI award. The only catch? It’s a real photograph,” CNN, June 14, 2024, https://edition.cnn.com/2024/06/14/style/flamingo-photograph-ai-1839-awards

[24] Ibid.

[25] Federico Magni, Jiyoung Park, and Melody Manchi Chao, “Humans as Creativity Gatekeepers: Are We Biased Against AI Creativity?” Journal of Business and Psychology 39, no. 3 (2024): 643–656.

[26] Kenric Allado-McDowell, Amor Cringe (New York: Deluge Books, 2022).

[27] Stephanie Catani, “Halluzinierte Autorschaft. »Deepfake Autofictions« mit Großen Sprachmodellen” [Hallucinated Authorship: “Deepfake Autofictions” with Large Language Models], in Das Subjekt des Schreibens (edition text + kritik im Richard Boorberg Verlag, 2024), 157–170.

[28] For more on the metareading of text in the context of AI-generated content, see the chapter “Reception Mechanisms for Synthetic Textual Media” from our book The Culture of Neural Networks: Synthetic Literature and Art in (Not Only) the Czech and Slovak Context (https://karolinum.cz/knihy/piorecky-the-culture-of-neural-networks-30719).

[29] Allison Parrish, “The umbra of an imago: Writing under control of machine learning,” Serpentine, August 14, 2020, https://www.serpentinegalleries.org/art-and-ideas/the-umbra-of-an-imago-writing-under-control-of-machine-learning

© 2026 tranzit.hu - Contemporary Art Organization

Main partner of tranzit is Erste Foundation