Special Topic Series in Critical AI
Virtuality, Embodiment, and Meaning-Making in AI Technologies
Series Editor Alexa Alice Joubin (GWU)
The ability to autogenerate human-like textual and visual artifacts, so-called generative artificial intelligence (AI), raises a wide variety of questions about meaning-making and the human condition: onto-epistemic, aesthetic, and politico-economic. This is so even as "AI" remains a poor term for the machine technologies under discussion today. As many researchers have documented, LLM-based models cannot distinguish between truthful and non-truthful text (e.g., Alvarado 2024; Stone, Goodlad, and Sammons 2024; Hicks, Humphries, and Slater 2024; Fredrikzon 2025). This special issue cluster calls on contributors to reflect specifically on the conditions for (and of) meaning-making, inside and outside the mediation of generative AI technologies, as a social activity at the intersection of embodiment and virtuality. A guiding research question for the special cluster asks what it means to interact with automated chatbots, as distinct from interacting with characters in fictional media such as novels or plays (topics adumbrated by, for example, Hanlon 2024; Judelson and Dryden 2024; Wallace and Peeler 2024).
The cluster defines embodiment as (1) tangible forms of abstract ideas and (2) the lived
materiality in which physical bodies and their interactions with the world shape personal
experiences and identities. Virtuality, in this context, refers to the interplay between material
objects and meaning-making: an interplay that, under certain socio-technical and material conditions, gives both fiction and the outputs of generative AI their world-making capacities. For
example, the contextual use of a costume piece made of vinyl fabric, craft foam, and metallic
paint in a film determines its dramatic property as heavy iron armor in the fabula of the fiction.
Both fiction and generative AI content offer material that structures certain worlds as plausible
and “real,” while framing others as improbable or impossible. Virtuality thus gives figurative and
performative meanings to artificial beings such as fictional characters and generative AI outputs.
How does an audience’s experience of watching actors perform a role compare with that of people interacting with AI chatbots?
In Shakespeare’s A Midsummer Night’s Dream, Bottom wakes up in the forest after spending
time as an ass with fairy queen Titania. He struggles to determine if it is a dream or an embodied
experience, describing it as a “most rare vision … past the wit of man.” Adding another metatheatrical layer, he decides to ask Peter Quince to write a ballad called “Bottom’s Dream”
that is so profound that it has no “bottom.” In the final scene of The Winter’s Tale, Leontes is “so
far transported that / he’ll think anon [the statue of Hermione] lives.” In different ways, fiction such as Roald Dahl’s “The Great Automatic Grammatizator” (1953) and Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968), films like The Matrix (1999), Her (2013), and Ex Machina (2014), and television series such as Altered Carbon (2018–20) offer manifold opportunities
to theorize and explore the onto-epistemic, oneiric, socio-technical, and politico-economic
conditions for (and of) meaning-making as mediated by generative AI technologies (both
fictional and existing).
New ways of thinking about being in the world can prioritize new kinds of interdisciplinary
research questions. For example:
- What is the relationship between word, embodiment, and the world? How can new modes of AI-infused sociality, including the advent of personified chatbots, be explored from both disciplinary and interdisciplinary perspectives?
- Could semantic meanings be derived solely from text, without grounding in the physical world or embodied mediation with the world? What kind of “language” would that entail?
- Are dreams (as we understand them today) onto-epistemically real, or are they virtual and, if so, what meaning of virtuality do they entail?
- How might generative AI models “dream” according to these other criteria?
- How do ineffable, embodied experiences differ from or overlap with the datasets that LLMs and other AI systems leverage?
- How might the line of criticism of AI as lacking embodied cognition be strengthened or reappraised?
- What does it entail to regard a historical portrait or a generative AI’s output as “lifelike”?
- What are the implications of thinking about virtuality and AI through oneiric theories, through the long history of animal symbolism and human interactions with inanimate entities (as in the myth of Pygmalion), through diverse Indigenous perspectives, or through Shinto animism, the belief that spirits reside in all beings and entities, from objects to mountains?
This special cluster welcomes papers from any discipline, with particular interest in submissions from the digital humanities, religious studies, art history, ecocriticism, philosophy, philology, neuroscience, performance/film/media studies, authorship studies, queer theory, and computational media and literary studies. Building on philosophical or socio-technical understandings of virtuality and embodiment, the series welcomes essays that adopt new approaches to:
- embodiment in studies of portraiture, or of coma and paraplegic patients, in relation to AI
- depictions of dreams and neurodiversity as they relate to AI’s modeling of artificial worlds
- relational ethics that emphasize the interdependencies between human and non-human agents
- how new understandings of virtuality and embodiment impact pedagogies and higher education
- the tendency to anthropomorphize AI
- other corollaries using ecocritical, transhumanist, posthumanist, and/or disability theories
Logistics and parameters for submission:
In line with Critical AI’s editorial practice, this series welcomes essays (up to 8,000 words including footnotes) and shorter think pieces (usually 4,000 words or fewer) on a rolling basis. We value interdisciplinarity (as well as work from outside the academy), so long as the work is legible to readers in any discipline. Please see our submission guidelines, including our strong preference for articles written in a humanities style that do not rehearse arguments about AI technologies already familiar to the journal’s readers.
We invite 250-word proposals for a range of essays, from short think pieces (1,500 to 3,000 words) to full-length essays (5,000 to 8,000 words). Please send abstracts to Alexa Alice Joubin at ajoubin@gwu.edu.
This is an ongoing series, with no specific deadline at this time. Our editorial team will review complete essays at its discretion.