
eXtended Reality (XR) encompasses a diverse set of technologies, including Augmented Reality (AR), Mixed Reality (MR), and Virtual Reality (VR), which blend or replace the physical world with virtual content to create immersive experiences.

While XR is often associated with Hollywood VFX and gaming, its applications extend far beyond entertainment. Advertising, Architecture, Creative Technologies, Graphic/Visual Design, and Film/Media/Animation Studies are just a few disciplines that benefit from the transformative power of XR. The broad scope of XR's potential emphasises the urgent need for interdisciplinary collaboration to shape the critical tools, concepts, and networks that will drive its widespread adoption.

This conference interrogates an emerging field of scholarly and professional interest, bringing together practitioners and academics to delve into what the influx of XR technology means for the future of the creative industries.

The conference is hosted by the University of Portsmouth, home to the Centre for Creative and Immersive Extended Reality (CCIXR). We will provide a tour of our next-generation facilities, including a professional-standard motion capture studio, a 3D photogrammetry suite, haptic immersive technologies, and a virtual production sound stage.

Join us and embrace the limitless possibilities that XR technology offers.

Event programme

Get checked in and refreshed.

Location: Eldon Foyer

Location: Eldon EW1.10

Panel A: The Stakes of XR

Renegotiating the ‘Virtual Commons’: Towards an Ethical, Equitable, and Responsible Future
Professor Helen Kennedy (University of Nottingham)

News Coverage of Extended Reality Technologies: Applications, Sentiment, and the Commercial Influence
Dr Emma Graves (Canterbury Christ Church University)

Ecological Entanglements with Augmented Reality Scenography
Lucy Thornett (University of the Arts London)

Panel Discussion

Location: Eldon Building

Panel B: ‘XR: Leaving Bodies Behind?’

A Frame for the development of immersive communication and research products
Alberto Sánchez Acedo (Rey Juan Carlos University, Madrid)

An Investigation of Virtual Production Pipelines in the Development of a Mixed Media Animation Experience
Jordan Buckner and Niki Wakefield (University of Portsmouth)

Using Virtual Reality to Better Respond to Pollution Incidents
Rory Miles (Southern Water)

Panel Discussion

Panel C: XR Production and Design

The ‘Truth of Sound’: Exploring the effects of an immersive location sound recording methodology within realist filmmaking
Steve Whitford (University of Portsmouth)

Rethinking Immersive Audio
Dr Adam Parkinson and Justin Randall (London South Bank University)

Exploring the acoustics of the virtual acoustic model of St Stephen’s Chapel, Westminster
Dr Aglaia Foteinou (University of York)

Panel Discussion

Explore and sample the next-gen technologies available in Eldon Building.

Location: Eldon Foyer

Panel D: ‘XR – A New Medium for Storytelling?’

The City as Escape Room: place, participation, meaning, affect
Dr Roy Hanney (Solent University)

“You’re a lot more connected and emotionally invested”—VR Marketing Focus Groups and Evaluating the XR Experience
Dr Stephanie Janes (King’s College London)

The Neon Pack: hybrid storytelling via the XR continuum
Dr Nick Bax (Human Studio)

Panel Discussion

Panel E: XR Production and Design

VR and Biography: INSIDE Project
Dr Tom Livingstone (University of the West of England)

Re-Imaging the Park Experience for Virtual Reality
Dr Dan Johnston (University of York)

Immersive Empathy: Co-creating Immersive Narratives on Home and Homelessness
Dr Conn Holohan (University of Galway)

Panel Discussion

Location: Eldon Foyer

Panel F: ‘Learning from Film and Media Practices: What’s New in XR?’

The Making of STUDIOTEC VR
Amy Stone (University of Bristol)

Re-engineering Rear Projection: Situating Emergent Uses of Virtual Production Spaces within Film History
Dr Jennifer O’Meara (Trinity College Dublin)

The VR Film: Adapting Forms for a New Mode of Storytelling
Dr Penny Chalk (University of Portsmouth)

Panel Discussion

Panel G: ‘Between Simulation and Fantasy – Where is Reality?’

Walt’s Skeleton Dance: Digital Performance and the Disney MagicStage
Dr Christopher Holliday (King's College London)

Recreating Characters in the Modern Era: Reintroducing Legend and Literature in Fate/Grand Order
André Cowen (De Montfort University)

Can’t See the Suits for the Trees: Issues of Realism, Invisible Labour, and the Perception of Special Effects in Gorillas in the Mist
Ben Pinsent (University of East Anglia)

Panel Discussion

Location: XR Stage

Location: Eldon Foyer

Abstracts and biographies


The practical, creative and profitable opportunities presented by immersive technologies have prompted another ‘gold rush’, as ‘pioneers’ race to define, occupy, and extract value from the newly (re)imagined virtual frontier. The vertiginous speed of innovation, coupled with a strident narrative of exploitation, risks importing the problematic colonising tendencies of the past. In this talk we seek to showcase alternative sites of meaning making that hint at a more responsible, inclusive and ethical future, and ask what resources we need to ensure that the opportunities for employment, for play and for creative endeavour are accessible to all.

Over the course of nine months across 2021 and 2022, a series of deliberate feminist interventions took place. These included all-women VR Hackjams, editorials, interviews, and blog posts. Donna Close and Helen Kennedy used their respective funded projects as mechanisms to stage calculated attempts to ‘interrupt’ the dominant masculinist and technoeuphoric narratives circulating about ‘the metaverse’ during this period:

  • Digital Democracies, Arts Council (Donna Close)
  • Live, Experiential and Digital Diversification, ERDF (Helen Kennedy)

This talk will examine the surprising and uplifting outcomes of these different initiatives in relation to their intention to ‘make space’ for alternate voices and stories.  First, we will examine the protopian imaginaries that were coaxed from our interviewees as we invited them to speculate on a different future to the one we were being invited to inhabit by Meta and the metaverse enthusiasts.  Then we will turn to the moving and confounding outcomes of the all-women VR Hackjams facilitated by techno artist collective Inkibit Immersive, and conceptualise the ethics underpinning their approach. The Hackjams included visual artists, dancers and poets—none of whom had ever used VR or other immersive technologies in their work.  In this talk, we seek to highlight the principled and responsible feminist practices through which they stage curious encounters with the potentialities of creative making within a specially designed inclusive and highly participatory virtual environment. 


Helen W. Kennedy is Professor of Creative and Cultural Industries at the University of Nottingham. Her research interests are feminist games culture and the wider diversification of access to creative practice; the ludification of cultural experience; innovations in experience design; and the cultural evaluation of immersive experiences.

She has led a number of national and international projects seeking to improve women’s access to and experience within spaces of creative production – across screens, VR, and immersive technology more broadly. A key characteristic of these projects is collaboration and co-creation with individuals, grass roots organisations and sector advocacy groups.


The news media play a major role in the formation of public opinion about emerging technologies because they are often the general public’s first and main source of information about such innovations (Scheufele and Lewenstein, 2005; Sun et al., 2020). Indeed, when extended reality (XR) products began to re-emerge into the market between 2012–2017, many news outlets published articles about these developments, which had the potential to shape the public’s view of what these devices can be used for, their quality, and, indeed, any ethical concerns that might surround them. As the perceptions of new technologies are key to their success or failure (Buenaflor and Kim, 2013), the news media can have an impact not only on how these products are viewed but also on their adoption and diffusion (Rogers, 2003). Thus, analysing this news coverage is important as it sheds light on the messages the public are receiving (which could have a subsequent effect on adoption), as well as the key figures that have been able to shape this media discourse.

Based on a large-scale mixed methods study of XR news and marketing, this paper explores the way the technologies have been framed in the news. It pays particular attention to the applications XR is associated with, the overall sentiment of the coverage and the ethical concerns (or lack thereof) that are raised. Linking this to an analysis of news sources and XR marketing materials, it considers the influence of big tech companies in shaping the discourse. Further, it discusses the commercial factors that could lead to XR being framed in such a way, both in relation to journalistic practices and the consumers of XR hardware and software. It ends by considering what the implications of this are for practitioners in the XR realm and for the further study of XR media coverage.


Dr Emma Kaylee Graves is a Lecturer in Media and Communications at Canterbury Christ Church University. Her recently completed PhD research examined the news coverage of extended reality (XR) technologies and its relationship to product marketing. As well as continued research into the discourses surrounding XR, Emma’s wider research interests include media representations, commercialisation, technology, and online communities.


The proposed paper will discuss the spatial relations produced by AR, and its potential for generating ecological perspectives in audiences. I argue that XR technologies differ fundamentally from other (screen-based) media in that they produce space via direct intervention into the body of the participant. I propose that this configuration of body, space and media facilitates a particular embodied relationship with environments that might be harnessed for ecological purposes by revealing to audiences the entangled and processual nature of their relationship with the world.

I will draw on my own practice-research in AR scenography, through a discussion of a series of experiments I have undertaken with a range of technologies (both handheld and head-worn) across a range of sites (found and designed; public and private). Scenography, originally associated with stage and performance design, is now established as an expanded and interdisciplinary practice concerned with the design of material and spatial encounters across a range of contexts. Scenography is deeply concerned with the embodied and psychic relations of audiences and environments, and the practice-research I will outline explores the way that scenography can be employed, in conjunction with AR technologies, to give audiences new understandings of their relationship with the world. I will also draw on audience interviews I have conducted to expand upon the multiple ways in which this understanding manifested in reality.

The key argument of this paper is that AR, through its intervention into the moving body of the participant, produces spaces that cannot be perceived from a distant vantage point or apprehended in full. Rather, the space of AR is produced through an entangled and processual encounter between audience member, technology, and site, one that necessarily allows the audience member only a partial perspective on the event. This encounter resists being understood through a human-centred perspective that foregrounds mastery and possession of the environment. Instead, I argue that the distinct way in which AR scenography produces space gestures towards the possibility for it to generate ecological awareness in audiences, making them differently cognisant of their entanglements with the non-human world.


Lucy Thornett is a Senior Lecturer in Scenographic Practices at University of the Arts London. She makes installations and performances for theatres, galleries, and other spaces. She is currently co-curator (with Kathrine Sandys) of hello stranger: UK performance design 2019-2023 which includes a UK wide festival programme, the UK exhibit at the Prague Quadrennial of Performance Design and Space and a three-part publication series published by Performance Research Books. She is Associate Editor Reviews for Theatre and Performance Design Journal and an Arts and Humanities Research Council funded PhD researcher at the University of Leeds, completing a practice-led PhD in augmented reality scenographies.


The PhD project "Applied systems and models of immersive communication through extended reality technologies" investigates the development of different immersive models for the creation of innovative journalistic and communicative content with extended reality technologies, using systems such as the A-Frame framework. A-Frame is an open-source framework that enables the development of accessible and simple virtual reality experiences: 3D and WebVR scenes are created using HTML tags in combination with A-Frame-specific components and attributes, which add geometry, materials, textures, lighting, sound, and interactive behaviours to the 3D elements in the scene. A-Frame supports multiple platforms and devices, including desktop browsers, mobile devices, and virtual reality headsets such as head-mounted displays, and can be integrated with other frameworks and libraries. In this research, we also integrated the BabiaXR library, which allows data to be visualised with different types of charts.
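As an illustrative sketch (standard A-Frame usage, not code from the project itself), a minimal scene is a single HTML file: a script tag loads the framework, and entity tags whose attributes map to A-Frame components describe the 3D content.

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Load A-Frame from its CDN (version shown is an example release) -->
    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
  </head>
  <body>
    <!-- a-scene sets up the WebVR/WebXR canvas, default camera, and controls -->
    <a-scene>
      <!-- Primitives: each HTML attribute (position, color, etc.) is an A-Frame component -->
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Opening the file in a desktop or mobile browser renders the scene, and the same page can be entered in immersive mode on a VR headset, which is what makes the framework attractive for lightweight journalistic and educational prototypes such as those described here.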

In the context of this work, immersive prototypes have been developed using this technology. The first of these is a data museum that, as mentioned above, combines the BabiaXR library with an A-Frame environment. In this project, four interactive data graphs are displayed, representing the discourse around persistent COVID on social networks. This prototype was developed as a communication activity for the Spanish national project LONG-COVID-EXPCM and sits within the research line on media discourse.

An A-Frame environment was also developed as part of an educational activity with students aged between 15 and 18. In this case, the immersive environment shows several images, some generated by artificial intelligence and others photographs of real people. The task was for students to work out which of the images showed real people and which were generated by artificial intelligence. An immersive environment was chosen for this activity because the goal was to isolate participants from external elements that might influence their decisions.

The results of these initial experiences demonstrate the potential of immersive environments for a range of communication and science-dissemination activities, but also the need to evaluate specific variables of environment design and interactivity in order to improve immersion and effectiveness.


Alberto Sánchez-Acedo is a pre-doctoral researcher in the Ciberimaginario Research Group at the Universidad Rey Juan Carlos. He is currently working on an industrial doctorate funded by the Community of Madrid at the company Prodigioso Volcán S.L., creating and implementing formats developed with extended reality, applied to immersive journalism using technologies such as virtual and augmented reality through open-source platforms. He has a Master’s in Teacher Training, specialising in Audiovisual Communication and Processes, and a degree in Audiovisual Communication from the Rey Juan Carlos University. He has collaborated and worked on national and international research projects, which has allowed him to develop his professional and research skills and acquire a range of competences.

In this line of work, during the development of his PhD, he has created different immersive prototypes developed with A-Frame, applied to research projects and educational actions with students.


Our presentation will unpack and explore the virtual production pipelines we are currently developing as part of the Sprint and Exceptional Research Fund projects. Within these productions, we are exploring a number of themes we would love to discuss and shine a light on.

Virtual Production provides an exciting new medium for immersive storytelling that is currently untapped and unexplored. We aim to experiment with the technical and artistic notions of what can be done in this burgeoning field - creating new worlds and experiences that are unrivalled by traditional storytelling approaches.

The projects we are currently developing not only explore Virtual Production technologies, but also aim to create truly unique and immersive animation experiences. In this, the viewer will be situated in a hyper-real homely environment in virtual reality, which unfolds and deconstructs. This once safe space deteriorates to reveal something stranger and more surreal, as home truths begin to bubble to the surface and parasitic forms take over the architecture.


Niki Wakefield is a Senior Lecturer on the BSc (Hons) Computer Animation and Visual Effects course and course leader for the new BSc (Hons) Virtual Production course. Before coming to the University of Portsmouth, she worked as a visual effects Compositor/2D Supervisor for Cinesite, MillFilm, Moving Picture Company (MPC) and Double Negative (DNeg), working on high-end motion pictures. Please visit her IMDb page to see a list of projects.


At Southern Water, the Bluewave (Innovation) and Learning and Development teams have been working on a new way of using virtual reality (VR) to improve how we respond to pollution incidents. This creates a virtual environment, enabling teams to be trained in conditions that mimic the real-life pressure of a live incident. This means we can refine and improve how we respond in a real-life setting, helping to reduce the number of pollution incidents and their impact on the community and the environment.

As a 24/7 business, we staff our sites continuously to ensure that any issues are identified quickly and corrected as soon as possible. First Responders (those who are the first to attend when an incident occurs) must remember many crucial steps to prevent damage to the environment and to stay safe.

A live incident can be the first time a First Responder faces the magnitude of a potential pollution risk. We needed to enable our operational teams to experience a pollution incident more realistically and learn how to put their classroom training into practice, without having to wait for a live incident where the stakes could be much higher. We first tested how we might run ‘pollution drills’ as guided conversations with experts, but these had limitations. Exploring different ways to bring the experience alive, we began to consider VR.

We have now developed a VR proof of concept, something we can test rapidly and learn from, alongside MakeReal, a company specialising in the use of VR for training with previous experience of working with water companies. The proof of concept allows operational teams to respond to a pollution incident and, crucially, provides a safety net for making mistakes without significant real-world impact. Users can explore a virtual site, identifying what might have gone wrong, taking samples and photos, and getting feedback on their experience.

The talk will present an overview of the development of the proof of concept, highlight plans for its testing and evaluation, and outline our approach to the user experience of VR. We’ll discuss this in the context of rolling it out further as part of pollution training, to help First Responders perform effectively during an incident.


Rory Miles has diverse experience in innovation and applied research across the global health, defence and sustainability sectors.

Starting his career as a research scientist at Public Health England (now the UK Health Security Agency), he worked to develop cutting-edge technology for deployment against emerging diseases in less economically developed countries. In 2017, a growing interest in increasing the impact of science research led him to shift his focus to project management at the Defence Science and Technology Laboratory (Dstl).

Continuing with an interest in the enterprise and innovation space, in April 2020, he started in the Innovation Fellow role at the Centre for Enzyme Innovation, aiming to ensure the exploitation of innovative recycling solutions for our most commonly polluting plastics. He promoted and delivered innovation opportunities by facilitating the transfer of technologies, skills and knowledge to industry and other end-users. He is currently a consultant on the Royal Academy of Engineering-funded “Puppets as Enzyme Engineers of the Imagination” project, working with young people in Portsmouth & Bognor Regis to inspire them to pursue careers in engineering and upskill engineers in public engagement activities.

Since October 2022, he has been the Innovation Programme Manager at Southern Water, anchored in the Bluewave team. He has a growing interest in understanding future opportunities to apply XR as an innovative solution to business challenges.


Musical experiences are often described as or aspire to be immersive. Immersive audio is seen as an innovative frontier of music, sometimes encompassing other cutting-edge technologies such as Virtual Reality (VR) and Dolby Atmos. However, conceptions of immersion remain reductive and simplistic. Through exploring how immersion is conceptualised in other domains, I interrogate the limits of immersive audio, and argue for a model of immersion that critically considers interactivity and participation. This draws on Small’s concept of musicking (1998) and Csikszentmihalyi’s notion of flow (2013). Immersive audio generally means multichannel audio, involving multiple speakers (or rendered through headphones to appear as such). Immersion becomes a technical challenge solved by more or better configured speakers and ever more realistic spatialising algorithms. Historically, discourses dating back to the very earliest days of stereophonic and multichannel audio have often privileged a “sweet spot” for an immobile but attentive listener (Grajeda, 2015). However, I argue that immersion emerges not from being in an idealised listener position but through being an active participant.

Immersive experiences are not limited to sound, and within fields including heritage studies, gaming and theatre, experiences are often sold as being immersive. Scholarly literature in these domains interrogates the nature of this immersion and brings forth valuable perspectives. In her discussion of immersive heritage experiences, Kidd (2018) detaches immersion from technology and notes that “any and all heritage might potentially be understood as immersive.” For Kidd, key characteristics of immersive experiences include being “story-led, audience- and participation-centered, multimodal, multisensory and attuned to its environment.” Discussing immersion in video games, Collins (2013, p. 141) argues that rather than viewing the game as a separate space that players enter and are immersed in - as when one enters a concert hall - immersion emerges from interaction with the game. Van Elferen’s (2016) ALI model for analysing immersion in game music reveals how musical affect, literacy and interaction all play roles. As Bucher (2017) writes, immersion is “less about telling the viewer a story and more about letting the viewer discover the story.”

Through exploring varying ideas of immersion, I problematise this oft-used phrase and propose a model for immersion that considers interaction, affect and participation.


Dr Adam Parkinson is a Senior Lecturer in Music and Sound Design in the School of Arts and Creative Industries, Head of the Sonic Research Group, and a well-known sound artist and music technology researcher. He is interested in the musical possibilities of computers and the new musical practices that computers afford, and is currently working with The Rose Theatre on London’s South Bank to create sixteenth-century immersive soundscapes. He has published on fields including sound art, music, and human-computer interaction, and how computers can be musical instruments. Recent publications include an article for the journal Organised Sound entitled "Digital Musical Instruments as Probes: How computation changes the mode-of-being of musical instruments", co-authored with Koray Tahıroğlu, Thor Magnusson, Iris Garrelfs and Atau Tanaka, and a chapter on "The Problems with Participation" for the Routledge Research Companion to Electronic Music.


This work discusses the design process of an immersive audio-visual tour of the medieval chapel of St Stephen, in Westminster Palace. St Stephen’s chapel was a place of spectacular religious and royal ceremony between the thirteenth and the sixteenth centuries.  As the king’s principal chapel in the Palace, work on St Stephen’s began with Edward I and was completed by Edward III, aiming to compete with the grandeur of the Sainte-Chapelle in Paris built by Louis IX of France.  In 1548, however, St Stephen’s College was dissolved during the Reformation, leaving the former chapel to become the first permanent home of the House of Commons: a function which it retained until the devastating Palace of Westminster fire of 1834.

Taking a multi-disciplinary approach across subject areas including history, history of art, visual arts, musicology and acoustics, evidence has been gathered from relevant literature and a virtual model constructed to inform the acoustic study of the chapel as a space designed for sacred liturgy and music. The acoustic model provides audible examples (known as auralization results) of the designed space and gives us the opportunity to listen to the space as it might have been heard by someone present in the 16th century. Masses and antiphons composed for St Stephen’s by Nicholas Ludford are used in this study as a stimulus to represent the historic soundscape of the chapel.

The final auralization results are integrated into the virtual model built in Unity3D, creating an immersive audio-visual experience of the medieval chapel. With this work we aim to explore the historical, liturgical, and musical significance of medieval and early Tudor St Stephen’s, engage a wider audience, and encourage the preservation of this heritage.


Dr Aglaia Foteinou is a Research Fellow in Heritage with XR Stories at the University of York, working on the acoustic reconstruction of the medieval chapel of St Stephen in Westminster Palace. In 2022, Aglaia worked as a Research Associate on the EU JPI-CH project PHE: The Past Has Ears, creating the acoustic model of the 1820s UK House of Commons and studying the impact of the acoustics on the debates and political decisions of that period. Before that, she was a Senior Lecturer at the University of Wolverhampton from 2014 to 2022, teaching undergraduate and postgraduate courses in Music Technology.

She has collaborated on a variety of national and international projects related to acoustic modelling and impulse response measurements of Heritage sites. She has presented her work at several national and international conferences. She obtained both MSc and PhD degrees in Music Technology from the University of York and her Bachelor’s degree in Music Studies from the National and Kapodistrian University of Athens. She is a member of the Institute of Acoustics (IOA) and the Audio Engineering Society (AES).


Through the lens of ecologies of belonging The City as Escape Room transfers a simple and commonly held understanding of the escape room into a metaphor that reveals a complex layering of place, participation, and affect in meaning making for experience designers and transmedia storytellers. It situates the city as a play space in which community participation, meaning making and co-creation are interwoven as meaningful story experiences. By connecting the practice of urban shamanism with that of urban foraging, the discussion explores experiential storytelling as a form of sympoiesis that brings into being a shared memory, a becoming-with the city for the community that resides within. The role of affective resonance and affective atmospheres in the generation of a shared, co-created experience mirrors the relationship between those who inscribe the city with meaning (shaman) and those who decode this meaning (foragers).

A case study unpacking the symbiotic role between street artists and catchers (photographers) further develops the notion of sympoiesis as an act of co-creation, establishing a framework for conceptualising non-linear, fragmented, and decentred experiential and immersive story design that moves beyond the simplistic dichotomy of author/audience. The case study, the Arts Council-funded sandbox project Cursed City Dark Tide, posits a world in which magic-realist tropes are able to inhabit a city, inscribing it (in this case the city of Portsmouth in the autumn of 2019) with meaning making that layers lived experience with an imagined or transcendent reality. Avoiding the common placemaking tropes associated with public sector marketing and economic (re)generation, city-wide experiential storytelling is instead considered as a form of speculative fabulation that can defamiliarise the familiar and generate affective story experiences. The offering of a case study that contrasts commercial and community-driven transmedia experiences further illuminates the ways in which immersive experience design can take hold of a city as a play space and render it as a meaningful story experience.


Dr Roy Hanney (Associate Professor of Media Practice, Solent University) has published widely on the use of live projects to bridge the divide between higher education and the world of work. More recently, Roy has turned towards creative talent development and community engagement as an important strand of his work. Alongside this, he continues to grow as a creative practice researcher through the development of community-driven, immersive, audio-visual arts projects. He has delivered several Arts Council-funded immersive experiences in his hometown of Portsmouth, including Octopuses & Other Sea Creatures (2022), and is developing Rituals for Earthly Survival for 2023.


While attempting to define the possibilities of extended reality (XR) in the arts and humanities, how can we ascertain the potential cultural, theoretical, and civic impacts of these emerging technologies and how they might reframe personhood, time and narrative? My research explores immersive storytelling with particular regard to nonlinear time and the recreation of memory, involving three main stages of investigation: what is currently unique about immersive experiences; how might they change or evolve in the future and, ultimately, what possibilities do they represent in terms of storytelling and creative expression?

‘The Neon Pack’ is an immersive 360 experience by Human Studio interpreting three scenes from my own ‘Protopian Tale’ short story, formed by VR animation and an original soundtrack. Devised during 2021 and initially released online in early 2022, the artwork has subsequently been exhibited as part of the Immersive Futures Lab at SXSW (Austin, Texas, 2023) and BEYOND (Cardiff, 2022), and selected for Sonar+D (Barcelona, 2023). The original digital components have also been reworked as an augmented reality artwork, displayed on the platform of a prominent building in the centre of Sheffield (‘Look Up’, 2023) utilising Niantic’s semantic segmentation software tool, Lightship. Drawing from these previous iterations, plans are currently afoot to evolve the artwork into a live performance experience with increased audience interaction.

Extended reality (XR) is an emerging artform with unique qualities including the ability to convey a sense of spatial presence. The unique properties of VR, AR and MR artworks are significant enough for XR to be viewed as a distinct new form of artistic expression—such as literature, photography, cinema, radio, or television—and not purely an extension of film or video games. For ‘eXtending Reality: Immersive Design, Production and Technology’, I will discuss the conception and subsequent development of ‘The Neon Pack’ and how it provides a roadmap for future XR experiences and virtual production.


For over 30 years, Nick Bax's career has spanned the fields of design, creative direction, and art. His research explores extended reality storytelling, nonlinear narratives and the recreation of individuals, locations, and memory via immersive technology. He is currently an XR Stories Research Fellow at the University of York following his PhD research at the University of Sheffield. Nick is also XR-lead on the AHRC UK-China Creative Partnership immersive performance project ‘Bridging the Gaps’ (University of Leeds). 

Nick was part of the world-renowned Designers Republic team for 15 years before launching the creative practice Human Studio in 2007. Driven by the discovery of knowledge, innovation and culture, Human’s work intertwines academic research, technology and entertainment while collaborating with individuals and organisations that make a difference. Exploring new paths of communication and creative expression, the studio has exhibited work at galleries and events in Europe, Japan, Brazil, China and the United States.

Nick is a Fellow of the Royal Society of Arts (FRSA) and the Higher Education Academy (FHEA). He serves on the Partnership Advisory Board for WRoCAH (White Rose College of Arts & Humanities) and the School of Design Industrial Advisory Board at the University of Leeds.


This paper takes data from VR marketing focus groups run as part of a British Academy Postdoctoral Fellowship project around immersive promotional media. It uses games studies theories of involvement/incorporation as a theoretical lens (Calleja 2011) to ask what kind of ‘affective involvement’ might be at work in these experiences, and whether VR might encourage a different kind of affective, potentially playful relationship between consumers and brands. The ‘empathy machine’ (Milk, 2015) rhetoric remains relatively strong around VR and it is often credited with having a strong affective or emotional impact on users (Kukkakorpi & Pantti, 2021; Riva et al. 2007). A much-cited 2016 Nielsen report claimed Virtual Reality (VR) experiences could increase emotional attachment to a brand by as much as 27% (Perrett, 2016). It is therefore surprising how few studies address VR’s impact on more complex, affective relationships between consumers, producers, and the brands they promote (Yung et al., 2021).

Those that do tend to count affect/emotion as one of several different measures of engagement, including traditional marketing metrics like brand attitude or product recall (Van Kerrebroeck et al., 2017; De Gauquier et al., 2018). Some situate it alongside other measures more specifically relevant to VR, e.g., “social presence, narrative interaction, narrative transportation, and affective brand engagement” (de Regt et al. 2021: 514). Some use qualitative data collection methods as part of their processes, but few make them their central focus (de Regt et al. 2021).

Quantitative approaches to understanding ‘affective’ engagement with XR experiences are understandably more appealing to marketers seeking to understand the potential ROI of VR marketing. However, this study ultimately contends that if VR marketing is really about creating qualitatively different emotional and affective relationships with consumers, then metrics will only provide partial answers to questions about emotional engagement and effectiveness.

Finally, the paper considers some of the challenges involved in this research, both practical and theoretical and ways forward for the future.


Dr Stephanie Janes (she/her) is a Lecturer in Immersive Media & Global Media Industries in the Department of Culture, Media & Creative Industries at King’s College London. Her research interests include promotional and digital cultures, film marketing, immersive media, and alternate reality games. Recent publications include ‘Virtual Reality, Film Marketing and Value’ (2022) in F. Kerrigan & C. Preece (Eds.), Marketing the Arts: Breaking Boundaries, and the monograph Alternate Reality Games: Promotion and Participatory Culture (2019, Routledge).


This paper will explore the adaptability of new technology to old forms and vice versa, by offering an in-depth look at a VR experience currently in production.

INSIDE is an immersive experience deeply concerned with internationally renowned artist Judith Scott’s sensorial access to the world. Judith Scott (1943-2005) was profoundly deaf and born with Down Syndrome. Pioneering the form of the multisensory biography, INSIDE (the subject of a Creative R&D case study undertaken as part of the MyWorld project) was conceived around a simple provocation: how might we tell Judith’s story in a way that would have been completely accessible to her were she still alive to see it?

This paper will offer an overview of the production and its innovative combination of motion capture data, volumetrically captured performances (captured at the CCIXR facilities), haptic feedback and a programmable chair, with an eye to examining the ways in which new technologies interact with old genres. Additionally, this paper will summarise the research taking place alongside the production, which is designed to be responsive to its approach to XR media forms and practices. Keeping focus on the relationship between INSIDE’s proposed experiential qualities and the biographical genre, as well as the novel production methodologies that bridge the gap between VR and life-writing, I’ll reflect on the interdisciplinary imperative of XR media practice and the research that attends to it.

Not only does INSIDE re-format the biographical genre with an expanded sensorial experience, but in doing so it highlights some of the obstacles to access and inclusivity posed by traditional, non-immersive formats. Within the context of the rapid proliferation of XR and immersive media technologies and applications, INSIDE’s conceptualisation of the multisensory biography provides a useful starting point from which to consider wider processes of remediation within XR. The relationship between familiar genres and the expanded application of XR media is a productive site of critique and investigation as it provides a lens with which to view both the technological affordances of emergent media, the hegemony of established paradigms and, crucially, the processes of adoption and adaptation that occur at their intersection.


Dr Tom Livingstone is a Research Fellow at The University of the West of England (UWE) working within MyWorld, a creative R&D programme driving industry expansion and innovation in the southwest of England. His research focuses on emergent media with a particular interest in the impact of game engines on visual culture. He has published widely on film and digital media and his first book Hybrid Images and the Vanishing Point of Digital Visual Effects will be published by Edinburgh University Press.


Extensive research has demonstrated the positive impact that experiences within natural environments can have on well-being. Despite this, issues such as rapid urbanisation, lack of physical ability or the presence of psychological disorders such as agoraphobia can present barriers to accessibility. Consumer virtual reality (VR) technology is now capable of producing immersive audio and visual environments that represent real-world settings. Further to this, it allows user interactions that can deliver engaging and novel virtual experiences. However, these applications often do not consider users from older or non-gaming populations, resulting in difficulties with control or the introduction of nausea through their presentation of movement. This paper presents a proof-of-concept VR application designed as an immersive artistic representation of Rowntrees Park, York, UK. The goal of the software is to place the user into a contemplative state by offering meaningful self-driven activities within a meditative audio and visual environment. Co-design sessions with a diverse group of people have ensured that virtual interactions are accessible to users of different age groups and levels of technical experience. Following development, user testing (to be carried out) will evaluate the software’s ability to achieve its aim of inducing a meditative state. Furthermore, we will provide recommendations for developing accessible interactive VR environments in order to maximise impact and user engagement.


Dr Daniel Johnston graduated in 2016 from the University of Portsmouth with a BSc in Music and Sound Technology. Following this, he obtained an MSc in Audio and Music Technology and a PhD from the University of York. Throughout his research career, he has focused on the use of interactive and immersive audio for digital healthcare interventions, with projects examining how spatial audio can help to address auditory hypersensitivity in autistic young people. Additional projects include a new extended reality environment for perceptual immersive audio listening tests, and live music performance within VR.


From the celebration of VR’s potential as an “empathy machine” by filmmakers such as Chris Milk (2016) to the critique of “colonialist rhetorics” that celebrate new media’s power to access “other places and people” (Belisle and Roquet 2020), critical responses to the rise in VR filmmaking over the past decade have focused on the ethical implications of immersive media’s novel modes of spectatorship. As Kate Nash argues in a 2018 special issue of Studies in Documentary Film, the apparent erasure of the screen as a frame that separates viewer and image within VR film raises “the risk of improper distance,” particularly when the viewer is granted access to the intimate spaces of marginalized or vulnerable communities.

This paper will respond to and advance these critical debates by outlining the methodology and findings of the Immersive Empathy project, the central output of which is a co-created, virtual reality film exploring the meaning of home and homelessness, produced in collaboration with clients from the Galway Simon Community homeless charity. In applying a participatory video methodology to VR film production, the lived perspective and agency of the film’s participants is embedded in every stage of the project design, thereby countering the “one directional” access to the other’s space embedded in traditional documentary production practice (Belisle and Roquet 2020).

The testing stage of the project explores the efficacy of this approach by measuring the impact of the VR film on empathy levels amongst the general public towards people who have experienced homelessness and comparing the findings to previous studies measuring the impact of non-co-created VR films on empathy and attitudes.

If the home, in bell hooks’ terms, can act as a site of resistance to experiences of oppression and domination in the public sphere, VR documentaries that situate the viewer within the homes of their protagonists have been criticized for rendering all space as public, thereby denying a private sphere to those lacking the agency to assert those boundaries (hooks 2015; Nakamura 2020). Through detailing both the process and the outcome of the Immersive Empathy project, this paper will argue for the principles of co-creation and consent, exploring the possibility of an immersive film that enables its subjects to determine the borders of their private experience and communicate, on their own terms, the experience of home as a space of both trauma and resistance.


Dr Conn Holohan is the Director of the Centre for Creative Technologies at the University of Galway. His publications focus on the onscreen representations of home in cinema and immersive reality, and include a monograph and numerous articles in journals such as the Quarterly Review of Film & Video. He has led a number of co-created film projects working with homeless organisations and is the lead researcher on the cross-disciplinary Immersive Empathy project.


To better understand how film studios operated in Britain, France, Germany, and Italy between 1930 and 1960, the ERC-funded STUDIOTEC research project is using virtual reality. But why are we using VR? Because, in addition to 3D reconstruction and the preservation of digital assets, it offers a way to facilitate greater understanding of some of the core aspects of the STUDIOTEC project in terms of Architecture & Infrastructure (via spatial understanding); Creativity, Practices and Innovation (by studying the technologies used); and Politics, Economics, and Professional & Labour Relations (through the way that the spaces functioned for those working in them, and how this compared across the four countries). Through the key affordances of VR – immersion, presence (i.e., the sense of ‘being there’) and agency – engaging, realistic and interactive environments have been created to support film historians with their research, and it is envisioned that the VR experience will complement the project’s other published outputs.

In this talk, I will discuss how the STUDIOTEC VR experience has been made with a particular focus on both the creative concept development through initial brief formulation, VR storyboarding and design specification, and technical development such as asset creation, game mechanics and user experience. I will conclude by looking at digital archiving and long-term preservation, a particular concern in the digital heritage sector, and reflect on some of the learnings and outcomes of the project so far.


Amy Stone is a Real-Time and Immersive Designer with a background in 3D Design and VR based in the Department of Computer Science at the University of Bristol. She has more than 20 years’ experience in the design industry and has expertise in virtual architecture, immersive spaces, exhibition and event design, interior design, and user experience. She is a creator of worlds with a passion for world building using real-time technologies. She has a BSc (Hons) in Industrial Design from Brunel University, and an MA in Virtual and Extended Realities from the University of the West of England, Bristol. In the STUDIOTEC Project, Amy will be supporting the project team to recreate film studios by bringing them to life using 3D modelling and developing a virtual reality experience.


“[T]he foreground appears as one flat plane, and the background as another flat plane, and the two are not fully synthesized into the convincing illusion of an integrated Whole”.

So writes Julie Turnock (2016: 95) on To Catch a Thief (Alfred Hitchcock, 1955), as part of an examination of the challenges of using rear projection techniques in 1950s Hollywood. Despite significant developments in screen-compositing technologies, including bluescreens and greenscreens, Stanley Kubrick still found uses for rear projection in Eyes Wide Shut (1999), projecting “process shots” of New York behind Tom Cruise while the actor walked on a treadmill at London’s Pinewood Studios, the film’s shooting location. (Coincidentally, the studio now houses a virtual production space for Sony.) Commenting on the “utter unreality” of this set-up, Kolker and Abrams (2019: 82-83) note how “‘realism’, in its banal sense, was never as important for Kubrick as the perfect image”, with his ambivalence about cinematic realism also reflecting how conventional Hollywood “realism” is “of course, an illusion”.

Building on such practices and insights, this paper will situate the emergent uses of virtual production spaces within film history, including through critical analysis of related historical and contemporary industry discourse on the value of these technologies. Particular attention will be paid to the aesthetic and logistic potentials of virtual production spaces, as well as to the challenges that accompanied earlier forms and which may prove to be less of a sticking point when using contemporary “digital assets” as a backdrop.

While uses of rear projection are most often associated with sequences shot in cars, I will compare virtual production backgrounds to the multi-dimensional uses of composite images with pre-recorded backgrounds in films such as Rebecca (Alfred Hitchcock, 1940), All About Eve (Joseph L. Mankiewicz, 1950) and Eyes Wide Shut. The study will draw on a range of scholarship on screen technologies and special effects (e.g. Mulvey 2011; Turnock 2012; Keil and Whissel 2016; O’Meara and Szita, 2023) and production accounts of techniques including background plates and composite images, such as those historically featured in American Cinematographer (e.g., Foster, 1962). As part of the aim of comparing industry discourse on virtual production spaces across time, it will supplement these sources with contemporary accounts in trade journals and Entertainment Technologists Inc.’s whitepaper “Demystifying Virtual Production” (2023).


Jennifer O’Meara is Associate Professor in Film Studies at Trinity College Dublin, where she specialises in digital theory and practice and is a mentor on the HUMAN+ Marie Skłodowska-Curie Actions fellowship programme, focused on human-centred approaches to technology. She has published on a diverse range of film and media topics in venues such as Cinema Journal, NECSUS, Feminist Media Studies, Celebrity Studies and PRESENCE: Virtual and Augmented Reality. Her second book, Women’s Voices in Digital Media: The Sonic Screen from Film to Memes (University of Texas Press, 2022), was shortlisted for the Best Monograph prize by the British Association of Film, Television and Screen Studies in 2023. Her current research project, funded by an Irish Research Council Starting Laureate Award (2022-2026), is titled “From Cinematic Realism to Extended Reality: Reformulating Screen Studies at the Precipice of Hyper-reality.”


‘VR feels like the very early days of cinema, a time when filmmakers are learning the grammar, rather than writing the language, of VR.’ (Mapplebeck, 2017)[1]

In recent years, with the rise and availability of Virtual Reality headsets for personal use, a new form of storytelling has emerged, the VR film. Film festivals such as Tribeca, Venice and Sheffield Doc/Fest have introduced new categories to include this medium, the BFI is funding VR projects and key production studios such as Archer’s Mark, Felix & Paul Studios and Atlas V have begun to emerge. Yet, in the field of film criticism, thorough and engaged research has yet to be conducted.

Highlighting key trends in this emerging medium, this paper takes a formalist approach to the VR film and outlines the technologies used to create it. The VR film can be loosely categorised into two modes of experience: the Walk-Around VR and the Head-Turn VR. Two key modes of production have also begun to emerge: experiences that use 360 video, and those that use a games engine to build the 360 environments. Both the mode of experience and the mode of production affect the way the viewer can interact and engage with the virtual world.

Focussing on Space Explorers: The ISS Experience (2020),  Goliath: Playing with Reality (2021), On the Morning You Wake to The End of the World (2022), and In Pursuit of Repetitive Beats (2022), I identify the narrative devices used to further the story, looking at the prevalence of narration, how filmmakers transition between scenes and periods of time, and how the viewer is guided without traditional editing practices and unbounded by the film frame. Mapplebeck argues that VR ‘filmmakers are learning the grammar, rather than writing the language.’ This paper contends that as with all new forms of media, a process of adaptation is occurring, with creative practitioners borrowing and modifying the language of film, theatre, and games to create a new mode of storytelling. Finally, this paper questions what this new medium offers storytellers and audiences alike, looking at the notion of immersion, and how a passive spectator can be invited to play an active role in the story through the introduction of interactive elements.


Dr Penny Chalk holds a Ph.D. in Film from the University of Portsmouth and is employed in the Centre for Creative and Immersive Extended Reality. Chalk’s PhD was an intertextual study of film production during the Hollywood Studio Era, examining how popular nineteenth-century texts, such as Frankenstein, Sherlock Holmes and Jane Eyre, evolved as they were adapted for theatre, radio and film. Her current research encompasses narrative-based immersive experiences, exploring how traditional storytelling techniques can be transposed into new media forms.

Working with a range of cultural heritage institutions such as the D-day Story and Mary Rose Museum, she has contributed to projects that use technological innovations to engage new audiences. Dr Chalk has been involved in several projects from a diverse range of funding sources, including the UKCRF Community Renewal Fund and Interreg. Her publications are in the field of Adaptation Studies with an article awarded the best new scholar award by the International Association for Media and History in 2018.




In February 2023, a ghostly digital hologram of Walt Disney made its debut as the virtual host of Disney100: The Exhibition at the Franklin Institute in Philadelphia, greeting guests ‘live’ as they began their journey through the many gallery installations, artwork, props, and audiovisual displays housed in the 15,000-square-foot museum that collectively honoured 100 years of The Walt Disney Company. Following a short animated prologue featuring Mickey Mouse, taken from the character’s appearance in “The Sorcerer’s Apprentice” segment of Fantasia (1940), a swirl of magic dust conjures the photorealistic digital avatar of Walt, who proceeds to enthusiastically describe the Disney corporate ethos to those gathered at the museum, championing the studio’s ability to transform “ideas into reality” and celebrating the curiosity that fuels each new creative project.

This sophisticated projection of the Disney company’s founder (impossibly post-mortem) that premiered at the company’s centenary was crafted using state-of-the-art AI and voice synthesis technology, while exploiting the creative possibilities of the innovative Disney MagicStage to convincingly conjure Walt’s uncanny re-appearance. An immersive virtual imaging technology utilised primarily for live stage performances, the MagicStage marks the latest development in the use of video projections and computer-generated effects that have formed a central part of the contemporary live concert experience.

This paper contextualises developments in the Disney MagicStage in light of recent trends within popular Hollywood cinema for both “posthumous” (Bode 2010) and “digitally de-aged” performances (Holliday 2022), as well as the rise of contemporary Deepfake technology that falsifies star physiognomies through processes of computer-mediated ‘reskinning.’ Indeed, as more than just a playful gesture to longstanding rumours of Walt’s cryogenic ‘freezing,’ the technological reconstitution of the California-born producer and entrepreneur as a new digital asset via archival voice recordings and machine learning fully reflects the role of celebrity bodies in the acceleration of immersive media content (Thomas 2019).

In an era where AI systems are being monitored closely for the computer’s ability to intervene into the celluloid image, this paper argues that the Disney MagicStage hologram fully reflects the power of celebrities as figures of trust, support, safety, and benevolence when it comes to rapid technological change. Not only does the digital revival of Walt therefore further secure the studio’s connection to a durable culture of technological innovation, but it equally showcases the presumed agency of animated star avatars to manage our understanding of what a computer can – and should – be doing.


Christopher Holliday is Lecturer in Liberal Arts and Visual Cultures Education at King’s College London, specializing in Hollywood cinema, animation history, and contemporary digital media. He has published on digital technology and computer animation and is the author of The Computer-Animated Film: Industry, Style and Genre (EUP, 2018), and co-editor of the anthologies Fantasy/Animation: Connections Between Media, Mediums and Genres (Routledge, 2018) and Snow White and the Seven Dwarfs: New Perspectives on Production, Reception, Legacy (Bloomsbury, 2021). Christopher is currently researching the relationship between identity politics and digital technologies in popular cinema and is the curator of the website/blog/podcast 


With the increase in popularity of transmediality in the twenty-first century, Japan’s adaptational culture has permeated the Western world. One such transmedial product is Type-Moon’s Fate/Grand Order, marketed as a role-playing game for mobile devices, in which the player is tasked with controlling the ‘heroic spirits’ of famous historical and literary figures in combat. Furthermore, the player is able to converse and form bonds with the spirits of these figures through interaction, allowing the player to take on a more intimate role than that of a reader. Such interaction with fictional and historical figures, such as King Arthur, Leonardo da Vinci, and Sherlock Holmes, brings into question the relationship between videogames, history, and culture.

Within this paper, I look at the role of historical characters within Fate/Grand Order and the reimagining of history and literary texts within the videogame. Through this, I analyse whether characters depicted within the game are representative of their original character in a fantasy setting, and if so what marks them as the same character as seen in other adaptations. From this, I argue that how we perceive characters both historical and fictional can be altered through cultural memory. I suggest that this is something that videogames such as Fate/Grand Order, which allow players to play as or alongside historical and literary characters, can influence due to their vastly different characterisation of historical and fictional entities.

I also look at how the way in which we perceive fictional characters changes as they are adapted into different franchises and genres. Sherlock Holmes within Fate/Grand Order is a knowledgeable and dependable character whose role is to find weaknesses and flaws which can aid the player. However, within The Great Ace Attorney Chronicles, the representation of the character (presented as Herlock Sholmes) is one which is eccentric and quick to jump to conclusions which are almost correct, allowing the player to feel superior to a famous detective in some ways, whilst also relying on him in others. As such, I suggest that the different characterisations, including the original source material, are equally important to the character as an overarching entity, with no single interpretation presenting the full range of a character.


André Cowen is a PhD candidate researching in the Centre for Adaptations at De Montfort University, where his subject focus is the adaptation, reception, and interactivity of literature and their video game adaptations. André utilises an interdisciplinary approach to the subject to cover a range of relevant research fields including Videogames Studies. His thesis is focused on the case studies of Andrzej Sapkowski’s Witcher series (1993-2013) and Dmitry Glukhovsky’s Metro series (2007-2015). This research seeks to demonstrate how transmedial and intermedial narratives have developed both literary and ludic techniques to incorporate multilinear narratives into transmedial storyworlds.

André has previously presented work on adapting senses within videogames through the utilisation of a Heads-Up Display and neurological links within Metro 2033 Redux. He has also presented a paper on the influence of a player’s morality on Geralt of Rivia’s characterisation within The Witcher: Enhanced Edition. He also has an upcoming publication analysing how folkloric and mythical monsters are adapted into the videogame medium through different means, including audio-visual-tactile methods.


In 1988, documentarian Michael Apted released a biographical picture focused on the life of Dian Fossey. The press revelled in the exotic Rwandan locales and in shots of lead actor Sigourney Weaver getting unbelievably close to the mountain gorillas. However, those who were paying attention to the credits, and knowledgeable critics, could see that there was the possibility of a ringer in the jungle. Practical creature effects make-up artist Rick Baker, the man behind the werewolf transformation in An American Werewolf in London (John Landis, 1981), the Cantina scene in Star Wars (George Lucas, 1977) and the man in the ape suit of King Kong (John Guillermin, 1976), was credited as an Executive Producer. Although he and his crew provided realistic ape suits to the production, the producers of Gorillas in the Mist chose to render his team invisible, crediting Baker for a different position and not referencing the rest of his team.

Adopting a historical materialist perspective, this paper takes a two-fold approach. Firstly, it will look at how special effects producers were treated on the Gorillas in the Mist set. It will then explore how the critical reception of Gorillas in the Mist was affected by the producers’ decision to render the work of the special effects artists invisible. It will draw on the work of Stephen Prince (2012), applying his theory of “perceptual realism” to the discussion of effects that critics were unaware of. This also challenges previous models of how special effects are perceived and discussed, put forward by the likes of Michele Pierson (2002).

The invisible labour of these invisible effects provides an interesting case study for an inherently visible medium, as it engages with the philosophy of visibility put forward by Michel Foucault (2003) and Maurice Merleau-Ponty (1968) in new and exciting ways.

Rick Baker also provides an interesting case study, as I argue that, due to his other special effects make-up work, he is a filmmaker able to cross over from “below-the-line” to “above-the-line”. While viewed as a special effects technician, his achievements in special effects have led to a stardom of sorts, raising him above the line of visible film producers. Yet his team still remains invisible. This leads us to question the role of the film historian in presenting an important singular figure as a credible way to tell the history of production.


Benjamin Pinsent is a Postgraduate Researcher at the University of East Anglia. He has a specialism in Film History and Special Effects. He recently submitted his PhD thesis exploring the perception of special effects as it relates to the work of Rick Baker. He has organised the Making Monsters, Manufacturing Terror Conference at UEA, guest editing the special issue that was generated from the papers presented there, which will be published in The Journal of Film and Performance sometime this year. He has also written on the way in which fans of special effects engage with online video on YouTube, whether that be an individual creator or an online tutorial.

Funding and sponsorships

This conference is funded by the following bodies.

Logo for British Association of Film, Television, and Screen Studies

British Association of Film, Television, and Screen Studies (BAFTSS)