By Assim Kalouaz
'La Plume et la Lanterne' Poster
As mentioned in a previous blog post, I worked on an award-winning student project, “La Plume et la Lanterne” (“The Quill and the Lantern”). It was an asymmetrical experience with two users: one storyteller and one actor. The storyteller sat at a little desk, facing a screen showing the actor’s first-person view, while the actor was immersed in the virtual environment, carrying a real-time tracked lantern. The storyteller had to make three narrative choices for each of the fairy tales the actor ventured into. The fairy tale scene was altered accordingly, changing the puzzle the actor had to solve to progress further.
Actor exploring (left) and storyteller making narrative choices on an Arduino-embedded book page (right)
There is a lot to unpack from this project, so I will focus on the actor experience in their progression throughout the different environments. As designers and storytellers, we aimed to push the actor’s sense of presence as much as we could so that they would feel immersed and transported into these fairy tales. To this end, we made several design decisions that I am going to break down here.
The actor’s objective was to look for the anomaly that altered three different fairy tales: Little Red Riding Hood, Alice in Wonderland, and Snow White. To do so, they had to solve mini puzzles in each scene that varied depending on the storyteller’s choices.
To let the actor navigate the environments, we had to consider 1) their visual representation and 2) their displacement method. The displacement method refers to how a VR user moves around; the most common methods are teleportation and sliding with the controller joystick. Each has its pros and cons, and the decision depends on many factors, mostly the context of the VR experience and the avatar’s visual representation. The visual representation refers to what a VR user looks like in VR: whether they have hands, arms with hands, a head, or even a fully animated body.
Because we had tight time constraints, we opted not to design and animate a full avatar, and chose to represent only one hand (tracked with the VR headset controller) and the lantern (tracked with a Vive tracker). From a narrative viewpoint, the actor was cast as “The Hero”, a self-insert title that justified the absence of a body. This allowed us to choose natural walking as the displacement method, which was the most diegetic option given our narrative and visual representation. Diegesis is an important concept in VR: an interaction is diegetic when it makes sense within the context of the experience. For instance, teleportation makes more sense if you play a ghost, or if it is your character’s superpower, than if you are a human. Diegesis is crucial to avoid breaking immersion (Çamcı, 2019) with interactions that make no sense or feel out of place in their context.
While natural walking spared us from designing and animating an avatar, it constrained us to build scenes at 1:1 scale within the available physical surface, which was 3 × 3 metres.
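That constraint can be made concrete with a trivial sanity check a level designer might run on each environment. This is a sketch with made-up names, not tooling from the project; it assumes a scene’s walkable footprint is expressed as width × depth in metres.

```python
PLAY_AREA = (3.0, 3.0)  # tracked physical surface in metres (width, depth)

def fits_play_area(scene_footprint, play_area=PLAY_AREA):
    """A 1:1 natural-walk scene is only valid if its walkable footprint
    fits entirely inside the tracked physical surface."""
    width, depth = scene_footprint
    return width <= play_area[0] and depth <= play_area[1]

fits_play_area((2.8, 2.5))  # fits: the actor can physically walk the whole scene
fits_play_area((4.0, 2.0))  # too wide: would require another displacement method
```

Anything that fails such a check forces a compromise: shrink the scene, or fall back to a less diegetic displacement method like teleportation.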
After deciding on the actor’s visual representation and displacement method, we focused on designing diegetic interactions that would challenge what is possible in real life while still fitting our narrative context. To this end, we centred the actor’s progress on the lantern, a key narrative element that let the actor light up guiding footsteps on the ground and that acted as our immersion catalyst. By giving the virtual lantern a real-time tracked physical counterpart, we directly provided tactile feedback: the felt weight and motion of the physical lantern complemented the perceived motion of the virtual one. This matching of stimuli across senses is called sensorimotor contingency (Christofi et al., 2020): the information perceived in the physical world matches and complements the information received in the virtual world, and vice versa. Sensorimotor contingencies can take different forms across senses; for instance, crouching in the real world (a motor action) changes your viewpoint in VR accordingly (a visual consequence). The most famous experiment addressing this phenomenon is probably the rubber hand illusion (Botvinick & Cohen, 1998).
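To make the mechanism concrete, here is a minimal Python sketch of how a tracked physical prop can be mirrored onto its virtual twin every frame. The function names are mine, not from the project, and poses are simplified to 3D positions, whereas a real tracker also reports rotation.

```python
def calibrate(tracker_pos, virtual_origin):
    """One-time offset between tracking space and the virtual scene's origin."""
    return tuple(v - t for t, v in zip(tracker_pos, virtual_origin))

def update_virtual_lantern(tracker_pos, offset):
    """Each frame: place the virtual lantern exactly where the physical one is,
    so the felt motion and the seen motion never disagree."""
    return tuple(t + o for t, o in zip(tracker_pos, offset))

# Calibration: the physical lantern is held at the virtual scene's origin.
offset = calibrate((0.1, 1.2, 0.3), (0.0, 0.0, 0.0))
# Every subsequent frame follows the tracker 1:1.
pos = update_virtual_lantern((0.6, 1.2, 0.3), offset)
```

Any latency or drift in this loop breaks the contingency, which is why low-latency tracking (here, a Vive tracker) matters so much for the effect.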
Another instance of sensorimotor contingency that is very common in video games, and especially relevant in VR, is spatialized sound: the volume of the sound emitted by an object or entity varies with our orientation and proximity to it, just as in real life. This phenomenon directly opens up narrative possibilities: in the Red Riding Hood scene, spatialized snarling wolf sounds triggered at random moments, their spatialization letting the user guess their origin and instilling a sense of impending threat. In the Alice in Wonderland scene, the Cheshire Cat’s lines (voiced in French by the impressionist Superflame) were not spatialized and played at the same volume in both ears, giving the impression that he was inside the user’s head.
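Game engines provide spatialization out of the box, so the following is only a rough sketch of the underlying idea, not the project’s actual audio code: inverse-distance attenuation combined with constant-power panning driven by the listener’s orientation.

```python
import math

def spatialize(listener_pos, listener_yaw, source_pos, ref_dist=1.0):
    """Return (left_gain, right_gain) for a mono source on a 2D ground plane,
    using inverse-distance attenuation and constant-power panning."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    dist = math.hypot(dx, dz)
    gain = ref_dist / max(dist, ref_dist)   # quieter the farther the source
    # Angle of the source relative to where the listener is facing.
    angle = math.atan2(dx, dz) - listener_yaw
    pan = math.sin(angle)                   # -1 = fully left, +1 = fully right
    left = gain * math.sqrt((1 - pan) / 2)
    right = gain * math.sqrt((1 + pan) / 2)
    return left, right
```

The wolves correspond to feeding each snarl through such a function; the Cheshire Cat corresponds to bypassing it entirely and sending the same gain to both ears, regardless of where the user looks.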
Diegetic interactions and sensorimotor contingencies were our driving concepts for pushing immersion and presence. Other examples in our experience include the actor grabbing a stick and standing on tiptoe to reach the red hood stuck in the branches of a tree, or manually operating a crank to pull the red hood out of the well; depending on the storyteller’s choice, the actor had to perform one of these two actions to progress, and both illustrate these key concepts.
Red hood stuck in the tree, with the stick needed to reach it on the floor, next to the crank to use if the hood was in the well.
The narrative context plays a huge role in what makes interactions diegetic. In the second scene, the dreamlike ambience set by the music and the known fantastic nature of Alice in Wonderland allowed the actor to defy realistic expectations by growing, shrinking, or grabbing an object from inside a painting, without these actions seeming out of place.
Painting before and after the crown was taken from it (the other option being the sceptre).
These are just some of the actions we designed. I hope this little case study was insightful enough to help inform and inspire future immersive experiences; it was an extremely rich creative experience for me and influenced my research interests quite a bit.
Botvinick, M., & Cohen, J. (1998). Rubber hands “feel” touch that eyes see. Nature, 391(6669), 756. https://doi.org/10.1038/35784
Çamcı, A. (2019). Exploring the effects of diegetic and non-diegetic audiovisual cues on decision-making in virtual reality. Proceedings of the Sound and Music Computing Conferences, 195–201.
Christofi, M., Michael-Grigoriou, D., & Kyrlitsias, C. (2020). A Virtual Reality Simulation of Drug Users’ Everyday Life: The Effect of Supported Sensorimotor Contingencies on Empathy. Frontiers in Psychology, 11(June), 1–12. https://doi.org/10.3389/fpsyg.2020.01242