Reality Remix: Prototyping VR LARPing

We’re at a point where the internet is kind of good enough, and remote embodiment via VR equipment is getting cheap enough, to enable a new type of entertainment: performers with an improv background acting as non-player characters (NPCs) for interactive adventures in virtual worlds. I’ve been prototyping these experiences as part of Reality Remix, and here are some of my initial findings.

The Sea Shanty set from a third-person view. The performer is in “God Mode”

Priming Players

One of the difficult parts of this work is how to label it. When playtesting, I’ve found that describing up front what users are going to experience is pretty unhelpful. “LARP”, or Live-Action Roleplaying, is the most accurate term, but, just like with Dungeons and Dragons, people who have heard the term but have no experience with it pre-load a bunch of unhelpful preconceived notions, and are concerned or intimidated that they won’t “do it right”. Instead, I tell playtesters that they’ll be going into an experience with an open-ended mission, and that they’ll be speaking to live performers. There are adjacent genres, from interactive dinner theatre and murder mysteries to walk-around immersive theatre and escape rooms. The end goal of an escape room is very clear, but I’ve found that some of the more engaging escape rooms have multiple types of solutions, and the best instances involve actors, such as Secret City Adventures’ games in Toronto.

Content Tested

We’ve built two virtual sets & two corresponding adventures.

The Mysterious Murder of Roger – set in a medieval village, co-written by me and San Francisco-based immersive theatre director Josh Marx. The players’ mission is to investigate the death of Roger, the town drunk, found dead in the morning in the middle of the town’s main road.

The Sea Shanty – set on a palatial island estate, written by Toronto-based director and dungeonmaster/podcast host Tom McGee. The players’ mission is to prevent Lady Chaos’ purchase of the fastest ship in the world.

The Mysterious Murder of Roger was an internal, feature-elicitation prototype, and we designed The Sea Shanty based on lessons learned. The Sea Shanty debuted at VRTO 2018, with Kat Letwin performing all NPC roles.

Prototyping Process

Fortunately, we can prototype some of the LARP elements by talking through scenarios independently of the VR performer features. At the first Reality Remix Salon, I laid out our medieval village set on the floor using masking tape, with a little folded piece of paper for each NPC and their character bio.

The medieval village set used for The Mysterious Murder of Roger
Lars von Trier’s Dogville used a similar technique

Then we can do walkthroughs of the experience without having to get caught up in VR, or being distracted by feature issues. I can run around playing all NPCs IRL (in real life) without having to worry about whether my teleportation or inverse kinematics scheme is right. In early-stage software development, this technique is called “Wizard of Oz” prototyping (please ignore the man behind the curtain). The mini-bios of each NPC are meant to be open-ended and free for the performer to live-edit, similar to the NPCs in Bad News.
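To make that concrete, here’s a minimal sketch (in Python, since the idea is engine-agnostic) of how live-editable mini-bios might be structured. The NPCBio class, its fields, and the sample villagers are my own illustration, not the actual Reality Remix data model:

```python
from dataclasses import dataclass, field

@dataclass
class NPCBio:
    """A deliberately loose character sketch the performer can rewrite mid-run."""
    name: str
    occupation: str
    wants: str                                       # open-ended motivation, not a quest flag
    knows: list[str] = field(default_factory=list)   # clues this NPC can reveal
    notes: str = ""                                  # performer scribbles, edited live

# The village cast is just a dict of bios, so editing an entry during a
# Wizard-of-Oz run is a one-line change rather than a content-pipeline rebuild.
cast = {
    "miller": NPCBio("Greta", "miller", "to keep her feud with Roger quiet",
                     knows=["argued with Roger the night he died"]),
    "priest": NPCBio("Aldous", "priest", "to protect the church's reputation"),
}

# Mid-run, the performer improvises a new detail and writes it back:
cast["priest"].notes = "saw a hooded figure near the well at dawn"
```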

I’m pretty familiar with eliciting digital features from live performers (my PhD thesis has a whole section on it). Performers are an interesting user group because they’re professionally good at making the mundane interesting, so there are lots of bumps they’re good at hiding. They’re also very aware of their bodies and their orientation to external constraints (especially film actors). However, sometimes they’re so good at making the mundane interesting that they won’t complain about a bug, or won’t request features that they really want. I’ve found that non-technical creative people are really bad at estimating how hard something is to implement (well, so are actual software engineers), and if you tell them a couple of times in a row that something is too hard to build within the given time constraints, they’ll stop asking. I’ve found it most useful to demarcate a magic circle between performance mode and the post-performance talk-back. Sometimes you even need to leave the performance space to get into the feature-elicitation mindset.

Let’s talk about one specific problem that required a solution in both content and interaction technique…

Focusing Players

One of the interesting discoveries in The Mysterious Murder of Roger is that, since players are free to move right from the beginning, they treat NPCs as people to poke until they get an answer, then walk right past them once they get distracted. Lots of players missed key clues or hints this way, and remained confused for long sections of the experience. There’s a similar problem with real-life free-roam murder mysteries, where audience members enter an escape-room mindset and try to rush through everything; Josh Marx and I ran into this on a previous AR project, where we built a narrative adventure but players raced through it like an escape room. The change in approach we took for The Sea Shanty is to start the players behind a literal gate, speaking to Agent X, who briefs them on their mission before permitting entry to the space. This is similar to the “gating” that appears in mechanics-oriented video games, where players are only permitted into an open-ended space once they’ve demonstrated mastery of a new mechanic in a closed-ended space.

In The Sea Shanty, the players start behind a gate until Agent X determines their spy cover is good enough to pass muster on Lady Chaos’ estate. You can think of this as a Hero’s Journey Crossing-The-Threshold moment.
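Mechanically, the gate is just a tiny performer-driven state machine: the experience sits in a briefing phase until the performer, as Agent X, flips it to free roam. The sketch below illustrates the pattern; the class and method names are invented, not our actual implementation:

```python
from enum import Enum, auto

class Phase(Enum):
    BRIEFING = auto()    # players are locked behind the gate with Agent X
    FREE_ROAM = auto()   # gate open, Lady Chaos' estate is accessible

class Gate:
    def __init__(self):
        self.phase = Phase.BRIEFING

    def approve_cover(self):
        """Called by the performer, as Agent X, once the players' spy
        cover passes muster; this is the Crossing-the-Threshold beat."""
        self.phase = Phase.FREE_ROAM

    def can_enter_estate(self) -> bool:
        return self.phase is Phase.FREE_ROAM

gate = Gate()
assert not gate.can_enter_estate()   # players can talk to Agent X but not leave
gate.approve_cover()                 # the performer decides the cover story works
assert gate.can_enter_estate()
```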

We narrowed in on this even further during performances of The Sea Shanty. Sometimes players couldn’t tell whether the information an NPC was performing to them was background chatter or actually significant to the plot. Again, players would miss important information and complain about being lost. So we gave the performer a button they can hold to have the players’ cameras snap to them. Video games used to rely on non-interactive cutscenes, but in the past few years they’ve opted to indicate important plot moments by subtly orienting the player’s camera towards a point of interest and adding black bars to the top and bottom of the screen. Similarly, the immersive theatre show The Speakeasy in San Francisco uses different lighting cues to differentiate important, non-interactive plot moments from free-roaming moments where you’re allowed to speak to the actors.
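Per frame, that focus cue might look something like the sketch below. The Avatar type and the easing rate are assumptions of mine; a very high speed value reproduces the hard snap, while a lower one gives a gentler, comfort-friendly nudge:

```python
import math
from dataclasses import dataclass

@dataclass
class Avatar:
    x: float
    z: float
    yaw: float = 0.0          # heading in radians about the vertical axis
    letterbox: bool = False   # black bars on = "this is plot, pay attention"

def yaw_toward(viewer: Avatar, target: Avatar) -> float:
    """Heading the viewer would need to face the target."""
    return math.atan2(target.x - viewer.x, target.z - viewer.z)

def update_focus(player: Avatar, performer: Avatar,
                 button_held: bool, dt: float, speed: float = 3.0) -> None:
    """While the performer holds the focus button, turn the player's view
    toward them and show the letterbox cue."""
    player.letterbox = button_held
    if button_held:
        target = yaw_toward(player, performer)
        # shortest signed angular difference, so we never turn the long way round
        diff = math.atan2(math.sin(target - player.yaw),
                          math.cos(target - player.yaw))
        player.yaw += diff * min(1.0, speed * dt)

player, npc = Avatar(0.0, 0.0), Avatar(3.0, 4.0)
update_focus(player, npc, button_held=True, dt=1 / 90)  # one 90 Hz frame
```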

Next Steps

We’re continuing to playtest this format with more performers and players. Some of the interesting stuff we’re learning is about how to diversify the types of player interaction, and how to adjust the genre to audience members’ preferred tone (e.g. comedy or intrigue?). In terms of features, one of the coolest parts of building this out is that the performers have reality-shifting abilities. They start invisible, have X-ray vision to see through walls, can jump up to the sky to move the sun, can pull props out of an infinite bag of holding, etc. Performers see an enhanced version of the world, the way a dancer would see Laban notation instead of a dance, and adding the supporting graphics for that is an extremely interesting problem. The hype term I use for that right now is “AR for VR” – just as the operators viewing The Matrix from outside see streams of code rather than the rendered world.
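One way to sketch that “AR for VR” layering: tag each scene entity with visibility layers and filter the render list per viewer role, so performers render annotation geometry that players never see. The layer and entity names below are invented for illustration:

```python
from dataclasses import dataclass
from enum import Flag, auto

class Layer(Flag):
    WORLD = auto()       # the diegetic set that everyone sees
    PERFORMER = auto()   # annotations: X-ray outlines, cue notes, the prop shelf

@dataclass
class Entity:
    name: str
    layers: Layer

ROLE_MASK = {
    "player": Layer.WORLD,
    "performer": Layer.WORLD | Layer.PERFORMER,   # the enhanced, annotated view
}

def visible_to(role: str, scene: list[Entity]) -> list[Entity]:
    """Filter the render list per viewer: same world, different overlays."""
    mask = ROLE_MASK[role]
    return [e for e in scene if e.layers & mask]

scene = [
    Entity("estate", Layer.WORLD),
    Entity("xray_outlines", Layer.PERFORMER),
    Entity("bag_of_holding_shelf", Layer.PERFORMER),
]
assert [e.name for e in visible_to("player", scene)] == ["estate"]
assert len(visible_to("performer", scene)) == 3
```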


Here’s a teaser video showing off the current set of live performer tools, called Sparasso [1]:

[1] Dionysus’ death and rebirth via dismemberment