IMMERSIVE INTERACTION

4i PROJECT

for indie developers, with Interactive Machine Learning

INTRO

Aiming to support independent developers and artists in designing movement and body-based interaction for Virtual Reality and immersive media, this project builds tools that allow designing by moving, via Interactive Machine Learning. By creating better tools and working processes, we aim to enable developers to create better movement interaction for players, audiences and end-users.
 
The project has two major, intertwined elements: first, developing interaction design tools based on interactive machine learning, and second, testing these tools through creative, artistic work. The creative work will inform the design of the tools, and the tools will enable the creative work. The tool will be a plugin to a development platform such as Unity or Unreal Engine, supporting three basic movement sensing technologies: the controllers available with standard VR systems; optical motion capture, for more accurate, full-body interaction; and more experimental sensor technologies based around physical computing.
 
Over the full two years of the project, Gibson/Martelli will develop a movement-based VR experience for public exhibition, using the final prototype tools; this work will be a mix of arts practice as research and qualitative research. Practice as research will be conducted by Gibson and will centre on studio-based experimentation with HCI systems. This work will develop gestures, forms of movement interaction, avatars and custom-built virtual environments for art galleries and performative contexts. During the project, a number of hackathons will expose the tools to indie game developers and artists.

BACKGROUND

Among the most significant recent developments in media is the emergence of new immersive technologies, including Virtual Reality, which transports users to a full 3D graphics environment via head-mounted displays, and Augmented Reality, in which digital objects are overlaid on the real world. Taken together, across a range of display technologies, these experiences are now classed under the general term ‘Mixed Reality’. User experiences range from a single person at home in a virtual reality game, to social, multi-user environments in the public spaces of a museum or art gallery, to ‘immersive theatres’ combining real-world elements with technologically enhanced, virtual components. What unites all of these is that the audience member is fully surrounded by an interactive immersive environment, one that is partially virtual but can feel entirely real. The most compelling experiences are those the user can interact with as if they were the real world: walking around freely or picking up objects to examine them.

This comes at a time when the UK games and digital media industries are seeing increasing participation from small independent (“indie”) developers and creative SMEs, resulting in greater creativity and diversity in the digital creative sector, for example the rise of “art” games that offer deeper emotional experiences than traditional games (Cole et al 2015) and are strongly influenced by the fine arts. However, many people are still excluded; women, in particular, remain heavily underrepresented in the games and digital creative industries. The creation of immersive media needs to be accessible both to existing small developers and to those who are currently underrepresented. This project aims to make it so by developing tools and creative processes, and by doing so in collaboration with three groups of target users: “mainstream” indie game developers, young women aspiring to enter the industry, and fine artists working in the computational domain. This cannot be simply technology research; to build successful tools, the technology must be developed within the context of artistic and creative work.


Modern game engine platforms like Unity (www.unity3d.com) or Unreal (www.unrealengine.com) play a big part in making game development accessible to small teams, and promise to do the same for immersive media. They make the elements that immersive media share with games (animated 3D objects and environments) easy to create. However, easy-to-use tools for the elements that are unique to VR are yet to emerge. The most important of these is how we interact with immersive experiences. For regular screen-based games, ‘traditional’ interaction via keyboards, joysticks and touchscreens works well, but such games are not trying to make their players feel as if they are physically inside a virtual world. This sense of being in another world, called “Presence”, is characterised by Slater (2009) as consisting of a number of illusions. The first, place illusion, the illusion of being in another place (in virtual reality and forms of mixed reality), is supported by standard VR hardware such as head-mounted displays, which allow us to turn our heads and have our view updated.

The second illusion, plausibility, is the illusion that the world we are in, and the objects in it, are real. Slater’s theory proposes that virtual things feel real if they respond to us in the way that real objects do: the more our interaction with virtual things is like our interaction with real things, the more real they feel. This implies that we should interact with an immersive experience in the same way we interact with the real world: through our physical body movements (Gillies 2016), not mediated by control devices. A third illusion, embodiment (Kilteni et al 2012), is the sense of having a body in the immersive experience, which is also heavily dependent on movement.


To have a strong sense of presence, users need compelling forms of interaction that engage their whole body. For users to experience these forms of interaction, developers need to be able to design them. While movements such as picking up and interacting with objects are straightforward to design (the focus of the design is on the object, not the movement), many forms of movement interaction are not well supported by current technologies, primarily those that rely on recognising and interpreting how people move. For example: a VR music experience that responds when you dance in certain ways; an augmented reality fencing game that recognises different types of swordplay; or an immersive story where characters respond when you express emotions through your body. All of these are potentially rich and compelling experiences, yet none of the movements involved can easily be defined mathematically, or described in detail in any way other than actually performing them. These types of body movement interaction are hard to design because they rely on tacit and embodied knowledge (Gillies 2016). Most graphical interfaces rely on text and symbols implemented via code, but knowledge of movement cannot be put in this form: we know how to ride a bike, or perform a dance, by doing it, and cannot reduce it to detailed verbal instructions. This means that traditional interaction design techniques cannot capture the feeling of movement well.
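
To make the difficulty concrete, the sketch below shows what hand-coding a single movement looks like. This is a minimal, hypothetical rule (the joint names and thresholds are made up for illustration, not taken from this project): it may fire on some arm raises, but everything that makes the movement expressive is invisible to it.

```python
# A minimal sketch of a hand-coded movement rule, to show why rule-based
# recognition struggles. Joint names and thresholds are hypothetical.

def is_arm_raise(hand_y: float, head_y: float, hand_speed: float) -> bool:
    """Detect an 'arm raise' with fixed thresholds.

    This encodes nothing about timing, style or effort, and breaks for
    users of different heights, seated users, or the same gesture
    performed slowly, sharply or hesitantly.
    """
    return hand_y > head_y and hand_speed > 0.5  # metres, metres/second

# A joyful raise and a reluctant one look identical to this rule, while a
# genuine raise by a seated user may never trigger it at all.
print(is_arm_raise(hand_y=1.8, head_y=1.6, hand_speed=0.7))  # True
```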

That is why a number of design methods have been developed that place body movement and feeling at their centre, for example embodied sketching (Márquez Segura et al. 2016). These methods encourage designers to design by moving, but once a movement has been designed, the interaction must be implemented, which typically means moving back to a screen and keyboard for coding. We therefore propose an immersive, embodied, movement-based tool for both designing and implementing movement interaction.

Machine Learning (ML) is a promising approach to implementing movement interaction, because it allows us to design by providing examples of movement rather than code, and can capture the complex nuance of movement that is hard to represent in programmed rules. However, most current implementations of machine learning are very far from being usable by artists and indie developers; they are difficult even for machine learning engineers, let alone designers. Patel et al. (2008) performed a user study with expert programmers working with machine learning and identified a number of difficulties, including treating methods as a “black box” and difficulty in interpreting results. Enabling domain experts to design using machine learning is therefore not simply a matter of using existing machine learning software; it requires us to fundamentally rethink machine learning in terms of usability. These issues are beginning to be addressed with user-centred techniques in the emerging field of Interactive Machine Learning (IML) (Fails and Olsen 2003; Fiebrink et al. 2011), in which end users train machine learning models by interactively providing and labelling example data, progressively refining the model based on interactive testing.
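
As a rough illustration of that workflow, the following minimal sketch (in the spirit of tools such as Wekinator, but not this project's actual implementation) shows the core IML loop of demonstrating labelled examples, training, and testing. It uses scikit-learn and synthetic data so that it runs on its own; in a real tool, the feature vectors would come from live motion sensing.

```python
# A minimal sketch of the interactive machine learning loop: the designer
# demonstrates labelled movement examples, trains, tests, and refines.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

examples, labels = [], []

def record_example(frame, label):
    """Store one demonstrated movement frame with the designer's label."""
    examples.append(frame)
    labels.append(label)

def train():
    """(Re)train the model on everything demonstrated so far."""
    model = KNeighborsClassifier(n_neighbors=3)
    model.fit(np.array(examples), np.array(labels))
    return model

# Demonstrate: here we fake two gestures with random vectors purely so
# the sketch runs; a real session would record sensor features.
rng = np.random.default_rng(0)
for _ in range(20):
    record_example(rng.normal(0.0, 0.1, size=6), "wave")
    record_example(rng.normal(1.0, 0.1, size=6), "push")

model = train()

# Test by moving: if the prediction is wrong, the designer records a
# correction and retrains, closing the interactive loop.
new_frame = rng.normal(1.0, 0.1, size=6)
print(model.predict([new_frame])[0])  # expected: "push"
```

The key design point is that the loop is fast enough to repeat many times: demonstrating a correction and retraining takes seconds, so refinement happens through movement rather than through code.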

OBJECTIVES

  • O1: Enable the creation of a new generation of movement interfaces for immersive experiences by small-scale creative teams, independent developers, artists and others.
  • O2: Create easy-to-use, immersive tools, based on interactive machine learning, for movement interaction design, in order to help achieve O1.
  • O3: Develop immersive design methodologies and workflows for movement interaction.
  • O4: Use participatory design to understand the needs of our target users (small-scale creative teams, independent developers, and artists), ensuring the tools meet those needs.
  • O5: Better understand the process and challenges of applying Interactive Machine Learning to this domain, in order to inform future research in this area and to suggest future uses.

WHO IS IT FOR?

This project will benefit researchers in the field of Virtual Reality and immersive technologies by providing new ways of designing interaction for VR and a new understanding of how developers do interaction design. More broadly, the research will benefit the field of Human-Computer Interaction with new interaction design tools and a better understanding of how designers can use the body in designing movement interaction. The field of Machine Learning will benefit from the application of HCI methods to that technology, providing a way for ML to move beyond the lab to ordinary users and unearthing a new set of challenges from this new application area. The major pathway to academic impact will be publication in major conferences and journals such as ACM SIGCHI, IEEE Virtual Reality, ACM TOCHI, ACM TiiS and ACM SIGGRAPH.

The tools will also be released as open source so that researchers can use them directly in their work.
Other beneficiaries are the developers of immersive media (VR, AR and Mixed Reality experiences, such as location-based installations). They will benefit from better tools and techniques for designing interaction, making it easier to develop compelling forms of interaction that focus on users’ body movements rather than on traditional button and joystick controls. This will enable them not only to make their experiences better but potentially to create new forms of experience, leading to new markets.
The project focuses on small, independent developers, artists, and underrepresented groups within games and digital media. These SME developers are key both to the economics of the industry (over 95% of UK game developers are SMEs) and to its cultural strength and diversity. Unlike major companies, they do not have access to the resources and specialist expertise that current approaches to complex movement interaction and machine learning require. It is therefore particularly important for them to have the kinds of usable and rapid tools that we intend to develop in this project.
The project works directly with participants from these demographics in our hackathons. These participants will be the first major beneficiaries, with immediate access to the software and training in its use. The software will be directly disseminated to developers as a plugin to a popular development platform such as Unity or Unreal Engine.

METHODOLOGY

We will develop interaction design tools based on interactive machine learning and test these tools through creative, artistic work. The creative work will inform the design of the tools, and the tools will enable the creative work.

TECHNOLOGY

We will develop a movement interaction design tool for immersive media. It will be immersive in the sense that developers will design and implement interaction through their own movements while immersed in an environment, rather than using a traditional screen-based GUI. The tool will use interactive machine learning to train a system to recognise movements and respond to them. The workflow will be for developers to design movements by performing them, and those performances will act as training data for the machine learning system.

The tool will be a plugin to a development platform such as Unity or Unreal Engine, so that it will be readily usable by developers. It will support three basic movement sensing technologies: the controllers available with standard VR systems (e.g. Oculus Touch or VIVE wands); optical motion capture, for more accurate, full-body interaction; and more experimental sensor technologies based around physical computing.
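
One plausible way to support three heterogeneous sensing technologies behind a single learning layer is to normalise every source into a fixed-length feature vector. The sketch below illustrates the idea; the class names and feature sizes are hypothetical stand-ins, not the project's settled architecture.

```python
# A sketch of one way to unify heterogeneous movement sensors behind a
# common interface, so the same IML layer can learn from any of them.
# All class names and feature dimensions here are hypothetical.

from abc import ABC, abstractmethod
from typing import List

class MovementSource(ABC):
    """Anything that can yield a fixed-length feature vector per frame."""

    @abstractmethod
    def read_features(self) -> List[float]:
        ...

class VRControllerSource(MovementSource):
    """Standard VR controllers: position + orientation of each hand."""

    def read_features(self) -> List[float]:
        # Placeholder: a real implementation would query the VR runtime.
        return [0.0] * 14  # 2 hands x (3 position + 4 quaternion)

class MocapSource(MovementSource):
    """Optical motion capture: full-body joint positions."""

    def read_features(self) -> List[float]:
        return [0.0] * 63  # e.g. 21 joints x 3 coordinates

class PhysicalComputingSource(MovementSource):
    """Experimental sensors, e.g. an IMU on a microcontroller."""

    def read_features(self) -> List[float]:
        return [0.0] * 6   # 3 accelerometer + 3 gyroscope axes

def frame_for_training(source: MovementSource) -> List[float]:
    """The learning layer only ever sees a feature vector."""
    return source.read_features()

# The same training pipeline works regardless of the sensor behind it.
for source in (VRControllerSource(), MocapSource(), PhysicalComputingSource()):
    print(len(frame_for_training(source)))  # 14, 63, 6
```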

CREATIVE PRACTICE

The tools will be tested and refined through in-the-wild research with immersive media creators. The aim of this research is to understand how IML is used in real creative work. In order to study a large number of projects that are nonetheless examples of real creative work, we will undertake the first part of the research at Hackathons, GameJams and Choreographic Labs (the equivalents of hackathons for games and for choreographic development, respectively). This research will be complemented by longer creative projects.

Each hackathon will begin by introducing choreographic thinking as a movement-based design method (organised by Gibson), after which participants will be introduced to the prototypes and proceed to develop their own projects. Gibson will develop innovative approaches to creating gesture libraries with the participants. The investigations will address the lack of knowledge surrounding movement quality, kinaesthetic perception and the logic of expression in immersive media, and will explore what choreographic thinking brings to the development of these gestures and to inhabitation, through the use of dance scores drawing on somatics. Participants will be encouraged to explore both familiar and unfamiliar movements, and to design and perform gestures across interface and game worlds. The three types of session will aim to uncover new interaction techniques and to make (respectively) games, experiences, and movement, either as tools for others or as ends in themselves.

Hackathons are a good way to develop an understanding of a wide range of creative projects in a short time, but the projects tend to be small-scale proofs of concept rather than complete works. For that reason, we will augment the hackathons with complete creative work suitable for publication or exhibition. Over the full two years of the project, Gibson/Martelli will develop a movement-based VR project for public exhibition. In addition, we will sponsor two residencies for hackathon participants to develop complete works. These creative projects will use our final prototype tools and will be a mix of arts practice as research and qualitative research. Practice as research will be conducted by Gibson and will centre on studio-based experimentation with HCI systems. This work will develop gestures, forms of movement interaction, avatars and custom-built virtual environments for art galleries and performative contexts. The research will include choreographic practices that investigate somatic sensing in machine learning through interactive environments, as well as critical reflection, analysis and discussion of the bodily experience of immersive interaction, feeding both into the design of interaction and into our understanding of movement interaction in immersive media.

Overall the creative practice research will lead to a deeper understanding of embodied interaction in immersive media and whole-body movement interaction systems. It will form the basis for a ‘smart’ version of embodied HCI and potentially new ways of thinking about and working with Machine Learning and AI.

TEAM

The project team is led by Marco Gillies.

MARCO GILLIES

LEAD

RUTH GIBSON

LEAD

Ruth Gibson is a Creative Fellow at the Centre for Dance Research and a certified teacher in Skinner Releasing Technique. She works across disciplines to produce objects, software and installations in partnership with artist Bruno Martelli as Gibson/Martelli. She exhibits in galleries and museums internationally, creating award-winning projects using computer games, virtual and augmented reality, print and moving image. Ruth has worked as a motion capture performer, supervisor and advisor for Vicon, Motek, Animazoo, Televirtual, and the BBC. A recipient of a BAFTA nomination, a Creative Fellowship from the AHRC, and awards from NESTA, the Arts Council and The Henry Moore Foundation, she won the Lumen Gold Prize and the Perception Neuron contest. Widely exhibited, her work has been shown at the Venice Biennale, SIGGRAPH, ISEA and Transmediale, and is currently touring with the Barbican’s ‘Digital Revolution’. She is PI on Reality Remix, an AHRC/EPSRC Immersive Experiences Award.

REBECCA FIEBRINK

LEAD

Dr. Rebecca Fiebrink is a Senior Lecturer in Computing at Goldsmiths. Her research focuses on designing new ways for humans to interact with computers in creative practice, including on the use of machine learning as a creative tool. She began work on these topics in 2008 and has since authored over 20 related publications. She is the creator of the Wekinator, open-source software for real-time interactive machine learning, downloaded over 10,000 times, and of a MOOC titled “Machine Learning for Artists and Musicians”. Recent grants include “MIMIC: Musically Intelligent Machines Interacting Creatively” (Co-I, AHRC, £806,693), “Supporting Feature Engineering for End-User Design of Gestural Interactions” (PI, EPSRC, £123,787), and “RAPID-MIX: Realtime Adaptive Prototyping for Industrial Design of Multimodal Interactive eXpressive technology” (Co-I, H2020, €2.3k). Fiebrink has worked closely with creative tech start-ups (e.g., Smule) and major industry labs (e.g., Microsoft Research) and given keynotes at the NIPS 2017 Creative Machine Learning workshop, Audio Mostly, Sound and Music Computing, and ISMIR.

PHOENIX PARRY

LEAD

Phoenix Parry creates physical games and embodied experiences. Her work brings people together to raise awareness of our collective interconnectivity. Her current research at Goldsmiths, University of London looks at leveraging our other senses, with a particular focus on sound and skin-based feedback to trigger affective responses. A consummate advocate for women in game development, she founded the Code Liberation Foundation, an organisation that teaches women to program games for free. Since starting in 2012, the project has reached over 3,000 women between the ages of 16 and 60 in the New York and London areas. Fostering professional growth and mentoring new leaders in the field, she strives to infuse the industry with new voices. She is currently a Lecturer in Physical Computing at Goldsmiths, University of London, and programme leader of the Independent Games and Playable Experience MA.