IMMERSIVE INTERACTION

4i PROJECT

for Indie Developers with Interactive Machine Learning

INTRO

4i: Immersive Interaction design for Indie Developers with Interactive Machine Learning is an EPSRC-funded project that aims to enable independent developers and artists to design and implement movement-based interaction for immersive media such as Virtual, Augmented and Mixed Reality.

The key to this approach is Interactive Machine Learning (IML), where a design is specified through examples of movement that are used as input to a machine learning algorithm, which “learns” to recognise those movements. Crucially, the process is interactive: users will not simply gather data and send it to the algorithm as a one-off, but gradually add examples to refine and tweak the results, just as a designer iteratively refines a product.
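
To make this concrete, the sketch below shows roughly what such an interactive loop looks like in code, assuming a simple nearest-neighbour recogniser over movement feature vectors. The function names, feature values and choice of model are illustrative only, not the project’s actual tools.

```python
# Minimal sketch of an interactive machine learning loop: the designer adds
# movement examples a few at a time, re-trains, tries the result, and repeats.
# All names, feature values and the 1-nearest-neighbour model are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

examples, labels = [], []                  # accumulated training set
model = KNeighborsClassifier(n_neighbors=1)

def add_example(movement_features, gesture_label):
    """Record one performed movement (as a feature vector) and re-train."""
    examples.append(movement_features)
    labels.append(gesture_label)
    model.fit(np.array(examples), np.array(labels))

def try_it(movement_features):
    """Interactive testing: see what the current model recognises."""
    return model.predict(np.array([movement_features]))[0]

# Designer's loop: perform, test, and add corrections until it feels right.
add_example([0.1, 0.9, 0.2], "wave")
add_example([0.8, 0.1, 0.7], "punch")
print(try_it([0.15, 0.85, 0.25]))      # -> "wave"
add_example([0.5, 0.5, 0.5], "wave")   # refine with a borderline example
```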

Small developers have increased the diversity and creativity of the industry, particularly with more work that is influenced by fine arts and literature (though there is considerable work still to be done, as women and BAME people remain underrepresented in the industry). The tools will be developed collaboratively with independent and underrepresented developers to ensure they meet the needs of our user groups and to understand how they perform immersive interaction design. This user research will be done “in the wild” with working developers and artists via a series of hackathons, game jams and choreographic coding labs.

The challenges of this project cannot be addressed with technology alone, as the creation of immersive movement interaction is as much a creative problem as a technological one. As such, this project will be based on a close interaction between technology and artistic practice.

This project is a partnership between Goldsmiths, University of London; Coventry University; University of the Arts London; Gibson/Martelli; and Code Liberation.

ABOUT

Aiming to support independent developers and artists in designing movement and body-based interaction for Virtual Reality and immersive media, this project builds tools that allow designing by moving, via Interactive Machine Learning. Better tools and working processes will enable developers to create richer movement interaction for players, audiences and end-users.
 
The project has two major, intertwined elements: first, developing interaction design tools based on interactive machine learning; and second, testing these tools through creative, artistic work. The creative work will inform the design of the tools, and the tools will enable the creative work. The tool will be a plugin to a development platform such as Unity or Unreal Engine, supporting three basic movement sensing technologies: the controllers available with standard VR systems; optical motion capture, for more accurate, full-body interaction; and more experimental sensor technologies based around physical computing.
 
Over the full two years of the project, Gibson/Martelli will develop a movement-based VR experience for public exhibition using the final prototype tools; this work will be a mix of arts practice as research and qualitative research. Practice as research will be conducted by Gibson and will centre on studio-based experimentation with HCI systems. This work will develop gestures, forms of movement interaction, avatars and custom-built virtual environments for art galleries and performative contexts. During the project, a number of hackathons will expose the tools to indie game developers and artists.

BACKGROUND

Among the most significant recent developments in media is the emergence of new immersive technologies, including Virtual Reality, which transports users to a fully 3D graphical environment via head-mounted displays, and Augmented Reality, where digital objects are overlaid on the real world. Together with a range of related display technologies, these experiences are now grouped under the general term ‘Mixed Reality’. User experiences range from a single person at home in a virtual reality game, to social, multi-user environments in the public spaces of a museum or art gallery, to ‘immersive theatres’ combining real-world elements with technologically enhanced, virtual components. What unites all of these is that the audience member is fully surrounded by an interactive immersive environment, one which is partially virtual but can feel entirely real. The most compelling experiences are those with which the user can interact as if it were the real world: walking around freely or picking up objects to examine them.

This comes at a time when the UK games and digital media industries are seeing increasing participation by small independent (“indie”) developers and creative SMEs, resulting in an increase in the creativity and diversity of the digital creative sector, for example the rise of “art” games that offer deeper emotional experiences than traditional games (Cole et al. 2015) and that are strongly influenced by the fine arts. However, many are still excluded; women, in particular, remain heavily underrepresented in the games and digital creative industries. The creation of immersive media needs to be accessible both to existing small developers and to those who are currently underrepresented. This project aims to achieve that by developing tools and creative processes, and doing so in collaboration with three groups of target users: “mainstream” indie game developers, young women aspiring to enter the industry, and fine artists working in the computational domain. This cannot be simply technology research; to build successful tools, the technology must be developed within the context of artistic and creative work.


Modern game engine platforms like Unity or Unreal play a big part in making game development accessible to small teams and promise to do the same for immersive media. They make the elements that immersive media share with games (animated 3D objects and environments) easy to create. However, easy-to-use tools for the elements that are unique to VR are yet to emerge. The most important of these is how we interact with immersive experiences. For regular screen-based games, ‘traditional’ interaction via keyboards, joysticks and touchscreens works well, but those games are not trying to make their players feel as if they are physically inside a virtual world. This sense of being in another world, called “Presence”, is characterised by Slater (2009) as consisting of a number of illusions. The first, place illusion (the illusion of being in another place, in virtual reality and forms of mixed reality), is supported by standard VR hardware such as head-mounted displays that allow us to turn our heads and have our view updated.

The second illusion, plausibility, is the illusion that the world we are in, and the objects in it, are real. Slater’s theory proposes that virtual things feel real if they respond to us in the way that real objects do: the more our interaction with virtual things is like our interaction with real things, the more real they feel. This implies that we should interact with an immersive experience in the same way we interact with the real world: through our physical body movements (Gillies 2016), not mediated by control devices. A third illusion, embodiment (Kilteni et al. 2012), is the sense of having a body in the immersive experience, which is also highly dependent on movement.


To have a strong sense of presence, users need compelling ways of interacting that engage their whole body. For users to experience these forms of interaction, developers need to be able to design them. While movements such as picking up and manipulating objects are straightforward to design (the focus of the design is on the object, not the movement), there are many forms of movement interaction that are not well supported by current technologies, primarily those that rely on recognising and interpreting how people move. For example: a VR music experience that responds when you dance in certain ways; an augmented reality fencing game that recognises different types of swordplay; or an immersive story where characters respond when you express emotions through your body. All of these are potentially rich and compelling experiences; however, none of the movements can easily be defined mathematically, or described in detail in any way other than actually performing them. These types of body movement interaction are hard to design because they rely on tacit and embodied knowledge (Gillies 2016). Most graphical interfaces rely on text and symbols implemented via code; however, knowledge of movement cannot be put into this form: we know how to ride a bike, or perform a dance, by doing it, and cannot put it into detailed verbal instructions. That means that traditional interaction design techniques cannot capture the feeling of movement well.

That is why a number of design methods have been developed that place body movement and feeling at their centre, for example, embodied sketching (Márquez Segura et al. 2016). 

This encourages designers to design by moving, but once a movement has been designed, the interaction must be implemented, which typically means moving back to a screen and keyboard for coding. 

Therefore we propose developing an immersive, embodied, movement-based tool for both designing and implementing movement interaction.

Machine Learning (ML) is a promising approach to implementing movement interaction because it allows us to design by providing examples of movement rather than code, and can capture the complex nuance of movement that is hard to represent in programmed rules. However, most current implementations of machine learning are very far from being usable by artists and indie developers. They are difficult even for machine learning engineers, let alone designers. Patel et al. (2008) performed a user study with expert programmers working with machine learning and identified a number of difficulties, including treating methods as a “black box” and difficulty in interpreting results. Enabling domain experts to design using machine learning is therefore not simply a matter of using existing machine learning software, but requires us to fundamentally rethink machine learning in terms of usability. These issues are beginning to be addressed with user-centred techniques in the emerging field of Interactive Machine Learning (IML) (Fails and Olsen 2003; Fiebrink et al. 2011), in which end users train machine learning models by interactively providing and labelling example data, progressively refining the model based on interactive testing.
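
As an illustration of how example-based training can open up the “black box”, the sketch below reports which labelled training example a new movement matched and how closely, so a designer can see why something was (or was not) recognised. The nearest-neighbour approach and all names are our own illustrative assumptions, not the methods of the cited systems.

```python
# Sketch: making an example-based recogniser less of a "black box" by reporting
# which labelled training example a new movement matched, and how closely.
# The nearest-neighbour model and all names are illustrative assumptions.
import numpy as np

train_X = np.array([[0.1, 0.9], [0.8, 0.1], [0.2, 0.8]])  # movement feature vectors
train_y = ["wave", "punch", "wave"]                        # designer-given labels

def recognise_with_explanation(x):
    """Return (label, index of matched example, distance to that example)."""
    dists = np.linalg.norm(train_X - np.array(x), axis=1)
    nearest = int(np.argmin(dists))
    return train_y[nearest], nearest, float(dists[nearest])

label, which, dist = recognise_with_explanation([0.15, 0.85])
print(f"recognised '{label}' (closest to example #{which}, distance {dist:.2f})")
# A large distance signals that the designer should record more examples
# near this movement before trusting the result.
```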

OBJECTIVES

  • O1: Enable the creation of a new generation of movement interfaces for immersive experiences by small-scale creative teams, independent developers, artists and others.
  • O2: Create easy-to-use, immersive tools, based on interactive machine learning, for movement interaction design, in order to help achieve O1.
  • O3: Develop immersive design methodologies and workflows for movement interaction.
  • O4: Use participatory design to understand the needs of our target users (small-scale creative teams, independent developers, and artists), ensuring the tools meet those needs.
  • O5: Better understand the process and challenges of applying Interactive Machine Learning to this domain, in order to inform future research in this area and to suggest future uses.

WHO IS IT FOR?

This project will benefit researchers in the field of Virtual Reality and immersive technologies by providing new ways of designing interaction for VR and a new understanding of how developers do interaction design. More broadly, the research will benefit the field of Human-Computer Interaction with new interaction design tools and a better understanding of how designers can use the body in designing movement interaction. The field of Machine Learning will benefit greatly from the application of HCI methods to that technology, providing a way in which ML can move beyond the lab to ordinary users and unearthing a new set of challenges from this new application area. The major pathway to academic impact will be publication in major conferences and journals such as ACM SIGCHI, IEEE Virtual Reality, ACM TOCHI, ACM TiiS and ACM SIGGRAPH.

The tools will also be released as open-source so researchers can use them directly in their work.
Other beneficiaries are the developers of immersive media (VR, AR and Mixed Reality experiences, such as location-based installations). They will benefit from having better tools and techniques for designing interaction, making it easier to develop more compelling forms of interaction that focus on users’ body movements rather than on traditional button/joystick input. This will not only enable them to make their experiences better but potentially allow them to create new forms of experience, leading to new markets.
The project focuses on small, independent developers, artists, and underrepresented groups within games and digital media. These SME developers are key both to the economics of the industry (over 95% of UK game developers are SMEs) and to its cultural strength and diversity. Unlike major companies, they do not have access to the resources and specialist expertise required for current approaches to complex movement interaction and for the use of machine learning. It is therefore particularly important for them to have the types of usable and rapid tools that we intend to develop in this project.
The project works directly with participants from these demographics in our hackathons. Our participants will be the first major beneficiaries, with immediate access to the software and training in its use. The software will be directly disseminated to developers as a plugin to a popular development platform such as Unity or Unreal Engine.

METHODOLOGY

We will develop interaction design tools based on interactive machine learning and test these tools through creative, artistic work. The creative work will inform the design of the tools, and the tools will enable the creative work.

TECHNOLOGY

We will develop a movement interaction design tool for immersive media. It will be immersive in the sense that developers will design and implement interaction through their own movements while immersed in an environment, rather than using a traditional screen-based GUI. The tool will use interactive machine learning to train a system to recognise movements and respond to them. The workflow will be for developers to design movements by performing those movements, which will act as training data for the machine learning system.
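
Under the assumption that the sensor delivers a stream of 3D controller positions, “designing a movement by performing it” might reduce to something like the sketch below: record frames while the designer performs, resample the recording to a fixed length so performances of different durations are comparable, and store the result as a labelled training example. The names and the resampling choice are illustrative, not the project’s actual API.

```python
# Sketch: turning a performed movement into a labelled training example.
# Assumes the sensor yields a sequence of (x, y, z) controller positions;
# function names and the resampling choice are illustrative assumptions.
import numpy as np

def featurise(recording, n_samples=16):
    """Resample a variable-length recording of 3D positions to a fixed-length
    feature vector so performances of different durations can be compared."""
    recording = np.asarray(recording, dtype=float)       # shape (frames, 3)
    t_old = np.linspace(0.0, 1.0, len(recording))
    t_new = np.linspace(0.0, 1.0, n_samples)
    resampled = np.stack(
        [np.interp(t_new, t_old, recording[:, axis]) for axis in range(3)],
        axis=1,
    )
    return resampled.flatten()                           # shape (n_samples * 3,)

training_set = []   # list of (feature_vector, label) pairs fed to the learner

def record_performance(frames, label):
    """Called when the designer performs a movement and gives it a name."""
    training_set.append((featurise(frames), label))

# e.g. a short 'raise hand' performance captured over a few frames:
record_performance([(0, 0.0, 0), (0, 0.4, 0), (0, 0.9, 0.1)], "raise_hand")
```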

The tool will be a plugin to a development platform such as Unity or Unreal Engine so that it will be readily usable by developers. It will support three basic movement sensing technologies: the controllers available with standard VR systems (e.g. Oculus Touch or VIVE wands); optical motion capture, for more accurate, full-body interaction; and more experimental sensor technologies based around physical computing.
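
Supporting three very different sensing technologies suggests a thin abstraction layer: each sensor type exposes the same “read the current movement features” call, so the learning pipeline never needs to know whether the data came from a controller, a motion capture rig or a custom physical-computing device. The sketch below illustrates the idea with hypothetical class names; the real plugin would live inside the engine’s own scripting environment rather than Python.

```python
# Sketch of a sensor abstraction: every input device yields the same kind of
# feature frame, keeping the learning pipeline sensor-agnostic.
# Class and method names are hypothetical, not the project's actual plugin API.
from abc import ABC, abstractmethod
from typing import Sequence

class MovementSensor(ABC):
    @abstractmethod
    def read_frame(self) -> Sequence[float]:
        """Return the current movement features as a flat list of floats."""

class VRControllerSensor(MovementSensor):
    def read_frame(self) -> Sequence[float]:
        # e.g. position + orientation of each hand controller
        return [0.0] * 14          # placeholder values

class OpticalMocapSensor(MovementSensor):
    def read_frame(self) -> Sequence[float]:
        # e.g. 3D positions of a full-body marker set
        return [0.0] * 3 * 37      # placeholder values

class PhysicalComputingSensor(MovementSensor):
    def read_frame(self) -> Sequence[float]:
        # e.g. accelerometer / bend-sensor readings from a custom device
        return [0.0] * 6           # placeholder values

def sample_movement(sensor: MovementSensor, n_frames: int) -> list:
    """Collect a short window of frames from whichever sensor is in use."""
    return [list(sensor.read_frame()) for _ in range(n_frames)]
```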

CREATIVE PRACTICE

The tools will be tested and refined in the context of in-the-wild research with immersive media creators. The aim of this research is to understand how IML is used in real creative work. In order to study a large number of projects that are nonetheless examples of real creative work, we will undertake the first part of the research at hackathons, game jams and choreographic labs (the equivalents of a hackathon for games and for choreographic development, respectively). This research is complemented by longer creative projects.

Each hackathon begins by introducing choreographic thinking as a movement-based design method (organised by Gibson), after which participants will be introduced to the prototypes and proceed to develop their own projects. Gibson will develop innovative approaches to creating gesture libraries with the participants. The investigations will address the lack of knowledge surrounding movement quality, kinaesthetic perception and the logic of expression in immersive media, and explore what choreographic thinking brings to the development of these gestures and to inhabitation, through the use of dance scores drawing on somatics. Participants will be encouraged to explore both familiar and unfamiliar movements and to design and perform gestures across interface and game worlds. The three types of session will aim to uncover new interaction techniques and to make, respectively, games, experiences and movement, either as tools for others or as ends in themselves.

Hackathons are a good way to develop an understanding of a wide range of creative projects in a short time, but the projects tend to be small-scale proofs of concept rather than complete works. For that reason, we will augment the hackathons with complete creative work suitable for publication or exhibition. Over the full two years of the project, Gibson/Martelli will develop a movement-based VR project for public exhibition. In addition, we will sponsor two residencies for hackathon participants to develop a complete work. These creative projects will use our final prototype tools and will be a mix of arts practice as research and qualitative research. Practice as research will be conducted by Gibson and will centre on studio-based experimentation with HCI systems. This work will develop gestures, forms of movement interaction, avatars and custom-built virtual environments for art galleries and performative contexts. The research will include choreographic practices that investigate somatic sensing in machine learning through interactive environments, and critical reflection, analysis and discussion of the bodily experience of immersive interaction, feeding both into the design of interaction and into our understanding of movement interaction in immersive media.

Overall, the creative practice research will lead to a deeper understanding of embodied interaction in immersive media and of whole-body movement interaction systems. It will form the basis for a ‘smart’ version of embodied HCI and potentially for new ways of thinking about and working with Machine Learning and AI.

TEAM

The project team is led by Marco Gillies.

MARCO GILLIES

LEAD

Marco Gillies is Principal Investigator on the 4i Project. Marco’s research centres on how we can create technologies that work with embodied, tacit human knowledge. He has many years’ experience of research into how to generate non-verbal communication for animated virtual characters, particularly for social interaction in virtual reality. His approach focuses on the role actors and performers can play in creating autonomous characters. He has also worked on other forms of immersive experience and embodied interaction, particularly applied to immersive theatre and performance. His recent research has been on human-centred machine learning, in which humans guide machine learning algorithms interactively as a way of making use of tacit human knowledge in artificial intelligence systems.

RUTH GIBSON

LEAD

Ruth Gibson is a Reader at the Centre for Dance Research and a certified teacher in Skinner Releasing Technique. She works across disciplines to produce objects, software and installations in partnership with artist Bruno Martelli as Gibson/Martelli. She exhibits in galleries and museums internationally, creating award-winning projects using computer games, virtual and augmented reality, print and moving image. Ruth has worked as a motion capture performer, supervisor and advisor for Vicon, Motek, Animazoo, Televirtual and the BBC. A recipient of a BAFTA nomination, a Creative Fellowship from the AHRC, and awards from NESTA, the Arts Council and The Henry Moore Foundation, she won the Lumen Gold Prize and the Perception Neuron contest. Widely exhibited, her work has been shown at the Venice Biennale, SIGGRAPH, ISEA and Transmediale, and is currently touring with the Barbican’s ‘Digital Revolution’. She is PI on Reality Remix, an AHRC/EPSRC Immersive Experiences Award.

REBECCA FIEBRINK

LEAD

Rebecca Fiebrink is a Reader at the Creative Computing Institute at University of the Arts London (primary affiliation) and in Computing at Goldsmiths, University of London. She is the developer of the Wekinator, open-source software for real-time machine learning, and she is the creator of a MOOC titled “Machine Learning for Artists and Musicians.” Much of her work is driven by a belief in the importance of inclusion, participation, and accessibility: she works frequently with human-centred and participatory design processes. Current and recent projects include creating new accessible technologies with people with disabilities, designing inclusive machine learning curricula and tools, and applying participatory design methodologies in the digital humanities. Dr Fiebrink has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule. She has performed with numerous musical ensembles as both an electronic and acoustic musician. She holds a PhD in Computer Science from Princeton University.

PHOENIX PERRY

LEAD

Phoenix Perry creates physical games and embodied experiences. Her work looks for opportunities to bring people together to raise awareness of our collective interconnectivity. Current research underway at Goldsmiths, University of London looks at leveraging our other senses, with particular focus on sound and skin-based feedback to trigger affective response. A consummate advocate for women in game development, she founded the Code Liberation Foundation, an organisation that teaches women to program games for free. Since starting in 2012, this project has reached over 3,000 women aged 16 to 60 in the New York and London areas. Fostering professional growth and mentoring new leaders in the field, she strives to infuse the industry with new voices. Presently, she leads an MSc and a BSc in Creative Computing at UAL’s Creative Computing Institute. Her speaking engagements include A MAZE, GDC, Games for Change, The Open Hardware Summit, Indiecade, Comic Con, Internet Week, Create Tech, IBM Dev Pulse, Montreal International Games Summit and NYU Game Center, among others. Perry’s creative work spans a large range of disciplines including drawing, generative art, video, games, interfaces and sound. Her projects have been seen worldwide at venues and festivals including GDC, E3, Come Out and Play, Maker Faire at the New York Hall of Science, Lincoln Center, Transmediale, Yerba Buena Center for the Arts, LACMA, Babycastles, European Media Arts Festival, GenArt, Seoul Film Festival and Harvestworks.

NICOLA PLANT

RESEARCHER

Nicola Plant is a new media artist, researcher and developer, currently working as a researcher at Goldsmiths, University of London on a project developing machine learning tools for movement interaction design in immersive media. She holds a PhD in Computer Science from Queen Mary University of London, focusing on embodiment, non-verbal communication and expression in human interaction. Her artistic practice specialises in movement-based interactivity and motion capture, creating interactive artworks that explore expressive movement within VR.

CARLOS GONZALEZ DIAZ

RESEARCHER

Carlos Gonzalez Diaz is a PhD candidate in the Intelligent Games and Games Intelligence (IGGI) Centre for Doctoral Training at the University of York, Goldsmiths and Queen Mary universities. His research focuses on how the use of interactive machine learning in the design of movement interactions for virtual reality games can affect player experience. During his PhD, Carlos has collaborated with Sony Interactive Entertainment R&D, researching the PSVR game system, and participated in a Google-funded project to develop an interactive machine learning framework for the Unity3D game engine. He has recently co-organised academic events, serving as one of the chairs for ACM CHI PLAY 2019 and IEEE CoG 2019.

MICHAEL ZBYSZYNSKI

RESEARCHER

Michael Zbyszyński is a lecturer in the Department of Computing at Goldsmiths, University of London, where he teaches perception & multimedia computing, live electroacoustic music, and real-time interaction. His research involves applications of interactive machine learning to musical instrument design and performance. As a musician, his work spans from brass bands to symphony orchestras, including composition and improvisation with woodwinds and electronics. He has been a software developer at Avid, SoundHound, Cycling ’74, and Keith McMillen Instruments, and was Assistant Director of Pedagogy at UC Berkeley’s Center for New Music and Audio Technologies (CNMAT). He holds a PhD from UC Berkeley and studied at the Academy of Music in Kraków on a Fulbright Grant. His work has been included in Make Magazine, the Rhizome Artbase, and on the ARTSHIP recording label.

CLARICE HILTON

RESEARCHER

Clarice Hilton is a creative technologist and researcher specialising in Unity and immersive artwork. She is a researcher at Goldsmiths, University of London, developing a movement-based tool for intuitively designing interaction in Unity using machine learning. In her interdisciplinary practice she collaborates with filmmakers, dance practitioners, theatre makers and other artists to explore participatory and embodied experiences. She developed an interactive puppetry and AR touring show, If Not Here… Where?, with The Little Angel and Great Ormond Street Hospital. She was the creative technologist on SOMA, a virtual reality experience by Lisa May Thomas exploring the somatic relationship between the physical and the virtual in VR. She worked as a developer on The Collider, developed by Anagram, which has toured internationally at Tribeca, Venice Film Festival, IDFA DocLab and Sandbox Immersive Festival (best immersive artwork), and was named one of the top immersive experiences of 2019 by Forbes. She previously taught Interactive Storytelling and Unity at UCL on the Immersive Factual Storytelling course.

BRUNO MARTELLI

ARTIST

Bruno Martelli’s practice examines figure and landscape, transposing sites to create ambiguous topographies that explore the relationship between the natural and the artificial. He works with live simulation, performance capture, installation and video to create immersive virtual realities. He holds a doctorate in Immersive Environments from RMIT. He has been commissioned by Wallpaper, Selfridges, the Henry Moore Foundation, the Barbican and NESTA, and his AHRC projects include ‘Error Network’, ‘Capturing Stillness – visualisations of dance through motion capture technologies’ and ‘Reality Remix’. He led serious-gaming projects to create permanent installations at James Cook University Hospital, Middlesbrough, for the ‘Healing Arts Project’, and at Ashfield School in Leicester as part of the ‘Building Schools for the Future’ programme. He directed motion capture for an award-winning UNICEF animation, and his artworks have been commissioned by Great Ormond Street Hospital Trust. Based in London, Bruno collaborates with artist Ruth Gibson as Gibson/Martelli. Their first work together was BAFTA-nominated, and recently their ground-breaking ‘MAN A’ project won the Lumen Gold Prize.
