Performing Encryption

Page from ‘Cryptography: A Very Short Introduction’ by Fred Piper and Sean Murphy

We are working with our colleague Susan Kozel on a project called Performing Encryption in Malmö. We’ve been exploring a poetic way of encrypting data, in this case motion capture (mocap) data. So far we’ve been working with the open .htr format for mocap files. These are text files with XYZ translation and rotation coordinates for each of the markers for every frame.

It looks like the sample below:

Mocap data in .HTR format.
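
As a rough illustration (not a proper parser for the format), a few lines of Python can pull the numeric rows out of an .htr-style file. It assumes the data lines are plain whitespace-separated numbers and that header or section lines start with a bracket or a hash, which may not match every exporter; the filename is just a placeholder.

```python
# Rough sketch only: pull the numeric rows out of an .htr-style text file.
# Assumes the data lines are whitespace-separated numbers (frame, Tx, Ty, Tz,
# Rx, Ry, Rz, ...) and that header / section lines start with '[' or '#'.
# "take01.htr" is a placeholder filename.

def read_htr_rows(path):
    rows = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(("[", "#")):
                continue  # skip headers, section markers and comments
            try:
                rows.append([float(p) for p in line.split()])
            except ValueError:
                continue  # skip non-numeric lines (segment names, etc.)
    return rows

rows = read_htr_rows("take01.htr")
print(len(rows), "numeric rows found")
```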

One idea we’ve been working with is taking this data and messing with it somehow to encrypt it. Working in a text editor we went through the file with find & replace and tried a number of operations: replace 3 with 4, replace 87 with 11, and also swapping the axis order so YZX becomes XYZ. You can save the files afterwards, open them in a viewer, and see how the mocap is messed up. We’ve been using Unity to look at it in VR. These operations totally mess up the file. It turns out that replacing a single digit messes up the data more than replacing a two-digit number, because 87 comes up less often than 3. The problem is that if we reverse the operation to decrypt (replacing 4 with 3 again), every number that was originally a 4 also becomes a 3, so we introduce noise into the system. With a free character downloaded from the internet, in VR the mocap takes look like this:

Messed up mocaps
accidental duplications lead to obfuscations
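
For the curious, that find & replace experiment can be sketched in a couple of lines of Python on the raw file text (the filename is a placeholder); it also shows why the operation isn’t cleanly reversible.

```python
# The text-editor experiment in code form: replace every "3" with a "4" in the
# raw file text, then try to reverse it. "take01.htr" is a placeholder.

with open("take01.htr") as f:
    text = f.read()

scrambled = text.replace("3", "4")         # "encrypt": every 3 becomes a 4
unscrambled = scrambled.replace("4", "3")  # try to undo it

# False whenever the original contained any 4s: those have become 3s too,
# which is the accidental duplication / noise mentioned above.
print(text == unscrambled)
```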

So maybe a more rounded way to do it would be to make a gesture, record it, and somehow use that as a key to encrypt the data. The gesture is idiosyncratic and has the advantage of being time-based, so there is more complexity. The encryption can be done programmatically, or in my case just by using a spreadsheet with a formula – there’s an example below. Here row 1 of sequence A is added to the equivalent row in B to give the result below.

Encrypting with gesture

So a 30-second take at 30 fps is 900 frames, which multiplied across every segment of the skeleton actually gives a spreadsheet with 54,000 rows, but anyway…
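
The spreadsheet is really just doing per-row addition, so the same thing can be sketched in Python – the numbers below are made up, purely to show the add-the-key / subtract-the-key round trip.

```python
# Per-row addition, like the spreadsheet formula: each value of the gesture
# "key" (sequence A) is added to the matching mocap value (sequence B).
# The numbers below are made up, just to show the round trip.

gesture_key = [12.4, 13.1, 15.0, 14.2]    # one key value per frame
mocap_vals  = [101.0, 99.5, 98.7, 100.2]  # the mocap column being encrypted

cipher = [a + b for a, b in zip(gesture_key, mocap_vals)]  # obfuscated column
plain  = [c - a for a, c in zip(gesture_key, cipher)]      # key subtracted again

print(cipher)  # mangled values
print(plain)   # matches mocap_vals, so the gesture works as a key
```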

Doing this kind of operation totally messes up the mocap and you get a fully obfuscated file. Here’s a gesture in MotionBuilder – basically you get rid of everything except one marker, in this case the right hand.

Susan K Gesture mocap

Here’s another…

Ruth G gesture mocap

Here’s the PLAINTEXT mocap file of Ruth before encryption…

And here’s the CYPHERTEXT version of the same file after it’s been encrypted with the gesture…

Quite a mangling huh!

Now back to the gesture: when you record movement, no shape gets made, it’s just points moving through space, linked by a skeleton. We tried simultaneously recording the mocap and having the performer in VR using Quill, a VR 3D drawing package. Quill allows users to select different ‘brushes’ and make marks in space. Marks can be saved and exported as 3D .fbx files, which you can then import into other 3D packages.

Making gestures

When you get them into Unity they can look like the image below: Ruth’s is on the left, Susan’s in the middle and my fiddly one on the right. Note how different the three-dimensional form is from the apparently flat image in the screengrab of the mocap. Faster motions give smoother lines.

3 gestures, each 1 second duration

This brings us to another idea: what if the gestures are rematerialised? Something that takes the encryption key out of the digital and into the real world. We are going to have the ‘keys’ cast as bronze objects. In this way they become permanent symbols, freed from the ephemeral nature of the digital; bronze can last for thousands of years. This plays nicely with the performing encryption idea because, of course, someone could look at the bronze and then replicate the gesture; if it was mocapped, they could potentially unscramble the data.

We also began looking at gesture recognition, using the Gesture Recognition plugin for Unity. This widget allows you, in VR, to record a series of similar gestures, which form the brain of a neural network. After editing the brain and giving it a big enough data set (50 versions of the gesture or more) you can run it, and when you make a gesture it will tell you, with an accuracy score, whether you’ve made the one it’s looking for. You can have more than one gesture, and once a gesture is recognised you can use it to trigger an action – in the plugin’s examples it triggers firing fireballs, like a wizard. What we were thinking was to get the user to look at the bronze and then perform the gesture in order to view the mocap in VR. For extra trickiness we could scramble the mocap further with wrong gestures. We started to look at this and ran out of time. Susan is going to see if we can show some of this at the TEI conference (Tangible, Embedded and Embodied Interaction) in Stockholm in 2018.
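
We didn’t get far enough to show anything from the plugin itself, but the underlying idea can be sketched independently of it. The toy Python version below (averaged templates and a distance-based score, not the plugin’s neural network) is only meant to illustrate the record-many-examples-then-score workflow.

```python
import numpy as np

# Toy illustration of gesture matching, NOT the Unity plugin's method:
# average several recordings of a gesture into a template, then score a new
# gesture by how close its resampled trajectory is to that template.

def resample(traj, n=32):
    """Resample an (m, 3) trajectory of XYZ points to n evenly spaced points."""
    traj = np.asarray(traj, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, traj[:, k]) for k in range(3)], axis=1)

def make_template(examples, n=32):
    """Average many recordings of the same gesture (e.g. 50 of them)."""
    return np.mean([resample(e, n) for e in examples], axis=0)

def score(gesture, template):
    """Rough 0..1 accuracy score: closer to 1 means closer to the template."""
    d = np.linalg.norm(resample(gesture, len(template)) - template, axis=1).mean()
    return 1.0 / (1.0 + d)

# Usage sketch: if score(new_gesture, template) is high enough,
# trigger the action (reveal the mocap, fire the fireball, ...).
```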

Download ‘The Bronze Key Performing Encryption’ Paper here