Taxonomy of Interaction

Sam Hill

4th October 2012

The version shown here focuses almost entirely on human-to-object input. The first public beta to be published, v.0.13, is the current version.

 

Conceived in a Pub


We had an idea a couple of months ago (one of those “made-sense-in-the-pub, does-it-still-make-sense-in-the-morning?” type of ideas) that it’d be incredibly useful to have some kind of universal classification of interaction. Nothing fancy, just a straight-up, catch-all taxonomy of human-systems inter-relationships.

The idea was a hierarchical table; a neat, organised and absolutely correct information tree, sprouting in two directions from the middle, i.e. the computer, system, or thing. At one end would be “every” (see below) conceivable form of input, any kind of real-world data that a computer could make sense of (actually, or theoretically). These roots would converge into their parent ideas and link up to the centre. Then, in the other direction would shoot every conceivable output, any kind of effective change that could be made upon another thing, person or environment.
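
To make the shape concrete, here is one way the tree might be sketched in code, with inputs on one side and outputs on the other. The category names are illustrative stand-ins drawn loosely from the examples in this post, not the published chart itself.

```php
<?php
// A sketch of the two-directional tree described above: inputs on one side,
// outputs on the other, converging on the thing in the middle. The category
// names are illustrative stand-ins, not the published chart.
$taxonomy = [
    'thing' => 'the computer, system, or object in the middle',
    'inputs' => [
        'person' => [
            'communication'     => ['verbal', 'haptic', 'physical'],
            'physical activity' => ['walking', 'eating'],
        ],
        'object'        => ['temperature', 'colour', 'density'],
        'environment'   => ['temperature', 'colour', 'density'],
        'abstract data' => ['random', 'abstracted'],
    ],
    'outputs' => [
        // any effective change upon another thing, person or environment
    ],
];
```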

If suitably broad, any known or future ‘interaction’ object or installation would, in theory, be mappable through this process, whether it was a door handle, wind-chime, musical pressure-pad staircase or Google’s Project Glass.

We couldn’t find any existing tables of such breadth in the public domain (though correct me if I’m wrong). It was in fact hard enough to find even parts of such a table. So we decided, tentatively, to see how far we could get making our own. Some of the issues that cropped up were surprising, whilst others loomed in the distance and remained obstinately present until we were forced to address them.

 

Complications and Definitions


There are a couple of important things to be clear on, it seems, when putting together any kind of taxonomy. One of them is choosing an appropriate ordering of common properties. For example, should “physical communication” come under physical activity (alongside walking, eating and other non-communicative movement) or under communication (alongside verbal and haptic communication)? Conventionally, the three elements of an interaction are assumed to be: a person, an object, and an environment. Any fundamental interaction typically occurs between any two, e.g. person-person, person-object, object-environment, etc. These elements seemed like an appropriate fundamental trinity, from which everything else could stem.

The difference between an object and an environment, however, seems mostly (though not universally) to be a matter of scale, especially when it comes to methods of analysis (temperature, colour, density, etc.). We also thought it best to include “abstract data” as a source, essentially to represent any data that might have been created randomly, or where the original relevance had been lost through abstraction.

It can often be desirable to describe something in two different ways. For example, a camera might help recognise which facial muscles are in use (to flare a nostril or raise an eyebrow), but it might also be desirable for an object to recognise these signifiers contextually, as likely indicators of mood (e.g. anger, happiness, fear).

Another issue is granularity. How fine (how deep) should an inspection go? Tunnelling deeper into each vein of enquiry, it becomes difficult to know when to stop, and challenging to maintain consistency.

When we talk about covering “every” kind of input, we mean that it should be possible for an organisation system like this to be all-encompassing without necessarily being exhaustive. A system that broadly refers to “audio input” can encompass the notions of “speech” and “musical instruments”, or incorporate properties like “volume” and “timbre”, without necessarily making distinctions between any of them.

 

Sarcasm and Toasters


One decision we made early on was to categorise human inputs by their common characteristics, not by the input mechanism that would record them. This was because there might be more than one way of recording the same thing (e.g. movement could be recorded by systems as diverse as cameras, sonar and accelerometers). This created an interesting side-effect, as the taxonomy shifted into a far more complex study of human behaviour and bio-mechanics: “what can people do?”. Whilst studying areas of audio, visual and haptic communication, we were especially struck by the sense we were writing the broad specifications for a savvy, sympathetic AI – a successful android/cylon/replicant; i.e. something capable of reading the full range of human action.

Imagine, for example, what it would require for an object to appreciate sarcasm – a toaster, let’s say, that can gauge volume, intonation, emphasis, facial expression, choice of language, timing, socio-cultural circumstances… –  estimate a probability of irony and then respond accordingly.

Such capacity would avoid situations like this:

Does it have a Future?


Developing this project is going to require input from a few experts in their fields: learned specialists (rather than generalists like myself with wiki-link tunnel vision). The taxonomy needs expanding (‘outputs’ are at the moment entirely absent), re-organising and probably some correcting. If you would like to contribute and you’re a linguist, anthropologist, roboticist, physiologist, psychologist, or see some territory or area you could assist with, your insight would be appreciated.

As our research has indicated, there is often more than one way of classifying interactions, but a hierarchy demands we always use the most applicable interpretation. All taxonomies are artificial constructs, of course, but a hierarchy seems to exacerbate the rigid pigeon-holing of ideas. Perhaps the solution is to evolve into something less linear and absolute than a hierarchical taxonomy (like the nested folders of an OS), and instead consider something more nebulous and amorphous (for example the photo-sets and tags in Flickr).

It would also be great to have overlays; optional layers of additional information (such as examples of use, methodologies and necessary instruments). Perhaps the solution really needs to be a bit more dynamic, like an interactive app or site. I wonder if the most effective solution might be an industry-powered, interaction-centric wiki. Such a project would be non-trivial, and would require mobilising a fair few effective contributors. The Interaction Design Foundation have a V1.0 Encyclopaedia and an impressive number of authors, and yet their chapter-led approach to ownership makes this quite different from the more democratic ‘wiki’ approach.

 

Credit


Justas did a great job pulling the early research together. A special thanks is also in order for Gemma Carr and Tom Selwyn-Davis who helped research and compile this chart.

What does remembering feel like?

Ben Barker

2nd October 2012

Since first talking about memory a few months ago, we’ve been playing with a few things that explore its role in how we reflect on experiences, and how we remember; in conversation, it has also become a de facto measure of an experience’s worth. We’ve recorded the podcast with neuroscientist Izzy, and the smell camera is coming along well, more to follow on that shortly. Below are some of the other things we’ve been fiddling with.

 

Hour Day Week Month Year


http://hourdayweekmonthyear.com/

The site asks you to recall an event after an hour has passed, then a day, a week, a month and a year. It started as a space for us to test the methods we use to remember, and to try new ones. Now we’ve given it a user login and a bit of a face lift so that anyone can use it, and we’d love it if you tested your own memory. There is some advice on the site for different approaches you might take. We’ll soon be adding email reminders for when it’s time for your next recall.
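
For anyone curious about the mechanics, the recall schedule is simple enough to sketch. The site’s actual implementation isn’t published, so the code below (in PHP, like the other experiments here) just follows the name of each interval.

```php
<?php
// A rough sketch of the recall schedule described above: given the moment an
// event is logged, work out when each prompt would fall due. This is not the
// site's actual code; the intervals simply follow their names.
function recallSchedule(DateTimeImmutable $logged): array
{
    $intervals = [
        'hour'  => 'PT1H',
        'day'   => 'P1D',
        'week'  => 'P1W',
        'month' => 'P1M',
        'year'  => 'P1Y',
    ];

    $due = [];
    foreach ($intervals as $label => $spec) {
        $due[$label] = $logged->add(new DateInterval($spec));
    }
    return $due;
}

// Example: log an event and print when each recall prompt would arrive.
foreach (recallSchedule(new DateTimeImmutable('2012-10-02 14:00')) as $label => $when) {
    echo $label . ': ' . $when->format('Y-m-d H:i') . PHP_EOL;
}
```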

How does a memory decay?

From the memories that I currently have in process on the site, I’ve found strong visuals, faces and the colour of things are easy to recall, yet the sequence of events is easily muddled and any conversation that took place is much harder to bring back. I’m currently running a number of variations: one where I list everything I saw in a space, another where there is a sequence to events, and I’m beginning one where I break it down by senses, a control of sorts. A year down the line, I’m hoping for a time-capsule quality to the memories, that revealing them will be surprising and there will be traces of change in the way a memory has evolved, like Munch repainting his Sick Child. I have an image that a memory will eventually decay down to its essential element, the bit that will be both the reason it remained and the thumbnail by which I know it.

 

10,000th Day


https://panstudio.co.uk/ten-thousand/

We joked about the fact that no one celebrates their 10,000th day, then a quick calculation showed I’d missed mine, which I found surprisingly disappointing. To help others avoid the same loss, I built a PHP calculator to figure out when it was, based on a given birthday.
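
The calculator itself isn’t reproduced here, but the arithmetic is small enough to sketch. One assumption below is that the birthday counts as day one; the published tool may well count from day zero instead, which would shift the result by a day.

```php
<?php
// A minimal sketch of the calculation: the 10,000th day after a given
// birthday. Counting the birthday itself as day 1 is an assumption here;
// counting from day 0 would shift the result by one day.
function tenThousandthDay(string $birthday): DateTimeImmutable
{
    $birth = new DateTimeImmutable($birthday);
    return $birth->add(new DateInterval('P9999D')); // 9,999 days on from day 1
}

// 10,000 days works out at roughly 27 years and 4-5 months.
echo tenThousandthDay('1985-03-14')->format('jS F Y') . PHP_EOL;
```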

Then I started to obsess over what I did on my 10,000th day. It’s likely that we remember more than we realise, but unlike the systems we compare our memory to, we are poor at relating concepts, which means the events of that day are probably stored, but the date does not evoke them. With a little work I was able to figure it out. This got me thinking about what role technology plays in memory recall. With my past all backed up on servers and hard drives, I became a detective, no longer responsible for my past, but rebuilding it bit by bit from the traces I had left behind: the moments when I had left a digital mark.

This led on to a line of thinking about how our collected digital memory might look in the future. Do we need a new, centralised digital archive? Are we happy that our past (like our identity) has become so distributed across information structures? When we finally give it all to technology, what does our memory look like?

How much of your 10,000th day can you remember? What technologies helped you to do it?

Searching for mine, I went to twitter first, which revealed where I’d been a day after and a few days before. My emails showed what I’d been working on; I checked my bank statements to see if I’d spent any money in the pub in the evening, but that was the day before. My calendar was blank. I checked facebook: nothing. Then I searched lightroom for the date. That was it: pictures of a bike I’d bought for my brother, taken and uploaded to flickr (a check of my text messages later revealed I’d bought it the day before). So that was it. I’d gone to the studio, as my emails show, worked on mock-ups for a website (although the only file created that day was a download of a MySQL database), and visited our upcoming exhibition at This Way Up. Then I went home, cleaned the bike, took pictures and uploaded them. I can remember some of the parts; some I’m filling in from habit. The only part that gives me a physical pull back to the moment is uploading the pictures to flickr. Perhaps the most pleasurable part of the day? I thought it would be more immersive, but it just felt like pieces that didn’t make a whole. There was no sense of self, no surging back of the past. I just knew what I’d done, plainly, more memorial than a visitable idea.

In all of this, it should be noted that our mind’s ability to edit and store only the important details is vital to our sanity. The case of Jill Price highlights this. There was an interesting piece on Channel Four recently too.

 

A Definite Trace


All of the above got me thinking about how camera phones have changed the way I use photography. They capture a more constant stream than was possible before, to be searched through later like digital madeleines, jerking us back to the unremembered. Before camera phones I hadn’t photographed in this way, forgetting the aesthetic and building a chronology of identity. The feed doesn’t have to be temporal either; it could be location, habit or even emotion based.

 

Tom’s GhostCar is a FourSquare account that takes his check-ins from a year ago and visits them in the present. His ghost is walking the streets of Britain. It is a beautiful example of location as a relational concept. As Tom puts it:

It gives me a visceral memory: reminds my bones, my heart, what they felt. (That, for reference, is my defence against nostalgia. This isn’t just about nostalgia, because you might not like what it makes you feel. It’s just about remembering feelings; stopping to pause and remember the passage of time).

“It’s just about remembering feelings.” This is a point we always push: it’s not about the best bits; we only know our lives when we sense all of the experiential range. So when looking through my instagram, I felt frustrated by the inevitable positive spin put on my memories by the desire to capture me at my best. I was guilty of doing what Perec criticised the news for doing (taken from Matt’s recent post on Performance).

The daily papers talk about everything except the daily. The papers annoy me, they teach me nothing. What they recount doesn’t concern me, doesn’t ask me questions and doesn’t answer the questions I ask or would like to ask.
What’s really going on, what we’re experiencing, the rest, all the rest, where is it? How should we take account of, question, describe what happens every day and recurs every day: the banal, the quotidian, the obvious, the common, the ordinary, the infra-ordinary, the background noise, the habitual?

Georges Perec, Approaches to What? 1973

Am I aware of the passing of time if I don’t have the tools to acknowledge the habitual?

At the same time as reading this I had set myself a challenge to remember every day. It was surprisingly easy to take a moment and log a bit of data. It also meant I did more individual things, to ensure days became easier to delineate. Again, as Tom said,

The moment I fired ghostcar up, I realised I needed to start giving it better data so that it’d continue to have meaning a year in the future. So that’s a strange, interesting takeaway: changing my behaviour because I want the fossil record to be more accurate.

In combining this thinking with my new use of photography, my call to action went from ‘I want to remember every day’ to ‘I want to be reminded of every day’. So I put together the watch camera below: a camera, a timer and a flash memory drive. It takes a picture every ten minutes and it goes everywhere I go. A trace of the mundane, the melancholy and the habitual. A reminder of the passing of time, but also a route back through the sequence.
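
The logic behind the prototype amounts to very little. As a rough sketch (not the device’s actual firmware), the timer loop could look something like this, with capture_frame() standing in for whatever the camera module really does.

```php
<?php
// A very rough sketch of the watch camera's timer loop: capture a frame every
// ten minutes and stamp the filename, so the sequence can be walked back
// through later. capture_frame() is a placeholder for the real hardware call.
const INTERVAL_SECONDS = 600; // ten minutes

function capture_frame(string $path): void
{
    // Placeholder: on the prototype this would trigger the camera module
    // and write the frame to the flash drive at $path.
}

while (true) {
    $path = sprintf('frames/%s.jpg', date('Y-m-d_H-i'));
    capture_frame($path);
    sleep(INTERVAL_SECONDS);
}
```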

Wafaa Bilal, the NYC artist, did a similar project in which he had a camera implanted in the back of his head. He was exploring surveillance (his images were immediately available online) and the things we leave behind (the camera was in the back of his head), and many people have done one-image-a-day projects.

It also works as a non-prescriptive documenting of an individual, like Pete’s beautiful map of a year: a graphic of what a ‘you’ behaves like. We’ve talked about this element in our work with Lambeth Collaborative recently. This is more side-effect than intention. I should stress, the images are only intended for me. There is no suggestion that a record of a person’s life is interesting to anyone else. They’re only appearing here as proof.

This is clearly a beta prototype, more about surfacing questions than refining an object. How much of a role should aesthetic and composition play in the photos? How aware should I be when the picture is being taken, and how aware should others be? Where should the camera be located? Clearly watch level isn’t quite right. I’ll deal with these as prototyping continues; glasses seem the obvious next step. There are plenty of other issues too, such as privacy, editing and how we revisit the images. Is instagram the place where I should naturally be encountering these photos?

For now though, I’m more conscious of how a day is made up. I’m always near a laptop. I go to bed far too late. It’s the routine bits that jump out.

As I was posting this, Chris noted the Autographer, which looks really interesting.