We’ve won the Playable City Award!

Ben Barker

21st January 2013

We are excited to announce that our project Hello Lamp Post! has been selected for the Playable City Award. It’s a real surprise – we still can’t quite believe it. When we saw the quality of the shortlist, with work from so many names that we respect, we never imagined being chosen. We’re thrilled and can’t wait to get working. Big thanks to Tom and Gyorgyi for their work too.


We’re also really grateful to the judges for their comments, some of which are below.

Imogen Heap said: ‘I love this for its whispers on the street, guardians in dark corners, humanising our cities’ appendages whose eyes and ears now have a voice. Vessels for an ever evolving conversation, connecting us together. They were there all along!’

Tom Uglow said: “Hello Lamp Post! stood out with a potential for both art and play using existing urban furniture. It points to a future made up of the physical objects already around us, the ‘internet of things’, and the underlying complexity is made simple and easy for people by just using SMS for this project. Poetry and technology combine to create subtle and playful reflections of the world we live in. It filled me with a childish delight.”

Claire Doherty says: “We were enchanted by this proposal and particularly loved the way it challenged the prevalence of mass-entertainment and spectacle, revealing an invisible ‘soft city’ – the exchanges and incidents that create a city’s social fabric. It’s rare to find a proposal which combines those intimate exchanges with the humour and playfulness of Hello Lamp Post!”


Clare Reddington, Judging Panel Chair, says: “We were really excited by the applications we received and by the comments and questions from audiences about the short-listed entries. The judges had a difficult decision to make but have selected an unusual and innovative project, which responds perfectly to the theme and seems very apt for Bristol. We will certainly have some challenges to make sure the project reaches as many people as possible, but I am sure people will respond with curiosity and warmth, and I am very much looking forward to waking up some street furniture this summer.”

We’ll keep you updated as the project develops, and look forward to developing the ideas and building the project. Thanks for all the support during the shortlisting process.

Transformations for Experience

Sam Hill

10th January 2013

A while back I mentioned our theory-in-progress: that there are two kinds of design intervention that can improve the human experience. The first are designed ‘events’: finite moments in time, with their own contexts, during which things happen. Lots of people work in producing consumable experiential events, even if they don’t necessarily view them this way – certainly performers, game makers and interaction designers do; but also musicians, film-makers, artists, restaurateurs, etc. etc.

The other intervention type, however, is a little bit trickier and much less common. These are transformations, or augmentations – finding constant, passive, sustainable ways of being. How do you squeeze more life out of everyday living? We’ve identified three broad categories of transformation that would allow the collection of more experience value: sensory augmentation, memory augmentation and attitudinal re-evaluation.

 

1. Sensory Augmentation


Sensory Augmentation is ‘improving’ the way we interpret the world, which could be done in many ways:

Augmenting our existing senses

We could, theoretically, take our existing senses and improve them with the following abilities:

  • Perceiving beyond our current range (e.g. our vision does not include infra-red or ultra-violet light; our hearing is restricted to a narrow band of frequencies)
  • Detecting things from greater distances (e.g. sharpness of vision, smelling blood in the water that originates from far away)
  • Distinguishing subtle differences between similar sensory inputs (e.g. tasting different varieties of grape in wines, or being able to sing pitch-perfectly)
  • Isolating a particular element amongst broad and varied sources (e.g. picking out a particular voice in a bar)
  • Processing input more quickly (e.g. seeing movement at a faster “frame-rate”, as many birds can)
  • Discerning subtle rates of change (in temperature, light, speed)
  • Observing in a broader directional field (e.g. having greater peripheral vision)

Here’s part of a larger Mezzmer info-graphic doing the rounds. It illustrates how awesomely badass the mantis shrimp’s vision is, relative to ours:

 

Senses seen in other animals, not analogous to human senses

With the aid of developing tech, we might be able to equip ourselves with entirely new senses, inspired by other organisms in nature, such as:

  • echo-location, such as that used by bats or dolphins (granted, some people have mastered echo-location too)
  • chemical detection via a vomeronasal organ, like in snakes
  • electroreception, as seen in sharks
  • magnetoception, as seen in birds

Data-centric augmentation

Contextual data could aid our perception and navigation of the social, human-constructed world:

  • universal translators and other aids for communication
  • diagrammatic vision – abstract visualisation of intangible things e.g. showing the electric field around an object, or the presence of radiation
  • annotated vision – providing ancillary data about things seen

Non-naturally occurring senses – including the fantastical

  • “x-ray” vision – seeing ‘through’ solid things
  • thermography – perceiving temperature (edit: some snakes have a crude form of this)
  • tele-sensation – tactile sensation from a distance, perhaps through an avatar/ slave-sensor
  • telepathy – non-verbal/ non-physical communication

Gregory McRoberts used an Arduino LilyPad with ultrasonic and infrared sensors to augment his partially-sighted eye, providing distance and temperature data.
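For anyone curious how a device like this might hang together, here’s a minimal sketch in the same spirit. The hardware choices are our own assumptions for illustration – an HC-SR04-style ultrasonic module, an analogue infrared sensor, and a buzzer and LED standing in for whatever feedback McRoberts’s build actually uses:

```cpp
// Sensory-augmentation sketch (illustrative only): nearer objects raise
// the buzzer pitch, warmer readings brighten the LED.
const int TRIG_PIN = 2;   // HC-SR04 trigger (assumed wiring)
const int ECHO_PIN = 3;   // HC-SR04 echo
const int IR_PIN   = A0;  // analogue infrared / thermopile sensor
const int BUZZ_PIN = 9;   // piezo buzzer
const int LED_PIN  = 10;  // PWM-capable LED pin

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(BUZZ_PIN, OUTPUT);
  pinMode(LED_PIN, OUTPUT);
}

long readDistanceCm() {
  // Fire a 10-microsecond pulse and time the echo; ~58 us per cm.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000); // 0 on timeout
  return duration / 58;
}

void loop() {
  long cm = readDistanceCm();
  int warmth = analogRead(IR_PIN); // raw 0-1023 reading

  // Map distance (5-200 cm) onto pitch, and warmth onto LED brightness.
  int pitch = map(constrain(cm, 5, 200), 5, 200, 2000, 200);
  tone(BUZZ_PIN, pitch);
  analogWrite(LED_PIN, map(warmth, 0, 1023, 0, 255));

  delay(100);
}
```

The point isn’t the specific mapping, but that a handful of cheap components is enough to start experimenting with an extra sense.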

 

But… Would Sense Augmentation Really Increase Experience Value?

We’re postulating on the fly, to be honest. It makes sense that if sensory capacity were enhanced, one would get more from life, but we don’t really have any evidence to back this up. So perhaps we should consider it an opportunity for discourse. There are, after all, a couple of considerations…

The first consideration is feasibility. Can we improve our capacity for greater sensation? Perhaps, even with the greatest bionic and genetic development, we couldn’t enhance our senses beyond a certain limiting factor. Even if we could, it seems our minds can only process a finite amount of sensory stimulation at once.

The second consideration is: should we seek to augment our senses? They are, after all, a product of our evolution and should (you’d think) be somewhat attuned to our needs – we’ve actually lost some superfluous ancestral sensory abilities, such as a stronger olfactory ability, as recently as the last couple of hundred-thousand years. It may be that not only does further sensory development fail to provide an evolutionary edge, but possessing it could even reduce quality of life.

For example, Gregory McRoberts says that anyone trying to use his eye-patch on a fully-functioning eye suffers from a form of ‘helmet fire’ – a term coined in aviation, where stress-induced task saturation, exacerbated by helmet HUDs, impedes pilots’ ability to function and make decisions.

See also the clip below of ‘binocular soccer’ – even though binoculars are an accepted form of visual augmentation, if they can’t integrate passively and sympathetically with the other demands of our vision (depth perception and peripheral awareness) they also have an impeding effect:

 

2. Memory Augmentation


Specifically, enhancing experience would require augmenting autobiographical memory – episodic memory in particular: recollecting times, places, associated emotions, and other contextual knowledge.

Augmenting human memory would involve improving our ability to:

  • record memories – encoding experiences exhaustively, with depth and detail
  • retain memories – remembering experiences for longer/ indefinitely
  • recall memories – accessing memories easily, quickly, completely and accurately

Pragmatically this can be done in part through existing stuff – tools (such as cameras), systems (such as diaries) or techniques (such as mnemonics) – but it could conceivably become achievable in the future through genetics or neural-interfacing bionics.

Of course, the experiences themselves aren’t enhanced, but the memories of them are more exactly and comprehensively stored – so all memories would retain more experiential value.

Some people already have superior autobiographical memory – the condition is known as Hyperthymesia and is incredibly rare. People with the condition can recall every detail of their lives with as much accuracy as if it had happened moments ago. Once again, however, we need to question whether this capability really improves the human experience. The condition isn’t always regarded as a ‘blessing’; some of those affected experience it as a burden, and many spend a great deal of time dwelling on the past. The condition challenges the traditional notion of what healthy memory is, prompting the attitude that “it isn’t just about retaining the significant stuff. Far more important is being able to forget the rest.” [(via Wikipedia) Rubin, D. C., Schrauf, R. W., & Greenberg, D. L. (2003). Belief and recollection of autobiographical memories. Memory and Cognition, 31, 887–901.]

 

3. Attitudinal Change/ Value Re-assessment


Finally, sense- and memory-augmentation won’t reap much benefit for any individual unless they have a sincere interest in exploiting such capacity to gain more experience value.

Attitudinal change, based on a reassessment of values, is a much less technology-orientated intervention. Instead it is a cognitive shift; a willingness to perceive one’s environment more actively, and with a greater attention to detail.

On the one hand, this can be thought of as a learned skill. Sherlock Holmes, for example, is the archetypal ‘observer’ – someone who is ceaselessly, lucidly, taking in the details around him, analysing them and extracting wisdom. On the other hand, there is also a broader philosophical element, or at the very least a set of arguments – statements for why seeking out and making the most of life’s variety is of benefit to us.

To cause a change in attitude and behaviour would require:

  • Learning how to stand back during autobiographical events and have an absolute, lucid, sensory attentiveness to what’s happening.
  • Being able to objectively reflect on one’s own motivations, decisions, actions and emotional state.
  • Training emotional and intellectual post-analysis (reflection, critiquing).
  • Understanding why it is desirable to maximise one’s experiences.

People talk about an ability to “live in the now”. Often it is seen as a good thing, though sometimes the phrase is used pejoratively to imply an inability to see the consequences of actions. It is contrasted with both those obsessed with and living inside of their memories, and those who cannot appreciate the here and now because they are constantly looking for the next thing.

Mindfulness

Mindfulness is an essential tenet of Buddhism. Considered to be one of the seven factors for achieving spiritual enlightenment, it is a ready-made, tried-and-tested tool kit for gaining a greater appreciation of one’s environment. Mindfulness teaches both the importance of being aware and how to achieve such a state. Unavoidably, it is a very trendy concept in the West at the moment, mainly because its lessons can also be applied as a form of cognitive therapy, to help temper conditions such as anxiety, depression and stress.

From our perspective, the applicable teachings of Mindfulness are very interesting, and as we continue to investigate them we’re keen to see what can be learned. However, without wanting to diminish its spiritual salience, nor the significance of its therapeutic applications, as experience designers we need to be sure we can (if possible) isolate the processes from the spiritual.

 

Experiential Research?


‘Medicine’ is an enormous branch of applied science; a collection of inter-related but distinct areas of study, implicitly dedicated (arguments for quality of life to one side) to the goal of prolonging human life, through countering disease, environmental harm or genetic conditions.

Imagine if there were another, contrasting branch of science, called Experiential Research. Experiential research might be considered to have the same ultimate purpose as medicine, but with an approach pivoted at 90 degrees: the ultimate goal is still to fit more living into a life, but the change in tack is to get more intensity of living into each minute, rather than increasing the number of minutes.

Obviously, the underlying sciences already exist – biochemistry, bionics, cybernetics, genetics, psychology, neurology, human-system interfaces and data visualisation. The concepts of “body-hacking”, or “super-senses” are not new either. But if there was more collaboration across these disciplines, with the aim of creating new transformative interventions, then perhaps we could all reap the benefits of new capabilities and new perspectives.

 

Proustian Camera – Prototyping

Sam Hill

8th January 2013

We’ve been looking at memory – specifically episodic memory – and its relationship with experience for a few months now. We’ve spoken with neuroscientists about the science of memory, and Ben has been working on several interventions around the subject. Earlier in the year I first proposed our so-called “Anti-Camera”, or “Scent-camera”. More recently we have come to call it The Proustian Camera (which seems to summarise our intentions most neatly). Ultimately, we hope to develop a device that provides an alternative to conventional cameras, by letting people ‘tag’ events and occasions with a scent, which they can later recreate to aid memory.

 

The Prototype


We’re pleased to announce that this week we’ve assembled our first working prototype – a rudimentary, bare-bones version of what we’d ultimately like to build.

The prototype has 6 scent chambers (the final version will likely have between 18 and 30), and uses piezo-atomisers paired with felt wicks to make the compound scents airborne. Ben gutted the components and electronics from a bunch of Glade “Wisps” (there’s a Make: tutorial on how these can be hacked) and set up a couple of Arduinos inside to sync the atomising with the push of a trigger. The left-most button is a “submit” button, and the rest toggle their respective piezos (‘up’ is on, though you can see that at the time of the photo they weren’t aligned).
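For the curious, here’s a rough sketch of the trigger logic described above. This isn’t the code running on the prototype – the pin numbers, the assumption of six toggle switches wired to ground, the transistor drive for each atomiser and the two-second burst length are all illustrative guesses:

```cpp
// Scent-camera trigger logic (illustrative): six toggle switches select
// chambers, and the "submit" button fires the selected atomisers.
// Switches are assumed wired to ground, so 'on' reads LOW with the
// internal pull-ups; each atomiser is assumed driven via a transistor
// from a digital pin.
const int NUM_CHAMBERS = 6;
const int TOGGLE_PINS[NUM_CHAMBERS]   = {2, 3, 4, 5, 6, 7};
const int ATOMISER_PINS[NUM_CHAMBERS] = {8, 9, 10, 11, 12, 13};
const int SUBMIT_PIN = A0;

void setup() {
  for (int i = 0; i < NUM_CHAMBERS; i++) {
    pinMode(TOGGLE_PINS[i], INPUT_PULLUP);
    pinMode(ATOMISER_PINS[i], OUTPUT);
  }
  pinMode(SUBMIT_PIN, INPUT_PULLUP);
}

void loop() {
  // Wait for the submit button, then atomise every selected chamber.
  if (digitalRead(SUBMIT_PIN) == LOW) {
    for (int i = 0; i < NUM_CHAMBERS; i++) {
      if (digitalRead(TOGGLE_PINS[i]) == LOW) {
        digitalWrite(ATOMISER_PINS[i], HIGH);
      }
    }
    delay(2000); // burst length: a guess
    for (int i = 0; i < NUM_CHAMBERS; i++) {
      digitalWrite(ATOMISER_PINS[i], LOW);
    }
    delay(500);  // crude debounce before listening again
  }
}
```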

 

Getting to a Prototype stage


To briefly summarise how we got to where we are now: firstly, we’re very grateful to Odette Toilette, who has worked on some interesting fragrance-based products before and knows all about the alchemical world of olfaction. She pointed us in the right direction for sourcing the scents we needed, and helped us get our heads around the mechanics of how to make smell-objects work.

We bought thirty or so ingredient scents from Perfumer’s Apprentice, who were very helpful when we explained what we were trying to do. They sent us a bunch of tiny jars with a broad spectrum of concentrated scents in them, ranging from musky base tones to fruity high notes. Stuff like Ethyl Methyl 2-Butyrate (which sort of smells like a banana-flavoured epoxy), Bergamot (citric) and Civetone. Civetone is one of the oldest known ingredients in perfume, and is still present in contemporary fragrances like Chanel No. 5. However, in isolation it smells like a prison gymnasium crossed with a festival toilet.

 

Experiments


We used these scents for a couple of research projects, to see if memory scent tagging would work as neatly as we hoped.

First, a “sample group” of colleagues visited two locations: one as a control, and one whilst using a composite scent. Several days later we re-combined the composite scent and tested whether they could recall the latter location with more clarity than the control.

In the second experiment we took a larger group (some 50 or so Goldsmiths design volunteers) and showed them five YouTube music videos, in each case giving them a different composite scent. We wanted to see if they could later recall which scent was linked to which video.

Disappointingly, in both cases we learned that episodic memory is not as discrete as we first hoped. Declaring one “episode” of memory over, and another to have begun, is not as straightforward as we had expected – similar activities, especially if only minutes apart, will ultimately “bleed” into one another from the perspective of a participant.

It seems the time frame of a Proustian mark will be associated with a period longer than several minutes. Though both experiments showed a slight positive correlation between olfactory stimulation and memory, we’ll need to repeat them, spaced over a longer period of time, to glean any usable data. Now at least we can use the prototype in these experiments.

 

Refining the model


With a working prototype out in the field, we’ve set our sights on designs for a fully-featured iteration. Specifically, we’re working out a form and use-cases to get the right scents in the right place, and with a solid and intuitive user-interface.

Justas has been rapidly sketching forms to interrogate the object’s potential use. We’re conscious that as a conversation piece, our object may benefit from referencing consumer technology products. We’ve set about exploring forms that indicate both its situations of use and its scent-based nature. We want people to be able to imagine it in their hands and functioning in their lives. Drawing and re-drawing the object, we’re learning what the object needs in order to function and how to make it read as a camera-like object.

Justas has also explored the form as 3-D render-sketches, again to see it as a finished object and have conversations about how it works and where it sits. We’re now starting to produce physical sketches, which we’ll upload pics of soon.

 

Next Steps


We still need solid data, demonstrating whether or not our object can do what the scientific theory implies. We’ll use our prototype to explore this. Our use-case explorations have helped us identify three routes, which we’ll be weighing up over the following weeks.

The outcome of this exploration could be:

  1. A conceptually ‘purist’ approach – a ‘blind’ object with minimal functionality, capable of only two functions: a) emitting a scent for the first time, b) emitting a previous scent-encoding at random
  2. An assisting device for cameras – something that works in tandem with conventional cameras to provide a more holistic memory-encoding experience
  3. A compromise between the two – a legitimate ‘competitor’ to a recollection-through-sight object, but a tool that is sympathetic to the user and provides meta-data (location, time, possibly the ability to tag scents textually – sketched below) to help them select previous scents.
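To make that third route a little more concrete, here’s a rough sketch of the kind of record such a device might keep alongside each scent-encoding. The field names and types are purely illustrative – nothing here reflects a decided design:

```cpp
#include <string>
#include <vector>
#include <ctime>

// One stored scent-encoding, plus the meta-data a route-3 device might
// attach to help the user find it again later (all fields hypothetical).
struct ScentEncoding {
  int         chamberLevels[6];   // how much each chamber contributed
  std::time_t capturedAt;         // when the scent was first emitted
  double      latitude;           // where it was captured
  double      longitude;
  std::vector<std::string> tags;  // optional textual labels, e.g. "seaside"
};
```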

Once we’ve settled on the use conditions, we’ll get 3D prints made up and install component PCBs more specifically adapted to our final needs (as opposed to the proto-electronics we’ve been using so far).

(Finally, here’s a close-up of a scent getting atomised – click for more detail:)