Experiencing Cities: TEDxHamburg
Sam Hill
24th July 2013
Ben Barker
15th July 2013
We’re live! A thoroughly enjoyable 6 months of work culminated yesterday in a high-tea fuelled launch on College Green. The first conversations with Bristol’s street furniture were held in a marquee in front of City Hall.
It was great to see so many faces from the duration of the project again. We had play testers and sponsors, judges Imogen and Clare and even Bristol's Mayor George Ferguson texting lamp posts. A massive thanks to you all for testing, talking about and guiding the project. Our biggest thanks, however, are reserved for Clare and Verity at Watershed. This project's realisation is as much due to their hard work and enthusiasm as ours, and we're extremely grateful.
Even at less than 24 hours old, we've had nearly 900 responses, and people all over Bristol are getting involved. We've had people sharing historical facts on City Hall, thanking lamp posts for helping with their fear of the dark and reminiscing about their experiences in Bristol. A select few from the last hour are below:
What’s your favourite memory of Bristol?
Watching the old Wills buildings being demolished at 6am many years ago

Have you been waiting here long?

I have not been waiting long for a bus … But it feels like I've been waiting all my life to talk to a bus stop.

I was thinking of taking up a hobby – do you have one?
I love to play cricket. I’m not sure bridges can play cricket though!
We also have a beautiful miniature of our eponymous lamp post and 4 of his co-stars in the foyer at Watershed. Come down and have a chat to them and learn more about the project.
Keep an eye on the website for a snapshot of what people are waking up and talking about. As the project evolves we'll be learning how people use the system and tweaking as we go – the most exciting thing of all is that we only have a hunch about how people will play. There's lots of potential for different types of experiences, such as branching narratives and objects with specific agendas, so keep looking for new things to talk to and visiting old ones. We're just getting started.
There are more images in this set.
Ben Barker
21st June 2013
This week we’ve put up the holding page for Hello Lamp Post and as the counter tells you, we’re less than a month from launch. We thought it would be worth talking about what we’ve been up to and what we’ve been finding out.
One of our key challenges has been communicating the idea – both the mechanic and the sense of fun we hope it will bring – to all types of player in Bristol. So as a recap, here's our most recent attempt at explaining the project.
Codes can be found around the city that are normally used to identify public objects for maintenance and monitoring. With Hello Lamp Post we use these codes as identifiers in a city wide network of objects. These codes allow you to either ‘wake up’ a sleeping object or learn what other people have been saying to one that is already awake. Will it be pleased to see you? Irritated at having been left in the rain? Or will it tell you a secret? Each exchange will last for a few messages and you will be encouraged to come back and talk to the object some more another day. The more you play, the more the hidden life of the city will be revealed. Objects might even start to get attached to you if you talk to them a lot.
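For the technically curious, here's a minimal C++ sketch of that wake/reply mechanic. To be clear, this is purely illustrative and is not the actual Hello Lamp Post implementation – the object code, replies and three-message exchange length are all invented for the example:

#include <iostream>
#include <map>
#include <string>

// Each public object is identified by its maintenance code.
struct CityObject {
    std::string kind;   // e.g. "lamp post", "post box"
    bool awake = false; // has anyone woken it yet?
    int exchanges = 0;  // messages so far in the current visit
};

// Decide how an object answers an incoming text to its code.
std::string reply(std::map<std::string, CityObject>& city, const std::string& code) {
    CityObject& obj = city[code];
    if (!obj.awake) {
        obj.awake = true;  // a sleeping object wakes on first contact
        return "Hello! You've woken " + obj.kind + " " + code + ". How are you today?";
    }
    if (++obj.exchanges < 3) {  // each exchange lasts a few messages
        return "Someone told me something lovely earlier...";
    }
    obj.exchanges = 0;
    return "I should rest now. Come back and talk to me another day!";
}

int main() {
    std::map<std::string, CityObject> city = {{"LP0512", {"lamp post"}}};
    std::cout << reply(city, "LP0512") << "\n";  // wakes the object
    std::cout << reply(city, "LP0512") << "\n";  // continues the conversation
}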
There is also a trailer that we hope gives people a sense of how the interaction will feel and what the cumulative effect of it will be.
We don't want to give too much away about the content, but a lot of our work has focused on the balance between giving the objects personality, and their responsibility to convey the personality of the people of Bristol. We started off building quite characterful objects with recognisable personalities, but we quickly realised that acting like a logic system is more honest to what they are; the personality comes from the people on the system.
At an earlier playtest we explored the idea that people would speak to the whole of Bristol through the objects, not just the object they were stood in front of. Though this meant there was a lot more content to explore, it broke the connection to place and narrative – one of the core aspects of our proposal – and the players told us that. So on the second day of testing we moved back to the much more object-centric conversation, which vastly improved the experience, though the cost is that people see less user content. Thanks to everyone who helped with playtests, we've learned a lot from your input.
At our most recent playtest we settled on a mechanic that mixes object-specific content with ideas from other parts of the city, so we can facilitate city-wide communication as well as letting location-specific ideas emerge.
Tom expands on what this play testing does and our overall process:
It’s one of continual prototyping and slow evolution. We’ve always had the basic idea of what a conversation should feel like – but it’s turned out to be the details of the implementation that’s most important. So we’ll explore what a conversation might feel like first with pen and paper – drawing up flowcharts, sample dialogue – and once we have some logic laid out, we’ll code it up and see what it feels like when it’s in your hand, on a small screen – and see what all the edge cases we hadn’t thought about lead to! And then we’ll iterate on that.
We playtest it a lot: both with friends and colleagues in the studio, but also on site in Bristol. We’ve tried it with several groups down there over a number of visits, and it’s been really interesting to see how they’ve responded to it; each time we go back, the new improvements lead to even more useful feedback.
Another point that comes up a lot in discussion is the Internet of Things. In some ways this wasn't a huge part of our thinking – the idea grew from a desire to record and encourage sharing of experience in the city. Yet as designers practising today, we were inevitably influenced by the IoT (and for me a big part of my design education was with Alex and Tinker). Though the project arrives at a time when the IoT is being talked about more and more, the challenge with Hello Lamp Post has been almost the opposite. Rather than giving networks a more human and tangible presence, our objects were already there, integrated into the city and with their own stories; our challenge was to create a playful, human network for them to live on.
Justas Motuzas
17th May 2013
We’ve had the camera dolly for a while now and it’s been really useful in film work, helping to achieve interesting cinematic results. We wanted to add a motor for longer time-lapse panning shots and eliminate any undesirable vibration. This was a good opportunity for us to tinker with motors, as well as explore the ferric chloride etching process for making circuit boards. The first test is below.
We took a 21.2W 156:1 geared DC motor which we’ve had on our shelf for a while and added a T5 timing pulley and belt combination.
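As a back-of-envelope check on panning speed, the sums are simple. Note that the motor rpm and pulley tooth count below are assumed figures for illustration, not measurements from our build:

#include <iostream>

int main() {
    double motorRpm = 6000.0;   // assumed unloaded motor speed
    double gearRatio = 156.0;   // the 156:1 gearbox
    double pulleyTeeth = 10.0;  // assumed T5 pulley tooth count
    double pitchMm = 5.0;       // T5 belt: 5 mm tooth pitch

    double outputRpm = motorRpm / gearRatio;                  // ~38.5 rpm at the pulley
    double beltMmPerMin = outputRpm * pulleyTeeth * pitchMm;
    std::cout << "Belt speed: " << beltMmPerMin / 60.0 << " mm/s\n";  // ~32 mm/s
}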
One of our aims was to be able to control motor speed and direction. Luckily we already had a PWM motor regulator module, which helped to solve the variable-speed problem.
To control the direction in which the camera moves, we decided to use a DPDT (ON-OFF-ON) switch and two LEDs indicating the direction in which the dolly is moving.
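(For anyone reproducing this with a microcontroller rather than an analogue module and switch, the same speed-and-direction control can be sketched with an Arduino and a generic H-bridge driver. The pin assignments below are assumptions for illustration – this is not our etched board:)

// Hypothetical Arduino alternative to the analogue speed/direction circuit.
const int pwmPin = 9;    // H-bridge enable: PWM sets speed
const int dirPinA = 7;   // H-bridge input 1
const int dirPinB = 8;   // H-bridge input 2
const int potPin = A0;   // speed-setting potentiometer

void setup() {
  pinMode(pwmPin, OUTPUT);
  pinMode(dirPinA, OUTPUT);
  pinMode(dirPinB, OUTPUT);
}

// Set direction via the two H-bridge inputs and speed via PWM.
void driveDolly(bool forward, int speed) {
  digitalWrite(dirPinA, forward ? HIGH : LOW);
  digitalWrite(dirPinB, forward ? LOW : HIGH);
  analogWrite(pwmPin, constrain(speed, 0, 255));
}

void loop() {
  int speed = analogRead(potPin) / 4;  // scale 0-1023 down to 0-255
  driveDolly(true, speed);
}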
The ferric chloride etching process proved to be quite effective for making a simple circuit board. After printing out the design, we ironed the transfer toner onto a copper plated board. Then with a fifteen minute ferric chloride bath and a tiny bit of careful drilling, we were left with this:
And here’s how it looks, soldered and boxed up:
We hope to share some more footage soon and get some nice applications out of it.
(Feel free to get in touch if you have any questions.)
Ben Barker
10th May 2013
El Ultimo Grito's workshop programme Pilots is an undertaking at the Stanley Picker Gallery that explores new forms of design education. The imperative for this discussion is the changing role of education in an open-data society. If you can learn anything on YouTube, what is the role of the educator? Though it's harder to learn a critical practice with embedded social and systems thinking on YouTube, it is unclear how long this will remain true. Identifying what institutions' and educators' roles are now and in the future is key to retaining the viability of a formal education. People are approaching this in a variety of ways; a course in Australia recently structured lessons around googleable vs non-googleable questions.
It's a question we explored with Matt last year, which led to us creating a film for the design department at Goldsmiths exploring the value of the university education they offer.
As design becomes less production-focused and increasingly embedded in policy formation and social change, El Ultimo Grito believe design education models will become applicable as new models for education as a whole. You only need to see the quiet, design-led revolution happening at gov.uk to know design's role in society is changing.
Given the increasingly co-designed, agile nature of design practice, what can education learn from industry? Moreover, if we are now regularly seeing flexible, user-led design processes being put into practice, have we reflected that back on our educational structures?
A defining moment in my university education was a realisation that the teaching I was receiving was as exploratory as my attempts to make sense of it. Our tutors would often acknowledge that lessons were based on a hunch or an experiment. I liked knowing I was both the experiment and the scientist, learning and helping shape my learning. In many ways they are just actions in the same cycle; explaining things helps you understand the gaps in your knowledge. It’s not the only feedback loop inherent in a good design education but I believe it’s the most important. The first time I tried teaching was the moment I realised what I had learned.
Russell talks about the changing relationships with users in his work at GDS:
“But it’s not an agency-type relationship where someone distant and important has to ‘approve’ everything. This is mostly because our chief responsibility is to our users – they approve our decisions by using or not using the services we offer them. Or by complaining about them, which they sometimes do. Also, because you just can’t do agile with a traditional client-approval methodology.”
If all stakeholders in education saw the course as an ongoing investigation, with everyone equally responsible, it would increase the sense of ownership for everyone. In primary education the Harris Foundation takes students from around the London boroughs and asks them to design their own education; they call it a step change in student engagement, motivation and learning. Kunskapsskolan in Sweden also looks at a student-led, deconstructed curriculum and is now the second most popular education provider. The Dirty Art Department at the Sandberg Institute looks at an open course model in further education, asking students to define their own ambition and the course. There are plenty of other good examples in this report from Innovation Unit.
Even in the creative sector, it’s surprising how often students feel they are being taught at. As designers, if we are to question how organisations behave, and evangelise a responsive, open process, we need to make sure we’ve checked how we’re engaging our students and teachers in the learning process.
Our session at Pilots was led by Daniel Charny, and we modelled three approaches based around time, place and resources. The project is ongoing and there's a really good write-up here on the intentions:
Justas Motuzas
30th April 2013
Last Wednesday evening our EEG controlled mini helicopter became airborne. It’s a lot of fun, though it takes time to master the techniques of concentration and meditation for precise control.
Electroencephalography (EEG) is the recording of electrical activity along the scalp. EEG measures voltage fluctuations resulting from ionic current flows within the neurons of the brain.
More complicated EEGs are widely used in scientific research. Meanwhile, simpler product versions are available to anybody, and with a little bit of tinkering, hobbyists are able to repurpose these brain-wave readers. The mind-controlled helicopter is perhaps one of the more popular hack projects for EEG devices. It's a great way to familiarise yourself with the processes involved and the device's functionality.
We are interested in what experientially rewarding uses there could be for this device, and what contexts would benefit from this kind of input. We used a device like this before for an EEG Controlled Seance at the 2011 Winterwell Halloween, and perhaps the best known use of an EEG controller is 'The Ascent' by Yehuda Duenyas.
We’ve been sharing the helicopter at the Hack the Barbican sessions this weekend, and are keen to use it in other ways, so get in touch if you have something in mind.
As mentioned before, there is a lot of information about how to hack the 'Syma S107G' model mini helicopter; however, we had the 3-channel '9808', so we could not completely rely on the existing libraries and mashed a few things together to make this work.
You Will Need:
Hardware:
1x 9808 mini helicopter with IR controller (usually comes with the helicopter)
1x Arduino Board
1x Bread Board
1x MCP4131 5K Digital Potentiometer
Jump wires
Unscrew the helicopter controller casing. The potentiometer on the left controls the throttle ('gas'); unsolder it and you will be left with two wires. Connect them to the digital potentiometer pins as shown in the picture.
Set up your EEG headset, install the software and make sure it is communicating with the USB dongle. Upload the code to the Arduino board, put your EEG headset on, turn on the helicopter, turn on the controller and start Processing. Note: if the helicopter does not react you might have to restart it by unplugging the red wire from 5V, plugging it into GND, and then back into 5V.
Here is the slightly tweaked version of Tom Igoe's Arduino digital pot example that we used:
/*
  Digital Pot Control

  This example controls an Analog Devices AD5206 digital potentiometer.
  The AD5206 has 6 potentiometer channels. Each channel's pins are labeled
  A - connect this to voltage
  W - this is the pot's wiper, which changes when you set it
  B - connect this to ground.

  The AD5206 is SPI-compatible, and to command it, you send two bytes,
  one with the channel number (0 - 5) and one with the resistance value
  for the channel (0 - 255).

  The circuit:
  * All A pins of AD5206 connected to +5V
  * All B pins of AD5206 connected to ground
  * An LED and a 220-ohm resistor in series connected from each W pin to ground
  * CS - to digital pin 10 (SS pin)
  * SDI - to digital pin 11 (MOSI pin)
  * CLK - to digital pin 13 (SCK pin)

  created 10 Aug 2010
  by Tom Igoe

  Thanks to Heather Dewey-Hagborg for the original tutorial, 2005
*/

// include the SPI library:
#include <SPI.h>

// set pin 10 as the slave select for the digital pot:
const int slaveSelectPin = 10;
float mylevel = 0;
int inbyte;

void setup() {
  Serial.begin(9600);
  // set the slaveSelectPin as an output:
  pinMode(slaveSelectPin, OUTPUT);
  // initialize SPI:
  SPI.begin();
}

void loop() {
  // read the latest attention level sent over serial from Processing:
  if (Serial.available() > 0) {
    inbyte = Serial.read();
  }
  // go through the six channels of the digital pot:
  for (int channel = 0; channel < 6; channel++) {
    // adjust inbyte or map values for individual calibration
    if (inbyte < 50) {
      mylevel = map(inbyte, 0, 50, 20, 32);
    } else {
      mylevel = map(inbyte, 50, 100, 34, 36);
    }
    // mylevel = inbyte;
    digitalPotWrite(channel, mylevel);
    delay(10);
  }
}

void digitalPotWrite(int address, int value) {
  // take the SS pin low to select the chip:
  digitalWrite(slaveSelectPin, LOW);
  // send in the address and value via SPI:
  SPI.transfer(address);
  SPI.transfer(value);
  // take the SS pin high to de-select the chip:
  digitalWrite(slaveSelectPin, HIGH);
}
On the Processing side, we had to tweak a library example by Andreas Borg a little to our needs:
import netscape.javascript.*;

/*
The NeuroSky MindWave device did not ship with any proper Java bindings.
Jorge C. S. Cardoso has released a Processing library for the MindSet device,
but that communicates over the serial port. NeuroSky has since released a
connector application that talks JSON over a normal socket.

Using the same API as the previous library, this talks directly to the
ThinkGear connector.

Info on this library: http://crea.tion.to/processing/thinkgear-java-socket
Info on ThinkGear: http://developer.neurosky.com/
Info on Cardoso's API: http://jorgecardoso.eu/processing/MindSetProcessing/

Have fun and get some peace of mind!
xx
Andreas Borg
Jun, 2011
borg@elevated.to
*/

import processing.serial.*;
import neurosky.*;
import org.json.*;

Serial port;
ThinkGearSocket neuroSocket;
int attention = 0;
int meditation = 0;
PFont font;

void setup() {
  size(600, 600);
  port = new Serial(this, "COM6", 9600);
  // assign to the field (not a fresh local) so stop() can close the socket:
  neuroSocket = new ThinkGearSocket(this);
  try {
    neuroSocket.start();
  }
  catch (ConnectException e) {
    println("Is ThinkGear running?");
  }
  smooth();
  noFill();
  font = createFont("Verdana", 12);
  textFont(font);
}

void draw() {
  background(0, 0, 0, 50);
  fill(0, 0, 0, 255);
  noStroke();
  rect(0, 0, 120, 80);
  fill(0, 0, 0, 10);
  noStroke();
  rect(0, 0, width, height);
  fill(0, 116, 168);
  stroke(0, 116, 168);
  text("Attention: " + attention, 10, 30);
  noFill();
  ellipse(width/2, height/2, attention*3, attention*3);
  //fill(209, 24, 117, 100);
  //noFill();
  //text("Meditation: " + meditation, 10, 50);
  //stroke(209, 24, 117, 100);
  //noFill();
  //ellipse(width/2, height/2, meditation*3, meditation*3);
}

void poorSignalEvent(int sig) {
  println("SignalEvent " + sig);
}

public void attentionEvent(int attentionLevel) {
  println("Attention Level: " + attentionLevel);
  attention = attentionLevel;
  // forward the attention value to the Arduino over serial:
  port.write(attentionLevel);
}

void meditationEvent(int meditationLevel) {
  println("Meditation Level: " + meditationLevel);
  meditation = meditationLevel;
  //port.write(meditationLevel);
}

void blinkEvent(int blinkStrength) {
  println("blinkStrength: " + blinkStrength);
}

public void eegEvent(int delta, int theta, int low_alpha, int high_alpha, int low_beta, int high_beta, int low_gamma, int mid_gamma) {
  println("delta Level: " + delta);
  println("theta Level: " + theta);
  println("low_alpha Level: " + low_alpha);
  println("high_alpha Level: " + high_alpha);
  println("low_beta Level: " + low_beta);
  println("high_beta Level: " + high_beta);
  println("low_gamma Level: " + low_gamma);
  println("mid_gamma Level: " + mid_gamma);
}

void rawEvent(int[] raw) {
  //println("rawEvent Level: " + raw);
}

void stop() {
  neuroSocket.stop();
  super.stop();
}
Ben Barker
21st February 2013
Recently we’ve talked a lot about how memory and place relate, in part following on from our thoughts on Memory, identity and the network. The discussion also formed a starting point for our Playable City submission. We have been exploring ways to let people create a new history of the city, to record and share the stories they have lived and are living.
One inspiration was Austerlitz, Sebald's excellent novel, where the titular protagonist unravels his forgotten past through travel, searching for his identity by crossing the globe as if it were his brain, examining the cities of crumbling synapses. It paints an idea of our environment, the city, as a wiki about how we got to be the way we are, where we can walk the streets and be reminded of the ingredients.
James Bridle draws the network as the 5th dimension, that of memory:
“The network is not a shared consciousness, but it is a shared memory (and a shared experience if it makes sense to say that our experiences are just memories)… It is a recording device. It is a recording angel, but a curiously passive one. “
This passivity is important; more than ever there is an imperative to have agency in how we remember. What do we give and what do we get back? The network defies time too: at last year's Serpentine Memory Marathon, Douglas Coupland talked about how eras have been flattened in our networked age – platforms like Spotify don't care about when.
"Our lives have lost their narrative threads. The internet has bent us to its will quicker than any other technology… all eras co-exist at once."
The collective memory has become as available as our own; the function of memory on identity is being separated and tested. We rely on the network to tell us what we are like. This Is My Jam Odyssey refers to a year's personal listening as an epic journey of exploration, and it is. The network knows we want, perhaps need, to be reminded of what we're like. Bookcases aren't about storage, but a granular record of what we're made of, and this is a continuation.
I've referenced Tom Armitage's Ghostcar before, and it goes some way to answering the question of how we geolocate and revisit memories in a meaningful way. It's built on Foursquare, the most notable locational service and a very digital tool in a physical, messy reality. The beauty of Ghostcar is that it closes the loop, asking you to actually be there. Tom Loois' Blank Ways is an app that shows the places you haven't been – what Matt Ward describes as the spaces of mental calm, the unused storage. It's a poetic highlighting of how memory and location are interwoven.
The memory structures of a city are a journey of surprise; we don't always know the recalled experiences that will jump from a bus stop or broken sign. At its simplest, the network requires us to know what we want to remember. Facebook reminds us of holidays and parties, Twitter of our sharable thoughts (it's well worth keeping a 'tweets I didn't send.doc' as a personal record). We don't visit physical spaces to journey backwards so often, but if we did, could we equally describe a return to childhood haunts as an 'epic journey of exploration'? On a recent trip to my grandparents' soon-to-be-sold house, I was surprised by the number of memories awaiting me – one in a steep staircase, another in a half-dug pond. The image of it all being bulldozed and lost forever suddenly seemed a much more personal violation, a loss of data. Our experiences are intricately linked with place, but we haven't reconciled location's temporality with our new networked realities.
So it is from that space – between the networked collation of experience and the stronger sensory reactions evoked by location – that we approached Hello Lamp Post: documenting and sharing these spatial memories as they change, before they change; tracing them onto the network. The infrastructure of the smart city provides the skeleton of a low-tech network that we hope will allow us to explore that.
Ben Barker
21st January 2013
We are excited to announce that our project Hello Lamp Post! has been selected for the Playable City Award. It's a real surprise; we still can't quite believe it. When we saw the quality of the shortlist, with work from so many names that we respect, we never imagined being chosen. We're thrilled and can't wait to get working. Big thanks to Tom and Gyorgyi for their work too.
We’re also really grateful to the judges for their comments, some of which are below.
Imogen Heap said: ‘I love this for its whispers on the street, guardians in dark corners, humanising our cities’ appendages whose eyes and ears now have a voice. Vessels for an ever evolving conversation, connecting us together. They were there all along!’
Tom Uglow said: 'Hello Lamp Post! stood out with a potential for both art and play using existing urban furniture. It points to a future made up of the physical objects already around us, the "internet of things", and the underlying complexity is made simple and easy for people by just using SMS for this project. Poetry and technology combine to create subtle and playful reflections of the world we live in. It filled me with a childish delight.'
Claire Doherty says: “We were enchanted by this proposal and particularly loved the way it challenged the prevalence of mass-entertainment and spectacle, revealing an invisible ‘soft city’ – the exchanges and incidents that create a city’s social fabric. It’s rare to find a proposal which combines those intimate exchanges with the humour and playfulness of Hello Lamp Post!”
Clare Reddington, Judging Panel Chair says, “We were really excited by the applications we received and by the comments and questions from audiences about the short-listed entries. The judges had a difficult decision to make but have selected an unusual and innovative project, which responds perfectly to the theme and seems very apt for Bristol. We will certainly have some challenges to make sure the project reaches as many people as possible, but am sure people will respond with curiosity and warmth and I am very much looking forward to waking up some street furniture this summer.”
We’ll keep you updated as the project develops, and look forward to developing the ideas and building the project. Thanks for all the support during the shortlisting process.
Sam Hill
10th January 2013
A while back I mentioned our theory-in-progress: that there are two kinds of design intervention that can improve the human experience. The first are designed ‘events’: finite moments in time, with their own contexts, during which things happen. Lots of people work in producing consumable experiential events, even if they don’t necessarily view them this way – certainly performers, game makers and interaction designers do; but also musicians, film-makers, artists, restaurateurs, etc. etc.
The other intervention type, however, is a little bit trickier and much less common. These are transformations, or augmentations – finding constant, passive, sustainable ways of being. How do you squeeze more life out of everyday living? We’ve identified three broad categories of transformation that would allow the collection of more experience value: sensory augmentation, memory augmentation and attitudinal re-evaluation.
Sensory Augmentation is ‘improving’ the way we interpret the world, which could be done in many ways:
We could, theoretically, take our existing senses and improve them with the following abilities:
Here’s part of a larger Mezzmer info-graphic doing the rounds. It illustrates how awesomely badass the mantis shrimp’s vision is, relative to ours:
With the aid of developing tech, we might be able to equip ourselves with entirely new senses, inspired by other organisms in nature, such as:
Data-centric Augmentation:
Contextual data could aid our perception and navigation of the social, human-constructed world:
Gregory McRoberts used an Arduino LilyPad with ultrasonic and infrared sensors to augment his partially-sighted eye, providing distance and temperature data.
We're postulating on the fly, to be honest. It makes sense that if sensory capacity were enhanced, one would get more from life, but we don't really have any evidence to back this up. So perhaps we should consider it an opportunity for discourse. There are, after all, a couple of considerations…
The first consideration is feasibility. Can we improve our capacity for greater sensation? Perhaps, even with the greatest bionic and genetic development we couldn’t enhance our senses beyond a certain limiting factor. Even if we could, it seems our minds can only process a finite amount of sensory stimulation at once.
The second consideration is: should we seek to augment our senses? They are, after all, a product of our evolution and should (you’d think) be somewhat attuned to our needs – we’ve actually lost some superfluous ancestral sensory abilities, such as a stronger olfactory ability, as recently as the last couple of hundred-thousand years. It may be that not only does further sensory development fail to provide an evolutionary edge, but possessing it could even reduce quality of life.
For example, Gregory McRoberts says that anyone trying to use his eye-patch on a fully-functioning eye suffers from a form of 'helmet fire' – a term coined in aviation, where stress-induced task saturation, exacerbated by helmet HUDs, impedes pilots' ability to function and make decisions.
See also the clip below of ‘binocular soccer’ – even though binoculars are an accepted form of visual augmentation, if they can’t integrate passively and sympathetically with the other demands of our vision (depth perception and peripheral awareness) they also have an impeding effect:
Specifically, enhancing experience would require augmenting autobiographical memory; episodic memory in particular – recollecting times, places, associated emotions, and other contextual knowledge.
Augmenting Human memory would involve affecting our ability to:
Pragmatically this can be done in part through existing stuff – tools (such as cameras), systems (such as diaries) or techniques (such as mnemonics), but conceivably, it could perhaps be achievable in the future through genetics or neural-interfacing bionics.
Of course, the experiences themselves aren’t enhanced, but the memories of them are more exactly and comprehensively stored – so all memories would retain more experiential value.
Some people already have superior autobiographical memory – the condition is known as Hyperthymesia and is incredibly rare. People possessing the condition can recall every detail of their lives with as much accuracy as if it’d happened moments ago. Once again, however, we need to question if this capability really improves the human experience. The condition isn’t always regarded as a ‘blessing’; some affected experience it as a burden, and many spend a great deal of time dwelling on the past. The condition challenges the traditional notion of what healthy memory is, prompting the attitude “it isn’t just about retaining the significant stuff. Far more important is being able to forget the rest.” [(via Wikipedia) Rubin, D. C., Schrauf, R. W., & Greenberg, D. L. (2003). Belief and recollection of autobiographical memories. Memory and Cognition, 31, 887–901.]
Finally, sense- and memory-augmentation won’t reap much benefit for any individual unless they have a sincere interest in exploiting such capacity to gain more experience value.
Attitudinal change, based on a reassessment of values, is a much less technology-orientated intervention. Instead it is a cognitive shift; a willingness to perceive one's environment more actively, and with a greater attention to detail.
On the one hand, this can be thought of as a learned skill. Sherlock Holmes, for example, is the archetypal ‘observer’ – someone who is ceaselessly, lucidly, taking in the details around him, analysing them and extracting wisdom. On the other hand, there is also a broader philosophical element, or at the very least a set of arguments – statements for why seeking out and making the most of life’s variety is of benefit to us.
To cause a change in attitude and behaviour would require:
People talk about an ability to “live in the now”. Often it is seen as a good thing, though sometimes the phrase is used pejoratively to imply an inability to see the consequences of actions. It is contrasted with both those obsessed with and living inside of their memories, and those who cannot appreciate the here and now because they are constantly looking for the next thing.
'Mindfulness' is an essential tenet of Buddhism. Considered to be one of the seven factors for achieving spiritual enlightenment, it is a ready-made, tried-and-tested tool kit for gaining a greater appreciation of one's environment. Mindfulness teaches both the importance of being aware, as well as providing instruction in how to achieve such a state. Unavoidably, it is a very trendy concept in the West at the moment, mainly because its lessons can also be applied as a form of cognitive therapy, to help temper conditions such as anxiety, depression and stress.
From our perspective, the applicable teachings of Mindfulness are very interesting, and as we continue to investigate them we're keen to see what can be learned. However, without wanting to diminish its spiritual salience, nor the significance of its ability to help therapeutically, as experience designers we need to be sure we can (if possible) isolate the processes from the spiritual.
‘Medicine’ is an enormous branch of applied science; a collection of inter-related but distinct areas of study, implicitly dedicated (arguments for quality of life to one side) to the goal of prolonging human life, through countering disease, environmental harm or genetic conditions.
Imagine if there was another, contrasting branch of science, called Experiential Research. Experiential research might be considered to have the same ultimate purpose as medicine, but with an approach pivoted at 90 degrees: the ultimate goal is still to fit more living into a life, but the change of tack is to get more intensity of living into each minute, rather than increasing the number of minutes.
Obviously, the underlying sciences already exist – biochemistry, bionics, cybernetics, genetics, psychology, neurology, human-system interfaces and data visualisation. The concepts of “body-hacking”, or “super-senses” are not new either. But if there was more collaboration across these disciplines, with the aim of creating new transformative interventions, then perhaps we could all reap the benefits of new capabilities and new perspectives.
Sam Hill
8th January 2013
We've been looking at memory – specifically episodic memory – and its relationship with experience for a few months now. We've spoken with neuroscientists on the science of memory, and Ben has been working on several interventions around the subject. Earlier in the year I first proposed our so-called "Anti-Camera", or "Scent-Camera". More recently we have come to call it The Proustian Camera (which seems to summarise our intentions most neatly). Ultimately, we hope to develop a device that provides an alternative to conventional cameras, by letting people 'tag' events and occasions with a scent, which they can later recreate to aid memory.
We’re pleased to announce that this week we’ve assembled our first working prototype – a rudimentary, bare-bones version of what we’d like to finally end up building.
The prototype has 6 scent chambers (the final version will likely have between 18 and 30), and uses piezo-atomisers paired with felt wicks to make the compound scents airborne. Ben gutted the components and electronics from a bunch of Glade "Wisps" (there's a Make: tutorial on how these can be hacked) and set up a couple of Arduinos inside to sync the atomising with the push of a trigger. The left-most button is a "submit" button, and the rest toggle their respective piezos ('up' is on, though you can see at the time of the photo they weren't aligned).
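For a flavour of that trigger logic, here is a minimal sketch of how the buttons could drive the piezos. The pin numbers and puff duration are illustrative assumptions, not the prototype's actual wiring:

// Six toggle switches each arm one scent chamber; the submit button
// fires the armed piezo atomisers together for a fixed puff.
const int togglePins[6] = {2, 3, 4, 5, 6, 7};     // chamber arm switches
const int piezoPins[6]  = {8, 9, 10, 11, 12, 13}; // piezo atomiser drivers
const int submitPin = A0;                         // the left-most "submit" button

void setup() {
  for (int i = 0; i < 6; i++) {
    pinMode(togglePins[i], INPUT_PULLUP);
    pinMode(piezoPins[i], OUTPUT);
  }
  pinMode(submitPin, INPUT_PULLUP);
}

void loop() {
  if (digitalRead(submitPin) == LOW) {           // submit pressed
    for (int i = 0; i < 6; i++) {
      if (digitalRead(togglePins[i]) == LOW) {   // chamber armed ('up' is on)
        digitalWrite(piezoPins[i], HIGH);        // atomise this scent
      }
    }
    delay(2000);                                 // assumed puff duration
    for (int i = 0; i < 6; i++) {
      digitalWrite(piezoPins[i], LOW);
    }
  }
}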
To briefly summarise how we got to where we are now – firstly, we’re very grateful to Odette Toilette, who has valuable experience having worked on some interesting fragrance-based products before, and knows all about the alchemical world of olfaction. She pointed us in the right direction for sourcing the scents we needed, and helped us get our heads around the mechanics of how to make smell-objects work.
We bought thirty or so ingredient scents from Perfumer's Apprentice, who were very helpful when we explained what we were trying to do. They sent us a bunch of tiny jars with a broad spectrum of concentrated scents in them, ranging from musky base notes to fruity high notes. Stuff like Ethyl Methyl 2-Butyrate (which sort of smells like a banana-flavoured epoxy), Bergamot (citric) and Civetone. Civetone is one of the oldest known ingredients in perfume, and is still present in contemporary fragrances like Chanel No. 5. However, in isolation it smells like a prison gymnasium crossed with a festival toilet.
We used these scents for a couple of research projects, to see if memory scent tagging would work as neatly as we hoped.
First a “sample group” of colleagues visited two locations: one as a control, and one whilst using a composite scent. Several days later we re-combined the composite scent and tested to see if they could recall the latter location with more clarity than the control.
In the second experiment we took a larger group (some 50 or so Goldsmiths design volunteers) and showed them five YouTube music videos, in each case giving them a different composite scent. We wanted to see if they could later recall which scent was linked to which video.
Disappointingly, in both cases we learned that episodic memory is not as discrete as we first hoped. Declaring one "episode" of memory over, and another to have begun, is not as straightforward as we had expected – similar activities, especially if only minutes apart, will ultimately "bleed" into one another from the perspective of a participant.
It seems the time frame of a Proustian mark will be associated with a period longer than several minutes. Though both experiments showed a slight trend towards there being a positive correlation between olfactory stimulation and memory, we’ll need to repeat them, spaced over a longer period of time, to glean any usable data. Now at least we can use the prototype in these experiments.
With a working prototype out in the field, we’ve set our sights on designs for a fully-featured iteration. Specifically, we’re working out a form and use-cases to get the right scents in the right place, and with a solid and intuitive user-interface.
Justas has been rapidly sketching forms to interrogate the object's potential use. We're conscious that, as a conversation piece, our object may benefit from referencing consumer technology products. We've set about exploring forms that indicate both its situations of use and its scent-based nature. We want people to be able to imagine it in their hands and functioning in their lives. Drawing and re-drawing the object, we're learning about what the object needs to function and how it can read as a camera-like object.
Justas has also explored the form as 3-D render-sketches, again to see it as a finished object and have conversations about how it works and where it sits. We’re now starting to produce physical sketches, which we’ll upload pics of soon.
We still need solid data, demonstrating whether or not our object can do what the scientific theory implies. We’ll use our prototype to explore this. Our use-case explorations have helped us identify three routes, which we’ll be weighing up over the following weeks.
The outcome of this exploration could be:
Once we've settled on the use conditions, we'll get 3D prints made up and install component PCBs more specifically adapted to our final needs (as opposed to the proto-electronics we've been using so far).
(Finally, here's a close-up of a scent getting atomised; click for more detail:)
Ben Barker
5th December 2012
This year our advent calendar is called The Santa Scores. It is the only advent-calendar tv-listings site that picks a film from that day, and then rates it for Christmasiness, against other films that are also on, but on different days.
Last year our advent calendar reviewed pre-packaged high street sandwiches. This year we’re asking what gives films a festive feeling. We’ve been scanning the TV listings for you. Every day we’ll choose a film that looks like the most Christmassy one on and then we’ll review it for Christmasiness. You can exploit this information as you choose.
From an experiential perspective, we think the “Christmas feeling” is very interesting. It’s a weird compound emotion – a mix of apprehension, nostalgia and suspension of disbelief; synonymous with feelings towards family, reward, comfort and pop mythology. Though it must be unique for everyone, a lot of people understand it as a concept and have had personal experience of it. When it begins is very subjective, and the subtlest of details can trigger it – Christmas lights going up, decorating the tree, or other tropes and rituals – including, we suspect, certain films. We’re keen to find out which movies have the best chance of triggering a ‘festive cascade’.
Sam Hill
4th October 2012
The version shown here focuses almost entirely on human-to-object input. The first public beta version published was v.0.13. (This is the current version.)
We had an idea a couple of months ago (one of those "made-sense-in-the-pub, does-it-still-make-sense-in-the-morning?" types of idea) that it'd be incredibly useful to have some kind of universal classification of interaction. Nothing fancy, just a straight-up, catch-all taxonomy of human-systems inter-relationships.
The idea was a hierarchical table; a neat, organised and absolutely correct information tree, sprouting in two directions from the middle, i.e. the computer, system, or thing. At one end would be "every" (see below) conceivable form of input – any kind of real-world data that a computer could make sense of (actually, or theoretically). These roots would converge into their parent ideas and link up to the centre. Then, in the other direction, would shoot every conceivable output – any kind of effective change that could be made upon another thing, person or environment.
If suitably broad, any known or future 'interaction' object or installation would, in theory, be mappable through this process, whether it was a door handle, wind-chime, musical pressure-pad staircase or Google's Project Glass.
We couldn't find any existing tables of such breadth in the public domain (though correct me if I'm wrong). It was in fact hard enough to find even parts of such a table. So we decided, tentatively, to see how far we could get making our own. Some of the issues that cropped up were surprising, whilst others loomed ahead in the distance and remained obstinately present until we were forced to address them.
There are a couple of important things to be clear on, it seems, when putting together any kind of taxonomy. One of them is choosing an appropriate ordering of common properties. For example, should "physical communication" come under physical activity (alongside walking, eating and other non-communicative movement) or under communication (alongside verbal and haptic communication)? Conventionally, the three elements of an interaction are assumed to be: a person, an object, and an environment. Any fundamental interaction typically occurs between any two, e.g. person-person, person-object, object-environment, etc. These elements seemed like an appropriate fundamental trinity, from which everything else could stem.
The difference between an object and an environment, however, seems mostly (though not universally) to be a matter of scale, especially when it comes to methods of analysis (temperature, colour, density, etc.). We also thought it best to include "abstract data" as a source, essentially to represent any data that might have been created randomly, or where the original relevance had been lost through abstraction.
It can often be desirable to describe something in two different ways. For example, a camera might help recognise which facial muscles are in use (to achieve a nostril flare or raise an eyebrow), but it might also be desirable for an object to recognise these signifiers contextually, as likely indicators of mood (e.g. anger, happiness, fear).
Another issue is granularity. How fine (how deep) should an inspection go? Tunnelling deeper into each vein of enquiry it becomes difficult to know when to stop, and challenging to maintain consistency.
When we talk about covering “every” kind of input, we mean that it should be possible for an organisation system like this to be all-encompassing without necessarily being exhaustive. A system that broadly refers to “audio input” can encompass the notions of “speech” and “musical instruments”, or incorporate properties like “volume” and “timbre”, without necessarily making distinctions between any of them.
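One way to picture "all-encompassing without being exhaustive" is a simple tree in which a broad node stands in for any children that haven't been enumerated. Here's a small C++ sketch of that idea – the categories are purely illustrative, not our actual taxonomy:

#include <iostream>
#include <string>
#include <vector>

struct Node {
    std::string name;
    std::vector<Node> children;  // may be empty: broad, but not exhaustive
};

// Print the taxonomy with indentation showing depth.
void print(const Node& n, int depth = 0) {
    std::cout << std::string(depth * 2, ' ') << n.name << "\n";
    for (const Node& c : n.children) print(c, depth + 1);
}

int main() {
    Node inputs{"human input", {
        {"audio input", {{"speech", {}}, {"musical instruments", {}}}},
        {"movement", {}},  // cameras, sonar and accelerometers all record this
        {"haptic communication", {}},
    }};
    print(inputs);
}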
One decision we made early on was to categorise human inputs by their common characteristics, not by the input mechanism that would record them. This was because there might be more than one way of recording the same thing (e.g. movement could be recorded by systems as diverse as cameras, sonar and accelerometers). This created an interesting side-effect, as the taxonomy shifted into a far more complex study of human behaviour and bio-mechanics; “what can people do”. Whilst studying areas of audio, visual and haptic communication, we were especially struck by the sense we were writing the broad specifications for a savvy, sympathetic AI – a successful android/ cylon/ replicant; i.e. something capable of reading the full range of human action.
Imagine, for example, what it would require for an object to appreciate sarcasm – a toaster, let’s say, that can gauge volume, intonation, emphasis, facial expression, choice of language, timing, socio-cultural circumstances… – estimate a probability of irony and then respond accordingly.
Such capacity would avoid situations like this:
Developing this project further is going to require input from a few experts-in-their-fields; learned specialists (rather than generalists like myself, with wiki-link tunnel vision). The taxonomy needs expanding upon ('outputs' at the moment are entirely non-existent), re-organising and probably some correcting. If you would like to contribute – whether you're a linguist, anthropologist, roboticist, physiologist or psychologist, or see some territory or area you could assist with – your insight would be appreciated.
As our research has indicated, there is often more than one way of classifying interactions. But a hierarchy demands we always use the most applicable interpretation. All taxonomies are artificial constructs, of course, but a hierarchy seems to exacerbate rigid pigeon-holing of ideas. Perhaps the solution is to evolve into something less linear and absolute than a hierarchical taxonomy (like the nested folders of an OS), and instead consider something more nebulous and amorphous (for example the photo-sets and tags in Flickr).
It would also be great to have overlays; optional layers of additional information (such as examples of use, methodologies and necessary instruments). Perhaps really the solution needs to be a bit more dynamic, like an interactive app or site. I wonder if the most effective solution might be an industry-powered, interaction-centric wiki. Such a project would be non-trivial, and would require mobilising a fair few effective contributors. The Interaction Design Foundation have a V1.0 Encyclopaedia, and an impressive number of authors. And yet, their chapter-led approach to ownership makes this quite different from the more democratic 'wiki' approach.
Justas did a great job pulling the early research together. A special thanks is also in order for Gemma Carr and Tom Selwyn-Davis who helped research and compile this chart.
Sam Hill
20th July 2012
We’ve been working with Animal Systems to find a way of communicating Chirp to the world – a platform they’ve developed for devices to share data with each other via audio. Chirp was demoed recently at Future Everything and Sonar, and the explanatory film below has now been released into the public domain. Which would be you. Hello you!
(credit – audio production: Coda-Cola)
Chirp allows devices to share data wirelessly, through sound. To showcase the potential of the technology Animal Systems are launching their first app, which will allow people to share information – initially short-links to their photos – between iOS devices in a streamlined, intuitive manner. Sharing contacts, notes and locations are soon expected to follow, as well as a version for Android. This is the tip of the iceberg for the platform’s potential, however. Their CEO and co-founder Patrick Bergel (aka @GoodMachine) explained to us how he’s “very interested in opening up the platform as much as we can, and working with other developers”.
Their intentions are manifold. Patrick emphasises that Chirp is not just an application, but a platform – so other applications could be built using it. In providing a piece of middleware, they're keen to see an eclectic range of software use the technology – for gaming, social media, organisation tools and broadcasting. They'd also like to see more hardware devices embrace its data-transfer capability; crossing operating systems and serving both so-called 'smart' and 'dumb' objects.
An audio protocol of this nature comes with some quite distinct UX properties, the potential of which we’re finding very interesting. There are some inherent traits that make it stand out from other data-sharing systems – it has the bonus of not requiring the fuss of ‘pairing’ devices, and one broadcasting device can share information to multiple recipients simultaneously.
From an experiential perspective, we specifically like the fact that object-to-object communication becomes far more tangible to organic meatbags like you or I, simply because it permeates the boundaries of our perception. With audible data transfer we can tell if a device has broadcast its message, and make a good guess as to whether another nearby device is likely to have received it. There's no need for symbolic feedback here (cartoon envelopes vaporising into space dust) – we're actually hearing the data travel, encoded in the airwaves. There's something appreciably humanistic going on here – a stark contrast to radio EMF technologies like Wi-Fi, Bluetooth, or RFID (as BERG highlighted with their Immaterials research).
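To make the general idea concrete, here's a toy sketch of data-over-audio in the frequency-shift spirit. To be clear, this is not Chirp's actual protocol or alphabet – the base pitch, semitone spacing and 26-symbol mapping are all invented for illustration:

#include <cmath>
#include <iostream>
#include <string>

// Map a symbol index to a pitch, one semitone apart from an assumed base note.
double symbolToFrequency(int symbol, double baseHz = 1760.0) {
    return baseHz * std::pow(2.0, symbol / 12.0);
}

int main() {
    std::string message = "HELLO";
    for (char c : message) {
        int symbol = c - 'A';  // crude 26-symbol alphabet for illustration
        std::cout << c << " -> " << symbolToFrequency(symbol) << " Hz\n";
    }
}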
The idea of a relatively open system like this was exciting news for us. It can’t be long before the marketplace produces some very novel games and applications that use the tech. In fact, we have a few ideas ourselves we’d like to pursue.
Significantly, this appears to be the first instance of an airborne object-to-object "speech" system. It's tantalisingly fun, therefore, to wonder if we might be hearing the formative stages of a robotic lingua franca; something unlike either the silence of EMF radio, or the explicitly human-centric discourse of Siri. Within the local network of things – a truly physical environment, where smart objects must interact in situ with people and with other objects amongst people – this could be a real and appropriate voice of the machines: to us inimitable, but appreciable. We might even come to be familiar with a few of their stock phrases, like "excuse me", "I'm busy/available" or "acknowledged".
(Or “KILL KILL KILL” – basically, whatever the situation demands.)
Ben Barker
11th July 2012
The Podcast:
[audio: https://panstudio.co.uk/podcasttwo_memory.mp3]

In this second podcast we talk with Isabel Christie, a neuroscientist, about memory creation and recollection. We ask:
Without memory, or the ability to recall it, is there any value in experience?
Can you be something other than the collection of memories that you have?
Are memory and sentience the same thing?
As we say, Pan exists to explore what value we as humans find in experience. We understand that the transformative quality of experiences defines us as humans and constitutes a major part of what makes a life. In exploring memory, we hope to better understand the role of memory recall in valuing experiences.
Oh, and the image above is of a synapse, the connection between two neurons in the brain. The strengthening and weakening of these synapses is called synaptic plasticity and it is understood that a vast network of these connections is what represents memory.
Here’s the direct link to the file: Memory Podcast
Ben Barker
2nd May 2012
I can't really remember the film Moneyball. I saw it recently, and I'm sure it was fine. I think Brad Pitt looked a bit waxy, and a man did something with numbers. I suppose it saved me reading the Wikipedia page for either or both of them. If a friend hadn't asked my opinion, I would have forgotten it forever. That doesn't make it a bad film, but in my estimation it doesn't make it a good one either; it left no legacy.
Is that system fair, and does it translate more broadly to experience?
A measure for experience might be memorability. If you can't remember a month, a year or a decade, then it probably lacked profound, revelatory or stimulating experiences. This is something we touched on in our podcast a few months back. It could equally be applied to any single experience as much as to a period of time. Think back to any moment: how sharply do you remember its smells, encounters or the way you felt? What does your recall of it tell you about its significance?
There is the danger of making memory creation the activity rather than one of the outcomes, here’s Simon Amstell on that:
“I was in Paris recently with a new group of people… So she suggests that, at about three in the morning, that we all run up the Champs-Elysees to the Arc de Triomphe. And I guess telling you about that now sounds a little bit exciting and fun, but at the time, I just thought, “but w-why would we do that? And then what’s the point? And then when we get there, then what will we do with our lives?” And I’m sort of analyzing what the point of it is, and we live that way [points the other way] and it seems a long way to go. And everyone else is just not analyzing, they’re just running and I’m running as well because of the peer pressure because I’m fun! And we’re all running and running and everyone else, I think, is just at one with the moment, at one with joy, at one with the universe, and I’m there, as I’m running, thinking, “well, this will probably make a good memory!” Which is living in the future discussing the past with someone who, if they asked you, “oh, what did it feel like?” “I don’t know! I was thinking about what to say to you!”
If he had been "at one with the universe" like his peers, he could still have formed a lasting memory; however, it wasn't a transformative or meaningful experience for him, so he had to consciously encode it.
So how is memory formed? This is all reduced from a description by Richard C. Mohs, PhD, at: http://science.howstuffworks.com/environmental/life/human-biology/human-memory.htm
It begins with the encoding of perception. Perception is a stimulation of any or all of the senses. The hippocampus then integrates all those inputs into a single experience and decides if they are worth remembering, based on your state of mind, repetition and need. We are surprisingly in control of what we remember. He says "how you pay attention to information may be the most important factor in how much of it you actually remember." As one brain cell sends signals to another, the synapse between the two gets stronger. The more signals sent between them, the stronger the connection grows. Thus, with each new experience, your brain slightly rewires its physical structure. What is interesting is that we can either choose to try and remember something, or an experience can be sufficiently transformative that it demands to be remembered. First kiss, first airplane flight. First is a word that comes up again and again.
In Remembrance of Things Past, Proust made the distinction between habitual memory and what he called 'mémoire involontaire'. He took the view that 'habit was anaesthetic to memory' and that it 'weakens all impressions.' He even went as far as saying that it was 'a second nature that prevented us from knowing our real selves'. The important distinction here is between strong memorable moments, and the anaesthetic of memories encoded through habit. As A. E. Pilkington explains, Proust sees "voluntary and involuntary memory as recollection of identifiable events"; however, Bergson makes the point in Matière et Mémoire that 'the two memories [may] run side by side and lend to each other a mutual support.' So the challenge is not just forming memory, but forming memory of identifiable events and moments.
Can we use memorability as a measure of experience?
A reasonable criticism may be the fallibility of memory. We adapt our memories to represent what we want, without objectivity, so memory is not a definite measure; however, it is an indicator of the richness of experience. We can ask ourselves what stands out, and in that sense the detail is less important – our remembrance, however skewed, is the measure of value.
After-Life is a beautiful film by Hirokazu Koreeda where newly dead people find themselves in a waystation en route to heaven. They are asked to reflect on what their favourite memory is. All the characters are drawn from interviews with real people, put into the hypothetical situation. It’s interesting for the memories they choose, but also for forcing people to weigh their lives. If you were one of the newly deceased, what would you choose?
What other measures of experience might exist? Personality, appearance, values. All of these are surely products of memory, be it habit (the experientially bad kind), voluntary (actively remembered) or involuntary (excited by chance). The quality of memory that an experience generates is its only legacy. Taking the metaphorical deathbed* as the bottom line of life, will you congratulate yourself on the endless hours of forgettable contentment, or be glad of the memorable experiences and transitions?
*The moment when your memories are both the most valuable and the most worthless. This is a big question to address; for now, ponder this quote from Blade Runner:
Tears In Rain (Blade Runner – 1982):
“I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched c-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. [pause] Time to die.”