Earphones and Selective Reality

Sam Hill

20th November 2011

It’s feasible that an average commuting city worker might wear earphones for between 5 and 12 hours a day. In some places they’re ubiquitous – on the train, in the office, on the high street – so much so that they have become invisible.

This is fine of course – it’s not a criticism, just an observation. Personal experience reveals journeys are less stressful if the sound of a baby crying can be blocked; work is achieved more efficiently without the ambient distractions of an open-plan office.

But the observation does come with a hefty question in tow. It’s equally typical that the aforementioned worker might spend up to 15-16 hours a day looking at screens, but there is a significant difference: screens are not all-encompassing. They can be looked away from, or around, and we can shut our eyes. Conversely, personal headphones are supposed to be all-encompassing; they are supposed to override all ambient noise.

What does it mean, then, to block out the world around you: to usurp an important link to one’s environment for so much of the time?

Context

The personal stereo is over 30 years old and has gone through multiple format changes. Significantly, the MP3 player massively opened up the potential for people to carry their “entire” music collections with them. Another (slightly overlooked) innovation has been Spotify for mobile, which allows someone to listen to any song they can call to mind from practically any location, through their smartphone, 3G and ‘the cloud’. Even making allowances for licensing and signal strength, that’s an incredible thought, isn’t it? Any song, any place, any time. From prehistory up until 150 years ago, the only way to hear music was to be in the same space as the instrument. There is an incredibly liberating cultural power that comes with the tech we now wield.

Voluntary Schism from Reality

To take a critical sense like hearing and hack its primary informative/exploratory role to instead supply entertainment will certainly have a significant effect on one’s perception of reality. Granted, ‘reality’ is a weaselly, subjective term, but the choice will still affect an individual’s capacity to perceive their immediate environment. Critically, the user of earphones has made a choice: they are listening to what they want to, regardless of whether it’s what they should listen to. They have been granted the power to exert an amount of control over sensory input, and over how they engage with their environment. Whether or not there is an experiential ‘compromise’ going on is contentious.

For example, consider a typical 40-minute train commute. Coincidentally, 40 minutes is roughly the length of an average album. So within a week’s commute it might be possible to listen to roughly 10 new albums. Doing so would impart a constant, fairly rich supply of fresh experience. On the other hand, listening instead to the daily sounds of a train carriage would probably be emotionally and sensationally lacking, most of the time. However, occasionally the ambient noise of a journey might yield (experiential) gems: eavesdropping on an argument, a phone call or the ramblings of an alcoholic.

Most likely, the album-listening route would be more rewarding in the long term, and so within this context could be considered experientially condonable. But is this true beyond the commute?

Boundaries

Has society had time to adjust to the power of being able to limit depth of engagement with the physical world? Do we understand the point at which the benefit becomes a hurdle – when a delivery mechanism for experience becomes an obstacle? The thought first occurred to me when I saw a father carrying a toddler in his arms through a park. The father had white earphones hanging from his ears and a vacant expression. The kid was babbling and humming and blowing raspberries at his dad, but the father was completely oblivious. The sight, an abuse of technological power, made me instantly uncomfortable. The fact this man had wilfully placed a barrier between himself and his son, to the detriment of them both, made me incredibly angry, actually. In this instance it wasn’t strangers on the tube being phased out of attention but immediate family. It seemed wrong by every measure of quality.

I’ve also been amazed to see cyclists weave through traffic whilst listening to music. In my experience it seems necessary to dedicate every possible faculty to cycling in a built-up environment. Granted, there might be marginally more experiential value in cycling to music, but is the pay-off worth the risk of failing to identify peripheral hazards? After all, a premature death will reduce an individual’s net lifetime experience acquired, quite drastically.

By Analogy

In a recent workshop we held at Goldsmiths College, a design student ran a quick experiment to limit their exposure to unpleasant smells. They subverted their olfactory sense by keeping a perfumed cloth over their nose whilst walking through bad-smelling places.

The student realised within a few hours that living with a single abstract ‘pleasant’ smell was less desirable than having access to countless neutral and unpleasant odours – odours which were still relevant and contextually grounded.

Sensory Augmentation: Vision (pt. 1)

Sam Hill

30th August 2011

Blinkered

The above diagram illustrates the full breadth of the electromagnetic spectrum, from tiny sub-atomic gamma rays to radio waves larger than the Earth (there are, in fact, no theoretical limits in either direction). That thin technicoloured band of ‘visible light’ is the only bit our human eyes can detect. That’s it. Our visual faculties are blinkered to a 400-800 terahertz range. And from within these parameters we try as best we can to make sense of our universe.
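As a rough sanity check on that range, a few lines of Python can convert the approximate wavelength limits of human vision (taking roughly 380-750 nm as the visible band, an approximation) into frequencies via f = c / λ:

```python
# Rough sketch: convert the approximate wavelength limits of human vision
# into frequencies, to show how narrow the "visible" window really is.

C = 299_792_458  # speed of light, m/s

def wavelength_to_thz(wavelength_nm: float) -> float:
    """Return the frequency in terahertz for a wavelength given in nanometres."""
    return C / (wavelength_nm * 1e-9) / 1e12

for nm in (750, 380):  # red end, violet end
    print(f"{nm} nm  ->  {wavelength_to_thz(nm):.0f} THz")

# Prints roughly 400 THz and 789 THz: the ~400-800 THz window described above.
```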

There is no escaping the fact that our experience of the environment is limited by the capacity of our senses. Our visual, aural, haptic and olfactory systems respond to stimuli – they read “clues” from our environment – from which we piece together a limited interpretation of reality.

So says xkcd:

This limited faculty has suited us fine, to date. But it follows that if we can augment our senses, we can also increase our capacity for experience.

Seeing beyond visible light

Devices do already exist that can process EM sources into data that we can interpret: X-ray machines, UV filters, cargo scanners, blacklights, radar, MRI scanners, night-vision goggles and satellites all exploit EM waves of various frequencies to extend our perceptions. As do infrared thermographic cameras, as made popular by Predator (1987).

What are the implications of para-light vision?

Let’s for one second ignore a canonical issue with the Predator films – that the aliens sort of had natural thermal vision anyway – and pretend they can normally see visible light. Let’s also ignore the technical fact that the shots weren’t captured with a thermal imaging camera (they don’t work well in the rainforest, apparently). Let’s assume instead that we have a box-fresh false-colour infra-red system integrated into a headset, and that human eyes could use it. How effective would it be?

First of all, we’re talking about optical apparatus, something worn passively rather than a tool used actively (such as a camera, or scanner). The design needs special consideration. An X-ray scanner at an airport is an unwieldy piece of kit, but it can feed data to a monitor all day without diminishing the sensory capacity of the airport security staff that use it. They can always look away. If Predator Vision goggles were in use today, they would be burdened with a problem similar to that of military-grade “night-vision” goggles.

Predator Vision is not a true sensory augmentation in that it does not *actually* show radiating heat. Instead it piggy-backs off the visible-light capability of the eye and codifies heat emissions into an analogical form that can be made sense of: i.e. false-colour. In order to do so, a whole new competing layer of data must replace or lie above – and so interfere with – any visible light that is already being received.
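To make the idea of codifying heat as colour a little more concrete, here’s a quick sketch; the 0-100°C range and the simple blue-to-red ramp are illustrative assumptions only, not a real thermal-camera palette:

```python
# Minimal sketch of a false-colour encoding: map a temperature reading onto an
# RGB value so that heat can piggy-back on the eye's visible-light channel.
# The 0-100 degC range and the blue-to-red ramp are illustrative assumptions.

def false_colour(temp_c: float, t_min: float = 0.0, t_max: float = 100.0) -> tuple[int, int, int]:
    """Return an (R, G, B) tuple: cold maps to blue, hot maps to red."""
    t = max(0.0, min(1.0, (temp_c - t_min) / (t_max - t_min)))  # clamp to [0, 1]
    return (int(255 * t), 0, int(255 * (1 - t)))

print(false_colour(5))    # cool -> mostly blue
print(false_colour(55))   # warm -> purplish midpoint
print(false_colour(95))   # hot  -> mostly red
```

The problem described above is exactly this: whatever the palette, the encoded colours occupy the same visual channel as everything else the eye is trying to see.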

Predator Vision in the home

For example, let’s task The Predator with a household chore. He must wash the dishes. The Predator doesn’t have a dishwasher. There are two perceivable hazards: the first is scalding oneself with hot water, which Predator Vision can detect; the second is cutting oneself on a submerged kitchen knife, which only visible light can identify (assuming the washing-up liquid isn’t too bubbly). Infra-red radiation cannot penetrate the water’s surface. What is The Predator to do?

He would probably have to toggle between the two – viewing in IR first to get the correct water temperature, then visible light afterwards. But a user-experience specialist will tell you this is not ideal – switching between modes is jarring and inconvenient, and it also means the secondary sense can’t be used in anticipation. A careless Predator in the kitchen might still accidentally burn himself on a forgotten electric cooker ring. The two ideally want to be used in tandem.

What’s the solution?

It’s a tricky one. How can we augment our perception if any attempt to do so is going to compromise what we already have? Trying to relay too much information optically is going to cause too much noise to be decipherable (remember our ultimate goal is to have as much of the EM spectrum perceptible as possible, not just IR). This old TNG clip illustrates the point quite nicely:

Here Geordi claims that he has learned how to “select what I want and disregard the rest”. Given the masking effect of layering information, the ability to “learn” such a skill seems improbable. It seems as likely as, say, someone learning to read all the values from dozens of spreadsheets, overprinted onto one page. However, the idea of ‘selectivity’ is otherwise believable – we already have such a capacity of sorts. Our eyes are not like scanners, nor cameras. We don’t give equal worth to everything we see at once, but rather the brain focuses on what is likely to be salient. This is demonstrable with the following test:

It’s also worth noting the unconscious efforts our optical system makes to enhance visibility. Our irides contract or expand to control the amount of light entering our eyes, and the rod cells in the retina adjust in low-light conditions to give us that certain degree of night vision we notice after several minutes in the dark. The lenses of our eyes can be compressed to change their focal length. In other words, the eye can calibrate itself autonomously, to an extent, and this should be remembered from a biomimetic perspective.
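As a loose biomimetic analogy (purely an assumption, sketched for illustration), an augmented headset might run something like a crude auto-exposure loop, nudging a gain value toward a target brightness much as the iris does:

```python
# Loose analogy to the iris: a proportional auto-exposure loop that raises
# gain in the dark and lowers it in bright light. Target and rate are
# arbitrary illustrative values.

def adjust_gain(gain: float, measured_brightness: float,
                target: float = 0.5, rate: float = 0.2) -> float:
    """Nudge the gain so the measured brightness drifts toward the target."""
    error = target - measured_brightness
    return max(0.01, gain * (1 + rate * error))

gain = 1.0
for scene_brightness in (0.9, 0.8, 0.6, 0.5, 0.2, 0.1):  # scene getting darker
    gain = adjust_gain(gain, scene_brightness * gain)
    print(f"scene={scene_brightness:.1f}  gain={gain:.2f}")
```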

Option One:

The most immediate answer to para-light vision is a wearable, relatively non-invasive piece of headgear that works through the eye. Because everything must ultimately be rendered as visible light, the headgear would need to work intelligently, with a sympathetic on-board computer. The full scope of this might be difficult to foresee here. Different frequencies of EM radiation might need to be weighted for likely importance – perhaps by default visible light would occupy 60% of total sight, 10% each for IR and UV, and 20% for the remaining wavelengths. A smart system could help make pre-emptive decisions for the viewer on what they might want to know – e.g. maybe only objects radiating heat above 55ºC (our temperature pain threshold) would be shown to give off infra-red light. Or maybe different frequencies take over if primary sight is failing. Eye tracking could be used to help the intelligent system make sense of what the viewer is trying to see and respond accordingly. This might fix the toggling-between-modes issue raised earlier.
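As a sketch of what that weighted compositing might look like, here’s a minimal example; the 60/10/10/20 split and the 55ºC cut-off come from the paragraph above, while the layer names, the normalised pixel values and the simple linear blend are assumptions for illustration:

```python
# Minimal sketch of the weighted compositing described above. The default
# weights (60% visible, 10% IR, 10% UV, 20% everything else) and the 55 degC
# infra-red cut-off come from the text; the per-band values and the linear
# blend are illustrative assumptions.

DEFAULT_WEIGHTS = {"visible": 0.6, "ir": 0.1, "uv": 0.1, "other": 0.2}
IR_TEMP_CUTOFF_C = 55.0  # only show IR for objects hotter than this

def composite_pixel(layers: dict[str, float], temp_c: float,
                    weights: dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Blend per-band intensities (0.0-1.0) into one displayed value."""
    active = dict(weights)
    if temp_c < IR_TEMP_CUTOFF_C:
        # Suppress the IR layer and hand its share back to visible light.
        active["visible"] += active.pop("ir")
    return sum(active[band] * layers.get(band, 0.0) for band in active)

# A hot hob ring: strong IR signal blended in alongside visible light.
print(composite_pixel({"visible": 0.3, "ir": 0.9, "uv": 0.0, "other": 0.1}, temp_c=200))
# A cool mug: IR suppressed entirely, visible light dominates.
print(composite_pixel({"visible": 0.3, "ir": 0.9, "uv": 0.0, "other": 0.1}, temp_c=30))
```

The interesting design work would be in how those weights shift moment to moment – in response to eye tracking, failing sight, or whatever the smart system judges salient.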

It’s interesting to wonder what it would mean to perceive radio bands such as those used for wi-fi or RFID – obviously, it would be fascinating to observe them in effect, but might their pervasiveness be overbearing? Perhaps the data could be presented non-literally, but processed and shown graphically/diagrammatically?

Option Two:

The second, more outlandish option is a cybernetic one. Imagine if new perceptions could be ported directly to the brain, without relying on pre-formed synaptic systems. Completely new senses. Perhaps existing parts of the brain could accept these ported senses. The phenomenon of synaesthesia comes to mind, where stimulation of one sensory or cognitive pathway leads to automatic, involuntary experiences in a second sensory or cognitive pathway. Is it possible that, in a similar vein, the visual cortex could read non-optic information – and would that help us to see several types of information simultaneously while allowing us to selectively choose which parts to focus on? If such a segue weren’t possible, would a neural implant bridge the gap?

In Summary

I’ve intentionally only discussed the EM scale here, but of course there are many other forms of data that can be visualised. There might be potential for augmenting vision with sonar, for example, or microscopy. Human-centric metadata deserves a whole post in its own right.

It’s difficult to predict how the potential for sensory augmentation will change, but whatever opportunities pioneering science unlocks can be followed up with tactical design consideration to make sure applications are appropriately effective and adoptable. It’s an exciting prospect to think that we may be on the threshold of viewing the world in new, never-before-seen ways – and with this new vision there will be, inevitably, new points of inspiration and new ways of thinking.