Augmented Cinema

Sam Hill

24th October 2011

Last night I saw The Matrix Live at the Royal Albert Hall – a showing of the original 1999 motion picture, but with a live orchestra performing the score. It was phenomenal. The NDR Pops Orchestra perfectly captured the epic melodrama of Don Davis’ original soundtrack, with its relentless violins and big brass and timpani crescendos. The venue was perfect for it, and the film itself had aged quite well for such a stylised piece of science fiction.

The experience was similar to the Philharmonia Orchestra and Philharmonia Voices’ treatment of 2001: A Space Odyssey, which I caught last year at the Royal Festival Hall and was absolutely bowled over by. It was brilliant and haunting – an unparalleled sensory experience. Loads of other films (Star Wars in Concert, for example) have received a similar treatment, and cinematic performances have diversified in many other ways too.

This brings to mind a number of questions about what makes the cinematic experience brilliant as it is, and about when it’s appropriate to toy with the format.

It might be helpful to analyse what the two films have in common to see why they were chosen:

  • To start with, 2001 and The Matrix are both excellent, popular movies with incredible scores.
  • They have high replay value.
  • They are Oscar-winning classics that have endured long enough to remain relevant.
  • They are both unashamedly ostentatious and ambitious works of cinema.

I doubt this style of adaptation would work for films that do not meet these criteria, however good they might otherwise be. Shrek (2001), for example, is a perfectly good film – funny, innovative and enduring – but it probably lacks the gravitas to warrant a full-blown orchestra. Tinker Tailor Soldier Spy is, at the time of writing, a new and critically acclaimed release; but to place an intervention between it and the audience so soon would be a disservice: the movie-goer has not yet seen it as it was meant to be seen, so it shouldn’t be tampered with yet.

A rough logic is beginning to fall into place.

Already Good

Going to the cinema is an unusual activity: it can only really be considered a semi-social event, seeing as talking is actively discouraged. Despite this, it’s one of the most popular public leisure activities of the last century. In a way, it’s incredible to think that though we can spend most of our working day looking at screens, and can go home and watch anything we want on more screens from the comfort of a sofa, we consider it a treat to occasionally leave the house and view another, bigger screen, at a relatively premium rate. There must be good reasons for this, surely?

Progress in delivering new experiences is important, but if the following assets of cinema are undermined too far, any intervention will be rendered distracting rather than immersive – a diminishment of the cinematic experience, not an augmentation.

What makes cinema great?

  • First off, there is the complete, unavoidable immersion – the film stretches to the edge of the viewer’s peripheral vision and the audio overrides all other noise.
  • It’s romantic – the ritual of the popcorn, the trailers, the sense of shared experience and the analytical post-film drinks.
  • It’s an easy, comfortable and passive activity to take part in: the viewer need only sit, look and listen – sometimes that’s all we want to do.
  • Finally, there’s the quality, of both narrative and production. Cinema is arguably the king of storytelling and remains at the very frontier of our qualitative expectations in so many respects.

Future Cinema

(photo credit: Saulius Patumsis via Flickr)

I mentioned that cinema performances have diversified in other ways. One group that seems to consistently nail immersive, film-centric nights is Future Cinema. As their site reads:

Future Cinema is a live events company that specialise in creating living, breathing experiences of the cinema…Future Cinema aim to bring the concept of ‘experience’ back to the cinema-going world.

Specialising in bringing events to life through a unique fusion of film, improvised performances, detailed design and interactive multimedia, Future Cinema create wholly immersive worlds that stretch the audience’s imagination and challenge their expectations.

The activities they organised for Blade Runner, One Flew Over The Cuckoo’s Nest, Top Gun and Watchmen have become somewhat legendary in London. Future Cinema are currently the authority on cinematic experience.

What Else?

As well as the use of theatre to blur the edges of the screen, there are further tools, both upcoming and established, that are employed to affect our cinema experience. 3D glasses, for example, faced their first serious commercial acid test with Avatar (2009), but now seem well established. The super-wide IMAX screenings are arguably even more immersive than conventional cinema, and showings are often very popular. New and unusual locations for temporary cinemas are always cropping up, providing a break in style from the multiplexes we’re used to. Olfactory stimulation (“smell-o-vision”) is a gimmick occasionally used with films for kids (see Spy Kids 4 in 4-D Aroma-scope (2011)), and in a dozen or so theme parks internationally they go a little further with a show called Pirates 4-D – a slightly cheesy film (starring Eric Idle and the late Leslie Nielsen) with “4-D effects” involving water cannons, bursts of air, vibrating seats and wires which push against the viewers’ feet.

A friend once described how he went to the cinema to see a preview of Danny Boyle’s Sunshine (2007), a film set on a spaceship heading towards the sun. He saw it in the middle of the 2007 summer heatwave, and the cinema’s air conditioning had broken down. Sweating in his seat, he didn’t know whether he was the victim of a PR stunt or suffering a psychosomatic reaction brought on by the film. In any case, the experience stayed with him.

Edit (I): London Dungeon have further stretched the idea of extra-“dimensional” cinema by introducing a 5D ride – ‘Vengeance’. This includes 3D vision, a number of techniques similar to Pirates 4-D (air blasts, water sprays, vibrations etc.), and laser-sighted pistols which allow the whole audience to play a cooperative, interactive game onscreen.

Edit (II): Another phenomenon that deserves looking at is audience-initiated or cinema-facilitated activity associated with certain cult films. The Room (2003), often cited as the “best worst film ever made”, serves as a really good example. A ritual has grown around the film – the audience join in with the dialogue, greet the characters as they appear, shout satirical comments and throw plastic spoons at the screen. The effect is that one of the worst films ever produced allows for one of the most energetic and entertaining cinematic experiences possible. In a similar vein, Grease, Rocky Horror and The Sound of Music are often shown in independent cinemas on special sing-along nights, which tend to feature a degree of cosplay.

An infamous clip from “The Room”:

Sensory Augmentation: Vision (pt. 1)

Sam Hill

30th August 2011

Blinkered

The above diagram illustrates the full breadth of the electromagnetic spectrum, from gamma rays smaller than an atom to radio waves larger than the Earth (there are, in fact, no theoretical limits in either direction). That thin technicoloured band of ‘visible light’ is the only bit our human eyes can detect. That’s it. Our visual faculties are blinkered to a 400–800 terahertz range, and from within these parameters we try as best we can to make sense of our universe.
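
A quick back-of-envelope check of those numbers: frequency and wavelength are related by λ = c/f, so 400–800 THz corresponds to wavelengths of roughly 750 nm down to 375 nm. A minimal Python sketch of the conversion (illustrative only – the function name is my own):

    # Convert a frequency in terahertz to a wavelength in nanometres, via lambda = c / f.
    C = 299_792_458  # speed of light in m/s

    def freq_thz_to_wavelength_nm(f_thz):
        """Return the wavelength in nm for a frequency given in THz."""
        return C / (f_thz * 1e12) * 1e9

    print(freq_thz_to_wavelength_nm(400))  # ~750 nm: deep red
    print(freq_thz_to_wavelength_nm(800))  # ~375 nm: the edge of ultraviolet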

There is no escaping the fact that our experience of the environment is limited by the capacity of our senses. Our visual, aural, haptic and olfactory systems respond to stimuli – they read “clues” from our environment – from which we piece together a limited interpretation of reality.

So says xkcd:

This limited faculty has suited us fine, to date. But it follows that if we can augment our senses, we can also increase our capacity for experience.

Seeing beyond visible light

Devices already exist that can process EM sources into data we can interpret: X-ray machines, UV filters, cargo scanners, black-lights, radar, MRI scanners, night-vision goggles and satellites all exploit EM waves of various frequencies to extend our perceptions. So do infrared thermographic cameras, as popularised by Predator (1987).

What are the implications of para-light vision?

Let’s for one second ignore a canonical issue with the Predator films – that the aliens sort of had natural thermal vision anyway – and pretend they normally see visible light. Let’s also ignore the technical fact that the shots weren’t captured with a thermal imaging camera (they don’t work well in the rainforest, apparently). Let’s assume instead that we have a boxfresh false-colour infra-red system integrated into a headset, and that human eyes could use it. How effective would it be?

First of all, we’re talking about optical apparatus – something worn passively rather than a tool used actively (such as a camera or scanner) – so the design needs special consideration. An x-ray scanner at an airport is an unwieldy piece of kit, but it can feed data to a monitor all day without diminishing the sensory capacity of the airport security staff who use it: they can always look away. If Predator-vision goggles were in use today, they would be burdened with a problem similar to that of military-grade “night-vision” goggles.

Predator Vision is not a true sensory augmentation in that it does not *actually* show radiating heat. Instead it piggy-backs off the visible-light capability of the eye and codifies heat emissions into an analogical form that can be made sense of: i.e. false-colour. In order to do so, a whole new competing layer of data must replace or lie above – and so interfere with – any visible light that is already being received.
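
To make “codifies heat emissions into an analogical form” concrete, here is a toy false-colour mapping sketched in Python. The gradient, the temperature range and the names are my own assumptions, not anything from the film or a real device:

    # Toy false-colour codification: map a temperature sample onto a blue-to-red
    # gradient. In a real headset this colour would replace the pixel's actual
    # visible-light colour - which is exactly the interference described above.

    def false_colour(temp_c, t_min=0.0, t_max=100.0):
        """Map a temperature in Celsius to an (R, G, B) tuple: cold = blue, hot = red."""
        t = max(0.0, min(1.0, (temp_c - t_min) / (t_max - t_min)))  # normalise to 0..1
        return (int(255 * t), 0, int(255 * (1 - t)))

    print(false_colour(95))  # (242, 0, 12): a hot kettle reads as near-pure red
    print(false_colour(5))   # (12, 0, 242): a cold window reads as near-pure blue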

Predator Vision in the home

For example, let’s task The Predator with a household chore: he must wash the dishes (The Predator doesn’t have a dishwasher). There are two perceivable hazards. The first is scalding oneself with hot water, which Predator Vision can detect; the second is cutting oneself on a submerged kitchen knife, which only visible light can reveal (assuming the washing-up liquid isn’t too bubbly) – infra-red radiation cannot penetrate the water’s surface. What is The Predator to do?

He would probably have to toggle between the two – viewing in IR first to get the correct water temperature, then visible light afterwards. But a user-experience specialist will tell you this is not ideal – switching between modes is jarring and inconvenient, and it also means the secondary sense can’t be used in anticipation. A careless Predator in the kitchen might still accidentally burn himself on a forgotten electric cooker ring. The two ideally want to be used in tandem.

What’s the solution?

It’s a tricky one. How can we augment our perception if any attempt to do so compromises what we already have? Relaying too much information optically will create too much noise for any of it to be decipherable (remember: our ultimate goal is to make as much of the EM spectrum perceptible as possible, not just IR). This old TNG clip illustrates the point quite nicely:

Here Geordi claims that he has learned how to “select what I want and disregard the rest”. Given the masking effect of layering information, the ability to “learn” such a skill seems improbable. It seems as likely as, say, someone learning to read all the values from dozens of spreadsheets, overprinted onto one page. However, the idea of ‘selectivity’ is otherwise believable – we already have such a capacity of sorts. Our eyes are not like scanners, nor cameras. We don’t give equal worth to everything we see at once, but rather the brain focuses on what is likely to be salient. This is demonstrable with the following test:

It’s also worth noting the unconscious efforts our optical system makes to enhance visibility. Our irides contract or expand to control the amount of light entering our eyes, and the rod cells in the retina adjust in low-light conditions to give us the degree of night vision we notice after several minutes in the dark. The lenses of our eyes can be compressed to change their focal length. In other words, the eye can calibrate itself autonomously, to an extent, and this should be remembered from a biomimetic perspective.

Option one:

The most immediate answer to para-light vision is a wearable, relatively non-invasive piece of headgear that works through the eye. Since everything must ultimately be rendered as visible light, the headgear would need to work intelligently, with a sympathetic on-board computer. The full scope of this is difficult to foresee here. Different frequencies of EM radiation might need to be weighted by likely importance – perhaps by default visible light would occupy 60% of total sight, 10% each for IR and UV, and 20% for the remaining wavelengths. A smart system could make pre-emptive decisions for the viewer about what they might want to know: e.g. maybe only objects radiating heat above 55°C (our temperature pain threshold) would be shown to give off infra-red light. Or maybe different frequencies take over if primary sight is failing. Eye tracking could help the intelligent system make sense of what the viewer is trying to see and respond accordingly. This might fix the toggling-between-modes issue raised earlier.
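
As a thought experiment, here is how that weighting might look per pixel, sketched in Python. The 60/10/10/20 split and the 55°C threshold are the speculative figures from the paragraph above; everything else (the names, the normalised intensities) is assumed purely for illustration:

    # Speculative per-pixel blend of spectral layers. The IR layer only contributes
    # where the surface temperature exceeds the ~55 degree C pain threshold.

    PAIN_THRESHOLD_C = 55.0
    WEIGHTS = {"visible": 0.6, "ir": 0.1, "uv": 0.1, "other": 0.2}

    def blend_pixel(visible, ir, ir_temp_c, uv, other):
        """Blend normalised (0..1) band intensities into one displayed value."""
        ir_shown = ir if ir_temp_c > PAIN_THRESHOLD_C else 0.0  # suppress benign heat
        return (WEIGHTS["visible"] * visible + WEIGHTS["ir"] * ir_shown
                + WEIGHTS["uv"] * uv + WEIGHTS["other"] * other)

    print(blend_pixel(0.5, 1.0, 200.0, 0.0, 0.0))  # ~0.4: a 200 C hob ring shows through
    print(blend_pixel(0.5, 1.0, 40.0, 0.0, 0.0))   # ~0.3: a 40 C mug of tea does not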

It’s interesting to wonder what it would mean to perceive radio bands such as those used for wi-fi or RFID – obviously it would be fascinating to observe them in effect, but might their pervasiveness be overbearing? Perhaps the data could be presented non-literally – processed and shown graphically or diagrammatically?

Option two:

The second, more outlandish option is a cybernetic one. Imagine if new perceptions could be ported directly to the brain, without relying on pre-formed synaptic systems – completely new senses. Perhaps existing parts of the brain could accept these ported senses. The phenomenon of synesthesia comes to mind, where stimulation of one sensory or cognitive pathway leads to automatic, involuntary experiences in a second sensory or cognitive pathway. Is it possible that, in a similar vein, the visual cortex could read non-optic information? And would that let us see several types of information simultaneously while selectively choosing which parts to focus on? If such a segue weren’t possible, would a neural implant bridge the gap?

In Summary

I’ve intentionally only discussed the EM scale here, but of course there are many other forms of data that can be visualised. There might be potential for augmenting vision with sonar, for example, or microscopy. Human-centric metadata deserves a whole post in its own right.

It’s difficult to predict how the potential for sensory augmentation will change, but whatever opportunities pioneering science unlocks can be followed up with tactical design consideration, to make sure applications are appropriately effective and adoptable. It’s an exciting prospect to think that we may be on the threshold of viewing the world in new, never-before-seen ways – and with this new vision there will inevitably be new points of inspiration and new ways of thinking.