We’ve been working with Animal Systems to find a way of communicating Chirp to the world – a platform they’ve developed for devices to share data with each other via audio. Chirp was demoed recently at Future Everything and Sonar, and the explanatory film below has now been released into the public domain. Which would be you. Hello you!
(credit – audio production: Coda-Cola)
Chirp allows devices to share data wirelessly, through sound. To showcase the potential of the technology, Animal Systems are launching their first app, which will allow people to share information – initially short-links to their photos – between iOS devices in a streamlined, intuitive manner. Sharing contacts, notes and locations is expected to follow soon, as well as a version for Android. This is the tip of the iceberg for the platform’s potential, however. Their CEO and co-founder Patrick Bergel (aka @GoodMachine) explained to us how he’s “very interested in opening up the platform as much as we can, and working with other developers”.
Their intentions are manifold. Patrick emphasises that Chirp is not just an application, but a platform – so other applications could be built using it. In providing a piece of middleware, they’re keen to see an eclectic range of software use the technology – for gaming, social media, organisation tools and broadcasting. They’d also like to see more hardware devices embrace its data-transfer capability; crossing operating systems and serving both so-called ‘smart’ and ‘dumb’ objects.
An audio protocol of this nature comes with some quite distinct UX properties, the potential of which we’re finding very interesting. There are some inherent traits that make it stand out from other data-sharing systems – it has the bonus of not requiring the fuss of ‘pairing’ devices, and one broadcasting device can share information to multiple recipients simultaneously.
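To make that one-to-many, no-pairing property concrete, here’s a toy sketch of encoding a short code as a sequence of audible tones. The alphabet, base frequency and tone spacing below are invented for illustration – this is not Chirp’s actual scheme, just the general shape of sending symbols over the air as distinct pitches that any listening device can decode:

```python
# A toy symbols-as-tones codec. The alphabet, base frequency and
# spacing are hypothetical, NOT Chirp's real protocol parameters.

ALPHABET = "0123456789abcdefghijklmnopqrstuv"  # 32 symbols (illustrative)
BASE_HZ = 1760.0   # hypothetical lowest tone
STEP_HZ = 60.0     # hypothetical spacing between adjacent tones

def encode(code: str) -> list[float]:
    """Map each symbol of a short code to one tone frequency (Hz)."""
    return [BASE_HZ + ALPHABET.index(ch) * STEP_HZ for ch in code]

def decode(tones: list[float]) -> str:
    """Invert the mapping: pick the nearest symbol for each frequency."""
    return "".join(
        ALPHABET[round((hz - BASE_HZ) / STEP_HZ)] for hz in tones
    )

# A broadcaster plays the tone sequence once; every device in earshot
# can run decode() on what it hears – no pairing step required.
tones = encode("hj050422")
assert decode(tones) == "hj050422"
```

The design point is that broadcast comes for free: the sender doesn’t address anyone, it just makes a noise, and however many receivers happen to be listening can all recover the same symbols.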
From an experiential perspective, we specifically like the fact that object-to-object communication becomes far more tangible to organic meatbags like you or me, simply because it permeates the boundaries of our perception. With audible data transfer we can tell if a device has broadcast its message, and make a good guess as to whether another nearby device is likely to have received it. There’s no need for symbolic feedback here (cartoon envelopes vaporising into space dust) – we’re actually hearing the data itself travel, encoded in the airwaves. There’s something appreciably humanistic going on here – a stark contrast to radio EMF technologies like Wi-Fi, Bluetooth, or RFID (as BERG highlighted with their Immaterials research).
The idea of a relatively open system like this was exciting news for us. It can’t be long before the marketplace produces some very novel games and applications that use the tech. In fact, we have a few ideas ourselves we’d like to pursue.
Significantly, this appears to be the first instance of an airborne object-to-object “speech” system. It’s tantalisingly fun, therefore, to wonder if we might be hearing the formative stages of a robotic lingua franca; something unlike either the silence of EMF radio, or the explicitly human-centric discourse of Siri. Within the local network of things – a truly physical environment, where smart objects must interact in situ with people and with each other – this could be a real and appropriate voice for the machines: to us inimitable, but appreciable. We might even come to be familiar with a few of their stock phrases, like “excuse me”, “I’m busy/available” or “acknowledged”.
(Or “KILL KILL KILL” – basically, whatever the situation demands.)