Notes on Blindness.

Having been personally affected by visual impairment in my family, watching this touching documentary scene from ‘Notes on Blindness’ triggered a desire to use research and development in immersive technologies to realise John Hull’s idea of rain falling indoors so that space could be communicated. I researched the garden scene from the ‘Notes on Blindness’ VR experience to further understand what might be achieved outside of VR.

John Hull scene that inspired this latest research

VR screen capture of rain and wind in the garden communicating space to John from his balcony

Procedural Rain?

After contemplating John’s wish, I started prototyping a procedural audio rain patch that could be parametrically driven by realtime data input. This involved studying the physics of rain and the synthesis techniques of Andy Farnell to create randomised raindrops with a gaussian distribution, imitating the different characteristics of drops, drips and the distance distribution of rainfall in a single, computationally light patch. At first I considered using a BCI interface to drive the rain synthesis, as a first step towards the wearable, human-centred side of the project.
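As a rough illustration of the approach (the actual patch is not reproduced here), below is a minimal offline sketch in Python: drop arrival times, sizes and loudness are randomised, with gaussian-distributed sizes, and a `density` parameter is left open so realtime data can drive it. The function names, filter constants and ranges are my own illustrative assumptions rather than the patch itself.

```python
# Illustrative sketch only: a simplified, offline stand-in for the realtime
# procedural rain patch described above. Drop timing, loudness and "size"
# are randomised (gaussian) so no two drops sound identical.
import numpy as np

SR = 44100  # sample rate (Hz)

def raindrop(size: float, sr: int = SR) -> np.ndarray:
    """One raindrop: a short burst of filtered noise with an exponential decay.
    `size` loosely maps to drop mass -> duller, longer drops for bigger sizes."""
    dur = 0.01 + 0.04 * size                       # 10-50 ms
    t = np.arange(int(dur * sr)) / sr
    noise = np.random.uniform(-1.0, 1.0, t.size)
    env = np.exp(-t / (0.002 + 0.01 * size))       # smaller drops decay faster
    alpha = 0.1 + 0.6 * (1.0 - size)               # crude one-pole lowpass coefficient
    out = np.zeros_like(noise)
    for i in range(1, noise.size):
        out[i] = out[i - 1] + alpha * (noise[i] - out[i - 1])
    return out * env

def rain(seconds: float, density: float = 200.0, distance: float = 0.3) -> np.ndarray:
    """Mix many drops. `density` = average drops per second (could be driven by
    realtime data, e.g. a BCI value); `distance` pushes the rainfall quieter."""
    buf = np.zeros(int(seconds * SR))
    n_drops = np.random.poisson(density * seconds)                     # random arrival count
    starts = np.random.uniform(0, seconds, n_drops)                    # random arrival times
    sizes = np.clip(np.random.normal(0.5, 0.2, n_drops), 0.05, 1.0)    # gaussian drop sizes
    gains = np.clip(np.random.normal(0.5, 0.2, n_drops), 0.0, 1.0) * (1.0 - distance)
    for s, size, g in zip(starts, sizes, gains):
        drop = raindrop(size) * g
        i = int(s * SR)
        j = min(i + drop.size, buf.size)
        buf[i:j] += drop[: j - i]
    return buf / max(1.0, np.abs(buf).max())                           # normalise
```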

BCI Muse headset parametrically driving rain synthesis
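The integration with the Muse headset is not detailed here, so the following assumes only that a single normalised value between 0 and 1 (for example a relative band-power or ‘calm’ score) is already available from the headset, and shows how such a value could parametrically drive the rain sketch above. The mapping ranges are hypothetical.

```python
# Hypothetical mapping from a normalised BCI value (0..1) to rain parameters.
# How the value is obtained from the Muse headset (OSC, SDK, etc.) is out of
# scope here; this only shows the parametric-drive idea.
def bci_to_rain_params(calm: float) -> dict:
    calm = min(max(calm, 0.0), 1.0)
    return {
        "density": 50.0 + 450.0 * calm,   # calmer -> heavier rainfall
        "distance": 0.6 * (1.0 - calm),   # calmer -> rain feels closer
    }

# e.g. rain(seconds=5.0, **bci_to_rain_params(calm=0.8)) with the rain() sketch above
```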

Sensory Substitution Device

Being in the electronic engineering department, with close links to the psychology department, my research also led me to existing sensory substitution devices. Sensory substitution is a non-invasive technique for circumventing the loss of one sense by feeding its information through another channel. Evidence of such devices goes back to the 1970s, and I began trying to understand how my initial findings could push the boundaries of current technology in the form of a sensory substitution device (SSD). The ‘white cane’ is considered one of the most original forms of sensory substitution device: a blind or visually impaired user sweeps the cane to understand the environment and feel surrounding objects. The latest immersive technology could enhance how this information about the world is delivered.

Magic Leap “Eyes for the Blind” (Dr Dylan Fox) conveys information from the world scene and translates it into incoming audio

LiDAR, ARKit & iPhone

Whilst some Magic Leap prototypes showed promise, they were not utilising realtime interactive or procedural audio, nor spatialisation derived from the world scene. I began digging into ARKit on the iPhone 12 Pro with LiDAR, alongside the Unity game engine. The goal: ultimately use real-world scene data in realtime to parametrically drive sound design. This could be applicable to storytelling, but also to the SSD. The visual-inertial odometry and the semantic scene information given by the LiDAR were of real interest. Whilst delving into this side I was still constantly thinking about the sonification of this data. A point in the cloud of positional data is akin to a raindrop. What is useful information? What is not? What should it sound like? What would a 3D cube sound like?

Realtime LiDAR semantic scene understanding

Point Cloud of scene data
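To make the ‘point cloud as raindrops’ idea concrete, here is a hedged sketch of how a single point from the realtime cloud could be mapped to raindrop-style synthesis parameters. The 5 m range, pitch mapping and pan law are illustrative choices of mine, not values from the prototype or the ARKit API.

```python
# Illustrative sketch: treating one LiDAR point like a raindrop. Positions are
# assumed to arrive in metres in the device's coordinate frame; the field names
# and ranges are assumptions for illustration.
import math

def point_to_drop(x: float, y: float, z: float, confidence: float = 1.0) -> dict:
    """Map a single 3D point to raindrop-style synthesis parameters."""
    distance = math.sqrt(x * x + y * y + z * z)
    proximity = max(0.0, 1.0 - distance / 5.0)     # silent beyond ~5 m
    return {
        "gain": confidence * proximity ** 2,        # near, confident points dominate
        "pitch": 400.0 + 800.0 * proximity,         # nearer points sound brighter
        "pan": max(-1.0, min(1.0, x / 2.0)),        # crude left/right placement
    }
```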

Semantically driven AR sound design

I managed to combine the LiDAR scene information in realtime to parametrically drive the synthesis of a bouncing AR ball, utilising the semantic information in conjunction with the distance of real-world objects, and spatialised the AR ball after each bounce event using Google’s Resonance Audio plugin. This is a rudimentary prototype, however the pipeline is there!

Procedurally synthesised bouncing AR ball driven by scene semantics. The window changes the sound of the bounce
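The actual prototype runs in Unity with the Resonance Audio spatialiser, so the sketch below is only a language-neutral outline of the per-bounce logic: a semantic label under the contact point selects a timbre, and distance plus impact speed scale the gain. The labels and numbers are placeholders, not the ARKit classification API.

```python
# Sketch of the per-bounce logic; `surface_label` would come from the LiDAR
# mesh classification under the contact point (labels here are illustrative).
BOUNCE_TIMBRE = {
    "window": {"freq": 660.0, "decay": 0.10},   # glassy: higher and shorter
    "wall":   {"freq": 220.0, "decay": 0.30},
    "floor":  {"freq": 110.0, "decay": 0.45},
}

def on_bounce(impact_speed: float, surface_label: str, distance_m: float) -> dict:
    """Return synthesis parameters for one bounce event."""
    timbre = BOUNCE_TIMBRE.get(surface_label, {"freq": 440.0, "decay": 0.20})
    proximity = max(0.0, 1.0 - distance_m / 5.0)              # quieter further away
    gain = min(1.0, (impact_speed / 3.0) * (0.3 + 0.7 * proximity))
    return {"freq": timbre["freq"], "decay": timbre["decay"], "gain": gain}

# e.g. a bounce off a window 2 m away at 1.5 m/s:
# on_bounce(impact_speed=1.5, surface_label="window", distance_m=2.0)
```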

Make it rain!

Now that the pipeline was proven, I turned to the sonification of the point cloud data. After much academic research there appear to be two camps of thought. Put simply: either sonify everything and let the brain figure out what is useful, or be reductive in the design, sonify selected elements, and hope to find a happy medium for the user. I am of the opinion that there is no ‘one size fits all’; I chose to sonify everything and push the limits of current spatial audio constraints in AR. But first, how would you trigger the sound of the environment when you need to hear what is going on? Rain is constant, but would it be constantly needed? Research makes it evident that the blind and visually impaired communities already have excellent coping strategies, and I am conscious of not over-engineering. I began with a 3D point cloud cube, added a user control for focus to provide individual control, and introduced a wave that emits from the user like SONAR to trigger the audio emitters. The emitters represent point cloud positions from the realtime data and would trigger the raindrop synthesis in 3D space (a sketch of the trigger logic follows below).

Unity app triggering a 3D point cloud cube through a wave action across hundreds of 3D audio emitter positions in the selected view. No audio connected here yet
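A minimal sketch of the SONAR-style trigger logic, assuming emitter positions are simple (x, y, z) tuples in metres taken from the realtime point cloud: an expanding wavefront fires each emitter exactly once, when its distance from the user is reached. The sweep speed is an assumed value, not one from the app.

```python
# Sketch of the SONAR-style trigger: an expanding wavefront fires each emitter
# once, when its distance from the user is reached. Positions would come from
# the realtime point cloud; here they are just (x, y, z) tuples in metres.
import math

WAVE_SPEED = 2.0  # metres per second, an assumed comfortable sweep rate

def emitters_to_fire(points, user_pos, elapsed_s, already_fired):
    """Return indices of emitters the wavefront has just reached."""
    radius = WAVE_SPEED * elapsed_s
    fired = []
    for i, p in enumerate(points):
        if i in already_fired:
            continue
        if math.dist(p, user_pos) <= radius:
            fired.append(i)          # trigger raindrop synthesis at this 3D position
            already_fired.add(i)
    return fired
```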

The real audio challenges begin…