Notes on Blindness
With visual impairment affecting my own family, watching this touching scene from the documentary 'Notes on Blindness' triggered a desire to use research and development in immersive technologies to realise John Hull's idea of rain falling indoors so that space could be communicated. I researched the 'Notes on Blindness' VR experience of the garden scene to further understand what could possibly be achieved outside of VR.
Procedural Rain?
After contemplating John's wish, I started prototyping a procedural audio rain patch that could be parametrically driven by realtime data input. This involved studying the physics of rain and the synthesis techniques of Andy Farnell to create randomised raindrops with a Gaussian distribution, imitating the different characteristics of drops, drips and the distance of rainfall in a single, computationally light patch. At first I considered using brain-computer interfaces (BCIs) to drive the rain synthesis, as a starting point for the human-wearable aspect of where the project was heading.
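The patch itself was built with Farnell-style dataflow synthesis, but since the project later lands in Unity, here is a minimal C# sketch of the same idea: raindrop events triggered at random, with a Gaussian-distributed size shaping the loudness, decay and brightness of each drop. Every parameter name and value is an illustrative assumption rather than the actual patch.

```csharp
using UnityEngine;

// Minimal sketch (not the original patch): Gaussian-randomised raindrops rendered
// as short, decaying filtered-noise bursts. Attach to a GameObject with an
// AudioSource (leave it playing) so OnAudioFilterRead is called.
[RequireComponent(typeof(AudioSource))]
public class ProceduralRain : MonoBehaviour
{
    public float dropsPerSecond = 40f;   // rain density; could be driven by realtime data
    public float meanDropSize = 0.5f;    // proxy for drop size / closeness
    public float sizeSpread = 0.2f;      // standard deviation of the Gaussian variation

    const int MaxDrops = 64;             // small pool of simultaneously decaying drops
    readonly float[] env = new float[MaxDrops];
    readonly float[] decay = new float[MaxDrops];
    readonly float[] lowpass = new float[MaxDrops];
    readonly float[] cutoff = new float[MaxDrops];
    readonly System.Random rng = new System.Random();
    float sampleRate;
    int next;

    void Awake() { sampleRate = AudioSettings.outputSampleRate; }

    // Box-Muller transform: one sample from N(mean, sd).
    float Gaussian(float mean, float sd)
    {
        double u1 = 1.0 - rng.NextDouble();
        double u2 = rng.NextDouble();
        double n = System.Math.Sqrt(-2.0 * System.Math.Log(u1)) *
                   System.Math.Cos(2.0 * System.Math.PI * u2);
        return mean + sd * (float)n;
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        float dropChance = dropsPerSecond / sampleRate; // probability of a new drop per sample
        for (int i = 0; i < data.Length; i += channels)
        {
            if (rng.NextDouble() < dropChance)
            {
                int d = next++ % MaxDrops;
                float size = Mathf.Clamp01(Gaussian(meanDropSize, sizeSpread));
                env[d] = 0.2f + 0.5f * size;                             // bigger/closer = louder
                decay[d] = Mathf.Exp(-1f / (0.002f * sampleRate * (0.5f + size)));
                cutoff[d] = 0.1f + 0.8f * size;                          // smaller/further = duller
            }

            float s = 0f;
            for (int d = 0; d < MaxDrops; d++)
            {
                if (env[d] < 0.0001f) continue;
                float noise = (float)(rng.NextDouble() * 2.0 - 1.0);
                lowpass[d] += cutoff[d] * (noise - lowpass[d]);          // one-pole low-pass
                s += lowpass[d] * env[d];
                env[d] *= decay[d];
            }

            float mixed = Mathf.Clamp(s, -1f, 1f) * 0.5f;                // simple master gain
            for (int c = 0; c < channels; c++) data[i + c] += mixed;
        }
    }
}
```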
Sensory Substitution Device
Being in the electronic engineering department, with close links to the psychology department, my research also led me to existing sensory substitution devices. Sensory substitution is a non-invasive technique for circumventing the loss of one sense by feeding its information through another channel. Research showed evidence of such devices dating back to the 1970s, and I began trying to understand how my initial findings could push the boundaries of current technology in the form of a sensory substitution device (SSD). The 'white cane' is considered one of the most original forms of sensory substitution: a blind or visually impaired user sweeps the cane to understand the environment and the feel of surrounding objects. The latest immersive technology could enhance how this world information is delivered.
LiDAR, ARKit & iPhone
Whilst some Magic Leap prototypes showed promise, they were not utilising realtime interactive and procedural audio, or spatialisation driven by the world scene. I began digging into ARKit on the iPhone 12 Pro with LiDAR, alongside the Unity game engine. The goal was ultimately to use real-world scene data in realtime to parametrically drive sound design. This could be applicable to storytelling but also to the SSD. The visual-inertial odometry and the semantic scene information given by the LiDAR were of real interest. Whilst delving into this side I was still constantly thinking about the sonification of this data. A point in the point cloud is akin to a raindrop. What is useful information? What is not? What should it sound like? What would a 3D cube sound like?
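As one way of making 'what should a point sound like?' concrete, the sketch below maps a single world-space point to raindrop parameters from the listener's point of view. The mapping choices (proximity to loudness and brightness, relative height to pitch) are assumptions for illustration, not a settled design.

```csharp
using UnityEngine;

// Hypothetical parameter bundle for one raindrop "voice".
public struct DropParams
{
    public float gain;        // louder when the point is closer
    public float brightness;  // duller with distance, like real rain
    public float pitch;       // higher for points above the listener
    public Vector3 position;  // where the spatialiser should place the drop
}

public static class PointSonifier
{
    // Map a single point-cloud position to synthesis parameters.
    // maxRange: beyond this distance the point is treated as silent.
    public static DropParams Map(Vector3 point, Transform listener, float maxRange = 5f)
    {
        float distance = Vector3.Distance(point, listener.position);
        float proximity = Mathf.Clamp01(1f - distance / maxRange);      // 1 = at the listener

        // Height relative to the listener, normalised to roughly -1..1.
        float height = Mathf.Clamp((point.y - listener.position.y) / 2f, -1f, 1f);

        return new DropParams
        {
            gain = proximity * proximity,          // fall off smoothly with distance
            brightness = 0.2f + 0.8f * proximity,  // nearer points sound crisper
            pitch = 1f + 0.5f * height,            // floor vs. ceiling cue
            position = point                       // spatialised at the real-world point
        };
    }
}
```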
Semantically driven AR sound design
I managed to combine the LiDAR scene information in realtime to parametrically drive the synthesis of a bouncing AR ball, utilising the semantic information in conjunction with the distance of real-world objects, and spatialised the AR ball post-event using the Google Resonance plugin. This is a rudimentary prototype, however the pipeline is there!
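The prototype itself reads the semantics from the LiDAR mesh and routes audio through the Google Resonance plugin; as a stand-in, the sketch below uses a hypothetical SurfaceLabel enum and Unity's built-in 3D audio hooks to show the shape of the mapping, so the class and parameter names here are assumptions rather than the project code.

```csharp
using UnityEngine;

// Hypothetical stand-in for the semantic labels the LiDAR meshing provides.
public enum SurfaceLabel { Floor, Wall, Table, Door, Unknown }

// Assumed setup: each reconstructed surface collider carries its label.
public class LabeledSurface : MonoBehaviour
{
    public SurfaceLabel label = SurfaceLabel.Unknown;
}

[RequireComponent(typeof(AudioSource))]
public class SemanticBounceSound : MonoBehaviour
{
    public Transform listener;      // usually the AR camera; assign in the Inspector
    public AudioClip impactClip;    // short impact sound, or swap for a procedural burst
    AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
        source.spatialBlend = 1f;   // fully 3D; the project routes this through Resonance Audio
    }

    void OnCollisionEnter(Collision collision)
    {
        var surface = collision.collider.GetComponent<LabeledSurface>();
        var label = surface ? surface.label : SurfaceLabel.Unknown;

        // Distance to the listener shapes loudness on top of the spatialiser's own roll-off.
        float distance = Vector3.Distance(transform.position, listener.position);
        float gain = Mathf.Clamp01(1f - distance / 8f);

        // Semantic label shapes timbre: illustrative pitch offsets only.
        float pitch = 1f;
        if (label == SurfaceLabel.Floor) pitch = 0.8f;
        else if (label == SurfaceLabel.Table) pitch = 1.3f;
        else if (label == SurfaceLabel.Door) pitch = 1.1f;

        source.pitch = pitch;
        source.PlayOneShot(impactClip, gain);
    }
}
```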
Make it rain!
Now that a proven pipeline existed, I turned to sonification of the point cloud data. After much academic research, there are two camps of thought here. Put simply: either sonify everything and let the brain figure out what is useful, or be reductive in design, sonify selected elements and hope to find a happy medium for the user. I am of the opinion that there is no 'one size fits all'; I chose to sonify everything and push the limits of current spatial audio constraints in AR. But first, how would you trigger the sound of the environment when you need to hear what is going on? Rain is constant, but would it be constantly needed? Research makes it evident that the blind and visually impaired communities already have excellent coping strategies, and I am conscious of this in terms of over-engineering. I began with a point-cloud 3D cube, gave the user a control for focus to provide individual control, and introduced a wave that emits from the user like SONAR to trigger the audio emitters. The emitters represent point-cloud positions from the realtime data and trigger the raindrop synthesis in 3D space.
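A rough sketch of that trigger logic, assuming a hypothetical RaindropVoice hook into the rain synthesis and emitters registered from the realtime point cloud:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal stand-in for the raindrop synth: one drop spatialised at the given point.
public static class RaindropVoice
{
    public static void Play(Vector3 position)
    {
        // In the prototype this would trigger the procedural raindrop at 'position';
        // here we only log it so the sketch stays self-contained.
        Debug.Log($"Raindrop triggered at {position}");
    }
}

// SONAR-style trigger: a spherical wavefront expands from the user, and each
// point-cloud emitter fires its raindrop once per pulse as the front passes it.
public class SonarPulse : MonoBehaviour
{
    public float waveSpeed = 3f;      // metres per second
    public float maxRadius = 6f;      // beyond this the pulse dies out

    readonly List<Vector3> emitters = new List<Vector3>();    // point-cloud positions
    readonly HashSet<int> firedThisPulse = new HashSet<int>();
    float pulseStartTime = -1f;

    // Refresh emitter positions from the realtime point cloud.
    public void SetEmitters(IEnumerable<Vector3> points)
    {
        emitters.Clear();
        emitters.AddRange(points);
    }

    // Called when the user asks to "hear" the space.
    public void EmitPulse()
    {
        pulseStartTime = Time.time;
        firedThisPulse.Clear();
    }

    void Update()
    {
        if (pulseStartTime < 0f) return;

        float radius = (Time.time - pulseStartTime) * waveSpeed;
        if (radius > maxRadius) { pulseStartTime = -1f; return; }

        for (int i = 0; i < emitters.Count; i++)
        {
            if (firedThisPulse.Contains(i)) continue;
            float distance = Vector3.Distance(emitters[i], transform.position);

            // Trigger each emitter the first frame the wavefront reaches it.
            if (distance <= radius)
            {
                firedThisPulse.Add(i);
                RaindropVoice.Play(emitters[i]);
            }
        }
    }
}
```

EmitPulse() would be wired to the user's focus control described above, so the space only "rains" when the user asks for it.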