“Your eyes can deceive you, don’t trust them.” — Obi-Wan Kenobi in Star Wars: A New Hope (aka “Star Wars”)
And so here we are!
Digging into this Sony product a bit (the SRS-NS7), I found a preceding wearable audio unit released by Sony in July, the SRS-NB10. This product was positioned as “the Ultimate Work-From-Home Companion” in its press release. It’s a true wearable device, looking like a kind of weighted scarf, that allows the user to listen to audio (music, podcasts, etc.) and participate in calls, all while sauntering around freely. The difference from wireless headphones is that the user apparently won’t be blocking out the outside world: these don’t cover the ears, they rest on the shoulders.
Now hold onto your 21st century hats, kids, because I’m literally going to draw a thread back some 40 years to a product I remember being marketed when I was a little kid: the Bone Fone (a name that’s extremely unfortunate, if not somewhat hilarious).
The Bone Fone was a device that looked like a weighted scarf and was meant to let the user listen to music without disturbing those around them, through some sort of bone-vibrating induction technology. Sound familiar?! It was first marketed back in 1979… 1979!!!
And this archival news clip actually seems to suggest that it worked.
I saw someone selling one of these at a Jamaica Plain garage sale 20 years ago when I was in grad school (my first grad school!) and I still kick myself for not shelling out the $5 to buy it, if only as a curio.
Going back to my original tweet, Sony’s latest wearable device is meant to provide a spatial audio experience with support for Dolby Atmos, which is basically the latest surround sound enhancement: it adds sounds from above, instead of just around and behind. My only experience with Dolby Atmos is with Amazon’s Echo Studio home speaker, which has some drivers that fire sound up at the ceiling so it bounces back down. I tried out some of the Dolby Atmos-enabled programming on Amazon and it’s actually pretty nifty, so I don’t think it’s necessarily all smoke and mirrors, but I do wonder about the feasibility of wearing a heavy digital scarf around the house.
However, wearing small earbuds is a completely different thing, and I can attest from observing my 13-year-old that many people wear AirPods all the time. I purposely didn’t say many “young” people, but I will admit there does seem to be a generational gap in that approach for someone like me. I’m used to actively putting on headphones to listen to something (music, a video call, a podcast, etc.) and then removing them when I’m done, but I definitely know that many people wear devices like AirPods all the time.
And in fact, a few years back when AirPods Pro emerged, some started noting that they offered insights into Apple’s strategy with regard to wearables and augmented reality – a concept driven home by the portmanteau “hearables” (which I first noted in this 2017 piece by Andrew Murphy titled “AirPods: The First Mass Market Hearable”).
In a follow-up post two years later, Murphy outlines some real out-of-the-box ideas of what Audio Augmented Reality could entail, musing about shared location-based stories and audio experiences, among other ideas. One key aspect of shifting between this audio-augmented layer and the real world is the Transparency Mode of AirPods, which essentially drops the noise cancelling to allow sounds from the real world to be heard. It’s akin to dropping the graphic overlay from a set of smart glasses.
Since the adoption rate of ear-based wearables like AirPods is way ahead of the curve for visual wearables like the Ray-Ban Facebook glasses that I posted about a few weeks ago, I actually think audio AR could be the entry point for mainstream adoption of AR. There’s so much to consider with regard to the user experience of an audio-driven augmented reality space that I will continue that in my next post, in which I will wrap up my trilogy of wearable blog posts in thrilling fashion!