How hearables will drive the attention economy
- 13 July, 2019 20:00
In an age of mass production, the global internet and digital everything, the most valuable commodities have become time and attention. And everybody wants yours.
Your work demands them. Advertisers need them. Broadcast media crave them. Social networks want them. Marketers, charity panhandlers, evangelists, celebrities, smartphone apps, political campaigns and others are all clamoring for you to focus on them and ignore the rest.
Finally there’s help, in the form of an emerging revolution in audio. Low-power components, better batteries, smaller AI processors, rapidly advancing digital signal processors (DSPs), advanced microphone arrays, improving Bluetooth specs and other technologies are converging to enable a new world of smart earbuds, headphones and hearing aids.
What’s more, with better touch and gesture controls and on-board accelerometers, very small hearable devices can be controlled without a smartphone or app.
Controlling the noise
Several new products are coming out this year that promise to give you control over what you hear and, therefore, what you pay attention to.
Every day for the past week, I’ve been wearing Bose’s new wireless Noise Cancelling Headphones 700, which first became available last month. The headphones have eight microphones, most of which are used for what is probably the best noise cancellation in the business. (Noise cancellation works by detecting ambient noise, then playing opposing soundwaves to cancel out the noise.) Some of the microphones are used for isolating your voice from the ambient noise while talking to someone in person or on the phone.
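The phase-inversion principle in that parenthetical can be sketched in a few lines of Python. This is an idealized toy, not a real ANC pipeline; actual systems run adaptive filters on dedicated DSP hardware at sub-millisecond latencies:

```python
# Toy illustration of active noise cancellation: play the opposing
# (phase-inverted) waveform so it sums with the ambient noise to silence.

def anti_noise(ambient_samples):
    """Generate the opposing waveform: each sample inverted in phase."""
    return [-s for s in ambient_samples]

def at_the_ear(ambient_samples, cancellation_samples):
    """What the wearer hears: ambient noise plus the anti-noise signal."""
    return [a + c for a, c in zip(ambient_samples, cancellation_samples)]

noise = [0.8, -0.3, 0.5, -0.9]            # hypothetical ambient samples
quiet = at_the_ear(noise, anti_noise(noise))
print(quiet)  # [0.0, 0.0, 0.0, 0.0] -- perfect cancellation in this ideal case
```

In the real world cancellation is never perfect, which is why Bose exposes a 0-to-10 dial rather than a simple on/off switch.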
You can dial the noise cancellation from 0 (called “Transparency Mode,” which lets you hear all the surrounding sound) to 10, the maximum, which is very quiet and free of the hiss common in most noise-cancelling headphones.
Buttons, voice commands and built-in touchpad functionality on the outside of the right ear cup give you control from the headphones themselves; no need to use the app or your phone to control audio. One of those buttons will trigger Google Assistant or Amazon’s Alexa.
The headphones can pair with two devices at the same time, giving you audio and control from either or both.
But what’s impressive isn’t the noise cancellation itself; it’s that the cancellation is applied in a contextually aware way. For example, say you’re listening to music on the Bose 700 headphones from your laptop with noise cancellation set to 10 (maximum) and get a call. Simply tap the side of the headphones to accept it. The headphones pause the music on your laptop and answer the call on your phone, while simultaneously turning down noise cancellation so you can hear yourself talk. Meanwhile, voice isolation is turned all the way up for the caller. In other words, your noise cancellation drops to about half strength, while the caller is shielded from the noisy room you’re in. When the call ends, the headphones switch back to the laptop, noise cancellation returns to its previous level and the music resumes.
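That sequence amounts to a small state machine. Here is a hypothetical Python sketch of the behavior described above; the class, the method names and the halving logic are my illustration, not Bose’s actual firmware:

```python
# Toy state machine for the call-handling behavior described in the text.
# The 0-10 ANC scale comes from the article; everything else is invented.

class Headphones:
    def __init__(self):
        self.anc_level = 10              # user-set noise cancellation (0-10)
        self.music_playing = True
        self.source = "laptop"
        self.caller_voice_isolation = False
        self._saved_anc = self.anc_level

    def on_incoming_call_accepted(self):
        self._saved_anc = self.anc_level
        self.music_playing = False              # pause music on the laptop
        self.source = "phone"                   # route audio to the call
        self.anc_level = self._saved_anc // 2   # let the wearer hear themselves
        self.caller_voice_isolation = True      # shield the caller from room noise

    def on_call_ended(self):
        self.anc_level = self._saved_anc        # restore full noise cancellation
        self.source = "laptop"
        self.music_playing = True               # resume playback
        self.caller_voice_isolation = False
```

A change in context triggers coordinated changes across the whole audio setup, which is exactly the pattern the article calls the future of hearables.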
In other words, a change in context (you go from listening to music to answering a call) leads to multiple changes in the entire audio setup to accommodate the new context. That’s the future of hearables.
One promising direction comes in the form of the Jabra Elite 85h product, which offers a mode called “SmartSound.” The mode analyzes ambient sounds and mutes them based on their specific signatures, which means that the type of noise cancellation changes as you move from one environment to the next. For example, for safety reasons it automatically mutes sound less when you’re near traffic and more when you’re in a noisy coffee shop.
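Signature-based cancellation of this kind boils down to classifying the environment, then choosing how aggressively to mute it. A hypothetical sketch (the categories, features and attenuation values are invented for illustration, not Jabra’s actual algorithm):

```python
# Toy version of environment-aware noise muting in the spirit of SmartSound:
# classify the ambient sound signature, then pick an attenuation level.

def classify_environment(features):
    """features: hypothetical (loudness_db, traffic_score) derived from mics."""
    loudness_db, traffic_score = features
    if traffic_score > 0.7:
        return "traffic"
    if loudness_db > 70:
        return "noisy_cafe"
    return "quiet"

ATTENUATION = {          # fraction of ambient sound to mute per environment
    "traffic": 0.2,      # mute less near traffic, for safety
    "noisy_cafe": 0.9,   # mute aggressively in a loud coffee shop
    "quiet": 0.5,
}

def cancellation_for(features):
    return ATTENUATION[classify_environment(features)]
```

The point is that the muting policy follows the environment automatically, with no need for the wearer to touch a dial as they walk from the street into the cafe.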
Another huge leap forward recently came from Sony. Its new WF-1000XM3 wireless earbuds, which can be pre-ordered now but won’t ship until August, have active noise cancellation (a rare feature for wireless earbuds).
And yet another emerging category is smart glasses that are optimized for audio. Bose Frames, for example, are sunglasses that have sensors and speakers built-in. Unlike other smart glasses, they don’t use bone conduction. They direct actual sound into your ears without covering them. Their purpose is to convey sound from your smartphone and enable phone calls. Reviewers say they’re surprised by the quality of both sound and microphone. Bose Frames hint at a future of augmented audio attention management without earbuds or headphones.
These Bose, Jabra and Sony products offer a small sample of the future of AI-enhanced attention management, where the sound you hear intelligently changes based on the needs of your context at any given moment, and where the features of today’s headphones become available in tomorrow’s wireless earbuds.
For a glimpse of the future, look to the past
An early and ill-fated project, Here One earbuds from a company called Doppler, sought to kick-start the hearables revolution. The earbuds, which went on sale in 2017, promised to enable app control over sounds in the environment. For example, at a nightclub, you could block conversation and listen only to music, or vice versa. You could block the sound of a baby crying.
These earbuds offered advanced features, but their price, battery life and size all said, “Not quite ready for prime time.” The company’s intellectual property was sold to Dolby.
But it’s clear that Doppler was just early to market — the technology wasn’t ready. Amazon, Apple, Google, Microsoft and other companies are now working on bringing similar technology to the market.
The future of earbuds is hearables that are Doppler-like but smaller, lighter, more intelligent and powered by batteries that last all day.
It’s also easy to predict the merger of smart earbuds and hearing aids, with hearables able to customize and optimize sound for everyone, including the hearing-impaired. When AI-based hearables go mainstream, hearing aids will become obsolete.
A company called ReSound launched a product this year called LiNX Quattro, a hearing aid that uses AI. Over time, it learns to adjust the audio settings according to the wearer’s preferences. It also supports Apple’s Siri virtual assistant and pairs with smartphones for additional app-based control, including voice control. This all sounds great, but the problem is that the product costs thousands of dollars. Hearables and hearing aids will merge when the price of this technology comes way down — which, of course, it will.
The leading hearables today, of course, are Apple’s AirPods, the current version of which offers always-listening Siri support and voice commands, as well as noise-filtering and the ability to pick your voice out of a cacophony of environmental noises. Apple is reportedly hard at work adding intelligence and sophisticated features into future AirPods.
Augmented reality goes audio
We tend to think of augmented reality (AR) as a primarily visual technology. But before we’re all walking around wearing visual AR glasses, we’ll get audio AR.
Some museums and sporting events (professional tennis matches, for example) already use location to offer an audio stream of information or commentary only to those present. Over time, we’ll get all kinds of audio-based augmented reality applications, most likely as context-aware features in virtual assistants.
Bose, for example, is developing something called Bose AR, an audio augmented reality platform. The company is trying to initiate an ecosystem of apps that provide contextual information and notifications. The idea is that contextual information will be whispered into your ears rather than shown on a tiny screen.
Glasses with eye tracking combined with a virtual assistant will allow us to look at a business and ask, “When does it open?” or at a machine and ask, “How does this work?” and get the answer spoken to us.
Why hearables will dominate wearables
Today, smartwatches dominate the wearables market. One of the big drivers in the smartwatch and smartband categories is the quantified self — mostly fitness tracking. Watches can monitor activity, movement, heartbeat and other biometrics.
But guess what. So can hearables. They can even track our mental state.
Poppy Crum, a Stanford University neuroscientist and chief scientist at Dolby Laboratories, believes that in-ear hearable computing devices that deliver audio will also monitor our mood or emotional state, and use that information to change how virtual assistants interact with us. Hilariously, she says that the human ear is like a USB port to the brain, the ideal location for both “reading from” and “writing to” the brain. Both pulse and the brain’s electrical signals can be monitored there, and she says that stress can be both detected and reduced (by muting the sounds that cause it, such as police sirens or crying babies).
So like watches, hearables will be able to monitor motion and heart rate. But unlike watches, they’ll be able to act on that detection by changing what we’re paying attention to.
In short, hearables will be less intrusive and more satisfying to use than smartwatches, and will likely overtake smartwatches in the market within 10 years.
Why people want to control attention
A recent survey by Bitkom Research in Germany found that nearly half of all people who wear headphones do so to mute their surroundings. Another 42% use them as a signal to others not to disturb them. Around 20% use headphones to focus on work.
Even without AI-augmented sound management and audio AR, people are already instinctively using even primitive headphones to seize control of their own attention.
This is the year when the first crop of capable products emerges that use AI and other advanced technologies to automatically control what you hear in order to help you control your attention.
So pay attention. The revolution in attention management audio is just beginning.