Apple’s annual Worldwide Developers Conference opened with a series of eye-catching announcements, with updates across all of its operating systems (iOS 14, iPadOS 14, watchOS 7, and tvOS 14) as well as new processor architectures. For the audio industry, the event was full of exciting news on every platform: the conference brings not only AirPods firmware updates, but also spatial audio support for AirPods Pro.
Tim Cook opened WWDC20, after which company executives introduced iOS 14, iPadOS 14 (including handwriting enhancements with Apple Pencil), watchOS 7, and macOS Big Sur (macOS 11, due later in 2020), all offering breakthrough features supported by easy-to-use, powerful development tools.
On the audio side, Apple confirmed that AirPods will gain the ability to switch seamlessly between Apple devices through automatic device switching. A user can start listening to music on an iPhone, put it down and pick up another device, and the AirPods will automatically route audio to the device currently in use. While watching a movie on Apple TV, the user can take a call on the iPhone, and the earbuds will pause the movie and switch to the smartphone.
After the software upgrade, this applies not only to AirPods Pro and AirPods (second generation), but also to the Powerbeats and Powerbeats Pro true wireless earbuds and the Beats Solo Pro wireless headphones. What all these products have in common is Apple’s own H1 system-in-package (SiP), which allows software updates to introduce valuable new features. That is a strong signal in a market where products can become obsolete within six months to a year.
The most exciting news in this regard, however, is that AirPods Pro will gain spatial audio with dynamic head tracking, providing an immersive experience when watching movies and TV content with 5.1, 7.1, or Dolby Atmos soundtracks. As is well known, convincing binaural rendering of multi-channel audio sources is no easy task; until now it has required external hardware (usually a dongle), or a dedicated full-featured DSP front end with a headphone amplifier and head-tracking sensors, plus a powerful computer to handle all the complex calculations involved.
Until now, this kind of functionality has typically been implemented with large over-ear headphones rather than true wireless earbuds. Apple is able to bring spatial audio to AirPods Pro because it has equipped these in-ear devices with the multi-core H1 chip and a complete sensor array powered by the same SiP, which already handles complex tasks such as active noise cancellation and Siri voice recognition.
In fact, the low-power SiP even integrates its own digital amplifier platform, making it well suited to real-time audio signal processing. With this powerful and scalable platform, Apple was able to develop a solution based on directional audio filters and fine frequency adjustments for each ear, virtually placing sound in space using the same auditory cues found in multi-channel surround formats. The advanced spatial audio algorithms running on the H1 chip are what make this immersive listening experience possible.
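The per-ear cues mentioned above can be illustrated with a minimal sketch. This is not Apple’s actual algorithm; it uses two textbook approximations (the Woodworth model for interaural time difference and constant-power panning for interaural level difference, with an assumed average head radius) to show how an azimuth angle maps to the left/right timing and level cues a binaural renderer would apply:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at room temperature
HEAD_RADIUS = 0.0875     # m, assumed average head radius (Woodworth model)

def itd_seconds(azimuth_deg):
    """Interaural time difference via the Woodworth approximation:
    ITD = r/c * (sin(theta) + theta), azimuth theta in radians,
    0 deg = straight ahead, positive = to the listener's right."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(theta) + theta)

def ild_gains(azimuth_deg):
    """Simple interaural level difference via constant-power panning.
    Returns (left_gain, right_gain) with left^2 + right^2 == 1."""
    theta = math.radians(azimuth_deg)
    pan = max(-1.0, min(1.0, theta / (math.pi / 2)))  # map -90..+90 deg to -1..+1
    angle = (pan + 1.0) * math.pi / 4.0               # 0..pi/2
    return math.cos(angle), math.sin(angle)
```

For a source straight ahead, the ITD is zero and both ears receive equal gain; for a source 90 degrees to the right, the model yields roughly 0.65 ms of interaural delay, on the order of measured human values. A real renderer would use full HRTF filters rather than these closed-form cues.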
The Apple H1 chip is designed specifically for headphones and was first used in the 2019 AirPods. It supports Bluetooth 5.0 and hands-free “Hey Siri” commands, and offers 30% lower latency than the W1 chip in earlier AirPods. To coordinate the binaural rendering of these complex signals with the user’s position, Apple uses the accelerometer and gyroscope in AirPods Pro to track the movement of the user’s head. Even if the user turns their head, each audio cue remains anchored to the device, with the center channel staying in front. And if the user moves the device, such as an iPad, while watching a movie, the system tracks the position of the user’s head relative to the screen, understands how the two move relative to each other, and keeps the auditory cues fixed in place.
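The head-tracking compensation described above comes down to one idea: the azimuth fed to the binaural renderer is the source angle in the device’s frame minus the head’s rotation, so the rendered sound stays locked to the screen. A minimal sketch (illustrative only, reduced to yaw; the function name and degree convention are assumptions, and a real system would use full 3-D orientation quaternions):

```python
def world_locked_azimuth(source_az_deg, head_yaw_deg, device_yaw_deg=0.0):
    """Azimuth of a virtual source relative to the listener's head,
    keeping the source anchored to the device/screen frame.
    Angles in degrees; positive = to the listener's right."""
    rel = source_az_deg + device_yaw_deg - head_yaw_deg
    return (rel + 180.0) % 360.0 - 180.0  # wrap into [-180, 180)
```

For example, if the listener turns their head 30 degrees to the right, the center channel (source at 0 degrees) is rendered 30 degrees to the left, so it still appears to come from the screen; if instead the device is moved 20 degrees to the right, the sound follows it.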
Finally, we hope that Apple will also confirm support for MPEG-H Audio, and eventually allow loading of HRTF profiles for complete personalization.