On October 12, Beijing time, the annual Meta Connect conference took place. The conference showcased a variety of accessories, apps, and services, as well as Meta’s exploration of the metaverse.

This article provides a frame-by-frame look at the cutting-edge virtual reality technologies from Reality Labs showcased at the conference. Check it out!

At last year’s Connect 2021 conference, Meta demonstrated a wearable device for one-handed text input in virtual reality: the device captures data from wrist movements to enable typing.

At the time, Meta revealed that the next step was to support more complex actions and to develop a corresponding electromyography (EMG) wristband, with EMG-based wearable devices potentially launching in the future.

At this year’s Connect conference, Meta demonstrated its latest results from combining artificial intelligence with EMG.

In the first video demonstration, two users played a parkour game using wrist-based EMG devices.

Although the two use the same gestures, each performs them slightly differently because of individual variation.

Whenever either of them makes a gesture, the algorithm adapts to and interprets the signal, so that each person’s natural gestures are recognized quickly and with high reliability.

And over time, machine learning helps the system understand each user better and better.

Meta believes that an approach that uses neuromuscular signals from the wrist as input is truly human-centric.

The user does not need to learn a control scheme with its associated learning cost; the device learns and adapts to the user on its own as he or she uses it.

By combining machine learning and neuroscience, Meta has even developed “co-adaptive learning”, an algorithm that treats individual differences as a factor driving adaptation.

Moreover, the potential of co-adaptive learning is not limited to complete gestures; it extends to subtle differences between them. The system can learn in real time how to respond to the electromyographic signals the body is activating, allowing input to be transmitted through only the slightest hand movements.
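Meta has not published how co-adaptive learning is implemented, but the core idea, a gesture classifier that keeps taking small gradient steps on each user’s own EMG signals during normal use, can be sketched briefly. In the sketch below, the feature dimension, gesture set, and `confirmed_label` feedback signal are all illustrative assumptions, not Meta’s actual design:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical setup: each EMG "window" is a feature vector extracted from
# a short slice of wrist-sensor channels; labels are gesture IDs.
GESTURES = [0, 1, 2]          # e.g. pinch, swipe, fist (illustrative)
N_FEATURES = 16               # assumed per-window feature dimension

# A linear classifier trained with online gradient steps stands in for
# Meta's (unpublished) co-adaptive model.
clf = SGDClassifier(loss="log_loss", alpha=1e-4)

def calibrate(X_init, y_init):
    """Fit an initial, generic gesture model."""
    clf.partial_fit(X_init, y_init, classes=GESTURES)

def predict_and_adapt(x_window, confirmed_label=None):
    """Classify one EMG window; if the interaction confirms the intended
    gesture (e.g. the game accepted the action), take one more gradient
    step so the model drifts toward this user's idiosyncratic signals."""
    pred = clf.predict(x_window.reshape(1, -1))[0]
    if confirmed_label is not None:
        clf.partial_fit(x_window.reshape(1, -1), [confirmed_label])
    return pred

# Quick smoke test with random data standing in for real EMG features.
rng = np.random.default_rng(0)
calibrate(rng.normal(size=(30, N_FEATURES)), rng.integers(0, 3, size=30))
print(predict_and_adapt(rng.normal(size=N_FEATURES), confirmed_label=1))
```

The key design choice is that adaptation happens during ordinary use: every interaction the system gets right, or the user corrects, becomes one more training example for that specific wrist.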

In the future, EMG could revolutionize the way we interact with the digital world, allowing us to not only do more, but to do it the way we want to.

At the Connect conference, Zuckerberg showed how motor neuron signals can be used to control AR/VR devices.

With just a few tiny movements, you can perform tasks such as viewing messages and taking photos.

The ambitious Zuckerberg noted that this is just the beginning, and that true AR glasses and future interfaces will unlock many more useful interactions.

For example, “right-clicking” on a real object or location to see its details, controlling devices without leaving the virtual world, or getting help and support from a personalized AI digital assistant.

When these interactions are combined, Meta will provide a more natural and human-centered approach to computing, offering more possibilities for human-computer interaction.

3D mapping of indoor venues with AR glasses
Two years ago, Meta announced Project Aria at Connect, focusing on research for wearable AR devices.

In a pilot program with Carnegie Mellon University’s (CMU) Cognitive Assistance Lab, Meta is using AR glasses to create 3D maps of indoor venues, which GPS signals typically do not cover.

At this year’s Connect conference, Meta reported on the progress of this project.

Wearing Project Aria engineering prototype glasses, CMU researchers created a 3D map of Pittsburgh International Airport.

In the past, building indoor navigation required deploying enough iBeacons to obtain accurate location information.

Now, by using the 3D maps created by Project Aria to train AI models, it is possible to localize users accurately without relying heavily on Bluetooth beacons.
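Meta has not detailed the model it trains on these maps. A common minimal approach to the same problem, estimating a camera pose from a single image against a prebuilt 3D map, is feature matching followed by RANSAC PnP. The OpenCV sketch below assumes the map stores an ORB descriptor for each 3D point, which is an illustrative simplification:

```python
import cv2
import numpy as np

# Illustrative visual localization against a prebuilt 3D map, standing in
# for the (unpublished) model Meta trains on Project Aria's maps.
orb = cv2.ORB_create()

def localize(query_img, map_points_3d, map_descriptors, K):
    """Estimate the camera pose from one grayscale image.
    map_points_3d:   (N, 3) float32 array of mapped 3D points
    map_descriptors: (N, 32) uint8 ORB descriptors of those points
    K:               3x3 camera intrinsics matrix
    """
    kps, desc = orb.detectAndCompute(query_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc, map_descriptors)

    pts_2d = np.float32([kps[m.queryIdx].pt for m in matches])
    pts_3d = np.float32([map_points_3d[m.trainIdx] for m in matches])

    # RANSAC PnP: recover the camera's position and orientation indoors,
    # with no GPS and no Bluetooth beacons required.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts_3d, pts_2d, K, None)
    return (rvec, tvec) if ok else None
```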

In Meta’s demo video, people with visual impairments identify their location and navigate through the airport via the NavCog mobile app, which provides accessible interaction.

Copying real objects into the virtual world
Building and manipulating 3D objects in virtual worlds is essential to constructing the metaverse, but the process would be very slow if we relied entirely on manual modeling.

It would be faster and easier if we could take a real object as a template, ‘copy’ it into the virtual world, and build on top of that copy.

Meta is working on two different techniques to solve this challenge.

The first method takes multiple 2D images from different angles and reconstructs the appearance of a 3D object with the help of a machine-learning-based neural radiance field (NeRF).

Zuckerberg demonstrated the process of replicating a teddy bear into the virtual world. The method reproduces the object along with many of its fine details.
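Meta’s production pipeline is unpublished, but the underlying NeRF technique (Mildenhall et al., 2020) is well documented: a small MLP learns to map 3D positions to color and density, and pixel colors are produced by volume-rendering those predictions along camera rays. Below is a heavily simplified PyTorch sketch of that core loop, with all sizes chosen purely for illustration:

```python
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    """Map coordinates to sines/cosines so the MLP can represent fine detail."""
    feats = [x]
    for i in range(n_freqs):
        feats += [torch.sin(2.0 ** i * x), torch.cos(2.0 ** i * x)]
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    """Minimal NeRF: 3D position -> (RGB color, volume density)."""
    def __init__(self, n_freqs=6, hidden=128):
        super().__init__()
        in_dim = 3 * (1 + 2 * n_freqs)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),            # 3 color channels + 1 density
        )

    def forward(self, pts):                  # pts: (n_rays, n_samples, 3)
        out = self.mlp(positional_encoding(pts))
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3])
        return rgb, sigma

def render_rays(model, origins, dirs, near=2.0, far=6.0, n_samples=64):
    """Volume rendering: composite the MLP's predictions along each ray."""
    t = torch.linspace(near, far, n_samples)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]
    rgb, sigma = model(pts)
    alpha = 1.0 - torch.exp(-sigma * (far - near) / n_samples)
    # Transmittance: how much light survives to reach each sample.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=1)   # (n_rays, 3) pixel colors
```

Training then amounts to comparing these rendered pixel colors against the captured photos from each known camera angle and backpropagating the difference; after convergence, the MLP itself encodes the 3D object.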

The second method captures geometry and appearance directly: the object is scanned with the help of ‘inverse rendering’ technology, and its digital twin is placed into the augmented reality or virtual reality world.

In the live demonstration video, Zuckerberg replicated a digital twin object that can even produce dynamic light reflections in the virtual world.

It can even simulate the object landing and bouncing just as it would in the real world.
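The phrase ‘inverse rendering’ describes running a renderer backwards: instead of drawing an image from known materials, the system optimizes material (and, in full systems, geometry and lighting) parameters until the rendered images match photographs. The PyTorch toy below recovers per-point albedo under a fixed Lambertian shading model; it is a deliberate simplification to show the principle, not Meta’s method:

```python
import torch

def lambertian_render(albedo, normals, light_dir):
    """Forward model: intensity = albedo * max(0, n . l)."""
    shading = torch.clamp(normals @ light_dir, min=0.0)
    return albedo * shading[:, None]

# Synthetic "observations" generated from a ground-truth albedo; a real
# system would use photographs of the scanned object instead.
torch.manual_seed(0)
normals = torch.nn.functional.normalize(torch.randn(500, 3), dim=1)
light = torch.nn.functional.normalize(torch.tensor([0.3, 0.5, 1.0]), dim=0)
true_albedo = torch.rand(500, 3)
observed = lambertian_render(true_albedo, normals, light)

# Inverse pass: gradient-descend the albedo until renders match "photos".
albedo = torch.full((500, 3), 0.5, requires_grad=True)
opt = torch.optim.Adam([albedo], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = ((lambertian_render(albedo, normals, light) - observed) ** 2).mean()
    loss.backward()
    opt.step()
```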

Codec Avatar 2.0
In addition, Meta also showed some of the latest developments in Codec Avatar 2.0, including how to make the avatar’s facial expressions track the wearer’s real face more closely.

Judging from the actual demo, Codec Avatar 2.0 does not disappoint. Beyond performance optimizations, it also incorporates some of the facial movements that people rely on to communicate and to convey tone: raised eyebrows, squinted eyes, widened eyes, and wrinkled noses.

Meta also previewed Instant Codec Avatar, which generates an avatar from a quick scan made without professional capture equipment. In the future, Meta will further optimize generation efficiency to make the overall processing time even shorter.

It should be noted that Instant Codec Avatar falls short of Codec Avatar 2.0 in quality and realism.

Nevertheless, its performance in practice is respectable, and this kind of production, achievable without professional equipment, lowers the barrier to entry considerably.