
This was Mark Zuckerberg’s ultra-realistic interview in the metaverse

Lex Fridman does many things. He hosts a very popular podcast, one of the first to be dubbed with a synthetic voice by Spotify. He also practices martial arts, and in the middle of the year he was training with Elon Musk, back when a fight between Musk and Mark Zuckerberg still seemed a real possibility.


Now he has published an episode of his podcast in which he interviews Mark Zuckerberg in the context of Connect 2023, Meta’s developer event (where the company announced new virtual reality headsets, chatbots for WhatsApp and more). The conversation has a peculiarity: it was recorded on video, but conducted through virtual reality headsets. Each of them wears a Quest Pro; what each sees is a digital version of the other, rendered with a surprising level of realism: the system captures a great deal of detail in each person’s face, their facial micro-expressions, their movements. It is almost as if they were face to face, yet Fridman is in Texas and Zuckerberg in California; both sit in a kind of digital limbo where they can change the lighting, the environment and much more.

Zuckerberg had already shown a version of this hyper-realistic avatar in October of last year, at a moment when criticism of the metaverse’s future was mounting (a discussion that is far from settled). The version now seen on video is even more sophisticated, and is reminiscent of Google’s telepresence technology, Project Starline. Each approach has its trade-offs: Starline requires a screen, cameras and a dedicated room, but shows people in 3D, with their face, torso and hands digitized on the spot; the tool Zuckerberg and Fridman used requires nothing more (once the original scan has been made to generate the 3D model of the face) than a Quest Pro headset for each participant.

Along with scanning the face, Zuckerberg explains in his talk with Fridman, they also digitized some of his expressions, generating a digital model of those faces and their possible expressions (how the mouth moves, what happens to the forehead when the eyebrows are raised, wrinkles or spots on the skin, for example). The Quest Pro headset has internal cameras that track the user’s micro-expressions, and software transfers those movements, in real time, to the digital avatar, so that it moves as if a camera were pointed at the speaker’s face.

As Zuckerberg explained, the system transmits only the audio and the encoded expression changes; the avatar’s movements are generated on the destination device, which significantly reduces the bandwidth needed for a telepresence session of this kind.
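The bandwidth saving comes from sending a small vector of expression parameters per frame instead of pixels. The following is a minimal sketch of that idea; the coefficient count, byte-per-coefficient quantization and function names are illustrative assumptions, not Meta’s actual protocol.

```python
# Hypothetical sketch: each frame carries only facial-expression
# coefficients (e.g. blendshape weights in [0, 1]); the receiving
# headset uses them to animate the pre-built 3D avatar locally.

NUM_COEFFS = 52  # assumed number of expression coefficients per frame

def encode_expression(coeffs):
    """Quantize each coefficient in [0, 1] to one byte and pack the frame."""
    assert len(coeffs) == NUM_COEFFS
    return bytes(min(255, max(0, round(c * 255))) for c in coeffs)

def decode_expression(payload):
    """Recover approximate coefficients on the destination device."""
    return [b / 255 for b in payload]

# One "frame" of expression data: a neutral face with a slight smile.
frame = [0.0] * NUM_COEFFS
frame[10] = 0.6  # e.g. a mouth-smile coefficient (index is illustrative)

payload = encode_expression(frame)
restored = decode_expression(payload)

# An uncompressed 1080p RGB video frame, by contrast, is ~6 MB.
raw_video_frame_bytes = 1920 * 1080 * 3
print(len(payload), raw_video_frame_bytes)  # 52 vs 6220800
```

Even before any further compression, the per-frame payload is orders of magnitude smaller than video, which is why the heavy lifting (rendering the photorealistic face) can happen on each participant’s own headset.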

For now, Meta has not made any announcement about the availability of this tool for the general public, largely due to the prior process of digitizing the face, which requires special equipment.

La Nación (Argentina) / GDA

Source: El Comercio
