How does a virtual-character live broadcast like Jitterbug Racer's work, and how is the video broadcast and watched?

Virtual-character live broadcasting is built on virtual reality technology. The main components are as follows:

Virtual image design. Use 3D modeling tools such as Maya or 3ds Max to build the character's 3D model, including facial features, hair, clothing, and other details. Then use a rendering engine such as Unity or Unreal Engine for high-fidelity rendering, outputting 2D images and video from various angles.
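Character faces in such models are commonly authored as a base mesh plus blendshapes (morph targets) that the engine mixes at runtime. The sketch below shows the blending idea only; the data layout is illustrative, not any particular tool's format:

```python
def apply_blendshapes(base, targets, weights):
    """Blend morph-target offsets into the base mesh vertices.

    base:    list of (x, y, z) vertex positions
    targets: dict of blendshape name -> list of (dx, dy, dz) per-vertex offsets
    weights: dict of blendshape name -> blend weight in [0, 1]
    """
    result = []
    for i, (x, y, z) in enumerate(base):
        # Accumulate each active blendshape's weighted offset for this vertex.
        for name, offsets in targets.items():
            w = weights.get(name, 0.0)
            dx, dy, dz = offsets[i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        result.append((x, y, z))
    return result
```

Animating an expression then amounts to changing the weights over time, which is exactly what the expression-capture stage below produces.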

Expression capture. Using a face-tracking SDK such as Faceunity or FacePlusPlus, capture the anchor's facial expression in real time through an ordinary camera, map it onto the virtual character's 3D model, and drive the lips and facial muscles to change, achieving lip sync and expression changes.
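A rough sketch of how one tracked measurement becomes a blendshape weight: compare the lip gap to the face height and normalize. The landmark names and the `max_ratio` threshold are illustrative assumptions, not a real SDK's schema:

```python
import math

def mouth_open_weight(landmarks, max_ratio=0.08):
    """Estimate a 0..1 'jaw open' weight from 2D face landmarks.

    landmarks: dict mapping illustrative names ("upper_lip", "lower_lip",
    "forehead", "chin") to (x, y) pixel positions from a face tracker.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    lip_gap = dist(landmarks["upper_lip"], landmarks["lower_lip"])
    face_h = dist(landmarks["forehead"], landmarks["chin"])
    if face_h == 0:
        return 0.0
    # Normalize by face size so the weight is camera-distance invariant.
    ratio = lip_gap / face_h
    return max(0.0, min(1.0, ratio / max_ratio))
```

Real SDKs typically output dozens of such weights per frame, one per facial blendshape.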

Motion capture. Use equipment such as Perception Neuron to capture the anchor's hand and torso movements, then drive the avatar's skeletal structure and humanoid animation rig, mapping the motion onto the avatar's whole body.
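At the core of driving a skeleton from captured joint rotations is forward kinematics: walking down the bone chain and accumulating each joint's rotation. A minimal 2D sketch (real mocap uses full 3D rotations per joint, but the idea is the same):

```python
import math

def forward_kinematics(bone_lengths, joint_angles):
    """Compute 2D joint positions for a simple bone chain.

    bone_lengths: length of each bone, root outward
    joint_angles: each joint's rotation in radians, relative to its parent
    """
    x, y, angle = 0.0, 0.0, 0.0
    positions = [(x, y)]  # root joint at the origin
    for length, theta in zip(bone_lengths, joint_angles):
        angle += theta  # rotations accumulate along the chain
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        positions.append((x, y))
    return positions
```

A mocap suit streams the `joint_angles` per frame; the engine re-runs this pass to pose the avatar's skeleton.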

Virtual scene setting. Use 3D scene editors such as Unity or Unreal to build the virtual broadcast set, such as a room or office; place the virtual character in it, set up virtual lighting, and output a real-time 3D view of the scene.
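Engines organize such scenes as a scene graph: a tree of nodes, each with a transform and children. A toy version, with illustrative node names, might look like:

```python
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    """A minimal scene-graph node: name, local position, children."""
    name: str
    position: tuple = (0.0, 0.0, 0.0)
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

    def find(self, name):
        """Depth-first search of the subtree by node name."""
        if self.name == name:
            return self
        for c in self.children:
            hit = c.find(name)
            if hit:
                return hit
        return None

# Build a tiny broadcast set: a room containing a light and the avatar.
room = SceneNode("room")
room.add(SceneNode("key_light", position=(0.0, 3.0, 1.0)))
room.add(SceneNode("avatar", position=(0.0, 0.0, 2.0)))
```

Real engine nodes additionally carry rotation, scale, and attached components (meshes, lights, cameras), but the tree shape is the same.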

Real-time tracking and rendering. Use AR or virtual-camera technology to track the anchor's head movement in real time, move the virtual camera in sync, and render the 3D scene from that customized perspective in real time, giving viewers an immersive experience.
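Raw head-tracking data is noisy, so the virtual camera is usually driven through a smoothing filter rather than raw pose values. A minimal exponential smoother, shown here as one common choice rather than any specific product's method:

```python
class PoseSmoother:
    """Low-pass filter for a tracked pose, to avoid camera jitter.

    alpha in (0, 1]: higher follows the tracker faster, lower is smoother.
    """
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.state = None

    def update(self, pose):
        """Feed one tracked pose (any fixed-length numeric tuple),
        return the smoothed pose the virtual camera should use."""
        if self.state is None:
            self.state = list(pose)  # first sample: adopt it directly
        else:
            # Move each component a fraction of the way toward the target.
            self.state = [s + self.alpha * (p - s)
                          for s, p in zip(self.state, pose)]
        return tuple(self.state)
```

The same filter applies to position and to rotation components alike (with care around angle wrap-around for the latter).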

Interactive system development. Build the interactive interface on the web or app client. It receives audience input from the chat bar or interactive menus, converts that input into control commands, and drives the avatar's voice, movement, and expressions in real time, enabling interaction between the audience and the avatar.
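The chat-to-command routing can be sketched as a small dispatcher; the command names and responses below are illustrative:

```python
def handle_chat(message, handlers):
    """Route an audience chat message to an avatar action.

    Messages starting with '/' are treated as commands (e.g. '/wave');
    anything else falls through to the default 'say' handler.
    """
    if message.startswith("/"):
        parts = message[1:].split()
        cmd, args = parts[0], parts[1:]
        handler = handlers.get(cmd)
        if handler:
            return handler(*args)
        return f"unknown command: {cmd}"
    return handlers["say"](message)

# Illustrative handlers; a real system would trigger animations and TTS.
handlers = {
    "wave": lambda: "avatar plays 'wave' animation",
    "say": lambda text: f"avatar speaks: {text}",
}
```

In production the handlers would enqueue animation and text-to-speech events for the rendering loop instead of returning strings.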

Live integration. Integrate the components above into an RTMP live-streaming service so the anchor can run an interactive broadcast through the avatar. Alternatively, publish short videos built around the avatar, which also offer a high degree of novelty and interaction.
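One common way to feed rendered frames into an RTMP service is to pipe raw video into ffmpeg, which encodes and pushes it to the ingest URL. The sketch below only builds the command line; the URL and frame geometry are placeholders:

```python
def rtmp_push_command(width, height, fps, rtmp_url):
    """Build an ffmpeg command that pushes raw rendered frames (written
    to ffmpeg's stdin as rgb24 bytes) to an RTMP ingest URL."""
    return [
        "ffmpeg",
        "-f", "rawvideo",            # uncompressed frames on stdin
        "-pix_fmt", "rgb24",
        "-s", f"{width}x{height}",   # frame geometry of the input
        "-r", str(fps),
        "-i", "-",                   # read input from stdin
        "-c:v", "libx264",           # encode to H.264
        "-preset", "veryfast",       # favor latency over compression
        "-f", "flv",                 # RTMP carries an FLV container
        rtmp_url,
    ]
```

The render loop would launch this with `subprocess.Popen` and write each frame's pixel buffer to the process's stdin.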

This remains a complex project with a high barrier to entry, requiring skills in 3D modeling, face and motion recognition, and Unity development, plus a foundation in rendering and live-streaming technology. As the relevant tools and services mature, however, production costs will drop significantly and virtual-character live broadcasting will become more widespread.