WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced that it is developing a Digital Human Multi-channel Data Management System, which unifies the management and processing of a virtual digital human's multi-channel data, including voice, image, pose, movement, and emotion data.
The system mainly consists of the following modules (a simplified pipeline sketch follows the list):
1. Multi-channel data collection: This module uses a variety of sensing devices, such as voice recognizers and motion capture equipment, and integrates multiple data sources to acquire the digital human's voice, image, pose, and motion data across channels.
2. Data pre-processing and feature extraction: Machine learning and pattern recognition techniques are applied to the collected multi-channel data for pre-processing, feature extraction, data mining, and dimensionality reduction, extracting valuable information and features.
3. Data fusion and modeling: This module fuses information from the multiple data channels and uses a deep neural network modeling approach to model and make predictions from the digital human's various data streams.
4. Data storage and management: The pre-processed and modeled data are stored in a database, and suitable data structures and algorithms are used to store and manage the data.
5. Data retrieval and query: This module retrieves and queries the data in the database using query languages and search algorithms.
6. Data analysis and mining: This module applies data mining and data analysis techniques to the stored multi-channel data of digital humans to extract valuable information.
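Conceptually, the modules above form a pipeline from collection through analysis. The following is a minimal sketch in Python of how such a pipeline could be organized; the class and function names (ChannelSample, FusedRecord, DataStore, extract_features, fuse) are hypothetical assumptions made for illustration and do not describe WiMi's actual implementation.

```python
# Minimal, hypothetical sketch of a multi-channel digital-human data pipeline.
# All names and structures here are illustrative assumptions, not WiMi's design.
from dataclasses import dataclass, field
from typing import Dict, List
import statistics
import time

@dataclass
class ChannelSample:
    """One reading from a single channel (voice, pose, motion, ...)."""
    channel: str                 # e.g. "voice", "pose", "motion"
    timestamp: float
    values: List[float]          # raw numeric payload from the sensing device

@dataclass
class FusedRecord:
    """Pre-processed features from all channels, fused into one record."""
    timestamp: float
    features: Dict[str, List[float]] = field(default_factory=dict)

def extract_features(sample: ChannelSample) -> List[float]:
    """Toy feature extraction: mean, spread, and range of the raw values."""
    v = sample.values
    return [statistics.fmean(v), statistics.pstdev(v), max(v) - min(v)]

def fuse(samples: List[ChannelSample]) -> FusedRecord:
    """Fuse per-channel features into a single multi-channel record."""
    record = FusedRecord(timestamp=time.time())
    for s in samples:
        record.features[s.channel] = extract_features(s)
    return record

class DataStore:
    """In-memory stand-in for the storage, retrieval, and analysis modules."""
    def __init__(self) -> None:
        self._records: List[FusedRecord] = []

    def save(self, record: FusedRecord) -> None:
        self._records.append(record)

    def query(self, channel: str) -> List[List[float]]:
        """Retrieve the stored feature vectors for one channel."""
        return [r.features[channel] for r in self._records if channel in r.features]

if __name__ == "__main__":
    # Collection: one sample per channel from (simulated) sensing devices.
    samples = [
        ChannelSample("voice",  time.time(), [0.12, 0.34, 0.29, 0.41]),
        ChannelSample("pose",   time.time(), [1.00, 0.98, 1.02, 1.01]),
        ChannelSample("motion", time.time(), [0.05, 0.40, 0.22, 0.31]),
    ]
    store = DataStore()
    store.save(fuse(samples))            # pre-process, fuse, and store
    print(store.query("voice"))          # retrieve features for analysis
```

In a real deployment, the in-memory store would be replaced by an actual database, and the hand-written feature extraction and fusion would be replaced by learned models such as the deep neural networks mentioned above; the division of responsibilities, however, mirrors the listed modules.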
The system is characterized by real-time performance, high security, and strong adaptability. It can process and analyze data promptly after collection to provide users with immediate feedback. It strictly protects user data to ensure that it is not leaked or misused, and it can be customized to meet the needs of different industry sectors with good flexibility and adaptability.
The system can be applied to a variety of scenarios, including speech recognition, action recognition, emotion recognition, and human-computer interaction. For example, by collecting voice data and applying speech recognition technology, the system can enable voice interaction with the digital human. Using pose and motion capture equipment, the system manages the digital human's motion data; combined with motion recognition technology, this enables interaction and motion control of the digital human in scenes such as entertainment and games. By collecting data such as voice, video, and physiological signals and combining them with emotion recognition technology, the system can enable emotional communication with the digital human. The interaction between the digital human and the user is realized through the multi-channel data management system. In these scenarios, the system provides comprehensive data management and analysis support for digital human technology, improving realism and interaction quality and enhancing the application value of digital humans in digital entertainment, online education, human-computer interaction, medical health, and other fields.
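As a concrete illustration of the voice-interaction scenario, the short sketch below shows how recognized speech might drive a digital human's reply. The recognize_speech and digital_human_reply functions are placeholders assumed for illustration; they stand in for a real speech recognition model and dialogue logic.

```python
# Hypothetical sketch of the voice-interaction scenario; the recognizer and
# the digital human's responses are simulated placeholders, not real APIs.
from typing import Dict

def recognize_speech(audio_frames: bytes) -> str:
    """Stand-in for a speech recognition model; returns the recognized text."""
    return "hello"  # a real system would decode the captured audio here

def digital_human_reply(text: str) -> str:
    """Map recognized text to a scripted response for the digital human."""
    responses: Dict[str, str] = {
        "hello": "Hello! How can I help you today?",
        "goodbye": "Goodbye, see you next time.",
    }
    return responses.get(text, "Sorry, I did not catch that.")

if __name__ == "__main__":
    audio = b"\x00\x01\x02"   # simulated audio captured by the collection module
    print(digital_human_reply(recognize_speech(audio)))
```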