WiMi Announced a Deep Transfer Learning-Based Fusion Model for Image Classification

BEIJING, Nov. 8, 2023 /PRNewswire/ — WiMi Hologram Cloud Inc. (NASDAQ: WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced that it has applied transfer learning to image classification, building a fusion model that improves classification performance on small-sample datasets by leveraging the feature representations of models trained on large-scale datasets.

Deep transfer learning applies deep learning models that have been trained on large-scale datasets to new tasks. In image classification, it can accelerate model training and improve classification performance by transferring some or all of the network parameters of an already trained model to a new model. Image features are extracted by a pre-trained deep neural network and classified by a classifier model; the two are connected, and the whole model is then optimized end to end with the back-propagation algorithm. This approach effectively reuses existing learned features to improve the accuracy and efficiency of image classification.
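The pipeline described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not WiMi's actual implementation: the "pre-trained" backbone is a stand-in frozen projection rather than a real CNN, the toy labels are constructed to be learnable, and only the classifier head is trained (the "transfer some of the parameters" variant in the text).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a backbone pre-trained on a large-scale dataset:
# its weights are frozen (transferred, not re-trained).
W_pretrained = rng.standard_normal((64, 16))

def extract_features(images):
    """Map flattened images (n, 64) to feature vectors (n, 16).
    ReLU features, scaled for numerical stability."""
    return np.maximum(images @ W_pretrained, 0.0) / 8.0

# Toy small-sample dataset: 30 "images", 3 classes.
# Labels are defined to be linearly separable in feature space,
# so a classifier head can actually fit them.
X = rng.standard_normal((30, 64))
y = (extract_features(X) @ rng.standard_normal((16, 3))).argmax(axis=1)
onehot = np.eye(3)[y]

# New classifier head trained on the small target dataset.
W_head = np.zeros((16, 3))

def classify(images):
    """Softmax probabilities over the 3 classes."""
    logits = extract_features(images) @ W_head
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Back-propagation updates only the head; the backbone stays frozen.
for _ in range(300):
    probs = classify(X)
    grad = extract_features(X).T @ (probs - onehot) / len(X)
    W_head -= 0.5 * grad

train_acc = (classify(X).argmax(axis=1) == y).mean()
```

In a real system the frozen projection would be replaced by a pre-trained CNN, and the whole stack could optionally be fine-tuned end to end rather than freezing the backbone.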

WiMi’s deep transfer learning-based image classification fusion model combines several pre-trained deep learning models and integrates them through transfer learning to improve the accuracy of image classification. The model architecture consists of the following key components:

Basic model selection: The design of the fusion model starts by selecting several basic deep learning models as candidates. These are models pre-trained on large-scale image datasets that perform well and apply broadly to image classification tasks.

Feature extraction: To fuse the different base models, a feature-extraction stage is added to each one. Its role is to convert the input image into a high-dimensional feature vector that subsequent classifiers can operate on. Here, a convolutional neural network (CNN) performs the feature extraction.

Fusion: Feature extraction yields multiple feature vectors, one from each basic model. A fusion stage then combines them into a single, more expressive feature vector to improve classification.

Classifier: Finally, the fused feature vector must be mapped to categories. A classifier is added for this purpose; it maps the fused feature vector to the different classes, thereby classifying the image.
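The four components above can be sketched end to end. Again this is a hedged illustration, not WiMi's implementation: the two "pre-trained backbones" are hypothetical frozen projections with different nonlinearities, and the fusion step is simple concatenation (weighted averaging or attention-based fusion are common alternatives the release leaves unspecified).

```python
import numpy as np

rng = np.random.default_rng(1)

# Basic model selection: two hypothetical pre-trained backbones,
# represented here by frozen random projections.
W_a = rng.standard_normal((64, 8))
W_b = rng.standard_normal((64, 8))

def features_a(x):
    """Feature extraction with backbone A (ReLU features)."""
    return np.maximum(x @ W_a, 0.0)

def features_b(x):
    """Feature extraction with backbone B (tanh features)."""
    return np.tanh(x @ W_b)

def fuse(x):
    """Fusion: concatenate the per-backbone feature vectors into
    one more expressive vector (8 + 8 -> 16 dimensions)."""
    return np.concatenate([features_a(x), features_b(x)], axis=1)

# Classifier: maps the fused vector to class probabilities.
W_head = rng.standard_normal((16, 3)) * 0.1

def classify(x):
    logits = fuse(x) @ W_head
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

images = rng.standard_normal((5, 64))  # five flattened 8x8 "images"
fused = fuse(images)
probs = classify(images)
```

Concatenation is the simplest fusion choice because it preserves every backbone's features intact and lets the classifier learn how to weight them; its cost is a head whose input width grows with the number of backbones.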

Fusing the advantages of multiple basic models can improve the accuracy of image classification. The deep transfer learning-based fusion model is also flexible: different base models and fusion methods can be selected according to the actual situation to suit different image classification tasks.

Image recognition is an important application of deep learning in the field of computer vision, and the image classification fusion model based on deep transfer learning researched by WiMi can be widely applied across industries. In intelligent security, the model can perform real-time face recognition on images captured by surveillance cameras, triggering automatic alarms for strangers. Autonomous driving is another important application: the model can recognize and classify objects such as traffic signs, vehicles, and pedestrians on the road, which is crucial for self-driving vehicles as they assess changes in the surrounding environment and make decisions accordingly. For example, when a vehicle recognizes a pedestrian crossing the road in front of it, it can brake in time to ensure the pedestrian's safety. The model can also power a vehicle's automatic parking system, which parks the vehicle automatically by recognizing parking spaces and obstacles. In social media analysis, the model can analyze and classify images posted on social media to understand users' interests and preferences; for example, by analyzing photos users post, relevant products or activities can be recommended as a personalized service. Social media analytics can likewise support sentiment analysis, inferring users' emotional states from the expressions and emotions in their images to inform better services and marketing strategies for enterprises.

Beyond the above scenarios, the image classification fusion model based on deep transfer learning can also be applied to many other fields, such as smart homes, smart manufacturing, and smart assistants. By recognizing and classifying images, intelligent perception and understanding of the environment and objects can be achieved, bringing more convenience and efficiency to people's lives and work.

With the successful application of deep transfer learning to image classification tasks, WiMi will focus in the future on exploring and improving the fusion model in terms of cross-domain transfer learning, model interpretability, and small-sample learning, in order to further improve the performance and broaden the application scope of image classification.

About WIMI Hologram Cloud

WIMI Hologram Cloud, Inc. (NASDAQ: WIMI) is a holographic cloud comprehensive technical solution provider that focuses on professional areas including holographic AR automotive HUD software, 3D holographic pulse LiDAR, head-mounted light field holographic equipment, holographic semiconductor, holographic cloud software, holographic car navigation and others. Its services and holographic AR technologies include holographic AR automotive application, 3D holographic pulse LiDAR technology, holographic vision semiconductor technology, holographic software development, holographic AR advertising technology, holographic AR entertainment technology, holographic ARSDK payment, interactive holographic communication and other holographic AR technologies.

Safe Harbor Statements

This press release contains “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements can be identified by terminology such as “will,” “expects,” “anticipates,” “future,” “intends,” “plans,” “believes,” “estimates,” and similar statements. Statements that are not historical facts, including statements about the Company’s beliefs and expectations, are forward-looking statements. Among other things, the business outlook and quotations from management in this press release and the Company’s strategic and operational plans contain forward-looking statements. The Company may also make written or oral forward-looking statements in its periodic reports to the US Securities and Exchange Commission (“SEC”) on Forms 20-F and 6-K, in its annual report to shareholders, in press releases, and other written materials, and in oral statements made by its officers, directors or employees to third parties. Forward-looking statements involve inherent risks and uncertainties. Several factors could cause actual results to differ materially from those contained in any forward-looking statement, including but not limited to the following: the Company’s goals and strategies; the Company’s future business development, financial condition, and results of operations; the expected growth of the AR holographic industry; and the Company’s expectations regarding demand for and market acceptance of its products and services.

Further information regarding these and other risks is included in the Company’s annual report on Form 20-F and the current report on Form 6-K and other documents filed with the SEC. All information provided in this press release is as of the date of this press release. The Company does not undertake any obligation to update any forward-looking statement except as required under applicable laws.

Source: WiMi Announced a Deep Transfer Learning-Based Fusion Model for Image Classification

The information provided in this article was created by Cision PR Newswire, our news partner. The author's opinions and the content shared on this page are their own and may not necessarily represent the perspectives of Thailand Business News.