WiMi Announces Image-Fused Point Cloud Semantic Segmentation with Fusion Graph Convolutional Network
05 January 2024 - 2:00PM
WiMi Hologram Cloud Inc. (NASDAQ: WIMI) ("WiMi" or the "Company"),
a leading global Hologram Augmented Reality ("AR") Technology
provider, today announced an image-fused point cloud semantic
segmentation method based on a fusion graph convolutional network,
aiming to leverage the complementary information of images and
point clouds to improve the accuracy and efficiency of semantic
segmentation. Point cloud data is very effective in representing
the geometry and structure of objects, while image data contains
rich color and texture information. Fusing these two types of data
can utilize their advantages simultaneously and provide more
comprehensive information for semantic segmentation.
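The basic idea of the fusion can be illustrated with a minimal sketch: each LiDAR point carries XYZ geometry, and projecting it into the camera image yields a per-point color sample; concatenating the two gives a fused per-point feature. The toy projection, array sizes, and variable names below are illustrative assumptions, not WiMi's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
num_points = 5

xyz = rng.uniform(-1.0, 1.0, size=(num_points, 3))  # point cloud geometry
image = rng.uniform(0.0, 1.0, size=(8, 8, 3))       # tiny RGB image

# Map each point into pixel coordinates (toy orthographic projection).
u = ((xyz[:, 0] + 1.0) / 2.0 * 7).astype(int)
v = ((xyz[:, 1] + 1.0) / 2.0 * 7).astype(int)
rgb = image[v, u]                                   # per-point color sample

# Fused feature: 3 geometric + 3 color channels per point.
fused = np.concatenate([xyz, rgb], axis=1)
print(fused.shape)  # (5, 6)
```

A network consuming such fused features sees both where a point is and what it looks like, which is the premise of the announced method.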
The fusion graph convolutional network (FGCN) is
an effective deep learning model that processes image and point
cloud data simultaneously and handles image features at different
resolutions and scales for efficient feature extraction and
segmentation. FGCN makes fuller use of multi-modal data by
extracting the semantic information of each point in the bimodal
image and point cloud data. To improve the efficiency of image feature
extraction, WiMi also introduces a two-channel k-nearest neighbor
(KNN) module. This module allows the FGCN to utilize the spatial
information in the image data to better understand the contextual
information in the image by computing the semantic information of
the k nearest neighbors around each point. This helps FGCN to
better distinguish between more important features and remove
irrelevant noise. In addition, FGCN employs a spatial attention
mechanism to better focus on the more important features in the
point cloud data. This mechanism allows the model to assign
different weights to each point based on its geometry and the
relationship of neighboring points to better understand the
semantic information of the point cloud data. By fusing multi-scale
features, FGCN enhances the generalization ability of the network
and improves the accuracy of semantic segmentation. Multi-scale
feature extraction allows the model to consider information in
different spatial scales, leading to a more comprehensive
understanding of the semantic content of images and point cloud
data.
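The two ingredients described above, neighborhood gathering via k-nearest neighbors and attention-style weighting of neighboring points, can be sketched as follows. This is a simplified illustration, not the announced FGCN: the distance-based softmax score standing in for the spatial attention mechanism, and all names and sizes, are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.normal(size=(20, 3))     # point cloud coordinates
features = rng.normal(size=(20, 4))   # per-point features (e.g., fused XYZ+RGB)
k = 5

# Pairwise squared distances, then the k nearest neighbors of each point.
d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
knn_idx = np.argsort(d2, axis=1)[:, :k]           # (20, k), includes the point itself

# Distance-based "attention": nearer neighbors get larger softmax weights.
knn_d2 = np.take_along_axis(d2, knn_idx, axis=1)  # (20, k)
w = np.exp(-knn_d2)
w /= w.sum(axis=1, keepdims=True)                 # weights sum to 1 per point

# Aggregate neighbor features with the attention weights.
agg = (w[:, :, None] * features[knn_idx]).sum(axis=1)
print(agg.shape)  # (20, 4)
```

In a learned model the weights would come from trained parameters rather than raw distance, but the structure is the same: each point's output feature is a weighted combination of its k nearest neighbors, so nearby context informs the segmentation of every point.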
This image-fused point cloud semantic
segmentation with fusion graph convolutional network is able to
utilize the information of multi-modal data such as images and
point clouds more efficiently to improve the accuracy and
efficiency of semantic segmentation, which is expected to advance
machine vision, artificial intelligence, photogrammetry, remote
sensing, and other fields, providing a new method for future
semantic segmentation research.
This image-fused point cloud semantic
segmentation with fusion graph convolutional network has a wide
range of application prospects and can be applied in many fields
such as autonomous driving, robotics, and medical image analysis.
With the rapid development of autonomous driving, robotics, medical
image analysis and other fields, there is an increasing demand for
processing and semantic segmentation of image and point cloud data.
For example, in the field of autonomous driving, self-driving cars
need to accurately perceive and understand the surrounding
environment, including semantic segmentation of objects such as
roads, vehicles, and pedestrians. This image-fused point cloud
semantic segmentation with fusion graph convolutional network can
improve the perception and understanding of the surrounding
environment and provide more accurate data support for decision
making and control of self-driving cars. In the field of robotics,
robots need to perceive and understand the external environment in
order to accomplish various tasks. Image-fused point cloud
semantic segmentation with the fusion graph convolutional network can
fuse image and point cloud data acquired by robots to improve the
ability to perceive and understand the external environment, which
helps robots to better accomplish tasks. In the medical field,
medical image analysis requires accurate segmentation and
recognition of medical images to better assist medical diagnosis
and treatment. The image-fused point cloud semantic segmentation
with fusion graph convolutional network can fuse medical images and
point cloud data to improve the segmentation and recognition
accuracy of medical images, thus providing more accurate data
support for medical diagnosis and treatment.
In the future, WiMi's research will further
optimize the model structure and combine the model with additional
deep learning techniques to improve its performance. The Company
will also further develop its multi-modal data fusion technology,
fusing different types of data (e.g., image, point cloud, and text)
to provide more comprehensive and richer information and improve
the accuracy of semantic segmentation. WiMi will continue to
improve the real-time processing capability of the image-fused
point cloud semantic segmentation with fusion graph convolutional
network to meet application demands.
About WIMI Hologram Cloud
WIMI Hologram Cloud, Inc. (NASDAQ:WIMI) is a holographic cloud comprehensive technical
solution provider that focuses on professional areas including
holographic AR automotive HUD software, 3D holographic pulse LiDAR,
head-mounted light field holographic equipment, holographic
semiconductor, holographic cloud software, holographic car
navigation and others. Its services and holographic AR technologies
include holographic AR automotive application, 3D holographic pulse
LiDAR technology, holographic vision semiconductor technology,
holographic software development, holographic AR advertising
technology, holographic AR entertainment technology, holographic
AR SDK payment, interactive holographic communication and other
holographic AR technologies.
Safe Harbor Statements
This press release contains "forward-looking statements" within the
meaning of the Private Securities Litigation Reform Act of 1995.
These forward-looking statements can
be identified by terminology such as "will," "expects,"
"anticipates," "future," "intends," "plans," "believes,"
"estimates," and similar statements. Statements that are not
historical facts, including statements about the Company's beliefs
and expectations, are forward-looking statements. Among other
things, the business outlook and quotations from management in this
press release and the Company's strategic and operational plans
contain forward-looking statements. The Company may also make
written or oral forward-looking statements in its periodic reports
to the US Securities and Exchange Commission ("SEC") on Forms 20-F
and 6-K, in its annual report to shareholders, in press releases,
and other written materials, and in oral statements made by its
officers, directors or employees to third parties. Forward-looking
statements involve inherent risks and uncertainties. Several
factors could cause actual results to differ materially from those
contained in any forward-looking statement, including but not
limited to the following: the Company's goals and strategies; the
Company's future business development, financial condition, and
results of operations; the expected growth of the AR holographic
industry; and the Company's expectations regarding demand for and
market acceptance of its products and services.
Further information regarding these and other
risks is included in the Company's annual report on Form 20-F and
the current report on Form 6-K and other documents filed with the
SEC. All information provided in this press release is as of the
date of this press release. The Company does not undertake any
obligation to update any forward-looking statement except as
required under applicable laws.
Contacts
WIMI Hologram Cloud Inc.
Email: pr@wimiar.com
TEL: 010-53384913

ICR, LLC
Robin Yang
Tel: +1 (646) 975-9495
Email: wimi@icrinc.com