- Ceva-NeuPro-Nano NPUs deliver an optimal balance of
ultra-low power and high performance in a small area to efficiently
execute TinyML workloads in consumer, industrial, and
general-purpose AIoT products
- Ceva-NeuPro Studio, a complete AI SDK for the Ceva-NeuPro
NPU family, supports open AI frameworks including TensorFlow Lite
for Microcontrollers and microTVM to simplify the rapid development
of TinyML-enabled applications
- Optimized NPUs for embedded devices build on Ceva's market
leadership in IoT connectivity and strong expertise in
audio and vision sensing to help semiconductor companies and OEMs
unlock the potential of edge AI
ROCKVILLE, Md., June 24,
2024 /PRNewswire/ -- Ceva, Inc. (NASDAQ: CEVA),
the leading licensor of silicon and software IP that enables Smart
Edge devices to connect, sense and infer data more reliably and
efficiently, today announced that it has extended its Ceva-NeuPro
family of Edge AI NPUs with the introduction of Ceva-NeuPro-Nano
NPUs. These highly efficient, self-sufficient NPUs deliver the
power, performance and cost efficiencies needed for semiconductor
companies and OEMs to integrate TinyML models into their SoCs for
consumer, industrial, and general-purpose AIoT products.
TinyML refers to the deployment of machine learning models on
low-power, resource-constrained devices to bring the power of AI to
the Internet of Things (IoT). Driven by the increasing demand for
efficient and specialized AI solutions in IoT devices, the market
for TinyML is growing rapidly. According to research firm ABI
Research, by 2030 over 40% of TinyML shipments will be powered by
dedicated TinyML hardware rather than all-purpose MCUs. By
addressing the specific performance challenges of TinyML, the
Ceva-NeuPro-Nano NPUs aim to make AI ubiquitous, economical and
practical for a wide range of use cases, spanning voice, vision,
predictive maintenance, and health sensing in consumer and
industrial IoT applications.
The new Ceva-NeuPro-Nano Embedded AI NPU architecture is fully
programmable and efficiently executes Neural Networks, feature
extraction, control code, and DSP code, and supports the most
advanced machine learning data types and operators, including
native transformer computation, sparsity acceleration, and fast
quantization. This optimized, self-sufficient architecture enables
Ceva-NeuPro-Nano NPUs to deliver superior power efficiency, with a
smaller silicon footprint, and better performance than the
existing processor solutions used for TinyML workloads, which
combine a CPU or DSP with an AI
accelerator. Furthermore, Ceva-NetSqueeze AI compression
technology directly processes compressed model weights, without the
need for an intermediate decompression stage. This enables the
Ceva-NeuPro-Nano NPUs to achieve up to 80% memory footprint
reduction, solving a key bottleneck inhibiting the broad adoption
of AIoT processors today.
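Ceva does not disclose how NetSqueeze encodes weights, so the sketch below is only a hypothetical illustration of the general idea of consuming compressed weights without an intermediate decompression stage: signed 4-bit weights are packed two per byte, and each weight is decoded on the fly inside the dot-product loop, so an unpacked weight buffer never has to be materialized in memory.

```python
import numpy as np

def pack_int4(weights_q):
    """Pack signed 4-bit weights (values in [-8, 7]) two per byte."""
    w = np.asarray(weights_q, dtype=np.int8)
    if w.size % 2:
        w = np.append(w, np.int8(0))  # pad to an even count
    lo = w[0::2].astype(np.uint8) & 0x0F
    hi = ((w[1::2].astype(np.uint8) & 0x0F) << 4).astype(np.uint8)
    return lo | hi

def unpack_nibble(packed, i):
    """Recover the i-th signed 4-bit weight from the packed buffer."""
    byte = packed[i // 2]
    nib = int(byte >> 4) if (i % 2) else int(byte & 0x0F)
    return nib - 16 if nib >= 8 else nib  # sign-extend 4 bits

def dot_packed(packed, n_weights, activations):
    """Dot product that decodes each weight on the fly: the unpacked
    weight array never exists, halving weight storage and traffic
    versus an 8-bit representation."""
    acc = 0
    for i in range(n_weights):
        acc += unpack_nibble(packed, i) * activations[i]
    return acc
```

Here the roughly 2x saving comes from the 4-bit packing alone; a scheme like NetSqueeze that reaches up to 80% reduction would layer further compression on top of this idea.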
"Ceva-NeuPro-Nano opens exciting opportunities for companies to
integrate TinyML applications into low-power IoT SoCs and MCUs and
builds on our strategy to empower smart edge devices with advanced
connectivity, sensing and inference capabilities. The
Ceva-NeuPro-Nano family of NPUs enables more companies to bring AI
to the very edge, resulting in intelligent IoT devices with
advanced feature sets that capture more value for our customers,"
said Chad Lucien, vice president and
general manager of the Sensors and Audio Business Unit at Ceva. "By
leveraging our industry-leading position in wireless IoT
connectivity and strong expertise in audio and vision sensing, we
are uniquely positioned to help our customers unlock the potential
of TinyML to enable innovative solutions that enhance user
experiences, improve efficiencies, and contribute to a smarter,
more connected world."
According to Paul Schell,
Industry Analyst at ABI Research, "Ceva-NeuPro-Nano is a compelling
solution for on-device AI in smart edge IoT devices. It addresses
the power, performance, and cost requirements to enable always-on
use cases on battery-operated devices integrating voice, vision,
and sensing across a wide array of end markets. From TWS
earbuds, headsets, wearables, and smart speakers to industrial
sensors, smart appliances, home automation devices, cameras, and
more, Ceva-NeuPro-Nano enables TinyML in energy-constrained AIoT
devices."
The Ceva-NeuPro-Nano NPU is available in two configurations -
the Ceva-NPN32 with 32 int8 MACs, and the Ceva-NPN64 with 64 int8
MACs, both of which benefit from Ceva-NetSqueeze for direct
processing of compressed model weights. The Ceva-NPN32 is highly
optimized for most TinyML workloads targeting voice, audio, object
detection, and anomaly detection use cases. The Ceva-NPN64 provides
2x performance acceleration using weight sparsity, greater memory
bandwidth, more MACs, and support for 4-bit weights to deliver
enhanced performance for more complex on-device AI use cases such
as object classification, face detection, speech recognition,
health monitoring, and others.
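The 2x acceleration from weight sparsity comes from not spending MAC cycles on weights that are zero: a pruned model produces the same result with far fewer operations. A minimal illustrative sketch of the principle (not Ceva's implementation):

```python
def dense_macs(weights, activations):
    """Baseline: one MAC per weight, zeros included."""
    acc, macs = 0, 0
    for w, a in zip(weights, activations):
        acc += w * a
        macs += 1
    return acc, macs

def sparse_macs(weights, activations):
    """Sparsity-aware: skip zero weights entirely. The result is
    identical, but the MAC count (a proxy for cycles and energy)
    drops in proportion to the fraction of pruned weights."""
    acc, macs = 0, 0
    for w, a in zip(weights, activations):
        if w == 0:
            continue
        acc += w * a
        macs += 1
    return acc, macs
```

With half of the weights pruned to zero, the sparse path does half the MACs for the same output, which is the source of the quoted 2x figure.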
The NPUs are delivered with Ceva-NeuPro Studio, a complete AI
SDK that provides a unified AI stack and a common set of
tools across the entire Ceva-NeuPro NPU family, supporting open AI
frameworks including TensorFlow Lite for Microcontrollers (TFLM)
and microTVM (µTVM).
Ceva-NeuPro-Nano Key Features
Flexible and scalable NPU architecture
- Fully programmable to efficiently execute Neural Networks,
feature extraction, control code, and DSP code
- Scalable performance by design to meet a wide range of use
cases
- MAC configurations with up to 64 int8 MACs per cycle
- Future proof architecture that supports the most advanced ML
data types and operators
- 4-bit to 32-bit integer support
- Native transformer computation
- Ultimate ML performance for all use cases using advanced
mechanisms
- Sparsity acceleration
- Acceleration of non-linear activation types
- Fast quantization
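As background on the integer data types listed above: TinyML deployments typically shrink float weights with symmetric per-tensor int8 quantization, mapping them onto a single scale factor. The sketch below shows that standard scheme as a generic illustration, not Ceva's specific method:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map float weights
    onto [-127, 127] using one shared scale factor."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale
```

The same recipe extends to the 4-bit weights mentioned above by clipping to [-7, 7] instead, trading a little accuracy for a further 2x reduction in weight storage.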
Edge NPU with ultra-low memory requirements
- Highly efficient, single-core design for NN compute, feature
extraction, control code, and DSP code eliminates the need for a
companion MCU for these computationally intensive tasks
- Up to 80% memory footprint reduction via Ceva-NetSqueeze, which
directly processes compressed model weights without the need for an
intermediate decompression stage
Ultra-low energy achieved through innovative energy
optimization techniques
- Automatic on-the-fly energy tuning
- Dramatic energy and bandwidth reduction by skipping redundant
computations using weight-sparsity acceleration
Complete, Simple to Use AI SDK
- Ceva-NeuPro Studio provides a unified AI stack, with an easy
click-and-run experience, for all Ceva-NeuPro NPUs, from the new
Ceva-NeuPro-Nano to the powerful Ceva-NeuPro-M
- Fast time to market by accelerating software development and
deployment
- Optimized to work seamlessly with leading, open AI inference
frameworks including TFLM and µTVM
- Model Zoo of pretrained and optimized TinyML models covering
voice, vision and sensing use cases
- Flexible to adapt to new models, applications and market
needs
- Comprehensive portfolio of optimized runtime libraries and
off-the-shelf application-specific software
Availability
Ceva-NeuPro-Nano NPUs are available for
licensing today. For more information, visit:
https://www.ceva-ip.com/product/ceva-neupro-nano/
About Ceva, Inc.
At Ceva, we are passionate about
bringing new levels of innovation to the smart edge. Our wireless
communications, sensing and Edge AI technologies are at the heart
of some of today's most advanced smart edge products. From
Bluetooth, Wi-Fi, UWB and 5G platform IP for ubiquitous, robust
communications, to scalable Edge AI NPU IPs, sensor fusion
processors and embedded application software that make devices
smarter, we have the broadest portfolio of IP to connect, sense and
infer data more reliably and efficiently. We deliver differentiated
solutions that combine outstanding performance at ultra-low power
within a very small silicon footprint. Our goal is simple – to
deliver the silicon and software IP to enable a smarter, safer, and
more interconnected world. This philosophy is in practice today,
with Ceva powering more than 17 billion of the world's most
innovative smart edge products from AI-infused smartwatches, IoT
devices and wearables to autonomous vehicles and 5G mobile
networks.
Our headquarters are in Rockville,
Maryland with a global customer base supported by operations
worldwide. Our employees are among the leading experts in their
areas of specialty, consistently solving the most complex design
challenges, enabling our customers to bring innovative smart edge
products to market.
Ceva: Powering the Smart Edge™
Visit us at www.ceva-ip.com and follow us on LinkedIn, X,
YouTube, Facebook, and Instagram.
View original content:
https://www.prnewswire.com/news-releases/ceva-extends-its-smart-edge-ip-leadership-adding-new-tinyml-optimized-npus-for-aiot-devices-to-enable-edge-ai-everywhere-302179990.html
SOURCE Ceva, Inc.