Driving Innovation through Advanced R&D Solutions
Artificial Intelligence (AI) and Machine Learning (ML) have become mainstream topics, particularly with the rise of tools like OpenAI’s ChatGPT, which has garnered attention for its capabilities in natural language processing. However, as AI becomes increasingly ubiquitous, there’s growing curiosity about whether it can be applied to more constrained environments, such as microcontrollers and embedded devices. Given that traditional machine learning models require vast amounts of data and immense computational power, the idea of running AI on limited-resource hardware might seem far-fetched. Yet, this is precisely where edge computing and low-power AI solutions come into play, offering an exciting way to deploy AI models on tiny, energy-efficient devices. Let’s explore some cutting-edge platforms for embedded AI and edge computing.
Nvidia is a leader in the AI and machine learning hardware space, with its Jetson series offering powerful modules for edge computing. The Jetson AGX Orin, for instance, provides a whopping 275 TOPS (Tera Operations Per Second), making it an ideal choice for high-performance AI applications. However, for less demanding edge computing tasks, the Jetson Nano is a more budget-friendly option, offering impressive performance for its price point.
The Jetson Nano Developer Kit is an excellent starting point for developers, priced at around $99 USD. The Nano is highly versatile: it runs Linux, supports PyTorch and CUDA, and lets you deploy AI models without a complex toolchain. The onboard Linux environment supports a wide array of peripherals, including serial buses such as SPI and I2C for connecting external sensors. Keep in mind, however, that peripheral access from Linux may not be as straightforward as on simpler microcontroller-based platforms.
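As a rough sketch of what deployment looks like on the Nano, the snippet below (assuming a CUDA-enabled PyTorch build, as shipped in Nvidia's JetPack images) selects the GPU when available and falls back to the CPU otherwise. The tiny network here is a stand-in for a real trained model:

```python
import torch

# Pick the GPU when CUDA is available (as on a Jetson), otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A placeholder model; on a real Nano this would be your trained network.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 2),
).to(device).eval()

x = torch.randn(1, 4, device=device)  # dummy input batch
with torch.no_grad():
    y = model(x)

print(tuple(y.shape))  # (1, 2)
```

Because the same script runs unchanged on a desktop and on the Nano, you can develop on a workstation and copy the code over, which is a large part of the platform's appeal.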
While the Jetson Nano is a great option for edge AI, it is a comparatively resource-heavy, general-purpose platform, suited not only to AI tasks but also to broader computing needs. For those who need GPU acceleration for intensive calculations, the Jetson series, including the powerful Jetson Xavier, offers great potential.
On the other end of the spectrum, Google’s Coral Dev Board Micro focuses on ultra-low-power, efficient AI processing at the edge. Priced at $80 USD, this compact board is one of the smallest and most affordable options dedicated to AI tasks. At its core, an NXP i.MX RT1176 microcontroller (with Arm Cortex-M7 and Cortex-M4 cores) pairs with the Coral Edge TPU (Tensor Processing Unit) coprocessor, which is optimized for machine learning inference.
With 4 TOPS of AI processing power, the Coral Dev Board Micro can handle significant ML workloads without needing cloud processing. This makes it an ideal solution for edge applications that require low-latency AI, such as real-time image or sound analysis. The board comes with a microphone and camera, along with secure elements, flash memory, and extra RAM to support data collection and secure processing.
Developers can either use Google’s pre-trained models or create their own custom models. With a focus on embedded systems, the Coral Dev Board Micro supports both Python and C/C++ for application development. If you’re looking to develop for a non-Linux environment, the Coral Dev Board Micro supports FreeRTOS for low-level embedded applications. The device also offers a range of expansion options via add-on boards, including Wi-Fi, Bluetooth Low Energy (BLE), and Power over Ethernet (PoE), enhancing its connectivity and versatility for edge-based IoT solutions.
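On Coral's Linux-capable boards, attaching the Edge TPU from Python typically means loading the `libedgetpu` delegate into a TensorFlow Lite interpreter. The helper below is a sketch of that pattern, assuming the `tflite_runtime` package is installed and the model has already been compiled for the Edge TPU with Google's `edgetpu_compiler`; when no TPU is present, it quietly falls back to CPU execution:

```python
def make_interpreter(model_path: str, use_edgetpu: bool = True):
    """Create a TFLite interpreter, attaching the Edge TPU delegate when requested."""
    # Imported lazily so the helper can be defined on machines without tflite_runtime.
    from tflite_runtime.interpreter import Interpreter, load_delegate

    delegates = []
    if use_edgetpu:
        try:
            # libedgetpu.so.1 is the Edge TPU runtime shipped with Coral devices.
            delegates = [load_delegate("libedgetpu.so.1")]
        except (OSError, ValueError):
            pass  # no Edge TPU runtime found; run the model on the CPU instead
    interpreter = Interpreter(model_path=model_path, experimental_delegates=delegates)
    interpreter.allocate_tensors()
    return interpreter
```

From there, inference follows the standard TFLite flow: write the input tensor, call `invoke()`, and read the output tensor.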
You don’t need to rely on expensive, cutting-edge development boards to start experimenting with edge AI. Platforms like STM32 — microcontroller units (MCUs) from STMicroelectronics — offer a more cost-effective solution for embedded AI. With the help of tools like TensorFlow Lite for Microcontrollers and STM32Cube.AI, you can run machine learning models even on older MCUs, such as the STM32F4 family, introduced more than a decade ago.
While these older platforms are less powerful than modern AI-specific devices like the Jetson Nano or Coral Dev Board Micro, they can still serve specific AI use cases, particularly in low-cost, embedded systems where you want to replace human-written logic with an AI-driven solution. Real-world examples show how predictive maintenance and fault classification for industrial machinery can be effectively implemented on older hardware using AI techniques. For applications where you don’t need high-end processing, STM32 offers a reliable and economical platform for edge computing.
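A key reason models fit on such constrained MCUs is 8-bit quantization: weights shrink fourfold and floating-point math becomes integer math, which these chips handle well. The snippet below illustrates the affine quantization scheme TensorFlow Lite uses (q = round(x / scale) + zero_point, clipped to int8); the example values are purely illustrative:

```python
import numpy as np

def quantize(x, scale, zero_point):
    """Affine int8 quantization as used by TensorFlow Lite."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    """Map int8 values back to approximate floats."""
    return (q.astype(np.int32) - zero_point) * scale

# A handful of example weights (illustrative, not from a real model).
weights = np.array([-0.51, -0.02, 0.0, 0.27, 0.49], dtype=np.float32)

# Map the observed float range onto the 256 representable int8 values.
scale = (weights.max() - weights.min()) / 255.0
zero_point = int(round(-128 - weights.min() / scale))

q = quantize(weights, scale, zero_point)
recovered = dequantize(q, scale, zero_point)
print(np.max(np.abs(recovered - weights)))  # worst-case error is about scale / 2
```

The rounding error stays within half a quantization step, which is why many classification and anomaly-detection models lose little accuracy when quantized for MCU deployment.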
Another accessible option for experimenting with AI at the edge is Raspberry Pi. While not specifically designed for machine learning, the Raspberry Pi offers a flexible and affordable platform for various computing tasks. The newest Raspberry Pi models, including the Raspberry Pi 5, provide enhanced capabilities, such as a PCIe port for connecting specialized accelerators like the Coral PCIe Accelerator.
With its powerful CPU and GPU, the Raspberry Pi can run lightweight machine learning models, and you can use libraries like TensorFlow Lite to deploy models efficiently. This makes Raspberry Pi a versatile choice for anyone looking to build edge AI projects, whether for smart home applications, robotics, or industrial IoT.
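Before committing to a model, it can help to gauge how much raw compute a board actually delivers. The back-of-envelope benchmark below times repeated matrix multiplications (the dominant operation in dense and convolutional layers) and reports an approximate throughput; the matrix sizes are arbitrary stand-ins for one layer of a small classifier:

```python
import time
import numpy as np

# Dimensions loosely resembling one dense layer of a small classifier (illustrative).
a = np.random.rand(256, 1024).astype(np.float32)
b = np.random.rand(1024, 256).astype(np.float32)

runs = 50
start = time.perf_counter()
for _ in range(runs):
    out = a @ b
elapsed = time.perf_counter() - start

# Each multiply-accumulate counts as two floating-point operations.
flops = 2 * 256 * 1024 * 256 * runs
print(f"~{flops / elapsed / 1e9:.1f} GFLOP/s over {runs} matmuls")
```

Running the same script on a laptop and on the Pi gives a quick, rough sense of the slowdown to expect before you profile a real model with TensorFlow Lite's own benchmarking tools.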
Additionally, the Raspberry Pi ecosystem has extensive community support, making it an ideal platform for beginners and hobbyists who want to experiment with AI and machine learning.
As AI continues to evolve, it’s clear that edge computing is a growing area of interest, enabling AI-powered applications to run on small, efficient devices. Whether you’re working with Nvidia’s Jetson series, Google’s Coral Dev Board Micro, or even older STM32 microcontrollers, there are a variety of platforms available to suit different use cases.
For companies or developers who need powerful, real-time AI processing, solutions like the Nvidia Jetson Nano or Coral Dev Board Micro offer substantial computing power with low power consumption. On the other hand, those working in more cost-sensitive, embedded environments may find great value in deploying AI models on traditional microcontrollers like the STM32, or on single-board computers like the Raspberry Pi.
The exciting thing is that AI and ML are no longer limited to cloud-based servers or high-performance devices. Edge computing is paving the way for AI to be deployed on a range of devices with diverse capabilities, unlocking new possibilities for applications in IoT, automation, healthcare, and more.
In the years to come, we can expect significant advancements in both hardware and software for edge computing, further expanding the possibilities for AI applications on embedded systems. As technology continues to evolve, these platforms will become even more powerful, cost-effective, and accessible to developers around the world.