Embedded machine learning describes the execution of machine learning on such edge devices, whereas the term tiny machine learning (TinyML) is used for systems built on microcontrollers with particularly limited memory and computational capabilities, enabling battery-powered, off-grid applications. [1] A wide range of computing-power classes exists, with fluid transitions between them.
Compared with classic centralized, server-side AI, edge AI can be challenging, but it also offers several benefits:
Reduced Latency
Real-time applications require inference directly on the edge system: a server connection might not be available at any given time, or may introduce unacceptable latency. The main challenge is evaluating complex problems with the constrained memory and computing power of edge devices while still meeting the real-time requirements of the application.
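One common way to fit inference into constrained memory is 8-bit quantization of model weights and activations. The following is a minimal sketch (all names and scale values are assumptions, not a specific library's API) of a single quantized dense layer, the kind of workload a TinyML device runs locally instead of waiting for a server round-trip:

```python
import numpy as np

def quantize(x, scale):
    """Map float values to int8 with a fixed scale (4x smaller than float32)."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dense_int8(x_q, w_q, x_scale, w_scale):
    """Integer matrix multiply; rescale the int32 accumulator back to float."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32)
    return acc * (x_scale * w_scale)

rng = np.random.default_rng(0)
w = rng.standard_normal((16, 4)).astype(np.float32)  # toy layer weights
x = rng.standard_normal(16).astype(np.float32)       # toy input vector

x_scale, w_scale = 0.05, 0.05  # illustrative fixed quantization scales
y_q = dense_int8(quantize(x, x_scale), quantize(w, w_scale), x_scale, w_scale)
y_f = x @ w  # full-precision reference

print(np.max(np.abs(y_q - y_f)))  # quantization error remains small
```

The integer arithmetic maps well onto microcontroller instruction sets, and storing weights as int8 cuts the memory footprint by a factor of four relative to float32.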
Environmental Impact
Edge AI can help reduce the environmental impact of AI. If the edge device processes the data directly, energy-intensive data transmission is not required. Edge devices can be programmed to transmit raw data to online storage only if it is new or unexpected, or to transmit only the relevant features of the data.
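A transmit-only-if-unexpected policy can be as simple as a running statistical check. The sketch below (class name, threshold, and sensor values are illustrative assumptions) keeps a constant-memory running mean and standard deviation of a sensor stream and flags a reading for upload only when it deviates strongly from what has been seen so far:

```python
class TransmitGate:
    def __init__(self, k=3.0):
        self.k = k        # deviation threshold in standard deviations
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0     # sum of squared deviations (Welford's method)

    def update(self, x):
        """Return True if x is unexpected and should be transmitted."""
        unexpected = False
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            unexpected = std > 0 and abs(x - self.mean) > self.k * std
        # Welford's online update keeps memory use constant.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return unexpected

gate = TransmitGate(k=3.0)
readings = [20.1, 20.3, 19.9, 20.2, 20.0, 35.0]  # last value is an outlier
sent = [x for x in readings if gate.update(x)]
print(sent)  # only the outlier 35.0 is uploaded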
Data Privacy
The privacy of sensitive data can also be enhanced by using edge AI. When raw data is not transmitted to a central server, it remains local on each edge device, which considerably reduces security risks. However, the local datasets are often not identically distributed, and each edge device may exhibit individual statistical characteristics.
It is nevertheless possible to train machine learning models on the local edge devices using federated learning algorithms. Here, the models are trained locally and only the model parameters are sent to a global server, ensuring data privacy while still gaining knowledge from each edge device.
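The train-locally, average-centrally loop can be sketched with federated averaging (FedAvg). In this minimal example (the data, model, and hyperparameters are assumed for illustration), each client fits a linear model on its private data, and only the weight vectors, never the raw samples, reach the server:

```python
import numpy as np

def local_train(w, X, y, lr=0.1, steps=50):
    """A few local gradient-descent steps on one client's private data."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

# Three clients, each holding its own private dataset.
clients = []
for _ in range(3):
    X = rng.standard_normal((40, 2))
    y = X @ true_w + 0.01 * rng.standard_normal(40)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(5):  # communication rounds
    local = [local_train(w_global, X, y) for X, y in clients]
    # The server averages parameters only; raw (X, y) stays on the clients.
    w_global = np.mean(local, axis=0)

print(w_global)  # approaches the underlying [2.0, -1.0]
```

Only the two-element weight vector crosses the network each round, while every raw sample remains on its originating device.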