The STMicroelectronics X-LINUX-AI software environment provides tools to perform inference on embedded systems using neural network models. Examples of applications that typically use neural network inference include object/pattern recognition, gesture control, voice processing, and sound monitoring.
X-LINUX-AI includes support for two standard inference engines: TensorFlow Lite and ONNX Runtime.
Include X-LINUX-AI packages in Digi Embedded Yocto
Add the meta-st-stm32mpu-ai layer to your
conf/bblayers.conf configuration file if it isn’t there already:
+ /usr/local/dey-4.0/sources/meta-st-stm32mpu-ai \
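After the edit, the `BBLAYERS` variable in `conf/bblayers.conf` should contain the new layer alongside the layers already in your build. A minimal sketch is shown below; the `/usr/local/dey-4.0` install path comes from the line above, and `meta-digi` is only an illustrative placeholder for the layers your build already uses:

```
# conf/bblayers.conf (sketch; your existing layer list will differ)
BBLAYERS ?= " \
    /usr/local/dey-4.0/sources/meta-digi \
    /usr/local/dey-4.0/sources/meta-st-stm32mpu-ai \
"
```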
Edit your conf/local.conf file to include one of the X-LINUX-AI package groups in your Digi Embedded Yocto image:
IMAGE_INSTALL:append = " packagegroup-x-linux-ai"
There are a few package groups to choose from:
packagegroup-x-linux-ai-tflite includes TensorFlow Lite packages and examples, for use with the ConnectCore MP15’s CPU
packagegroup-x-linux-ai-tflite-edgetpu includes TensorFlow Lite packages and examples, for use with a Coral USB Accelerator
packagegroup-x-linux-ai-onnxruntime includes ONNX Runtime packages and examples
packagegroup-x-linux-ai includes all of the above
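For example, if you only need on-CPU inference with TensorFlow Lite, you can append just that group instead of the full `packagegroup-x-linux-ai`:

```
# conf/local.conf — pull in only the TensorFlow Lite group
IMAGE_INSTALL:append = " packagegroup-x-linux-ai-tflite"
```

Note the leading space inside the quotes: `:append` does not insert one for you, so omitting it would merge the group name into the previous item in `IMAGE_INSTALL`.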
Including any of these package groups increases the size of the rootfs image significantly. To minimize the increase in image size, select a subset of their packages depending on your needs. You may also need to remove additional packages for the rootfs image to fit on your device.
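To trim the image, you can install individual recipes from the layer instead of a whole package group. The recipe name below is an example only; list what the layer actually provides in your release before appending anything:

```
# From your build environment, list the layer's recipes
# (pattern and recipe names are examples; verify against your layer version):
bitbake-layers show-recipes "*tflite*"

# conf/local.conf — install a single recipe instead of a whole group:
IMAGE_INSTALL:append = " tensorflow-lite"
```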
See ST’s X-LINUX-AI OpenSTLinux Expansion Package article for more information on X-LINUX-AI.