The NXP eIQ ML (edge intelligence machine learning) software environment provides tools to perform inference on embedded systems using neural network models. The software includes optimizations that leverage the hardware capabilities of the i.MX8M Mini family for improved performance. Examples of applications that typically use neural network inference include object/pattern recognition, gesture control, voice processing, and sound monitoring.
eIQ includes support for four inference engines.
Performance numbers documented by NXP were measured on the i.MX8M Plus, an SoC with a dedicated Neural Processing Unit (NPU). Expect lower performance on the ConnectCore 8M Mini. Differences in CPU speed and memory bus width can also affect performance.
Include eIQ packages in Digi Embedded Yocto
Add the meta-multimedia layer to your conf/bblayers.conf configuration file if it isn’t there already:
/usr/local/dey-4.0/sources/meta-digi/meta-digi-arm \
/usr/local/dey-4.0/sources/meta-digi/meta-digi-dey \
+ /usr/local/dey-4.0/sources/meta-openembedded/meta-multimedia \
"
Edit your conf/local.conf file to include the eIQ package group in your Digi Embedded Yocto image:
IMAGE_INSTALL:append = " packagegroup-imx-ml"
This package group contains all of NXP’s eIQ packages compatible with the ConnectCore 8M Mini.
Including this package group increases the size of the rootfs image significantly. To minimize the increase in image size, select a subset of its packages depending on your needs. See the package group’s recipe for more information on the packages it contains.
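For example, if you only need the TensorFlow Lite runtime, you could install that package by itself instead of the full group. This is a sketch; the package name tensorflow-lite is an assumption based on NXP’s machine learning recipes and may differ in your release:
IMAGE_INSTALL:append = " tensorflow-lite"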
NXP eIQ examples
Overview
The generated image with packagegroup-imx-ml contains the eIQ demos provided by NXP in the eiq-example package.
eIQ examples and source code are provided by NXP, so the exact commands in the following steps may need to be altered slightly. Use them as a reference.
The eIQ examples available in the image are inside the /usr/bin/eiq-examples-git folder:
# ls -l /usr/bin/eiq-examples-git/
drwxr-xr-x 2 root root 4096 Mar 9 2018 dms
-rw-r--r-- 1 root root 4069 Mar 9 2018 download_models.py
drwxr-xr-x 2 root root 4096 Mar 9 2018 face_recognition
drwxr-xr-x 2 root root 4096 Mar 9 2018 gesture_detection
drwxr-xr-x 2 root root 4096 Mar 9 2018 image_classification
drwxr-xr-x 2 root root 4096 Mar 9 2018 object_detection
That folder contains:
- download_models.py: A script that downloads the required TensorFlow Lite models and creates copies of those models converted with Vela for use with the NPU.
- Demo directories: There are multiple demos, and each demo folder contains a Python script to run it.
Setup
The sequence to work with the demos is:
- Download the required models. This is only required once. (The script also converts the downloaded models with Vela.) To download the models, the device must have network connectivity.
- Run the download_models.py script:
# cd /usr/bin/eiq-examples-git
# python3 download_models.py
Downloading gesture recognition model(s) file(s) from https://drive.google.com/uc?export=download&&id=1yjWyXsac5CbGWYuHWYhhnr_9cAwg3uNI
...
Downloading dms iris landmark model(s) file(s) from https://s3.ap-northeast-2.wasabisys.com/pinto-model-zoo/049_iris_landmark/resources.tar.gz
Converting facenet_512_int_quantized.tflite ...
Batch Inference time 3.10 ms, 322.73 inferences/s (batch size 1)
Some relevant notes about the download process:
- The download size is quite large and may take approximately an hour.
- The script converts the downloaded models with Vela, which increases the script’s duration. Your device remains busy during this process.
- The device requires extra space to store all the models.
- If the script is stopped or fails, it restarts the download from the beginning, ignoring any previously downloaded data. Consider editing the script to reuse downloaded data and avoid restarting the full download if it stops or fails.
Once the process is completed, you’ll see the following folders:
- models: Downloaded models.
- vela_models: Converted Vela models.
The downloaded models can be reused; Digi recommends backing up those folders so they can be reused on other devices (see the backup sketch after this list).
- Choose the demo and run it. There is a Python script inside each folder, each with its own parameters. You can run the script with the -h option to check the available parameters.
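As a minimal sketch of the recommended backup, you could archive both folders after the download completes; the archive name and destination below are arbitrary examples:
# cd /usr/bin/eiq-examples-git
# tar -czf /home/root/eiq-models-backup.tar.gz models vela_models
Extracting that archive into the same folder on another device should let you skip the download, assuming the demo scripts only expect the models and vela_models folders to be present.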
Running an example (using the CPU)
As a general rule, enter one of the demo folders, run the Python script inside, and check the help output of the main script.
For instance, to run the object_detection demo, which identifies objects in the camera’s input:
# cd /usr/bin/eiq-examples-git/object_detection
# python3 main.py -h
usage: main.py [-h] [-i INPUT] [-d DELEGATE]
options:
-h, --help show this help message and exit
-i INPUT, --input INPUT
input to be classified
-d DELEGATE, --delegate DELEGATE
delegate path
# python3 main.py -i /dev/video0
[ WARN:0@0.513] global cap_gstreamer.cpp:2784 handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module source reported: Could not read from resource.
[ WARN:0@0.517] global cap_gstreamer.cpp:1679 open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0@0.517] global cap_gstreamer.cpp:1164 isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
rectangle:(223,200),(621,475) label:person
rectangle:(389,387),(544,480) label:chair
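If your camera enumerates as a different device node, pass that node to the -i option instead. This is a sketch that assumes a second camera at /dev/video1; list the available nodes first to confirm:
# ls /dev/video*
# python3 main.py -i /dev/video1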
If you use the regular models for inference on the NPU instead of the ones converted with Vela, the demo automatically converts the model before running the inference, adding a significant delay to the demo’s execution time.
More information
See NXP’s i.MX Machine Learning User’s Guide for more information on eIQ.