
Recorded Webinar

AI at the Edge: Machine Learning in Embedded Systems

Oct 07, 2021 | Length: 39:05

Are you looking to integrate artificial intelligence (AI) into your next product design? How about machine learning (ML) and deep learning (DL)? You can start by learning about the differences in these three concepts, and how each model works, as well as the solutions available today to enable you to rapidly integrate these technologies into your designs.

Whether you are just beginning to learn about these powerful technologies, or you are planning a specific project, this recorded webinar will accelerate your journey into AI, ML and DL. You will also learn how Digi embedded development solutions – the Digi XBee® ecosystem and the Digi ConnectCore® embedded system on module platform – can support your goals.


Follow-up Webinar Q&A

Thank you again for attending our session on AI at the Edge: Machine Learning in Embedded Systems. Here are the questions that followed the presentation and their answers. If you have additional questions, be sure to reach out.

What additional resources, processor, memory, etc., are required to implement machine learning in an embedded system?

It really depends. Machine learning applications cover a whole spectrum. As we've seen, there are simpler applications, such as the earlier example of monitoring a few sensors for changes in vibration patterns to provide predictive maintenance for a construction machine; for that use case, a small microcontroller with very little memory can be sufficient. And there are high-end applications, such as detecting objects in high-resolution video streams, which obviously need a lot of compute power and memory bandwidth to shuffle the data.
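To make the low end of that spectrum concrete, here is a minimal sketch of a vibration check simple enough for a small microcontroller; the baseline and tolerance values are illustrative assumptions, not figures from the webinar.

# Minimal sketch (illustrative values): a lightweight vibration check of the
# kind that fits on a small microcontroller. It computes the RMS of a short
# window of accelerometer samples and flags deviation from a learned baseline.
import math

BASELINE_RMS = 0.12   # learned during normal operation (assumed value)
TOLERANCE = 0.35      # relative deviation treated as anomalous (assumed value)

def window_rms(samples):
    """Root-mean-square of one window of vibration samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def is_anomalous(samples):
    """True if the window deviates too far from the learned baseline."""
    rms = window_rms(samples)
    return abs(rms - BASELINE_RMS) / BASELINE_RMS > TOLERANCE

# Example: one window of accelerometer readings (values made up)
window = [0.10, 0.14, 0.11, 0.13, 0.12, 0.15, 0.09, 0.12]
print("anomaly" if is_anomalous(window) else "normal")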

A lot of machine learning applications today originate from cloud development, where plenty of compute resources are available. Developers there rarely had to worry about compute resources, in sharp contrast to embedded devices. Moving that functionality to the edge, where all that compute performance is not available, is a tricky task. With machine learning at the edge, more attention needs to be paid to the models used and to the optimizations for resource-constrained embedded devices.

There are vendors like our partner Au-Zone, which we saw in the demo as well, who are experts in this area. They provide an embedded inference engine and model optimization tools that prepare a model to run on these constrained embedded devices with a low memory footprint and fast inference times, even when little compute power is available.
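As a generic illustration of that kind of optimization (not Au-Zone's proprietary tooling), the sketch below applies post-training integer quantization with TensorFlow Lite, assuming an existing Keras model saved under model/ and a representative dataset for calibration; the input shape is a placeholder.

# Generic illustration of model optimization for the edge: post-training
# integer quantization with TensorFlow Lite. Assumes a SavedModel in "model/".
import tensorflow as tf

def representative_dataset():
    # Yield typical inputs so the converter can calibrate integer ranges
    # (placeholder input shape).
    for _ in range(100):
        yield [tf.random.normal([1, 96, 96, 1])]

converter = tf.lite.TFLiteConverter.from_saved_model("model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full integer quantization so the model can run on integer-only cores.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)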

And we saw that example of voice recognition. Just to highlight it again, we are going to provide such a solution as part of our ConnectCore SoM offering, and it is optimized for embedded devices. So you don't need a fancy, costly neural network processing unit (NPU). You can run a voice recognition application supporting a vocabulary of thousands of words on a single Cortex-A core at less than 50% load, whereas you might need an NPU to do the same thing with a non-optimized model you build yourself on an open-source machine learning framework.
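For context, here is a minimal sketch of what running an optimized model on the device itself can look like, assuming the tflite_runtime package and a hypothetical quantized keyword-spotting model file; the file name and shapes are placeholders, not the actual Digi/Au-Zone solution.

# Minimal on-device inference sketch (hypothetical model file and shapes),
# e.g. on a single Cortex-A core of an embedded Linux system.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="keyword_model_int8.tflite", num_threads=1)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Placeholder audio features (e.g. MFCCs) shaped to match the model input.
features = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], features)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]
print("most likely keyword index:", int(np.argmax(scores)))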

Is it possible to have deep learning for text data like processing poetry in order to identify different genres?

It's certainly possible to process text and classify text elements, so that's definitely possible with machine learning. And there are plenty of use cases for that; for example, spam filters process the text in emails and classify it as spam or non-spam. It's not quite poetry in those emails, but it's related, I guess.
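As a small illustration of text classification, assuming scikit-learn and a toy labeled dataset, the same pattern used for spam filtering could in principle be applied to genres of poetry.

# Toy text-classification sketch (illustrative data, assuming scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Shall I compare thee to a summer's day",   # poetry
    "The woods are lovely, dark and deep",      # poetry
    "Click here to claim your free prize now",  # spam
    "Limited offer, act now to win money",      # spam
]
labels = ["poetry", "poetry", "spam", "spam"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["Win a free vacation, click now"]))  # -> ['spam']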

How do artificial intelligence and machine learning in a device impact security?

There are security threats targeting machine learning applications, if that's the question. For example, attackers can modify the inputs to a machine learning model with certain techniques to mislead it into misclassifying an object, such as a road sign in a traffic application. With the right manipulations, the model will even report a high confidence for the wrong classification, and that is certainly a security issue and also a safety issue.

Such an attack is called an adversarial example attack, and there are methods to harden the model against such attacks, which can be applied during model training and development. Other security issues with machine learning include model theft and model inversion attacks. Digi provides the countermeasures available from NXP and the eIQ Machine Learning tools to address some of these machine learning-specific security issues.
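For illustration, here is a sketch of the fast gradient sign method, one common way such adversarial examples are crafted; it assumes a trained Keras classifier and an input image with its correct label, and it is not part of the NXP eIQ tooling.

# Fast gradient sign method (FGSM) sketch: compute a small input perturbation
# that pushes a classifier away from the correct label. Assumes a trained
# Keras model `model`, a batched input `image`, and an integer `label`.
import tensorflow as tf

def fgsm_perturbation(model, image, label, epsilon=0.01):
    """Return a small perturbation that increases the loss for `label`."""
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = loss_fn(label, prediction)
    gradient = tape.gradient(loss, image)
    return epsilon * tf.sign(gradient)

# adversarial = tf.clip_by_value(image + fgsm_perturbation(model, image, label), 0.0, 1.0)
# Adversarial training feeds such perturbed inputs back into training to harden the model.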

But general system security is also important, and other security features such as secure boot, an encrypted file system, protected ports, and tamper detection need to be enabled as well. Digi TrustFence, the complete security framework we provide as part of Digi ConnectCore SoMs, covers this. All the features I just mentioned are fully integrated into the bootloader, the kernel, and the root file system, ready to use without becoming a security expert or spending many weeks enabling them, and they work across all the hardware and software layers.

Will the presentation be recorded for later viewing?

Yes, absolutely. The recording will be available for later playback on the BrightTALK platform, and we will also post the link on our website, www.digi.com. In the Resources section you will find a webinars area, and that is where we post all of our webinars for later viewing.

How do you validate the accuracy of the machine learning model?

That is done during the training phase. Usually you have a big set of data to train the model, and then you set aside different data to do the actual testing and verify the accuracy of the model. Once you're happy with the accuracy, you're done. If you're not, you need more training data: feed it into the model, train it further, test again with different data, and iterate that process until you're happy with the accuracy. That's the process at a high level.
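In code, that validation step often looks like the following sketch, assuming scikit-learn and a stand-in dataset: hold out data the model never saw during training and measure accuracy on it.

# Train/test split validation sketch (stand-in dataset, assuming scikit-learn).
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # stand-in for your own data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {accuracy:.3f}")
# If the accuracy is too low, gather more data or adjust the model and iterate.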

There is another question about examples available for learning from many channels of low-speed signals.

I'm not sure I got the question right, but there are two ways. You can build a model from scratch and do it all on your own, which requires lots and lots of data. But typically you would use a pre-trained model and then apply something called transfer learning. There are pre-trained models available for image recognition, voice recognition, text, and many other things. You would find a model that covers your use case and then apply transfer learning to tweak or modify it so that it serves your exact use case.
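Here is a minimal sketch of transfer learning, assuming TensorFlow/Keras and a pre-trained MobileNetV2 base; the class count and input size are assumptions for illustration.

# Transfer learning sketch: freeze a pre-trained base and train a small new
# head on your own classes (class count and input size are assumed).
import tensorflow as tf

NUM_CLASSES = 3  # classes for your own use case (assumption)

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the pre-trained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # your labeled data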

How was the latency of the wake word measured on Cortex-M core? Is it possible to configure a wake word? Does it require additional learning?

So, in that scenario, the wake word can be configured. You can define your own wake words and train them into the model: you specify which commands you want to recognize, record them with different speakers, and the engine learns to recognize those words. The model is then transferred and optimized to run efficiently on the embedded device's Cortex-M core. The latency of the wake word was not terribly important, I'd say. This is humans interacting with a machine, so a few extra milliseconds are really not an issue. Very low latency wasn't required, but it still had to feel snappy to the people using it. In the end it was below a second, so switching on the machine felt seamless.
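One plausible way to measure that latency on the target (an assumption, not necessarily how it was measured for the demo) is to time repeated inference calls, for example with tflite_runtime; the model file name and input shape below are placeholders.

# Latency measurement sketch: time repeated inference calls on the target
# device and report the average (hypothetical model file).
import time
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="keyword_model_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

features = np.zeros(inp["shape"], dtype=inp["dtype"])
timings = []
for _ in range(100):
    interpreter.set_tensor(inp["index"], features)
    start = time.perf_counter()
    interpreter.invoke()
    timings.append(time.perf_counter() - start)

print(f"average inference time: {1000 * sum(timings) / len(timings):.1f} ms")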

Do you have any examples of using FPGAs for machine learning in the embedded area? How is this different in terms of requirements and performance?

Sorry, I can't answer that one; I don't have experience on the FPGA side. I'm sure an FPGA can be used to mimic a neural network processing engine, and I'm sure there is IP out there to run that functionality in an FPGA. But you already have all those cores available: in today's embedded SoCs (systems on chips) you often have a GPU you're not using, multiple Cortex-A cores, and frequently a companion Cortex-M core as well. With these highly integrated SoCs you have plenty of cores and plenty of options, so using an external FPGA would just add cost and design complexity. But if it's required, I'm sure there are options to run neural network accelerators in an FPGA.


Related Content

IoT and the Supply Chain: How Machine Learning Eases Bottlenecks (blog)
How Is EV Infrastructure Developing and Growing in the US? (blog)
Digi Embedded Android and Android Extension Tools (video)
Power Management Techniques in Embedded Systems (blog)
Digi ConnectCore 8M Nano Development Kit Unboxing and Getting Started (video)
Digi ConnectCore 8M Mini (product)
Digi ConnectCore SOM Solutions (PDF)
Machine Learning Demo with Digi ConnectCore and ByteSnap SnapUI (video)
Digi XBee Ecosystem | Everything You Need to Explore and Create Wireless Connectivity (products)
Simplify and accelerate your development with Digi ConnectCore i.MX-based SOMs (recorded webinar)
Digi ConnectCore 8M Nano: Developer Resources, Security, Scalability (blog)
Digi ConnectCore 8M Nano (product)
Build vs Buy: Navigating the Choice (white paper)
