Build AI@Edge Hardware


These pages offer guidance and best practices for building intelligent edge hardware - hardware that can run AI models, process data, and take actions without a round trip to the cloud.

Silicon selection

The first step in designing AI hardware for the edge is to work with a silicon provider to determine what types of boards, SoCs, and MCUs the solution will need. A good starting point is to prototype the solution with a hardware development kit from a silicon partner.

Select Operating System

Read about operating system options for IoT Edge devices.

Hardware acceleration

A key consideration when selecting and building hardware is ensuring that it can run AI models locally, rather than relying on cloud connectivity. A core value proposition of AI hardware at the edge is that the model runs on specialized local hardware, which allows fast or real-time inferencing on the device. For example, in a Vision AI solution you may want inferencing to occur in real time on every frame the camera captures. In that scenario, while training and managing the model is still best suited to the cloud, the model itself can be offloaded to dedicated or hardware-accelerated components if you select the right parts for the edge hardware solution. This acceleration is often included in the development kit as a DSP or FPGA, but will likely be referred to by the specific brand name the silicon provider uses.

Read more about Hardware acceleration
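To make the real-time requirement concrete, the sketch below processes a stream of frames against a per-frame time budget derived from the target frame rate. The `run_inference` stub and the 30 fps target are illustrative assumptions; on real edge hardware that call would be dispatched to the DSP, FPGA, or other accelerator through the vendor's SDK.

```python
import time

def run_inference(frame):
    # Hypothetical stub: simulates a 5 ms hardware-accelerated inference.
    time.sleep(0.005)
    return {"label": "example", "score": 0.9}

def process_stream(frames, fps=30):
    """Run inference on each frame and count frames that blew the budget."""
    budget = 1.0 / fps  # ~33 ms per frame at 30 fps
    results, missed = [], 0
    for frame in frames:
        start = time.monotonic()
        results.append(run_inference(frame))
        if time.monotonic() - start > budget:
            missed += 1  # inference too slow to keep up with the camera
    return results, missed

results, missed = process_stream([b"frame"] * 3)
```

If `missed` is consistently non-zero on your target hardware, either the model needs further optimization or the inferencing is not actually being offloaded to the accelerator.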

Hardware considerations for IoT Edge Runtime

In addition to selecting and developing hardware with AI acceleration in mind, you should also take into account the system requirements to support the IoT Edge Runtime, which enables cloud connectivity and management for the device. As a best practice, we recommend running your application in a container, so ensure the hardware supports a container engine. For detailed information on system requirements and containers with the IoT Edge runtime, see the Platform Support documentation.
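A simple pre-flight check along these lines can verify that a container engine is present before the runtime is installed. The engine names below are illustrative assumptions; consult the Platform Support documentation for the engines actually supported on your OS.

```python
import shutil

def find_container_engine(candidates=("docker", "podman")):
    """Return (name, path) for the first container engine found on PATH,
    or None if no candidate is installed. Candidate names are examples,
    not an authoritative list of supported engines."""
    for name in candidates:
        path = shutil.which(name)
        if path:
            return name, path
    return None

engine = find_container_engine()
```

On a device that fails this check, the hardware or OS image should be revisited before proceeding with the IoT Edge runtime.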

Enable ONNX

ONNX is an open format to represent both deep learning and traditional models. With ONNX, AI developers can more easily move models between state-of-the-art tools and choose the combination that is best for them.

See instructions for enabling ONNX runtime in ONNX runtime GitHub.
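ONNX Runtime surfaces hardware acceleration through pluggable "execution providers," and a common pattern is to request them in order of preference with a CPU fallback. The sketch below shows only that selection logic, so it runs without ONNX Runtime installed; the provider names mirror the ones ONNX Runtime uses, and in real code you would obtain the available list from onnxruntime.get_available_providers() and pass the chosen list to onnxruntime.InferenceSession(model_path, providers=...).

```python
# Preference order: most specialized accelerator first, CPU last.
PREFERRED = [
    "TensorrtExecutionProvider",
    "CUDAExecutionProvider",
    "OpenVINOExecutionProvider",
    "CPUExecutionProvider",
]

def choose_providers(available, preferred=PREFERRED):
    """Return the preferred providers that the installed runtime supports,
    falling back to CPU if none match."""
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

providers = choose_providers(["CPUExecutionProvider"])
```

Logging the chosen provider list at startup is a cheap way to confirm the device is actually using its accelerator rather than silently falling back to CPU.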

Plan for deploying devices at scale

When using development kits or deploying prototype devices, you may be setting up each device individually, which would be difficult to manage once you are ready to deploy at scale. When you're ready to deploy at scale, you should leverage technologies such as the Azure IoT Hub Device Provisioning Service (DPS) that ease deployment and ensure that devices are production ready. For more information, consult the following resources.
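As a sketch of what zero-touch provisioning looks like in practice, DPS symmetric-key group enrollments derive a unique key per device by computing an HMAC-SHA256 of the device's registration ID, keyed with the enrollment group's master key. The values below are dummy placeholders, and you should verify the exact derivation against the DPS documentation for your SDK version.

```python
import base64
import hashlib
import hmac

def derive_device_key(group_key_b64, registration_id):
    """Derive a per-device key from a base64-encoded group master key
    and the device's registration ID (DPS symmetric-key group enrollment)."""
    group_key = base64.b64decode(group_key_b64)
    digest = hmac.new(group_key, registration_id.encode("utf-8"),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

# Dummy group key for illustration only -- never hard-code real keys.
dummy_group_key = base64.b64encode(b"dummy-master-key").decode("ascii")
device_key = derive_device_key(dummy_group_key, "edge-device-001")
```

Because each device key is derived rather than stored centrally, a factory can stamp devices with only a registration ID, and DPS attests them at first boot without per-device manual setup.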

Security considerations

Both for security and for deployment at scale, we leverage technologies such as TPMs and HSMs. For more information, see Securing Azure IoT Edge.


Finally, when you’re ready to certify your device for Azure IoT Edge, read more about certification. Azure IoT certified devices can be found in the Azure Device Catalog, improving their discoverability among potential solution builders and end users.