What is ONNX?
ONNX is an open format to represent both deep learning and traditional models. With ONNX, AI developers can more easily move models between state-of-the-art tools and choose the combination that is best for them. ONNX is developed and supported by a community of partners such as Microsoft, Facebook and AWS.
ONNX is widely supported and can be found in many frameworks, tools, and hardware platforms. Enabling interoperability between different frameworks and streamlining the path from research to production helps increase the speed of innovation in the AI community. ONNX helps solve the challenge of hardware dependency for AI models and enables deploying the same model to several hardware-accelerated targets.
- More information about ONNX is available on the ONNX Runtime web pages
- Microsoft's blog post announcing ONNX Runtime 1.0
- Watch the Channel 9 video about ONNX
Get started with ONNX
Train an ONNX model using Azure Machine Learning.
Convert your model to ONNX
- Use the ONNX Converter Image to convert models from other major frameworks to ONNX. Supported frameworks are currently CNTK, Core ML, Keras, scikit-learn, TensorFlow, and PyTorch; a conversion sketch for PyTorch follows below.
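For PyTorch models, the export can also be done directly with `torch.onnx.export`. A minimal sketch, assuming PyTorch and torchvision are installed; the model, output file name, and input shape are illustrative placeholders:

```python
# A minimal sketch of exporting a PyTorch model to ONNX.
# The model (torchvision's resnet18), file name, and input shape are
# illustrative placeholders.
import torch
import torchvision

model = torchvision.models.resnet18()  # weights do not matter for demonstrating the export
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # example input used to trace the graph
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",                       # output file (placeholder)
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
    dynamic_axes={"input": {0: "batch"}},  # allow a variable batch size
)
```

The resulting `.onnx` file can then be loaded by any ONNX-compatible runtime or converter.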
Get started with examples
Here is a list of product examples that use ONNX with tested combinations of hardware and AI models; a generic inference sketch follows the list.
Run ONNX on NVIDIA Jetson Nano
Get started with the ONNX framework on an NVIDIA Jetson Nano
WinML on the edge
Run Windows ML inferencing in an Azure IoT Edge module running on Windows
Run ONNX and Intel® OpenVINO™
Get started with the ONNX framework, OpenVINO™, and Intel®-powered hardware
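Common to these examples is loading the exported model with ONNX Runtime and selecting an execution provider for the target hardware. A minimal sketch, assuming the `onnxruntime` Python package is installed; the file name, input shape, and provider list are illustrative, and the OpenVINO™ or TensorRT providers require the corresponding ONNX Runtime builds:

```python
# A minimal ONNX Runtime inference sketch. "model.onnx" and the input shape
# are placeholders; the provider list depends on the hardware target, for
# example OpenVINOExecutionProvider on Intel hardware, or
# TensorrtExecutionProvider / CUDAExecutionProvider on NVIDIA devices
# such as the Jetson Nano.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape must match the model
outputs = session.run(None, {input_name: dummy})
print("output shape:", outputs[0].shape)
```

Swapping the execution provider list is the only change needed to retarget the same model file to different accelerators.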
Want to publish your own example? Send email to aiedge@microsoft.com.