Pairing DirectML with the ONNX Runtime is often the most straightforward way for developers to bring hardware-accelerated AI to their users at scale. The following three steps are a general guide to using this combination:
1. The ONNX format enables you to use ONNX Runtime with DirectML, which provides cross-hardware capabilities.
2. To convert your model to the ONNX format, you can use ONNXMLTools or Olive.
3. Once you have an .onnx model, use Olive powered by DirectML to optimize your model for better inference performance. The optimized model can then be deployed across the Windows hardware ecosystem.
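As a rough sketch of how the optimized model is then consumed, the snippet below creates an ONNX Runtime session that targets the DirectML execution provider. It assumes a DirectML-enabled build of ONNX Runtime and its C/C++ headers; the model path and the device index are placeholders, and exact header names can vary slightly between ONNX Runtime versions.

```cpp
// Minimal sketch: load an Olive-optimized .onnx model with the DirectML execution provider.
// Assumes the ONNX Runtime C++ API and the DML provider factory header are available;
// "model.onnx" is a placeholder path.
#include <onnxruntime_cxx_api.h>
#include <dml_provider_factory.h>

int main()
{
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "dml-sample");

    Ort::SessionOptions options;
    // The DirectML execution provider works with sequential execution and
    // memory patterns disabled.
    options.SetExecutionMode(ExecutionMode::ORT_SEQUENTIAL);
    options.DisableMemPattern();

    // Attach the DirectML execution provider, targeting device 0 (the default adapter).
    Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_DML(options, 0));

    // Create the inference session from the optimized model.
    Ort::Session session(env, L"model.onnx", options);

    // ... build Ort::Value inputs and call session.Run(...) as usual.
    return 0;
}
```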
While DirectML is commonly accessed through frameworks such as the ONNX Runtime, developers familiar with C++ can also use it directly. It exposes a familiar native C++, nano-COM programming interface and workflow in the style of DirectX 12.
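To give a flavor of that DirectX 12-style workflow, here is a minimal sketch that creates a Direct3D 12 device on the default adapter and then a DirectML device on top of it. Error handling is abbreviated, and the feature level and flags shown are just one reasonable choice; linking against d3d12.lib and the DirectML import library is assumed.

```cpp
// Minimal sketch: create a DirectML device on top of a Direct3D 12 device.
// Error handling is reduced to a single helper for brevity.
#include <d3d12.h>
#include <DirectML.h>
#include <wrl/client.h>
#include <stdexcept>

using Microsoft::WRL::ComPtr;

static void ThrowIfFailed(HRESULT hr)
{
    if (FAILED(hr)) throw std::runtime_error("HRESULT failed");
}

int main()
{
    // Create a Direct3D 12 device on the default adapter.
    ComPtr<ID3D12Device> d3d12Device;
    ThrowIfFailed(D3D12CreateDevice(
        nullptr,                    // default adapter
        D3D_FEATURE_LEVEL_11_0,
        IID_PPV_ARGS(&d3d12Device)));

    // Create the DirectML device; operators and bindings are built from here.
    ComPtr<IDMLDevice> dmlDevice;
    ThrowIfFailed(DMLCreateDevice(
        d3d12Device.Get(),
        DML_CREATE_DEVICE_FLAG_NONE,
        IID_PPV_ARGS(&dmlDevice)));

    // From here you would compile DirectML operators, record their dispatches
    // on a D3D12 command list, and execute them on a command queue.
    return 0;
}
```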
We recommend that developers start by looking at our Introduction to DirectML and our DirectML overview.