Pairing DirectML with ONNX Runtime is often the most straightforward way to bring hardware-accelerated AI to users at scale. The following is a general guide for using this combination.
Once you have an .onnx model, use Olive with the DirectML execution provider to optimize it. Optimization can yield substantial performance improvements, and the optimized model can be deployed across the Windows hardware ecosystem.
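As a rough sketch, an Olive optimization run is typically driven by a JSON workflow configuration. The exact schema varies across Olive versions, and the pass name, model path, and output directory below are illustrative assumptions rather than a definitive recipe:

```json
{
  "input_model": {
    "type": "ONNXModel",
    "model_path": "model.onnx"
  },
  "passes": {
    "optimize": {
      "type": "OrtTransformersOptimization"
    }
  },
  "engine": {
    "output_dir": "optimized"
  }
}
```

A configuration like this is then passed to Olive's workflow runner, which writes the optimized .onnx model to the output directory for deployment with ONNX Runtime.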
Although DirectML is commonly accessed through higher-level frameworks such as ONNX Runtime, developers familiar with C++ can also call it directly. It exposes a familiar native C++, nano-COM programming interface and workflow in the style of DirectX 12.