optimize
- onnxscript.optimizer.optimize(model: _ModelProtoOrIr, num_iterations: int = 2, *, onnx_shape_inference: bool = True, stop_if_no_change: bool = True, input_size_limit: int = 8192, output_size_limit: int = 262144, inline: bool = True) → _ModelProtoOrIr
Optimizes a model.
- Parameters:
model – The model to be optimized.
num_iterations – Number of times the optimization loop is repeated.
onnx_shape_inference – Applies node-level shape inference as part of optimization.
input_size_limit – Constant folding is not applied to an op if any of its inputs is larger than this size. Does not apply to special ops like Shape() and Size().
output_size_limit – A foldable op is not rewritten into a Constant op if its output tensor is larger than this size.
stop_if_no_change – Stop the optimization loop if no change is detected in an iteration.
inline – If True, inlines all functions in the model.
- Returns:
The optimized model. If the input was a ModelProto, the output is also a ModelProto; if the input was an ir.Model, the output is also an ir.Model.
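The snippet below is a minimal usage sketch based on the signature above; it assumes onnx and onnxscript are installed, and "model.onnx" is a placeholder path for an existing model file.

```python
import onnx
import onnxscript.optimizer

# Load a ModelProto from disk ("model.onnx" is a placeholder path).
model = onnx.load("model.onnx")

# Optimize with default settings; since the input is a ModelProto,
# the result is also a ModelProto.
optimized = onnxscript.optimizer.optimize(model)

# Keyword-only options can be overridden, e.g. to keep functions
# un-inlined and raise the constant-folding input size limit.
optimized = onnxscript.optimizer.optimize(
    model,
    num_iterations=2,
    inline=False,
    input_size_limit=16384,
)

onnx.save(optimized, "model_optimized.onnx")
```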