PyTorch Profiler (Utilities)#
Evaluator#
- archai.discrete_search.evaluators.pt_profiler_utils.pt_profiler_eval.profile(model: Module, forward_args: List[Any] | None = None, forward_kwargs: Dict[str, Any] | None = None, num_warmups: int | None = 1, num_samples: int | None = 1, use_cuda: bool | None = False, use_median: bool | None = False, ignore_layers: List[str] | None = None) Dict[str, float | int] [source]#
Profile a PyTorch model.
Outputs FLOPs, MACs, number of parameters, latency and peak memory.
- Parameters:
model – PyTorch model.
forward_args – model.forward() arguments used for profiling.
forward_kwargs – model.forward() keyword arguments used for profiling.
num_warmups – Number of warmup runs before profiling.
num_samples – Number of runs after warmup.
use_cuda – Whether to use CUDA instead of CPU.
use_median – Whether to use the median instead of the mean when aggregating memory and latency across runs.
ignore_layers – List of layer names that should be ignored during profiling.
- Returns:
FLOPs, MACs, number of parameters, latency (seconds) and peak memory (bytes).
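The warmup-then-sample aggregation controlled by num_warmups, num_samples, and use_median can be illustrated with a minimal, stdlib-only sketch of the measurement loop. This is not Archai's implementation; measure_latency and the timed callable are hypothetical:

```python
import statistics
import time
from typing import Callable


def measure_latency(fn: Callable[[], None], num_warmups: int = 1,
                    num_samples: int = 1, use_median: bool = False) -> float:
    """Time `fn` after warmup runs; aggregate with mean or median."""
    # Warmup runs are discarded so one-time costs (allocation,
    # caching, kernel compilation) do not skew the measurement.
    for _ in range(num_warmups):
        fn()
    samples = []
    for _ in range(num_samples):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    # The median is more robust to outliers such as OS scheduling hiccups.
    return statistics.median(samples) if use_median else statistics.mean(samples)


latency = measure_latency(lambda: sum(range(10_000)),
                          num_warmups=2, num_samples=5, use_median=True)
```

With use_median=False the same samples are averaged with the arithmetic mean, which is more sensitive to a single slow run.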
Hooks#
- archai.discrete_search.evaluators.pt_profiler_utils.pt_profiler_hooks.enable_functional_hooks() None [source]#
Enables functional API profiler hooks.
- archai.discrete_search.evaluators.pt_profiler_utils.pt_profiler_hooks.disable_functional_hooks() None [source]#
Disables functional API profiler hooks.
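Enabling hooks on a functional API generally amounts to wrapping the target functions and restoring the originals on disable. A self-contained sketch of that wrap/restore pattern, using the standard-library math module as a stand-in (the counters and helper names here are hypothetical, not Archai's actual hook machinery):

```python
import math

_originals = {}   # name -> original function, used to undo the patch
call_counts = {}  # name -> number of times the wrapped function ran


def enable_hooks(namespace, names):
    """Wrap each named function in `namespace` so its calls are counted."""
    for name in names:
        original = getattr(namespace, name)
        _originals[name] = original

        def wrapped(*args, _orig=original, _name=name, **kwargs):
            call_counts[_name] = call_counts.get(_name, 0) + 1
            return _orig(*args, **kwargs)

        setattr(namespace, name, wrapped)


def disable_hooks(namespace):
    """Restore the original, unwrapped functions."""
    for name, original in _originals.items():
        setattr(namespace, name, original)
    _originals.clear()


enable_hooks(math, ["sqrt"])
math.sqrt(4.0)
math.sqrt(9.0)
disable_hooks(math)  # math.sqrt is the original again; counting stops
```

Pairing enable and disable in a try/finally (or a context manager) keeps the patched namespace from leaking if profiling raises.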
Model#
- class archai.discrete_search.evaluators.pt_profiler_utils.pt_profiler_model.ProfilerModel(model: Module)[source]#
Prepare a model to be used with profiling.
- start(ignore_layers: List[str] | None = None) None [source]#
Start profiling.
- Parameters:
ignore_layers – Layers to be ignored when profiling.
- get_flops() int [source]#
Get the model’s number of FLOPs.
- Returns:
Number of floating point operations.
- get_macs() int [source]#
Get the model’s number of MACs.
- Returns:
Number of multiply-accumulate operations.
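As a rough sanity check for these counts: a fully connected layer performs in_features × out_features multiply-accumulate operations per sample, and each MAC is conventionally counted as two FLOPs (one multiply plus one add). A hypothetical back-of-the-envelope helper, not part of this API:

```python
def dense_layer_counts(in_features: int, out_features: int,
                       batch_size: int = 1) -> dict:
    """Analytic MAC/FLOP counts for a bias-free fully connected layer."""
    # Each of the out_features outputs needs in_features MACs per sample.
    macs = batch_size * in_features * out_features
    # One MAC = one multiply + one add = 2 FLOPs, by common convention.
    return {"macs": macs, "flops": 2 * macs}


counts = dense_layer_counts(in_features=128, out_features=64, batch_size=8)
```

Comparing such analytic numbers against get_macs() / get_flops() on a toy model is a quick way to confirm the profiler is wired up correctly.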