DivNAS#

Activations Analyser#

archai.supergraph.algos.divnas.analyse_activations.create_submod_f(covariance: array) Callable[source]#
archai.supergraph.algos.divnas.analyse_activations.get_batch(feature_list, batch_size, i)[source]#
archai.supergraph.algos.divnas.analyse_activations.rbf(x: array, y: array, sigma=0.1) array[source]#

Computes the RBF kernel between two input vectors.
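
This is presumably the standard Gaussian RBF kernel; a minimal sketch (the exact normalization used by `rbf` is an assumption here):

```python
import numpy as np

def rbf_sketch(x: np.ndarray, y: np.ndarray, sigma: float = 0.1) -> float:
    """Gaussian RBF kernel: exp(-||x - y||^2 / (2 * sigma^2))."""
    sq_dist = float(np.sum((x - y) ** 2))
    return float(np.exp(-sq_dist / (2.0 * sigma ** 2)))

x = np.array([1.0, 2.0])
print(rbf_sketch(x, x))  # identical vectors give 1.0; distant vectors approach 0
```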

archai.supergraph.algos.divnas.analyse_activations.compute_brute_force_sol(cov_kernel: array, budget: int) Tuple[Tuple[Any], float][source]#
archai.supergraph.algos.divnas.analyse_activations.compute_correlation(covariance: array) array[source]#
archai.supergraph.algos.divnas.analyse_activations.compute_covariance_offline(feature_list: List[array]) array[source]#

Compute the covariance matrix for high-dimensional features. Each feature has shape (num_samples, feature_dim).
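
A plain-covariance sketch under the stated shape convention (whether `compute_covariance_offline` flattens and centers the features exactly this way is an assumption):

```python
import numpy as np

def covariance_offline_sketch(feature_list):
    """Stack per-op features of shape (num_samples, feature_dim) and
    compute the num_ops x num_ops covariance of their flattened activations."""
    # One row per op: flatten (num_samples, feature_dim) -> (num_samples * feature_dim,)
    flat = np.stack([f.reshape(-1) for f in feature_list])
    return np.cov(flat)  # shape: (num_ops, num_ops)

feats = [np.random.randn(4, 3) for _ in range(5)]
print(covariance_offline_sketch(feats).shape)  # (5, 5)
```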

archai.supergraph.algos.divnas.analyse_activations.compute_rbf_kernel_covariance(feature_list: List[array], sigma=0.1) array[source]#

Compute the RBF kernel covariance for high-dimensional features. feature_list: list of features, each of shape (num_samples, feature_dim). sigma: sigma of the RBF kernel.
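
One plausible reading of this computation, where entry (i, j) is the mean RBF kernel value over all sample pairs from ops i and j (the averaging scheme is an assumption, not confirmed from the source):

```python
import numpy as np

def rbf_kernel_covariance_sketch(feature_list, sigma=0.1):
    """Kernel 'covariance' between ops: mean pairwise RBF kernel value
    between the samples of each pair of feature arrays."""
    n = len(feature_list)
    cov = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            fi, fj = feature_list[i], feature_list[j]
            # Pairwise squared distances between every sample of op i and op j
            d2 = np.sum((fi[:, None, :] - fj[None, :, :]) ** 2, axis=-1)
            cov[i, j] = cov[j, i] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return cov
```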

archai.supergraph.algos.divnas.analyse_activations.compute_euclidean_dist_quantiles(feature_list: List[array], subsamplefactor=1) List[Tuple[float, float]][source]#

Compute quantile distances between feature pairs. feature_list: list of features, each of shape (num_samples, feature_dim).
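
Quantiles of pairwise Euclidean distances are commonly used to choose the RBF sigma (the median heuristic); whether Archai uses them exactly this way is not confirmed here. A sketch for a single feature array:

```python
import numpy as np

def euclidean_dist_quantiles_sketch(features, quantiles=(0.1, 0.5, 0.9)):
    """Return (quantile, distance) pairs over all pairwise sample distances
    for one feature array of shape (num_samples, feature_dim)."""
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    # Keep only distinct pairs (upper triangle, excluding the zero diagonal)
    dists = np.sqrt(d2[np.triu_indices_from(d2, k=1)])
    return [(q, float(np.quantile(dists, q))) for q in quantiles]
```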

archai.supergraph.algos.divnas.analyse_activations.greedy_op_selection(covariance: array, k: int) List[int][source]#
archai.supergraph.algos.divnas.analyse_activations.compute_marginal_gain(y: int, A: Set[int], S: Set[int], covariance: array) float[source]#
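
`greedy_op_selection` and `compute_marginal_gain` together suggest a standard greedy loop over marginal gains. The skeleton below illustrates that pattern; the `diversity_gain` function is purely illustrative (Archai's actual gain formula is not reproduced here):

```python
import numpy as np

def greedy_select_sketch(cov: np.ndarray, k: int, gain_fn):
    """Generic greedy selection: repeatedly add the item with the largest
    marginal gain, as greedy_op_selection presumably does."""
    selected, remaining = [], set(range(cov.shape[0]))
    for _ in range(k):
        best = max(remaining, key=lambda y: gain_fn(y, selected, cov))
        selected.append(best)
        remaining.remove(best)
    return selected

def diversity_gain(y, selected, cov):
    """Illustrative gain only: self-variance minus similarity to what is
    already selected (NOT necessarily Archai's compute_marginal_gain)."""
    return cov[y, y] - sum(cov[y, a] for a in selected)
```

With a kernel covariance as input, this favors ops that are informative yet dissimilar to those already chosen, which matches the diversity motivation of DivNAS.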
archai.supergraph.algos.divnas.analyse_activations.collect_features(rootfolder: str, subsampling_factor: int = 1) Dict[str, List[array]][source]#

Walks the rootfolder for h5py files and loads them into the format required for analysis.

Inputs:

rootfolder: full path to the folder containing h5 files which have activations

subsampling_factor: every nth minibatch will be loaded to keep memory manageable

Outputs:

dictionary with edge name strings as keys and, as values, lists of np.array of shape [num_samples, feature_dim]

archai.supergraph.algos.divnas.analyse_activations.plot_all_covs(covs_kernel, corr, primitives, axs)[source]#
archai.supergraph.algos.divnas.analyse_activations.main()[source]#

Cell#

class archai.supergraph.algos.divnas.divnas_cell.Divnas_Cell(cell: Cell)[source]#

Wrapper cell class for DivNAS-specific modifications.

collect_activations(edgeoptype, sigma: float) None[source]#
update_covs()[source]#
clear_collect_activations()[source]#

Experiment Runner#

class archai.supergraph.algos.divnas.divnas_exp_runner.DivnasExperimentRunner(config_filename: str, base_name: str, clean_expdir=False)[source]#
model_desc_builder() DivnasModelDescBuilder[source]#
trainer_class() Type[ArchTrainer] | None[source]#
finalizers() Finalizers[source]#

Finalizers#

class archai.supergraph.algos.divnas.divnas_finalizers.DivnasFinalizers[source]#
finalize_model(model: Model, to_cpu=True, restore_device=True) ModelDesc[source]#
finalize_cell(cell: Cell, cell_index: int, model_desc: ModelDesc, *args, **kwargs) CellDesc[source]#
finalize_node(node: ModuleList, node_index: int, node_desc: NodeDesc, max_final_edges: int, cov, *args, **kwargs) NodeDesc[source]#

Model Description Builder#

class archai.supergraph.algos.divnas.divnas_model_desc_builder.DivnasModelDescBuilder[source]#
pre_build(conf_model_desc: Config) None[source]#

Hook for performing any setup before the build starts.

build_nodes(stem_shapes: List[List[int | float]], conf_cell: Config, cell_index: int, cell_type: CellType, node_count: int, in_shape: List[int | float], out_shape: List[int | float]) Tuple[List[List[int | float]], List[NodeDesc]][source]#

Rank Finalizer#

class archai.supergraph.algos.divnas.divnas_rank_finalizer.DivnasRankFinalizers[source]#
finalize_model(model: Model, to_cpu=True, restore_device=True) ModelDesc[source]#
finalize_cell(cell: Cell, cell_index: int, model_desc: ModelDesc, *args, **kwargs) CellDesc[source]#
finalize_node(node: ModuleList, node_index: int, node_desc: NodeDesc, max_final_edges: int, cov: array, cell: Cell, node_id: int, *args, **kwargs) NodeDesc[source]#

Div-Based Operators#

class archai.supergraph.algos.divnas.divop.DivOp(op_desc: OpDesc, arch_params: ArchParams | None, affine: bool)[source]#

The output of DivOp is weighted output of all allowed primitives.

PRIMITIVES = ['max_pool_3x3', 'avg_pool_3x3', 'skip_connect', 'sep_conv_3x3', 'sep_conv_5x5', 'dil_conv_3x3', 'dil_conv_5x5', 'none']#
property collect_activations: bool#
property activations: List[array] | None#
property num_primitive_ops: int#
forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

ops() Iterator[Tuple[Op, float]][source]#

Return constituent ops; if this op is primitive, just return self.

finalize() Tuple[OpDesc, float | None][source]#

DivNAS with the default finalizer option needs this override; otherwise the finalizer in the base class returns the whole DivOp.

can_drop_path() bool[source]#
training: bool#
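
The weighted-mixture forward described above can be sketched with NumPy (the real DivOp operates on torch tensors, and softmax weighting of the architecture parameters is an assumption):

```python
import numpy as np

def divop_forward_sketch(primitive_outputs, alphas):
    """Weighted sum of primitive outputs with weights = softmax(alphas),
    mirroring 'the output of DivOp is the weighted output of all allowed
    primitives'. Softmax weighting is assumed, not confirmed."""
    a = np.asarray(alphas, dtype=float)
    w = np.exp(a - a.max())  # numerically stable softmax
    w /= w.sum()
    return sum(wi * out for wi, out in zip(w, primitive_outputs))

outs = [np.ones(3) * i for i in range(4)]        # stand-ins for primitive outputs
mix = divop_forward_sketch(outs, [0.0, 0.0, 0.0, 0.0])
print(mix)  # equal alphas -> uniform weights -> mean of 0,1,2,3
```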

Sequential Operators#

class archai.supergraph.algos.divnas.seqopt.SeqOpt(num_items: int, eps: float)[source]#

Implements SeqOpt. TODO: later we may want to refactor this class to handle bandit feedback.

sample_sequence(with_replacement=False) List[int][source]#
update(sel_list: List[int], compute_marginal_gain_func) None[source]#

In the full-information case we update all expert copies according to the marginal benefits.

Randomized Weighted Majority#

class archai.supergraph.algos.divnas.wmr.Wmr(num_items: int, eta: float)[source]#

Implements the Randomized Weighted Majority algorithm of Littlestone and Warmuth. We use the gain version from Fig. 1 of "The Multiplicative Weights Update Method".

property weights#
update(rewards: array) None[source]#
sample() int[source]#
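
The gain-version multiplicative-weights scheme the docstring references can be sketched as follows (Archai's exact update rule and normalization in `Wmr` are assumptions):

```python
import numpy as np

class WmrSketch:
    """Gain-version multiplicative weights: w_i <- w_i * (1 + eta * g_i),
    then sample an item with probability proportional to its weight.
    A sketch of the referenced scheme, not Archai's exact implementation."""

    def __init__(self, num_items: int, eta: float):
        assert 0.0 < eta <= 0.5, "eta is typically kept small"
        self._w = np.ones(num_items)
        self._eta = eta

    @property
    def weights(self):
        # Normalized weights double as the sampling distribution
        return self._w / self._w.sum()

    def update(self, rewards: np.ndarray) -> None:
        # rewards (gains) assumed scaled into [0, 1]
        self._w *= 1.0 + self._eta * rewards

    def sample(self) -> int:
        return int(np.random.choice(len(self._w), p=self.weights))
```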