causica.training.evaluation

Module Contents

Functions

eval_intervention_likelihoods(→ torch.Tensor)

Calculate the average log-prob of interventional data.

eval_ate_rmse(→ tensordict.TensorDict)

Evaluate the ATEs of a model.

eval_ite_rmse(→ tensordict.TensorDict)

Evaluate the ITEs of a model.

list_mean(→ torch.Tensor)

Take the mean of a list of torch tensors; all tensors must have the same shape.

list_logsumexp(→ torch.Tensor)

Take the logsumexp of a list of torch tensors; all tensors must have the same shape.

causica.training.evaluation.eval_intervention_likelihoods(sems: list[causica.sem.structural_equation_model.SEM], intervention_with_effects: causica.datasets.causica_dataset_format.InterventionWithEffects) → torch.Tensor[source]

Calculate the average log-prob of interventional data.

Specifically, we calculate 𝔼_sample[log(𝔼_G[p(sample | G)])]: the probability of each sample is averaged over graphs, the log is taken, and the result is averaged over the interventional samples.

Parameters:
sems: list[causica.sem.structural_equation_model.SEM]

A list of SEMs whose interventional log-probability will be evaluated.

intervention_with_effects: causica.datasets.causica_dataset_format.InterventionWithEffects

True interventional data to use for evaluation.

Returns:

Log-likelihood of the interventional data for each interventional datapoint.
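
To make the returned quantity concrete, here is a minimal sketch in plain PyTorch of how 𝔼_sample[log(𝔼_G[p(sample | G)])] can be assembled from per-SEM log-probabilities. The tensors are synthetic stand-ins rather than the causica API: in practice the per-graph log-probabilities would come from each SEM evaluated under the intervention.

    import math

    import torch

    # Synthetic stand-in for log p(sample | G_i): one tensor of per-datapoint
    # log-probabilities per SEM/graph.
    num_sems, num_samples = 5, 100
    log_probs_per_sem = [torch.randn(num_samples) for _ in range(num_sems)]

    # log E_G[p(sample | G)]: logsumexp over graphs minus log(number of graphs).
    stacked = torch.stack(log_probs_per_sem, dim=0)  # [num_sems, num_samples]
    log_mean_prob = torch.logsumexp(stacked, dim=0) - math.log(num_sems)

    # E_sample[...]: average over the interventional datapoints.
    avg_log_prob = log_mean_prob.mean()

The logsumexp-minus-log(N) form averages the per-graph probabilities in log space, avoiding underflow from exponentiating very negative log-probabilities directly.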

causica.training.evaluation.eval_ate_rmse(sems: Iterable[causica.sem.structural_equation_model.SEM], intervention: causica.datasets.causica_dataset_format.InterventionWithEffects, samples_per_graph: int = 1000) → tensordict.TensorDict[source]

Evaluate the ATEs of a model.

Parameters:
sems: Iterable[causica.sem.structural_equation_model.SEM]

An iterable of structural equation models whose ATE RMSE will be evaluated.

intervention: causica.datasets.causica_dataset_format.InterventionWithEffects

True interventional data to use for evaluation.

samples_per_graph: int = 1000

Number of samples to draw per graph to calculate the ATE.

Returns:

Dict of the ATE RMSE for each effect node of interest.
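
As an illustration of the metric rather than the library's internals, the sketch below computes the RMSE between hypothetical true and model-estimated ATEs per effect node; the node names and values are made up.

    import torch

    # Hypothetical true and estimated ATEs per effect node (stand-ins for values
    # derived from InterventionWithEffects and from samples drawn from the SEMs).
    true_ate = {"x3": torch.tensor([0.8, -0.2]), "x5": torch.tensor([1.1])}
    estimated_ate = {"x3": torch.tensor([0.7, -0.1]), "x5": torch.tensor([1.3])}

    # RMSE of the ATE for each effect node of interest.
    ate_rmse = {
        node: torch.sqrt(torch.mean((estimated_ate[node] - true_ate[node]) ** 2))
        for node in true_ate
    }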

causica.training.evaluation.eval_ite_rmse(sems: Iterable[causica.sem.structural_equation_model.SEM], counterfactual_data: causica.datasets.causica_dataset_format.CounterfactualWithEffects) → tensordict.TensorDict[source]

Evaluate the ITEs of a model.

Parameters:
sems: Iterable[causica.sem.structural_equation_model.SEM]

An iterable of structural equation models whose ITE RMSE will be evaluated.

counterfactual_data: causica.datasets.causica_dataset_format.CounterfactualWithEffects

Data of true counterfactuals to use for evaluation.

Returns:

Dict of the ITE RMSE for each effect variable of interest.
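
A sketch of the ITE RMSE for a single effect variable, with made-up factual and counterfactual values standing in for CounterfactualWithEffects and the SEM outputs.

    import torch

    # Hypothetical per-individual outcomes for one effect variable.
    factual = torch.tensor([0.5, 1.2, -0.3])
    true_counterfactual = torch.tensor([1.0, 0.9, 0.1])
    model_counterfactual = torch.tensor([0.9, 1.0, 0.0])

    # Individual treatment effects: counterfactual minus factual outcome.
    true_ite = true_counterfactual - factual
    estimated_ite = model_counterfactual - factual

    # RMSE of the ITE across individuals for this effect variable.
    ite_rmse = torch.sqrt(torch.mean((estimated_ite - true_ite) ** 2))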

causica.training.evaluation.list_mean(list_of_tensors: list[torch.Tensor]) → torch.Tensor[source]

Take the mean of a list of torch tensors; all tensors must have the same shape.
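
A minimal sketch of the described operation (an assumption about behaviour, not necessarily the library's implementation): stack along a new leading dimension and average over it.

    import torch

    def list_mean_sketch(list_of_tensors: list[torch.Tensor]) -> torch.Tensor:
        # Stack the equally shaped tensors into one tensor and reduce with mean.
        return torch.stack(list_of_tensors, dim=0).mean(dim=0)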

causica.training.evaluation.list_logsumexp(list_of_tensors: list[torch.Tensor]) → torch.Tensor[source]

Take the logsumexp of a list of torch tensors; all tensors must have the same shape.
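
Similarly, a minimal sketch of the described operation: stack the tensors and reduce with logsumexp over the new leading dimension.

    import torch

    def list_logsumexp_sketch(list_of_tensors: list[torch.Tensor]) -> torch.Tensor:
        # Stack the equally shaped tensors and take logsumexp over the stack dim.
        return torch.logsumexp(torch.stack(list_of_tensors, dim=0), dim=0)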