Version: 1.0.7

CyberML

access anomalies: complement_access.py

  1. ComplementAccessTransformer is a SparkML Transformer. Given a dataframe, it returns a new dataframe containing access patterns sampled from the set of possible access patterns that do not appear in the original dataframe. In other words, it returns a sample from the complement set.
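The complement-sampling idea can be sketched in plain Python, without Spark. The function name, arguments, and fixed per-column vocabularies below are illustrative assumptions for the sketch, not the CyberML API:

```python
import itertools
import random

def sample_complement(rows, user_vocab, res_vocab, n_samples, seed=0):
    """Sample (user, resource) pairs from the complement of the observed set.

    rows: observed (user, resource) access pairs.
    user_vocab / res_vocab: all possible users and resources (assumed known).
    Returns up to n_samples pairs that never appear in rows.
    """
    observed = set(rows)
    # All possible access patterns minus the ones actually seen.
    complement = [p for p in itertools.product(user_vocab, res_vocab)
                  if p not in observed]
    random.Random(seed).shuffle(complement)
    return complement[:n_samples]

observed = [("alice", "hr_db"), ("bob", "fin_db")]
fake = sample_complement(observed, ["alice", "bob"], ["hr_db", "fin_db"], 2)
# every sampled pair is absent from the original data
```

Such complement samples are typically used as negative examples when training an access-anomaly model on data that contains only positive (observed) accesses.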

feature engineering: indexers.py

  1. IdIndexer is a SparkML Estimator. Given a dataframe, it creates an IdIndexerModel (described next) for categorical features. The model maps each partition and column value seen in the given dataframe to an integer ID, either independently per partition or as one consecutive range across all partition and column values.
  2. IdIndexerModel is a SparkML Transformer. Given a dataframe, it maps each partition and column field to a consecutive integer ID. Partition or column values not encountered by the estimator are mapped to 0. The model can operate in two modes: it either creates consecutive integer IDs independently per partition, or creates one consecutive range spanning all partition and column values.
  3. MultiIndexer is a SparkML Estimator. It uses multiple IdIndexers to generate a MultiIndexerModel (described next) for categorical features. The model contains multiple IdIndexers covering multiple partitions and columns.
  4. MultiIndexerModel is a SparkML Transformer. Given a dataframe, it maps each partition and column field to a consecutive integer ID. Partition or column values not encountered by the estimator are mapped to 0. The model can operate in two modes: it either creates consecutive integer IDs independently per partition, or creates one consecutive range spanning all partition and column values.
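The two indexing modes can be illustrated with a minimal pure-Python sketch. The function names and the tenant/user example values are assumptions for illustration, not CyberML identifiers:

```python
def fit_id_index(values, per_partition=True):
    """Map (partition, value) pairs to consecutive integer IDs starting at 1.

    per_partition=True : IDs restart at 1 within each partition.
    per_partition=False: one consecutive range across all partitions.
    IDs start at 1 so that 0 can mark unseen values, mirroring the
    "unseen values map to 0" behavior described above.
    """
    mapping = {}
    counters = {}  # next free ID, tracked per partition or globally
    for part, val in values:
        key = part if per_partition else None
        if (part, val) not in mapping:
            counters[key] = counters.get(key, 0) + 1
            mapping[(part, val)] = counters[key]
    return mapping

def to_id(mapping, part, val):
    return mapping.get((part, val), 0)  # unseen -> 0

m = fit_id_index([("tenantA", "alice"), ("tenantA", "bob"), ("tenantB", "carol")])
# per-partition mode: "carol" is ID 1 within tenantB, not ID 3 globally
```

In per-partition mode each partition gets its own dense ID range, which keeps IDs small when models are trained per tenant; the global mode is useful when a single model spans all partitions.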

feature engineering: scalers.py

  1. StandardScalarScaler is a SparkML Estimator. Given a dataframe, it creates a StandardScalarScalerModel (described next), which normalizes any given dataframe according to the mean and standard deviation calculated on the dataframe given to the estimator.
  2. StandardScalarScalerModel is a SparkML Transformer. Given a dataframe with a value column x, the transformer changes its value as follows: x' = (x - mean)/stddev. That is, if the transformer is given the same dataframe the estimator was given, then the value column will have a mean of 0.0 and a standard deviation of 1.0.
  3. LinearScalarScaler is a SparkML Estimator. Given a dataframe, it creates a LinearScalarScalerModel (described next), which normalizes any given dataframe according to the minimum and maximum values calculated on the dataframe given to the estimator.
  4. LinearScalarScalerModel is a SparkML Transformer. Given a dataframe with a value column x, the transformer rescales its value such that if the transformer is given the same dataframe the estimator was given, then the value column will be mapped linearly onto the given range.
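Both scalers follow the same fit/transform pattern, which can be sketched in plain Python. The class names and the [lo, hi] target-range parameters are illustrative assumptions, not the CyberML API; only the formulas come from the descriptions above:

```python
class StandardScalerSketch:
    """Fit mean/stddev on one column, then apply x' = (x - mean) / stddev."""
    def fit(self, xs):
        n = len(xs)
        self.mean = sum(xs) / n
        var = sum((x - self.mean) ** 2 for x in xs) / n
        self.std = var ** 0.5 or 1.0  # guard against zero variance
        return self

    def transform(self, xs):
        return [(x - self.mean) / self.std for x in xs]

class LinearScalerSketch:
    """Fit min/max, then map values linearly onto [lo, hi]."""
    def __init__(self, lo=0.0, hi=1.0):
        self.lo, self.hi = lo, hi

    def fit(self, xs):
        self.xmin, self.xmax = min(xs), max(xs)
        return self

    def transform(self, xs):
        span = (self.xmax - self.xmin) or 1.0  # guard against constant column
        return [self.lo + (x - self.xmin) * (self.hi - self.lo) / span
                for x in xs]
```

Applying either sketch's `transform` to the same data it was `fit` on reproduces the property stated above: mean 0.0 and stddev 1.0 for the standard scaler, and exactly [lo, hi] endpoints for the linear scaler.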

access anomalies: collaborative_filtering.py

  1. AccessAnomaly is a SparkML Estimator. Given a dataframe, the estimator generates an AccessAnomalyModel (described next). The model can detect anomalous access of users to resources, where the access is outside the user's or resource's profile. For instance, a user from HR accessing a resource from Finance. This result is based solely on access patterns rather than explicit features. Internally, the code is based on Collaborative Filtering as implemented in Spark, using Matrix Factorization with Alternating Least Squares.
  2. AccessAnomalyModel is a SparkML Transformer. Given a dataframe, the transformer computes a value in (-inf, +inf), where positive values indicate an anomaly. Anomaly scores are computed to have a mean of 0.0 and a standard deviation of 1.0 over the original dataframe given to the estimator.
  3. ModelNormalizeTransformer is a SparkML Transformer. This transformer is used internally by AccessAnomaly to normalize a model to generate anomaly scores with mean 0.0 and standard deviation of 1.0.
  4. AccessAnomalyConfig contains the default values for AccessAnomaly.
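The normalization step performed by ModelNormalizeTransformer can be sketched in plain Python. The function name and the sign convention (raw scores assumed such that larger normalized values mean more anomalous) are illustrative assumptions; only the mean-0.0/stddev-1.0 property comes from the description above:

```python
def normalize_scores(raw_scores):
    """Rescale raw model scores to mean 0.0 and standard deviation 1.0.

    After normalization, the score distribution over the training data is
    centered at 0, so clearly positive values flag unusual user-resource
    accesses relative to the profiles learned by matrix factorization.
    """
    n = len(raw_scores)
    mean = sum(raw_scores) / n
    std = (sum((s - mean) ** 2 for s in raw_scores) / n) ** 0.5 or 1.0
    return [(s - mean) / std for s in raw_scores]
```

Because the scores are standardized, a fixed threshold (e.g. flagging scores above some positive cutoff) has a consistent meaning across tenants and retraining runs.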