block_zoo.normalizations package

Submodules

block_zoo.normalizations.LayerNorm module

class block_zoo.normalizations.LayerNorm.LayerNorm(layer_conf)[source]

Bases: torch.nn.modules.module.Module

LayerNorm layer

Parameters: layer_conf (LayerNormConf) – configuration of the layer
forward(string, string_len)[source]

process input

Parameters:
  • string (Tensor) – [batch_size, seq_len, dim]
  • string_len (Tensor) – [batch_size]
Returns:

[batch_size, seq_len, output_dim], [batch_size]

Return type:

Tensor
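
A minimal standalone usage sketch of the layer. It assumes a LayerNormConf can be prepared by hand by setting input_dims and calling inference(); inside NeuronBlocks this wiring is normally done by the framework from the model's JSON architecture, so treat the construction details as illustrative only.

    import torch
    from block_zoo.normalizations.LayerNorm import LayerNorm, LayerNormConf

    # Assumption: prepare the configuration manually; the framework usually
    # fills input_dims from the upstream layers of the model architecture.
    conf = LayerNormConf()
    conf.input_dims = [[4, 10, 8]]      # one input of shape [batch_size, seq_len, dim]
    conf.inference()                    # derive output_dim / output_rank

    layer = LayerNorm(conf)
    string = torch.rand(4, 10, 8)                        # [batch_size, seq_len, dim]
    string_len = torch.full((4,), 10, dtype=torch.long)  # [batch_size]
    output, output_len = layer(string, string_len)       # shapes are preserved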

class block_zoo.normalizations.LayerNorm.LayerNormConf(**kwargs)[source]

Bases: block_zoo.BaseLayer.BaseConf

Configuration of LayerNorm Layer

declare()[source]

Define attributes such as “input_ranks” and “num_of_inputs”, which are fixed for your layer

num_of_input is N (N > 0) means this layer accepts N inputs;

num_of_input is -1 means this layer accepts any number of inputs;

The rank here is not the same as matrix rank:

For a scalar, its rank is 0;

For a vector, its rank is 1;

For a matrix, its rank is 2;

For a cube of numbers, its rank is 3.

For instance, the rank of a tensor shaped (batch_size, sequence_length, hidden_dim) is 3.
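
In PyTorch terms, the rank used here is simply the number of tensor dimensions:

    import torch

    torch.tensor(3.0).dim()     # 0 -> scalar
    torch.rand(5).dim()         # 1 -> vector
    torch.rand(4, 5).dim()      # 2 -> matrix
    torch.rand(4, 10, 8).dim()  # 3 -> e.g. [batch_size, seq_len, hidden_dim]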

if num_of_input > 0:
    len(input_ranks) should be equal to num_of_input;
elif num_of_input == -1:
    input_ranks should be a list with a single element, and the rank of every input must equal that element.

NOTE: when the model is built, if num_of_input is -1 it is replaced with the actual number of inputs, and input_ranks is replaced with a list of the actual input ranks.

Returns: None
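
A hypothetical declare() for a single-input, rank-3 layer such as this one, written to follow the contract above (a sketch, not the library's actual source):

    def declare(self):
        self.num_of_inputs = 1      # exactly one input tensor
        self.input_ranks = [3]      # the input is [batch_size, seq_len, dim], i.e. rank 3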
inference()[source]

Infer attributes such as output_dim, which may depend on defined hyperparameters such as hidden_dim and input_dim

Returns: None
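
A hypothetical inference() sketch for a normalization layer, whose output shape mirrors its input shape; the attribute names follow the conventions described above, and the super() call is assumed to finish the base class bookkeeping:

    def inference(self):
        # normalization does not change the shape, so reuse the first input's dims
        self.output_dim = list(self.input_dims[0])
        super(LayerNormConf, self).inference()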
verify()[source]

Define any necessary verification for your layer, performed when the model is defined.

If you define your own layer and override this function, please call “super(YourLayerConf, self).verify()” at the beginning

Returns: None
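
A hypothetical verify() for a custom layer configuration, following the instruction to call the base implementation first (the specific checks are illustrative only):

    def verify(self):
        super(YourLayerConf, self).verify()
        # layer-specific checks go here, e.g. require that attributes this
        # layer depends on have been set before the model is built
        for attr in ('input_dims', 'output_dim'):
            assert hasattr(self, attr), "YourLayerConf: missing attribute %s" % attr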

Module contents