block_zoo.math package

Submodules

block_zoo.math.Add2D module

class block_zoo.math.Add2D.Add2D(layer_conf)[source]

Bases: torch.nn.modules.module.Module

Add2D layer to compute the sum of two sequences (2D representation)

Parameters: layer_conf (Add2DConf) – configuration of a layer
forward(*args)[source]

Process the inputs.

Parameters: *args – string, string_len, string2, string2_len, e.g. string (Tensor): [batch_size, dim], string_len (Tensor): [batch_size]
Returns: [batch_size, output_dim], [batch_size]
Return type: Tensor
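
For orientation, here is a hedged sketch of what an Add2D-style forward can look like for the shapes listed above; the standalone function and variable names are illustrative assumptions, not the layer's actual implementation.

    import torch

    # Hypothetical stand-in for an Add2D-style forward: element-wise sum of two
    # [batch_size, dim] representations; the first sequence's lengths pass through.
    def add2d_sketch(string, string_len, string2, string2_len):
        output = string + string2          # [batch_size, output_dim]
        return output, string_len          # lengths are unchanged

    x = torch.randn(4, 8)                  # [batch_size=4, dim=8]
    x_len = torch.full((4,), 8)            # [batch_size]
    y = torch.randn(4, 8)
    out, out_len = add2d_sketch(x, x_len, y, x_len)
    print(out.shape)                       # torch.Size([4, 8])
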
class block_zoo.math.Add2D.Add2DConf(**kwargs)[source]

Bases: block_zoo.BaseLayer.BaseConf

Configuration of Add2D layer

declare()[source]

Define things like “input_ranks” and “num_of_inputs”, which are fixed for your layer.

num_of_inputs = N (N > 0) means this layer accepts exactly N inputs;

num_of_inputs = -1 means this layer accepts any number of inputs.

The rank here is not the same as matrix rank:

For a scalar, the rank is 0;

For a vector, the rank is 1;

For a matrix, the rank is 2;

For a cube of numbers, the rank is 3.

For instance, the rank of (batch_size, sequence_length, hidden_dim) is 3.

If num_of_inputs > 0:

len(input_ranks) should be equal to num_of_inputs.

If num_of_inputs == -1:

input_ranks should be a list with a single element, and the rank of every input should equal that element.

NOTE: when the model is built, if num_of_inputs is -1 it is replaced with the real number of inputs, and input_ranks is replaced with a list of the real input ranks.

Returns: None
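
To make the rules above concrete, here is a minimal sketch of a declare() for a two-input, rank-2 layer such as Add2D; the attribute names (num_of_inputs, input_ranks) follow this docstring, but the stub class is hypothetical rather than the library's code.

    class TwoInput2DConfSketch:
        """Hypothetical configuration stub showing only declare()."""

        def declare(self):
            # The layer accepts exactly two inputs ...
            self.num_of_inputs = 2
            # ... and each input is a rank-2 tensor, i.e. [batch_size, dim].
            self.input_ranks = [2, 2]

    conf = TwoInput2DConfSketch()
    conf.declare()
    print(conf.num_of_inputs, conf.input_ranks)   # 2 [2, 2]
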
inference()[source]

Infer things like output_dim, which may rely on defined hyperparameters such as hidden_dim and input_dim.

Returns: None
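
As an illustration, a hedged sketch of how inference() might derive output_dim for an element-wise layer, assuming the configuration already carries input_dims as a list of per-input shapes (an assumption about the base class, not something stated here).

    import copy

    class ElementWiseConfSketch:
        """Hypothetical configuration stub showing only inference()."""

        def __init__(self, input_dims):
            # e.g. [[-1, 128], [-1, 128]] for two [batch_size, 128] inputs
            self.input_dims = input_dims

        def inference(self):
            # An element-wise operation keeps the input shape, so output_dim
            # is simply copied from the first input's dims.
            self.output_dim = copy.deepcopy(self.input_dims[0])

    conf = ElementWiseConfSketch([[-1, 128], [-1, 128]])
    conf.inference()
    print(conf.output_dim)   # [-1, 128]
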
verify()[source]

Define any necessary verification for your layer, performed when the model is defined.

If you define your own layer and override this function, please call “super(YourLayerConf, self).verify()” at the beginning.

Returns: None

block_zoo.math.Add3D module

class block_zoo.math.Add3D.Add3D(layer_conf)[source]

Bases: torch.nn.modules.module.Module

Add3D layer to compute the sum of two sequences (3D representation)

Parameters: layer_conf (Add3DConf) – configuration of a layer
forward(*args)[source]

Process the inputs.

Parameters: *args – string, string_len, string2, string2_len, e.g. string (Tensor): [batch_size, seq_len, dim], string_len (Tensor): [batch_size]
Returns: [batch_size, seq_len, output_dim], [batch_size]
Return type: Tensor
class block_zoo.math.Add3D.Add3DConf(**kwargs)[source]

Bases: block_zoo.BaseLayer.BaseConf

Configuration of Add3D layer

declare()[source]

Define things like “input_ranks” and “num_of_inputs”, which are fixed for your layer.

num_of_inputs = N (N > 0) means this layer accepts exactly N inputs;

num_of_inputs = -1 means this layer accepts any number of inputs.

The rank here is not the same as matrix rank:

For a scalar, the rank is 0;

For a vector, the rank is 1;

For a matrix, the rank is 2;

For a cube of numbers, the rank is 3.

For instance, the rank of (batch_size, sequence_length, hidden_dim) is 3.

If num_of_inputs > 0:

len(input_ranks) should be equal to num_of_inputs.

If num_of_inputs == -1:

input_ranks should be a list with a single element, and the rank of every input should equal that element.

NOTE: when the model is built, if num_of_inputs is -1 it is replaced with the real number of inputs, and input_ranks is replaced with a list of the real input ranks.

Returns: None
inference()[source]

Infer things like output_dim, which may rely on defined hyperparameters such as hidden_dim and input_dim.

Returns: None
verify()[source]

Define any necessary verification for your layer, performed when the model is defined.

If you define your own layer and override this function, please call “super(YourLayerConf, self).verify()” at the beginning.

Returns: None

block_zoo.math.ElementWisedMultiply2D module

class block_zoo.math.ElementWisedMultiply2D.ElementWisedMultiply2D(layer_conf)[source]

Bases: torch.nn.modules.module.Module

ElementWisedMultiply2D layer to compute the element-wise product of two sequences (2D representation)

Parameters: layer_conf (ElementWisedMultiply2DConf) – configuration of a layer
forward(*args)[source]

Process the inputs.

Parameters: *args – string, string_len, string2, string2_len, e.g. string (Tensor): [batch_size, dim], string_len (Tensor): [batch_size]
Returns: [batch_size, output_dim], [batch_size]
Return type: Tensor
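
Analogously to Add2D, a hedged sketch of an ElementWisedMultiply2D-style forward: the element-wise (Hadamard) product of two [batch_size, dim] inputs. The function and names are illustrative assumptions, not the layer's implementation.

    import torch

    # Hypothetical stand-in: element-wise (Hadamard) product of two
    # [batch_size, dim] inputs; the first sequence's lengths pass through.
    def elementwise_multiply_2d_sketch(string, string_len, string2, string2_len):
        output = string * string2          # [batch_size, output_dim]
        return output, string_len

    a = torch.randn(4, 8)                  # [batch_size=4, dim=8]
    a_len = torch.full((4,), 8)
    b = torch.randn(4, 8)
    out, out_len = elementwise_multiply_2d_sketch(a, a_len, b, a_len)
    print(out.shape)                       # torch.Size([4, 8])
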
class block_zoo.math.ElementWisedMultiply2D.ElementWisedMultiply2DConf(**kwargs)[source]

Bases: block_zoo.BaseLayer.BaseConf

Configuration of ElementWisedMultiply2D layer

declare()[source]

Define things like “input_ranks” and “num_of_inputs”, which are fixed for your layer.

num_of_inputs = N (N > 0) means this layer accepts exactly N inputs;

num_of_inputs = -1 means this layer accepts any number of inputs.

The rank here is not the same as matrix rank:

For a scalar, the rank is 0;

For a vector, the rank is 1;

For a matrix, the rank is 2;

For a cube of numbers, the rank is 3.

For instance, the rank of (batch_size, sequence_length, hidden_dim) is 3.

If num_of_inputs > 0:

len(input_ranks) should be equal to num_of_inputs.

If num_of_inputs == -1:

input_ranks should be a list with a single element, and the rank of every input should equal that element.

NOTE: when the model is built, if num_of_inputs is -1 it is replaced with the real number of inputs, and input_ranks is replaced with a list of the real input ranks.

Returns: None
inference()[source]

Infer things like output_dim, which may rely on defined hyperparameters such as hidden_dim and input_dim.

Returns: None
verify()[source]

Define any necessary verification for your layer, performed when the model is defined.

If you define your own layer and override this function, please call “super(YourLayerConf, self).verify()” at the beginning.

Returns: None

block_zoo.math.ElementWisedMultiply3D module

class block_zoo.math.ElementWisedMultiply3D.ElementWisedMultiply3D(layer_conf)[source]

Bases: torch.nn.modules.module.Module

ElementWisedMultiply3D layer to compute the element-wise product of two sequences (3D representation)

Parameters: layer_conf (ElementWisedMultiply3DConf) – configuration of a layer
forward(*args)[source]

Process the inputs.

Parameters: *args – string, string_len, string2, string2_len, e.g. string (Tensor): [batch_size, seq_len, dim], string_len (Tensor): [batch_size]
Returns: [batch_size, seq_len, output_dim], [batch_size]
Return type: Tensor
class block_zoo.math.ElementWisedMultiply3D.ElementWisedMultiply3DConf(**kwargs)[source]

Bases: block_zoo.BaseLayer.BaseConf

Configuration of ElementWisedMultiply3D layer

declare()[source]

Define things like “input_ranks” and “num_of_inputs”, which are fixed for your layer.

num_of_inputs = N (N > 0) means this layer accepts exactly N inputs;

num_of_inputs = -1 means this layer accepts any number of inputs.

The rank here is not the same as matrix rank:

For a scalar, the rank is 0;

For a vector, the rank is 1;

For a matrix, the rank is 2;

For a cube of numbers, the rank is 3.

For instance, the rank of (batch_size, sequence_length, hidden_dim) is 3.

If num_of_inputs > 0:

len(input_ranks) should be equal to num_of_inputs.

If num_of_inputs == -1:

input_ranks should be a list with a single element, and the rank of every input should equal that element.

NOTE: when the model is built, if num_of_inputs is -1 it is replaced with the real number of inputs, and input_ranks is replaced with a list of the real input ranks.

Returns: None
inference()[source]

Infer things like output_dim, which may rely on defined hyperparameters such as hidden_dim and input_dim.

Returns: None
verify()[source]

Define any necessary verification for your layer, performed when the model is defined.

If you define your own layer and override this function, please call “super(YourLayerConf, self).verify()” at the beginning.

Returns: None

block_zoo.math.MatrixMultiply module

class block_zoo.math.MatrixMultiply.MatrixMultiply(layer_conf)[source]

Bases: torch.nn.modules.module.Module

MatrixMultiply layer to multiply two matrices

Parameters: layer_conf (MatrixMultiplyConf) – configuration of a layer
forward(*args)[source]

Process the inputs.

Parameters: *args – string, string_len, string2, string2_len, e.g. string (Tensor): [batch_size, seq_len, dim], string_len (Tensor): [batch_size]
Returns: [batch_size, seq_len, output_dim], [batch_size]
Return type: Tensor
class block_zoo.math.MatrixMultiply.MatrixMultiplyConf(**kwargs)[source]

Bases: block_zoo.BaseLayer.BaseConf

Configuration of MatrixMultiply layer

Parameters: operation (String) – one of [‘common’, ‘seq_based’, ‘dim_based’]; the default is ‘dim_based’. ‘common’ means (batch_size, seq_len, dim) * (batch_size, seq_len, dim); ‘seq_based’ means (batch_size, dim, seq_len) * (batch_size, seq_len, dim); ‘dim_based’ means (batch_size, seq_len, dim) * (batch_size, dim, seq_len).
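
The ‘seq_based’ and ‘dim_based’ modes map naturally onto batched matrix products. The sketch below uses torch.bmm and is my reading of the shape descriptions above, not the layer's actual code; ‘common’ multiplies the tensors as given, which only conforms when seq_len equals dim.

    import torch

    batch_size, seq_len, dim = 4, 6, 8
    x = torch.randn(batch_size, seq_len, dim)
    y = torch.randn(batch_size, seq_len, dim)

    # 'seq_based': (batch_size, dim, seq_len) * (batch_size, seq_len, dim) -> (batch_size, dim, dim)
    seq_based = torch.bmm(x.transpose(1, 2), y)

    # 'dim_based': (batch_size, seq_len, dim) * (batch_size, dim, seq_len) -> (batch_size, seq_len, seq_len)
    dim_based = torch.bmm(x, y.transpose(1, 2))

    print(seq_based.shape, dim_based.shape)
    # torch.Size([4, 8, 8]) torch.Size([4, 6, 6])
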
declare()[source]

Define things like “input_ranks” and “num_of_inputs”, which are fixed for your layer.

num_of_inputs = N (N > 0) means this layer accepts exactly N inputs;

num_of_inputs = -1 means this layer accepts any number of inputs.

The rank here is not the same as matrix rank:

For a scalar, the rank is 0;

For a vector, the rank is 1;

For a matrix, the rank is 2;

For a cube of numbers, the rank is 3.

For instance, the rank of (batch_size, sequence_length, hidden_dim) is 3.

If num_of_inputs > 0:

len(input_ranks) should be equal to num_of_inputs.

If num_of_inputs == -1:

input_ranks should be a list with a single element, and the rank of every input should equal that element.

NOTE: when the model is built, if num_of_inputs is -1 it is replaced with the real number of inputs, and input_ranks is replaced with a list of the real input ranks.

Returns: None
default()[source]

Define the default hyperparameters here. You can also define these hyperparameters in your configuration file.

Returns: None
inference()[source]

Infer things like output_dim, which may rely on defined hyperparameters such as hidden_dim and input_dim.

Returns: None
varify

Docstring-inheriting method descriptor; the class itself is also used as a decorator (the doc_inherit decorator).

block_zoo.math.Minus2D module

class block_zoo.math.Minus2D.Minus2D(layer_conf)[source]

Bases: torch.nn.modules.module.Module

Minus2D layer to compute the difference of two sequences (2D representation)

Parameters: layer_conf (Minus2DConf) – configuration of a layer
forward(*args)[source]

Process the inputs.

Parameters: *args – string, string_len, string2, string2_len, e.g. string (Tensor): [batch_size, dim], string_len (Tensor): [batch_size]
Returns: [batch_size, output_dim], [batch_size]
Return type: Tensor
class block_zoo.math.Minus2D.Minus2DConf(**kwargs)[source]

Bases: block_zoo.BaseLayer.BaseConf

Configuration of Minus2D layer

Parameters: abs_flag – whether to take the absolute value of the Minus2D result; the default is False
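
A hedged sketch of the subtraction with the optional absolute value controlled by abs_flag; the standalone function is illustrative, not the layer's implementation.

    import torch

    # Hypothetical stand-in for a Minus2D-style operation: element-wise subtraction
    # of two [batch_size, dim] inputs, optionally taking the absolute value.
    def minus_2d_sketch(string, string2, abs_flag=False):
        output = string - string2
        return torch.abs(output) if abs_flag else output

    a = torch.randn(4, 8)
    b = torch.randn(4, 8)
    print(minus_2d_sketch(a, b).shape)                 # torch.Size([4, 8])
    print(minus_2d_sketch(a, b, abs_flag=True).min())  # non-negative when abs_flag is True
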
declare()[source]

Define things like “input_ranks” and “num_of_inputs”, which are fixed for your layer.

num_of_inputs = N (N > 0) means this layer accepts exactly N inputs;

num_of_inputs = -1 means this layer accepts any number of inputs.

The rank here is not the same as matrix rank:

For a scalar, the rank is 0;

For a vector, the rank is 1;

For a matrix, the rank is 2;

For a cube of numbers, the rank is 3.

For instance, the rank of (batch_size, sequence_length, hidden_dim) is 3.

If num_of_inputs > 0:

len(input_ranks) should be equal to num_of_inputs.

If num_of_inputs == -1:

input_ranks should be a list with a single element, and the rank of every input should equal that element.

NOTE: when the model is built, if num_of_inputs is -1 it is replaced with the real number of inputs, and input_ranks is replaced with a list of the real input ranks.

Returns: None
default()[source]

Define the default hyperparameters here. You can also define these hyperparameters in your configuration file.

Returns: None
inference()[source]

Infer things like output_dim, which may rely on defined hyperparameters such as hidden_dim and input_dim.

Returns: None
verify()[source]

Define any necessary verification for your layer, performed when the model is defined.

If you define your own layer and override this function, please call “super(YourLayerConf, self).verify()” at the beginning.

Returns: None

block_zoo.math.Minus3D module

class block_zoo.math.Minus3D.Minus3D(layer_conf)[source]

Bases: torch.nn.modules.module.Module

Minus3D layer to compute the difference of two sequences (3D representation)

Parameters: layer_conf (Minus3DConf) – configuration of a layer
forward(*args)[source]

Process the inputs.

Parameters: *args – string, string_len, string2, string2_len, e.g. string (Tensor): [batch_size, seq_len, dim], string_len (Tensor): [batch_size]
Returns: [batch_size, seq_len, output_dim], [batch_size]
Return type: Tensor
class block_zoo.math.Minus3D.Minus3DConf(**kwargs)[source]

Bases: block_zoo.BaseLayer.BaseConf

Configuration of Minus3D layer

Parameters: abs_flag – whether to take the absolute value of the Minus3D result; the default is False
declare()[source]

Define things like “input_ranks” and “num_of_inputs”, which are fixed for your layer.

num_of_inputs = N (N > 0) means this layer accepts exactly N inputs;

num_of_inputs = -1 means this layer accepts any number of inputs.

The rank here is not the same as matrix rank:

For a scalar, the rank is 0;

For a vector, the rank is 1;

For a matrix, the rank is 2;

For a cube of numbers, the rank is 3.

For instance, the rank of (batch_size, sequence_length, hidden_dim) is 3.

If num_of_inputs > 0:

len(input_ranks) should be equal to num_of_inputs.

If num_of_inputs == -1:

input_ranks should be a list with a single element, and the rank of every input should equal that element.

NOTE: when the model is built, if num_of_inputs is -1 it is replaced with the real number of inputs, and input_ranks is replaced with a list of the real input ranks.

Returns: None
default()[source]

Define the default hyperparameters here. You can also define these hyperparameters in your configuration file.

Returns: None
inference()[source]

Infer things like output_dim, which may rely on defined hyperparameters such as hidden_dim and input_dim.

Returns: None
verify()[source]

Define any necessary verification for your layer, performed when the model is defined.

If you define your own layer and override this function, please call “super(YourLayerConf, self).verify()” at the beginning.

Returns: None

Module contents