block_zoo.op package

Submodules

block_zoo.op.Combination module

class block_zoo.op.Combination.Combination(layer_conf)[source]

Bases: torch.nn.modules.module.Module

Combination layer to merge the representations of two sequences

Parameters:layer_conf (CombinationConf) – configuration of a layer
forward(*args)[source]

process inputs

Parameters:args (list) – [string, string_len, string2, string2_len, …] e.g. string (Variable): [batch_size, dim], string_len (ndarray): [batch_size]
Returns:[batch_size, output_dim], None
Return type:Variable
class block_zoo.op.Combination.CombinationConf(**kwargs)[source]

Bases: block_zoo.BaseLayer.BaseConf

Configuration for combination layer

Parameters:operations (list) –

a subset of [“origin”, “difference”, “dot_multiply”]. “origin” keeps the original representations; “difference” means abs(sequence1 - sequence2); “dot_multiply” means the element-wise product (see the illustrative sketch below).
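A minimal sketch of what these operations produce for a pair of 2D representations, assuming the selected outputs are concatenated along the last dimension (the helper function and variable names are illustrative, not the layer's actual implementation):

    import torch

    def combine(seq1, seq2, operations=("origin", "difference", "dot_multiply")):
        """Illustrative combination of two [batch_size, dim] tensors."""
        outputs = []
        if "origin" in operations:
            # keep both original representations
            outputs.extend([seq1, seq2])
        if "difference" in operations:
            # element-wise absolute difference
            outputs.append(torch.abs(seq1 - seq2))
        if "dot_multiply" in operations:
            # element-wise product
            outputs.append(seq1 * seq2)
        # merge the selected pieces along the feature dimension
        return torch.cat(outputs, dim=-1)

    string1 = torch.randn(8, 64)   # [batch_size, dim]
    string2 = torch.randn(8, 64)
    combined = combine(string1, string2)
    print(combined.shape)          # torch.Size([8, 256]) with all three operations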

declare()[source]

Define things like “input_ranks” and “num_of_inputs”, which are fixed for your layer.

num_of_inputs == N (N > 0) means this layer accepts exactly N inputs;

num_of_inputs == -1 means this layer accepts any number of inputs.

The rank here is not the same as matrix rank:

For a scalar, its rank is 0;

For a vector, its rank is 1;

For a matrix, its rank is 2;

For a cube of numbers, its rank is 3.

For instance, the rank of (batch_size, sequence_length, hidden_dim) is 3.

if num_of_inputs > 0:

len(input_ranks) should be equal to num_of_inputs

elif num_of_inputs == -1:

input_ranks should be a list with a single element, and the rank of every input should equal that element.

NOTE: when the model is built, if num_of_inputs is -1 it is replaced with the real number of inputs, and input_ranks is replaced with a list of the real input ranks.

Returns:None
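As an illustration of this contract, a configuration for a layer that accepts any number of rank-2 inputs might declare itself roughly as follows; this is a hedged sketch (the class is hypothetical), assuming only that BaseConf is importable from block_zoo.BaseLayer as the Bases line above indicates:

    from block_zoo.BaseLayer import BaseConf

    class AnyInputConf(BaseConf):
        """Hypothetical configuration accepting any number of rank-2 inputs."""

        def declare(self):
            # -1: this layer accepts an arbitrary number of inputs
            self.num_of_inputs = -1
            # a single-element list: every input must be rank 2, i.e. [batch_size, dim]
            self.input_ranks = [2]

When the model is built, num_of_inputs and input_ranks would then be replaced with the real number of inputs and their real ranks, as described in the NOTE above.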
default()[source]

Define the default hyperparameters here. You can also define these hyperparameters in your configuration file.

Returns:None
inference()[source]

Infer things like output_dim, which may rely on defined hyperparameters such as hidden_dim and input_dim

Returns:None
verify()[source]

Define the necessary verification for your layer when the model is defined.

If you define your own layer and override this function, please add “super(YourLayerConf, self).verify()” at the beginning

Returns:None

block_zoo.op.Concat2D module

class block_zoo.op.Concat2D.Concat2D(layer_conf)[source]

Bases: torch.nn.modules.module.Module

Concat2D layer to merge a number of sequences (2D representations)

Parameters:layer_conf (Concat2DConf) – configuration of a layer
forward(*args)[source]

process inputs

Parameters:*args – (Tensor): string, string_len, string2, string2_len, … e.g. string (Tensor): [batch_size, dim], string_len (Tensor): [batch_size]
Returns:[batch_size, output_dim], [batch_size]
Return type:Tensor
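In effect, Concat2D concatenates its 2D inputs along the configured axis (default 1, the feature axis). A minimal plain-PyTorch stand-in, shown only to illustrate the shapes involved:

    import torch

    string1 = torch.randn(8, 50)    # [batch_size, dim1]
    string2 = torch.randn(8, 70)    # [batch_size, dim2]

    # concatenate along axis 1 (the feature axis), as Concat2D does by default
    merged = torch.cat([string1, string2], dim=1)
    print(merged.shape)             # torch.Size([8, 120]) -> [batch_size, output_dim]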
class block_zoo.op.Concat2D.Concat2DConf(**kwargs)[source]

Bases: block_zoo.BaseLayer.BaseConf

Configuration of Concat2D Layer

Parameters:concat2D_axis (int) – the axis along which to concatenate; default is 1.
declare()[source]

Define things like “input_ranks” and “num_of_inputs”, which are fixed for your layer.

num_of_inputs == N (N > 0) means this layer accepts exactly N inputs;

num_of_inputs == -1 means this layer accepts any number of inputs.

The rank here is not the same as matrix rank:

For a scalar, its rank is 0;

For a vector, its rank is 1;

For a matrix, its rank is 2;

For a cube of numbers, its rank is 3.

For instance, the rank of (batch_size, sequence_length, hidden_dim) is 3.

if num_of_inputs > 0:

len(input_ranks) should be equal to num_of_inputs

elif num_of_inputs == -1:

input_ranks should be a list with a single element, and the rank of every input should equal that element.

NOTE: when the model is built, if num_of_inputs is -1 it is replaced with the real number of inputs, and input_ranks is replaced with a list of the real input ranks.

Returns:None
default()[source]

Define the default hyperparameters here. You can also define these hyperparameters in your configuration file.

Returns:None
inference()[source]

Infer things like output_dim, which may rely on defined hyperparameters such as hidden_dim and input_dim

Returns:None
verify()[source]

Define the necessary verification for your layer when the model is defined.

If you define your own layer and override this function, please add “super(YourLayerConf, self).verify()” at the beginning

Returns:None

block_zoo.op.Concat3D module

class block_zoo.op.Concat3D.Concat3D(layer_conf)[source]

Bases: torch.nn.modules.module.Module

Concat3D layer to merge a number of sequences (3D representations)

Parameters:layer_conf (Concat3DConf) – configuration of a layer
forward(*args)[source]

process inputs

Parameters:*args – (Tensor): string, string_len, string2, string2_len, … e.g. string (Tensor): [batch_size, seq_len, dim], string_len (Tensor): [batch_size]
Returns:[batch_size, seq_len, output_dim], [batch_size]
Return type:Tensor
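Concat3D behaves analogously for 3D inputs: with the default axis 2 it stacks the feature dimensions, while axis 1 joins the inputs along the sequence axis (which instead requires matching feature dimensions). A plain-PyTorch illustration of the shapes, not the layer itself:

    import torch

    string1 = torch.randn(8, 30, 64)   # [batch_size, seq_len, dim1]
    string2 = torch.randn(8, 30, 32)   # [batch_size, seq_len, dim2]

    # default: concatenate along axis 2, the feature axis
    merged_features = torch.cat([string1, string2], dim=2)
    print(merged_features.shape)       # torch.Size([8, 30, 96])

    # alternative: axis 1 appends along the sequence axis (feature dims must match)
    string3 = torch.randn(8, 20, 64)
    merged_seq = torch.cat([string1, string3], dim=1)
    print(merged_seq.shape)            # torch.Size([8, 50, 64])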
class block_zoo.op.Concat3D.Concat3DConf(**kwargs)[source]

Bases: block_zoo.BaseLayer.BaseConf

Configuration of Concat3D layer

Parameters:concat3D_axis (int, 1 or 2) – the axis along which to concatenate; default is 2.
declare()[source]

Define things like “input_ranks” and “num_of_inputs”, which are fixed for your layer.

num_of_inputs == N (N > 0) means this layer accepts exactly N inputs;

num_of_inputs == -1 means this layer accepts any number of inputs.

The rank here is not the same as matrix rank:

For a scalar, its rank is 0;

For a vector, its rank is 1;

For a matrix, its rank is 2;

For a cube of numbers, its rank is 3.

For instance, the rank of (batch_size, sequence_length, hidden_dim) is 3.

if num_of_inputs > 0:

len(input_ranks) should be equal to num_of_inputs

elif num_of_inputs == -1:

input_ranks should be a list with a single element, and the rank of every input should equal that element.

NOTE: when the model is built, if num_of_inputs is -1 it is replaced with the real number of inputs, and input_ranks is replaced with a list of the real input ranks.

Returns:None
default()[source]

Define the default hyperparameters here. You can also define these hyperparameters in your configuration file.

Returns:None
inference()[source]

Infer things like output_dim, which may rely on defined hyperparameters such as hidden_dim and input_dim

Returns:None
verify()[source]

Define the necessary verification for your layer when the model is defined.

If you define your own layer and override this function, please add “super(YourLayerConf, self).verify()” at the beginning

Returns:None

Module contents