block_zoo.encoder_decoder package

Submodules

block_zoo.encoder_decoder.SLUDecoder module

class block_zoo.encoder_decoder.SLUDecoder.SLUDecoder(layer_conf)[source]

Bases: block_zoo.BaseLayer.BaseLayer

Spoken Language Understanding Decoder

References

Liu, B., & Lane, I. (2016). Attention-based recurrent neural network models for joint intent detection and slot filling. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, (1), 685–689. https://doi.org/10.21437/Interspeech.2016-1352

Parameters:layer_conf (SLUDecoderConf) – configuration of a layer
Attention(hidden, encoder_outputs, encoder_maskings)[source]
Parameters:
  • hidden – [1, B, D]
  • encoder_outputs – [B, T, D]
  • encoder_maskings – [B, T], a ByteTensor

(B = batch size, T = max sequence length, D = feature dimension)
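These shapes admit a standard dot-product attention step. The sketch below is an assumed reconstruction consistent with the documented shapes, not the library's exact implementation; in particular, the masking convention (0 marks a padded position) is an assumption.

    import torch
    import torch.nn.functional as F

    def attention(hidden, encoder_outputs, encoder_maskings):
        """Dot-product attention over encoder outputs (assumed sketch)."""
        # hidden: [1, B, D] -> [B, D, 1] for batched matrix multiplication
        query = hidden.squeeze(0).unsqueeze(2)                   # [B, D, 1]
        energies = torch.bmm(encoder_outputs, query).squeeze(2)  # [B, T]
        # assumed convention: 0 in encoder_maskings marks padding
        energies = energies.masked_fill(encoder_maskings == 0, float('-inf'))
        weights = F.softmax(energies, dim=1).unsqueeze(1)        # [B, 1, T]
        context = torch.bmm(weights, encoder_outputs)            # [B, 1, D]
        return context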
forward(string, string_len, context, encoder_outputs)[source]

process inputs

Parameters:
  • string (Variable) – word ids, [batch_size, seq_len]
  • string_len (ndarray) – [batch_size]
  • context (Variable) – [batch_size, 1, input_dim]
  • encoder_outputs (Variable) – [batch_size, max_seq_len, input_dim]
Returns:

decode scores with shape [batch_size, seq_len, decoder_vocab_size]

Return type:

Variable
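As a minimal illustration of this forward contract, the snippet below builds dummy inputs with the documented shapes; the sizes are arbitrary, and the commented-out call assumes a layer_conf already prepared by the model-building pipeline (constructing one standalone is not shown here).

    import numpy as np
    import torch
    from torch.autograd import Variable

    batch_size, seq_len, input_dim = 4, 12, 256     # illustrative sizes
    string = Variable(torch.randint(0, 1000, (batch_size, seq_len)))  # word ids
    string_len = np.full((batch_size,), seq_len)                      # lengths
    context = Variable(torch.zeros(batch_size, 1, input_dim))
    encoder_outputs = Variable(torch.zeros(batch_size, seq_len, input_dim))

    # decoder = SLUDecoder(layer_conf)  # layer_conf: a prepared SLUDecoderConf
    # scores = decoder(string, string_len, context, encoder_outputs)
    # scores: [batch_size, seq_len, decoder_vocab_size]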

class block_zoo.encoder_decoder.SLUDecoder.SLUDecoderConf(**kwargs)[source]

Bases: block_zoo.BaseLayer.BaseConf

Configuration of Spoken Language Understanding Decoder

References

Liu, B., & Lane, I. (2016). Attention-based recurrent neural network models for joint intent detection and slot filling. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, (1), 685–689. https://doi.org/10.21437/Interspeech.2016-1352

Parameters:
  • hidden_dim (int) – dimension of hidden state
  • dropout (float) – dropout rate
  • num_layers (int) – number of BiLSTM layers
  • num_decoder_output (int) – size of the decoder output vocabulary
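For illustration only, a direct keyword construction might look as follows; the values are arbitrary, and whether this suffices outside the toolkit's usual configuration-file pipeline is an assumption.

    from block_zoo.encoder_decoder.SLUDecoder import SLUDecoderConf

    conf = SLUDecoderConf(
        hidden_dim=128,          # dimension of hidden state
        dropout=0.2,             # dropout rate
        num_layers=1,            # number of BiLSTM layers
        num_decoder_output=64,   # decoder output vocabulary size (illustrative)
    )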
declare()[source]

Define things like “input_ranks” and “num_of_inputs”, which are fixed for your layer.

num_of_input = N (N > 0) means this layer accepts exactly N inputs;

num_of_input = -1 means this layer accepts any number of inputs.

The rank here is not the same as matrix rank:

For a scalar, its rank is 0;

For a vector, its rank is 1;

For a matrix, its rank is 2;

For a cube of numbers, its rank is 3.

For instance, a tensor of shape (batch_size, sequence_length, hidden_dim) has rank 3.

If num_of_input > 0, len(input_ranks) must equal num_of_input; if num_of_input == -1, input_ranks must be a list with exactly one element, and the rank of every input must equal that element.

NOTE: when the model is built, a num_of_input of -1 is replaced with the actual number of inputs, and input_ranks is replaced with the list of actual input ranks.

Returns:None
default()[source]

Define the default hyperparameters here. These hyperparameters can also be set in your configuration file.

Returns:None
inference()[source]

Infer things like output_dim, which may rely on defined hyperparameters such as hidden_dim and input_dim.

Returns:None
verify()[source]

Define any necessary verification for your layer, run when the model is defined.

If you define your own layer and override this function, please add “super(YourLayerConf, self).verify()” at the beginning.

Returns:None
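Taken together, declare(), default(), inference() and verify() form a contract for configuration classes. The sketch below shows how a custom conf might satisfy it; the attribute names follow the docstrings above, and the output_dim layout (with -1 marking a variable axis) is an assumption rather than documented base-class behavior.

    from block_zoo.BaseLayer import BaseConf

    class MyLayerConf(BaseConf):
        """Sketch of a BaseConf subclass following the documented contract."""

        def default(self):
            # default hyperparameters; a configuration file may override them
            self.hidden_dim = 128
            self.dropout = 0.2

        def declare(self):
            # this layer accepts exactly one input, of rank 3:
            # (batch_size, sequence_length, feature_dim)
            self.num_of_inputs = 1
            self.input_ranks = [3]

        def inference(self):
            # derive output_dim from hyperparameters (assumed convention)
            self.output_dim = [-1, -1, 2 * self.hidden_dim]
            super(MyLayerConf, self).inference()

        def verify(self):
            # the docstring above asks for the base verify() call first
            super(MyLayerConf, self).verify()
            assert 0.0 <= self.dropout < 1.0, "dropout must be in [0, 1)"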

block_zoo.encoder_decoder.SLUEncoder module

class block_zoo.encoder_decoder.SLUEncoder.SLUEncoder(layer_conf)[source]

Bases: block_zoo.BaseLayer.BaseLayer

Spoken Language Understanding Encoder

References

Liu, B., & Lane, I. (2016). Attention-based recurrent neural network models for joint intent detection and slot filling. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, (1), 685–689. https://doi.org/10.21437/Interspeech.2016-1352

Parameters:layer_conf (SLUEncoderConf) – configuration of a layer
forward(string, string_len)[source]

process inputs

Parameters:
  • string (Variable) – [batch_size, seq_len, dim]
  • string_len (ndarray) – [batch_size]
Returns:

Variable: output of the BiLSTM, with shape [batch_size, seq_len, 2 * hidden_dim]

ndarray: string_len, with shape [batch_size]

Variable: context, with shape [batch_size, 1, 2 * hidden_dim]

Return type:

(Variable, ndarray, Variable)
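Analogously to the decoder above, a minimal sketch of this forward contract with illustrative sizes; the commented-out call assumes a fully prepared SLUEncoderConf.

    import numpy as np
    import torch
    from torch.autograd import Variable

    batch_size, seq_len, dim = 4, 12, 300            # illustrative sizes
    string = Variable(torch.zeros(batch_size, seq_len, dim))  # embedded input
    string_len = np.full((batch_size,), seq_len)              # lengths

    # encoder = SLUEncoder(layer_conf)  # layer_conf: a prepared SLUEncoderConf
    # output, string_len, context = encoder(string, string_len)
    # output:  [batch_size, seq_len, 2 * hidden_dim]
    # context: [batch_size, 1, 2 * hidden_dim]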

class block_zoo.encoder_decoder.SLUEncoder.SLUEncoderConf(**kwargs)[source]

Bases: block_zoo.BaseLayer.BaseConf

Configuration of Spoken Language Understanding Encoder

References

Liu, B., & Lane, I. (2016). Attention-based recurrent neural network models for joint intent detection and slot filling. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, (1), 685–689. https://doi.org/10.21437/Interspeech.2016-1352

Parameters:
  • hidden_dim (int) – dimension of hidden state
  • dropout (float) – dropout rate
  • num_layers (int) – number of BiLSTM layers
declare()[source]

Define things like “input_ranks” and “num_of_inputs”, which are fixed for your layer.

num_of_input = N (N > 0) means this layer accepts exactly N inputs;

num_of_input = -1 means this layer accepts any number of inputs.

The rank here is not the same as matrix rank:

For a scalar, its rank is 0;

For a vector, its rank is 1;

For a matrix, its rank is 2;

For a cube of numbers, its rank is 3.

For instance, a tensor of shape (batch_size, sequence_length, hidden_dim) has rank 3.

If num_of_input > 0, len(input_ranks) must equal num_of_input; if num_of_input == -1, input_ranks must be a list with exactly one element, and the rank of every input must equal that element.

NOTE: when the model is built, a num_of_input of -1 is replaced with the actual number of inputs, and input_ranks is replaced with the list of actual input ranks.

Returns:None
default()[source]

Define the default hyperparameters here. These hyperparameters can also be set in your configuration file.

Returns:None
inference()[source]

Infer things like output_dim, which may rely on defined hyperparameters such as hidden_dim and input_dim.

Returns:None
verify()[source]

Define any necessary verification for your layer, run when the model is defined.

If you define your own layer and override this function, please add “super(YourLayerConf, self).verify()” at the beginning.

Returns:None
verify_before_inference()[source]

Some conditions must be satisfied before inference() is called; otherwise inference() would raise errors.

The difference between verify_before_inference() and verify() is that
verify_before_inference() is called before inference(), while verify() is called after inference().
Returns:None
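The ordering of these hooks can be summarized with a hypothetical driver. Only the ordering constraints stated in the docstrings (verify_before_inference() before inference(), verify() after it) come from the source; the rest, including the relative order of declare() and default(), is an assumption.

    def build_conf(conf):
        """Hypothetical driver showing the documented call order."""
        conf.declare()                  # fixed facts: num_of_inputs, input_ranks
        conf.default()                  # default hyperparameters
        # ... values from a configuration file would be applied here ...
        conf.verify_before_inference()  # preconditions that inference() needs
        conf.inference()                # derive output_dim and the like
        conf.verify()                   # full verification, after inference()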

Module contents