losses package

Submodules

losses.BaseLossConf module

class losses.BaseLossConf.BaseLossConf[source]

Bases: object

static get_conf(**kwargs)[source]

losses.FocalLoss module

class losses.FocalLoss.FocalLoss(**kwargs)[source]

Bases: torch.nn.modules.module.Module

Focal loss
reference: Lin T Y, Goyal P, Girshick R, et al. Focal loss for dense object detection[J]. arXiv preprint arXiv:1708.02002, 2017.
Parameters:
  • gamma (float) – gamma >= 0.
  • alpha (float) – 0 <= alpha <= 1
  • size_average (bool, optional) – By default, the losses are averaged over observations for each minibatch. If size_average is set to False, the losses are instead summed for each minibatch. Default: True.
forward(input, target)[source]

Get focal loss

Parameters:
  • input (Variable) – the prediction with shape [batch_size, number of classes]
  • target (Variable) – the answer with shape [batch_size, number of classes]
Returns:

loss

Return type:

Variable (float)
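The focal loss above can be sketched in a few lines of PyTorch. This is a minimal illustration following Lin et al. (2017), not the module's actual implementation; for simplicity it assumes class-index targets of shape [batch_size] rather than the one-hot [batch_size, number of classes] targets described above.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0, alpha=0.5, size_average=True):
    """Sketch of FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t).

    logits: [batch_size, num_classes]; target: [batch_size] class indices
    (an assumption for brevity; the module documented above takes one-hot targets).
    """
    log_p = F.log_softmax(logits, dim=-1)
    # log-probability of the true class for each sample
    log_pt = log_p.gather(1, target.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    # (1 - pt)^gamma down-weights well-classified examples
    loss = -alpha * (1.0 - pt) ** gamma * log_pt
    return loss.mean() if size_average else loss.sum()
```

With gamma = 0 and alpha = 1 this reduces to ordinary cross entropy; increasing gamma shrinks the contribution of confidently correct predictions.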

losses.Loss module

class losses.Loss.Loss(**kwargs)[source]

Bases: torch.nn.modules.module.Module

To support multi_task or multi_output, the loss type changes to a list. The class Loss parses and constructs this loss list.

Parameters:
  • loss_conf – the loss configuration for multi_task or multi_output.
      multi_loss_op: the operation for combining the multiple losses.
      losses: list type; each element is a single loss. E.g.:

"loss": {
    "multi_loss_op": "weighted_sum",
    "losses": [
        {
            "type": "CrossEntropyLoss",
            "conf": {
                "gamma": 0,
                "alpha": 0.5,
                "size_average": true
            },
            "inputs": ["start_output", "start_label"]
        },
        {
            "type": "CrossEntropyLoss",
            "conf": {
                "gamma": 0,
                "alpha": 0.5,
                "size_average": true
            },
            "inputs": ["end_output", "end_label"]
        }
    ],
    "weights": [0.5, 0.5]
}

forward(model_outputs, targets)[source]

Compute the multi_loss according to multi_loss_op.

Parameters:
  • model_outputs (dict) – the representation of the model output layer, {output_layer_id: output layer data}
  • targets (dict) – the labels of the raw data, {target: data}
Returns:

loss
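A minimal sketch of how such a class might consume the configuration above and combine the losses in forward. The class name MultiLoss and the type registry are illustrative assumptions, not the package's actual code; per-loss "conf" options are ignored here for brevity, and only the "weighted_sum" operation is handled.

```python
import torch
import torch.nn as nn

class MultiLoss(nn.Module):
    """Hypothetical sketch of losses.Loss.Loss: builds a loss list from a
    conf like the JSON example above and combines them via multi_loss_op."""

    def __init__(self, loss_conf):
        super().__init__()
        self.multi_loss_op = loss_conf["multi_loss_op"]
        self.weights = loss_conf["weights"]
        # each entry pairs an output layer id with a target name
        self.inputs = [item["inputs"] for item in loss_conf["losses"]]
        # illustrative registry mapping "type" strings to loss classes;
        # per-loss "conf" dicts are ignored in this sketch
        registry = {"CrossEntropyLoss": nn.CrossEntropyLoss}
        self.losses = nn.ModuleList(
            registry[item["type"]]() for item in loss_conf["losses"])

    def forward(self, model_outputs, targets):
        # model_outputs: {output_layer_id: tensor}; targets: {target: tensor}
        total = 0.0
        for loss_fn, (out_id, tgt_id), w in zip(
                self.losses, self.inputs, self.weights):
            total = total + w * loss_fn(model_outputs[out_id], targets[tgt_id])
        return total  # assumes multi_loss_op == "weighted_sum"
```

Each single loss reads its prediction and label tensors out of the two dicts by the names listed in its "inputs", which is what lets one configuration drive several output heads at once.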

Module contents