plugins.hf_ner.sequence_labelling_metrics
Metrics to assess performance on a sequence labeling task given predictions.
Functions named as `*_score` return a scalar value to maximize: the higher the better.
# get_entities

Gets entities from a sequence of labels.
Arguments:
seq : list. Sequence of labels.
Returns:
list. List of (chunk_type, chunk_start, chunk_end) tuples.
Example:
```python
>>> from seqeval.metrics.sequence_labeling import get_entities
>>> seq = ['B-PER', 'I-PER', 'O', 'B-LOC']
>>> get_entities(seq)
[('PER', 0, 1), ('LOC', 3, 3)]
```
# end_of_chunk

Checks if a chunk ended between the previous and current word.
Arguments:
prev_tag : previous chunk tag.
tag : current chunk tag.
prev_type : previous type.
type_ : current type.
Returns:
chunk_end : boolean.
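
No example is given for this helper, so here is a minimal sketch, assuming it mirrors `seqeval.metrics.sequence_labeling.end_of_chunk` (as the other examples in this page assume for `get_entities`) and that the tag prefix and entity type are passed separately:

```python
from seqeval.metrics.sequence_labeling import end_of_chunk

# 'I-PER' followed by 'O': the PER chunk ends between the two words.
end_of_chunk(prev_tag='I', tag='O', prev_type='PER', type_='O')    # True
# 'B-PER' followed by 'I-PER': the same chunk continues, so it has not ended.
end_of_chunk(prev_tag='B', tag='I', prev_type='PER', type_='PER')  # False
```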
# start_of_chunk

Checks if a chunk started between the previous and current word.
Arguments:
prev_tag : previous chunk tag.
tag : current chunk tag.
prev_type : previous type.
type_ : current type.
Returns:
chunk_start : boolean.
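
A companion sketch under the same assumptions as the `end_of_chunk` example above:

```python
from seqeval.metrics.sequence_labeling import start_of_chunk

# 'O' followed by 'B-PER': a new PER chunk starts at the current word.
start_of_chunk(prev_tag='O', tag='B', prev_type='O', type_='PER')    # True
# 'B-PER' followed by 'I-PER': the current word continues the same chunk.
start_of_chunk(prev_tag='B', tag='I', prev_type='PER', type_='PER')  # False
```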
# f1_score

Compute the F1 score.

The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and its worst value at 0. The relative contributions of precision and recall to the F1 score are equal. The formula for the F1 score is:

    F1 = 2 * (precision * recall) / (precision + recall)
Arguments:
y_true : 2d array. Ground truth (correct) target values.
y_pred : 2d array. Estimated targets as returned by a tagger.
Returns:
score : float.
Example:
```python
>>> from seqeval.metrics import f1_score
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> f1_score(y_true, y_pred)
0.50
```
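
To connect the example to the formula (a sketch, not part of the module): on this data both entity-level precision and recall are 0.5, so the formula reproduces the 0.50 above.

```python
from seqeval.metrics import precision_score, recall_score

# Same example data as above.
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]

p = precision_score(y_true, y_pred)  # 0.5
r = recall_score(y_true, y_pred)     # 0.5
f1 = 2 * p * r / (p + r)             # 0.5, matching f1_score(y_true, y_pred)
```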
# accuracy_score

Accuracy classification score. In multilabel classification, this function computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true.
Arguments:
y_true : 2d array. Ground truth (correct) target values.
y_pred : 2d array. Estimated targets as returned by a tagger.
Returns:
score : float.
Example:
```python
>>> from seqeval.metrics import accuracy_score
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> accuracy_score(y_true, y_pred)
0.80
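
A quick way to see where the 0.80 comes from (a sketch, assuming accuracy here is the fraction of matching tags over the flattened sequences, as in seqeval):

```python
# Same example data as above.
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]

# 8 of the 10 tags agree; only the two tags around the MISC boundary differ.
true_tags = [tag for seq in y_true for tag in seq]
pred_tags = [tag for seq in y_pred for tag in seq]
accuracy = sum(t == p for t, p in zip(true_tags, pred_tags)) / len(true_tags)  # 0.8
```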
# precision_score

Compute the precision.

The precision is the ratio tp / (tp + fp), where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative. The best value is 1 and the worst value is 0.
Arguments:
y_true : 2d array. Ground truth (correct) target values.
y_pred : 2d array. Estimated targets as returned by a tagger.
Returns:
score : float.
Example:
```python
>>> from seqeval.metrics import precision_score
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> precision_score(y_true, y_pred)
0.50
```
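
The score is entity-level rather than token-level; a sketch of the reasoning behind the 0.50, assuming the chunk extraction of `get_entities` described above:

```python
from seqeval.metrics.sequence_labeling import get_entities

# Same example data as above.
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]

# The prediction contains two chunks (a MISC span and a PER span); only the
# PER span matches a gold chunk exactly, so precision = 1 / 2 = 0.5.
true_entities = set(get_entities(y_true))
pred_entities = set(get_entities(y_pred))
precision = len(true_entities & pred_entities) / len(pred_entities)  # 0.5
```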
# recall_score

Compute the recall.

The recall is the ratio tp / (tp + fn), where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples. The best value is 1 and the worst value is 0.
Arguments:
y_true : 2d array. Ground truth (correct) target values.
y_pred : 2d array. Estimated targets as returned by a tagger.
Returns:
score : float.
Example:
```python
>>> from seqeval.metrics import recall_score
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> recall_score(y_true, y_pred)
0.50
```
# performance_measure

Compute the performance metrics: TP, FP, FN, TN.
Arguments:
y_true : 2d array. Ground truth (correct) target values.
y_pred : 2d array. Estimated targets as returned by a tagger.
Returns:
performance_dict : dict
Example:
```python
>>> from seqeval.metrics import performance_measure
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'O', 'B-ORG'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O', 'O'], ['B-PER', 'I-PER', 'O']]
>>> performance_measure(y_true, y_pred)
{'TP': 3, 'FP': 3, 'FN': 1, 'TN': 4}
```
# classification_report

Build a text report showing the main classification metrics.
Arguments:
y_true : 2d array. Ground truth (correct) target values.
y_pred : 2d array. Estimated targets as returned by a classifier.
digits : int. Number of digits for formatting output floating point values.
Returns:
report : string. Text summary of the precision, recall, F1 score for each class.
Examples:
```python
>>> from seqeval.metrics import classification_report
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> print(classification_report(y_true, y_pred))
             precision    recall  f1-score   support
<BLANKLINE>
        MISC       0.00      0.00      0.00         1
         PER       1.00      1.00      1.00         1
<BLANKLINE>
   micro avg       0.50      0.50      0.50         2
   macro avg       0.50      0.50      0.50         2
<BLANKLINE>
```
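
The `digits` argument controls how many decimal places the report uses; a usage sketch (output omitted):

```python
from seqeval.metrics import classification_report

# Same example data as above, reported with four decimal places.
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
print(classification_report(y_true, y_pred, digits=4))
```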