glue

class AccuracyAndF1(topk=(1,), pos_label=1, name='acc_and_f1', *args, **kwargs)[source]

Bases: paddle.metric.metrics.Metric

Encapsulates Accuracy, Precision, Recall and F1 metric logic.

compute(pred, label, *args)[source]

This API is an advanced usage to accelerate metric calculation. The calculation from the model outputs to the states that should be updated by the Metric can be defined here, and Paddle OPs are also supported. The outputs of this API will be the inputs of Metric.update.

If compute is defined, it will be called with the outputs of the model and the labels from the data as arguments. All outputs and labels will be concatenated and flattened, with each field passed as a separate argument: compute(output1, output2, ..., label1, label2, ...)

If compute is not defined, the default behaviour is to pass the inputs through to the outputs, so the output format will be: return output1, output2, ..., label1, label2, ...

See Metric.update.

update(correct, *args)[source]

Updates the states of the metric.

The inputs of update are the outputs of Metric.compute. If compute is not defined, the inputs of update will be the flattened outputs of the model and the labels from the data: update(output1, output2, ..., label1, label2, ...)

See Metric.compute.

accumulate()[source]

Accumulates statistics, computes and returns the metric value

reset()[source]

Reset states and result

name()[source]

Return name of metric instance.
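Example usage (a minimal sketch, assuming paddlepaddle and paddlenlp are installed and that the class is exposed as paddlenlp.metrics.AccuracyAndF1; the logits and labels below are illustrative placeholders):

    import paddle
    from paddlenlp.metrics import AccuracyAndF1

    # Illustrative two-class logits and binary labels.
    logits = paddle.to_tensor([[0.1, 0.9], [0.5, 0.5], [0.6, 0.4], [0.7, 0.3]])
    labels = paddle.to_tensor([[1], [0], [1], [1]])

    metric = AccuracyAndF1()
    # compute() turns the model outputs and labels into the states consumed by update().
    correct = metric.compute(logits, labels)
    metric.update(correct)
    # accumulate() is expected to return a tuple of the form
    # (accuracy, precision, recall, f1, (accuracy + f1) / 2).
    res = metric.accumulate()
    metric.reset()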

class Mcc(name='mcc', *args, **kwargs)[source]

Bases: paddle.metric.metrics.Metric

Matthews correlation coefficient (https://en.wikipedia.org/wiki/Matthews_correlation_coefficient).

compute(pred, label, *args)[source]

This API is an advanced usage to accelerate metric calculation. The calculation from the model outputs to the states that should be updated by the Metric can be defined here, and Paddle OPs are also supported. The outputs of this API will be the inputs of Metric.update.

If compute is defined, it will be called with the outputs of the model and the labels from the data as arguments. All outputs and labels will be concatenated and flattened, with each field passed as a separate argument: compute(output1, output2, ..., label1, label2, ...)

If compute is not defined, the default behaviour is to pass the inputs through to the outputs, so the output format will be: return output1, output2, ..., label1, label2, ...

See Metric.update.

update(preds_and_labels)[source]

Updates the states of the metric.

The inputs of update are the outputs of Metric.compute. If compute is not defined, the inputs of update will be the flattened outputs of the model and the labels from the data: update(output1, output2, ..., label1, label2, ...)

See Metric.compute.

accumulate()[source]

Accumulates statistics, computes and returns the metric value

reset()[source]

Reset states and result

name()[source]

Return name of metric instance.
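Example usage (a minimal sketch under the same assumptions as the AccuracyAndF1 example; note that, matching the signature above, update takes the (preds, labels) pair produced by compute as a single argument):

    import paddle
    from paddlenlp.metrics import Mcc

    # Illustrative two-class logits and binary labels.
    logits = paddle.to_tensor([[-0.1, 0.12], [-0.23, 0.23], [-0.32, 0.21], [-0.13, 0.23]])
    labels = paddle.to_tensor([[1], [0], [1], [1]])

    metric = Mcc()
    preds_and_labels = metric.compute(logits, labels)
    metric.update(preds_and_labels)
    # accumulate() is expected to return the Matthews correlation coefficient.
    res = metric.accumulate()
    metric.reset()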

class PearsonAndSpearman(name='pearson_and_spearman', *args, **kwargs)[source]

Bases: paddle.metric.metrics.Metric

Pearson correlation coefficient (https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) and Spearman's rank correlation coefficient (https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient).

update(preds_and_labels)[source]

Updates the states of the metric.

The inputs of update are the outputs of Metric.compute. If compute is not defined, the inputs of update will be the flattened outputs of the model and the labels from the data: update(output1, output2, ..., label1, label2, ...)

See Metric.compute.

accumulate()[source]

Accumulates statistics, computes and returns the metric value

reset()[source]

Reset states and result

name()[source]

Return name of metric instance.
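Example usage (a minimal sketch under the same assumptions; since this class does not override compute, update is fed the (preds, labels) pair directly):

    import paddle
    from paddlenlp.metrics import PearsonAndSpearman

    # Illustrative regression-style predictions and gold scores.
    preds = paddle.to_tensor([[0.1], [1.0], [2.4], [0.9]])
    labels = paddle.to_tensor([[0.0], [1.0], [2.9], [1.0]])

    metric = PearsonAndSpearman()
    metric.update((preds, labels))
    # accumulate() is expected to return the Pearson and Spearman coefficients
    # (and, in PaddleNLP, their average).
    res = metric.accumulate()
    metric.reset()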