performance {ROCR}                                              R Documentation

Description:

All kinds of predictor evaluations are performed using this function.

Usage:

performance(prediction.obj, measure, x.measure = "cutoff", ...)
Arguments:

prediction.obj: An object of class prediction.

measure: Performance measure to use for the evaluation. A complete list of the performance measures that are available for measure and x.measure is given in the 'Details' section.

x.measure: A second performance measure. If different from the default, a two-dimensional curve is created, with x.measure taken as the unit in direction of the x axis and measure as the unit in direction of the y axis. This curve is parametrized by the cutoff.

...: Optional arguments (specific to individual performance measures).
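For illustration, here is a minimal sketch (using the bundled ROCR.simple data, as in the Examples section below) of leaving x.measure at its default, which traces the chosen measure along the range of cutoffs:

## accuracy as a function of the cutoff (x.measure left at its default)
library(ROCR)
data(ROCR.simple)
pred <- prediction(ROCR.simple$predictions, ROCR.simple$labels)
perf.acc <- performance(pred, "acc")
plot(perf.acc)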
Details:

Here is the list of available performance measures. Let Y and Yhat be random variables representing the class and the prediction for a randomly drawn sample, respectively. We denote by + and - the positive and negative class, respectively. Further, we use the following abbreviations for empirical quantities: P (# positive samples), N (# negative samples), TP (# true positives), TN (# true negatives), FP (# false positives), FN (# false negatives).
acc: Accuracy. P(Yhat = Y). Estimated as: (TP+TN)/(P+N).

err: Error rate. P(Yhat != Y). Estimated as: (FP+FN)/(P+N).

fpr: False positive rate. P(Yhat = + | Y = -). Estimated as: FP/N.

fall: Fallout. Same as fpr.

tpr: True positive rate. P(Yhat = + | Y = +). Estimated as: TP/P.

rec: Recall. Same as tpr.

sens: Sensitivity. Same as tpr.

fnr: False negative rate. P(Yhat = - | Y = +). Estimated as: FN/P.

miss: Miss. Same as fnr.

tnr: True negative rate. P(Yhat = - | Y = -). Estimated as: TN/N.

spec: Specificity. Same as tnr.

ppv: Positive predictive value. P(Y = + | Yhat = +). Estimated as: TP/(TP+FP).

prec: Precision. Same as ppv.

npv: Negative predictive value. P(Y = - | Yhat = -). Estimated as: TN/(TN+FN).

pcfall: Prediction-conditioned fallout. P(Y = - | Yhat = +). Estimated as: FP/(TP+FP).

pcmiss: Prediction-conditioned miss. P(Y = + | Yhat = -). Estimated as: FN/(TN+FN).

rpp: Rate of positive predictions. P(Yhat = +). Estimated as: (TP+FP)/(TP+FP+TN+FN).

rnp: Rate of negative predictions. P(Yhat = -). Estimated as: (TN+FN)/(TP+FP+TN+FN).

phi: Phi correlation coefficient. Estimated as: (TP*TN - FP*FN)/sqrt((TP+FN)*(TN+FP)*(TP+FP)*(TN+FN)). Yields a number between -1 and 1, with 1 indicating a perfect prediction, 0 indicating a random prediction, and values below 0 indicating a worse than random prediction.

mat: Matthews correlation coefficient. Same as phi.

mi: Mutual information. I(Yhat, Y) := H(Y) - H(Y | Yhat), where H is the (conditional) entropy.

chisq: Chi-square test statistic. See ?chisq.test for details. Note that R might raise a warning if the sample size is too small.

odds: Odds ratio. Estimated as: (TP*TN)/(FN*FP). Note that the odds ratio is Inf or NA for cutoffs with FN=0 or FP=0.
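Most of the measures above are computed at every cutoff, so the resulting performance object carries paired lists of cutoffs (x.values) and measure values (y.values). As a minimal sketch, continuing with pred from the example above, one might locate the accuracy-maximizing cutoff like this:

## cutoff that maximizes accuracy, read off the performance object's slots
perf.acc <- performance(pred, "acc")
i <- which.max(perf.acc@y.values[[1]])
perf.acc@x.values[[1]][i]    ## cutoff
perf.acc@y.values[[1]][i]    ## accuracy at that cutoff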
lift: Lift value. P(Yhat = + | Y = +)/P(Yhat = +).

f: Precision-recall F measure (van Rijsbergen, 1979). Weighted harmonic mean of precision (Prec) and recall (Rec): F = 1/(alpha*(1/Prec) + (1-alpha)*(1/Rec)); with alpha = 1/2 the mean is balanced.

rch: ROC convex hull. A ROC (= tpr vs fpr) curve with concavities (which represent suboptimal choices of cutoff) removed (Fawcett 2001). Since the result is already a parametric performance curve, it cannot be used in combination with other measures.
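For instance, a lift chart plots the lift against the rate of positive predictions; a minimal sketch, continuing with pred from above:

## lift chart: lift (y axis) vs. rate of positive predictions (x axis)
perf.lift <- performance(pred, "lift", "rpp")
plot(perf.lift)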
auc: Area under the ROC curve. Since auc is cutoff-independent, this measure cannot be combined with other measures into a parametric curve. The partial area under the ROC curve up to a given false positive rate can be calculated by passing the optional parameter fpr.stop=0.5 (or any other value between 0 and 1) to performance.
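A minimal sketch of retrieving the full and a partial AUC, continuing with pred from above:

## overall AUC, and partial AUC up to a false positive rate of 0.1
performance(pred, "auc")@y.values[[1]]
performance(pred, "auc", fpr.stop = 0.1)@y.values[[1]]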
prbe: Precision-recall break-even point. The cutoff(s) where precision and recall are equal. Since prbe is just a cutoff-independent scalar, this measure cannot be combined with other measures into a parametric curve.
cal: Calibration error, i.e. the absolute difference between predicted confidence and observed reliability, estimated in a window sliding across the classifier scores; the window size can be adjusted by passing, e.g., window.size=200 to performance. E.g., if for several positive samples the output of the classifier is around 0.75, you might expect from a well-calibrated classifier that the fraction of them which is correctly predicted as positive is also around 0.75. In a well-calibrated classifier, the probabilistic confidence estimates are realistic. Only for use with probabilistic output (i.e. scores between 0 and 1).
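A minimal sketch of a calibration curve with a smaller window, continuing with pred from above (the scores in ROCR.simple lie between 0 and 1):

## calibration error estimated in a sliding window of 50 samples
perf.cal <- performance(pred, "cal", window.size = 50)
plot(perf.cal)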
mxe: Mean cross-entropy. Only for use with probabilistic output. Since mxe is just a cutoff-independent scalar, this measure cannot be combined with other measures into a parametric curve.

rmse: Root-mean-squared error. Only for use with numerical class labels. Since rmse is just a cutoff-independent scalar, this measure cannot be combined with other measures into a parametric curve.

sar: Score combining accuracy, area under the ROC curve, and root-mean-squared error into a single measure, in an attempt to create a more "robust" summary (see the references on the ROCR homepage).
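A minimal sketch of retrieving such cutoff-independent scalars, continuing with pred from above (this assumes scores in [0,1] and numeric 0/1 labels, as in ROCR.simple):

## cutoff-independent scalar summaries
performance(pred, "mxe")@y.values[[1]]     ## mean cross-entropy
performance(pred, "rmse")@y.values[[1]]    ## root-mean-squared error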
ecost: Expected cost. ecost has an obligatory x axis, the so-called 'probability-cost function'; thus it cannot be combined with other measures. Although when using ecost one is interested in the lower envelope of a set of lines, it might be instructive to plot the whole set of lines in addition to the lower envelope. An example is given in demo(ROCR).
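A minimal sketch of an expected-cost curve, continuing with pred from above (demo(ROCR) contains a more elaborate version):

## expected cost vs. probability-cost function
perf.ecost <- performance(pred, "ecost")
plot(perf.ecost)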
cost: Cost of a classifier. Accepts the optional parameters cost.fp and cost.fn, by which the costs for false positives and false negatives can be adjusted, respectively. By default, both are set to 1.
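A minimal sketch with explicit misclassification costs, continuing with pred from above:

## cost vs. cutoff, with a false negative twice as costly as a false positive
perf.cost <- performance(pred, "cost", cost.fp = 1, cost.fn = 2)
plot(perf.cost)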
Value:

An S4 object of class performance.
Note:

Here is how to call performance() to create some standard evaluation plots:

ROC curves: measure="tpr", x.measure="fpr".
Precision/recall graphs: measure="prec", x.measure="rec".
Sensitivity/specificity plots: measure="sens", x.measure="spec".
Lift charts: measure="lift", x.measure="rpp".
Author(s):

Tobias Sing <tobias.sing@mpi-sb.mpg.de>, Oliver Sander <osander@mpi-sb.mpg.de>
References:

A detailed list of references can be found on the ROCR homepage at http://rocr.bioinf.mpi-sb.mpg.de.
See Also:

prediction, prediction-class, performance-class, plot.performance
Examples:

## computing a simple ROC curve (x-axis: fpr, y-axis: tpr)
library(ROCR)
data(ROCR.simple)
pred <- prediction(ROCR.simple$predictions, ROCR.simple$labels)
perf <- performance(pred, "tpr", "fpr")
plot(perf)

## precision/recall curve (x-axis: recall, y-axis: precision)
perf1 <- performance(pred, "prec", "rec")
plot(perf1)

## sensitivity/specificity curve (x-axis: specificity, y-axis: sensitivity)
perf1 <- performance(pred, "sens", "spec")
plot(perf1)