Sensitivity and specificity


Sensitivity and specificity are statistical measures of the performance of a binary classification test, also known in statistics as a classification function:

- Sensitivity (also called the true positive rate) measures the proportion of actual positives that are correctly identified as such (for example, the percentage of sick people who are correctly identified as having the condition).
- Specificity (also called the true negative rate) measures the proportion of actual negatives that are correctly identified as such (for example, the percentage of healthy people who are correctly identified as not having the condition).
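
In terms of confusion-matrix counts, these definitions amount to sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP). The following Python sketch computes both rates from such counts; the example numbers are purely illustrative and not taken from any study.

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: proportion of actual positives correctly identified."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: proportion of actual negatives correctly identified."""
    return tn / (tn + fp)

# Hypothetical counts for a diagnostic test:
# 90 sick people flagged as sick (TP), 10 sick people missed (FN),
# 80 healthy people cleared (TN), 20 healthy people wrongly flagged (FP).
print(sensitivity(tp=90, fn=10))   # 0.9
print(specificity(tn=80, fp=20))   # 0.8
```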
Thus sensitivity quantifies the avoidance of false negatives, and specificity the avoidance of false positives. For any test, there is usually a trade-off between the two measures. For instance, in an airport security setting, where the test is for potential threats to safety, scanners may be set to trigger on low-risk items like belt buckles and keys (low specificity) in order to reduce the risk of missing objects that do pose a threat to the aircraft and those aboard (high sensitivity). This trade-off can be represented graphically as a receiver operating characteristic (ROC) curve. A perfect predictor would be 100% sensitive (i.e., all sick individuals are identified as sick) and 100% specific (i.e., no healthy individuals are identified as sick); in theory, however, any predictor has a minimum error bound known as the Bayes error rate.
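
To illustrate the trade-off described above, the following Python sketch sweeps a decision threshold over made-up classifier scores: lowering the threshold catches more true positives (higher sensitivity) but produces more false alarms (lower specificity), which is the relationship an ROC curve plots. The scores, labels, and the helper rates_at are hypothetical and exist only for this example.

```python
# Illustrative data: classifier scores and the true labels (1 = actual positive).
scores = [0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.7, 0.8, 0.85, 0.95]
labels = [0,   0,   0,    1,   0,    1,   1,   0,   1,    1]

def rates_at(threshold: float) -> tuple[float, float]:
    """Return (sensitivity, specificity) when scores >= threshold are called positive."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Each threshold yields one point on the ROC curve; lower thresholds trade
# specificity for sensitivity, mirroring the airport-scanner example.
for t in (0.3, 0.5, 0.7, 0.9):
    sens, spec = rates_at(t)
    print(f"threshold={t:.1f}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```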