1. Sensitivity and specificity
2. ROC curve
3. Positive and negative predictive value
4. Positive and negative likelihood ratio
1. Sensitivity and specificity
Sensitivity is the proportion of patients with the disease who have a positive test result (true positives).

Sensitivity = TP / (TP + FN) = Number of true positive test results / All patients with disease

(TP = True positives, FN = False negatives)
Specificity is the proportion of persons without the disease who have a negative test result (true negative test results in controls).

Specificity = TN / (TN + FP) = Number of true negative test results / All patients without disease

(TN = True negatives, FP = False positives)
The control group could be a population of healthy individuals. However, to evaluate the specificity of a test realistically, the control population should consist of patients with diseases that are important in the differential diagnosis. For example, the specificity of a test for celiac disease should be evaluated with a population of patients with other gastrointestinal diseases, such as inflammatory bowel diseases, gastrointestinal infections etc.
Example:
The prevalence of rheumatoid arthritis (RA) in the population tested is 2 %. Therefore, 100 of 5,000 tested individuals will have rheumatoid arthritis. 4,900 are healthy or have another disease but not rheumatoid arthritis.
With the test in this example, 73 of 100 RA patients had a positive test result (true positives). 27 were not detected and therefore had a negative test result (false negatives). In the control group of 4,900 individuals who don't have RA, 73 were positive (false positives) and 4,827 were negative (true negatives).
          Test positive   Test negative   Total
RA                   73              27     100
Non-RA               73           4,827   4,900
Total               146           4,854   5,000
The sensitivity of this test is 0.73 or 73 % (number of true positives = 73/number of patients = 100) and the specificity is 0.985 or 98.5 % (number of true negatives = 4,827/number of controls = 4,900).
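As a minimal sketch (using the counts from the table above), sensitivity and specificity can be computed directly from the four cells of the 2x2 table:

```python
# Counts from the RA example above
TP = 73     # true positives: RA patients with a positive test
FN = 27     # false negatives: RA patients with a negative test
FP = 73     # false positives: non-RA individuals with a positive test
TN = 4827   # true negatives: non-RA individuals with a negative test

sensitivity = TP / (TP + FN)   # true positives / all patients with disease
specificity = TN / (TN + FP)   # true negatives / all patients without disease

print(f"Sensitivity: {sensitivity:.3f}")   # 0.730
print(f"Specificity: {specificity:.3f}")   # 0.985
```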
Relevance of specificity
In the example above, 73 out of 5,000 individuals would test false positive. This means 73 individuals who are referred to a specialist or, in the worst case, treated with toxic medication.
If the test has a lower specificity of, for example, 92 % (which still sounds very reasonable), the false positive results increase dramatically.
          Test positive   Test negative   Total
RA                   73              27     100
Non-RA              392           4,508   4,900
Total               465           4,535   5,000
This creates a risk of a wrong RA diagnosis in 392 individuals, more than five times as many as with a specificity of 98.5 %.
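The effect can be reproduced with a short calculation (a sketch, assuming a fixed control group of 4,900 individuals as above): the number of false positives is simply the number of controls times (1 - specificity).

```python
controls = 4900   # individuals without RA, as in the example above

for specificity in (0.985, 0.92):
    false_positives = controls * (1 - specificity)
    print(f"Specificity {specificity:.1%}: about {false_positives:.0f} false positives")
```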
2. ROC curve
The sensitivity of a test is higher when a low cutoff is chosen, but a low cutoff directly lowers the specificity. The cutoff of an autoimmune test is therefore always a balance between sensitivity and specificity: the two values correlate inversely, and to every sensitivity value corresponds a specificity value. This relationship can be illustrated in a ROC curve.
Definition:
In signal detection theory, a receiver operating characteristic (ROC), or simply ROC curve, is a graphical plot of the sensitivity (true positive rate) vs. 1 - specificity (false positive rate).
Example:
In a study by Bizzarro et al. (Clin Chem 2007; 53:1527-33), 11 tests for the detection of antibodies associated with rheumatoid arthritis were compared. With the serum panel tested, the following ROC curve was obtained for EliA CCP:
An optimal cutoff for a test is chosen where sensitivity and specificity are as high as possible. For EliA CCP the optimal cutoff reveals a specificity of 98.5 % and a sensitivity of 73 %.
In the same study, with the same sera, a classical RF test was compared. Using the recommended cutoff, sensitivity and specificity were 54 % and 86.1 %, respectively.
For an optimal comparability of different tests, the specificity can be set to a specific value and the respective sensitivities can be calculated with the help of a ROC curve. In this example, the respective sensitivities were compared at defined specificities of 99 %, 98 % and 97 %.
Assay      Sens. at recommended cutoff   Spec. at recommended cutoff   Sens. at 99 % spec.   Sens. at 98 % spec.   Sens. at 97 % spec.
EliA CCP   73 %                          98.5 %                        69 %                  74 %                  74 %
RF         54 %                          86.1 %                        13 %                  17 %                  17 %
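The construction of a ROC curve can be sketched as follows. The antibody levels below are hypothetical illustration values, not data from the study; each candidate cutoff yields one (1 - specificity, sensitivity) point, and plotting the points gives the curve.

```python
# Hypothetical antibody levels (arbitrary units) -- illustration only
patient_scores = [12.0, 8.5, 15.2, 3.1, 9.8, 20.4, 6.7, 11.3]   # patients with disease
control_scores = [1.2, 4.5, 2.8, 0.9, 5.6, 3.3, 2.1, 7.0]       # controls

def roc_points(patients, controls):
    """Return one (1 - specificity, sensitivity) point per candidate cutoff,
    sweeping the cutoff from the highest observed value to the lowest."""
    points = []
    for cutoff in sorted(set(patients + controls), reverse=True):
        sensitivity = sum(s >= cutoff for s in patients) / len(patients)
        false_positive_rate = sum(s >= cutoff for s in controls) / len(controls)
        points.append((false_positive_rate, sensitivity))
    return points

for fpr, sens in roc_points(patient_scores, control_scores):
    print(f"1 - specificity = {fpr:.2f}, sensitivity = {sens:.2f}")
```

Lowering the cutoff moves along the curve toward higher sensitivity and higher false positive rate, which is exactly the trade-off described above.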
3. Positive and negative predictive value
Definition:
The positive predictive value (PPV), also called precision rate or posttest probability of disease, is the proportion of patients with positive test results who actually have the disease. The predictive value is related to the sensitivity and specificity of the test or screening method.

PPV = TP / (TP + FP) = Number of patients with a positive test result / All positive results

(TP = True positives, FP = False positives)
The negative predictive value (NPV) is the proportion of patients with negative test results who are correctly diagnosed as not having the disease.

NPV = TN / (TN + FN) = Number of controls with a negative test result / All negative results

(TN = True negatives, FN = False negatives)
Example:
          Test positive   Test negative   Total
RA                   73              27     100
Non-RA               73           4,827   4,900
Total               146           4,854   5,000
Taking the same example as in the paragraph about sensitivity and specificity (see above) for a test in rheumatoid arthritis (RA), the PPV is 50 % (73 TP / 146 P). The NPV is 99 % (4,827 TN / 4,854 N).
With this population, a clinician must be aware that half of the positive results are in individuals not having RA. A positive result predicts the disease with a probability of 50 %. On the other hand, a negative result predicts with 99 % that the disease is not present.
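Using the counts from the table above, the two predictive values can be computed as a small sketch:

```python
# Counts from the RA example above
TP, FN, FP, TN = 73, 27, 73, 4827

ppv = TP / (TP + FP)   # probability of disease given a positive result
npv = TN / (TN + FN)   # probability of no disease given a negative result

print(f"PPV: {ppv:.2f}")    # 0.50
print(f"NPV: {npv:.3f}")    # 0.994
```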
Dependence on pretest probability
The predictive values depend strongly on the pretest probability, which in this example is a rather low 2 %. If the pretest probability were 10 % (e.g. because the samples come from a specialised rheumatology lab), the data would change accordingly:
          Test positive   Test negative   Total
RA                  370             130     500
Non-RA               60           3,940   4,000
Total               430           4,570   5,000
The performance of the test and the clinical features of the marker don't change with the pretest probability; the sensitivity and specificity of the test are therefore relatively fixed. With a 5 times higher pretest probability, however, the positive predictive value increases from 50 % to 86 % (370 TP / 430 P) and the negative predictive value decreases from 99 % to 86 % (3,940 TN / 4,570 N).
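The dependence on pretest probability can be sketched with a small helper (the function name `predictive_values` and the population size of 5,000 are illustrative assumptions): sensitivity and specificity stay fixed while the prevalence varies.

```python
def predictive_values(sensitivity, specificity, prevalence, n=5000):
    """PPV and NPV of a test with fixed sensitivity and specificity,
    applied to a population with the given pretest probability."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = sensitivity * diseased    # true positives
    fn = diseased - tp             # false negatives
    tn = specificity * healthy     # true negatives
    fp = healthy - tn              # false positives
    return tp / (tp + fp), tn / (tn + fn)

# Same test (sensitivity 73 %, specificity 98.5 %) at 2 % and 10 % prevalence
for prevalence in (0.02, 0.10):
    ppv, npv = predictive_values(0.73, 0.985, prevalence)
    print(f"Prevalence {prevalence:.0%}: PPV = {ppv:.2f}, NPV = {npv:.2f}")
```

With the exact sensitivity of 73 %, the PPV at 10 % prevalence comes out at about 84 %; the table above uses rounded counts (370 rather than 365 true positives), which yields 86 %.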
4. Positive and Negative Likelihood Ratio
Definition:
The likelihood ratio incorporates both the sensitivity and specificity of the test and provides a direct estimate of how much a test result will change the odds of having a disease.
The likelihood ratio for a positive result (positive LR) tells you how much the odds of the disease increase when a test is positive.

pos LR = [TP / (TP + FN)] / [FP / (FP + TN)] = Sensitivity / (1 - Specificity)

(TP = True positives, TN = True negatives, FP = False positives, FN = False negatives)
The likelihood ratio for a negative result (negative LR) tells you how much the odds of the disease decrease when a test is negative.

neg LR = [FN / (TP + FN)] / [TN / (FP + TN)] = (1 - Sensitivity) / Specificity

(TP = True positives, TN = True negatives, FP = False positives, FN = False negatives)
Example:
          Test positive   Test negative   Total
RA                   73              27     100
Non-RA               73           4,827   4,900
Total               146           4,854   5,000
Taking the same example as in the paragraphs above for a test in rheumatoid arthritis (RA), the positive LR is very high: (73/100) / (73/4,900) = 49. If a patient has a positive test result, the likelihood that he has RA is very high. The negative LR is (27/100) / (4,827/4,900) = 0.27.
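A short sketch shows how the likelihood ratios follow from the table above and how the positive LR converts a pretest probability into a posttest probability (odds = probability / (1 - probability)):

```python
# Counts from the RA example above
TP, FN, FP, TN = 73, 27, 73, 4827

sensitivity = TP / (TP + FN)
specificity = TN / (TN + FP)

pos_lr = sensitivity / (1 - specificity)   # how much a positive result raises the odds
neg_lr = (1 - sensitivity) / specificity   # how much a negative result lowers the odds

# Updating a 2 % pretest probability after a positive result:
pretest_odds = 0.02 / (1 - 0.02)
posttest_odds = pretest_odds * pos_lr
posttest_prob = posttest_odds / (1 + posttest_odds)

print(f"Positive LR: {pos_lr:.0f}")   # 49
print(f"Negative LR: {neg_lr:.2f}")   # 0.27
print(f"Posttest probability after a positive result: {posttest_prob:.2f}")
```

The posttest probability of 0.50 matches the PPV of 50 % computed from the same table, as it must.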
Interpretation of likelihood ratio:
                                                        Negative LR   Positive LR
No clinical value (test is not useful)                  1             1
Small difference that may be relevant                   0.2 – 0.5     2 – 5
Modest, but substantial difference                      0.1 – 0.2     5 – 10
Clinically important difference (test is very useful)   < 0.1         > 10
The marker in the example above has a very high positive LR of 49. Therefore, a positive result indicates the disease with high probability. A negative result, however, does not exclude the disease: with a negative LR of 0.27, its clinical usefulness is only small.