Likelihood ratios are a good measure of the clinical value of a diagnostic test.
Positive likelihood ratio = sensitivity/(1 − specificity), where sensitivity is the fraction of patients with the disease who test positive, and specificity is the fraction of patients without the disease who test negative.
So if a test has 90% sensitivity and 85% specificity, its positive likelihood ratio is 0.9/(1 − 0.85) = 6. A positive result means the odds that the patient has the disease are 6 times what they were before the test results were known.
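A minimal sketch of that calculation, using the sensitivity and specificity figures from the example above (the variable names are mine, not standard terminology):

```python
# Positive likelihood ratio from sensitivity and specificity.
sensitivity = 0.90   # fraction of diseased patients who test positive
specificity = 0.85   # fraction of disease-free patients who test negative

lr_positive = sensitivity / (1 - specificity)
print(round(lr_positive, 2))  # 6.0
```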
The negative likelihood ratio is (1 − sensitivity)/specificity, so for this test a negative result would leave the patient's odds of having the disease at (1 − 0.9)/0.85 ≈ 0.12 times what they were before testing.

The nice thing about LRs is that they can readily be combined with other information to give an overall probability of a disease or condition. Let's say that based on other information – patient history, symptoms, other tests – a physician estimates the probability that a patient has a disease at 1 in 10, which is odds of 1 to 9. A prudent physician would be reluctant to initiate an aggressive intervention based on those odds.

One caution: LRs multiply odds, not probabilities. A positive result from the test would turn the pre-test odds of 1:9 into post-test odds of 6:9, which corresponds to a probability of 6/15 = 40%. Not a slam dunk, but far more likely to warrant intervention. A negative result would shrink the odds to about 0.12 × 1/9 ≈ 0.013, a probability of roughly 1.3%, not at all likely.
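The odds-conversion bookkeeping is easy to get wrong by hand, so here is a small sketch of the whole chain – pre-test probability to odds, multiply by the LR, back to probability. The helper function name is my own, not a standard one:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayes' theorem in odds form: convert probability to odds,
    apply the likelihood ratio, convert back to probability."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Figures from the running example: 90% sensitivity, 85% specificity.
sens, spec = 0.90, 0.85
lr_pos = sens / (1 - spec)        # positive LR, ≈ 6
lr_neg = (1 - sens) / spec        # negative LR, ≈ 0.12

print(round(post_test_probability(0.10, lr_pos), 3))  # 0.4
print(round(post_test_probability(0.10, lr_neg), 3))  # 0.013
```

Note that naively multiplying the 0.1 probability by 6 would give 60%; working in odds gives the correct 40%.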
Smart clinicians tell me that a test with an LR of 5 (or an NLR of 0.2) is generally good enough to provide reliable guidance in making critical treatment decisions.