Specifically they state:
This study compared PSA screening performance for detecting CaP in ERSPC-Rotterdam with the US population. The authors report that PSA screening performance in this analysis could provide quantitative explanations for the different mortality results of ERSPC-Rotterdam and the US Prostate, Lung, Colorectal and Ovarian trial. ... The model includes 18 detectable preclinical states in the natural history of CaP that are derived from combinations of clinical stage, grade, and metastatic stage. In this model, PSA testing and subsequent biopsy are modeled as a single test; therefore, PSA test sensitivity also depends on whether a positive test is followed by a biopsy.
... The predicted CaP incidence peak in the US was higher than the observed CaP incidence peak (13.3 vs. 8.1 cases per 1,000 man-years), suggesting a lower detection of CaP in the US than in ERSPC-Rotterdam. The lower sensitivity of PSA screening in the US compared with ERSPC-Rotterdam may be due to a higher PSA cutoff level for recommending biopsies in the US. Data suggest that the biopsy compliance rate is more than twice as high in the screening arm of ERSPC-Rotterdam. However, other factors included racial differences between the US and Rotterdam populations, the frequency of PSA testing, uncertain explanations for the drop in CaP incidence after 1992, and the inability to compute 95% confidence intervals for the sensitivity parameters.
The study found that PSA screening in the US did not detect as many CaPs as in ERSPC-Rotterdam, owing to the lower sensitivity of PSA testing followed by biopsy.
This study presents, in a somewhat convoluted way, the problems with PSA testing. They are:
1. PSA assays are not consistent. One assay will give different results from another; the differences we have measured can be as great as 50% from assay to assay. The stated variation is less than 10%, but the measured variation is closer to 50%. Thus a single test can have great variability.
2. Repeat testing with the same assay also varies because of lifestyle factors; an irritated prostate and the like can change PSA by as much as 25%.
3. PSA velocity, VPSA, is the dominant test metric, and it requires many years of tracking. It is derived from at least three consecutive measurements, averaging the rates of change between them (see the sketch after this list). Thus one needs a good baseline, at a minimum ten years of annual PSA data, to determine a reliable PSA velocity. The three-sample calculation is an attempt to reduce the variability from the two causes above.
4. There is a recent tendency to delay biopsy after an elevated PSA test. In fact, many internists and family physicians do not pay attention to velocity because they do not have access to the data! It is questionable whether they are even aware of velocity testing.
5. The problem today is that PSA testing looks at just one sample, and we know single samples are highly variable. Thus, rather than sampling every two years, the test should be performed annually and the long-term data recorded and analyzed.
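To make the velocity calculation in item 3 concrete, here is a minimal sketch of the common three-point convention: average the pairwise rates of change across consecutive annual measurements. The function name and the sample readings are hypothetical, included only for illustration; this is not the study's own procedure.

```python
from datetime import date

def psa_velocity(readings):
    """
    Estimate PSA velocity (ng/mL per year) from (date, psa) pairs
    ordered oldest to newest, using the three-point convention:
    average the rates of change between consecutive measurements.
    """
    if len(readings) < 3:
        raise ValueError("need at least three PSA measurements")
    rates = []
    for (d0, p0), (d1, p1) in zip(readings, readings[1:]):
        years = (d1 - d0).days / 365.25          # elapsed time in years
        rates.append((p1 - p0) / years)          # ng/mL per year for this interval
    return sum(rates) / len(rates)

# Hypothetical annual readings showing a rising trend.
history = [
    (date(2008, 3, 1), 1.1),
    (date(2009, 3, 1), 1.4),
    (date(2010, 3, 1), 1.9),
]
print(f"PSA velocity: {psa_velocity(history):.2f} ng/mL/yr")   # about 0.40
```

The point of averaging over several intervals, rather than reacting to one reading, is exactly the point made above: a single PSA value is too noisy to act on by itself.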
The problem of having data on patient histories is pandemic. PSA is but one example; so too are HbA1c, blood pressure, HDL, and many other variables. Medicine is a science and an art that is often driven by change: a change in some chemistry measurement, in weight, in sight, in moles, and the like. Thus it is imperative, good HIT notwithstanding, that patients develop their own records and bring them with them to the physician. Noticing a change can save a life.