The centerpiece of the country’s drug-testing system — the randomized, controlled trial — had worked. Except in one respect: doctors had no more clarity after the trial about
how to treat brain cancer patients than they had before. Some patients
did do better on the drug, and indeed, doctors and patients insist that
some who take Avastin significantly beat the average. But the trial was
unable to discover these “responders” along the way, much less examine
what might have accounted for the difference... Indeed, even after some 400 completed clinical trials in various
cancers, it’s not clear why Avastin works (or doesn’t work) in any
single patient. “Despite looking at hundreds of potential predictive
biomarkers, we do not currently have a way to predict who is most likely
to respond to Avastin and who is not,” says a spokesperson for
Genentech, a division of the Swiss pharmaceutical giant Roche, which
makes the drug.
The author concludes, and I wonder if he knows what he is concluding:
Part of the novelty lies in a statistical technique called Bayesian
analysis that lets doctors quickly glean information about which
therapies are working best. There’s no certainty in the assessment, but
doctors get to learn during the process and then incorporate that
knowledge into the ongoing trial.
We have argued this point again and again. BRAF inhibitors are Bayesian in a sense: we understand one of the deficiencies in a melanoma for a class of patients, and we can then address that specific deficiency. That is in essence Bayesian, namely P[Outcome | Patient Genomic Condition].
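The quoted passage describes exactly this kind of "learn as you go" updating. Below is a minimal Python sketch, under stated assumptions, of Beta-Binomial updating of P[response | genomic condition], with one posterior per genomic stratum that is revised after every patient. The stratum names, the flat prior, and the interim outcomes are hypothetical illustrations, not data from any actual trial.

```python
# A minimal sketch (not any trial's actual analysis) of Bayesian updating of
# response probabilities, tracked separately for each genomic stratum and
# revised after every patient, so the trial "learns as it goes."
# Stratum names, prior, and outcomes below are hypothetical.

from collections import defaultdict


class StratifiedBetaBinomial:
    """Beta-Binomial posterior for P[response | genomic condition], one per stratum."""

    def __init__(self, prior_alpha=1.0, prior_beta=1.0):
        # Beta(1, 1) is a flat prior over the response rate in each stratum.
        self.alpha = defaultdict(lambda: prior_alpha)
        self.beta = defaultdict(lambda: prior_beta)

    def observe(self, stratum: str, responded: bool) -> None:
        # Conjugate update: a response raises alpha, a non-response raises beta.
        if responded:
            self.alpha[stratum] += 1
        else:
            self.beta[stratum] += 1

    def posterior_mean(self, stratum: str) -> float:
        # Posterior mean of the response rate given everything seen in this stratum.
        return self.alpha[stratum] / (self.alpha[stratum] + self.beta[stratum])


if __name__ == "__main__":
    model = StratifiedBetaBinomial()
    # Hypothetical interim data: (genomic condition, responded to therapy).
    interim = [
        ("BRAF V600E mutant", True), ("BRAF V600E mutant", True),
        ("BRAF V600E mutant", False), ("BRAF wild type", False),
        ("BRAF wild type", False), ("BRAF wild type", True),
    ]
    for stratum, responded in interim:
        model.observe(stratum, responded)
        # After each patient the trial has an updated P[response | stratum]
        # available to guide treatment of the next patient.
    for stratum in ("BRAF V600E mutant", "BRAF wild type"):
        print(stratum, round(model.posterior_mean(stratum), 2))
```

The conjugate Beta-Binomial form is used here only because it makes "incorporate that knowledge into the ongoing trial" a one-line update; an actual adaptive design would layer adaptive randomization and stopping rules on top of posteriors like these.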
In fact, the development of our understanding of cancer pathways, intercellular matrices and their interactions, and the immune system and its control mechanisms will allow for patient-specific, and even tumor-cell-specific, therapeutics.
The problem we have faced in therapeutics is that they have been meat-cleaver approaches: cell cycle inhibitors that blocked the cycle in all dividing cells, causing hair loss and other debilitating effects.
Now we can understand what genetic pathways have broken down and we can address that specific problem.
The Times author also states:
In a famous 2005 paper
published in The Journal of the American Medical Association, Dr.
Ioannidis, an authority on statistical analysis, examined nearly four
dozen high-profile trials that found a specific medical intervention to
be effective. Of the 26 randomized, controlled studies that were
followed up by larger trials (examining the same therapy in a bigger
pool of patients), the initial finding was wholly contradicted in three
cases (12 percent). And in another 6 cases (23 percent), the later
trials found the benefit to be less than half of what was first
reported.
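As a check on the arithmetic, the percentages are taken against the 26 follow-up trials, not the nearly four dozen original studies: 3/26 ≈ 12 percent contradicted outright, and 6/26 ≈ 23 percent showing less than half of the originally reported benefit.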
That JAMA article by Ioannidis states:
Clinical research on important questions about the efficacy of medical
interventions is sometimes followed by subsequent studies that either reach
opposite conclusions or suggest that the original claims were too strong.
Such disagreements may upset clinical practice and acquire publicity in both
scientific circles and in the lay press. Several empirical investigations
have tried to address whether specific types of studies are more likely to
be contradicted and to explain observed controversies. For example, evidence
exists that small studies may sometimes be refuted by larger ones
This paper has received a great deal of criticism. Yet the biggest concern, in my opinion, is that it lumps all trials together. Many trials are truly gross guesses; they are in effect meat-cleaver therapeutics. The newer genetically based therapeutics totally change that perspective.