Risk Of Developing Liver Cancer After HCV Treatment

Thursday, October 25, 2012

Medicine rarely a slam dunk, despite splashy studies

NEW YORK | Tue Oct 23, 2012 4:31pm EDT

NEW YORK (Reuters Health) - Next time a research finding leaves you slack-jawed, thinking it's too good to be true, you might just be right, according to a massive new analysis tracking the fate of splashy medical studies.

It turns out that 90 percent of the "very large" effects described in initial reports on medical treatments begin to shrink or vanish as more studies are done.

"If taken literally, such huge effects should change everyday clinical and public health practice on the spot," Dr. John Ioannidis of the Stanford School of Medicine in California told Reuters Health by email. "Our analysis suggests it is better to wait to see if these very large effects get replicated or not."

Ioannidis has made headlines before with research showing that studies in medicine are often contradicted by later evidence, a phenomenon that has been referred to as "the decline effect."

The new work, published in the Journal of the American Medical Association, pools more than 3,000 research reviews done by the Cochrane Collaboration, a prestigious international organization that evaluates medical evidence.

Nearly one in 10 of the analyses in those reviews showed a very large treatment effect - harmful or beneficial - in the first published trial. But usually the reports trumpeting astounding findings were based on small, less reliable experiments.

There are several possible explanations, but smaller experiments are generally more likely to yield extreme results by chance alone. As more data accumulate, the estimates drift back toward the average - something statisticians call "regression to the mean."
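The small-sample point is easy to demonstrate with a quick simulation. The sketch below is not from the study; the true effect size, trial sizes, and "very large" cutoff are all illustrative assumptions. It runs thousands of mock two-arm trials at several sizes and counts how often chance alone produces an extreme-looking estimate.

import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # modest true benefit (standardized difference) - illustrative
SIGMA = 1.0         # outcome standard deviation
N_TRIALS = 10000    # number of simulated trials per sample size

def simulated_effect(n_per_arm):
    """Run one mock two-arm trial and return its estimated effect size."""
    control = [random.gauss(0.0, SIGMA) for _ in range(n_per_arm)]
    treated = [random.gauss(TRUE_EFFECT, SIGMA) for _ in range(n_per_arm)]
    return statistics.mean(treated) - statistics.mean(control)

for n in (10, 50, 500):
    estimates = [simulated_effect(n) for _ in range(N_TRIALS)]
    huge = sum(1 for e in estimates if abs(e) > 0.8)  # arbitrary "very large" cutoff
    print("n=%4d per arm: mean estimate %+.3f, %.1f%% of trials look 'very large'"
          % (n, statistics.mean(estimates), 100.0 * huge / N_TRIALS))

With 10 patients per arm, a noticeable share of trials cross the "very large" cutoff purely by chance; with 500 per arm, essentially none do, and the estimates cluster near the true, modest effect - regression to the mean playing out in miniature.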

Out of the tens of thousands of treatment comparisons they looked at, only one - a respiratory intervention in newborns - showed a reliable, very large drop in death rates.

So taking new research with a grain of salt may be appropriate, according to Ioannidis.

"Keep some healthy skepticism about claims for silver bullets, perfect cures, and huge effects," he advised.

The team also found that trials showing big effects more often looked at lab values such as bone density or blood pressure than did those with more moderate findings.

Such lab values are tied to health outcomes - say, bone fractures and strokes - and are easy to measure. They are also easier for drugmakers to target than the health outcomes themselves.

But that doesn't necessarily mean drugs that tweak your numbers will have the health effects you are looking for, said Dr. Andrew Oxman of the Norwegian Knowledge Centre for the Health Services in Oslo.

"Even if they reduce the lab value, you can't be sure they reduce your risk of heart attack or stroke or fracture," Oxman, who wrote an editorial about the new findings, told Reuters Health. "There are lots of examples where things start to be used and have entered the market based on surrogate outcomes and then actually proved harmful."

He mentioned the heart rhythm drugs encainide and flecainide, which for many years were given to people with acute heart attacks. But then trials showed they were actually bad for these patients.

"These drugs were given by well-meaning clinicians, but they actually killed more people than the Vietnam War did," Oxman said.

He said people should understand that while medicine can be lifesaving and reduce human suffering, it usually comes with side effects and the benefits are often modest.

"A lot of things don't have these slam-bang effects, so people have to look at the trade-offs, the harms," he said.

SOURCE: bit.ly/PoG8om and bit.ly/PoGkEf, Journal of the American Medical Association, online October 23, 2012.

Many Studies' 'Wow' Results Usually Fade in Follow-Up

By Serena Gordon
HealthDay Reporter

TUESDAY, Oct. 23 (HealthDay News) -- When initial findings about an experimental drug or device sound too good to be true, they probably are, according to a new study.

Stanford University researchers found that after a single study reports large benefits for a new medical intervention, additional studies almost always find a smaller treatment effect.

The study authors suspect that small study sizes contribute to the initially inflated benefits.

"Beware of small studies claiming extraordinary benefits or extraordinary harms of medical interventions; the truth about these may be more modest," said Dr. John Ioannidis, a professor of medicine, health research and policy and statistics at Stanford's Prevention Research Center in California.

Ioannidis is senior author of the study, published in the Oct. 24/31 issue of the Journal of the American Medical Association.

Health experts know that most medical interventions introduced today have modest effects. Still, clinical trials occasionally report finding large effects.

Dr. Andrew Oxman, author of an accompanying journal editorial, added that "few clinical interventions have been found to have big effects on outcomes that are important to patients; for example, to cut the risk of a heart attack, a stroke or some other bad outcome in half."

Typically, when big effects are reported, "it has been in small trials that do not provide reliable evidence and it has been on laboratory outcomes [such as cholesterol levels], which may or may not translate into big effects on outcomes that are important to patients," noted Oxman, a senior researcher at the Norwegian Knowledge Center for the Health Services in Oslo, Norway.

Ioannidis and his colleagues wanted to see how often these large benefits were reported in clinical trials, and to see if those benefits persisted when additional research was done.

For this, they went through 3,545 systematic reviews. A systematic review is a collection and critical evaluation of all of the currently available studies on a given topic. The researchers found 3,082 with data that met their criteria.

From those reviews, they examined more than 85,000 "forest plots." These are graphs that display the main results from each study in a review, making it relatively easy to see the strength of the evidence for a specific treatment or intervention.
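For readers who have never seen one, a forest plot puts each trial's point estimate and confidence interval on a common axis, with a line marking "no effect." A toy version - the trial labels and all the numbers below are invented purely for illustration - might be drawn like this with matplotlib:

import matplotlib.pyplot as plt

# Hypothetical trials: (label, effect estimate, 95% CI low, 95% CI high).
trials = [
    ("Trial A (small)",  0.90, -0.10, 1.90),
    ("Trial B (medium)", 0.40,  0.00, 0.80),
    ("Trial C (large)",  0.20,  0.00, 0.40),
    ("Pooled estimate",  0.25,  0.10, 0.40),
]

fig, ax = plt.subplots(figsize=(6, 3))
for i, (label, est, lo, hi) in enumerate(reversed(trials)):
    ax.plot([lo, hi], [i, i], color="black")  # horizontal bar = confidence interval
    ax.plot(est, i, "s", color="black")       # square = point estimate
ax.axvline(0, linestyle="--", color="grey")   # dashed line = no effect
ax.set_yticks(range(len(trials)))
ax.set_yticklabels([t[0] for t in reversed(trials)])
ax.set_xlabel("Effect size (arbitrary units)")
ax.set_title("Toy forest plot")
plt.tight_layout()
plt.show()

Note the pattern the reviewers were looking for: the smallest trial has the wildest estimate and the widest interval, while the pooled estimate sits close to the largest trial's result - the strength of the evidence is visible at a glance.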

Just under 10 percent of the reviews showed a large benefit in the first published trial, while another 6 percent had a study that showed a large benefit after the first trial was published. The majority of reviews (84 percent), however, had no studies that showed a large benefit to any treatment or intervention, the investigators found.

In follow-up studies, 90 percent of the large benefits seen in first trials failed to hold up at that size. And 98 percent of the large benefits that first appeared in later trials likewise faded as more evidence accumulated.

Out of all of the reviews, only one intervention -- extracorporeal oxygenation for severe respiratory failure in newborns -- showed a large positive effect on mortality in a systematic review, without any concerns about the quality of the evidence in the studies, the researchers said.

"I think some healthy skepticism and a conservative approach may be warranted if only a single study is available -- even more so if that study is small and/or had obvious problems and biases," said Ioannidis. "Most of the time, waiting for some better, larger, more definitive evidence is a good idea. No need to rush."

Oxman added, "I suspect that many patients tend to think that an intervention either works, or does not work, without fully considering the size of the effect and potential adverse effects."

Ioannidis and Oxman suggested that increased health and statistical literacy would help consumers make more informed choices. Oxman also said that a simple drug "fact box" that explains benefits and risks could help, too.
