The National Institutes of Health got a $1.1-billion check from Congress this year to sort out which medical treatments are actually most effective. But it would cost a whole lot less to turn reams of Food and Drug Administration documents into useful information about the pros and cons of specific prescription drugs.
That’s the argument made by doctors Lisa Schwartz and Steven Woloshin, two members of the Dartmouth Institute for Health Policy & Clinical Practice in Hanover, N.H. They are specialists in improving the way medical information is communicated to doctors, patients, policymakers and the general public, and they find plenty of fault with the way FDA handles drug information.
In Thursday’s edition of the New England Journal of Medicine, they describe how FDA experts in epidemiology, statistics, pharmacology and clinical medicine spend months poring over the data from clinical trials. They produce lengthy review documents that include tons of valuable information, such as whether approval for a drug was a slam dunk or a close call.
Alas, very little of that information finds its way onto drug labels, yet labels are where doctors and patients get most of what they know about the medications they prescribe and take.
For instance, the FDA approved Rozerem in 2005 to treat chronic insomnia. But even a careful read of the drug’s label provides no hint that it just squeaked by. Nor would it clue anyone in to the fact that in one phase 3 clinical trial, the drug failed to reduce the number of patients with insomnia, boost the total duration of shut-eye, improve sleep quality or reduce awakenings, Schwartz and Woloshin write.
And that’s not an isolated example. The 403-page review of another sleep aid, Lunesta, noted that the average patient in the biggest phase 3 trial continued to meet the clinical criteria for insomnia despite taking the drug. Typical patients also had just as many problems with next-day alertness and functioning. None of that data is on the label. The drug was approved in 2004, and a multi-million-dollar ad campaign pushed sales up to $800 million last year.
Sometimes the omissions can be dangerous. When Zometa was tested in patients with hypercalcemia of malignancy, the reviewers noticed that patients who got an 8-milligram dose had more kidney damage and a greater overall risk of death during the clinical trials than patients who got a 4-milligram dose. The drug was approved in 2001, but it wasn’t until 2008 that the label was amended to state: “Do not use doses greater than 4 mg.”
What to do? Schwartz and Woloshin recommend that FDA reviewers start producing executive summaries of their reports. These summaries should contain the primary results from the phase 3 trials and emphasize any uncertainties the reviewers had about the drug.
They also suggest that reviewers compile “Prescription Drug Fact Boxes” that lay out the benefits and harms of a drug in an easy-to-read table. Reviewers practiced doing this as part of a pilot test conducted by Schwartz and Woloshin, and the boxes must have been good: the FDA’s Risk Communication Advisory Committee has endorsed them, and the agency is now deciding whether and how to use them.
“We don’t need to wait for new comparative-effectiveness results in order to improve practice,” they wrote in the journal. “We need to better disseminate what is already known.”
-- Karen Kaplan
Photo: Very little of what the FDA knows about medications is printed on the labels. Credit: Karen Tapia-Andersen/Los Angeles Times