Drug utilization review is required of all state Medicaid programs and is also used by most private-sector prescription programs. In theory, these reviews examine prescription records in order to alert physicians to possible drug interactions or to the availability of alternative - perhaps safer or cheaper - drugs. The Penn researchers, however, were unable to identify any positive effect of these programs, either in improving clinical outcomes or in preventing prescribing errors.
"We compared the rate of drug review alerts over a four-year period and found that the existing system has no detectable effect in changing the way drugs are prescribed," said Sean Hennessy, PharmD, PhD, Assistant Professor in Penn's Department of Biostatistics and Epidemiology, and lead author of the report. "No matter how many notice letters are sent out, the rate of prescribing errors never changes. Given the lack of effectiveness - and the potential for harm cited in previous research - there is not much to recommend for keeping these costly review programs."
Typically, a review program uses computers to screen prescription information for potential drug interaction conflicts, based on a pre-established set of guidelines. When the software spots a violation of these rules, it marks the record for review. The program staff then sorts these marked records to determine whether or not an alert notice, usually a letter, should be sent to the prescribing physician.
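The screening step described above can be pictured as a simple rule-based check. The sketch below is purely illustrative - the drug names, interaction rules, and record fields are assumptions for the example, not details from the study or from any actual Medicaid review system.

```python
# Illustrative sketch of rule-based prescription screening.
# The rules and record format here are hypothetical examples.

INTERACTION_RULES = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "myopathy risk",
}

def screen_record(record):
    """Flag a patient record if any pair of drugs violates a rule."""
    drugs = {d.lower() for d in record["drugs"]}
    alerts = []
    for pair, reason in INTERACTION_RULES.items():
        if pair <= drugs:  # both drugs of the interacting pair are present
            alerts.append((sorted(pair), reason))
    return alerts

# A flagged record would then go to program staff, who decide whether
# to mail an alert letter to the prescribing physician.
record = {"patient_id": "A123", "drugs": ["Warfarin", "Aspirin", "Metformin"]}
flagged = screen_record(record)
```

In a real program the rule set would be a large, maintained knowledge base and the screening would run against claims data, but the flag-then-review workflow is the same.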
"Anecdotal evidence suggests that most practicing physicians simply ignore these alert letters and find them to be useless," said Brian L. Strom, MPH, MD, Professor and Chair of Penn's Department of Biostatistics and Epidemiology, and co-author of the study. According to Strom, the drug utilization review programs do not account for the underlying reasons physicians must consider when prescribing specific drugs for their patients. "It is like one of those pesky "help" messages that pops up on your computer's word processor - the software wants to help you write a letter, while you are trying to do something completely different," said Strom. "Eventually, you learn to ignore the message."
Drug utilization review programs are supposed to work through two mechanisms: "direct" effects and "spillover" effects. Direct effects apply to patients who are identified in alerts and benefit from a change in therapy. Spillover refers to the possibility that physicians, once alerted to a particular drug interaction or alternative therapy, will apply that information to the care of other patients. Since most alert notices are sent months after a given drug is prescribed, the spillover effect is thought to be the driving rationale behind the review programs.
The Penn study aimed to measure the spillover effect on the rate of prescribing errors identified by the drug utilization review process: if a particular error notice changed physicians' behavior, the change would appear as a decline in similar errors over time. The researchers reviewed data from six state Medicaid programs and, using the same computer rules that the states use, found no evidence of a spillover effect at all.
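The logic of that measurement can be sketched in a few lines: compute the error rate per period and look for a downward trend. This is only an illustration of the idea - the data are invented, and the study's actual statistical methods are not described here.

```python
# Illustrative sketch (not the study's analysis): a "spillover" effect
# would show up as a downward trend in the per-period rate of a given
# prescribing error after alert letters begin. Data below are made up.

def error_rate_trend(errors, prescriptions):
    """Least-squares slope of the per-period error rate over time."""
    rates = [e / p for e, p in zip(errors, prescriptions)]
    n = len(rates)
    t_mean = (n - 1) / 2
    r_mean = sum(rates) / n
    num = sum((t - t_mean) * (r - r_mean) for t, r in enumerate(rates))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

# A flat series of error counts yields a slope near zero - i.e., no
# detectable spillover, the pattern the Penn researchers reported.
errors = [50, 49, 51, 50, 50, 49]
prescriptions = [10000] * 6
slope = error_rate_trend(errors, prescriptions)
```

A real analysis would of course use many error types, adjust for prescription volume, and test the trend formally, but the question is the same: does the error rate fall after alerts go out?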
The researchers also cite a previous study suggesting that these reviews could be harmful: an alert may influence a prescribing physician to switch to an alternative drug that does not violate the rules but is nonetheless harmful, or prompt the abrupt discontinuation of drugs that should instead be tapered slowly, which can itself cause harm.
"Despite their cost, and the enormous amount of energy devoted by the well-intentioned professionals who run these programs, the current model simply does not seem to work. The current mandate should be withdrawn," said Hennessy. "Following that, the model should be completely re-designed and then tested before widespread implementation of a new approach. We require evidence of safety and effectiveness for our drugs, and should do the same with our quality improvement programs."