Applying Clinical Trial Results in Psychiatric Practice


Mark Tonelli, a pulmonologist at the University of Washington who is also trained in philosophy, has written a series of papers which put the role of clinical trial results in perspective and argue for considering several other kinds of information when making clinical decisions. Here I will discuss two of his papers and comment on how his arguments apply to psychiatric practice.

In the first, Tonelli notes that clinical trials provide information about populations of patients but not about how an intervention will affect any individual patient. In addition, clinical trials do not consider values important to the patient, the profession, or society. He goes on to discuss five “topics” relevant to clinical decision making. Three are sources of medical information: clinical research results, clinical experience, and pathophysiological reasoning; the others are patient values and preferences, and system features.

Clinical trials study treatment effects in research groups that are often narrowly defined, and their results may not apply to “real world” patients, who often have more than one psychiatric diagnosis as well as medical and social problems which directly affect treatment feasibility and outcome. In addition, clinical trials are limited to specific places and times. The clinician, then, has to decide how much the results of a particular trial are likely to apply to the patient sitting in front of her. Psychiatrists face this problem with pharmacological treatment of depression, where clinical trials have difficulty showing a benefit of antidepressant medication over placebo. There are many possible explanations for this, which Peter Kramer discusses in his book Ordinarily Well: The Case for Antidepressants; for now let's just say that clinical trials in psychiatry are fraught with distortions and very hard to do well.

Clinical experience includes the personal experience of individual clinicians as well as reports from colleagues and experts. It is subject to many biases. More experience with more patient outcomes is better than less, and Tonelli considers expert opinion to be the highest form of clinical experience. Along with its potential for biases, clinical experience tends to be static, as clinicians may be slow to adopt promising strategies with which they lack experience. In psychiatry we are familiar with some of the biases introduced by pharmaceutical marketing. In addition, groupthink can develop in training programs and other institutions. When I was in medical school at Yale, perphenazine was the preferred antipsychotic, but when I got to the Massachusetts Mental Health Center, chlorpromazine and trifluoperazine were most popular. While there are differences among these drugs, there is no justification for an overall preference for one over another.

Pathophysiological reasoning is the foundation of Western medicine: we try to understand the biological perturbations which result in disease and use this knowledge to diagnose and treat our patients. Unfortunately, we lack the biological knowledge to apply it to most psychiatric disorders, and selecting, for example, an antidepressant based on its purported monoamine effects likely reflects a false sense of knowledge and certainty, or even the pseudoscientific understanding which has been employed in pharmaceutical marketing. But biological reasoning can serve as a check on spurious findings, such as apparent benefits of homeopathy, and it can support the predictions of clinical research. (Spurious explanations and false confidence may also improve treatment results through placebo effects, but that is another discussion.)

In a second article, Tonelli discusses “compellingness,” the aspects of clinical trial results which make them practically relevant. He notes that clinical practice guidelines differ from decisions about individual patients in that they, like clinical trials, focus on groups of patients and cannot include the complexities of individual cases or the values patients attach to particular outcomes. In addition, individual clinicians vary in how persuasive they find particular research results, and implementation of guidelines is easier in some settings than others.

Tonelli discusses the effects of clinicians’ prior understanding and beliefs on the interpretation of research results. Findings consistent with previous knowledge and clinical experience are likely to weigh more heavily in clinicians’ thinking than information that differs from what they believe. This is not surprising, but it should also remind us that rather than dismissing surprising new results, it is best to re-evaluate prior beliefs in the light of the new findings.

Other factors contributing to compellingness are the consistency of a finding with other research and whether there is potential bias in the conduct or reporting of the research, as can occur with studies funded by entities which stand to gain from publication of the results.

Several factors influence a clinician’s belief that a finding is applicable to an individual patient, including applicability, effect size, the value of an outcome, safety, the time to effect, and alternatives. Applicability is how closely the participants in a research study resemble a particular patient or the patients the clinician regularly sees. In addition, clinicians are likely to be persuaded by studies which report large effect sizes, since a strong treatment effect may overwhelm other confounding factors.

Researchers choose outcome variables for various reasons, including convenience and ease of measurement. Clinicians and patients, on the other hand, look at outcomes that are meaningful and valued by the individual patient. A drop in a depression rating score may not be as important to a patient as reduced mental torment, improved relationships, or the ability to work. Clinicians also pay attention to safety: a clinical trial may not be large enough to evaluate the risk of rare but serious adverse events, but clinicians may be concerned about such risks based on experience, pathophysiological reasoning, and post-marketing reports. The cardiac conduction phenomenon of Q-T prolongation, for example, is associated with increased risk of serious ventricular arrhythmias and death. Several psychiatric medications, including citalopram, ziprasidone, and quetiapine, have been associated with some Q-T prolongation, though what this means in terms of actual arrhythmias and harm to patients is not clear. At least one drug company has used this information to dissuade clinicians from prescribing another company’s product.

My sense is that clinicians vary a lot in how they respond to information about uncommon but serious risks—some are anxious about any such risk, while others are inclined to ignore such unlikely problems. It seems likely that treatments which require special measures to manage the risk of rare side effects, such as electrocardiograms for Q-T prolongation, sodium levels for oxcarbazepine-induced hyponatremia, or careful patient education about drug-induced rashes with lamotrigine, are less appealing to clinicians and patients.

Some treatments produce noticeable effects quickly, while the benefits of others take much longer to appear. Quick results, as Tonelli notes, can reinforce a treatment’s appeal to clinicians and patients. But we are charged with helping patients care for themselves in the long run, so from an ethical point of view, long-term results are important. I am certainly familiar with the urge to provide a treatment that will reduce acute distress, such as prescribing a benzodiazepine to an anxious substance abuser. While such measures are sometimes necessary, we must keep in mind that they may be difficult to discontinue and carry long-term risks. The same can be said about privileging pharmacotherapy over psychotherapy, which may have long-term prophylactic benefits that pharmacotherapy lacks.

Tonelli notes that while clinicians and patients have to choose among various treatment approaches, there is a paucity of research comparing active treatments. In practice, it is much more helpful to know that a therapy is likely to be more effective than another treatment or usual care than to know it is better than placebo. We do have a number of comparative treatment studies of antidepressants and psychotherapy for depression; unfortunately, they have rarely shown much advantage for one treatment over another.

As a philosopher and ethicist, Tonelli also addresses the stewardship aspects of applying clinical research. Clinicians are properly concerned about the cost of treatment and likely to value research on low-cost therapies. Ease and cost of implementation are more important in other areas of medicine, where new treatments may require new facilities and equipment, but therapies which require special training and credentialing, such as buprenorphine for opioid use disorders, or extensive laboratory monitoring, like lithium and clozapine, will be less appealing, especially when they are new and such procedures are not already in place. And electroconvulsive therapy, transcranial magnetic stimulation, and experimental therapies like ketamine infusion do require substantial investments.

Tonelli's papers do not address the numerous technical and statistical issues which complicate the interpretation of clinical trial results and fall into the area of evidence-based medicine. But he provides a comprehensive discussion of the other sources of information clinicians consider as they make decisions about treatment. Clearly, a clinical trial will be appealing if it finds a large effect size in a population similar to a clinician’s caseload, is consistent with or extends both a clinician’s prior knowledge and expert opinion in the field, is biologically plausible, promises significant benefit for individual patients, is reasonably safe, has both short-term and long-term benefits, is less expensive than the alternatives, and is reasonably easy to implement. In the absence of credible and applicable clinical trial data, we have to turn to pathophysiological reasoning, which at this point is not of much help in psychiatry, and clinical experience, especially that of expert consultants who have access to multiple colleagues’ experience.

Further Reading:

Tonelli MR. Integrating clinical research into clinical decision making. Ann Ist Super Sanita 2011; 47:26-30.

Tonelli MR. Compellingness: assessing the practical relevance of clinical research results. J Eval Clin Pract 2012; 18:962-967.