Probabilistic placebo: something that shouldn't work, but might
[Jan. 29th, 2013, 11:27 pm]
[Epistemic Status | Speculation only. The data in all of these studies are too noisy and confusing for my liking, and I'm not even sure I'm interpreting them correctly, especially in the case of the last one.]
Here is a cute little graph that purports to be about the effect of nicotine patches on quitting smoking but is actually much more interesting:
The graph compares three different conditions from two different studies. In the first study, participants were randomized to either get a real nicotine patch or a fake (placebo) nicotine patch. As you can see, at 24 weeks 2.8% of people with the fake patch quit and 5.6% of people with the real patch did. Because there were a lot of people in the study, this was a significant result and provides evidence that nicotine patches really do help you quit smoking beyond just a placebo effect.
The last number is what happened to people who weren't in a placebo-controlled trial. They were told outright that they were definitely getting a real nicotine patch. 8.2% of them managed to stay off cigarettes. Note that this number is better than the people who got the real nicotine patch in the placebo-controlled study.
So it looks like even though the people behind that 5.6% number were getting real nicotine patches, the fact that they couldn't be sure the patch was real lowered its effectiveness a little below what it achieved in the confident people openly receiving the real patch.
Papakostas and Fava (2008) study this from a different and much cooler angle. They analyze like two hundred studies of antidepressants, a class of drugs notorious for suffering from a very strong placebo effect. In particular, they're looking for studies with different numbers of active and placebo "arms", which means subjects in different studies have different chances of receiving placebo.
They find that people getting real antidepressants do a little better than people getting placebos, which is what most people find. But they also find something much more interesting, which is that antidepressants do worse in proportion to how likely people think they are to be receiving a placebo. In studies where most people get the real drug, the drug is very slightly more effective than in studies where most patients know they'll probably be receiving placebo. Neater still, the same is true of placebo effects.
In the most drug-heavy studies, where people had a greater than 4/5 chance of getting the real thing, about 51% of people respond to drug and 38% to placebo. But in studies where people had a 1/2 chance of getting a placebo, suddenly only 49% of people respond to drug and 31% to placebo - a significant difference given the 36,000 patients they're looking at. The two authors regress their data and state that "For each 10% increase in the probability of receiving placebo [there will be] a 1.8% and 2.6% decrease in the probability of responding to antidepressants or placebo, respectively." The effect of placebo probability on the placebo response, in particular, was significant at p < .01.
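To make the quoted regression concrete, here is a minimal sketch of the linear relationship. The slopes (-1.8 and -2.6 percentage points per 10% increase in placebo probability) come from the quote above; the anchor point (51% drug response and 38% placebo response at roughly a 20% placebo probability) is my own assumption for illustration, since the paper's exact intercepts aren't given here:

```python
def predicted_response(p_placebo, anchor_p, anchor_rate, slope_per_10pct):
    """Linearly extrapolate a response rate (in %) from an assumed anchor point,
    using the per-10%-placebo-probability slope quoted from Papakostas and Fava."""
    return anchor_rate + slope_per_10pct * (p_placebo - anchor_p) / 0.10

# Drug response: assumed anchor of 51% response at ~20% placebo probability
drug_50_50 = predicted_response(0.50, 0.20, 51.0, -1.8)
# Placebo response: assumed anchor of 38% under the same condition
placebo_50_50 = predicted_response(0.50, 0.20, 38.0, -2.6)

print(f"Predicted drug response at 50/50: {drug_50_50:.1f}%")      # 45.6%
print(f"Predicted placebo response at 50/50: {placebo_50_50:.1f}%")  # 30.2%
```

Under these assumed anchors the line predicts roughly 45.6% and 30.2% at a 50/50 trial, in the same ballpark as the observed 49% and 31% - a rough sanity check, not a reproduction of the paper's fit.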
So people not only change their drug responses based on knowing they're in a placebo-controlled trial, but also based on what their precise numerical probability of getting placebo is.
One could complain that researching depression is kind of like playing the "study the placebo effect" game on easy mode. And nobody lied to any of the participants in this study, stuck their bodies into gigantic high-tech machines, or pumped their blood full of radioactive chemicals, which means it barely qualifies as real science at all. Luckily, we have another experiment that solves all three of these defects.
Lidstone et al (2010) took some patients with Parkinson's Disease, which is associated with low levels of the neurotransmitter dopamine. Then they told them they might give them medications that would raise their dopamine. In fact, they told the patients that they had a 25%, 50%, 75%, or 100% chance of receiving this medication; otherwise they would receive a placebo.
In fact this was all a total lie; everyone got the placebo. Then they pumped everyone's blood full of radioactive chemicals and stuck them into a PET scanner, which measured the amount of dopamine released in their brains. They found that the higher the probability of receiving real medication, the higher the amount of dopamine released (trend did not reach significance) except in the 100% case, where dopamine was lowest of all. These are pretty weird results.
The authors noted that, along with supporting the motor activities and facial expressions impaired in Parkinson's, dopamine is involved in risk and reward evaluation. So perhaps the experiment ended up measuring not just a probabilistic placebo effect on the release of dopamine, but the fact that the release of dopamine mediates the probabilistic placebo effect.
(fun fact: any paper on which you print the results section of this experiment spontaneously transforms into a Moebius strip)
So in retrospect that was probably the worst possible condition to study probabilistic placebo effects on, and I'm sorry I mentioned it. It may be that the depression trial cited above is the only really solid work that has been done in this very interesting area.
This is probably good, because studies from boring foreign neuropsychopharmacology journals rarely make it to bioethicists, and the bioethicists must on no account be allowed to hear about this. Imagine if they realized that the possibility of receiving a placebo decreased the efficacy of an active treatment. Randomized controlled trials would be banned within a week.