Jackdaws love my big sphinx of quartz
Scott

Probabilistic placebo: something that shouldn't work, but might [Jan. 29th, 2013|11:27 pm]

[Epistemic Status | Speculation only. The data in all of these studies are too noisy and confusing for my liking, and I'm not even sure I'm interpreting them correctly, especially in the case of the last one.]

Here is a cute little graph that purports to be about the effect of nicotine patches on quitting smoking but is actually much more interesting:



The graph compares three different conditions from two different studies. In the first study, participants were randomized to either get a real nicotine patch or a fake (placebo) nicotine patch. As you can see, at 24 weeks 2.8% of people with the fake patch quit and 5.6% of people with the real patch did. Because there were a lot of people in the study, this was a significant result and provides evidence that nicotine patches really do help you quit smoking beyond just a placebo effect.

The last number is what happened to people who weren't in a placebo-controlled trial. They were told outright that they were definitely getting a real nicotine patch. 8.2% of them managed to stay off cigarettes. Note that this number is better than the one for the people who got the real nicotine patch in the placebo-controlled study.

So it looks like even though the people behind that 5.6% number were getting real nicotine patches, the fact that they couldn't be sure the patch was real lowered its effectiveness a little below its effectiveness in the satisfied and confident people who openly received the real patch.
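For concreteness, the headline claim here is just a difference between two proportions. The excerpt above doesn't give the arm sizes, so the sketch below assumes hypothetical arms of 1,500 people each, purely to show how such a comparison gets tested; only the quit rates come from the graph.

# Illustrative only: a two-sided two-proportion z-test on the quit rates quoted
# above. The arm sizes (n = 1500 per group) are hypothetical, not from the study.
from math import sqrt, erf

def two_proportion_ztest(hits_a, n_a, hits_b, n_b):
    """z statistic and two-sided p-value for the difference between two proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail, both sides
    return z, p_value

n = 1500                       # hypothetical arm size
fake_quit = round(0.028 * n)   # 2.8% quit on the placebo patch
real_quit = round(0.056 * n)   # 5.6% quit on the blinded real patch

z, p = two_proportion_ztest(real_quit, n, fake_quit, n)
print(f"z = {z:.2f}, two-sided p = {p:.5f}")

With arms that size, the 2.8% vs. 5.6% gap comes out clearly significant; with much smaller arms it wouldn't, which is presumably why it matters that "there were a lot of people in the study."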

Papakostas and Fava (2008) study this from a different and much cooler angle. They analyze like two hundred studies of antidepressants, a class of drugs notorious for a very strong placebo effect. In particular, they're looking for studies with different numbers of active (drug) and passive (placebo) "arms", which means their subjects have different chances of receiving placebo.

They find that people getting real antidepressants do a little better than people getting placebos, which is what most people find. But they also find something much more interesting, which is that antidepressants do worse in proportion to how likely people think they are to be receiving a placebo. In studies where most people get the real drug, the drug is very slightly more effective than in studies where most patients know they'll probably be receiving placebo. Neater still, the same is true of placebo effects.



In the most drug-heavy studies, where people had a greater than 4/5 chance of getting the real thing, about 51% of people respond to drug and 38% to placebo. But in studies where people had a 1/2 chance of getting a placebo, suddenly only 49% of people respond to drug and 31% to placebo - a significant difference given the 36,000 patients they're looking at. The two authors regress their data and state that "For each 10% increase in the probability of receiving placebo [there will be] a 1.8% and 2.6% decrease in the probability of responding to antidepressants or placebo, respectively." The p-value for the effect of placebo probability on the placebo response, in particular, was less than .01.
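To make the quoted regression concrete, here is a minimal sketch of the linear relationship it describes. Only the slopes (1.8 and 2.6 percentage points per 10% increase in placebo probability) come from the quoted result; the baselines are rough anchors taken from the ~51%/38% figures for the most drug-heavy studies, so the absolute predictions are illustrative rather than the paper's fitted values.

# Illustrative linear model of the Papakostas & Fava (2008) result quoted above.
# Slopes are the quoted ones; the baselines at a 20% placebo probability are
# rough anchors (~51% drug response, ~38% placebo response), not fitted intercepts.

DRUG_SLOPE = -1.8 / 10      # change in % responding to drug per 1-point rise in P(placebo)
PLACEBO_SLOPE = -2.6 / 10   # change in % responding to placebo per 1-point rise in P(placebo)

BASELINE_PROB = 20.0        # % chance of placebo in the most drug-heavy studies
BASELINE_DRUG = 51.0
BASELINE_PLACEBO = 38.0

def predicted_response(placebo_prob_pct):
    """Predicted % responding to drug and to placebo at a given placebo probability."""
    delta = placebo_prob_pct - BASELINE_PROB
    return (BASELINE_DRUG + DRUG_SLOPE * delta,
            BASELINE_PLACEBO + PLACEBO_SLOPE * delta)

for prob in (20, 33, 50):
    drug, placebo = predicted_response(prob)
    print(f"P(placebo) = {prob:>2}%: ~{drug:.1f}% respond to drug, ~{placebo:.1f}% to placebo")

At a 50/50 design this toy model predicts roughly a 30% placebo response, close to the 31% quoted above; the drug prediction comes out a bit under the quoted 49%, which isn't surprising since the regression is fit over the whole set of studies rather than these two coarse buckets.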

So people not only change their drug responses based on knowing they're in a placebo-controlled trial, but also based on what their precise numerical probability of getting placebo is.

One could complain that researching depression is kind of like playing the "study the placebo effect" game on easy mode. And nobody lied to any of the participants in this study, stuck their bodies into gigantic high-tech machines, or pumped their blood full of radioactive chemicals, which means it barely qualifies as real science at all. Luckily, we have another experiment that solves all three of these defects.

Lidstone et al (2010) took some patients with Parkinson's Disease, which is associated with low levels of the neurotransmitter dopamine. Then they told the patients they might be given a medication that would raise their dopamine. Specifically, they told them that they had a 25%, 50%, 75%, or 100% chance of receiving this medication; otherwise they would receive a placebo.

In fact this was all a total lie; everyone got the placebo. Then they pumped everyone's blood full of radioactive chemicals and stuck them into a PET scanner, which measured the amount of dopamine released in their brains. They found that the higher the probability of receiving real medication, the higher the amount of dopamine released (the trend did not reach significance), except in the 100% case, where dopamine was lowest of all. These are pretty weird results.



The authors noted that, along with being necessary for motor activity and facial expression (the functions harmed in Parkinson's), dopamine is involved in risk and reward evaluation. So perhaps the experiment ended up measuring not just a probabilistic placebo effect on the release of dopamine, but the fact that the release of dopamine mediates the probabilistic placebo effect.

(fun fact: any paper on which you print the results section of this experiment spontaneously transforms into a Moebius strip)

So in retrospect that was probably the worst possible condition to study probabilistic placebo effects on, and I'm sorry I mentioned it. It may be that the depression trial cited above is the only really solid work that has been done in this very interesting area.

This is probably good, because studies from boring foreign neuropsychopharmacology journals rarely make it to bioethicists, and the bioethicists must on no account be allowed to hear about this. Imagine if they realized that the possibility of receiving a placebo decreased the efficacy of an active treatment. Randomized controlled trials would be banned within a week.

Comments:
From: Roman Davis
2013-01-30 08:50 am (UTC)

The Parkinson's effect is particularly interesting. Reminds me of the study from a few years back that showed that knowing that someone was praying for you increased complications after heart surgery.

The explanation posited by some was that the patients would get "performance anxiety", in other words they were putting mental effort towards making the prayer "work", and this stressed them out, which isn't good for a fragile heart.

As a former Charismatic, this sounds highly plausible.

We totally need to do more tests where we measure the amount of neurotransmitters in people's brains. Maybe with a little practice (and a lot of feedback) you could control them at will.



From: st_rev
2013-01-30 11:24 am (UTC)

The 2010 study had a whopping 35 subjects, and those error bars are as big as the actual effects. Pretty sure that's pure statistical noise.
From: Alex Schell
2013-02-01 04:17 am (UTC)

I won't put much stock in those results either, but not because of anything about error bars (if those had been SD bars, some of those differences might have been borderline significant even with 8-9 subjects per group, despite SD bar overlap). The abstract doesn't say what those bars are, but it reveals that they draw their conclusions from differences of significance levels rather than significant differences, which is a sin (http://dl.dropbox.com/u/1018886/Temp/NieuwenhuisEtAl2011.pdf).
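For readers unfamiliar with the error being referred to: finding that one effect clears p < .05 while another doesn't says nothing, by itself, about whether the two effects differ; the difference has to be tested directly. A toy sketch with made-up numbers:

# Toy sketch (made-up data, not from the Lidstone study) of "a difference in
# significance levels is not a significant difference": each group is tested
# against zero, but only the direct A-vs-B test speaks to whether they differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30
group_a = rng.normal(loc=0.6, scale=1.0, size=n)   # modest true effect
group_b = rng.normal(loc=0.15, scale=1.0, size=n)  # small true effect

_, p_a = stats.ttest_1samp(group_a, 0.0)       # "A is significant" test
_, p_b = stats.ttest_1samp(group_b, 0.0)       # "B is not significant" test
_, p_diff = stats.ttest_ind(group_a, group_b)  # the test that licenses comparing A and B

print(f"A vs 0: p = {p_a:.3f}")
print(f"B vs 0: p = {p_b:.3f}")
print(f"A vs B: p = {p_diff:.3f}  # often far from .05 even when the first two straddle it")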

From: Johnwbh
2013-01-30 01:09 pm (UTC)
Additional tests that might be interesting

Is the effect linear? I would expect it to be clumped at certain points (e.g. 75-95% would clump as "practically certain", 5-45% would clump as "practically no chance").

I'd also like to see the effect of subjects' mathematical and statistical knowledge.
From: xuenay
2013-01-30 02:56 pm (UTC)

Is there something that I'm missing here? "(Placebo effect + drug) is going to be stronger than a placebo effect or drug alone" sounds like a pretty obvious result, as does "the placebo effect is going to lose power if people don't actually believe that the treatment works" - after all, isn't that the very reason for using active placebos?
From: squid314
2013-01-30 08:00 pm (UTC)

I think the interesting finding is that the placebo effect is probabilistic - that even knowing there's a chance you received placebo, even when you probably received the real drug, decreases the power of the placebo effect proportionally to that chance.
From: xuenay
2013-01-31 03:10 pm (UTC)

Ah, that makes sense.
From: jimrandomh
2013-01-30 07:24 pm (UTC)

My very-tentative interpretation of dopamine, based on reading and on limited self-experimentation with dopamine-influencing supplements, is that dopamine feels like urgency, that excess dopamine and anxiety are one and the same; and the widespread belief that dopamine is a "reward chemical" is wrong by about ten minutes. This model seems to be supported by the Parkinson's-placebo graph.
From: jin_shei
2013-02-14 10:51 pm (UTC)

What is interesting (as a Parkinson's specialist of sorts) is that the majority of patients experience heightened anxiety, rather than reduced anxiety. PD people do respond dramatically to placebo, however.
From: neq1.wordpress.com
2013-01-31 04:54 am (UTC)
great finds

I wish I had known about these studies when I wrote my paper on this topic (http://www.ncbi.nlm.nih.gov/pubmed/21989161). I did a lot of speculating that this might be going on, but didn't successfully find examples.
From: squid314
2013-01-31 03:16 pm (UTC)
Re: great finds

I am glad someone else finds this topic interesting and I will definitely read that paper!