Use of complementary medicine may impair your ability to detect the effectiveness of evidence-based medicine

Alternative medicine is often promoted with the argument that it can do no harm. Even though its advocates are aware that its effectiveness has not been scientifically demonstrated, they believe it is harmless and should therefore be used. “If not alone, you should at least use it in combination with evidence-based treatments”, they say, “just in case”. Because alternative medicine is so often used in combination with standard treatments, it is commonly referred to as Complementary and Alternative Medicine (CAM). Indeed, when it is used to complement conventional medicine, most people find little to argue against it (apart from the cost), and the phrase “what harm can it do?” becomes truly convincing.

But this strategy is not without risk. An experiment that Ion Yarritu, David Luque and I have just published in the British Journal of Psychology [1] shows that even treatments which are physically innocuous may seriously distort our belief system, sometimes with fatal consequences. When people believe that a bogus treatment works, they may be unable to learn that another treatment, one that is really effective, is the actual cause of their recovery. This finding is important because it reveals one of the mechanisms by which people might decide to abandon an effective treatment in favor of a bogus one.

It was already well known in the Experimental Psychology literature that in situations where a potential cause and an outcome are actually independent of each other, people tend to develop the illusion that one is causing the other. This is called the Illusion of Causality (or the Illusion of Control, in cases where the potential cause is our own behavior). These illusions are particularly evident when the probability with which the desired outcome occurs spontaneously is high (e.g., a disease with a high rate of spontaneous recovery). In addition, the illusion is further enhanced when the number of cases in which the potential (though bogus) cause is present is also high (e.g., when a patient follows the bogus treatment frequently, or when the majority of cases we observe are patients who are following the treatment). That is, cause-effect illusions are particularly robust when people are exposed to scenarios in which both the probability of the outcome and the probability of the cue are high (e.g., Blanco et al., 2013 [2]).
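
To make this concrete, the ΔP index is the standard way to quantify the contingency in these scenarios: the probability of the outcome given the potential cause, minus the probability of the outcome in its absence. Below is a minimal sketch in Python (the trial counts are illustrative numbers, not data from any study) showing that the illusion-prone scenario, frequent outcome plus frequent cue, can have a true contingency of exactly zero:

```python
def delta_p(a, b, c, d):
    """ΔP contingency index from a 2x2 table of trial counts:
    a: cause present, outcome present    b: cause present, outcome absent
    c: cause absent,  outcome present    d: cause absent,  outcome absent
    ΔP = P(outcome | cause) - P(outcome | no cause); 0 means no relation.
    """
    return a / (a + b) - c / (c + d)

# Frequent outcome (70%) and frequent bogus cause (80 of 100 cases),
# yet the true contingency is exactly zero:
print(delta_p(a=56, b=24, c=14, d=6))  # 56/80 - 14/20 = 0.70 - 0.70 = 0.0
```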

It was also well known that when there are several potential causes to which the occurrence of a given effect can be attributed, those causes compete among themselves for association with the outcome. Indeed, if two potential causes occur simultaneously and one of them has previously been strongly associated with the outcome, then the new one will not become associated with the outcome (or will become only weakly associated). These effects are collectively known as cue-competition effects (see, e.g., the blocking effect), and many variants of these experiments have been published by many laboratories, with many different procedures and many different species as subjects, including humans, rats, and honey bees, to name a few.
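
The blocking effect mentioned above also has a classic formal account, the Rescorla-Wagner model, in which cues compete because learning is driven by a shared prediction error. Here is a minimal sketch of that model, not the one tested in our paper, with arbitrary parameter values chosen only for illustration:

```python
def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
    """Rescorla-Wagner learning: on each trial, every present cue's
    associative strength V is updated by alpha * beta * prediction error,
    where the error is lambda (if the outcome occurs, else 0) minus the
    summed V of all cues present on that trial."""
    V = {}
    for cues, outcome in trials:
        prediction = sum(V.get(c, 0.0) for c in cues)
        error = (lam if outcome else 0.0) - prediction
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * beta * error
    return V

# Blocking: cue A alone is paired with the outcome in Phase 1, then the
# compound A+B is paired with the same outcome in Phase 2. Because A already
# predicts the outcome, the prediction error is near zero and B learns little.
phase1 = [({"A"}, True)] * 20
phase2 = [({"A", "B"}, True)] * 20
print(rescorla_wagner(phase1 + phase2))  # V["A"] near 1.0, V["B"] near 0.0
```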

However, in all cases we were aware of, there was always a real cause (cause A) which was initially associated with the outcome, and it was this association of the real cause A with the outcome that competed with the acquisition of a new association between another (real) cause, B, and the same outcome. An example of this effect is a person who is taking medicine A to sleep better and then, once this medication is perfectly associated with good sleep, starts following another treatment, B, in addition to medicine A. Now both A and B should in principle be credited with the good sleep. However, this person will typically keep attributing sleep to A and will not be able to learn that B is effective.

With this in mind, we asked whether the same effect would occur with illusory causes. That is, would subjects who had acquired the illusion that a bogus treatment A works be able to learn that a second treatment, B, taken in addition to A, is effective? In other words, would the illusion that a bogus treatment is effective prevent learning that an evidence-based treatment is?

Our experimental participants were 147 university students who took part voluntarily. They played a computer game in which they took the role of medical doctors. On each trial, the screen showed the medical record of a fictitious patient who had either taken a medicine or not, and the next screen indicated whether the patient felt better or not. The percentage of fictitious patients who reported feeling better was 70%; that is, most of them improved. Importantly, however, this percentage was identical regardless of whether the patients took the medicine or not. That is, the drug was completely useless.

During the first part of the experiment, subjects were divided into two groups, and we manipulated the proportion of fictitious patients who took the drug in each group. This was our critical independent variable. As mentioned above, this manipulation should create a stronger illusion that drug A was effective in the group exposed to a higher probability of the potential cause, that is, in the group that saw more patients taking the drug. As expected, this part of the experiment replicated previous results: a stronger illusion was observed in that group.

The novel result of this experiment comes from Phase 2. This phase was identical for all subjects. They observed patients who took drug A (toward which they had already developed either a strong or a weak illusion, depending on the group they had been assigned to) in combination with a new drug, B. The procedure was the same as before, except that the two drugs were now taken together: on each trial, the fictitious patient took both drugs or neither, and then reported feeling better or not. Importantly, the percentage of recovery was now higher when the patients took the drugs than when they did not. That is, medicine B was effective and produced a genuine improvement over the no-drug baseline.
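
As a concrete illustration of this two-phase design, the sketch below generates trial sequences with the structure just described and checks the programmed contingencies with a trial-based variant of the ΔP index from the earlier sketch. The exact cue proportions for the two groups and the Phase 2 recovery rates are assumptions for illustration, not the values used in the actual experiment:

```python
import random

def make_trials(n, p_cue, p_outcome_cue, p_outcome_no_cue, rng):
    """Generate n (cue_present, outcome_present) trials."""
    trials = []
    for _ in range(n):
        cue = rng.random() < p_cue
        p_recover = p_outcome_cue if cue else p_outcome_no_cue
        trials.append((cue, rng.random() < p_recover))
    return trials

def delta_p_trials(trials):
    """Trial-based ΔP: P(outcome | cue) - P(outcome | no cue)."""
    with_cue = [o for c, o in trials if c]
    without = [o for c, o in trials if not c]
    return sum(with_cue) / len(with_cue) - sum(without) / len(without)

rng = random.Random(0)
# Phase 1: drug A is useless (70% recovery either way); the groups differ
# only in how often patients take it (0.8 vs 0.2 are assumed proportions).
high_cue_group = make_trials(100, 0.8, 0.7, 0.7, rng)
low_cue_group = make_trials(100, 0.2, 0.7, 0.7, rng)
# Phase 2, identical for everyone: the A+B compound genuinely helps
# (0.9 vs 0.5 recovery is an assumed positive contingency).
phase2 = make_trials(100, 0.5, 0.9, 0.5, rng)
print(delta_p_trials(high_cue_group))  # near 0: A does nothing
print(delta_p_trials(low_cue_group))   # near 0 as well
print(delta_p_trials(phase2))          # clearly positive: A+B helps
```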

After seeing all the patients from Phase 1 and Phase 2, participants were asked about the effectiveness of drug B. The interesting result is that participants in the strong illusion group gave lower estimates of the efficacy of B than those in the weak illusion group. That is, their belief in the effectiveness of the bogus treatment A prevented them from learning that treatment B was effective.

In sum, developing the illusion that a Complementary and Alternative treatment is effective may prevent the acquisition of true causal knowledge concerning the efficacy of evidence-based treatments.

References

  1. Yarritu, I., Matute, H., & Luque, D. (2015). The dark side of cognitive illusions: When an illusory belief interferes with the acquisition of evidence-based knowledge. British Journal of Psychology. Advance online publication. doi: 10.1111/bjop.12119
  2. Blanco, F., Matute, H., & Vadillo, M. A. (2013). Interactive effects of the probability of the cue and the probability of the outcome on the overestimation of null contingency. Learning & Behavior, 41, 333-340. doi: 10.3758/s13420-013-0108-8
