You can find all kinds of stories online of people reporting negative experiences with different forms of psychiatric medication. However, there's inherent bias in that kind of self-reporting of side effects, so you really can't use that information to generalize about how safe or effective a treatment is. In this post, we'll look at where that bias comes from, and why it's not a great idea to base your treatment decisions on what people say online.
Let’s talk about the negative!
People who have positive effects from treatments are less likely to be motivated to write about them. Are you more likely to contact a company to give good feedback, or even neutral feedback, rather than rant about a problem? The motivation is far stronger to rant. If we're considering people coming off of antidepressants, who's going to bother writing about having no issues? It's the people experiencing brain zaps who are going to be motivated to get loud. Those negative experiences are totally valid, but they give a very skewed picture of what the average experience is.
With the stigma surrounding treatments like psych meds and electroconvulsive therapy (ECT), people are even less likely to be motivated to write about their positive experiences. After all, if they do, they’ll probably be met with comments that medications are toxic or ECT causes brain damage. That’s a lot to sign up for just to share the positive side of the coin. On the other hand, people who share negative experiences are likely to be met with a much more welcoming reception.
The problem with individual reports
Individual case reports are considered a very low level of scientific evidence, because it’s very hard to tell what’s causing what when you’re looking at just one person. Consider the link anti-vaxxers believe exists between vaccines and autism. Autism often starts to show up around the age that kids are getting their childhood vaccines. That doesn’t mean that vaccines cause autism or have anything to do with autism, but you can’t know that based only on individual case reports.
Self-reports can also miss certain information that’s either unknown or not recognized as relevant. Someone might report gaining or losing weight since they started on a medication, but it may have nothing to do with the medication and everything to do with them being hypo/hyperthyroid. Or someone may have only been taking their medication sporadically to begin with, and as a result, they were seeing more side effects than therapeutic effects.
In general, people aren’t particularly good historians; it’s very easy to mix up sequences of events and durations of experiences. This is why I find bullet journalling so beneficial; I would never be able to give anywhere near as accurate an account of things as my bullet journal can.
Another natural tendency is to make attributions based on beliefs more than events, as we seek out things that affirm our beliefs. Let’s say I start a new medication and I get a migraine a few days later. Whether or not I associate those two in my mind will have a lot to do with my beliefs about the likelihood of medications causing side effects. I don’t tend to be prone to side effects, so I probably wouldn’t attribute the migraine to the medication, but someone else might be absolutely certain that the migraine was caused by the medication. Which of us is more likely to be correct? You can’t tell that from our individual experiences.
The role of research
That’s where research comes in. Studies are designed to compensate for normal human failings (and there are a lot of those). In the case of the medication and the migraine, I’d want to see a blinded, placebo-controlled trial, where no one knows if they’re being given the med or the placebo. By comparing the number of headaches that come up in the treatment group vs. the placebo group, you’ll start to get a clearer picture of whether the drug and the migraine are linked or if it’s just coincidence. There’s really no way to know that without running that clinical trial.
Research data matters not because of some medical/scientific supremacy, but because solid data from a well-designed study can support conclusions about the phenomenon being studied that weaker data simply couldn't.
Unfortunately, not all scientific papers are created equal. There's a well-established anti-mainstream psychiatry fringe, with Dr. Peter Breggin being one of the louder voices. The anti-psychiatry fringe has generated quite a few papers that reference one another, and it all sounds fairly legitimate on the surface. Depending on exactly what you're searching for, Dr. Google may well serve up one of these papers, and red flags aren't necessarily going to go off for people who aren't used to reading scientific papers.
Getting back to individual case reports: where they can be really important, and why they're published in scientific journals, is in identifying problems that aren't well recognized and perhaps tend to be on the rare side. If the same problem keeps cropping up, that's a good indicator that someone needs to design a study to look into the matter further.
Individual experiences are valid, but…
Whether it’s medication or ECT or any other form of treatment, everyone’s experiences are equally valid, whether they’re positive or negative. Where it gets problematic is if people start to generalize those experiences. Even if there are multiple individual stories of similar side effects, those are still individual stories that give you no reliable information about the overall safety or effectiveness of a treatment. That’s not because the stories don’t matter; that’s just the wrong kind of evidence to use to draw that kind of conclusion.
Do you tend to draw conclusions about medications, etc., based on people’s stories you read online?