Reading Our Minds by Daniel Barron, a psychiatrist and pain management fellow, explores the use of Big Data to improve the practice of psychiatry. The idea of supporting psychiatric assessment with solid data is an appealing one, but many questions come to mind.
I was surprised by an apparent blind spot of the author’s that appeared early on. He writes:
“A recent study showed that Google searches for explicitly suicidal terms were better able to predict completed suicides than conventional self-report measures of suicide risk. Perhaps this is because people who are ‘really gonna do it’ go through the planning and researching (i.e., on Google) of how to kill themselves, but it could be that people are more honest when they approach Google with what’s on their mind.”
Perhaps he hasn’t caught on to the fact that those of us with mental illness are well aware that the doctor who’s asking us questions about suicide has the power to commit us to hospital involuntarily, where our clothes and belongings will be taken away and we may be locked in a seclusion room with nothing to do or think about except how wrong it was to be honest with that damn doctor. That’s not a hypothetical, either; that’s exactly what goes through my mind when I’m contemplating disclosing suicidal ideation to a doctor, because that has happened in the past. Aside from that, though, just imagine if Google had an algorithm that would flag it to emergency services if they thought you were getting a little too close to the edge. I think I’d be motivated to start using Tor. Hello, dark web!
A running example through the book was the author’s assessment of a girl he ended up diagnosing with schizophrenia. Her mom reported that she’d had changes in behaviour patterns, social engagement, and internet use, and the author argued it would have been useful to have her browsing history, geolocation data, call/text logs, etc., as this would help to establish her baseline “normal” and what deviated from that.
The results of a number of relevant studies were presented. For example, changes in Twitter behaviour were observed in women who developed postpartum depression. Another study looked at Facebook posts by people with psychotic disorders and noted distinct changes that were seen shortly before people ended up being hospitalized. The changes included more swearing, anger, and references to death.
There were some interesting suggestions for objectively measuring things that are currently evaluated subjectively, which I agree would very much be of benefit to the practice of psychiatry. Speech was one of the examples given. I experience speech impairment as a psychomotor effect of depression, and it could be quite useful to be able to monitor that in a clinical setting.
If you’re wondering about the issue of consent and privacy with all this data, it came up, but it didn’t seem to be treated as much of a barrier. The author writes that he began the book thinking that it would be hard to get patients to agree to data collection, but COVID proved him wrong. As an example, he pointed out that people were willing to download apps that would track geolocation to determine COVID contacts. I’m quite confident in saying that the identifiable data that I’d be willing to give up in the context of a deadly pandemic is not going to be the same as what I’d give up to a psychiatrist.
I think this is where another big blind spot comes in. Patients are people. There is a significant power differential between psychiatrist and patient. Involuntary treatment takes away people’s rights for the sake of treatment. Even when treatment is voluntary, decisions are often made by the prescriber alone rather than as part of a collaborative process that supports the agency of individuals with mental illness. Sometimes physicians assume that patients should be able to put up with side effects rather than recognizing the patient’s right to make those choices for themselves. Mental health professionals are in no way immune to stigma; this is borne out both anecdotally and in the research literature.
I could go on, but that’s already a whole lot of context to consider, and it’s disappointing that the author just doesn’t seem to consider it. There’s no indication in the book that the author has sought out feedback from anyone on the patient side of the fence to see how they would feel about the idea of handing over their Google search history to their psychiatrist; perhaps this wasn’t seen as an important part of the process?
It seems like too big an oversight to be accidental that patients don’t appear in this book as people who are empowered to be advocates for themselves, their health care, and their privacy. To assume that patients will readily hand over anything the good doctor wants smacks of paternalism. That’s especially true when no argument has been offered about how all of this Big Data will benefit patients.
As someone who has straddled the patient and mental health professional sides of the fence, I say a) back away from my data, and b) I’d recommend the author reflect on what that fence looks like for him, and what it might be preventing him from seeing.
Reading Our Minds is available on Amazon (affiliate link).
I received a reviewer copy from the publisher through Netgalley.
You can find my other reviews on the MH@H book review index or on Goodreads.
25 thoughts on “Book Review: Reading Our Minds”
Holey-Moley – NO. I can’t process all that and remain coherent. Make assumptions here, anyone? And you know some of us are just curious, and some of us like to yank people’s chains and…NO. (And you I clicked on that Tor link…)
(OK, that was supposed to be *And you know I clicked…sheesh)
(Left out the word *know* – in And you know I clicked on…)
Of course I know you would’ve clicked on that!
Sorry about the double reply – it wasn’t showing up as being posted. thought WP was jerking me around…
That is what it tends to do…
I know without a shadow of a doubt, I’d never ever hand over my Google searches to my psychiatrist, nor would I allow her to go through my Facebook or any other social media posts. If she wants to read my blog, I’m ok with that. She doesn’t read it, though, but she knows about it. I’m really open on my blog, so she’d have a lot of info to go on just by reading that. I’ve had some junior doctors read it before; they searched me out after Dr. Barry had a case conference, where she discussed my DID. XXX
I bet that would be a really interesting way for them to learn more about DID.
Yes, Dr. Barry told me they came back to her and said they learned a whole lot by reading my blog, so that’s good.
A side note: When I was really ill, I once used Google to find pictures of people who had committed suicide. Found a whole site dedicated to it. Should a medical team have been sent to my door because of it? Probably. I think maybe we confide in Google because we also don’t want anyone to know just how messed up we are.
I look up all kinds of weird shit, and no human needs to know the extent of my weirdness. That’s between me and Google.
Ugh, yeah, wow. I think this whole concept hits all my red flags. Ethically it feels violating. I’m sure psychological research could benefit from this kind of data collection, but actually applying the technology to real life situations feels way too Big Brother. Yes, we want to prevent suicides, but at what point are we losing our freedoms? We should never lose our right to think about things, question things. If my blog were subject to some algorithm’s interpretation of my mental state, I’d have been hauled off to a padded cell years ago. The freedom to write out the darkness is what saves me from succumbing to it, and if that freedom was taken away and I didn’t feel safe writing anonymously or googling my private questions then, well, frankly I don’t think I’d still be here.
This work does seem incredibly biased and uninformed. I don’t think anything can replace respect for the person who is struggling with life and death questions. My own belief is treatment that is anchored in genuine relationship provides a basis for sharing, and even then (as you so clearly state) the loss of personal freedom and even a choice in living and dying is an incredible deterrent to sharing these thoughts.
And the only thing that can be reasonably expected to overcome that deterrent is trust in the therapeutic relationship.
Yes, I very much agree.
I’ve heard that when someone reports a tweet (or whatever) as being suicidal, Twitter or Instagram/Facebook will respond by sending a list of resources and temporarily deactivating the person’s account. Go ahead and kill yourself, just do it over there where we can pretend we can’t see it…
Yeah, that would be creepy.
I would think that this author just finished reading Orwell’s book “1984”, thinking that the government is watching all of our movements.
Not sure if Orwell lived long enough to see the thing we use to search, “Google”. He may think that his book has now become fact… lol 🙂
Most likely! Lol
Being off meds gets us away from psychiatrists. We feel safer without both, even though they tell us meds would help. Years of experience say otherwise, especially once they took Xanax away. Book sounds like Big Brother turned into big white father.
Yes it does.