Book Review: Reading Our Minds

Book cover: Reading Our Minds by Daniel Barron

Reading Our Minds by Daniel Barron, a psychiatrist and pain management fellow, explores how Big Data could be incorporated to improve the practice of psychiatry. The idea of supporting psychiatric assessment with solid data is an appealing one, but many questions come to mind.

I was surprised by an apparent blind spot of the author’s that appeared early on. He writes:

“A recent study showed that Google searches for explicitly suicidal terms were better able to predict completed suicides than conventional self-report measures of suicide risk. Perhaps this is because people who are ‘really gonna do it’ go through the planning and researching (i.e., on Google) of how to kill themselves, but it could be that people are more honest when they approach Google with what’s on their mind.”

Perhaps he hasn’t caught on to the fact that those of us with mental illness are well aware that the doctor who’s asking us questions about suicide has the power to commit us to hospital involuntarily, where our clothes and belongings will be taken away and we may be locked in a seclusion room with nothing to do or think about except how wrong it was to be honest with that damn doctor. That’s not a hypothetical, either; that’s exactly what goes through my mind when I’m contemplating disclosing suicidal ideation to a doctor, because that has happened in the past. Aside from that, though, just imagine if Google had an algorithm that would flag it to emergency services if they thought you were getting a little too close to the edge. I think I’d be motivated to start using Tor. Hello dark web!

A running example through the book was the author’s assessment of a girl he ended up diagnosing with schizophrenia. Her mom reported that she’d had changes in behaviour patterns, social engagement, and internet use, and the author argued it would have been useful to have her browsing history, geolocation data, call/text logs, etc., as this would help to establish her baseline “normal” and what deviated from that.

The results of a number of relevant studies were presented. For example, changes in Twitter behaviour were observed in women who developed postpartum depression. Another study looked at Facebook posts by people with psychotic disorders and noted distinct changes that were seen shortly before people ended up being hospitalized. The changes included more swearing, anger, and references to death.

There were some interesting suggestions for objectively measuring things that are currently evaluated subjectively, which I agree would very much be of benefit to the practice of psychiatry. Speech was one of the examples given. I experience speech impairment as a psychomotor effect of depression, and it could be quite useful to be able to monitor that in a clinical setting.

If you’re wondering about the issue of consent and privacy with all this data, it came up, but it didn’t seem to be treated as much of a barrier. The author writes that he began the book thinking that it would be hard to get patients to agree to data collection, but COVID proved him wrong. As an example, he pointed out that people were willing to download apps that would track geolocation to determine COVID contacts. I’m quite confident in saying that the identifiable data that I’d be willing to give up in the context of a deadly pandemic is not going to be the same as what I’d give up to a psychiatrist.

I think this is where another big blind spot comes in. Patients are people. There is a significant power differential between psychiatrist and patient. Involuntary treatment takes away people’s rights for the sake of treatment. Even when treatment is voluntary, decisions are often made by the prescriber alone rather than as part of a collaborative process that supports the agency of individuals with mental illness. Sometimes physicians assume that patients should be able to put up with side effects rather than recognizing the patient’s right to make those choices for themselves. Mental health professionals are in no way immune to stigma; this is borne out both anecdotally and in the research literature.

I could go on, but that’s already a whole lot of context to consider, and it’s disappointing that the author just doesn’t seem to consider it. There’s no indication in the book that the author has sought out feedback from anyone on the patient side of the fence to see how they would feel about the idea of handing over their Google search history to their psychiatrist; perhaps this wasn’t seen as an important part of the process?

It seems like too big an oversight to be accidental that patients don’t appear in this book as people who are empowered to be advocates for themselves, their health care, and their privacy. To assume that patients will readily hand over anything the good doctor wants smacks of paternalism. That’s especially true when no argument has been offered about how all of this Big Data will benefit patients.

As someone who has straddled the patient and mental health professional sides of the fence, I say a) back away from my data, and b) I recommend the author reflect on what that fence looks like for him, and what it might be preventing him from seeing.

Reading Our Minds is available on Amazon.

I received a reviewer copy from the publisher through Netgalley.

You can find my other book reviews here.

This post contains affiliate links, which let you support MH@H at no extra cost to you.

30 thoughts on “Book Review: Reading Our Minds”

  1. Wow. That’s some scary shit. I… don’t approve of handing over my search (or other internet) data! That would mess me up in the head. For one thing, I often research macabre stuff when I’m doing a NYC Midnight competition. (It’s a hilarious fact for fiction writers that there’s going to be random and freakish research done that could be misinterpreted.)

    I’d rather have an open and honest relationship with my psych doctor in which I can go to him if there’s a problem. I’m lucky to have Dr. Phlegm, as he’s one of the best.

    I hope you review this book on GR or someplace because it appears to be getting all 5-star reviews. (No pressure!) Oh, wait, you DID review it there. GR is so glitchy! Found it. Thank you! There, I put a like on it!

    One thing I like about not using a cellphone is that the government (or whoever) can’t track me as much, not nearly!

    1. That’s a good point. I know that all kinds of organizations are tracking all kinds of information about me, but it’s different when it comes to someone who’s actually in your life.

  2. I had another thought! 😀 If anyone had access to my search history, it would probably keep them entertained… if not gravely concerned! 😀 I’m laughing out loud right now. “Do purple cows cause more harm to the ozone layer, or is their milk a good source of calcium?” Yeah. [Groan.]

  3. Holey-Moley – NO. I can’t process all that and remain coherent. Make assumptions here, anyone? And you know some of us are just curious, and some of us like to yank people’s chains and…NO. (And yes, I clicked on that Tor link…)

  4. I know without a shadow of a doubt, I’d never ever hand over my Google searches to my psychiatrist, nor would I allow her to go through my Facebook or any other social media posts. If she wants to read my blog, I’m ok with that; she doesn’t read it, though she knows about it. I’m really open on my blog, so she’d have a lot of info to go on just by reading that. I’ve had some junior doctors read it before; they searched me out after Dr. Barry had a case conference where she discussed my DID. XXX

  5. A side note: When I was really ill, I once used Google to find pictures of people who had committed suicide. Found a whole site dedicated to it. Should a medical team have been sent to my door because of it? Probably. I think maybe we confide in Google because we also don’t want anyone to know just how messed up we are.

  6. Ugh, yeah, wow. I think this whole concept hits all my red flags. Ethically it feels violating. I’m sure psychological research could benefit from this kind of data collection, but actually applying the technology to real-life situations feels way too Big Brother. Yes, we want to prevent suicides, but at what point are we losing our freedoms? We should never lose our right to think about things, question things. If my blog were subject to some algorithm’s interpretation of my mental state, I’d have been hauled off to a padded cell years ago. The freedom to write out the darkness is what saves me from succumbing to it, and if that freedom were taken away and I didn’t feel safe writing anonymously or googling my private questions, then, well, frankly I don’t think I’d still be here.

  7. This work does seem incredibly biased and uninformed. I don’t think anything can replace respect for the person who is struggling with life and death questions. My own belief is treatment that is anchored in genuine relationship provides a basis for sharing, and even then (as you so clearly state) the loss of personal freedom and even a choice in living and dying is an incredible deterrent to sharing these thoughts.

  8. It’s an interesting idea for a book, but after reading this post and some reviews online, it seems to follow the trend of people who don’t understand data science writing books with the keyword “Big Data” to sound progressive. Collating search history and social media content has been shown to be remarkably worthless for even rudimentary things like ad serving. Someone talks about a piano to a friend on Hangouts, so Facebook starts trying to sell them MIDI controllers… I think this has about a 2% success rate.

    I follow his logic that people who end up killing themselves are more likely to just do it rather than tell someone, and people fantasizing about suicide will probably search the subject a lot online. That could be useful. But like you said, what do you do with that data? If we start sending police to people’s houses because they googled suicide 10 times in a day, then that’s not much of a leap to arresting people for their YouTube comments.

    That said, it’ll probably happen regardless of how ethical it is.

    1. I’ve heard that when someone reports a tweet (or whatever) as being suicidal, Twitter or Instagram/Facebook will respond by sending a list of resources and temporarily deactivating the person’s account. Go ahead and kill yourself, just do it over there where we can pretend we can’t see it…

      1. I don’t think that’s completely true, but I could see them deactivating an account or deleting posts if they’re constantly talking about killing themselves, as it’s against their TOS. What would Facebook do otherwise? It’s not their responsibility to stop someone; it’s that person’s friends and family. And if they’re not around, well, not much anybody can do, is there? Calling the authorities might make them feel worse, because now not only are they alone but they’re victims of the state on top of it lol.

  9. I would think that this author just finished reading Orwell’s book “1984”, thinking that the government is watching all of our movements.
    Not sure if Orwell lived long enough to see the thing we use to search, “Google”. He may think that his book has now become fact…lol 🙂

  10. Being off meds gets us away from psychiatrists. We feel safer without both, even though they tell us meds would help. Years of experience say otherwise, especially once they took Xanax away. Book sounds like Big Brother turned into big white father.
