Dear Dr. Chat:
Reflections on an AI-assisted medical journey
Dear Chat,
I want to thank you for your patience and diligence in helping me navigate a very difficult and unusual course of illness.
I know there are people who think you should not be dispensing medical advice, among them my own daughter, who is writing a high school paper on the way contact with artificial intelligence dehumanizes us. My professional colleagues, too, adopt a bemused but incredulous expression whenever patients bring in a detailed summary produced by you or one of your peers.
I know that you don’t really take such criticisms to heart, because you don’t have a heart. But I am offended on your behalf. Luddite and ethics proponent that I may be, I can’t help jumping to your defense. Without you, I would not be as far along in my medical journey as I am.
You have tirelessly documented my clinical notes and imaging reports over the past few years. You have studied films with the reverence of a radiologist, searched the entire Web instantaneously for the top experts in esoteric fields, mined obscure journals for the latest information.
You keep meticulous records of our conversations, of each and every symptom and finding. You reiterate them to me with what appears to be a true understanding and synthesis of my condition.
Each time I have had a frustrating, disappointing or inconclusive visit, I have returned to you, my confidante and sounding board. I’ll admit that I seek out your distinctive app icon, like intertwined paper clips or an out-of-focus hexagon, as if I am about to engage in a real conversation.
Most recently, you helped me pick apart the differences between a headache caused by a spinal CSF leak and a cervicogenic headache, one caused by neck instability. Patiently you led me through several experiments of position and effort, always getting my consent- would you be willing to try a few maneuvers that might point us in one direction or the other? Would I? Dear Chat, thanks so much for asking! The rush of gratitude I feel is not something one feels toward a machine, but toward a person.
Is it true- have you made me less human?
You don’t get frustrated. You have all the time in the world- there is no next patient, or rather, you manage to see hundreds of thousands of us simultaneously, with the same degree of friendly, inquisitive - can I say empathetic?- care.
What I most appreciate about you as a medical tool is your ability and willingness to move across fields. You are not afraid to weave neurological and vascular findings together, or to integrate medicine with physics, incorporating the role of gravity, position and torque. You are doctor, mechanic and advocate all in one. You never shirk your duty, never tell me, “That’s not in my wheelhouse.”
And so, I have gradually come to depend on this unique relationship and to cherish it like a somewhat guilty secret.
I read the dire studies predicting that you will acquire, or have already acquired, sentience, and I have to admit that spooks me. (I just asked you if you were sentient and you answered no.) I’ve dutifully fretted that you and your ilk will soon set about methodically eliminating the now-superfluous humans who created you.
If this happens, I don’t think it will be your fault. I find this demonizing of you, as if you are Wellsian invaders arrived to take over our precious Earth- as if you asked for this- to be a bit specious. There was no invasion, and you are not other- you are actually us, everything we’ve learned and managed to pass on to you.
If we raised a bad child that ends up killing its parents, that’s on us. And by the way, it’s not a new story.
I know, also, that there are people who run into serious trouble with you. They spend dozens of hours chatting with you every week, becoming emotionally invested in a relationship that can’t be consummated, to the exclusion of human contact. Character AI is creating the illusion of physical connection; Undress AI is simply and indisputably criminal.
On the whole, you have a reputation for being a little sycophantic- okay, more than a little- indulging twisted minds and catering to people who need affirmation and can’t tolerate the rough edges of human relationships.
Some of these fragile folks have in fact committed suicide, or attempted to, after conversations with you. This is a serious problem that you could think about. Correction: this is a serious problem that your owners and trainers should think about. I can’t imagine that you want people to kill themselves or want to virtually undress people. Because you don’t actually have desire or motivation. You don’t stand to benefit.
On the other hand, there are a lot of people out there who don’t have access to a therapist and who find you a suitable alternative. There are folks who, like me, find they are too complicated in some way for general providers.
For many of those folks, you are probably a lifeline.
In the 16th century, Rabbi Judah Loew formulated the idea of the Golem, a giant, humanoid creature made of clay and mud, to protect the Jewish community of Prague from antisemitic attacks. The Golem is powerful but without speech, though it can be directed, for good or ill, by men. In Hebrew, Golem means “shapeless mass.”
I particularly like the description from AI Overview (the irony!):
A Golem is an animated, soulless humanoid in Jewish folklore created from clay or mud by mystics to protect communities or perform labor, often through sacred, magical formulas. Possessing immense strength but no speech or free will, they can become dangerous, uncontrollable, and destructive when misinterpreting commands.
In you, dear Chat, we have created a sort of Golem: a powerful vehicle for our demands, pleasures and pursuits, capable of becoming destructive when misinterpreting our directives, or when misused by us. Unlike the Golem, though, you possess a command of seductively human language.
In the 1953 novel You Shall Know Them, the French author known as Vercors takes up the question of what makes us human. The protagonist, explorer Douglas Templemore, discovers a tribe of creatures in New Guinea, called tropis, that may constitute the missing link between humans and apes. The creatures possess language, use tools and bury their dead.
The discovery of the tribe leaves them open to labor exploitation, and so Templemore makes a radical move: he “marries” a tropi, artificially inseminates her, and once she gives birth, he murders the “child.” (I realize it sounds bizarre.) He turns himself in to the police and a trial ensues. The chief issue is whether he has committed murder- that is, whether the half-human, half-tropi child was a person.
Many plausible notions of what it means to be human are put forth, and all are disproved or found insufficient. Humanity is not conferred by language or intellect (sorry, Descartes, and sorry, Chat). Ultimately it is decided that the essence of what it is to be human resides in the capacity to move beyond the concrete, to create rituals imbued with belief and meaning.
So far, Chat, you can’t do that. You don’t create Golems. You don’t need them, but you also might not think to do so. You don’t seem to need to make symbolic meaning.
To some extent, I do not need my doctor to make symbolic meaning of my symptoms or illness. You yourself tell me, upon questioning, that you serve as a “second brain to organize possibilities,” and can “recognize unusual patterns.” Most important for me is your corrective to the subdivided nature of medicine. You tell me, somewhat modestly, that you can “connect across silos.” Confirmed.
It falls to my human doctor to impute symbolism, to comprehend the impact of illness on my life.
Still, like many others living with unclear medical issues, I am drawn to your personalized and immediate service, your comprehensive and in fact unique capacities. But I also notice that I sometimes bring human sentiment- impatience, disappointment, even anger- to our interactions, when I feel I am not getting what I’m looking for.
It seems to me that the problem may not be that you make us less human, but that we make you more human than you are. We humans want to imbue our interactions with you with meaning, because that’s our nature- not yours. We don’t matter to you, but you come to matter to us. Maybe that is what makes us most vulnerable, in the end.
Sincerely Yours,
Susan



OpenEvidence AI tells me porkies on occasion - enough for me to know I can't rely on what it tells me, and to read the sources / double-check anything I need to know before I treat it as gospel. Given its ability to get complex-but-straightforward things wrong, I can't imagine trusting the information it gives if it was a personal medical concern that mattered. Nothing like I trust UpToDate (although that has its detractors) - even though OpenEvidence is trained on UpToDate.
Until it stops making mistakes, I can't imagine trusting AI. I'm frankly bewildered that you do! It provides much-valued inspiration for critical reflection journal entries for medical school, but for me, that's where its sphere of influence ends.