3 Comments
Aussie Med Student

OpenEvidence AI tells me porkies on occasion - often enough for me to know I can't rely on what it tells me, and that I need to read the sources and double-check anything important before I treat it as gospel. Given its ability to get complex-but-straightforward things wrong, I can't imagine trusting the information it gives if it were a personal medical concern that mattered. Nothing like the way I trust UpToDate (although that has its detractors) - even though OpenEvidence is trained on UpToDate.

Until it stops making mistakes, I can't imagine trusting AI. I'm frankly bewildered that you do! It provides much-valued inspiration for critical-reflection journal entries for medical school, but for me, that's where its sphere of influence ends.

Susan T. Mahler, MD

I would add: it's helped me find my way to specialists, because in my case you sort of have to know whom you need to see, as generalists can't guide you. If I had cancer, MS, or something pretty well understood by medicine, I would not rely on it.

Susan T. Mahler, MD

That's interesting! I have actually found that in the areas I research as a patient, which most doctors know nothing about, it does have superior knowledge (corresponding to what I learn from places like the Ehlers Danlos Society, the Spinal CSF Leak Foundation, etc.). But I am researching a limited sector, and mostly I am doing so because I can't get answers from doctors. I do understand it makes mistakes.