I can add to this.
About a month ago, a friend had abdominal pains but was reluctant to go to A&E (Emergency Room).
I had my suspicions, but checked them with ChatGPT. The LLM said it was highly likely appendicitis, that he should seek urgent medical attention, and that he shouldn't eat or drink (other than water) since they might need to operate quite soon.
I passed it on, he went to A&E, and it all played out that way.
I’ve since switched my subscription to Gemini for work-related reasons, but it has also been very helpful in my gastritis recovery as I try to avoid flare-ups from dietary choices.
A typical HN stance is to wait for this fad to go away, but it certainly has uses for me (I'm currently being briefed by Gemini on an unfamiliar DIY task).
Wild to think we’ve reached the point where “my AI told me to go to the ER” is a plausible sentence and not the setup to a Black Mirror episode. Pre‑ChatGPT, you’d Google “droopy eyelid” and get a mix of WebMD hypochondria and SEO‑bait wellness blogs. Now you get a differential diagnosis, a list of red flags, and a gentle shove toward not dying.
AI had carotid dissection in mind from the first message, just quietly waiting for the plot to thicken.
Sure, there’s a lot to worry about with AI, but in this case it basically played the role of the one friend who says “you look weird, go to the doctor” and turns out to be right. Which is both comforting and slightly terrifying.
> AI had carotid dissection in mind from the first message
This does not follow from the evidence presented, even if we disregard questions of what "mind" means in this context. It's entirely plausible that the possibility of carotid dissection only made sense to consider partway through the conversation.
My dad has a similar story. The voice of reason can be very helpful for people who take pride in telling themselves “it’s fine”. Thanks, Chat!
Anecdotal evidence: the gold standard!
I guess saying anything positive about LLMs is anathema here, so there are no comments...
It's late night in North America; you said this less than an hour after the post went up; and plenty of posts get little traction on HN (including submissions of links that later become very popular on a separate submission or from the curated "second chance" queue).
This title is clickbait. The implication ("following ChatGPT advice caused an emergency requiring an ER visit") is nearly the opposite of the central claim made ("ChatGPT encouraged me to go to the ER, and it turned out to be a life-saving decision").
That’s your interpretation.
When I read the title, I thought of the positive case [ChatGPT saved my life], not the negative one.