225 points by barishnamazov 5 days ago | 174 comments
Terretta 5 days ago |
Notice how little this sentence says about whether anything is any good.
bsimpson 5 days ago |
I wonder how accurate it is.
Nursie 5 days ago |
It's still baffling to me that the world's biggest search company has gone all-in on putting a known-unreliable summary at the top of its results.
chanux 5 days ago |
But alas, infinite growth or nothing is the name of the game now.
[1] Well, not entirely, thanks to people investigating.
ipython 5 days ago |
So interesting to see the vastly different approaches to AI safety from all the frontier labs.
bjourne 5 days ago |
Also try "health benefits of circumcision"...
xnx 5 days ago |
‘Dangerous and alarming’: Google removes some of its AI summaries after users’ health put at risk: https://www.theguardian.com/technology/2026/jan/11/google-ai...
thepotatodude 4 days ago |
There are a ton of misses, especially on imaging. LLMs are not ready for consumer-facing health information yet; my guess is ~3-5 years. Right now, I see systems implementing note writing with LLMs, which is hit or miss (but will rapidly improve). Physicians want 1:1 customization: have someone sit with them and talk through how they like their notes, then set it up so the LLMs produce notes like that. Notes need to be customized at the physician level.
Also, the electronic health records any AI is trained on are loaded to the brim with borderline fraud/copy-paste notes. That's going to have to be reconciled. Do we have the LLMs add "Cranial Nerves II-X intact" even though the physician did not actually assess that? The physician would have documented that... no? But then you open up the physician to liability, which is a no-go for adopting software.
Building a SaaS MVP that's 80% of the way there? Sure. But medicine is not an MVP you cram into a pitch deck for a VC. 80% of the way there does not cut it here, especially if we're talking about consumer facing applications. Remember, the average American reads at a 6th grade reading level. Pause and let that sink in. You're probably surrounded by college educated people like yourself. It was a big shock when I started seeing patients, even though I am the first in my family to go to college. Any consumer-facing health AI tool needs to be bulletproof!!
Big Tech will not deliver us a healthcare utopia. Do not buy into their propaganda. They are leveraging post-pandemic increases in mistrust towards physicians as a springboard for half-baked solutions. Want to make $$$ doing the same thing? Do it in a different industry.
inquirerGeneral 5 days ago |
Meanwhile Copilot launched a full bot for it:
"Dos and don'ts of medical AI

While AI is a useful tool that can help you understand medical information, it's important to clarify what it's designed to do (and what it isn't).

Dos:
- Use AI as a reliable guide for finding doctors and understanding care options.
- Let AI act as an always-available medical assistant that explains information clearly.
- Use AI as a transparent, unbiased source of clinically validated health content.

Don'ts:
- Don't use AI for medical diagnosis. If you're concerned you may have a medical issue, you should seek the help of a medical professional.
- Don't replace your doctor or primary care provider with an "AI doctor". AI isn't a doctor. You should always consult a professional before making any medical decisions.

This clarity is what makes Copilot safe" https://www.microsoft.com/en-us/microsoft-copilot/for-indivi...
jnamaya 5 days ago |
https://www.fda.gov/medical-devices/digital-health-center-ex...