Should we be excited or skeptical about AI Health tool announcements?

For some, the jury should still be out. Clearly the tech vendors, buoyed by media and investor enthusiasm, are proud of what they are doing (or are almost, so, so close to doing). The theory, of course, is that consumers are already using their tools to answer health questions, so why not formalize the offering, pointing to positive results from early deployments (Boston Children’s Hospital, etc.)? Ah, but even OpenAI itself is not so certain: when asked about ChatGPT’s reliability with health facts, a spokeswoman said its models had become more reliable and accurate in health scenarios than previous versions, but she did not provide hard numbers on hallucination rates when giving medical advice.

As for life-and-death decisions, not so fast. In 2019, Google tried to be helpful by gaining access to millions of health records, and it ran directly into a solid wall of consumer distrust. Despite its technical prowess, users did not trust a company whose marketing revenue stream was built on data grabbed from users without their permission. And there was the Gemini image-generation debacle that depicted famous white historical figures as people of color. Clearly ChatGPT Health and Claude will not want to run straight into similar walls of misjudgment or, at best, mischaracterization.

Accountability for the answer: that’s an issue. As one organization observed in a critique: "Connecting longitudinal medical records to a probabilistic language model collapses aggregation, interpretation, and influence into a single system that cannot be held clinically, legally, or ethically accountable for the narratives it produces." Or put another way, the answer might sound correct and be confidently delivered, but is that good enough? Maybe not.

We’ve been here before. Organizations like Boston Children’s Hospital have always wanted to be first with the most tech, including AI. Perhaps they have a department dedicated to finding tech partnerships (IBM Watson, for example), or they cultivate AI experts, or they deliver a data transformation project. And a startup accelerator to help launch companies? The stated mission: “Boston Children’s will continue to be recognized locally, regionally, nationally and internationally as a premier provider of up-to-date healthcare education, technological advancements, and skills development to physicians and other healthcare professionals.” This is secondary to its primary mission to “provide the highest quality healthcare.” So many new tech projects fail; in the context of a care delivery mission, are they distractions or benefits?

A new report is online:

AI and Older Adults -- What's Now and Next in 2026
