The AARP survey of ‘Jobs to be Done’ by AI shows readiness for health-related AI. Respondents cited health-related opportunities such as medication tracking, personalized wellness guidance, and active-living reminders. At the same time, the report asserts that adoption depends on trust, customization, and integration into day-to-day life. What stands between today’s AI offerings and that next stage of adoption? Above all, our trust in the technology.
We have health questions and we are ready for good answers. But is AI tech ready for us? The trial beginning this week over social media’s influence on children may signal caution ahead. The plaintiffs’ case: Meta (Facebook/Instagram), YouTube, and TikTok deliberately made their platforms more addictive to children to boost profits. If that argument succeeds, it could sidestep the companies’ First Amendment shield and Section 230, which protects tech companies from liability for material posted on their platforms. Could such liability extend to incorrect or incomplete health advice on social media sites that admit to making heavy use of AI algorithms?
ChatGPT Health is an unfortunate signal about the state of this ‘art’. ChatGPT Health’s analysis of a Washington Post reporter’s health data was an embarrassment, and even that would be an understatement. Asked to grade his cardiac health, the tool gave him an F, a grade his doctor scoffed at. Cardiologist Eric Topol called ChatGPT’s analysis baseless and said people should ignore its medical advice (as well as that of Anthropic’s Claude) as ‘not ready for prime time.’ Worse, when the reporter asked the same question several times, his score swung between an F and a B, and the tool forgot his age and gender in subsequent queries.
Just ‘providing information’? There is no regulation of these tools when they are used for health questions, or for any questions, actually. Granted, that puts them in the same category as all social media tools since the dawn of their time. The court case about harm to children could also come to nothing, as with TikTok, which is dodging the spotlight of a trial. Or, depending on the outcome, could this scrutiny of social media affect how these companies use health data online? Remember that in 2021 a lengthy WSJ report analyzed Instagram’s harmful effect on teenage girls. It’s 2026, and it appears that nothing has changed.