OpenAI has unveiled ChatGPT Health, a new feature designed to help people better understand their health information using artificial intelligence. The service creates a dedicated space within ChatGPT where users can ask health and wellbeing questions and, if they choose, connect personal data such as medical records or information from fitness and nutrition apps.
Health questions are already one of the most common uses of ChatGPT, with OpenAI saying hundreds of millions of people worldwide ask for advice on symptoms, lifestyle and wellbeing each week. ChatGPT Health aims to build on that behaviour by offering responses that are more personalised and grounded in an individual’s own information.
The company says the tool is intended to support, not replace, doctors and other clinicians. It is not designed to diagnose conditions or recommend treatments, but to help people feel more informed and prepared when navigating healthcare.
How the service works
ChatGPT Health operates as a separate area within the wider ChatGPT product. Users can upload documents such as test results or appointment summaries, or connect approved apps including Apple Health and MyFitnessPal. The system can then help explain trends, summarise results in plain language, or suggest questions to raise with a healthcare professional.
OpenAI says the feature was developed in collaboration with more than 260 physicians across dozens of medical specialties. Their input has shaped how the system communicates, including when it should encourage users to seek professional care and how it explains complex information without oversimplifying.
The underlying technology is a large language model that predicts likely responses based on patterns in data. That approach can be useful for explanation and summarisation, but experts caution that it does not understand truth in the way a human does, and can produce confident-sounding but incorrect answers.
Privacy and security concerns
Because health data is highly sensitive, OpenAI has placed particular emphasis on privacy. ChatGPT Health runs in a separate space, with additional encryption and isolation from other chats. According to the company, conversations in Health are not used to train its AI models, and information shared there does not flow back into non-health chats.
Users can also delete their health data or disconnect apps at any time. For medical record access in the United States, OpenAI partners with a third-party provider to connect to participating healthcare organisations.
Despite these measures, campaigners have urged caution. Health data shared with the service is not covered by the same legal protections as information held by the NHS or other regulated providers, and some experts warn that users should carefully consider what they are comfortable sharing with a technology company.
Risks of AI health advice
A further concern is accuracy. Studies have shown that AI systems can hallucinate, particularly when records are incomplete or fragmented. Clinicians interviewed by US media have warned against using ChatGPT Health to make decisions about treatment, instead suggesting it be used for low-risk tasks such as understanding test results or preparing for appointments.
Some doctors see benefits in helping patients engage more actively with their care, while stressing that human judgement remains essential.
How it is being rolled out
ChatGPT Health is being introduced gradually, starting with a small group of users. OpenAI says it plans to expand access on the web and on iOS in the coming weeks.
For now, the service is not available in the United Kingdom, the European Economic Area or Switzerland, where stricter rules apply around processing personal data. OpenAI says wider availability will follow as the product is refined.
As artificial intelligence moves deeper into healthcare, ChatGPT Health highlights both the promise of more accessible information and the challenges of privacy, trust and safety that come with it.