Anthropic entered the healthcare AI space with Claude for Healthcare, a new suite of tools built for healthcare providers, insurance companies, and patients.
Following OpenAI’s ChatGPT Health launch, Claude for Healthcare aims to safely bring AI into medical settings while helping people access and better understand their health information.
Anthropic is introducing new integrations that let users connect health data to Claude. In the US, subscribers on the Claude Pro and Max plans can grant the AI assistant secure access to lab results and health records. These connections unlock features that make your medical data genuinely usable, not just stored in a patient portal.
Once connected, Claude can summarize your medical history in plain language. It explains test results without medical jargon. It can prepare a list of questions to ask your doctor during appointments based on your records and recent results.

Claude also analyzes health and fitness data from wearables like the Apple Watch. It detects patterns across different metrics to show you a clearer picture of your overall health.
Maybe your sleep quality drops when your step count is low. Or your resting heart rate rises during stressful work weeks. Claude can spot these connections that aren’t obvious when looking at individual data points.
Anthropic released new HealthEx and Function connectors in beta, which let users grant Claude access to their medical records and lab data.
The Apple Health and Android Health Connect integrations launch in beta this week through the Claude app for iOS and Android. These pull health and fitness metrics from your phone and connected wearables.
The beta label means these features are still being tested and refined. Expect some rough edges and limitations as Anthropic gathers feedback and improves the system.
## Privacy and User Control
Anthropic emphasized that privacy and user control drive these integrations. You must explicitly opt in to try these features. The company won’t turn them on automatically or assume you want AI accessing your medical data.
You control exactly what information you share with Claude. Want to share fitness data but not medical records? You can do that. Want Claude to see lab results but not mental health information? You choose. These levels of control let you decide what feels comfortable.
You can disconnect or change Claude’s permissions anytime. If you initially gave access to your full health record but later feel uncomfortable, you can revoke that access immediately. No waiting periods or complicated processes.
Anthropic promises that your health data won’t be used to train its AI models. This matters because many AI companies use customer data to improve their systems. Your medical information stays private and doesn’t become part of the dataset that makes Claude smarter for other users.
Claude includes contextual disclaimers when discussing health information. It won’t pretend to have more certainty than it does. If something falls outside its knowledge or requires medical expertise to interpret properly, Claude acknowledges that limitation.
The AI will point you toward healthcare professionals for personalized guidance. Claude can help you understand your data and prepare questions, but it won't try to replace your doctor: it positions itself as a tool that makes medical information more accessible, not as a substitute for professional medical advice.
This approach tries to balance usefulness with safety. Health information is sensitive and consequential. Getting it wrong can harm people. Anthropic is building safeguards to reduce that risk.