OpenAI has updated its ChatGPT usage policy, explicitly prohibiting the AI system from offering medical, legal, or other advice that requires professional licensing.
The revision, published in the company's Usage Policies on October 29, follows mounting public concern over the growing number of people relying on AI chatbots for expert-level guidance, particularly in healthcare.
Artificial intelligence continues to transform industries globally, and healthcare has been no exception. ChatGPT, built as a conversational large language model, has frequently been used by individuals seeking quick answers to health-related or legal questions. Its accessibility and immediacy have made it a popular alternative to professional consultations, a trend that experts warn carries serious ethical and legal risks.
According to OpenAI, the revised policy now restricts ChatGPT from being used for:
- Consultations requiring professional certification (including medical or legal advice);
- Facial recognition or other identification of individuals without their consent;
- High-stakes decision-making without human oversight in areas such as finance, education, housing, migration, or employment;
- Academic misconduct, including altering or fabricating evaluation results.
The company stated that the changes aim to “enhance user safety and prevent potential harm” from using ChatGPT beyond its intended capabilities.
While OpenAI has not issued a detailed public statement about the decision, analysts believe the move is meant to reduce potential legal exposure amid a lack of clear regulations governing AI’s role in professional or sensitive contexts.
The update comes as more users experiment with chatbots for complex tasks such as self-diagnosis or drafting legal arguments. Discussions on Reddit suggest that earlier workarounds, such as framing questions as "hypothetical scenarios," have become less effective, as stricter safety filters now block responses containing specific advice.
In addition to these changes, OpenAI announced improvements to its model’s ability to detect and respond to users in distress.
“Our safety improvements in the recent model update focus on mental health concerns such as psychosis, mania, self-harm, and suicide,” the company explained.
“We are also adding emotional reliance and non-suicidal mental health emergencies to our baseline safety testing for future model releases.”
The strengthened policies mark OpenAI's latest step toward balancing AI innovation with responsible and ethical use, particularly in areas where lives, health, or legal rights could be at stake.