Yes, ChatGPT Can Still Give You Legal and Health Advice

Responding to posts on social media claiming that ChatGPT will no longer offer legal or health advice, OpenAI is clarifying that “model behavior remains unchanged” and there is “not a new change to our terms.”

The clarification follows a since-deleted viral post from prediction market Kalshi, which claimed “JUST IN: ChatGPT will no longer provide health or legal advice.” Since then, concerned users have repeated the claim, while others have pushed back against it.

The confusion likely stems from an Oct. 29 update to OpenAI’s usage policies, which appeared to add a stipulation that users cannot use OpenAI’s services for “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” While it would be easy to read that as meaning the AI will no longer give advice on those topics, the reality is a bit more complicated.

In fact, the previous usage policy already banned “activities that may significantly impair the safety, wellbeing, or rights of others,” listing as its first example of such an activity “providing tailored legal, medical/health, or financial advice without review by a qualified professional.” However, that rule was tucked under a subsection aimed at those building with the OpenAI API, and so may have been missed by average consumers.

The new usage policy keeps the same rules; the change is that they are now merged into a single, unbroken list. The rule is still aimed primarily at developers and businesses, but it is now more visible, and it is clearer that it applies to everyone, not just those using the OpenAI API to build an app. Average users, however, are unlikely to notice a difference.

The important words here are “provision” and “providing.” The terms, as written, don’t necessarily ban the average person from getting legal and health advice from ChatGPT; instead, they discourage developers, hospitals, and law offices from using the chatbot to give specific advice to a client without first checking with a licensed professional. As an average person doing background research, you’re unlikely to bump up against the rule, and there’s no language indicating a change to the chatbot’s functionality. In short, the update is a rewording, not a change to rules, enforcement, or functionality.

This reading is backed up by OpenAI’s statement, which comes from Karan Singhal, the company’s head of health AI: “ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”

Despite this, some users responding to OpenAI’s statement still claim to have had more difficulty looking up certain topics, though it’s worth noting that OpenAI’s release notes don’t indicate any new model developments since the update to the company’s usage policies.

On an anecdotal note, I was able to get ChatGPT to offer me advice on how to fight a traffic ticket in court, as well as to suggest brands of a supplement that one user said the model had refused to give specific advice about following the policy update.



While I cannot test every possible use case, the situation seems clear to me. Are you using ChatGPT or the OpenAI API to give others specifically tailored legal or health advice, without review by a licensed professional? If so, the same rules apply as before. If not, you’re unlikely to see a change in your results.
