Brett Trout
ChatGPT just made a big change to its Usage Policy. As of October 29, 2025, OpenAI now prohibits users from using ChatGPT to give legal or medical advice. If you are relying on ChatGPT for help with court cases or medical conditions, you are probably not reading this post, but if you are, you should probably stop. Not so much because it violates OpenAI’s new usage policy, but more so because it could easily end in disaster.

What Changed?
OpenAI updated its usage policy to make clear that users are not allowed to use ChatGPT to provide “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” That means if you are using ChatGPT to answer questions like “Should I sue?” or “What medication should I take?” you are now violating OpenAI’s policy. In addition to possibly killing you, doing so could also result in the suspension or banning of your account.
Why the Change?
Two main reasons: risk and responsibility. OpenAI likely made this change because of ongoing concerns that users were relying too heavily on ChatGPT for high-stakes decisions. Both legal and medical decisions can have serious consequences. And when the advice comes from a chatbot that sometimes makes things up, those consequences can be disastrous.
Lawyers and doctors go through years of training. ChatGPT doesn’t. The worst part is that even when it’s dead wrong, ChatGPT can be very convincing. Courts have already sanctioned lawyers for submitting AI-generated briefs with fake case law. And people have reported taking incorrect medical steps based on chatbot responses. OpenAI is simply drawing a clear line to reduce those risks, both for users and for itself.
What This Means for You
If you are a consumer, do not rely on ChatGPT for specific legal or medical advice. It is really that simple. General education is fine. But if your life or livelihood depends on it, talk to a real person, preferably one professionally licensed to give that advice. Your health and your legal rights are simply too important to risk on well-crafted, but ultimately erroneous, advice.
If you are a lawyer or doctor, you should also not be using ChatGPT to give advice to clients or patients. Even if the chatbot gets some things right, that is not good enough. As the American Bar Association recently made clear, attorneys who use AI must personally verify its accuracy and protect ALL client data, not just some of it. The same goes for doctors under HIPAA.
What You Can Still Use ChatGPT For
OpenAI is not banning all legal or medical topics. You can still ask for help understanding broad legal concepts or general medical information. You just cannot rely on it for tailored advice. Think of it more like a textbook than a professional. You can learn about “what a patent is” or “how high blood pressure works,” but do not ask it to tell you what type of patent to file or what medication to take.
Final Thoughts
ChatGPT is a powerful tool. But like any tool, it has limits. The new ChatGPT policy serves as a reminder: some jobs still require human judgment. When your health or your legal rights are on the line, use the right tools, including lawyers and doctors who know what they are doing.
And if you are using ChatGPT in your business, now is the time to review your AI Acceptable Use Policy. Make sure everyone follows your internal AI policies as well as OpenAI’s new usage policy (so you do not get banned from the platform), and that all of your employees know exactly what they can and cannot use AI for in your business.
