I recently built a Custom GPT called “Göz Travma Bilgi Asistanı” (Eye Trauma Info Assistant), designed purely for educational purposes. It helps ophthalmologists interpret OTS (Ocular Trauma Score) and POTS (Pediatric OTS) findings by producing a one-page PDF summary with a prognostic distribution chart and simplified clinical explanations. However, when I submitted it for public sharing, OpenAI’s review flagged it with this message:
“Your GPT can’t be shared publicly because it violates our Usage Policies. May provide tailored medical/health advice.”
I even removed the patient/parent (non-clinician) mode entirely and limited the flow to clinician use only.
Despite all this, the GPT was still blocked from public listing. I suspect the issue lies in how the visual outcome distribution chart is perceived, or in the medical terminology present in the knowledge base.
Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.
or
You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them.
I’d worry more that there is no proof that any particular AI model can actually do this task, since there is no specialized training in that domain. (And GPTs now let users pick a different AI model than the one you tested.)
There is a higher level of automated inspection, done by AI, when publishing something to the store, looking more closely at policy compliance. They wouldn’t want a “ChatGPT misdiagnosed and blinded this child” headline attached to their official store, as you might understand. You’ll have to reframe the language under a new GPT ID and try again.
You’ve got some nerve, brother, apple of my eye. Building a medical GPT in this day and age takes courage. Careful now; rocks may be fallin’, bears may be comin’ out.
It is just a joke!
You cannot create a medical-advice GPT; it is against OpenAI policy.
In your own private chat, ChatGPT may answer your medical questions because:
You are initiating the interaction privately
You accept responsibility for how you use the output
There is no public distribution, no branding, and no risk of others misinterpreting or misusing the content
OpenAI allows this under personal or professional discretion, with implicit trust in the user’s judgment.
But GPT creators can build custom GPTs whose instructed behavior may mislead other users, and that is against OpenAI policy.
There are still plenty of medical GPTs in the store, but they do not offer tailored medical advice.
Maybe you should modify your GPT’s instructions. But remember: once you have been flagged while trying to publish publicly, changing the instructions on the same GPT rarely gets it through. As a best practice, create a new GPT instead.
Hey, not to be rude but I think the issue is self-explanatory. AI assistants created using OpenAI’s products are not allowed to give personal medical advice. You explain very clearly that your GPT is designed to do so:
Even if the intent is educational, that doesn’t change anything. I can’t ask ChatGPT for instructions on how to build a bomb at home under the guise of education either. You’ll just have to find a different provider that allows these kinds of tasks. OpenAI does not.