Reviewing your appeal - Trademarks

You can’t :+1:

Just wait, there’s nothing more you can do at this point.

I’m looking at a similar solution with Copilot… they’d better hurry up.

I waited a month for them to get back to me on an appeal. I also needed to appeal due to trademarks.

But they did get back?? Did you win the appeal?
I’ve sent them all of our trademark documents proving we have ownership.
I don’t mind waiting, but it’s the lack of communication that really gets to me. Waiting is fine if you know it’s being dealt with and there’s an end goal.
Thanks for your comment.
“Building customer expectations isn’t that hard”

I wrote a post about my appeal experience 2 days before your post.

My appeal hasn’t gone through after a month of waiting. The trademark appeal process used to take about 3 hours, but now it takes a month. I’m quite surprised and thrown off by this too. Anyway, I’m also still waiting.

Still nothing. It’s so frustrating to have a working GPT that none of my customers can view until the appeal is resolved, for trademarks I have already proved I own.
Frustrated

It would be nice to clarify why something is picked up for potentially violating the Terms of Service, especially when ChatGPT can analyze both the custom GPT and the current terms of service. In my example, I created a custom GPT that was focused on helping users ask better questions using established frameworks like Socratic Thinking, Eigenquestions, and First Principles Thinking. The goal was to guide users through progressively complex scenarios, offering feedback and helping them improve their questioning skills. There was no content aimed at providing medical, legal, or psychological advice, nor was it designed for manipulation or harmful use.

However, the GPT was flagged for potentially violating the Terms of Service. Upon reviewing OpenAI’s policies, it wasn’t immediately clear which specific rule the model might be breaking. Since the GPT was purely educational and focused on cognitive techniques for effective questioning, it seems like the violation was due to a potential risk or misuse scenario, rather than a direct breach.

Given the versatility of GPT models, it’s possible for any application to be used in ways that weren’t intended by the creator. But as developers, it’s important for us to understand exactly what part of the ToS is being triggered. For example:

  • Was it flagged due to a misuse concern, such as someone potentially using the questioning techniques in harmful ways?
  • Or was it due to user inputs involving sensitive topics, like relationships or personal issues, even though that wasn’t the intent of the tool?
  • Could the model have been seen as contributing to automated decision-making without oversight, even though it was designed purely for learning and educational purposes?

Clearer feedback during the appeal process would help developers make adjustments to ensure compliance and reduce confusion. A more granular explanation of why certain models are flagged—whether it’s a specific rule (like manipulation) or a general potential for misuse—would make it easier for creators to tailor their models responsibly while staying within the guidelines.

I believe that giving more specific feedback would not only help developers like myself understand what needs to be corrected but also enhance the overall quality of custom GPTs being built on the platform.