Hi guys,
Recently conversion.ai bought shortly.ai, and since shortly.ai has permission to generate long chunks of text (more than 100 words), conversion.ai now has that access too. They are providing almost complete API access to anyone who pays the subscription, people who don't have API access themselves. They allow all kinds of long generations that we, the developers who do have API access, don't even dare to ask permission for, because the guidelines make it clear that generating more than 40-50 tokens is against the rules, while conversion.ai is generating more than 150 tokens on any kind of subject.
We would really appreciate OpenAI looking into this issue, as we feel this is a very anti-competitive move that discourages us from being innovative: we will not be able to compete if the situation remains unchanged.
We know OpenAI encourages innovation and that the intention is not to favor anyone, but we feel this is being overlooked.
Kind regards.
Hi @jan.buzan.1997 - we hear you, and we know it's frustrating that there is a disparity in which features different organizations have approval for. It's something we are actively working to resolve, and we will hopefully be able to share more news soon. We definitely don't want people to feel scared to ask for permission for additional features! We have some flexibility in things like token limits when other safety mechanisms can be met, so please feel free to reach out. We want to empower developers while also maintaining standards of safety, so we will do our best to work with customers towards a solution.
It is interesting to see you take the opposite viewpoint from the one I, a user of this technology, take. When I started using GPT-3 in 2020, we were excited about the prospect of more innovation and further growth in the AI industry. I am indeed a proponent of innovation, and as such I want to balance both sides of this issue as best I can for everyone involved: OpenAI and its API users on one side, us developers on the other. At this point, you seem to think that developers should be penalized for being innovative. That's not what we want from any party in our industry or society, but it remains our goal to find a solution that benefits everyone.
To be honest, you should follow the tried-and-true strategy of getting out of the way and leaving it at that.
Just wanted to put in our two cents as another organization that finds the OpenAI guidelines to be extremely overbearing. In the beginning, the guidelines stated that the rules would be lessened over time as you learn more about how to make AI safer. But the opposite has happened.
It's disappointing to hear on a forum that there is "flexibility" in these rules because, as Jan rightly pointed out, the documentation is so heavy-handed that many users are afraid to even try. This confusing communication and plethora of rules is reminiscent of all the worst parts of the iOS App Store and makes it very difficult to justify investments in a new product based on OpenAI.
The fact that OpenAI seems to have a grandfathering policy in place, where any company that happened to get their use cases approved prior to a rule change now gets an unfair advantage, is especially worrying. OpenAI’s own guidelines mention this favoritism multiple times and I think Jan is absolutely right that this behavior is anti-competitive.
All these factors combine to give the impression that OpenAI does not actually care about improving the lives of humanity in general, but would rather keep this tech locked up in an exclusive club of flashy venture-backed startups and pet projects of people who happen to be in the right network.
Just using my case as an example, we’re a technology startup working to improve everyday health outcomes for all sorts of at-risk populations. More than 80% of our potential use-cases for OpenAI have been disallowed by guideline changes just in the short time since we signed up.
I'm encouraged by your mention that the OpenAI team is working on this issue and will eagerly await news. We believe GPT-3 is an incredible technology that everyone should have equal access to.
Yes, this is very worrying and unfair; these double standards need to be resolved. The documentation and guidelines are incredibly specific. The rules should apply equally to all, and should be relaxed and generalized more. Why is Conversion.ai allowed to do that?
We are concerned about the big increase in rules over time with limited justification and community input, but equally concerned with the fact that the rules are not applied fairly across the board. This point was not addressed in your response.
There are a lot of assumptions in your statements. I'm American and my cofounder is Indian, but our use cases are not regulated by the FDA. We're experienced in our industry and we don't need to take a closer look. My background is as a doctoral AI researcher, and I am well-versed in the incredibly rapid progress being made on AI safety, both in the research literature and in the OpenAI team's impressive efforts in this area.
When we take investment from people, we don't have the luxury of waiting to be a "fast follower". And most other companies, large and small, will find it difficult to invest money into developing a product on a platform where the rules seem so restrictive and they have no say. If the opportunity here is too restrictive or unfair, some of those companies and individuals will end up drawing on the now-vast open source AI research to build things on their own, completely outside the bounds of any guideline framework.
Rosie kindly reached out one-on-one to discuss these concerns and I’ll let her speak for OpenAI as an actual staff member.
As you point out, we have differing opinions and you’re not going to convince me or people who share my viewpoint with your current tactics. I bet many folks new to the community and sharing their thoughts on here would find your approach to discussion unwelcoming or even patronizing. I know I did.
I'd love to add my own input here, as I'm new and have gone through a few reviews now. At the start the rules do seem pretty specific, but if you read through and learn them, they push you to improve your own prompts, training data, and product. I've found this actually led to me improving my product without needing the extra allowance.
Hi @adriantwarog, I started watching your videos after coming to know about you from here, and they are really good.
Thanks for making them, and good luck with enhanceai.ai.
Copy.ai also seems to have extra privileges, but another thing I've noticed is that Copy.ai is not doing any content filter checks at all! Is that allowed? You can type almost anything you want, even religious or offensive content, and it will still generate, say, AIDA copy for that content. You can easily test it yourself. So how come they are allowed to operate while violating a major OpenAI content filter rule?
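(For anyone wondering what such a check looks like: here is a minimal sketch based on the content filter recipe OpenAI's documentation described at the time, using the content-filter-alpha engine. The sample text and the helper name are my own illustration, not anything from Copy.ai.)

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def content_filter_label(text: str) -> str:
    """Classify text with OpenAI's documented content filter.

    Returns "0" (safe), "1" (sensitive), or "2" (unsafe).
    """
    response = openai.Completion.create(
        engine="content-filter-alpha",
        prompt=f"<|endoftext|>{text}\n--\nLabel:",
        temperature=0,
        max_tokens=1,
        top_p=0,
        logprobs=10,
    )
    # Note: the full recipe in the docs also checks the logprob of a
    # "2" label against a threshold before trusting it.
    return response["choices"][0]["text"]

generated_copy = "Example output from a completion call."
if content_filter_label(generated_copy) == "2":
    print("Blocked: generation failed the content filter")
else:
    print("OK to show to the user")
```

Anything labeled "2" is supposed to be blocked before an end user ever sees it, which is exactly the check that seems to be missing.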
Why do you assume we have not read through them? It is exactly because we have read the rules that we are saying they are very limiting for commercial development.
And how do you know they are using GPT-3 for everything?
Have you tried GPT-Neo / Jax?
With some training they are very capable.
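(If anyone wants to try that route, here is a minimal sketch using the Hugging Face transformers library; the checkpoint is one of EleutherAI's published GPT-Neo models, and the prompt is just an illustration.)

```python
from transformers import pipeline

# Load one of EleutherAI's published GPT-Neo checkpoints.
# Larger variants (e.g. 2.7B) trade speed for quality.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

prompt = "Write a short product description for a reusable water bottle:"
result = generator(
    prompt,
    max_length=150,   # no 40-50 token policy cap here
    do_sample=True,
    temperature=0.9,
)
print(result[0]["generated_text"])
```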
And @adriantwarog never wrote that you haven't read them. He responded with a friendly tone and gave good advice.
And reading the rules, they are full of statements like these:
"Importantly, we do not expect you to implement every single Best Practice in the below sections. Rather, you should determine what seems most important for your use-case. "
" When unsure about whether a use-case is acceptable, you can engage in some creative brainstorming with the developer community and OpenAI staff in the Risk and safety section of our community forum."
Can you give some examples of specific rules that are limiting you from doing what you want to do?
You don't seem to have been very active in the Risk and safety forum. Why not?
Have you been turned down by OpenAI to release any specific app?