I am unable to use chat anymore, because the API censors my prompts.

I created a writing app with connections to five different providers, in case of issues like this, as I had heard of them in the past. So I am not ruined, like I might have been had I trusted in good faith.

But I cannot send even a G-rated fiction story through their pre-screen without it finding something wrong, and my prompts average only about 5K tokens.

Instead I am stuck with 148 dollars in credits that are unusable for their intended purpose.

It censored “He walked towards her quickly.” (Two young people in love, and he was just getting close.)
That makes it unusable. And since they haven’t fixed it for a long time now, I can still run that story through and watch it fail, which makes it look intentional.

And to me that doesn’t make sense, as MSFT taught me that developers are the people everyone else goes to for expertise. Even the press. So why are they angering so many devs, for petty cash at best?

If you sold it to someone because it was good for writing, but you don’t want them to use it anymore, so you gum it up, then give them back their money. Not doing so will cost 100,000 times more. (All zeros meant.)

Hey, sorry to hear you’re having problems.

What model? Do you have a full prompt you can share? What does it reply when it refuses?

It is the model that runs prior to chat, to test whether the prompt itself has anything bad in it, and it is suggested that all prompts go through it.
The “is this prompt okay” submission doesn’t tell you much info.

If you have a solution, that would be good. As you can see, it doesn’t give much of a reason. I had to run it through a little at a time about six months ago, or maybe not quite that long.
For now, OpenAI is commented out, and I had intended to rip it out as an awesome service I will have to get from MSFT.

    if (!skipModerationCheck)
    {
        ModerationsResponse? moderationResponse = null;
        try
        {
            moderationResponse = await client.ModerationsEndpoint.CreateModerationAsync(new ModerationsRequest(content));
        }
        catch (Exception ex)
        {
            // Fail-open: proceed to chat even if the moderation call fails.
            // (Flip to 'return (null, true)' here if you prefer fail-closed.)
            _logger.LogWarning(ex, "Moderation call failed; proceeding to chat with retry logic.");
        }

        if (moderationResponse?.Results?.Any(r => r.Flagged) == true)
        {
            _logger.LogWarning("Content flagged by OpenAI moderation");
            return (null, true); // Blocked by moderation
        }
    }

Are you talking about the moderation endpoint?

What numbers are you getting back?


I have all the OpenAI code commented out. If you are actually able to solve it, I will spend the time to find that story and run it through again.

Please let me know if you can actually fix it.
It has been my experience that when it comes to these sorts of rules, almost everyone has the power to add them, but no one has the power to remove them.
The above is the old code.

And since I can get replies, I will just send you the reply.

Can you print the full payload of the refusal? There’s data in there that’s useful. 🙂

On the API, it’s a bit more lenient than ChatGPT, but you still have to follow the rules, etc. Sounds like you’re just not reading the moderation payload you get back correctly?
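
If it helps, here is a minimal sketch of how to dump the full moderation payload so you can see which category actually fired, rather than just a single flagged bit. This calls the public `/v1/moderations` endpoint over raw HTTP instead of the client library shown above (the class and variable names here are just illustrative):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ModerationDebug
{
    static async Task Main()
    {
        var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", apiKey);

        // The sentence that was reportedly flagged.
        var body = "{\"input\": \"He walked towards her quickly.\"}";
        var response = await http.PostAsync(
            "https://api.openai.com/v1/moderations",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // Print the full payload: it contains per-category booleans and
        // per-category scores, not just an overall flagged boolean.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```

The response JSON includes a `categories` map of booleans and a `category_scores` map of floats, so logging it should show exactly which category is tripping and how close to the threshold the scores are.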

It did work for a while; I will double check it. If I remember, it was category six that it failed for, but it has been a while, so I am not sure.

I hope you are right and I am wrong. That was the old code; I will just send you what the message is.

You’re likely right. It was the elbow-throwing that customer service has done in the past that made it seem reasonable and even likely that they were … when in fact, I was wrong. Or else, as you implied, the post would have been buried.
We will see.

Given how often I am wrong, I am going with that, and I appreciate them keeping my post, even though it was clearly too negative.


Don’t worry, the system already flagged my post. Never talk about it, and censor as much as they can.

He was angry at his sense of hopelessness in this situation, and he was angry at me for still having hope, IMO. He was not asking to be booted but to be treated fairly.

We have many discussions in the community about “censorship.” There is almost never a reason to remove or close these topics, despite what is being implied here.

I will get back to this on Monday and let you know what I find and what the message is.

Thank you


With all due respect, it appears you are stuck in a negative feedback loop of your own making. Just sayin…

Yes, it does, and not just in one way, but in several. It was also a mistake a lot of people might make: confusing a trillion-dollar company with the only face of it I ever see, which is support, or the support bot.
But the bot telling me things like “Just ignore the first five pages of heavy text, that is just to intimidate you into turning around” put me in the humiliating position of arguing with a bot, and I think that is where I lost my perspective, forgot it was all just research, and showed up here with perhaps a bit of a chip.

So, I have a lot more research results than I expected (g), and I will let you know Monday what the error is.

I researched it a bit, and I learned that what Wall Street and congressional stock trading shows is common knowledge, and I am practically the last to know. Now it makes sense why customers are being treated as children.

Children are the target market.

OpenAI puts adults in timeout, without their permission, as a behavior guide. I am sure that will go well for them.

On the actual subject: “He walked towards her quickly” no longer causes a censorship failure, after months of the interface being unusable, so if that was you, thank you.
