Hi! I’m having a rather weird issue: text moderation models aren’t available for my account.
Requests to /v1/moderations result in a 403 error: "Project does not have access to model `text-moderation-007`". I'm the account admin, and neither my API key nor the project in question has any restrictions (I've also tried with the default project, just to be sure).
I’ve tried requesting the available models via /v1/models and no text-moderation models could be found in the output.
I wonder whether this is a common issue. Maybe I need to wait until my account reaches a specific usage tier? My account is currently at Tier 1.
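For the record, here is a minimal sketch of how I'm hitting the endpoint, assuming the official `openai` Python SDK (v1.x) and an `OPENAI_API_KEY` in the environment. The helper names are mine, just for illustration:

```python
def build_moderation_request(text, model="text-moderation-007"):
    """Keyword arguments passed to client.moderations.create()."""
    return {"model": model, "input": text}


def reproduce(text="test input"):
    """Send the failing /v1/moderations request.

    Assumes the official openai Python SDK (v1.x) is installed and
    OPENAI_API_KEY is set in the environment.
    """
    from openai import OpenAI

    client = OpenAI()
    # For my Tier 1 project this raises a 403 PermissionDeniedError
    # with code "model_not_found".
    return client.moderations.create(**build_moderation_request(text))
```

Calling `reproduce()` with any input fails the same way for me.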
Hi, we are facing the same issue here. We checked both of our API keys: one works, but the other, with the same configuration, keeps returning the error below. Is there any solution?
{
  "error": {
    "message": "Project `proj_uJvOZZZZiMpuTvzqoDZxPAaQ` does not have access to model `text-moderation-007`",
    "type": "invalid_request_error",
    "param": null,
    "code": "model_not_found"
  }
}
Did anyone get an answer to this? I am using the Windows version of Anything LLM, so I don't have access to the code making the request. But as others have noted, text-moderation-007 is not a selectable model within a project.
First of all, the models behind the /moderations endpoint are separate from the models that are billed for usage, so they don't show up when you make a call to the /models endpoint.
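As a quick illustration (the helper names are mine, assuming the official `openai` Python SDK, v1.x), filtering the /v1/models listing for moderation models comes back empty even on accounts where /v1/moderations itself works:

```python
def moderation_model_ids(model_ids):
    """Filter an iterable of model ids down to the moderation models."""
    return sorted(m for m in model_ids if "moderation" in m)


def list_moderation_models():
    """List moderation models visible via /v1/models.

    Assumes the official openai Python SDK (v1.x) is installed and
    OPENAI_API_KEY is set in the environment.
    """
    from openai import OpenAI

    client = OpenAI()
    # In my tests this comes back empty: moderation models are not
    # included in the /v1/models listing.
    return moderation_model_ids(m.id for m in client.models.list())
```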
I created a new project and ran some tests to try to get to the root of this issue, and I haven't been able to reproduce the error.
If the project’s API key is read-only or doesn’t have access to the Model Capabilities scope, it would give an error:
openai.AuthenticationError: Error code: 401 - {'error': {'message': "You have insufficient permissions for this operation. Missing scopes: model.request. Check that you have the correct role in your organization (Reader, Writer, Owner) and project (Member, Owner), and if you're using a restricted API key, that it has the necessary scopes.", 'type': 'invalid_request_error', 'param': None, 'code': 'missing_scope'}}
If you were to set model = "text-moderation-007", you would receive the following error:
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value for 'model' = text-moderation-007. Please check the OpenAI documentation and try again.", 'type': 'invalid_request_error', 'param': 'model', 'code': None}}
I’d recommend sharing more details about how you’re calling the moderations endpoint and checking the limits page for your project.
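If it helps, here is a minimal known-good call to compare against (assuming the official `openai` Python SDK, v1.x; the helper names are mine). Omitting `model` lets the API choose its default moderation model, so nothing is pinned to a specific id:

```python
def moderation_payload(text):
    """Request body for POST /v1/moderations with the default model."""
    return {"input": text}


def check_default_moderation(text="test input"):
    """Run a moderation request without pinning a model id.

    Assumes the official openai Python SDK (v1.x) is installed and
    OPENAI_API_KEY is set in the environment.
    """
    from openai import OpenAI

    client = OpenAI()
    result = client.moderations.create(**moderation_payload(text))
    return result.results[0].flagged
```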
Out of curiosity, are the calls to the moderations endpoint matched to subsequent calls to the API? Or were you using it for other moderation? I believe it's still limited to OpenAI apps…
We finally solved the issue by creating a new project. Our best guess is that because we jumped on the projects bandwagon quite early, something wasn't set up quite right.
The other problem we then had was that some of our Assistants were v1 and some were v2. We have removed all our v1 assistants and are just using the v2 ones now.
Thanks for coming back to let us know. I’ve marked your update as the solution, but feel free to start a new thread or use this one if the problem returns.