I have been making requests to the o1-mini API for solving math questions. My last three batches of requests got cancelled after a few seconds with the following error:
BadRequestError: Error code: 400 - {'error': {'message': 'Invalid prompt: your prompt was flagged as potentially violating our usage policy. Please try again with a different prompt: https://platform.openai.com/docs/guides/reasoning/advice-on-prompting', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_prompt'}}
A few minutes later, I see that OpenAI has billed me over $100 for these failed requests. What happened? Does anyone know if I can get a refund?
Any attempt to ask the model about its reasoning or thought processes, or to tell it how to think, can trigger a refusal.
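If you're submitting these one at a time rather than through the batch endpoint, you can at least catch the refusal and stop before it repeats across a whole run. A minimal sketch, assuming the v1.x openai Python SDK; the prompt is a placeholder:

```python
# Sketch (openai Python SDK v1.x assumed): catch the 400 refusal instead of
# letting every prompt in a run fail the same way.
from openai import OpenAI, BadRequestError

client = OpenAI()

# Placeholder prompt: ask for the math itself, not for the model's reasoning steps.
prompt = "Solve: integrate x^2 * e^x dx"

try:
    response = client.chat.completions.create(
        model="o1-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
except BadRequestError as e:
    # 'invalid_prompt' is the code shown in the error above; log it and rework
    # the prompt rather than resubmitting the same thing over and over.
    print("Refused:", e)
```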
OpenAI invented a way to make you pay for a model to read their policies and guidelines and then not do what you ask of it. The reasoning tokens are consumed internally and billed at the output-token price.
Except unlike the "thinking" progress indicator in ChatGPT, on the API you get a complete blackout, with no stream of that progress at all. And they don't want you asking about that internal production either.
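You can at least see how many of those hidden tokens you're being billed for on calls that do succeed. A sketch, assuming the current openai Python SDK exposes the completion_tokens_details field on the usage object:

```python
# Sketch: on a successful o1-mini call, the hidden reasoning tokens show up in
# usage.completion_tokens_details and are billed at the output-token rate.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
)

usage = response.usage
details = usage.completion_tokens_details  # may be None depending on SDK version
print("completion tokens billed:", usage.completion_tokens)
print("of which hidden reasoning tokens:", details.reasoning_tokens if details else "n/a")
```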
You can certainly send a 'help' message along the lines of: "These batch completion requests were all refused, to the tune of a big charge for no response, with no real policy violation; I've read the terms up and down. I'd give you the request IDs, but it's your fault for not returning that info in the batch output after billing me for a generative non-service."
I would pull down all the result files for those batches, including the error file, since you shouldn't be getting refusals across the board unless you are submitting the same untested, problematic prompt over and over.
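A sketch of how you might do that with the Batches API via the openai Python SDK; the batch ID is a placeholder:

```python
# Sketch: pull the output and error files for a finished batch so you can see
# exactly which requests were refused and with what error code.
from openai import OpenAI

client = OpenAI()

batch = client.batches.retrieve("batch_abc123")  # placeholder: your batch ID

if batch.error_file_id:
    errors = client.files.content(batch.error_file_id).text
    print("Refused/failed requests (one JSON line per request):")
    print(errors)

if batch.output_file_id:
    results = client.files.content(batch.output_file_id).text
    print("Successful completions:")
    print(results)
```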