Apparently OpenAI made an update earlier this afternoon and my scripts broke because I was using deprecated parameters. I removed them (max_tokens and temperature), and since I started using it again, both o3-mini and gpt-4o have been so slow that my scripts are timing out, though they sometimes work. Am I the only one experiencing these speed issues? Could removing those two parameters affect the speed? Is there anything else on the OpenAI config side that could be doing this?
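In case it helps, here's a sketch of the workaround I ended up with: a small helper (my own, not part of the SDK) that builds the request kwargs per model family, since the o-series reasoning models reject temperature and expect max_completion_tokens in place of the deprecated max_tokens. The prefix check is just my assumption about which models behave this way.

```python
# Reasoning-model families that (as far as I can tell) reject `temperature`
# and use `max_completion_tokens` instead of the deprecated `max_tokens`.
REASONING_PREFIXES = ("o1", "o3")

def build_chat_kwargs(model, messages, max_tokens=None, temperature=None):
    """Build kwargs for client.chat.completions.create() per model family."""
    kwargs = {"model": model, "messages": messages}
    if model.startswith(REASONING_PREFIXES):
        # Reasoning models: rename the token cap, drop temperature entirely.
        if max_tokens is not None:
            kwargs["max_completion_tokens"] = max_tokens
    else:
        # Older chat models still accept the original parameter names.
        if max_tokens is not None:
            kwargs["max_tokens"] = max_tokens
        if temperature is not None:
            kwargs["temperature"] = temperature
    return kwargs

# Example: the same call site now works for both families.
msgs = [{"role": "user", "content": "hello"}]
print(build_chat_kwargs("o3-mini", msgs, max_tokens=256, temperature=0.7))
print(build_chat_kwargs("gpt-4o", msgs, max_tokens=256, temperature=0.7))
```

For the timeouts themselves, I'm setting those separately on the client (the Python SDK's OpenAI() constructor accepts timeout= and max_retries=), so at least the slow calls retry instead of dying outright.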
It turns out I was still using o3-mini. I'm not sure what happened, because I've been using it since it came out, but today the speed dropped significantly. Perhaps I wasn't really using it before? Maybe until this morning, “o3-mini” actually pointed to gpt-4o? I was hoping o3-mini would be at least as fast as gpt-4o, but I guess not.