Within the first few days of the launch of gpt-4-0613, it was much faster than the previous gpt-4 model. This was great for me because I have an API that calls OpenAI's API, parses the data, makes database requests, etc., and my hosting platform has a hard request timeout of 100 seconds that cannot be changed. However, just today it's slow again and I can't use my API with GPT-4. Has anyone had this issue, and are there any workarounds?
We are seeing a very high error rate via the API for gpt-4-0613, even for the simplest calls without functions etc., so we switched back to the base model gpt-4.
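For anyone wanting to automate that switch-back, here is a minimal sketch of a fallback wrapper: it retries a model a couple of times and then falls through to the next one. The function names (`call_with_fallback`, `fake_request`) are hypothetical, and the stand-in request function just simulates errors; a real one would call the chat completions endpoint with the given model name.

```python
import time

def call_with_fallback(make_request, models, retries=2, delay=1.0):
    """Try each model in order; retry transient errors before falling back."""
    last_error = None
    for model in models:
        for attempt in range(retries):
            try:
                return make_request(model)
            except Exception as exc:  # in practice, catch the client's specific error types
                last_error = exc
                time.sleep(delay * (attempt + 1))  # simple linear backoff
    raise RuntimeError("all models failed") from last_error

# Stand-in request function: simulates gpt-4-0613 erroring out,
# so the wrapper falls back to base gpt-4.
def fake_request(model):
    if model == "gpt-4-0613":
        raise TimeoutError("simulated high error rate")
    return f"response from {model}"

print(call_with_fallback(fake_request, ["gpt-4-0613", "gpt-4"], delay=0.1))
```

This keeps the faster model as the first choice while it's healthy, without hard-coding the downgrade.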
So I hope not everything breaks together on June 27th if OpenAI doesn't fix this.
Pretty sure the API usage just shifted: fewer "people" were using GPT-4, so it got faster, while more "people" are now using gpt-4-0613.
There must be a limit to how far they can scale. Maybe even in terms of the energy needed for more hardware lol.