Since the message cap for ChatGPT's GPT-4 became stricter, it is basically unusable: it refuses or questions prompts that are continuations, treating them as out of context; errors pile up (though since I last mentioned them they have been cleaned up and reduced considerably); and now that DALL·E counts against GPT-4 usage, prompting with GPT-4 maxes out within the first hour, usually a bit short of where I want to be. If the cap were just 50 messages, it would probably be fine for me now for low-volume texts with few parts, but not for larger articles with many parts, so I am forced to keep using the API.
For the money I make and project to make, the API pricing is basically ridiculous (at least for now; this may change, and it seems like it will). GPT-3.5 is unusable and has been downgraded since it first launched: it refuses prompts easily and is far worse at understanding chat context than it used to be.
A model in between these two, even at 5x the cost of GPT-3.5 (which works out to about half the cost of GPT-4), with quality somewhere between GPT-3.5 and GPT-4 but GPT-4-level context understanding, would be a viable option and would actually make the API usable at an affordable price.
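To make the cost comparison concrete, here is a back-of-the-envelope sketch. The numbers are normalized units, not real prices; the only relationship taken from the text is that 5x GPT-3.5's cost is roughly half of GPT-4's, which implies GPT-4 costs about 10x GPT-3.5 per token.

```python
# Illustrative cost comparison in normalized units, not actual OpenAI prices.
gpt35_cost = 1.0                 # GPT-3.5 cost per token unit (baseline)
gpt4_cost = 10.0 * gpt35_cost    # implied by "5x GPT-3.5 is about half of GPT-4"
mid_cost = 5.0 * gpt35_cost      # the proposed in-between model

tokens = 100_000  # hypothetical monthly usage, same normalized units

for name, cost in [("GPT-3.5", gpt35_cost),
                   ("mid-tier", mid_cost),
                   ("GPT-4", gpt4_cost)]:
    print(f"{name}: {tokens * cost:,.0f} cost units")
```

Under these assumptions, the mid-tier model would cost half of GPT-4 for the same volume, which is the margin the proposal depends on.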