It appears that none of the OpenAI models can reliably count words, even in the most basic sentence. I'm fairly sure this is a transformer limitation, but if I'm right, it would be great if that extra bit of "knowledge" could be trained into the higher-end OpenAI models. For example, ask a model to write a sentence that is exactly 5 words long. ChatGPT will fix the mistake once you point it out in a follow-up prompt, but that doesn't help those of us using text-davinci-003 with the Completions API, because I have yet to figure out how to trigger or emulate that "correction" behavior.
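The closest I can imagine to that correction behavior is emulating the follow-up turn in code: check the word count of each completion myself, and if it's wrong, append a correction to the prompt and ask again. Here is a minimal sketch of that loop; the retry logic and prompt wording are my own guesses, and `complete` is a placeholder for whatever actually calls the API (e.g. the legacy `openai.Completion.create` with text-davinci-003), not an official technique:

```python
def count_words(sentence: str) -> int:
    """Count whitespace-separated words in the model's reply."""
    return len(sentence.split())


def sentence_of_n_words(complete, n: int, max_attempts: int = 3) -> str:
    """Ask for an exactly-n-word sentence, re-prompting with a correction
    whenever the reply has the wrong word count.

    `complete` is any callable that takes a prompt string and returns the
    model's completion text (placeholder for a real Completions API call).
    """
    prompt = f"Write a sentence that is exactly {n} words long.\nSentence:"
    sentence = ""
    for _ in range(max_attempts):
        sentence = complete(prompt).strip()
        if count_words(sentence) == n:
            return sentence
        # Emulate the ChatGPT follow-up turn by appending the model's own
        # answer plus explicit feedback, then asking for a rewrite.
        prompt += (
            f" {sentence}\n"
            f"That sentence has {count_words(sentence)} words, not {n}. "
            f"Rewrite it so it has exactly {n} words.\nSentence:"
        )
    return sentence  # best effort after max_attempts
```

With a real `complete` wired to the API, this at least catches wrong counts deterministically on the client side, even if the model never learns to count on its own.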
For language-learning tools this would actually be an extremely useful feature. If I'm wrong about any of this, my apologies in advance; please show me how to craft a Completions API prompt that gives the desired behavior.