Completion endpoints serve completion engines. They (and GPT models in general) are designed to continue generating text that matches the form of the prior input and of the tokens already seen in the output.
text-davinci-003 is the model most heavily trained to follow user instructions across a range of scenarios. However, it also understands the simpler language of fine-tunes built around text-processing functions.
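As a sketch of what such a text-processing pattern looks like, here is a completion-style prompt in the spirit of OpenAI's grammar-correction example. The exact wording and the helper name are illustrative assumptions, not the canonical prompt:

```python
# Hypothetical sketch: frame a grammar-correction task as a text pattern
# (instruction, input, output marker) that a completion model continues.
# The wording is an assumption modeled on OpenAI's grammar example.

def build_correction_prompt(text: str) -> str:
    """Frame a correction task as a pattern for the model to continue."""
    return (
        "Correct this to standard English:\n\n"
        f"{text}\n\n"
        "Corrected:"
    )

prompt = build_correction_prompt("She no went to the market.")
print(prompt)
```

Because the prompt ends at the output marker, the completion engine's natural continuation is the corrected text rather than added content.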
Improve quality: """I started by breaking up the submitted .txt files by paragraphs and making API calls with 1000 word chunks, then writing the results straight into a new .txt file. This worked pretty well, but if someone happens to submit a file that has gigantic paragraphs, then a submitted chunk of text will exceed the token limit. So now I’m trying to break text down by character count, but I think this is screwing up my results, because now GPT keeps wanting to add content to the text instead of make simple corrections. I may try using the format shown in the grammar example but I just wanted to go ahead and ask for feedback before working on that. Is there a simpler way to get back results? I could not quite figure out how to submit an entire file to the API then ask for results, so I’m trying this method for now."""
AI: I started by breaking up the submitted .txt files into smaller chunks and making API calls with 1000 word chunks, then writing the results straight into a new .txt file. This worked well initially, but if a file had enormous paragraphs, then the submitted chunk of text would exceed the token limit. To solve this, I am now breaking down the text by character count, but I think this is causing some errors in the results I’m getting back. I may try to use the format shown in the grammar example instead for better results, but before I do that, I wanted to get some feedback. Is there a simpler way to get back results from the API? I am not sure what the best way is to submit an entire file if I can’t do it in one call.
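The chunking problem described in the quoted question can be handled without switching wholesale to character counts: keep splitting on paragraphs, and break apart only the individual paragraphs that are too large. This is a minimal sketch under assumed limits; the 1000-word figure comes from the post, and a real implementation would count tokens (not words) against the model's context limit:

```python
# Hypothetical sketch: chunk text by paragraphs, splitting only the
# paragraphs that exceed the limit, so chunk boundaries stay at natural
# breaks. MAX_WORDS is taken from the post; production code should count
# tokens rather than words.

MAX_WORDS = 1000

def split_oversized(paragraph: str) -> list[str]:
    """Split a single huge paragraph into word-limited pieces."""
    words = paragraph.split()
    return [" ".join(words[i:i + MAX_WORDS])
            for i in range(0, len(words), MAX_WORDS)]

def chunk_text(text: str) -> list[str]:
    """Greedily pack paragraphs into chunks of at most MAX_WORDS words."""
    chunks, current, current_len = [], [], 0
    for para in text.split("\n\n"):
        n = len(para.split())
        if n > MAX_WORDS:
            # Flush what we have, then split only this oversized paragraph.
            if current:
                chunks.append("\n\n".join(current))
                current, current_len = [], 0
            chunks.extend(split_oversized(para))
        elif current_len + n > MAX_WORDS:
            chunks.append("\n\n".join(current))
            current, current_len = [para], n
        else:
            current.append(para)
            current_len += n
    if current:
        chunks.append("\n\n".join(current))
    return chunks

sample = "\n\n".join(["word " * 400] * 3)  # three 400-word paragraphs
print([len(c.split()) for c in chunk_text(sample)])
```

Keeping chunk boundaries at paragraph breaks also helps the quality problem the poster describes: mid-sentence character splits invite the model to "finish" the fragment instead of correcting it.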
If you start to get into reasoning and logic, you exceed where the model performs best. The problem here is commanding an AI that doesn’t follow commands, along with the general drop in performance for all models as the input size grows.
Lots of direct instructions to somebody that doesn’t exist, telling it to do what the AI already does:
Rewritten pattern:
If you want to “chat” about the operations to be performed, you must use a chat endpoint and a chat model such as gpt-3.5-turbo.
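A chat request for this editing task might be framed as below. The system/user message wording is an assumption for illustration; the model name comes from the text above, and the actual network call (which requires the `openai` package and an API key) is shown commented out so the payload-building part runs on its own:

```python
# Sketch of framing the correction task for a chat endpoint. The message
# wording is a hypothetical example, not a prescribed prompt. The commented
# lines show where the real call would go using the `openai` client.

def build_chat_request(text: str) -> dict:
    """Build a chat-completions payload that asks for corrections only."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system",
             "content": "You are a copy editor. Correct grammar and "
                        "spelling only; do not add new content."},
            {"role": "user", "content": text},
        ],
    }

request = build_chat_request("Here is some text to corect.")
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**request)
# print(response.choices[0].message.content)
print(request["model"])
```

Pinning the behavior in a system message (“correct only, don’t add”) is exactly the kind of instruction a chat model is trained to follow, which is where the completion-style prompts above were falling short.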