I’ve been thoroughly enjoying my experience with text completion using the
text-davinci-003 model. Whether through the web playground on the OpenAI website or the less polished Azure interface, I’ve been able to achieve satisfying results. However, I’m now looking to replicate this “text completion behavior” using the newer chat models.
My question is:
How can I mirror the text completion behavior I’m accustomed to? Do I need to fill out the SYSTEM prompt in order to obtain a completion in the ASSISTANT’s response?
BTW, when it comes to the Azure playground interface, which I must say is less than optimal, I’ve encountered a particular issue: I’m unable to bypass the requirement to enter a USER input before obtaining an ASSISTANT response. Interestingly, when I place the prompt in the USER field instead of the SYSTEM field, I observe undesirable behavior. As a result, I’m not inclined to pursue a “chat” model approach.
Do any of you have ideas or suggestions to offer? Your insights would be greatly appreciated.
The chat models cannot truly complete follow-up tokens organically:
- They have far too much training on the question/answer format;
- Extra tokens, carriage returns, and an “assistant” role marker are inserted between the last user message and the point where the AI begins its answer.
The best you can do is use “complete this segment:” or “text completion:” as a system message, if not fuller language describing how the AI should finish and continue the text. This produces a simulation of completion, not the exact next token based on raw probabilities.
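A minimal sketch of that workaround (not from this thread): frame the chat model as a pure continuation engine in the system message, then pass the raw text as the user message. The model name, system wording, and helper function here are all assumptions for illustration.

```python
def build_completion_style_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Build a chat-completions payload that asks a chat model to behave
    like a text completion engine (an approximation, per the discussion)."""
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a text completion engine. Continue the user's "
                    "text directly, with no preamble and no commentary."
                ),
            },
            # The raw prompt goes in the user turn, as the playground requires.
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }

payload = build_completion_style_request("Once upon a time, there")
print(payload["messages"][0]["role"])  # system
```

The payload can then be sent to the chat completions endpoint; the result will usually read like a continuation, but it is still a chat-format answer, not true next-token completion.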
New completion models have been announced for the future, but exactly where they will land in training quality and true completion ability is a matter of speculation.
Thank you for your feedback. It seems you share my concerns. So, if I understand correctly, you are suggesting either placing the prompt in the system message as a “workaround” while waiting for text-davinci-003 replacements to arrive, or continuing to use text-davinci-003. Is that accurate?
The applications for true completion are few, so there is likely some different instruction you can provide that will reach the same output goal.
BTW, the open-source library LiteLLM just implemented a text completion mode. See:
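A hedged sketch of what that looks like: LiteLLM exposes a `text_completion()` helper that wraps chat models behind the legacy completion-style interface. The exact parameters and response shape below are assumptions; check the LiteLLM documentation for current usage.

```python
import os

def complete(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Send a prompt through LiteLLM's completion-style interface."""
    from litellm import text_completion  # pip install litellm
    response = text_completion(model=model, prompt=prompt, max_tokens=32)
    # The response mimics the legacy completions format: text, not a message.
    return response.choices[0].text

# Only attempt a real call when credentials are available.
if os.environ.get("OPENAI_API_KEY"):
    print(complete("The capital of France is"))
```

Under the hood this still hits a chat endpoint, so the caveats above about inserted role tokens still apply; it just restores the familiar prompt-in, text-out interface.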