Hi, I got a simple program to query openai from python and I get responses but these responses are not the same as from ChatGPT. My current config:
prompt = (f"write a reports on data discrepancy")
response = openai.Completion.create(
You need to experiment with the parameters to get closer to ChatGPT's output. Also, ChatGPT keeps the text from the chat in "memory": when you send your second prompt, it uses your first prompt and its first reply as input too.
So with the API, you need to keep doing that yourself, up to the maximum number of tokens the model can handle, which I believe is 4K. (The embeddings API is now 8K.)
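In practice that means concatenating the earlier turns into each new prompt yourself, since the Completions endpoint is stateless. A minimal sketch (the helper name and the `User:`/`Assistant:` formatting are illustrative, not anything official):

```python
# Illustrative sketch: carry conversation history manually when calling
# the stateless Completions endpoint.

def build_prompt(history, user_message):
    """Concatenate prior (user, assistant) turns plus the new message."""
    lines = []
    for user_turn, assistant_turn in history:
        lines.append(f"User: {user_turn}")
        lines.append(f"Assistant: {assistant_turn}")
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")  # cue the model to answer next
    return "\n".join(lines)

history = [("What is a data discrepancy?",
            "A mismatch between two data sources.")]
prompt = build_prompt(history, "Write a short report on it.")
# `prompt` would then be passed as the prompt argument to
# openai.Completion.create(...); you'd trim the oldest turns once you
# approach the model's context limit.
```

You would keep appending each reply to `history` after every call, dropping the oldest turns as you near the token limit.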
We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.
Using davinci-if:3.0.0 or another instruct-series model is likely to give better results.
It will also require better prompting: refine the prompt to state specifically what you want the model to do. Instruct-series models are good at following explicit user prompts.
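For example, instead of a bare instruction, spelling out the audience, scope, and structure tends to help. The refined wording below is just an illustration, not a recommended canonical prompt:

```python
# Illustrative contrast between a vague prompt and a refined one for an
# instruct-series model: the refined version states the task, the
# audience, and the desired structure explicitly.
vague_prompt = "write a report on data discrepancy"

refined_prompt = (
    "Write a one-page report for a data engineering team about a data "
    "discrepancy found between two databases. Include: (1) a short "
    "summary, (2) likely root causes, and (3) recommended next steps."
)
```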
The documentation that Paul pointed you toward shows the model name text-davinci-003.
It looks like you may be getting ChatGPT to give you code examples.
If that's the case, be aware that it will often make things up. Unfortunately, your best option is to start at the link @PaulBellow sent you; otherwise you will get frustrated by all the errors ChatGPT makes in its code examples.
To add to what @raymonddavey said, it really takes a lot of trial and experimentation, i.e. experience. I've got thousands of hours studying and building with GPT tech over the last four years, starting with fine-tuning my own GPT-2 models. It's not as hard as you might think, but it does take time.
Thank you, I have read through it, but I could not find a solution. My API-generated output is different from my ChatGPT output for the same input. There must be a way to get this done, since what ChatGPT is doing is the default; no additional configuration was given.
Keep looking. You can't replicate it exactly, since we don't know exactly how they're doing it beyond having a model fine-tuned for chat. They likely also insert a custom prompt before the user input that helps steer it.
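One common approach in those replication attempts is to prepend an assistant-style preamble to the user's text before sending it to the completion model. The preamble below is purely hypothetical; OpenAI's real internal prompt, if any, is not public:

```python
# Hypothetical steering preamble; OpenAI's actual hidden prompt (if one
# exists) is not public, so this is only a guess at the technique.
PREAMBLE = (
    "You are a helpful assistant. Answer the user's request clearly, "
    "admit when you are unsure, and refuse inappropriate requests.\n\n"
)

def steer(user_input):
    """Wrap the user's text so a completion model behaves more chat-like."""
    return PREAMBLE + f"User: {user_input}\nAssistant:"

# The returned string would be used as the prompt for
# openai.Completion.create(...) with an instruct-series model.
```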
There are a lot of posts, even here on the forum, that discuss replicating it with text-davinci-003.
Correct. The models are different as @PaulBellow said above.
… and may I add, these discussions are mostly speculation, because the underlying models are different, and it's mostly a waste of time trying to get a GPT-3.0 model to perform like a GPT-3.5 model. That is why OpenAI is releasing a ChatGPT API (soon, we hope).
ChatGPT does not use the same model as text-davinci-003 in the API (even with the same params). ChatGPT uses a proprietary OpenAI model based on GPT-3.5, while the API's text-davinci-003 model is based on GPT-3.0.
Using text-davinci-003 as the model name returns (using the same params as the OP):
This is because ChatGPT uses a proprietary OpenAI GPT3.5 model. This model is not available in the OpenAI API at this time.
Regardless of the back-and-forth being posted in the forums, devs cannot accurately get the OpenAI API to mimic ChatGPT in a reliable, consistent, meaningful way. The underlying models are not the same.
Everyone who wants an API that performs like ChatGPT should relax and wait for the "soon to be released" ChatGPT API.