Hi, I got a simple program to query openai from python and I get responses but these responses are not the same as from ChatGPT. My current config:
prompt = "Write a report on data discrepancy"
response = openai.Completion.create(engine="text-davinci-003", prompt=prompt)
You need to experiment with the parameters to get closer. Also, ChatGPT keeps the text from the chat in “memory”: when you send your second prompt, it uses your first prompt and its first reply as input too.
So with the API, you need to keep resending that history yourself, up to the maximum number of tokens the model can take, which I believe is 4K. (The embedding API is now 8K.)
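The history-resending approach can be sketched like this. Note the `build_chat_prompt` helper and the `Human:`/`AI:` labels are illustrative choices for this example, not an official OpenAI format:

```python
# Sketch: emulating ChatGPT's conversational "memory" with the legacy
# Completion endpoint by resending prior turns inside the prompt.

def build_chat_prompt(history, user_message):
    """Concatenate prior (user, assistant) turns plus the new message."""
    lines = []
    for user_turn, ai_turn in history:
        lines.append(f"Human: {user_turn}")
        lines.append(f"AI: {ai_turn}")
    lines.append(f"Human: {user_message}")
    lines.append("AI:")  # cue the model to answer as the assistant
    return "\n".join(lines)

history = [("What is a data discrepancy?",
            "A mismatch between two sources that should agree.")]
prompt = build_chat_prompt(history, "Write a short report on one.")

# Actual call (commented out; requires an API key):
# response = openai.Completion.create(
#     engine="text-davinci-003", prompt=prompt,
#     max_tokens=256, stop=["Human:"],
# )
```

After each reply, append the new (user, assistant) pair to `history` and trim the oldest turns once the prompt approaches the token limit.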
ChatGPT isn’t the same as the models you can call through the API.
From OpenAI’s post on ChatGPT:
We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.
davinci-if:3.0.0 or other instruct-series models are likely to give better results.
It will also require better prompting. You can refine the prompt to state specifically what you want the model to do; instruct-series models are great at following user prompts.
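As a rough illustration of prompt refinement (both prompt strings below are made up for this example, not taken from the thread), the difference is mostly about stating the task, scope, and format:

```python
# A vague prompt vs. a more specific instruction for an instruct-series
# model like text-davinci-003. Both strings are hypothetical examples.
vague = "write a reports on data discrepancy"

specific = (
    "Write a one-page report on the data discrepancies found between the "
    "sales and billing databases. Include a short summary, three likely "
    "causes, and recommended next steps. Use a formal tone."
)
# The second prompt tends to be followed far more reliably, because it
# spells out the deliverable, the data sources, and the structure.
```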
Thank you! How do I set it up? I have tried passing ChatGPT, InstructGPT, and davinci-if:3.0.0 as the model. The API responds with “That model does not exist”. Any pointers?
The documentation that Paul pointed you toward shows the model name text-davinci-003
It looks like you may be getting ChatGPT to give you code examples.
If this is the case, be aware that it will often tell lies and make things up. Unfortunately, your only choice will be to start at the link @PaulBellow sent you; otherwise, you will get frustrated by all the errors ChatGPT makes in its code examples.
To add to what @raymonddavey said, it really takes a lot of trial and experimentation, i.e. experience. I’ve got thousands of hours in studying and building with GPT tech over the last four years, starting with fine-tuning my own GPT-2 models. It’s not as hard as you might think, but it does take time.
The prompt is not an actual prompt I test with, and it is not related to coding.
Thank you, I have read through it but could not find a solution. The output I get from the API is different from the output I get from ChatGPT with the same input. There must be a way to get it done, since what ChatGPT is doing is the default; no additional configuration was given.
Use text-davinci-003 as your model name, and your problem should be sorted.
Unless I have misunderstood what you are asking (in which case you may need to ask it a different way).
Please see: engine="text-davinci-003" — so I am already using it, but I get totally different results than from ChatGPT.
Keep looking. You can’t replicate it exactly, as we don’t know how they’re doing it beyond having a model fine-tuned for it. They likely have a custom prompt before the user input, however, that helps steer it.
There are a lot of posts, even here on the forum, that discuss replicating it with text-davinci-003.
I’m sure I’ve seen some off-forum sources of information too, including possibly YouTube videos.
There’s no really easy answer that someone can give you. You’ll need to test and experiment and learn.
Again, there’s likely a prompt added before the user input on ChatGPT, as well as a specialized fine-tuned model that we don’t have access to yet…
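A rough sketch of that hidden-prompt idea, with an entirely guessed preamble (OpenAI has not published ChatGPT’s actual prompt, and `with_preamble` is just a hypothetical helper):

```python
# Prepend a ChatGPT-style instruction before the user's input when
# calling text-davinci-003. The PREAMBLE wording is a guess.
PREAMBLE = (
    "You are a helpful assistant. Answer conversationally, admit your "
    "mistakes, and refuse inappropriate requests.\n\n"
)

def with_preamble(user_input):
    """Wrap the raw user input in a steering prompt."""
    return PREAMBLE + f"User: {user_input}\nAssistant:"

# Actual call (commented out; requires an API key):
# response = openai.Completion.create(
#     engine="text-davinci-003",
#     prompt=with_preamble("Explain overfitting."),
# )
```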
Do you not understand why there is a difference? Or are you looking for an easy solution to get their outputs more similar?
Thank you. I do understand why there is a difference, though I assumed there would be an easy setup to emulate ChatGPT via the API. It seems there is not one.
As they say… “Nothing good in life comes easy.”
I thought that AI was invented to commoditize intelligence.
Hence, it makes things easy.
That’s the destination, but we haven’t reached it yet. Still early days believe it or not. You won’t believe what’s coming as the tech advances…
If you do get started, feel free to post a thread if you run into any problems, including code and specifics on your difficulties, and I’m sure we can help.
Correct. The models are different as @PaulBellow said above.
… and may I add, these discussions are mostly speculation, because the underlying models are different, and it’s mostly a waste of time to try to get a GPT-3.0 model to perform like a GPT-3.5 model. That is why OpenAI is releasing a ChatGPT API (soon, we hope).
ChatGPT does not use the same model as text-davinci-003 in the API (same params). ChatGPT uses a proprietary OpenAI model based on GPT-3.5, while the text-davinci-003 model in the API is based on GPT-3.0.
Using text-davinci-003 as the model name (with the same params as the OP) returns:
ChatGPT returns something totally different for the same prompt.
This is because ChatGPT uses a proprietary OpenAI GPT-3.5 model, which is not available in the OpenAI API at this time.
Regardless of the back-and-forth being posted in the forums, devs cannot get the OpenAI API to mimic ChatGPT in a reliable, consistent, meaningful way. The underlying models are not the same.
Everyone who wishes to have an API which performs like ChatGPT should relax and wait for the “soon to be released” ChatGPT API.
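For reference, when OpenAI did later release the ChatGPT API, it exposed the model as gpt-3.5-turbo through a chat-style endpoint in the Python SDK (0.x interface shown); the actual call is commented out since it needs an API key:

```python
# The ChatGPT API takes a list of role-tagged messages rather than a
# single prompt string, so the conversation history is explicit.
# import openai  # pip install openai; needs an API key to actually call

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a report on data discrepancy."},
]

# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo", messages=messages,
# )
# reply = response["choices"][0]["message"]["content"]
```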
Hope this helps. @robert5
There appears to be a lot of confusion and misinformation on this issue.
Thanks for clarifying that it’s not currently possible to get ChatGPT responses via the API.