ChatGPT: Completions API doesn't reply as the web chat playground does?

Hi

See the copy-pasted conversation with https://chat.openai.com/chat:

what is your name?

I don't have a name, I am just a language model trained by OpenAI. I am here to answer your questions and provide general information on a wide range of topics. If you have any other questions, don't hesitate to ask.

what is your name?

As I mentioned before, I am a large language model trained by OpenAI and do not have a name. I am designed to assist with a wide range of tasks, including answering questions and providing information on a variety of topics. I am here to help, so if you have any further questions, please feel free to ask.

Now, if I use this trivial script:
#!/bin/bash

API_KEY='MY_OPENAI_API_KEY'

curl https://api.openai.com/v1/completions \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
  "model": "text-davinci-002",
  "prompt": "what is your name?"
}'
{"id":"cmpl-6JXXXXXXXXX","object":"text_completion","created":1670105179,"model":"text-davinci-002","choices":[{"text":"\n\nMy name is Sarah.","index":0,"logprobs":null,"finish_reason":"stop"}],"usage":{"prompt_tokens":5,"completion_tokens":7,"total_tokens":12}}

My name is Sarah?

That’s insane. I mean, the ChatGPT web chat is great, but when I call it via the API I get very DIFFERENT (inconsistent/weird) answers. Why?
Am I selecting the wrong model name or other parameters? I’m confused.

2 Likes

Hi, it is because the completions endpoint is serving Davinci Instruct (text-davinci-002, GPT-3 vintage), not the recent ChatGPT model, which is based on a more recent GPT model (3.5) and has been optimised in a different manner.
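
If you want to double-check which models your API key can actually reach, the models endpoint lists them (a quick sketch; you won’t find a ChatGPT model in that list at the moment):

curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"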

3 Likes

Does anyone know if OpenAI will allow access to the ChatGPT model via the API, and if so, when is it planned? I would like to try using this model with our data.

3 Likes

Thanks Paul for your feedback. Yes, I think you are right: it seems that any text-davinci-* model in the API is NOT the same model used by ChatGPT online. Sad.

That’s weird, because the OpenAI API documentation doesn’t say anything about the fact that the ChatGPT model is NOT (yet) available via the API, and ChatGPT itself, when chatting, insists (supplying code examples) that the correct model to use in API requests is text-davinci-002.

That’s apparently false, as you say; it’s a pity not to be allowed to use the API for the new model.

Please everybody, let us know when the ChatGPT model will be available.

Meanwhile, workarounds could be two unofficial Python wrappers:

1 Like

See also some feedback on: Discord

Use this code instead 🙂

curl https://api.openai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
  "model": "text-davinci-003",
  "prompt": "The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How can I help you today?\nHuman: ",
  "temperature": 0.9,
  "max_tokens": 150,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0.6,
  "stop": [" Human:", " AI:"]
}'

ChatGPT is just a tweak of the Playground. Adding a few parameters makes it ChatGPT (see the sketch after this list):

  • Prompt the completions endpoint to act as an AI assistant
  • Start a new line and begin each turn with “Human:” or “AI:”
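
For example, a rough sketch of that loop (assumes bash and jq; the preamble and parameters mirror the curl example above, and the whole transcript is re-sent on every turn):

#!/bin/bash
# Rough sketch: keep a running Human:/AI: transcript and re-send it each turn.
# Assumes jq is installed; the API key and preamble text are illustrative.
API_KEY='MY_OPENAI_API_KEY'
PROMPT="The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly."

while read -rp 'Human: ' LINE; do
  PROMPT="$PROMPT"$'\n'"Human: $LINE"$'\n'"AI:"
  REPLY=$(curl -s https://api.openai.com/v1/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $API_KEY" \
    -d "$(jq -n --arg p "$PROMPT" \
      '{model: "text-davinci-003", prompt: $p, temperature: 0.9,
        max_tokens: 150, stop: [" Human:", " AI:"]}')" \
    | jq -r '.choices[0].text')
  PROMPT="$PROMPT$REPLY"
  echo "AI:$REPLY"
done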
2 Likes

That’s not true. ChatGPT doesn’t even use GPT-3, which is what Playground uses. Plus, ChatGPT has much better memory than Playground allows for.

1 Like

I may well be wrong, sorry for the overconfidence.
Still, the code I posted is a good starting point for improvement.

Now I read that text-davinci-003 (the default Playground model) is GPT-3.5, and GPT-3.5 is what ChatGPT uses. My source may be wrong; could you clarify what you think?

I could explain it to you but this video does a much better job: ChatGPT for Absolute Beginners - What is it and how does it work? - YouTube

3 Likes

Well, as I understand it so far,

ChatGPT is based on a GPT-3.5 model, probably a fine-tuned version of text-davinci-003, with a special hidden prompt, a sort of lightweight conversational memory management and, last but not least, a moderation layer on top.
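
As an illustration of that last layer, the public moderations endpoint can be called on text before (or after) a completion request; whether ChatGPT’s own moderation works exactly this way is just my guess:

curl https://api.openai.com/v1/moderations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
  "input": "text to check before passing it to the model"
}'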

Thx man, your solution worked very well for me.

@solyarisoftware, @tfinanceapi is correct:

"prompt": "The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n\nHuman: Hello, who are you?\nAI: I am an AI created by OpenAI. How can I help you today?\nHuman: "

^^ this is the key. Yes, ChatGPT uses “GPT-3.5”, but the biggest reason you’re getting such different responses is that the prompt is totally different (yours was "prompt": "what is your name?").

ChatGPT, among other things, has a hidden prompt that starts off the conversation to make sure it stays on the rails.
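
For comparison, here is the original question wrapped in that kind of preamble (the preamble text below is illustrative, not ChatGPT’s actual hidden prompt):

curl https://api.openai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
  "model": "text-davinci-003",
  "prompt": "The following is a conversation with an AI assistant created by OpenAI. The assistant is helpful, creative, clever, and very friendly.\n\nHuman: what is your name?\nAI:",
  "temperature": 0.9,
  "max_tokens": 150,
  "stop": [" Human:", " AI:"]
}'

With that framing, the completion should answer in the web-chat style (“I am an AI created by OpenAI, I don’t have a name…”) rather than inventing one.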

2 Likes

Oh, thank god! I thought I was just bad at asking questions to the AI and started feeling so dumb, haha.

1 Like

I tried many times to get the same response as the web chat; the solution that solved all of my problems was to provide a system message that modifies the personality of the assistant and “sets its behavior”. For example, {"role": "system", "content": "You are an invoice parser."}, then provide the user message, like {"role": "user", "content": "For the given invoice, extract the invoice details according to the schema ..."}.
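
In curl terms, a minimal sketch of that (assuming the chat completions endpoint and a chat model such as gpt-3.5-turbo) looks like:

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "You are an invoice parser."},
    {"role": "user", "content": "For the given invoice, extract the invoice details according to the schema ..."}
  ]
}'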

1 Like