Upgrading from DaVinci to Turbo has broken functionality

I’ve put both versions of my code into a git repo. The older version worked fine, but the newer version has an issue with the line const bot = response.data.choices[0].text.trim(); — I have been trying to fix this for two days to no avail.

What have you tried so far? Hard to help with so little information.

Do you know there’s a new Chat endpoint for completions?

The docs have been updated to reflect the changes.

Good luck!

The above is incorrect for the new chat completion response.

See example chat response:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nHello there, how may I assist you today?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
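For anyone comparing the two endpoints, here’s a minimal sketch of the difference in how the reply text is pulled out. The response object below is the docs example from above; the variable names are just for illustration:

```javascript
// Example chat completion response, copied from the docs example above.
const chatResponse = {
  id: "chatcmpl-123",
  object: "chat.completion",
  created: 1677652288,
  choices: [{
    index: 0,
    message: {
      role: "assistant",
      content: "\n\nHello there, how may I assist you today?"
    },
    finish_reason: "stop"
  }],
  usage: { prompt_tokens: 9, completion_tokens: 12, total_tokens: 21 }
};

// Old completions API:  the reply lives at choices[0].text
// New chat completions: the reply lives at choices[0].message.content
const bot = chatResponse.choices[0].message.content.trim();
console.log(bot); // prints "Hello there, how may I assist you today?"
```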

:slight_smile:


Hi,

Yes, I added the new endpoints to the file, integrating what was previously the req.session.prompt functionality into them. But the final section of my code relied on req.session.prompt to send a response to the user.

So it seems that without that it will not work. I tried rewriting it, but it always had an issue with the trim(); function.

It is trivial to see that the above is causing an error!

The text key does NOT exist in the chat completion response!

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nHello there, how may I assist you today?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}

The API docs are your friend.

:slight_smile:

I do not understand it. I’m not being lazy and just refusing to read the docs; I’ve been sitting here all Saturday going round in circles. I wish with all my heart I could look at it and say it’s trivial.

I have changed my code

 const response = await openai.createChatCompletion({
      model: "gpt-3.5-turbo",
      prompt: [
        {"role": "system", "content": `You are chatting with an AI with ${req.session.personality} personality.`},
        {"role": "user", "content": `Hello, my name is ${req.session.name}.`}
      ].join('\n'),
      temperature: 0.5,
      maxTokens: 500,
  });

  const bot = response.choices[0].message.content.trim();
  req.session.prompt += `${bot}`;
  twiml.message(bot);

  res.type('text/xml').send(twiml.toString());

});

Now the error is “‘messages’ is a required property”.

I’ve looked at the API docs. At first I was confused, as openai.ChatCompletion.create( didn’t work, until I looked at other code written in JS and realised it needed to be createChatCompletion.

I’ve looked at the response format over and over and I simply don’t get it. Should the response format be separate from openai.createChatCompletion({, or is my openai.createChatCompletion({ section fine?

Should it be message.content.trim(); or message.trim();? And if the object is chat.completion, does that need to be added somewhere in my code? Looking at the request documentation, I am even more confused, as 'curl https://api.openai.com/v1/chat/completions' has not appeared in any example code that I have seen for similar models.

I do not expect you to answer all these questions, but I feel they do illustrate my point, that for me it is not trivial.

That should be “messages”…

Hi @c3637107566, sorry, but the above is wrong. There is no prompt key in the chat completion params. The API docs clearly illustrate this.

Sorry to be so direct again, but please tell us why you are using a key which is not in the API docs for the chat completion.

Also, just curious: why are you joining your message params with a newline? To be honest, reading your posts from here, it is like “something” is just “making this up for you,” since it does not follow the API docs.

You are using ChatGPT to code for you, @c3637107566, isn’t that right?

From the API docs:

{
  "model": "gpt-3.5-turbo",
  "messages": [{"role": "user", "content": "Hello!"}]
}
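In openai-node (the v3 library the code above uses), that JSON body maps onto the params object passed to createChatCompletion. A hedged sketch with placeholder message content — note messages (an array of {role, content} objects) rather than prompt, and snake_case max_tokens rather than maxTokens:

```javascript
// Sketch of chat completion request params for openai-node v3.
// "messages" is required and must be an array of {role, content} objects;
// there is no "prompt" key, and the token limit is "max_tokens" (snake_case).
const params = {
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a helpful assistant." }, // placeholder
    { role: "user", content: "Hello!" }
  ],
  temperature: 0.5,
  max_tokens: 500
};

// With a configured client, the call would then be:
// const response = await openai.createChatCompletion(params);
console.log(Array.isArray(params.messages)); // prints true
```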

The logical explanation for the errors you are posting, @c3637107566, is that you are making the mistake many folks are making, which is using ChatGPT to write code. Please do not do that, because ChatGPT often generates buggy code and just “makes stuff up.” This is especially true for code released after the ChatGPT model was pre-trained: ChatGPT has zero data about the turbo release or the chat completion API method.

If you earnestly follow the OpenAI API docs you can code this in two to five minutes, bug free!

I’m trying to help you by teaching you how to code faster and with fewer bugs. Please follow the API docs and forget about using ChatGPT to help you develop code. Please post back, be open, and if you are using ChatGPT to help you write code, please let us know.

We want you to succeed. We care about you.

Thanks!

:slight_smile:

Indeed, I am using ChatGPT. I studied law and haven’t touched computer code in 10 years, and the biggest issue with that is I don’t know how or why things are written the way they are, so I don’t know what is an error and what is not.

I started the project again and went through the docs and changed all the variables I got completely wrong and still ran into an error. In the end it was this line:

  const bot = response.data.choices[0].message.content.trim();

This is the correct implementation; what I had before was wrong. On the upside, I’m learning quite a lot now. It’s fairly basic, but I’ve expanded the functionality of my code quite a bit, and my next iteration is going to try using MongoDB to store my variables persistently. Which… I tried with ChatGPT and it went disastrously, so I’m going to behave and follow a real tutorial.
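For anyone landing here later: the reason response.data is needed on that line is that openai-node v3 is built on axios, which wraps the API payload in a data property on the response object. A sketch with a mocked axios-style response (the inner object shape follows the docs example earlier in the thread; the wrapper fields are illustrative):

```javascript
// openai-node v3 uses axios under the hood, so the API payload is wrapped
// in a "data" property on the response object. Mocked here for illustration:
const response = {
  status: 200, // illustrative axios field
  data: {
    id: "chatcmpl-123",
    object: "chat.completion",
    choices: [{
      index: 0,
      message: {
        role: "assistant",
        content: "\n\nHello there, how may I assist you today?"
      },
      finish_reason: "stop"
    }]
  }
};

// Hence .data before .choices:
const bot = response.data.choices[0].message.content.trim();
console.log(bot); // prints "Hello there, how may I assist you today?"
```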