API completions for short questions are random code

We are using GPT-3 in Slack: we send the user's question and receive the completion in a thread. Sometimes, without any warning, the OpenAI API sends back random code snippets as the completion, for example:

import { Component, OnInit } from '@angular/core'
@Component… and so on.

We are not using Angular in our code, so it is not something our own code autofilled; the OpenAI API returns this completion without any warning.

Welcome to the community!

What model and what settings are you using on your bot?

I'm using text-davinci-003 and max_tokens: 500.

What about temperature and the other settings, i.e. frequency_penalty?

frequency_penalty hasn't been set. Temperature is 0.4.
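For context, a minimal sketch of what that completion request looks like with these settings (Python; the prompt is a placeholder, and the defaults shown for the unset parameters are the documented API defaults):

```python
# A sketch of the request the bot appears to be sending, with values taken
# from this thread. The prompt is a placeholder, not the real Slack text.
request = {
    "model": "text-davinci-003",
    "prompt": "<the Slack question>",
    "max_tokens": 500,
    "temperature": 0.4,
}

# Server-side defaults for the sampling settings that were never set:
defaults = {"frequency_penalty": 0.0, "presence_penalty": 0.0, "top_p": 1.0}
payload = {**defaults, **request}
```

With frequency_penalty left at 0, nothing discourages the model from rambling once it latches onto a pattern like boilerplate framework code.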


You might try setting it higher, around 0.7 or so… maybe even 0.8 if it doesn't flake out too much.

Let us know how it goes.

Sure, I will try that with a higher temperature. Could you please tell me how increasing the temperature can be a workaround? A higher temperature relies less on the trained completion and gives the model more freedom.
And the user is getting random code even without sending a request.

It might not help, but when you're troubleshooting, you generally want to try a lot of things and see what happens.

Can you share your prompt too? That might be where your problem lies.

Just when the prompt is very short, one or two characters. For example 't' or 'ps'. And sometimes this result comes when there is no call at all, i.e. without any new request.


Ah, that’s likely your problem. With such a short prompt, GPT-3 doesn’t know what you want. I would recommend prefacing it with something like "This is a chatbot that blah blah blah… "
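For example, a minimal sketch of that kind of preface (the wording and the function name are illustrative, not an official recipe):

```python
def build_prompt(question: str) -> str:
    # A short instruction block before the user's text gives the model
    # enough context to answer instead of free-associating code.
    preface = (
        "This is a helpful chatbot that answers questions from a Slack "
        "channel clearly and concisely.\n\n"
    )
    return f"{preface}Question: {question}\nAnswer:"

prompt = build_prompt("ps")
```

Even with a two-character question like "ps", the model now at least knows it is supposed to be answering as a chatbot.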

Have you looked at any of the code for the GPT-3 chatbots on GitHub? There are some good examples of how to write prompts for chatbots.

Good luck!


Yes, Paul. I could try prefacing, actually. But without a proper prompt, why is GPT-3 responding with a large random text response? Any idea on that? And there is still the problem of the API randomly sending a response without any prompt at all.

Well, think of it this way. If I came up to you in real life and just said, “ps.”

How would you respond? There’s not enough context for GPT-3 to know what you’re after.

Are you saying the API sends you responses without you sending a prompt? Or that you’re sending an empty prompt and getting something back?


Exactly. Came here to chime in.

You'll need to put a prompt before your Slack question/post, so that the model continues the post in ANSWER form.

Example:

Answer the following slack question that the user named {{user}} asked:

{{ slack_question }}

This will work worlds better. In my example, it will probably even mention the username in the text to personalize the answer even more.
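Turning that template into code, a sketch of how the bot might fill it in before calling the API (the user name and question below are made-up placeholders):

```python
def slack_prompt(user: str, slack_question: str) -> str:
    # Fill in the prompt template from the post above.
    return (
        f"Answer the following slack question that the user named "
        f"{user} asked:\n\n"
        f"{slack_question}\n"
    )

prompt = slack_prompt("jane.doe", "How do I rotate our API keys?")
```

The resulting string is what gets sent as the `prompt` field of the completion request, so the model sees the instruction and the question together.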