My API request gets a response that repeats the same line over and over

I have tried twice now to post the response to the forum, but because of the repeated lines it keeps getting blocked by your automated spam filter. HELP!

This is the request I am making:

  // OpenAI Node SDK v3 setup (Configuration and OpenAIApi come from the 'openai' package)
  const { Configuration, OpenAIApi } = require('openai');
  const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }));

  const response = await openai.createCompletion({
    model: 'text-davinci-003',
    prompt: 'show me how to use the javascript array method reduce()',
    temperature: 0.2,
    max_tokens: 2000,
  });

Seems simple enough, but the results have the same repeated lines until it runs out of tokens or times out. What am I doing wrong?

It works for me… using your prompt, temperature and max_tokens:

Maybe set n to 1?
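For reference, n is just the number of completions generated per prompt; each one comes back as a separate entry in choices. A quick sketch of where it sits, reusing the parameters from the first post (v3 Node SDK assumed):

  // Same v3 SDK setup as in the first post
  const { Configuration, OpenAIApi } = require('openai');
  const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }));

  const response = await openai.createCompletion({
    model: 'text-davinci-003',
    prompt: 'show me how to use the javascript array method reduce()',
    temperature: 0.2,
    max_tokens: 2000,
    n: 1, // how many completions to generate; each is a separate entry in response.data.choices
  });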

Anyhow, I set n to 4 and still got a single reply:

I will test to see what happens using the web interface like you have done, but I am actually calling the API from AWS Lambda. I can't show you the text (because the spam prevention will flag the post), but here is a screenshot from the logs that shows the kind of responses I regularly get:

For the input that produced the screenshot, the prompt was ‘Write an algorithm to solve sliding windows problems in C sharp code’ and the model was ‘ada-code-search-text’. The API just gets caught in some crazy loop, and it happens about half the time (not all the time).

You can see that this response is coming from the API and just repeats until it is out of tokens.

Yes, but you did not post the “most important things” needed for us to help you, which are the values you sent to the API in the completion call.

:slight_smile:

True, give me a few minutes to add some better logging and run it.
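For context, the logging I am adding is basically this (a simplified sketch, not the real Lambda handler):

  // Simplified sketch only: log the exact request body and the raw reply
  // so both ends of the call show up in the CloudWatch logs.
  const { Configuration, OpenAIApi } = require('openai');
  const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }));

  exports.handler = async () => {
    const request = {
      model: 'ada-code-search-text',
      prompt: 'Write an algorithm to solve sliding windows problems in C sharp code',
      temperature: 0.2,
      max_tokens: 2000,
      n: 1,
    };
    console.log('request:', JSON.stringify(request));

    const response = await openai.createCompletion(request);
    console.log('response:', JSON.stringify(response.data.choices));
  };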

OK, I have the perfect example: a simple question that then repeats text until it “tokens out”, with both the request and response fully logged:


Any guesses?

Nothing stands out.

Please post the text of your exact prompt so I can run it without re-typing it from an image.

{
    "model": "ada-code-search-text",
    "prompt": "Write an algorithm to solve sliding windows problems in C sharp code",
    "temperature": 0.2,
    "max_tokens": 2000,
    "n": 1
}

Normally I use text-davinci-003 to write code (which I do a lot of) and never use the “ada-code-search-text” model you have been using.

If I ask for an algorithm, as you did, I get this, which is an algorithm:

If I ask for code, using my prompt below, I get this:

Write a method to solve sliding windows problems in C#

If you want, I can add your model and see what happens, but I would have to hard-code it into my test app, since I don’t use that model.

Yep, if I change to your model, ada-code-search-text, I get similar “crap”…

blah blah…

If I change the prompt a bit:

Write a method to solve sliding windows problems in C#

I get the same junk…

It’s not a model I would use to write code… :slight_smile:

I’m sticking with text-davinci-003, which generates a lot of “nice draft” code for me: sometimes flawless, sometimes “a bit too creative” :+1:
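For reference, here is your request body with just the model swapped to the completion model I use (everything else unchanged from the JSON you posted):

{
    "model": "text-davinci-003",
    "prompt": "Write an algorithm to solve sliding windows problems in C sharp code",
    "temperature": 0.2,
    "max_tokens": 2000,
    "n": 1
}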

Yeah, I had also seen the issue when I was using ‘text-davinci-003’, but at the time I also had some commented-out properties in the JSON for the request, which may have caused the issue. I have since removed those commented properties and text-davinci-003 seems to be giving me back reasonable results. I’ll keep testing today and let you know what I find. I will also try to re-break it with the commented-out properties and see what happens.
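Worth noting: strict JSON has no comment syntax, so if the request body lives in a plain JSON string or file, commented-out properties will make JSON.parse throw before the call is even made. A quick illustrative check (not my real request):

  // Purely illustrative: JSON.parse rejects '//' comment lines, so a request
  // template stored as JSON text with commented-out properties fails before it is sent.
  const bodyWithComments = `{
    "model": "text-davinci-003",
    "prompt": "show me how to use the javascript array method reduce()"
    // "max_tokens": 2000
  }`;

  try {
    JSON.parse(bodyWithComments);
  } catch (err) {
    console.error('Invalid JSON request body:', err.message);
  }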
