Code-davinci-002 incomplete responses

Using the default max_tokens of 64 can return incomplete responses, i.e. completion.data.choices[0].text is cut off mid-sentence. I’ve played around with the Explain Code example; here’s an incomplete response generated with max_tokens: 64:

  1. It’s exporting two functions: activate and deactivate.
  2. The activate function is called when the extension is activated.
  3. The deactivate function is called when the extension is deactivated.
  4. The activate function is registering a command called sayHello.
  5. The command

Is there a way to generate complete responses that fit a given max_tokens limit? Or do I need to increase the limit to cater to long responses?
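For what it’s worth, you can at least detect this case programmatically: the completions API sets finish_reason to "length" on a choice when output was cut off by max_tokens, and "stop" when the model finished on its own. A minimal sketch (the helper name is mine, and the object shapes mirror completion.data.choices[0] from the v3 Node client):

```javascript
// Detect whether a completion was cut off by the max_tokens limit.
// The API reports finish_reason === "length" for truncated output
// and "stop" when the model ended the completion naturally.
function isTruncated(choice) {
  return choice.finish_reason === "length";
}

// Example choice objects, mirroring completion.data.choices[0]:
const truncated = { text: "5. The command", finish_reason: "length" };
const complete = { text: "The extension registers a command.", finish_reason: "stop" };

console.log(isTruncated(truncated)); // true
console.log(isTruncated(complete)); // false
```

That doesn’t fix the truncation, but it lets the extension know when to retry with a larger limit instead of showing a half-finished explanation.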


Hi @polpil

Welcome to the community.

It looks like you should increase the max_tokens. You’ll be billed only for what you consume.
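For reference, raising the limit is a one-line change to the request parameters (v3-style openai Node client, as in the Explain Code example; the prompt and the value 256 here are placeholders, not recommendations from the thread):

```javascript
// Request parameters for the completions endpoint. Only max_tokens
// changes from the Explain Code defaults; the prompt is a placeholder.
const params = {
  model: "code-davinci-002",
  prompt: "// Explain what this code does:\n",
  max_tokens: 256, // raised from the default 64 to avoid truncation
  temperature: 0,
};

// With an initialized client this would be:
// const completion = await openai.createCompletion(params);
// console.log(completion.data.choices[0].text);
console.log(params.max_tokens); // 256
```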

BTW, are you working on a VSCode extension?

Thanks!

Was wondering if a response could auto-fit max_tokens, would be nice.

Yes, I am experimenting with a VSCode extension. What do you think?

Auto-fit could be counterproductive, since the length and complexity of the code change the length of the explanation. Fitting everything to one limit would mean short explanations consume more tokens than they need while longer ones lack detail.

A VSCode extension is a great idea. Is the explain-code functionality the only thing you are going for?

Got it - thanks.

For starters, yes. But I’d like to expand on that soon after.


@sps - on second thought, a VSCode extension installs its source files on the user’s machine, so my OpenAI API key would be available to everyone 🙂

Perhaps a Chrome extension is a better medium.

You can build a VSCode extension without storing your OpenAI API key locally on the user’s device.

That means asking users to supply their own OpenAI API key to my extension.

Or deploying a middleware API that interfaces with the user on one end and OpenAI on the other.
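That middleware approach can be sketched in a few lines: the extension calls your service, and only the server ever sees the API key. A hedged sketch, assuming the key lives in an environment variable on the server (the helper name, endpoint, and parameter values are illustrative, not from the thread):

```javascript
// Build the request the middleware forwards to OpenAI on the user's
// behalf. The API key is read server-side from an env var, so the
// extension (and the user) never receives it.
function buildUpstreamRequest(userPrompt) {
  return {
    url: "https://api.openai.com/v1/completions",
    headers: {
      // Key stays on the server; clients only send their prompt.
      Authorization: `Bearer ${process.env.OPENAI_API_KEY || ""}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "code-davinci-002",
      prompt: userPrompt,
      max_tokens: 256,
    }),
  };
}

// A real middleware would wrap this in an HTTP server, e.g.:
// http.createServer((req, res) => {
//   /* read the prompt from req, forward buildUpstreamRequest(...)
//      to OpenAI, then pipe the completion back to the extension */
// }).listen(3000);
```

This also gives you a place to enforce rate limits or per-user quotas before requests reach OpenAI.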