Using the default 64
max_tokens can return incomplete responses, i.e.
completion.data.choices[0].text is cut off mid-sentence. I’ve played around with the Explain Code example; here’s an incomplete response I got with the default limit:
- It’s exporting two functions: activate and deactivate.
- The activate function is called when the extension is activated.
- The deactivate function is called when the extension is deactivated.
- The activate function is registering a command called sayHello.
- The command
Is there a way to generate complete responses that fit a given
max_tokens limit? Or do I need to increase the limit to cater to long responses?
Welcome to the community.
It looks like you should increase
max_tokens. You’ll only be billed for the tokens you actually consume.
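One way to tell whether a response was cut off by the limit, rather than guessing from the text, is to check the `finish_reason` field on the choice: the API sets it to `"length"` when output hit the `max_tokens` cap. A minimal sketch, assuming the response shape of the openai Node library (v3), where `completion.data.choices` is an array; the sample objects below are simulated, not real API responses:

```javascript
// Returns true when the completion was cut off by the max_tokens cap.
// finish_reason is "length" on truncation, "stop" on a natural ending.
function isTruncated(completion) {
  const choice = completion.data.choices[0];
  return choice.finish_reason === "length";
}

// Simulated response objects mimicking the v3 library shape:
const truncated = {
  data: { choices: [{ text: "- The command", finish_reason: "length" }] },
};
const complete = {
  data: { choices: [{ text: "…a full explanation…", finish_reason: "stop" }] },
};

isTruncated(truncated); // true  — retry with a higher max_tokens
isTruncated(complete);  // false — response ended on its own
```

So rather than picking one large limit up front, you could start moderate and re-request with a higher `max_tokens` only when `finish_reason` comes back as `"length"`.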
BTW, are you working on a VSCode extension?
I was wondering if a response could auto-fit max_tokens; that would be nice.
Yes, I am experimenting with a VSCode extension. What do you think?
Auto-fit could be counterproductive, since the length and complexity of the code change the length of the explanation: with a fixed budget, short explanations would waste tokens while long ones would lack detail.
A VSCode extension is a great idea. Is the explain-code functionality the only thing you’re going for?
Got it - thanks.
For starters, yes. But I’d like to expand on that soon after.
@sps - on second thought, a VSCode extension installs its source files on the user’s machine, so my OpenAI API key would be exposed to everyone.
Perhaps a Chrome extension is a better medium.
You can still build a VSCode extension; there’s no need to store the OpenAI API key locally on the user’s device.
That would mean asking users to supply their own OpenAI API key to my extension.
Or deploying a middleware API that interfaces with the user on one end and OpenAI on the other.
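The middleware idea could be sketched as a server-side helper that assembles the OpenAI request, so the key lives only in the backend’s environment and the extension talks to your server instead. This is a minimal sketch: the function name, prompt wording, `max_tokens` value, and model name are all illustrative assumptions; only the endpoint URL and payload fields follow the public OpenAI HTTP API, and the actual HTTP server/route code is omitted.

```javascript
// Hypothetical server-side helper: builds the outbound OpenAI request.
// The API key is read from the server environment and never reaches the client.
function buildOpenAIRequest(code, apiKey) {
  return {
    url: "https://api.openai.com/v1/completions",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // key stays server-side
    },
    body: JSON.stringify({
      model: "text-davinci-002", // illustrative model name
      prompt: `Explain what this code does:\n${code}`,
      max_tokens: 256, // placeholder budget
    }),
  };
}

const req = buildOpenAIRequest("const x = 1;", "sk-example");
```

The extension then only ever calls your middleware endpoint, and you can add per-user rate limiting or auth on that server without touching the OpenAI key.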