Full code check without 'continue' option

I have code with over 1,000 lines that I would like ChatGPT or the Playground to clean up and improve. Unfortunately, every response is cut off, and “continue” just carries on from the incomplete response. I need one whole, complete response. Can this be done in the free beta version with a workaround, or do I need to pay for Davinci tokens? Or should I be using the Playground for this instead of ChatGPT?


Welcome to the community!

ChatGPT is more of a conversational model, I believe.

You might want to actually use Codex in the Playground…

Hope this helps!

I tried both ChatGPT and the Playground; both stop after a few seconds, probably due to the token limit:

15,896 tokens in prompt
Up to 1,695 tokens in response

This model can only process a maximum of 4,001 tokens in a single request, please reduce your prompt or response length.
Learn more about pricing

Looks like I may have to increase the maximum token size in a single request, which is why I am asking whether the pricing model would allow more tokens per request if I pay for it. Or would the outcome be the same because the same model is currently in the beta trial? I just don’t want to waste my money if there is a workaround during this trial period, like the “continue” command in ChatGPT (which breaks lines at intervals and would not provide the complete response due to missing data).

Thanks for trying, Paul, but I need further details on how to get larger coding responses in a single response.

Any other solutions from anyone?

Ah, I see. No, the token limits are hard limits due to hardware/compute requirements.

Only thing I can think of is breaking it up, but then it won’t see the whole code. Not sure if summarizing it would help either as it’s code and not words.

Good luck on your quest!

So you mean even if I paid, and after the beta phase, it would still be limited to 4,000 tokens? Or is the limitation just because of the free beta?

Is there no solution to having it check all the code in one go?

“faucet”, you are correct. The models have hard upper limits on tokens. As far as I know, the token limit for any request covers the completion AND the tokens used in the prompt.
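To make the arithmetic concrete, here is a minimal sketch (in Java, since that is the language in question) of checking whether a request fits the 4,001-token limit. The ~4-characters-per-token estimate is only a rough rule of thumb, not the model’s actual tokenizer:

```java
// Rough token-budget check. The 4-characters-per-token figure is a
// common approximation for English text and code; real tokenization varies.
public class TokenBudget {
    static final int MODEL_LIMIT = 4001; // shared by prompt AND completion

    // Very rough estimate: ~1 token per 4 characters.
    static int estimateTokens(String text) {
        return (int) Math.ceil(text.length() / 4.0);
    }

    static boolean fits(String prompt, int maxCompletionTokens) {
        return estimateTokens(prompt) + maxCompletionTokens <= MODEL_LIMIT;
    }

    public static void main(String[] args) {
        // ~15,896 tokens of prompt, as in the error message quoted above.
        String prompt = "a".repeat(63_584);
        // 15,896 + 1,695 far exceeds 4,001, so the request is rejected.
        System.out.println(fits(prompt, 1_695)); // prints false
    }
}
```
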

It would probably be best to try and refactor one function at a time using Codex.

Depending on the language, you may be better off using a refactoring tool (e.g., C# has ReSharper and CodeRush).
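If you do go the chunk-at-a-time route, something like the following sketch could split a large source file into pieces that each fit under the limit. The blank-line split heuristic, the 4-chars-per-token estimate, and the 3,000-token budget (leaving headroom for the completion) are all assumptions, not anything the API requires:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of splitting a large source file into chunks small
// enough to send one at a time. Splitting on blank lines is a heuristic
// so chunks tend to end at method/class boundaries; it is not a parser.
public class CodeChunker {
    // Very rough estimate: ~1 token per 4 characters (an assumption).
    static int estimateTokens(String text) {
        return (int) Math.ceil(text.length() / 4.0);
    }

    static List<String> chunk(String source, int maxTokensPerChunk) {
        List<String> chunks = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (String block : source.split("\n\n")) {
            // Flush the current chunk before it would exceed the budget.
            if (current.length() > 0
                    && estimateTokens(current.toString() + block) > maxTokensPerChunk) {
                chunks.add(current.toString());
                current.setLength(0);
            }
            current.append(block).append("\n\n");
        }
        if (current.length() > 0) chunks.add(current.toString());
        return chunks;
    }

    public static void main(String[] args) {
        // Stand-in for a ~1,000-line file: 500 tiny blank-line-separated blocks.
        String source = "void f() { /* ... */ }\n\n".repeat(500);
        for (String c : chunk(source, 3_000)) {
            System.out.println("chunk: ~" + estimateTokens(c) + " tokens");
        }
    }
}
```

Each chunk can then be sent to the API separately, though as noted above the model will not see the interdependencies between chunks.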


OK, so it looks like I have no choice other than breaking it down into several functions. Though my coding knowledge is very basic, and I wouldn’t know whether all of the interoperability is taken into account when analysing just several parts individually.

The language would be Java. Could you please also recommend something that’s free to test the full code? And maybe a community that I could turn to (Telegram, IRC, Discord, etc.)?

Many thanks!

There are a few Java wrappers.

I haven’t tried either of these - but the first one seems quite active. It has updates from a few days ago.
