ChatGPT freezing during code generation

I am subscribed to ChatGPT Plus.

In conversations where ChatGPT is creating or updating code, it regularly freezes before the code is complete. When I click “regenerate”, most of the time it freezes again, usually at or near the same place. Sometimes I’ll tell it that it has frozen and ask it to repeat; it apologizes, starts generating again, and then freezes again, usually around the same spot.

This doesn’t seem like a random network error, since a) it happens very frequently and b) it usually happens around the same lines of code (the scripts are normally fewer than 100 lines).

I heard there is something like a GPT Pro subscription, but I can’t find any reference to it on the OpenAI site. Anyone else seen this problem? It’s extremely frustrating when, after hours of back and forth, you finally get close to a solution and then have to try to piece together the final code because ChatGPT can’t complete writing it!

Sounds like it’s reaching its token limit.

Keep in mind that it’s a terrible idea to let cGPT write out multiple functions’ worth of code at once. Instead of saying “Write me a script to create a tic-tac-toe game in JavaScript using Canvas”, you should be using it to assist you in writing the code.

“Write a function which returns an empty tic-tac-toe 3x3 board as an array in JavaScript”.
“Write a function that draws a 3x3 tic-tac-toe board in Canvas 2D using this array as an example”.

If you cannot break down your theorized project into simple, single-function steps, then cGPT will not be able to create your project for you. It is not a lead programmer; it’s an assistant.

You will thank yourself later.


If you cannot break down your theorized project into simple, single-function steps, then cGPT will not be able to create your project for you. It is not a lead programmer; it’s an assistant.

Dude, I assure you, you’re preaching to the choir. I tell anyone who will listen that AI is simply a tool, and is not going to take away anybody’s job, especially coders.

As someone who has been developing software since the 1980s, I do know a little about coding. However, I’ve only been into this AI thing for about a month. I’ve been using ChatGPT to assist me in generating API scripts (OpenAI embeddings and completions) to process a knowledgebase I am developing.

I essentially write a script myself to perform the basic function, say an embedding, so that I understand how to use the API, make the call successfully, and retrieve the response. Then, once I have an understanding of the process, I ask ChatGPT:

“Modify this script to read text files from a subdirectory and generate embeddings for each one, then store each returned vector array in an object JSON file that can be upserted to Pinecone.”
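(For context, the hand-written “basic function” script I start from is little more than a bare embeddings call. A simplified sketch, with the key handling and model name here only for illustration:)

```php
<?php
// Minimal sketch of the hand-written starting point (key handling and
// model name are simplified for illustration). Assumes the cURL
// extension and an OPENAI_API_KEY environment variable.
function get_embedding(string $text): array
{
    $ch = curl_init('https://api.openai.com/v1/embeddings');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => [
            'Content-Type: application/json',
            'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
        ],
        CURLOPT_POSTFIELDS     => json_encode([
            'model' => 'text-embedding-ada-002',
            'input' => $text,
        ]),
    ]);
    $response = json_decode(curl_exec($ch), true);
    curl_close($ch);

    // The embedding vector is returned in data[0].embedding.
    return $response['data'][0]['embedding'];
}
```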

ChatGPT generates the script. I look at it for obvious errors that I either fix or send back for modification. When I think it’s ready, I test it out. Again, if there are errors or potential problems, I either fix them or send the script back for modification. An example:

“You are reading the text files into an array. This is a potential memory problem. Please process the text files one at a time, and display to the console the name of each file being processed as well as any errors that may occur.”
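(After a round or two of that, the script settles into roughly the shape below. The directory name and vector IDs are placeholders, and get_embedding() is the bare call sketched above:)

```php
<?php
// Rough shape the script settles into: process one file at a time,
// log progress and errors to the console, and collect the vectors in
// Pinecone's upsert format. Directory names and IDs are placeholders,
// and get_embedding() is the bare call sketched above.
$vectors = [];

foreach (glob(__DIR__ . '/texts/*.txt') as $file) {
    echo 'Processing ' . basename($file) . PHP_EOL;
    try {
        $text      = file_get_contents($file);
        $vectors[] = [
            'id'     => basename($file, '.txt'),
            'values' => get_embedding($text),   // one API call per file
        ];
    } catch (Throwable $e) {
        echo 'Error on ' . $file . ': ' . $e->getMessage() . PHP_EOL;
    }
}

// Pinecone's upsert endpoint expects a body shaped like {"vectors": [...]}.
file_put_contents(__DIR__ . '/embeddings.json', json_encode(['vectors' => $vectors]));
```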

I keep doing this until I’ve got a script that does exactly what I want it to. I have developed 4 production scripts in the past week using this process. And, I am loving being able to spend less time coding and more time thinking about what I want to do and how to do it.

So, I am using ChatGPT as an assistant.

My problem is that, quite frequently, after an hour or so of working on a script, ChatGPT will freeze while regenerating it. For example, I ask it to make small modifications to a script that has already been developed (and modified several times), and while doing this it just stops, usually somewhere around 90% completion. I’m trying to understand why this keeps happening.

If this is happening because it is reaching its token limit, is there any way to know for sure that this is the problem?

Hey,

I’m so happy to hear it. I apologize if I came off a bit blunt & rude. I completely agree with you! It really is nice to watch it all unfold and come out clean. I spend more time testing than I do writing!

Yes, once a conversation reaches a certain length, it will start to cut itself off. Your best bet is to start a new conversation. You may also like to try the Playground for more control over the output.

I’d also like to mention that OpenAI provides an official API wrapper, the openai module in Python, if that’s what you are using. I mention it because ChatGPT will sometimes write the script using Python’s requests library instead, which isn’t ideal.

Excellent idea! Thank you.

I’m coding in PHP. I know, I know. But I’m old, lazy, and stuck in my ways. I’ve got 12 years in PHP, mostly doing Drupal development, so I’m gambling on leveraging that knowledge as far as I can. With OpenAI it’s a piece of cake. Not sure what’s going to happen with Pinecone, and then the development of a UI. But I’m taking it one step at a time.

Thanks for the responses.


I assume this is because you are asking ChatGPT to write entire scripts instead of the smaller modules, methods, and functions that are the building blocks of larger coding applications.

Yes, it is very important to conceptualize your application and break it up into small, manageable components. This is also how I use ChatGPT: I only request short modules that do specific tasks with well-defined input and output parameters.

I think you missed @RonaldGRuckus’s point. The second half of that statement was not his main point (in my view); the main point is the first half: that it’s a terrible idea to let ChatGPT write out multiple functions’ worth of code in one go.

This is not only because of “max_tokens”; it is also because the longer a completion runs, the more errors accumulate. This is a basic, well-known issue with generative AI: long completions accumulate errors, so the longer the completion, the more mistakes you get.
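For what it’s worth, if you move to the API or the Playground, the completion cap at least becomes something you set explicitly on every request. A rough PHP sketch, with the model and values purely illustrative:

```php
<?php
// Rough sketch of a chat completions request where the completion
// length is capped explicitly. Model and values are illustrative only.
$ch = curl_init('https://api.openai.com/v1/chat/completions');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
    ],
    CURLOPT_POSTFIELDS     => json_encode([
        'model'      => 'gpt-3.5-turbo',
        'messages'   => [
            ['role' => 'user', 'content' => 'Write a short PHP function that slugifies a string.'],
        ],
        'max_tokens' => 512,   // hard cap on the length of the completion
    ]),
]);
$response = json_decode(curl_exec($ch), true);
curl_close($ch);

echo $response['choices'][0]['message']['content'] . PHP_EOL;
```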

Yes, you should just post the exact prompt you are sending ChatGPT and the follow-on prompt you use to modify the prior completion.

:slight_smile: Since we cannot see what you are doing, :artificial_satellite: do not have access to a spy satellite to watch you work with ChatGPT, and have not :axe: hacked into your system to log your :keyboard: keystrokes, we really have no idea what you are actually attempting, @SomebodySysop, because you are talking in generalities about a problem that requires specific details to answer correctly. :crazy_face:

FYI, I use ChatGPT to draft code modules and methods for me almost daily, including this morning, and never have a problem with ChatGPT freezing.

Random Example 1 (just making this up):

Random Example 2

Random Example 3

I don’t want to bore you, @SomebodySysop, with hundreds of examples, but as mentioned, I never have a problem with “freezing during code generation using ChatGPT” because I request small methods that fit into a larger coding puzzle.

If you post your exact prompts, we can easily see why it is “freezing” when we test them for you.

Actually, I did get @RonaldGRuckus’s point. And it could very likely be that I am exceeding some token threshold when these freezes occur.

I do understand the concept of modular programming, having worked in a variety of languages over the decades. Drupal has always been modular, starting out with “hook” functions and evolving to include any number of plugins, events, processors, etc. However, in my mind, these scripts I’ve been developing with ChatGPT are just single-function programs: read some files, do some stuff with them, then write the results.

The process I’ve developed is pretty simple and works like a charm for my needs at the moment. I would have been happy just fine-tuning a model, but somebody kept insisting that embedding was the way to go. So, I’m embedding.

As I indicated earlier, these freezes seem to occur when I’m almost done developing the script. My workaround now is that, if ChatGPT freezes while generating a completion, instead of asking it to regenerate I ask it to continue from the point where it stopped. Not my preference, but it’s working.

I think the idea of going to the Playground will resolve the current issue. But, actually, I’m pretty much done with my initial OpenAI processing calls. I am now ready to index and query. Moving on to Pinecone and then the UI, I suspect I will embrace a more modular approach, as I will need to translate Python routines to PHP (that’s going to be fun).

Anyway, thanks for the comments.

I didn’t want to bore everyone with the full development conversation. It’s pretty long and pretty boring.

But my last AI conversation involving the same code that kept freezing earlier is pretty short, and it is an example of how I am currently interacting with ChatGPT to develop my code.

Like I said, I use ChatGPT to draft code methods for commercial software projects daily, and because I keep the methods and scope small, ChatGPT has never frozen once.

HTH

:slight_smile:

Now, why oh why would you think that?

This is obviously part of the problem, in my view, and is what both of us, myself and @RonaldGRuckus, have been telling you not to do.

I would never send such a prompt to ChatGPT. These kinds of prompts cause ChatGPT to “freeze” and “make many mistakes”.

Best of luck, however you prompt ChatGPT; do it as you wish and in whatever way makes you happy.

:slight_smile:

The second request I made was for a function that makes a Pinecone query with the vector array I received from OpenAI (the $vector value).

Better?

Yes, but you should request a PHP function from “Chatty” and not a “script”, and work on coding with functions as components of your PHP software architecture.
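For illustration only, the kind of small, self-contained function being suggested might look roughly like this. The index host URL, topK default, and key handling are placeholders you would adapt to your own Pinecone index:

```php
<?php
// Illustrative sketch only: query a Pinecone index with a vector
// returned by the OpenAI embeddings endpoint. The index host URL,
// topK default, and key handling are placeholders to adapt.
function pinecone_query(array $vector, int $topK = 5): array
{
    $ch = curl_init('https://YOUR-INDEX-HOST.pinecone.io/query');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => [
            'Content-Type: application/json',
            'Api-Key: ' . getenv('PINECONE_API_KEY'),
        ],
        CURLOPT_POSTFIELDS     => json_encode([
            'vector'          => $vector,
            'topK'            => $topK,
            'includeMetadata' => true,
        ]),
    ]);
    $response = json_decode(curl_exec($ch), true);
    curl_close($ch);

    // Each match carries an id, a similarity score, and any stored metadata.
    return $response['matches'] ?? [];
}
```

Calling it with the $vector value you already have is then a one-liner.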

HTH

:slight_smile:

I am having a similar issue. I came here to find out if joining the paid tier would alleviate the issue. Seems like it will not.

It has not made a difference in my case.

Have you tried asking it to continue the answer?

Yes, asking the AI to continue the answer usually works. But what I have found works even better is breaking the script down into smaller functions and having the AI work on those individual functions. No freezing, and overall better results.

I had 79 lines of code where ChatGPT was doing the same thing to me, so I broke it up into two parts. I said, “show me lines 1-40”, and after that, “show me lines 41-79”.


Just a note. I finally finished writing my first fully functional chat application: it embeds all the content on a Drupal website into a vector store and allows me to execute semantic queries conversationally using chat completion API calls.

This is just a rough draft, but everything works. It’s written completely in PHP. The code is as modular as I could get it, and GPT-3 (then 3.5-turbo, then 4) has been amazing at assisting me in building many of the functions.
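In rough strokes, the query side chains three calls together. This is a simplified sketch rather than my exact code, and it reuses the hypothetical get_embedding() and pinecone_query() helpers sketched earlier in this thread:

```php
<?php
// Simplified outline of the query flow, reusing the hypothetical
// get_embedding() and pinecone_query() helpers sketched earlier in the
// thread. Prompt wording and metadata field names are placeholders.
function answer_question(string $question): string
{
    // 1. Embed the user's question.
    $vector = get_embedding($question);

    // 2. Pull the most relevant site content from the vector store.
    $context = '';
    foreach (pinecone_query($vector, 3) as $match) {
        $context .= ($match['metadata']['text'] ?? '') . "\n\n";
    }

    // 3. Ask the chat completions endpoint to answer from that context.
    $ch = curl_init('https://api.openai.com/v1/chat/completions');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => [
            'Content-Type: application/json',
            'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
        ],
        CURLOPT_POSTFIELDS     => json_encode([
            'model'    => 'gpt-3.5-turbo',
            'messages' => [
                ['role' => 'system', 'content' => "Answer the question using only this context:\n" . $context],
                ['role' => 'user',   'content' => $question],
            ],
        ]),
    ]);
    $response = json_decode(curl_exec($ch), true);
    curl_close($ch);

    return $response['choices'][0]['message']['content'];
}
```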

I started out here without a clue as to what an embedding was, asking ChatGPT to write code like a drunken sailor, and have now created my own chatbot infrastructure that returns responses on my data as good as any I’ve seen online so far.

A lot of people have helped me get here, but I owe you, @ruby_coder, a particular debt of gratitude. Looking back, I can see how all those little suggestions you made slowly but surely helped point me in the right direction, despite myself.

As I sit here, amazed at my accomplishments to date, I just wanted to say, Thank You. :+1:


I tried many methods, but none of them worked. I think we should build an app using the API key to avoid this problem. I found a solution to that problem; just follow the link to the YouTube video. I think it really works and is amazing!
youtu.be/6V5X0VTKEp4
