That seems like a great tool. Honestly, since I was upgraded to GPT-4 (8K context window) and got into the habit of feeding smaller functions to the model, I haven't had a single instance of freezing. Although now I occasionally hit a weird problem where, if the code is a bit long, the model "hiccups" while listing it and starts repeating it from the top instead of finishing. Very annoying, but it rarely happens.
I think someone at OpenAI recognized the problem because now when it hiccups, it resumes from where it broke off.
But I'm going to look into the service you posted. It looks like it could be a solution for when you need to work with fairly large chunks of code at a time.