It's very frustrating. Even when working with small functions or code, I keep having to tell ChatGPT over and over, “Send the full code without placeholders,” and for some reason it doesn't comply. It understands what placeholders are in the context of coding, but it still sends them regardless, always adding `// ...` comments when it could easily just send the full implementation. It wastes so much time. Even with the new “custom instructions,” where I have “always send code without placeholders,” this doesn't help either. If the codebase is large, it's understandable for it to send placeholders in place of where certain code should be, but when dealing with small functions or code, it should simply resend the full function without placeholders. Hopefully this gets fixed soon.
I experience the same issue; it's extremely frustrating. I have also tested sending clear prompts telling it NOT to use placeholders/summarization/truncation in its answers, as well as the custom instructions, but all to no avail.
My theory is that they fine-tuned the model to give the shortest possible “complete” answer, probably to help people save tokens and avoid having to click “continue generating,” so the model is now heavily biased toward saving tokens wherever it can.
If your wording doesn't work, try other techniques.
AI will produce only fully-functional and executable code: functions, classes, and replacement code snippets, without omissions and without eliding with ellipsis for the user to fill in. Reproduce complete replacement code within the block to be modified by AI, even reproducing code which was not altered by AI, in its original form.
You must also make sure this is actually used by placing it in your newest prompt instruction, as the instruction will start to disappear from past chat knowledge after a few turns.
There will be cases where you simply must paste in several different code modifications that take effect in different places, all offered by the AI at once, when you are having work done on a large program (this works better via the API with a lossless-memory chatbot than in ChatGPT).
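To make the earlier point concrete: via the API you control the message list yourself, so you can re-inject the no-placeholder instruction into the newest turn every time, instead of hoping ChatGPT remembers it from several turns back. Here's a minimal sketch of that idea; the helper name `build_messages` and the exact rule wording are my own, not an official API feature.

```python
# Hypothetical helper: keep the no-placeholder rule in the newest user turn,
# since instructions buried earlier in the conversation tend to fade.
NO_PLACEHOLDER_RULE = (
    "Produce only fully-functional, executable code, without omissions "
    "and without ellipsis placeholders for the user to fill in."
)

def build_messages(history, new_request):
    """Assemble a chat-API `messages` list, re-injecting the rule each turn."""
    messages = [{"role": "system", "content": NO_PLACEHOLDER_RULE}]
    messages.extend(history)  # full prior turns: the "lossless memory" part
    # Prepend the rule to the newest request so it is always the freshest text.
    messages.append(
        {"role": "user", "content": NO_PLACEHOLDER_RULE + "\n\n" + new_request}
    )
    return messages
```

You would pass the returned list as the `messages` payload of a chat-completion request; the point is only the assembly pattern, not any particular SDK call.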
Yeah, my thoughts exactly. Now combine that with the limited message cap for GPT-4 and it becomes even more frustrating: having to waste a message simply telling it to resend the code in full without placeholder comments, with something like a 70% chance that it still uses them lol. The end result is wasting 4-5 messages to get what you want, when it should just be 1. Then add the fact that both GPT-3.5 and GPT-4 have been lobotomized and aren't as effective as before. All of this together just drives me insane.
Get yourself a nice text editor like notepad++.
Form the complete question, the background of your environment, the specialized operational parameter instructions, and the existing code or latest revision into a clearly-structured input that can be understood all at once, as if there were no conversation history. Refine it.
And then paste all into ChatGPT.
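As a rough illustration of assembling that kind of self-contained input in your text editor before pasting: the section names and layout below are just one possible structure, not a required format.

```python
def assemble_prompt(background, instructions, question, code):
    """Combine everything into one self-contained input (hypothetical layout)."""
    sections = [
        ("Environment", background),
        ("Operational instructions", instructions),
        ("Question", question),
        ("Current code (latest revision)", code),
    ]
    # Each section gets a labeled header so the model can read the whole
    # context at once, with no reliance on earlier chat turns.
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)
```

Draft and refine the pieces separately in the editor, then paste the assembled result into ChatGPT as a single message.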
You can also try 3.5, which goes in the opposite direction. Working on a single function definition? You get back main and init sections, example inputs, while statements for your breaks, output handlers, and all-new code wrappings in its attempt to make your snippet executable.