I am writing a simple web app to add/change/delete rows in a single table. It started OK and built me 2 or 3 servlets and some Java programs. But as I added each function, things got complicated. Answers it had previously suggested then changed to point somewhere else, and the other programs no longer matched up; the code just randomly changed for no apparent reason. For example, ChatGPT initially recommended handling updates with an insert-on-duplicate statement, then later changed that to separate insert, update, and delete calls. Eventually I hit my daily limit and it wanted money. I'm not that interested in an AI that keeps changing existing code it originally generated.
I noticed this as well after the June 20 update. It may be irrelevant or coincidental, but it's now happening in 4o, 4.5, and 4.1. I tested multiple threads with the same prompt.
I get the frustration (especially around consistency and the usage limits), but a lot of this sounds like a misunderstanding of how AI tools like ChatGPT actually work.
First, in this case, usage limits ≠ monetization pressure. You mentioned hitting your daily limit and being asked to pay.
That’s not some evil upsell tactic mid-project, or the tool deciding to withhold help. You’re using a high-demand model that burns real compute power to generate responses on the fly, and limits are part of keeping it free for everyone.
If you’re running up against the ceiling often, that’s your cue to consider an upgrade.
Second: random changes aren't so random; the models are stateless.
You might be seeing inconsistent code suggestions because AI models like ChatGPT's free tier don't remember past conversations very well unless you explicitly provide context. Temperature (a.k.a. sampling randomness) also plays a role: higher temps might give you clever ideas, lower temps give more predictable ones. If you're copy-pasting the same prompt into different chats and getting drift, that's why.
This isn’t inconsistency, it’s how stateless systems work: each query is handled independently unless you thread the conversation.
[CHAT #1] recommends `INSERT ... ON DUPLICATE KEY UPDATE`.
[CHAT #2] (without that context) might suggest separate INSERT/UPDATE calls. (Both patterns are sketched below.)
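Neither suggestion is wrong; they're two standard ways to handle an update, they just shouldn't be mixed mid-project. Here's a minimal sketch of both, assuming a hypothetical MySQL table `items(id INT PRIMARY KEY, name VARCHAR(100))` and plain JDBC (the `ItemDao` class and column names are illustrative, not from the OP's app):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch only: assumes a hypothetical table items(id INT PRIMARY KEY, name VARCHAR(100)).
public class ItemDao {

    // Pattern 1: single "upsert" statement (MySQL-specific ON DUPLICATE KEY UPDATE).
    public void upsert(Connection conn, int id, String name) throws SQLException {
        String sql = "INSERT INTO items (id, name) VALUES (?, ?) "
                   + "ON DUPLICATE KEY UPDATE name = VALUES(name)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
        }
    }

    // Pattern 2: separate statements -- try UPDATE first, INSERT if no row matched.
    public void insertOrUpdate(Connection conn, int id, String name) throws SQLException {
        String update = "UPDATE items SET name = ? WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(update)) {
            ps.setString(1, name);
            ps.setInt(2, id);
            if (ps.executeUpdate() == 0) { // no existing row, so insert instead
                String insert = "INSERT INTO items (id, name) VALUES (?, ?)";
                try (PreparedStatement ins = conn.prepareStatement(insert)) {
                    ins.setInt(1, id);
                    ins.setString(2, name);
                    ins.executeUpdate();
                }
            }
        }
    }
}
```

Whichever pattern your earlier chats settled on, pasting that method back into a new prompt is the most reliable way to keep later answers consistent with it.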
If you’re expecting ChatGPT to always generate identical code, that’s just not how it works, especially not on the free tier. To fix the drift in responses, you can try anchoring your context:
- If you liked a previous approach, just say so. Literally tell it:
“Stick with that pattern,” or “Continue my CRUD MySQL app project. Use the `INSERT ... ON DUPLICATE KEY UPDATE` pattern for updates.”
You’re the architect! It’ll follow that style consistently if you guide it.
- Set explicit rules: “Do not change the update pattern. Maintain consistency with […]”
- Provide code snippets by pasting your existing schema or critical files.
- Use persistent memory (available in Plus) with custom GPTs.
- Code versioning on your end so you lock the design in.
Pro Tip: For mission-critical code, like a structured CRUD app that requires strict consistency, the AI won’t enforce that for you automatically. It will follow your lead if you treat it more like a junior dev and less like a magic box.