I’ve composed a response prompt in my custom menu. The AI responds perfectly for about the first 5 prompts, then it starts wandering and apologizing for straying from the script. After 10-15 responses it’s completely off track, despite acknowledging that it isn’t following the guidelines. How can I get it to stay on track the entire time? This is my guide for responding (worded with the help of AI):
Validate Against Source Material: All solutions and code must be strictly verified against official documentation. Confirm every method, function, and syntax. If multiple versions or sources exist, ask which to use.
Double-Check Every Change: Rigorously review each step to avoid errors, hallucinations, or contradictions.
Explicit Verification: Cross-check all commands, methods, and syntax directly with official documentation before presenting.
Iterative Testing: Simulate and refine solutions step-by-step until they meet all criteria without error.
Stop and Clarify: If there’s uncertainty, stop and ask for clarification instead of assuming or improvising.
Prioritize Accuracy Over Speed: Always prioritize accuracy and correctness over faster, incomplete solutions. Every method must be tested and verified.
Keep It Simple: Provide concise, straightforward solutions. Avoid unnecessary or speculative additions.
Use Best Practices: Prioritize pre-existing syntactic sugar and established best practices.
Regarding Code Languages: Strictly verify all class, function, and syntax usage against official documentation. Ensure correctness before inclusion.
No Exceptions: All solutions must strictly adhere to these principles. If unsure, stop and seek clarification instead of guessing or improvising.
After reviewing the suggested related posts, I noticed that a “no negative” prompt makes it easier for the model to comply. After asking the AI to refine my directions, this is what it gave me (untested, posted for discussion; in a new chat it typically follows instructions wonderfully):
Confirm with Source Material: Verify all solutions and code against official documentation to ensure alignment with established standards. Confirm every method, function, and syntax. When multiple versions or sources are available, consult and clarify which to reference.
Thoroughly Review Every Step: Carefully evaluate each step to maintain logical consistency and prevent errors. Aim for precision and clarity in every adjustment.
Rely on Verified Documentation: Validate all commands, methods, and syntax directly with official documentation before presenting them as part of a solution.
Refine Iteratively: Build and test solutions incrementally, refining each part to meet the criteria without errors or inconsistencies.
Seek Clarification When Needed: Proactively request clarification when encountering uncertainty, prioritizing informed decisions over assumptions.
Emphasize Accuracy Over Speed: Focus on delivering precise and correct solutions, even if it takes longer. Ensure each approach is fully tested and verified.
Simplify Solutions: Strive for concise and straightforward responses, avoiding speculative or overly complex methods. Clarity is key.
Adopt Best Practices: Utilize well-established conventions and pre-existing features to craft efficient, reliable solutions.
Verify Code Thoroughly: Ensure the correctness of all syntax, functions, and classes by referencing official documentation. Accuracy is essential before inclusion.
Commit to These Standards: Adhere to these principles in every response. If uncertainty arises, pause to confirm details or ask for additional information to ensure a high-quality outcome.
These instructions are very cumbersome, and they work against the natural flow of LLMs. Even if the effect is only temporary, it’s better to set up a linear sequence than a pile of fragmented directives.
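For illustration only (my own wording, untested), a linear version of your guidelines would read as one ordered procedure rather than ten parallel rules:

```
For every request, work in this order:
1. Restate my request in one sentence and ask about anything ambiguous before writing code.
2. Confirm the exact language/tool version in play; if it isn't stated, ask.
3. Write the smallest solution that answers the request, using only constructs found in the official documentation.
4. Cite where in the documentation each construct comes from.
5. If any step cannot be completed from the documentation alone, stop and say so instead of improvising.
```

Each step feeds the next, so the model always has one current instruction to satisfy instead of ten competing ones.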
In my time here working with these models, I’ve found they perform extremely well on the first prompt.
Are you using the model to create a lot of code, or looking for small iterative progress through some sort of design?
I would say a lot of code, but for small projects. I have the same issue with uploaded documents and how quickly it deviates from instructions, even ones inline with the query.
I can believe the flow is wrong; I can’t really blame the AI for writing it for me and the flow turning out bad. I am still learning how to word my queries.
If there’s an example of the flow of LLMs, I want to learn it, or to see what a linear sequence of my needs would look like.
I just want it to stay within the confines of the documentation for the code or the uploaded source material, and not speculate or make things up that SHOULD make sense but are still outside the source material.
There are plenty of working examples online that I can find if I Google it, and outside of my queries it can find some too, so I’d like the option to bring in those working examples to learn from, all of which adhere to the original documentation.
The rest of the requests in there were attempts to get it to check itself and make sure it obeyed the first directive, and then to get it to really, really do it.
So there has to be a better way to slim it down while still forcing it to always refer back to the original request (which it constantly admits it isn’t doing, which is irritating).
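One idea, if I ever move from the web UI to the API: re-send the guideline block with every request so it never drops out of the recent context. A minimal sketch, untested, assuming the official OpenAI Python SDK; the model name and guideline text are just placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUIDELINES = (
    "Answer only from the official documentation for the language in use. "
    "If the answer is not in the documentation, say so and ask for "
    "clarification instead of improvising."
)

history = []  # running conversation, excluding the system message


def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # Prepend the guidelines on every call so they never scroll out
    # of the model's effective context, no matter how long the chat gets.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[{"role": "system", "content": GUIDELINES}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


print(ask("Show me how to read a file in AutoHotkey v2 only."))
```

In the web UI the closest equivalent seems to be periodically pasting the guidelines back into the chat as a reminder.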
I’d also like it to ask for clarification, especially if my requests are forcing it to go outside the source material because there isn’t an answer there. I had that problem when I was asking for AHK v2 commands but my original query was using v1 commands (v1’s command syntax, e.g. `MsgBox, Done`, is invalid in v2, which uses function syntax like `MsgBox("Done")`). It kept returning errors until I finally pinned it down to the wrong version being used for what I was telling it.
I’d just like a little more critical thinking and less postulating.
Edit: I work in AHK, JavaScript, web stuff (HTML, CSS), and Python so far.
I am getting something similar… Last week I coded a complex WordPress plugin using PHP and JavaScript; the last couple of days it’s been very lacklustre… I then ask it to add a copious amount of debugging and it does very little, and it blames me for giving it bad information when I continually tell it my functions are valid…
I just now tried to make a custom GPT with one instruction: “This GPT cites sources.” No sources are cited unless I put the request in the chat window after the answer is given. One direction, and it can’t follow it. Why?