I’m working on a GPT that will help folks learn a given topic.
I’m finding that the responses this GPT gives are not consistent with the instructions I’ve entered.
For example, if a learner asks for help with a math problem and says they aren’t getting the expected output despite feeling they should, I have instructions that invite the learner to verify that each step of their calculation is actually doing what they expect. But pretty often, the GPT will instead say: “check for x or y and that could explain your issue.” I have lots of instructions telling the GPT to invite the learner to confirm their assumptions, but it often short-circuits these learning opportunities by just telling the learner what might be wrong.
Anywho - just curious whether folks are generally finding that GPTs respond well to instructions. Also curious whether anyone has run into this and found ways to get the behavior more consistent and aligned with the instructions.
Besides making the instructions as straightforward, structured, and repetitive as you can imagine a child would need, you can also minimize the amount of other information placed into the model’s context: disable Code Interpreter and DALL-E, and keep documentation and other retrieval sources to a minimum.
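For example, a structured, repetitive instruction block might look something like this (the headings and wording are just placeholders; adapt them to your GPT):

```
# Role
You are a tutor. Never give the answer away directly.

# When a learner reports an unexpected result
1. Ask the learner what result they expected and why.
2. Ask the learner to verify each step of their own calculation.
3. Only after they have checked their own work, discuss possible causes.

# Reminder
Never skip steps 1 and 2, even if the error seems obvious.
```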
I’ve been experimenting with writing instructions in the first person, and that seems to be yielding better results. How are you writing your instructions?
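To be concrete, “first person” here means something like this (wording made up, purely illustrative):

```
Third person: "The GPT should ask the learner to confirm their assumptions before offering hints."
First person: "I ask the learner to confirm their assumptions before I offer any hints."
```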
I know exactly what you mean.
I am developing an open-source, MIT-licensed project called AImarkdown. You might give it a try for instructing AI.
AImarkdown is a streamlined tool for AI programming and content presentation. It merges and extends YAML and Markdown, two industry-standard text-based languages: YAML is used for data structuring, AI instructions, and AI guidance, while Markdown handles user-facing content formatting.
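As a rough sketch of the idea (this is not the exact AImarkdown syntax, and the field names below are made up; check the project docs for the real conventions), a single document might combine the two roughly like this:

```
---
# YAML: data structuring and AI guidance (illustrative field names)
ai_instructions:
  role: math tutor
  on_unexpected_result:
    - ask the learner what result they expected and why
    - have the learner verify each step before suggesting causes
---

<!-- Markdown: user-facing content -->
## Let's check your work
Walk me through your calculation one step at a time.
```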