GPT prompt gives different results every time

I have created a GPT that has a knowledge base in the form of a PDF. The prompt should compare the document I upload against the knowledge base and give me suggestions for improvements. But the suggestions are always different and varied. Any idea what I can do to get a consistent result? At the moment I can't rely on the information I'm getting.

What exactly do you want? Should the output match letter for letter, or just convey the correct meaning, including how each message is presented? The variation depends on several underlying factors, from OpenAI's model behavior to the prompts you write in the Instructions and the documents you include.

I don't have an exact method for solving this, because in many cases it comes down to the user's prompts. But if it were me, I would keep the document content well organized: group related text together, split the content into separate files, and instruct the GPT to present it in the order specified in those files, so you control what gets presented. Alternatively, pre-index the content before using it: give the GPT the large text up front and have it associate each chunk with keywords that will later be used to trigger actions. Then, when you use it, a few defined keywords will pull up the relevant content.
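The keyword-indexing idea above can be sketched as a plain lookup table (the chunk texts and keyword tags here are hypothetical placeholders; the point is that a fixed keyword always maps to the same reference text):

```python
# Minimal keyword -> chunk lookup for a pre-indexed knowledge base.
# Chunk contents and keyword names are made-up examples.
KNOWLEDGE_INDEX = {
    "pricing": "Chunk 1: pricing rules from the knowledge base PDF...",
    "branding": "Chunk 2: branding guidelines...",
    "tone": "Chunk 3: tone-of-voice requirements...",
}

def retrieve(keywords):
    """Return only the chunks whose keywords were requested, in order."""
    return [KNOWLEDGE_INDEX[k] for k in keywords if k in KNOWLEDGE_INDEX]

# A prompt that names two keywords pulls exactly those two chunks,
# so the model sees the same reference text on every run.
print(retrieve(["pricing", "tone"]))
```

Because the lookup is deterministic, the variability is confined to how the model phrases its answer, not to which reference material it sees.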

It's more of a knowledge base included in the prompt, which should be matched against topics discussed inside another document that I upload to the GPT. Sometimes it picks out certain aspects correctly, but then either misses areas or picks up areas it shouldn't.

I had it working 3 days ago, and I have gone back to it and it’s failing again.

One thing that may have happened is your Instructions in the GPT Configure tab may have changed without your knowing it. If you are in the Create tab of an existing GPT, even a small message to GPT Builder can significantly change the performance of the GPT.

Given that you want the GPT to reference the uploaded file(s), you will want to add explicit, deterministic instructions that say: "When a user uploads a document in the message, compare the uploaded document to the reference knowledge in {specific description here}." (Replace that "variable" with detailed, explicit instructions for what you want the GPT to do in that case.)
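As a rough sketch, such Instructions might look like the following (the file name, step order, and criteria are placeholders to adapt to your actual knowledge base):

```
When a user uploads a document, follow these steps in order:
1. Read the uploaded document in full before responding.
2. Compare it section by section against knowledge.pdf.
3. For each section, list only the points that differ from the knowledge base.
4. Do not suggest improvements on topics the knowledge base does not cover.
5. Present the results in the same order the sections appear in knowledge.pdf.
```

The more the Instructions pin down the order of operations and the scope of the comparison, the less room the model has to wander between runs.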

Make sure to save your Instructions in a versioning system so you can easily revert to an earlier version if needed and regain performance.


It is the nature of the math that it does not produce the same results twice. Sampling is randomized, and there is also variation in which words get priority, so the generated text will be different every time. If it's generating an answer from your data, you're going to get a different answer every time, by design.