How to force assistant to use file information?

For example, if I have a file containing a JSON object of character attributes:

"shop_keep_1": {
  "personality": "Angry and short tempered",
  "example": "What do you want? I don't have time for you to window shop!"
},
"shop_keep_2": {
  "personality": "Friendly and overly helpful.",
  "example": "How can I help you, sir? Please take your time, no pressure."
}

If I am using a prompt such as:
User will input a name, return a greeting with a similar tone as the “personality” for the key that matches the user input.

But it seems like half the time it completely ignores the attached file. Is there any way I can force it to pull only from the matching key, using only the information from the file?

For example, half the time I input “shop_keep_1” it does alright. But the other half of the time I would get responses like:
“How are you doing friend?”

To ensure that the assistant consistently uses the information from the file when responding based on character attributes defined in JSON, you can take the following steps:

  1. Clear Instruction: When you ask a question or issue a prompt, explicitly state that the response should be based on the information provided in the file. For example, “Using the character attributes from the file, respond to ‘shop_keep_1’ with a greeting that matches their personality.”

  2. Reference the File Directly: Mention the file directly in your prompt. For example, “Refer to the character attributes in the uploaded file and provide a greeting for ‘shop_keep_1’ that matches their personality.”

  3. Follow-Up for Accuracy: If the response does not align with the information from the file, you can follow up by pointing out the discrepancy and asking for a revised response that strictly adheres to the file’s content.

  4. Specificity in Prompts: Be as specific as possible in your prompts. If you notice inconsistencies, you can include a part of the character attribute in your question to guide the assistant. For example, “Given that ‘shop_keep_1’ in the file is described as ‘Angry and short-tempered,’ how would they greet a customer?”

  5. Use of Direct Quotes: You could ask the assistant to use or reference the direct quotes from the file. For instance, “What greeting would ‘shop_keep_1’, who says things like ‘What do you want? I don’t have time for you to window shop!’, use to greet a new customer?”

By clearly directing the assistant to use the file and specifying how it should use the information, you increase the likelihood of getting responses that are consistent with the content of the file. Remember, the assistant will try to balance the use of information from the file with its general knowledge and conversational abilities, so being explicit about your expectations is key. Meow~
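Suggestions 1, 4, and 5 can also be done programmatically: parse the file yourself and build the prompt with the personality and example quoted verbatim, so the model has nothing to ignore. A minimal Python sketch (the helper name `build_prompt` is mine, and the "shop_keep_2" entry is a hypothetical second key; the original post only shows "shop_keep_1"):

```python
import json

# Character data mirroring the file from the question. "shop_keep_2" is a
# hypothetical second key added for illustration.
CHARACTERS_JSON = """
{
  "shop_keep_1": {
    "personality": "Angry and short tempered",
    "example": "What do you want? I don't have time for you to window shop!"
  },
  "shop_keep_2": {
    "personality": "Friendly and overly helpful.",
    "example": "How can I help you, sir? Please take your time, no pressure."
  }
}
"""

def build_prompt(name: str, characters: dict) -> str:
    """Combine suggestions 1, 4 and 5: name the source, quote the
    personality verbatim, and include the example line."""
    if name not in characters:
        raise KeyError(f"No character named {name!r} in the file")
    char = characters[name]
    return (
        f"Using only the character attributes from the file, respond as "
        f"'{name}'. Their personality is \"{char['personality']}\" and "
        f"they say things like \"{char['example']}\". "
        f"Write a greeting for a new customer in that tone."
    )

characters = json.loads(CHARACTERS_JSON)
prompt = build_prompt("shop_keep_1", characters)
print(prompt)
```

Because the lookup happens in your code rather than inside the model, a wrong key fails loudly instead of producing a generic greeting.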

cr. my CatGPT


No matter how specific you are about it in the instructions, it won't work consistently.

Been trying the last 4 days to tweak instructions over and over with input from this forum. To no avail, just frustration.

Use cases like this used to work great, but over the last couple of weeks something happened and it doesn’t anymore.

You can find dozens of posts in the forum complaining about the same. People suggest being clearer in instructions. But it’s not something that can be fixed tweaking instructions, it’s something that can be fixed by OpenAI.

My opinion.

Thanks for the feedback. I tried some of the techniques and it seemed to work better, but it still wasn't consistent. I tried a set of step-by-step instructions (do this, then this, then this), explicitly stating to always use the file, to no avail.
It seems like this is not something that can be solved via instructions. :confused:

Yes, tried exactly the same.

Read a post, get inspired about a potential better way to write instructions, try it out and crash & burn.

4-5 different methods with step by step etc etc. Nothing works consistently.

Works great 50% of the time, the other 50% doesn’t work.

I’m beyond frustrated tbh as I have to deliver a few (smaller thank god) customer projects this week. So looking at alternatives.

I am also facing the same thing. The responses are very inconsistent.

I faced the same file-based knowledge prioritization issue with a JSON schema that I want to take priority over the pre-trained knowledge. I gave it a name and referred to it in the instructions. I tried dozens of sentences to mandate that GPT-4 Turbo use this JSON schema, but the Assistant took it or ignored it at random.
IMO, GPT should offer a way to enforce the file-based knowledge (some magic sentence like « always use the JSON schema named xx from the file »?). Obviously, if you ask ChatGPT-4, it says this should work. But it doesn't!