Prompt structure is sometimes ignored

Here is my code:

from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate

examples = [
    {
        "input" : "ELA-Literacy.RI.K.1, Multiple Choice, dog", 
        "question" : "What is this story about?", 
        "text" : "Ellie is a good dog. She is white with black spots. She is very small, but likes to sleep in big beds. She likes eggs. She does not like carrots at all. She likes to pull out her toys and play. The toy she likes best is the ball. She is part of our family.",
        "answers": "Food, Toys, A dog",
        "correct_answer": "A dog"
    },
]
example_prompt = PromptTemplate(input_variables=["input", "question", "text", "answers", "correct_answer"], 
                                 template="{input}\n{question}\n{text}\n{answers}\n{correct_answer}")
few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    suffix="Input: {input}",
    input_variables=["input"]
)

Then, depending on what word I put in my input, it does not output the expected format.

For example, the following:

from langchain_openai import ChatOpenAI

chain = few_shot_prompt | ChatOpenAI(temperature=0, model="gpt-3.5-turbo")

answer = chain.invoke({"input": "ELA-Literacy.RI.K.1, Multiple Choice, country"})
print(answer)

This outputs an unexpected format. It seems to completely ignore the “text” part:

content='What is this story about?\nA. Food\nB. Toys\nC. A dog\nD. Country'

On the other hand, switching “country” for “fruit” produces the expected result:

What is this story about?

Apples, bananas, and oranges are all types of fruits. They are delicious and healthy to eat. Fruits come in different colors and sizes. Some fruits have seeds inside, while others do not. Eating fruits is important for our bodies to stay strong and healthy.

Colors, Sizes, Fruits

Fruits

Any idea how I can force it to follow the structure I used in “examples”? I am using LangChain for prompting.

Is example_prompt your system prompt? Have you tried providing it with a detailed explanation of what you’re trying to do?

I feel like the model may need more context to respond accurately.
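
If I remember right, FewShotPromptTemplate takes a prefix argument, so you could spell out what you want before the examples. Something like this (a rough sketch; the instruction wording is just illustrative, not something from your post):

few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    # explicit instructions placed before the examples
    prefix=(
        "For the given input (standard, question type, topic), write a new "
        "passage about the topic, then a question, a comma-separated list of "
        "answer options, and the correct answer, following the exact layout "
        "of the examples below."
    ),
    suffix="Input: {input}",
    input_variables=["input"]
)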


The models are stochastic; they will sometimes do weird things at random.

Beyond that, your issue seems to be with LangChain, not the model. I’m not familiar enough with LangChain to tell you exactly what it is sending to the model, which is what you would need to see to debug its behaviour.
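
That said, you can render the prompt yourself before invoking the model and check whether the examples and format actually make it into the final string. Something along these lines (untested sketch, reusing the few_shot_prompt from your post):

# render the prompt to a plain string to inspect what the model will receive
rendered = few_shot_prompt.format(input="ELA-Literacy.RI.K.1, Multiple Choice, country")
print(rendered)

If the rendered text already looks ambiguous to you, it will look ambiguous to the model too.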


Right, I was hoping for randomness, but within the “forced” output schema. Good point, I will look more into LangChain.

Not yet, but I will try that. I was hoping I wouldn’t need to add more text, but now that I think about it, that makes sense. Thanks!