GPT-3.5-Turbo returning few-shot prompts back as output

I’m trying to get GPT-3.5-turbo to return analysis from information I’ve provided. My prompt looks like:

```
Giving the model a persona, i.e. you are xyz, you should do abc.
You should:

  1. Ask1
  2. Ask2
  3. Ask3

Input data

Example

Example Input:
Input data
Example Output:
```

However, whenever I do this, the model returns an answer that is basically the example output rehashed. Is my prompt style causing this, or is it the length of the prompt? Are there any techniques to avoid this?


Welcome to the forum.

The issue you’re facing with GPT-3.5-Turbo may be related to a combination of factors such as the clarity of the prompt, the specificity of the questions, and the ordering or structuring of the input. Let’s break it down:

Clarity of the Prompt

If the persona is too vaguely defined, the model might not understand the context well. Be explicit about what you want the model to do. For example, if you say, “You are an experienced financial advisor,” you might want to specify what you want the “financial advisor” to analyze or discuss.
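With the Chat Completions API you can also pin the persona down in a dedicated system message instead of burying it in the user prompt. Here is a minimal sketch using the `openai` Python library; the persona wording and sample input are purely illustrative, and the API call is shown commented out since it needs a key:

```python
# Sketch: an explicit persona in a system message, with the task spelled out.
# The wording below is an illustrative assumption, not a recommended prompt.
persona = (
    "You are an experienced financial advisor. "
    "Analyze the investment data the user provides and answer their "
    "questions with concrete recommendations; do not restate the data."
)

messages = [
    {"role": "system", "content": persona},
    {
        "role": "user",
        "content": "Investment A: 5% annual return, low risk. "
                   "Which option suits a risk-averse investor?",
    },
]

# The actual call (requires an API key; shown for context only):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-3.5-turbo", messages=messages
# )
# print(response.choices[0].message.content)

print(messages[0]["role"])
```

Keeping the persona in the system message and the data in the user message makes it easier for the model to separate "who it is" from "what to analyze".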

Specificity of Questions

Questions like “Ask1,” “Ask2,” etc., are placeholders, I assume. Make sure your actual questions are specific enough to guide the model to the type of answer you’re looking for.

Structuring the Input

You can be more explicit by structuring your prompt. Instead of saying “You should,” you could use a more direct phrasing like, “As a financial advisor, please analyze the following data and answer these questions.”

```
You are an experienced financial advisor. Please analyze the following investment options and answer the questions below:

- Investment A: 5% annual return, low risk
- Investment B: 10% annual return, medium risk
- Investment C: 20% annual return, high risk

1. Which investment option offers the best risk-to-reward ratio?
2. What is your recommendation for a risk-averse investor?
3. How should I diversify among these options?
```
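One technique that often stops the model from echoing the few-shot example is to present the example as prior conversation turns (a user/assistant pair) rather than inline text, so the only thing left "open" is the final user message. A minimal sketch, with invented example content, assuming the `openai` Python client (call commented out):

```python
# Sketch: the few-shot example is a past user/assistant exchange; the real
# input is the last user turn. All content below is made up for illustration.
system = (
    "You are an experienced financial advisor. "
    "Answer the user's questions about the data they provide."
)

example_input = (
    "Investment X: 8% annual return, medium risk. "
    "Is this suitable for retirement savings?"
)
example_output = (
    "At 8% with medium risk, X can anchor a retirement portfolio "
    "if balanced with low-risk bonds."
)

real_input = (
    "Investment A: 5% return, low risk. Investment B: 10% return, medium risk. "
    "Which offers the better risk-to-reward ratio?"
)

messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": example_input},        # few-shot example input
    {"role": "assistant", "content": example_output},  # answer style to imitate
    {"role": "user", "content": real_input},           # the actual request
]

# client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(len(messages))
```

Because the example output already "belongs" to a completed assistant turn, the model is less inclined to repeat it and instead produces a fresh answer in the same style for the final user message.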