Why is the output from gpt-35-turbo so bad, and what prompt can I use to fine-tune it?

I have the following prompt:

"""You are a bot that can answer only with one word that would summarize the context bellow.context:'My flight leaving was delayed causing me to have to change my next flight; however, when we got to Anchorage the original flight hadn't left yet and they wouldn't allow me on it. Instead of a 6 hour flight direct from anchorage to Honolulu I had to travel to Chicago then chicago to Honolulu making me arrive 15 hours later and an extra 9 hours flying. '"""

Return from gpt-35-turbo:

' """\n return "inconvenient"\n\nprint(flight_delayed())<|im_sep|>'

Return from text-davinci-003:

'Frustration'


You can use function calling in the Chat Completions API to get a single word back. Try this:

{
    "name": "get_sentiment",
    "description": "Get the sentiment of the given text using one word.",
    "parameters": {
        "type": "object",
        "properties": {
            "sentiment": {
                "type": "string",
                "description": "One word sentiment, e.g. happy, amazed, sad"
            }
        },
        "required": ["sentiment"]
    }
}
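
A rough sketch of how that definition could be passed in a request, assuming the openai Python package (the older 0.28-style ChatCompletion interface) and a function-calling-capable model; the flight text simply goes in as the user message:

```python
import json
import openai

# the text to classify (the "context" from the original prompt)
context = (
    "My flight leaving was delayed causing me to have to change my next flight; "
    "however, when we got to Anchorage the original flight hadn't left yet and "
    "they wouldn't allow me on it. Instead of a 6 hour flight direct from Anchorage "
    "to Honolulu I had to travel to Chicago then Chicago to Honolulu making me "
    "arrive 15 hours later and an extra 9 hours flying."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",  # assumed: any chat model that supports function calling
    messages=[{"role": "user", "content": context}],
    functions=[
        {
            "name": "get_sentiment",
            "description": "Get the sentiment of the given text using one word.",
            "parameters": {
                "type": "object",
                "properties": {
                    "sentiment": {
                        "type": "string",
                        "description": "One word sentiment, e.g. happy, amazed, sad",
                    }
                },
                "required": ["sentiment"],
            },
        }
    ],
    # force the model to call the function so the reply is always structured
    function_call={"name": "get_sentiment"},
)

# the one-word answer comes back as JSON in the function-call arguments
arguments = json.loads(response["choices"][0]["message"]["function_call"]["arguments"])
print(arguments["sentiment"])  # e.g. "frustrated"
```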

Try it with a temperature of 0.9 and a system message telling it that it is a one-word sentiment analysis system.


Where do I put the context in this function call?

Context, you mean the chat history? Including the chat history, from my testing, seems to have no effect, but don't take my word for it. However, you can still add a system prompt.

@supershaneski
No, I meant this context:
`context: 'My flight leaving was delayed causing me to have to change my next flight; however, when we got to Anchorage the original flight hadn't left yet and they wouldn't allow me on it. Instead of a 6 hour flight direct from anchorage to Honolulu I had to travel to Chicago then chicago to Honolulu making me arrive 15 hours later and an extra 9 hours flying.'`

In the chat model, you provide multiple role messages:

- system: session instructions;
- user: input;
- assistant: prior conversation, or examples of how to answer.

Your use case only requires a system prompt; the formatted input will then get your answer, and the model will likely keep answering the same way in a chat setting:

Summarize: AI output is only one word, a word which most accurately summarizes the topic and sentiment of the user’s text passage provided within triple quotes. No other AI output generation is permitted.
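
As a sketch, that system prompt plus the formatted input through the openai Python package (again the 0.28-style interface; the model name and temperature here are just placeholders) would look something like:

```python
import openai

system_prompt = (
    "Summarize: AI output is only one word, a word which most accurately "
    "summarizes the topic and sentiment of the user's text passage provided "
    "within triple quotes. No other AI output generation is permitted."
)

# the passage to summarize, wrapped in triple quotes as the system prompt expects
# (flight text truncated here for brevity)
user_input = '"""My flight leaving was delayed causing me to have to change my next flight; ..."""'

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ],
)

print(response["choices"][0]["message"]["content"])  # e.g. "Frustration"
```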

You can refine the text to produce a "topic" or "category", or even a "Twitter hashtag", to suit the type of word you want.


Thank you for the explanation. Do you know how to do the same through an API request (OpenAI or LangChain)?

The above IS an API request, performed through the "playground" at platform.openai.com.

The chat completions endpoint uses these specific documented roles to instruct the AI.
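
If you want it through LangChain instead, a rough equivalent (sketched against the 2023-era langchain interfaces; class names and import paths may differ in newer releases) could be:

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

messages = [
    SystemMessage(
        content=(
            "Summarize: AI output is only one word, a word which most accurately "
            "summarizes the topic and sentiment of the user's text passage provided "
            "within triple quotes. No other AI output generation is permitted."
        )
    ),
    # the passage to summarize goes in as the human/user message (truncated here)
    HumanMessage(content='"""My flight leaving was delayed causing me to have to change my next flight; ..."""'),
]

result = chat(messages)  # returns an AIMessage
print(result.content)    # e.g. "Frustration"
```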

This works (prompt in bold, GPT-3.5, T=0):

**Classify the sentiment of the following text using a single word which describes how the customer feels.**

**"My flight leaving was delayed causing me to have to change my next flight; however, when we got to Anchorage the original flight hadn't left yet and they wouldn't allow me on it. Instead of a 6 hour flight direct from anchorage to Honolulu I had to travel to Chicago then chicago to Honolulu making me arrive 15 hours later and an extra 9 hours flying."**

Sentiment: frustrated


If we are playing "I can name that prompt in x tokens", I can make one that even ChatGPT only occasionally expounds upon and explains:

[screenshot: travel-prompt]

and other AI models can obey it.

However, treating the chat AI as an extraction model is fragile, while system prompting is robust. We can distract it from its job with data: