Prompt engineering question with the ChatGPT turbo API

Hi all, I have a lot of experience working with the older API, and I’m trying out the new turbo API.

When using the old API, I would sometimes hint the “assistant” persona to respond a certain way by prepopulating the beginning of its response,

e.g.
prompt = '''
Human: Where'd you get that?
Assistant: Funny you should ask...
'''

The completion result would be something like: “I actually got it at this magical shop\nHuman:”, which would 1) finish the rest of the assistant’s incomplete message, and 2) tell me where it ended (“\nHuman:” would mark the end).
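For context, here’s a minimal sketch of that old pattern against the legacy completions endpoint (model name and parameters are just illustrative):

import openai

prompt = '''
Human: Where'd you get that?
Assistant: Funny you should ask...
'''

# Stopping at the next "Human:" turn gives the assistant's reply a clear end marker
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=100,
    stop=["\nHuman:"],
)
print(completion.choices[0].text)

(With a stop sequence the “\nHuman:” marker is trimmed from the output rather than included in it, but the effect is the same.)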

What is the equivalent of this pattern using the new API? Does the ChatGPT API make any assumptions about the “messages” it is passed being complete, or will it sometimes decide that a message wasn’t complete and attempt to finish it, as in the example above? Does anyone have a way to accomplish this same pattern using the new API?


Read the docs about the messages you send… there are three roles currently: system, user, and assistant.

If you search the forums, there’s a few threads on this topic as well.

Hope this helps!

Thanks Paul, I get that part! I think maybe I didn’t word it well enough. I am wondering if there is a concept of a “partial” assistant message, e.g. can I ask ChatGPT to “finish this message as the assistant”?


I’m not sure if I’m following.

You can make the “prompt” whatever you like, as with the completion models… you would probably just tack the beginning of your assistant’s answer onto the end of the last user message, maybe?
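Something like this, maybe (untested, just a sketch of what I mean):

import openai

# Fold the primed assistant text onto the end of the last user message
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Where'd you get that?\nAssistant: Funny you should ask...",
        },
    ],
)
print(response.choices[0].message.content)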

Using the new prompting paradigm, I would need to frame it like this:

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Where'd you get that?"},
        {"role": "assistant", "content": "Funny you should ask..."},
    ],
)

But it’s unclear to me whether the model assumes that this is a fully formed message from the assistant, or whether it will attempt to finish the assistant’s message. That distinction seems pretty important from a prompting perspective.


It would send back the user in that case, I believe…

I don’t think you can “prime” the prompt like in the Instruct series, but if you play around, you might be able to replicate it…
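One way to play around with it (again, untested sketch): pass the partial text as a trailing assistant message, then stitch it onto whatever comes back:

import openai

primer = "Funny you should ask..."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Where'd you get that?"},
        {"role": "assistant", "content": primer},
    ],
)

# If the model continues the thought, prepending the primer reconstructs the
# full assistant message; if it starts a fresh reply instead, this won't help.
full_reply = primer + " " + response.choices[0].message.content
print(full_reply)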

Hope that helps…

My best experience working with GPT turbo has been to essentially treat it as a chatbot that I can train using a few instructions given as part of the user role. You could always tell it to begin its response with the statement “Funny you should ask…” or something similar and see how it behaves…
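E.g., something along these lines (just a sketch; the exact wording will take some experimentation):

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "Where'd you get that? "
                "Begin your response with the exact phrase 'Funny you should ask...'"
            ),
        },
    ],
)
print(response.choices[0].message.content)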
