Best way to prompt a model to regenerate part of the response it has generated

Hi,

I am trying to use GPT-4o-mini to complete a response that may have been written by a different model or a human. In Option 1 (below), the prefix is ignored and the assistant simply answers the question. In Option 2, the response does get completed, but I am worried about whether the model will be robust to this. Are there alternative approaches to doing this?

Thanks!

Option 1:

import openai
import os

message_list = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

response_so_far = "I am not sure, but"

if response_so_far:
    message_list.append({"role": "assistant", "content": response_so_far})

openai_client = openai.OpenAI(
    api_key=os.environ.get('OPENAI_API_KEY'),
)

# Pass `message_list` to the model
response_out = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=message_list,
    max_tokens=1024,  # Adjust as needed
    temperature=0.7
)

print(response_out.choices[0].message.content)

Option 2:

import openai
import os

message_list = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

response_so_far = "I am not sure, but"

if response_so_far:
    message_list.append({"role": "assistant", "content": response_so_far})
    message_list.append({"role": "user", "content": "Finish the response above."})

openai_client = openai.OpenAI(
    api_key=os.environ.get('OPENAI_API_KEY'),
)

# Pass `message_list` to the model
response_out = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=message_list,
    max_tokens=1024,  # Adjust as needed
    temperature=0.7
)

print(response_out.choices[0].message.content)

You could try telling the model what you are trying to achieve. Be as precise as possible, and maybe also give a couple of examples (research has shown that more examples generally help).
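A minimal sketch of that few-shot idea; the example question/answer pairs and the exact message format here are made up for illustration:

```python
# Hypothetical few-shot setup: show the model completed examples of the
# "continue this partial answer" task before giving it the real one.
few_shot = [
    {"role": "system", "content": "You continue partial answers verbatim, "
                                  "without repeating the existing text."},
    # Example 1: partial answer -> continuation only
    {"role": "user", "content": 'Question: "What is 2 + 2?"\n'
                                'Partial answer: "The sum is"'},
    {"role": "assistant", "content": " 4."},
    # Example 2
    {"role": "user", "content": 'Question: "Name a primary color."\n'
                                'Partial answer: "One primary color is"'},
    {"role": "assistant", "content": " red."},
    # The real task, in the same format as the examples
    {"role": "user", "content": 'Question: "What is the capital of France?"\n'
                                'Partial answer: "I am not sure, but"'},
]
```

You would then pass `few_shot` as the `messages` argument, the same way as `message_list` in your snippets.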

There is also a chance that the model answers with something wrong - and two different models might even give the same wrong answer, since model makers scrape a lot of data from each other or use the same data providers as sources. E.g. the idea of using Wikipedia as training data probably came up with multiple parties at once, so when Wikipedia has something wrong or not to your liking…

So if you want to make sure the answer is aligned with what you want it to say - e.g. a sales bot that should state the right product prices - you will need a mechanism on top that checks the data, or you let the model generate some SQL to grab the data from a trustworthy source.
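As a rough sketch of that check-against-a-trusted-source idea - the table name, columns, and prices here are invented, and in practice the database would be your real product catalog:

```python
import sqlite3

# Hypothetical product database standing in for the "trustworthy source".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT PRIMARY KEY, price REAL)")
conn.execute("INSERT INTO products VALUES ('widget', 19.99)")

def verify_price(product: str, claimed_price: float) -> bool:
    """Check a model-claimed price against the database before showing it."""
    row = conn.execute(
        "SELECT price FROM products WHERE name = ?", (product,)
    ).fetchone()
    return row is not None and abs(row[0] - claimed_price) < 0.005

# A sales bot would run this on the price the model produced.
print(verify_price("widget", 19.99))  # True: matches the trusted source
print(verify_price("widget", 24.99))  # False: the model made up a price
```

The same shape works if the model generates the SQL itself - you would just run the generated query against the trusted database instead of the model's free-text claim.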

For your question I would go with a prompt like this:

I want you to complete an answer to the question 

"What is the capital of France?"

The answer starts with 

[response_so_far] <<< you will have to replace this
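Putting that template together in code, assuming the same variable names as in your snippets (the final "continue from where it stops" instruction is my own addition):

```python
question = "What is the capital of France?"
response_so_far = "I am not sure, but"

# Fill the [response_so_far] placeholder from the template above.
prompt = (
    f'I want you to complete an answer to the question\n\n'
    f'"{question}"\n\n'
    f'The answer starts with\n\n'
    f'"{response_so_far}"\n\n'
    f'Continue the answer from exactly where it stops; do not repeat the '
    f'existing text.'
)

message_list = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt},
]
```

`message_list` is then passed to `chat.completions.create` exactly as in your two options.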