I’m using GPT to answer multiple-choice questionnaires, where the model decides between options A-E. I want the model to return only one of these options, and I also want the logprob of each option. The critical requirement is that the model can only return the tokens A, B, C, D, E. I do this by setting the logit_bias of these tokens to 100. However, the top_logprobs it returns are not these tokens but other tokens, as shown below. Even when I additionally set the logit_bias of the unwanted tokens to -100, I still get the same response.
Why is logit_bias not working the way I expect? I thought that setting the logit_bias of A, B, C, D, E to 100 would force the model to return only one of these tokens. Why is this not happening?
One month ago the top_logprobs were always A, B, C, D, and E, but this is no longer the case. Are there any API updates I need to take into account?
This is my call to the function:
response_message, response = gpt_call(
    prompt=mc_score_prompt,
    max_tokens=1,
    top_logprobs=5,
    logprobs=True,
    temperature=1,
    logit_bias={
        32: 100.0,  # A -- increase the likelihood of this token appearing in the response
        33: 100.0,  # B
        34: 100.0,  # C
        35: 100.0,  # D
        36: 100.0,  # E
    },
)
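For context on what I understand the bias to do: the API adds the bias values to the model's raw logits before softmax and sampling, so +100 should make the biased tokens all but certain to be sampled, and -100 should effectively ban a token. A toy sketch of that arithmetic (the logit values here are made up for illustration, not taken from the API):

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

# Hypothetical raw logits for a few candidate first tokens
logits = {"A": 1.0, "B": 2.5, "The": 3.0, "Option": 0.5}

# Apply a logit_bias of +100 to the answer letters only
bias = {"A": 100.0, "B": 100.0}
biased = {t: v + bias.get(t, 0.0) for t, v in logits.items()}

probs = softmax(biased)
# After the bias, essentially all probability mass sits on A and B,
# even though "The" had the highest unbiased logit
print(probs["A"] + probs["B"])
print(probs["The"])
```

Given that, I would expect the biased tokens to dominate both the sampled output and the reported top_logprobs.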
def gpt_call(prompt="", model="gpt-4", robot="", logit_bias={}, logprobs=False,
             max_tokens=1024, temperature=0, tool_choice=None, tools=None,
             top_logprobs=None):
    client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
    completion = client.chat.completions.create(
        model=model,
        messages=[
            # {"role": "system", "content": robot},
            {"role": "user", "content": prompt},
        ],
        logit_bias=logit_bias,
        logprobs=logprobs,
        top_logprobs=top_logprobs,
        max_tokens=max_tokens,
        temperature=temperature,
    )
    return completion.choices[0].message, completion.choices[0]
This is my response:
Choice(finish_reason='length', index=0, logprobs=ChoiceLogprobs(content=[ChatCompletionTokenLogprob(token='B', bytes=[66], logprob=-5.3000836e-05, top_logprobs=[TopLogprob(token='B', bytes=[66], logprob=-5.3000836e-05), TopLogprob(token='The', bytes=[84, 104, 101], logprob=-9.859428), TopLogprob(token='Option', bytes=[79, 112, 116, 105, 111, 110], logprob=-14.296928), TopLogprob(token=' B', bytes=[32, 66], logprob=-17.406303), TopLogprob(token='"B', bytes=[34, 66], logprob=-17.578178)])]), message=ChatCompletionMessage(content='B', role='assistant', function_call=None, tool_calls=None))
Where the top_logprobs are:
TOP LOGPROBS:
B: -5.3000836e-05
The: -9.859428
Option: -14.296928
B: -17.406303
"B: -17.578178