I don’t think logit_bias works when you have response_format={"type": "json_object"}
I’m really hoping it’s a me thing. My code is below. For this example I’m excluding the tokens in “garnered high praise”. The encoder I’m using is
tiktoken.get_encoding("cl100k_base")
The token IDs generated by the encoder match the online tokenizer tool for gpt-4.
import json
import random

import openai

def generate_summary(user_prompt, system_prompt, bias_dict):
    response = openai.ChatCompletion.create(
        model="gpt-4-0125-preview",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        response_format={"type": "json_object"},
        # Randomize temperature slightly between calls
        temperature=random.uniform(0.1, 0.5),
        logit_bias=bias_dict,
    )
    return json.loads(response["choices"][0]["message"]["content"])
bias_dict = {"12440": -100.0, "1215": -100.0, "291": -100.0, "1579": -100.0, "29488": -100.0}
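For what it’s worth, here’s a minimal sketch of how that dict can be built from the token IDs (the IDs are the ones above; per the API, logit_bias keys are token IDs as strings and -100 effectively bans a token):

token_ids = [12440, 1215, 291, 1579, 29488]  # IDs for "garnered high praise" from cl100k_base
# Map each token ID (as a string key) to -100 to exclude it from sampling
bias_dict = {str(tid): -100.0 for tid in token_ids}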
AND… the output has the phrase “garnered high praise” verbatim.
Does anyone have any ideas? Or am I SOL when using the JSON response option?