Is there a way to verify whether logit_bias
actually influenced the model’s output?
I’m trying to assess the impact of setting logit_bias
in a request. For example:
- Without any logit_bias, I get this result: "Whispers of the Engine Guru"
- With a logit_bias applied to block the "Engine" and " Engine" tokens, the result changes to: "Whispers of the Wrench Wizard"
This suggests the bias is working, but I'm wondering: is there any way to explicitly tell whether a logit_bias entry caused the model to choose a different token during generation? Essentially, can we track whether any logit_bias items were actively avoided in producing the final output?
Here are my requests:
Without logit bias set:
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini-2024-07-18",
    "temperature": 0,
    "seed": 42,
    "messages": [
      {
        "role": "system",
        "content": "Write a five word title for the story: In a small town garage, Luis, a quiet Honda mechanic, had a gift—he could diagnose any engine by sound alone. Locals said he spoke fluent Civic and Accord."
      }
    ]
  }'
With logit_bias set on "Engine" and " Engine":
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini-2024-07-18",
    "temperature": 0,
    "seed": 42,
    "logit_bias": {"7286": -100, "11032": -100},
    "messages": [
      {
        "role": "system",
        "content": "Write a five word title for the story: In a small town garage, Luis, a quiet Honda mechanic, had a gift—he could diagnose any engine by sound alone. Locals said he spoke fluent Civic and Accord."
      }
    ]
  }'
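One idea I've been toying with (a sketch, not a confirmed technique): add "logprobs": true and a "top_logprobs" value to the request, then scan the per-token logprobs in the response for the banned token strings. If you run this on the *unbiased* request, positions where "Engine" or " Engine" ranks highly show where the bias would bite; on the biased request, seeing a different token chosen at those same positions is at least circumstantial evidence the bias did something. The biased_positions helper and the mock response below are hypothetical; the mock only mimics the Chat Completions logprobs shape, with invented values.

```python
# Token strings we tried to suppress via logit_bias (assuming IDs 7286 and
# 11032 decode to exactly these strings; that assumption should be checked
# with a tokenizer for the model in use).
BANNED = {"Engine", " Engine"}

def biased_positions(response: dict, banned=BANNED):
    """Scan a Chat Completions response (requested with "logprobs": true and
    "top_logprobs" > 0) and report positions where a banned token appears
    among the top alternatives but was not the token actually chosen."""
    hits = []
    content = response["choices"][0]["logprobs"]["content"]
    for i, tok in enumerate(content):
        for alt in tok.get("top_logprobs", []):
            if alt["token"] in banned and tok["token"] not in banned:
                hits.append((i, tok["token"], alt["token"], alt["logprob"]))
    return hits

# Mock response in the Chat Completions logprobs shape (values invented).
mock = {
    "choices": [{
        "logprobs": {"content": [
            {"token": "Whispers", "logprob": -0.1, "top_logprobs": [
                {"token": "Whispers", "logprob": -0.1}]},
            {"token": " Wrench", "logprob": -0.9, "top_logprobs": [
                {"token": " Engine", "logprob": -0.3},
                {"token": " Wrench", "logprob": -0.9}]},
        ]}
    }]
}

for pos, chosen, banned_tok, lp in biased_positions(mock):
    print(f"position {pos}: chose {chosen!r}, banned {banned_tok!r} had logprob {lp}")
```

One caveat with this approach: a -100 bias effectively removes the token from sampling, so in the biased run the banned token may not appear in top_logprobs at all, which is why comparing against the unbiased run's logprobs seems necessary.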