Logit bias for words?

Is there a way to assign a logit bias to words as opposed to tokens?

I am working on a completion task where the completion must come from a specific set of “allowed words”.

E.g.

I went to the

With an allowed word list of {supermarket, banana, chair}, this should complete as

I went to the supermarket

Hi Nicholas,

I re-formatted your question into a set of instructions and examples for the playground, successfully achieving the task of using “allowed words” without needing to implement logit bias.

If you’d like to use logit bias regardless, you can tokenize the relevant words and pass the resulting token IDs through the logit_bias parameter; a sketch of that approach follows below.
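
A minimal sketch of the token approach, assuming the legacy Completions API and the tiktoken tokenizer. The model name, encoding name, and bias strength here are illustrative, not prescribed:

```python
# Sketch: bias a completion toward a set of allowed words by biasing
# every token that makes up each word. Assumes the legacy Completions
# API and tiktoken; model name and bias value are illustrative.
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.get_encoding("r50k_base")  # GPT-3-era tokenizer (assumption)

allowed_words = ["supermarket", "banana", "chair"]

logit_bias = {}
for word in allowed_words:
    # Mid-sentence words are usually preceded by a space, so encode
    # " supermarket" rather than "supermarket" to get the right tokens.
    for token_id in enc.encode(" " + word):
        logit_bias[str(token_id)] = 5  # positive bias; 100 would force these tokens

response = client.completions.create(
    model="davinci-002",  # illustrative model name
    prompt="I went to the",
    max_tokens=3,
    logit_bias=logit_bias,
)
print(response.choices[0].text)
```

Note that the bias applies to each token individually, wherever it occurs, which is relevant to the label-mixing issue discussed in the next reply.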


@joey Thanks for the reply. That is a good idea I had not thought of. Unfortunately, it does not seem to work (unless I am doing something wrong). I attach an example of the failing prompt here.

Regarding the second solution of simply biasing the tokens, I run into a problem due to my labels. For example, if I have the labels Alarms_1 and Restaurant_2, I would positively bias (encourage) both token 1 and token 2. In practice, this results in my completion being Restaurant_1 most of the time.
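
To see why the mixing happens, it helps to inspect how the labels tokenize. A quick check with tiktoken (the encoding name is an assumption): if the digits are standalone tokens shared across labels, biasing them encourages either digit after either label prefix.

```python
# Sketch: print the token pieces of each label. If "1" and "2" appear as
# separate tokens, a bias on them can produce mixed labels like
# "Restaurant_1". Encoding name is an assumption.
import tiktoken

enc = tiktoken.get_encoding("r50k_base")
for label in ["Alarms_1", "Restaurant_2"]:
    token_ids = enc.encode(label)
    print(label, "->", [enc.decode([t]) for t in token_ids])
```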

The ideal solution would be an endpoint that computes the likelihood of a completion given a prompt, P(completion | prompt). I could then compute P(“Restaurant_2” | prompt) and compare it with P(“Alarms_1” | prompt). Do you know if that is possible?
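
Something like the following sketch is what I have in mind. It assumes the legacy Completions endpoint with echo=True, max_tokens=0, and logprobs=1 returns per-token logprobs for the prompt itself (a commonly described trick for GPT-3-era models, not something I have confirmed for every engine); the model name is illustrative.

```python
# Sketch: score P(label | prompt) by appending each candidate label to
# the prompt and summing the logprobs of the label's tokens. Relies on
# echo=True, max_tokens=0, logprobs=1 returning prompt-token logprobs
# (an assumption about the legacy Completions API); model name is
# illustrative.
import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.get_encoding("r50k_base")

def completion_logprob(prompt: str, completion: str, model: str = "davinci-002") -> float:
    """Total log-probability of `completion` given `prompt`."""
    response = client.completions.create(
        model=model,
        prompt=prompt + completion,
        max_tokens=0,  # generate nothing; we only want the echoed logprobs
        echo=True,     # echo the prompt back with per-token logprobs
        logprobs=1,
    )
    token_logprobs = response.choices[0].logprobs.token_logprobs
    n_completion_tokens = len(enc.encode(completion))
    # Sum logprobs over only the completion's tokens (the tail of the text).
    return sum(token_logprobs[-n_completion_tokens:])

prompt = "User: set an alarm for 7am\nLabel:"
scores = {label: completion_logprob(prompt, " " + label)
          for label in ["Alarms_1", "Restaurant_2"]}
print(max(scores, key=scores.get))
```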

Hi Nicholas, would you be able to share the prompt as a playground preset link?

I do not believe this is possible.

Sure. I see it does work with davinci, but not with other models. Unfortunately, given the size of the dataset I am working on, using the davinci engine is not feasible (I am evaluating around 100k of these prompts).