It is cool to learn that, thanks. But is it possible to apply the logit bias conditionally and instruct the AI: if you think this, then set that token's bias to 100, otherwise to -100?
I searched a little and found that tiktoken returns token IDs for words; I tested it and it works fine.
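Here is roughly what I ran (the model name is just an example; adjust it to whatever model you actually call):

```python
# Minimal sketch: look up the token IDs for a word with tiktoken.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
word = " satisfactory"        # leading space matters: " word" and "word" tokenize differently
token_ids = enc.encode(word)  # list of integer token IDs
print(token_ids)
print(enc.decode(token_ids))  # round-trips back to the original text
```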
In my case, I want to include sentences in the output, not single words.
So, how do I get the tokens for sentences and force the API to include those sentences in the output?
Interesting question. I tried experimenting and found that only individual tokens can be sent with bias, hence we can’t send an entire token sequence and set its bias to 100.
If we send the individual tokens with a bias of 100, the order in which they appear is random.
The only way I found to make it work is to send the logit bias as an OrderedDict, where the keys are integer token IDs and the values are the biases.
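For reference, a rough sketch of the kind of request I mean, assuming the current openai Python SDK (client setup, model name, and prompt are placeholders, not the exact code I used):

```python
from collections import OrderedDict

import tiktoken
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

sentence = "The results look satisfactory."
# Map each token ID of the sentence to a strong positive bias (values range -100..100).
bias = OrderedDict((tok, 100) for tok in enc.encode(sentence))

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Comment on the experiment results."}],
    logit_bias=bias,   # per-token bias; the API has no notion of token order here
    max_tokens=50,
)
print(response.choices[0].message.content)
```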
Maybe it's having a hard time cramming in all of the keywords at once, especially if max_tokens is low.
Maybe try generating one sentence at a time with individual API calls, then one final call where you pass in all the previously generated sentences as context and request a paragraph that summarizes them, like:
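Untested sketch of the idea, assuming the openai Python SDK; the keywords, model name, and prompts are placeholders:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"
keywords = ["solar panels", "battery storage", "grid stability"]

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=60,
    )
    return resp.choices[0].message.content.strip()

# One call per keyword: each call only has to fit a single sentence.
sentences = [ask(f"Write one sentence that includes the phrase '{kw}'.") for kw in keywords]

# Final call: stitch the generated sentences into one coherent paragraph.
summary = ask(
    "Rewrite the following sentences as one coherent paragraph, keeping every "
    "key phrase intact:\n" + "\n".join(sentences)
)
print(summary)
```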
I have tried to achieve this using different prompts, but it doesn't work consistently. For example, I instruct it that if it thinks the result is satisfactory it should say A, otherwise say B, and to apply this rule strictly. I found that it sometimes says things like "Since the result is not satisfactory I will not say A, until then B". Since I wrote code to detect A in its output, my code thinks the result is satisfactory and fails. I know I could try to handle this in the code, but the unpredictability of the output makes it very hard to write consistent code.
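A toy illustration of the kind of failure I mean (the substring check here is a simplified placeholder, not my actual detection code):

```python
reply = "Since the result is not satisfactory I will not say A, until then B"

if "A" in reply:                           # naive substring test
    print("code thinks: satisfactory")     # fires even though the model refused
else:
    print("code thinks: not satisfactory")
```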
The definition of "satisfactory" is open to interpretation, which leads to ambiguity. Making your prompt as explicit and detailed as possible should improve the results.
Thanks, but the problem is not that satisfaction is a relative concept. I strictly tell it not to mention a word until it is satisfied with the result. It correctly finds the result not satisfactory, yet it thinks it is complying by saying "I won't mention the word until this is satisfactory", which mentions the word anyway. I half suspect it knows the program ends when that word appears, since there is a condition for that in my code, and that it is trying to break free by saying it.