I’m working on a classifier and was hoping to leverage tools like guidance or LMQL. In the case of LMQL, for instance, I got this error:
AssertionError: Chat API models do not support advanced constraining of the output, please use no or less complicated constraints.
The constraints I’m using are not “advanced” at all:
@lmql.query(model=llm, verbose=True)
def chain_of_thought(question):
    '''lmql
    # argmax
    "Q: It's the last day of June. What day is it?\n"
    "A: Today is [RESPONSE: r'[0-9]{2}/[0-9]{2}']\n"
    # generate numbers
    "Q: What's the month number?\n"
    "A: [ANSWER: int]"
    return ANSWER
    '''
Why does the OpenAI Chat API have these limitations, and are there other tools or alternatives for building a classifier?
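One common workaround, since Chat API backends don’t support token-level constraining, is to generate freely and enforce the pattern client-side with a validate-and-retry loop. Here is a minimal sketch in plain Python; `call_model` is a hypothetical stand-in for whatever API call you actually make:

```python
import re

# Same pattern as in the LMQL query above: a two-digit/two-digit date.
DATE_PATTERN = re.compile(r"\b([0-9]{2}/[0-9]{2})\b")

def extract_date(text):
    """Return the first MM/DD-style match, or None if the output is invalid."""
    m = DATE_PATTERN.search(text)
    return m.group(1) if m else None

def constrained_query(call_model, prompt, max_retries=3):
    """Call the model and retry until the output matches the pattern."""
    for _ in range(max_retries):
        reply = call_model(prompt)
        date = extract_date(reply)
        if date is not None:
            return date
    raise ValueError("model never produced a valid MM/DD answer")

# Example with a stand-in "model" that always answers in the right shape:
print(constrained_query(lambda p: "Today is 06/30.", "What day is it?"))  # → 06/30
```

This doesn’t give you the guided decoding that LMQL does against completion models, but it does give you a hard guarantee on the shape of what your classifier pipeline receives.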
I was expecting something more solid. I know it’ll work like that, but one of the reasons we need tools like guidance or LMQL in the first place is to avoid the “crazy” non-determinism of LLMs, especially in apps that require automation.
For example, the screenshot I shared is from the Azure OpenAI API docs. Don’t you think that’s a limitation?
It depends on what you do with it; a jackhammer and a scalpel are not really interchangeable.
I think we’re entering a wholly different domain, where we need to consider letting go of classical imperative programming if we want to leverage the true power of LLMs. Of course, there need to be interfaces between the classical and the LLM domains, but trying to get LLMs to behave like a classical computer is a waste, in my opinion.
Regarding your image: sure, logprobs are nice to have, but I don’t know if they will really help you/give you the result you think you’ll get. If you really need them, use a different model?
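To make the logprobs point concrete: if a model does return log-probabilities for your candidate label tokens, you can turn them into a normalized classifier distribution with a softmax. A sketch in plain Python, assuming you have already obtained per-label logprobs from the API (the label names and values below are made up for illustration):

```python
import math

def classify_from_logprobs(label_logprobs):
    """Convert raw per-label log-probabilities into a normalized
    probability distribution and return the most likely label."""
    # Shift by the max logprob for numerical stability before exponentiating.
    m = max(label_logprobs.values())
    exps = {label: math.exp(lp - m) for label, lp in label_logprobs.items()}
    total = sum(exps.values())
    probs = {label: v / total for label, v in exps.items()}
    best = max(probs, key=probs.get)
    return best, probs

# Hypothetical logprobs for the tokens "positive" / "negative":
label, probs = classify_from_logprobs({"positive": -0.2, "negative": -1.8})
print(label)  # → positive
```

Whether these numbers mean what you hope they mean is a separate question: they are the model’s token probabilities, not calibrated class probabilities, which is part of the caution above.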