Chat API models do not support advanced constraining of the output, please use no or less complicated constraints

Hello,

I’m working on a classifier and was hoping to leverage tools like guidance or LMQL. With LMQL, for instance, I get this error:

AssertionError: Chat API models do not support advanced constraining of the output, please use no or less complicated constraints.

The constraints I’m using are not “advanced” at all:

@lmql.query(model=llm, verbose=True)
def chain_of_thought(question):
    '''lmql
    # argmax
    "Q: It's the last day of June. What day is it?\n"
    "A: Today is [RESPONSE: r'[0-9]{2}/[0-9]{2}']\n"

    # generate numbers
    "Q: What's the month number?\n"
    "A: [ANSWER: int]"

    return ANSWER
    '''

Why does OpenAI have these limitations, and are there any other tools or alternatives for building a classifier?
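One workaround (not from the thread, just a sketch): since the Chat API won’t let you constrain decoding, you can constrain the *output* instead — ask for a label, validate it locally against a fixed label set, and retry on garbage. Everything here is hypothetical: `call_model` is a stand-in stub, and the label set is made up.

```python
import random

LABELS = {"positive", "negative", "neutral"}

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real Chat API call.
    Simulates occasionally messy completions."""
    return random.choice(["positive", "Sure! The label is: negative", "neutral"])

def classify(text: str, max_retries: int = 3) -> str:
    """Ask for a label, then validate it locally instead of
    constraining the decoder (which the Chat API does not allow)."""
    prompt = (
        f"Classify the sentiment of the text as one of {sorted(LABELS)}.\n"
        f"Text: {text}\nLabel:"
    )
    for _ in range(max_retries):
        raw = call_model(prompt).strip().lower()
        # Accept an exact label, or rescue one embedded in extra chatter.
        for label in LABELS:
            if label in raw:
                return label
    return "neutral"  # fall back to a safe default

print(classify("I love this product"))
```

It’s cruder than logit masking, but it works with any chat endpoint.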

Hi!

I think this is an LMQL issue, not an OpenAI issue.

@Diet What makes you say so? Any sources, please?

By the way, this is from the Azure OpenAI API docs, so there are limitations on the OpenAI API.

Also, you can check this link: lmql.ai/docs/models/openai.html#openai-api-limitations

GPT-3.5 can do tons of stuff. Unfortunately, it doesn’t always conform to some people’s ideas of how it should be used.

Just because the ETH boys don’t know how to proompt doesn’t mean shifting the blame to openai is an appropriate thing to do.

Don’t get me wrong, openai screws up all the time, but in this case I don’t think it’s their fault.

Respectfully, my opinion :laughing:

I was expecting something more solid. I know it’ll work like that, but one of the reasons we need tools like guidance or LMQL is to avoid the “crazy” non-determinism of LLMs, especially in apps that require automation.

For example, the screenshot I shared is from the Azure OpenAI API docs… don’t you think it’s a limitation?

Depends on what you do with it; a jackhammer and a scalpel are probably not really interchangeable.

I think we’re entering a wholly different domain where we need to consider letting go of classical imperative programming if we want to leverage the true power of LLMs. Of course, there need to be interfaces between the classical and the LLM domains, but trying to get LLMs to behave like a classical computer is a waste in my opinion.
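One way such an "interface" between the two domains can look (a sketch, not from the thread): let the model answer freely, then have a thin validation layer extract and check structured data before any imperative code trusts it. `llm_output` is a hypothetical stub, and the expected schema is made up.

```python
import json
from typing import Optional

def llm_output() -> str:
    """Hypothetical model response; real completions often wrap
    the JSON payload in extra prose."""
    return 'Here you go: {"label": "positive", "confidence": 0.9}'

def parse_classification(raw: str) -> Optional[dict]:
    """The interface layer: extract and validate the JSON payload
    before handing it to classical, imperative code."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        return None
    try:
        data = json.loads(raw[start:end + 1])
    except json.JSONDecodeError:
        return None
    # Reject payloads that don't match the expected shape.
    if not isinstance(data.get("label"), str):
        return None
    return data

print(parse_classification(llm_output()))
```

The LLM stays free-form on its side of the boundary; the classical side only ever sees validated structures.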

Regarding your image: sure, logprobs are nice to have, but I don’t know if they will really help you/give you the result you think you’ll get. If you really need them, use a different model?
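To make the logprobs point concrete (a sketch only): the usual classifier trick is to score each candidate label by its log probability and pick the argmax, which the chat endpoint makes awkward. The numbers below are hard-coded, hypothetical scores; with an API that exposes logprobs you would read them from the response instead.

```python
import math

def label_logprobs(text: str) -> dict:
    """Hypothetical per-label log probabilities. With an API that
    returns logprobs, you would score each label continuation here;
    values are hard-coded for illustration."""
    return {"positive": -0.2, "negative": -2.1, "neutral": -3.0}

def classify_with_logprobs(text: str):
    scores = label_logprobs(text)
    # Softmax over log probabilities -> distribution over labels.
    total = sum(math.exp(lp) for lp in scores.values())
    probs = {label: math.exp(lp) / total for label, lp in scores.items()}
    best = max(probs, key=probs.get)
    return best, probs[best]

label, p = classify_with_logprobs("I love this product")
print(label)  # -> positive
```

That’s the kind of thing you give up when the endpoint only returns sampled tokens, which is why people reach for models/endpoints that do expose logprobs.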

Of course, this is all just opinion.