Can someone explain the `response_model` chat completion param?

I was browsing the instagraph repo and came across a completion call that passes a response_model parameter, but I can't find any docs on it.

Can someone explain how that works? It seems to be a way to pass in a class name and have the response magically formatted, creating an instance of that class.

In the API docs for chat completions I see response_format, but not response_model.

(I can’t include links so … )

What a third-party library's parameter does will be documented in that project, or not documented at all.

It likely refers to setting the library to use a particular AI model, such as gpt-3.5-turbo, which may even be an internal name or a format shared across multiple AI services.

I figured it out: it's a patch to the OpenAI client called "instructor" that allows you to pass typed (Pydantic) models as the response_model. Very neat. Shame I can't post the link, as it's very hard to google for.
I can't post links but: github → jxnl → instructor
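For anyone landing here later, here is a minimal sketch of the idea. The instructor usage shown in the comments is from memory and may not match the library's current API exactly; the runnable part below is stdlib-only and just illustrates what "parse the response into a typed model" means (UserDetail and parse_response are made-up names for illustration):

```python
# Real-world usage with the jxnl/instructor library looks roughly like this
# (assumption from memory -- requires the instructor + openai packages and
# an API key, so it is not runnable here):
#
#   import instructor
#   from openai import OpenAI
#   from pydantic import BaseModel
#
#   class UserDetail(BaseModel):
#       name: str
#       age: int
#
#   client = instructor.patch(OpenAI())
#   user = client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       response_model=UserDetail,  # the non-standard param in question
#       messages=[{"role": "user", "content": "Extract: Jason is 25"}],
#   )
#
# Below is a stdlib-only illustration of the core idea: take the JSON the
# model returns and validate it into a typed instance.
import json
from dataclasses import dataclass, fields

@dataclass
class UserDetail:
    name: str
    age: int

def parse_response(raw_json: str, model: type) -> object:
    """Mimic the response_model idea: load JSON and build a typed
    instance, rejecting unexpected keys."""
    data = json.loads(raw_json)
    allowed = {f.name for f in fields(model)}
    unexpected = set(data) - allowed
    if unexpected:
        raise ValueError(f"unexpected keys: {unexpected}")
    return model(**data)

# Pretend this JSON came back from the chat completion:
user = parse_response('{"name": "Jason", "age": 25}', UserDetail)
print(user)  # UserDetail(name='Jason', age=25)
```

The real library does much more (retries on validation failure, schema injection into the prompt), but the above is the gist of why passing a class gets you back an instance of that class.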