Function call returns invalid JSON format

I defined a function call in my API request. It describes the information I want to extract from a text, so the result should look like:
{
    "item1": …,
    "item2": …,
    "item3": …
}

The results were pretty consistent early in development, but once a lot of description was added to the function call I noticed a weird regression.

The model now frequently returns something like {"text": "bunch of nonsense"} instead of the schema I provided, yet the finish reason is still a tool call, which seems weird to me.
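For illustration, the setup looks roughly like this (a simplified sketch: the function and field names here are placeholders, and the real descriptions are much longer):

```python
from openai import OpenAI

client = OpenAI()

# Simplified sketch of the tool definition (placeholder names and descriptions).
tools = [
    {
        "type": "function",
        "function": {
            "name": "extract_items",
            "description": "Extract item1, item2 and item3 from the given text.",
            "parameters": {
                "type": "object",
                "properties": {
                    "item1": {"type": "string", "description": "what item 1 is"},
                    "item2": {"type": "string", "description": "what item 2 is"},
                    "item3": {"type": "string", "description": "what item 3 is"},
                },
                "required": ["item1", "item2", "item3"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": "...the text to extract from..."}],
    tools=tools,
)

choice = response.choices[0]
print(choice.finish_reason)  # "tool_calls", even when the arguments are wrong
print(choice.message.tool_calls[0].function.arguments)
# Expected: {"item1": ..., "item2": ..., "item3": ...}
# Frequently getting: {"text": "bunch of nonsense"}
```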

Has anyone else faced this issue?

For context, I’m using gpt-3.5-turbo-1106.

Welcome to the community!

Edit: oops, I didn’t read your post properly. You want to use function calls, and this approach doesn’t use function calls. Sorry!

Do you wanna show us your entire prompt?

I’m generally using a TypeScript-inspired schema, like so:

bla bla bla context prompt etc

Use the following schema:
{
    "item1": string, // item 1 comprises x
    "item2": string, // item 2 comprises y
    "item3": string  // item 3 comprises z
}
Only output valid JSON, otherwise the system will break.
Start your response with {

using " in the schema almost ensures that the same quotes are used in the output.

Lowering the temperature might also help :slight_smile:
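Roughly how I wire that up (a sketch in Python; the prompt text, model and temperature are just examples):

```python
import json

from openai import OpenAI

client = OpenAI()

prompt = """bla bla bla context prompt etc

Use the following schema:
{
    "item1": string, // item 1 comprises x
    "item2": string, // item 2 comprises y
    "item3": string  // item 3 comprises z
}
Only output valid JSON, otherwise the system will break.
Start your response with {"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # lower temperature keeps the output closer to the schema
)

# The model's reply should be the JSON object itself, so parse it directly.
data = json.loads(response.choices[0].message.content)
print(data["item1"], data["item2"], data["item3"])
```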

I’m afraid I do not have permission to share the entire prompt.

We were using a typed schema but wanted to invest more in function calls, and that’s when we suddenly noticed the regression as the function call got bigger.

Since we invested time in it, I’m wondering if this is fixable, or just a hopeless case.

I’ve used functions from day one, and stopped using functions on day two. I think they’re a huge waste of tokens, but that’s just my opinion :person_shrugging:

Interesting take. Doesn’t this depend on the breadth of possible actions the model could take? If it’s clear that the model must always provide a search term for retrieval, then a function call could be overkill. But if it should choose from six different types of actions and combine them, I can imagine function calling makes more sense. Yes, you could save tokens by sacrificing development time and code simplicity and parsing the model response yourself. But at this point I also wouldn’t be surprised if, due to training data, OpenAI has optimized the model’s decision-making mostly for function calls.
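For example (a toy sketch, nothing from the actual use case): with several action types you just register them all as tools and let the model decide:

```python
from openai import OpenAI

client = OpenAI()

# Two of the hypothetical action types as tool definitions (toy example).
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_documents",
            "description": "Retrieve documents matching a search term.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "create_reminder",
            "description": "Create a reminder for the user.",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string"},
                    "due": {"type": "string", "description": "ISO 8601 date"},
                },
                "required": ["text", "due"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": "Remind me to review the report on Friday."}],
    tools=tools,
    tool_choice="auto",  # the model picks which action(s) to call, if any
)
```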

Hmm. While you still need to parse the function call anyway (you can’t just eval it), I can see how it might be easier in some instances.
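i.e. something like this either way (a sketch; `response` here is assumed to be any chat completion that ended in a tool call):

```python
import json


def parse_tool_call(response) -> dict:
    """Pull the first tool call out of a chat completion response.

    The arguments arrive as a JSON string, so you still have to
    json.loads them (and validate the keys yourself) rather than
    eval-ing anything.
    """
    tool_call = response.choices[0].message.tool_calls[0]
    return json.loads(tool_call.function.arguments)
```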

In the beginning, I used tags like <<functionname param1 param2>> and regexed them. For complicated stuff, JSON output. Now I use JSON for everything (even user output) because I can parse and delegate it reactively in real time. But this goes to your point:

Since I’ve had to do a lot of these things anyway since davinci, I suppose I already had a lot of code and experience lying around that newcomers might not have. I looked at functions, and later assistants, and noticed that they didn’t really add anything for me.

But I shouldn’t dispute that it might help newer devs get started.

So I agree with you there!

But as for this:

There’s probably some truth to this, especially if you go for the latest models. It feels like the API gets the scraps that fall off the ChatGPT table.

I think, however, that that just causes unnecessary vendor lock-in. I’d rather identify and trim off unnecessary ```s than build my stuff around OpenAI’s recommended (imo anti-) patterns.

2c.

Great post, thanks for your thoughts! :slightly_smiling_face:
