Function calls are repeated unnecessarily

I have some functions that provide the user with fresh data. Given a single prompt like (it's not my real prompt and not my real functions): "How is the weather in NYC and should I go to the beach?" When I look at the logs, I see some functions repeated, for example:

get_weather { symbol: 'NYC' }
should_go_to_beach { beach: '123' }
get_weather { symbol: 'NYC' }
get_weather { symbol: 'NYC' }
should_go_to_beach { beach: '123' }

The assistant's answer turns out well, but because it repeats calls unnecessarily, it costs a lot of tokens.

Thank you.

There's very little to go on here. If GPT feels like it needs more information, it will make another function call. You are showing which functions were called with which parameters, but these are clearly not real logs.

What are you returning? That’s really what is needed to see. Can you post your actual logs?

The AI needs the complete conversation history of what it has written and the function returns it has received, at minimum since the last time it replied directly to the user.

If you make an AI that forgets the previous function call, it doesn't know why someone wants to go to the beach, and the weather still looks unknown.
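For illustration, a minimal sketch of what one complete exchange has to look like in the messages array, assuming the Chat Completions tools format (the names come from the made-up weather example above; the id is hypothetical):

// One complete tool exchange, Chat Completions format (sketch).
const messages = [
  { role: 'user', content: 'How is the weather in NYC and should I go to the beach?' },
  // The assistant's request, kept exactly as the API returned it:
  {
    role: 'assistant',
    content: null,
    tool_calls: [
      { id: 'call_abc123', type: 'function', function: { name: 'get_weather', arguments: '{"symbol":"NYC"}' } },
    ],
  },
  // Your function's return value, linked back by tool_call_id:
  { role: 'tool', tool_call_id: 'call_abc123', content: 'Sunny, 25 °C' },
];
// Only now call the API again with this full array; the model can see
// what it asked for and what came back, so it has no reason to re-ask.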

Hi @RonaldGRuckus, yeah, I used an example because my original data is sensitive. But take it as a premise: to answer the user, the AI only needs to call get_weather and should_go_to_beach once each, since the params are the same.

And @_j, I'm sure the history is being passed properly.

To me the issue is that whatever you are returning is determined to not be sufficient. But I have low confidence that this is truly the issue because this is all I have to go on.

@RonaldGRuckus, give me a sec, I'm going to print the real log…

If you are indeed using assistants, unfortunately the quality of the internal function call “conversation” is unknown and out of your control. You can’t see how the AI is building its knowledge from your own API.

Repeated calls are an immediate sign of it being done poorly. This is often seen among those who attempt it.

If you see a symptom in the assistants mode/endpoint that would not happen when you’ve programmed a chat completions AI correctly, program a chat completions-based chatbot correctly.

@RonaldGRuckus, @_j

Real log of the final messages, user → function calls → assistant response:

 [
  {
    role: 'system',
    content: 'As .... instructions',
  },
  {
    role: 'user',
    content: '...user question...',
  },
  {
    role: 'assistant',
    content: null,
    tool_calls: [
      { id: 'call_1IcZM3G3vLknClIuWSAksnc', type: 'function', function: { name: 'get_stock_infos', arguments: '{"symbol":"BBAS3"}' } },
    ],
  },
  {
    tool_call_id: 'call_1IcZM3G3vLknClIuWSAksnc',
    role: 'tool',
    content: 'Empresa: BRASIL ON NM ...',
  },
  {
    role: 'assistant',
    content: null,
    tool_calls: [
      { id: 'call_1IcZM3G3vLknClIuWSAksnc', type: 'function', function: { name: 'get_stock_infos', arguments: '{"symbol":"BBAS3"}' } },
      { id: 'call_fgn8oRpJjvkf61l56oDHAlksc', type: 'function', function: { name: 'get_stock_options', arguments: '{"symbol":"BBAS3"}' } },
    ],
  },
  {
    tool_call_id: 'call_1IcZM3G3vLknClIuWSAksnc',
    role: 'tool',
    content: 'Empresa: BRASIL ON NM ...',
  },
  {
    tool_call_id: 'call_fgn8oRpJjvkf61l56oDHAlksc',
    role: 'tool',
    content: '"Considerações... real data',
  },
  {
    role: 'assistant',
    content: null,
    tool_calls: [
      { id: 'call_1IcZM3G3vLknClIuWSAksnc', type: 'function', function: { name: 'get_stock_infos', arguments: '{"symbol":"BBAS3"}' } },
      { id: 'call_fgn8oRpJjvkf61l56oDHAlksc', type: 'function', function: { name: 'get_stock_options', arguments: '{"symbol":"BBAS3"}' } },
      {
        id: 'call_5BgOfJC2xS4EMSRDJt7y3D9a',
        type: 'function',
        function: { name: 'get_black_scholes', arguments: '{"optionName":"BBASA530"}' },
      },
    ],
  },
  {
    tool_call_id: 'call_1IcZM3G3vLknClIuWSAksnc',
    role: 'tool',
    content: 'Empresa: BRASIL ON NM ...',
  },
  {
    tool_call_id: 'call_fgn8oRpJjvkf61l56oDHAlksc',
    role: 'tool',
    content: '"Considerações... real data',
  },
  {
    tool_call_id: 'call_5BgOfJC2xS4EMSRDJt7y3D9a',
    role: 'tool',
    content: '... real data....',
  },
  {
    role: 'assistant',
    content: 'ai real answer...',
  },
];

See: if I understood that giving a function the input "BBAS3" obtained that result, I wouldn't have to call the function the same way several more times just to look again.

The AI is not understanding, either because of cognitive ability (unlikely) or implementation (probable).

@_j, I just hid the real answers because they're sensitive, but you can take it as a premise that the function answers satisfy the AI. What I wonder is whether the construction of the messages is correct; it looks like it appends the new functions the AI wants but also re-injects the ones already processed. Got it?

I also have the assumption that you are doing everything correctly. That just leaves a black box where we can only change the inputs.

  1. A different model. I would avoid the preview models except out of mere curiosity; fixes are required before they write functions correctly in any language requiring characters above ASCII 128.
  2. More elaborate function return language. You can write sentences. A good sentence might be "this is the only answer the function can provide, do not call again" (see the sketch after this list).
  3. A different, non-beta method (i.e., chat completions rather than the assistants beta).
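To illustrate point 2, a sketch of wrapping a function's raw return in explicit language (fetchStockInfos is a hypothetical stand-in for your data source, and the exact wording is only a suggestion):

// Hypothetical stand-in for the real data source:
declare function fetchStockInfos(symbol: string): Promise<string>;

// Wrap the raw data in a sentence telling the model the answer is final.
async function getStockInfosToolResult(symbol: string): Promise<string> {
  const data = await fetchStockInfos(symbol);
  return (
    data +
    `\nThis is the only answer get_stock_infos can provide for "${symbol}"; ` +
    'do not call it again with the same arguments.'
  );
}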

I guess the issue is that I'm re-injecting functions that were already answered.
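A minimal sketch of how I now understand the loop should work, assuming the Node openai v4 SDK (tools and executeFunction stand in for my real definitions); the point is that each iteration appends only the new assistant turn and its tool results, never a rebuilt copy of earlier ones:

import OpenAI from 'openai';

const openai = new OpenAI();

// Assumed to exist elsewhere: tool schemas plus a dispatcher that runs
// the named function and returns its result as a string.
declare const tools: OpenAI.Chat.ChatCompletionTool[];
declare function executeFunction(name: string, args: string): Promise<string>;

async function runWithTools(
  messages: OpenAI.Chat.ChatCompletionMessageParam[],
): Promise<string | null> {
  while (true) {
    const response = await openai.chat.completions.create({
      model: 'gpt-4-turbo', // any tools-capable model (assumption)
      messages,
      tools,
    });
    const msg = response.choices[0].message;

    // Append the new assistant turn exactly as the API returned it;
    // do NOT re-add tool_calls that were already answered.
    messages.push(msg);

    // No tool calls means this is the final user-facing answer.
    if (!msg.tool_calls?.length) return msg.content;

    // Answer each requested call once, linked by its tool_call_id.
    for (const call of msg.tool_calls) {
      messages.push({
        role: 'tool',
        tool_call_id: call.id,
        content: await executeFunction(call.function.name, call.function.arguments),
      });
    }
  }
}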

Hi, can you explain what you mean by this? I think I'm facing the same issue.

The problem here may be that the model is confusing the function calls because the instructions are not clear, which creates an ambiguous environment. My suggestion is to describe each function clearly and split them up if you require different answers.
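For example, a sketch of one clearly described tool in the Chat Completions tools format (the wording and schema values are assumptions):

// One clearly described tool per distinct answer the user may need.
const tools = [
  {
    type: 'function' as const,
    function: {
      name: 'get_weather',
      description:
        'Current weather for one location. The result is final for this ' +
        'conversation turn; call at most once per location.',
      parameters: {
        type: 'object',
        properties: {
          symbol: { type: 'string', description: 'City code, e.g. NYC' },
        },
        required: ['symbol'],
      },
    },
  },
];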
