GPT-4o-mini can't parse complex function-calling results, but parsing the same content from a user message works fine

I’m encountering a situation while working with the GPT-4o-Mini model that I’d like to share in the hope of getting some insights and advice.

The Context:

Task: I provide two places and need to find five associated places for each.

Tools:

  1. get_place(k) - Retrieves information about a specific place.

  2. find_associate_place(base_place) - Finds associated places based on a given base place.
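
For reference, the tool definitions I pass to the API look roughly like this (a simplified sketch; the exact descriptions and parameter schemas in my project differ slightly):

```python
# Simplified sketch of the tool schemas passed to the Chat Completions API.
# Parameter names follow the signatures above; descriptions are illustrative.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_place",
            "description": "Retrieve information about a specific place.",
            "parameters": {
                "type": "object",
                "properties": {
                    "k": {"type": "string", "description": "Identifier of the place to look up."},
                },
                "required": ["k"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "find_associate_place",
            "description": "Find places associated with a given base place.",
            "parameters": {
                "type": "object",
                "properties": {
                    "base_place": {"type": "string", "description": "The place to find associations for."},
                },
                "required": ["base_place"],
            },
        },
    },
]
```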

The Problem:

• When I use GPT-4o-Mini for function calling to generate the associated places and then paste the raw function-call results directly into a user message to build the final output, I often run into problems: the output gets distorted, with inaccuracies and even hallucinations in the final result (rough sketch below).
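
Roughly what the problematic flow looks like (a minimal sketch; `run_find_associate_place` is a placeholder for my own tool implementation, and `tools` is the list from the sketch above):

```python
import json
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user", "content": "Find five associated places for each of: Paris, Rome."},
]

# 1. Let the model call find_associate_place via tool calling.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
    tools=tools,  # tool schemas from the sketch above
)
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

# 2. Execute my own tool implementation (placeholder helper, returns a nested dict).
raw_result = run_find_associate_place(args["base_place"])

# 3. Paste the raw, unparsed result straight into a user message
#    (instead of a role="tool" message) and ask for the final output.
#    This is where the distortion / hallucination shows up.
messages.append({
    "role": "user",
    "content": "Here are the associated places:\n"
               + json.dumps(raw_result)
               + "\nNow produce the final list of five associated places for each base place.",
})
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(final.choices[0].message.content)
```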

Workaround: If I manually parse the function’s result, extract only the necessary information, and put that into the user message to construct the final result, these problems don’t occur. The output is accurate, with no distortions or hallucinations.
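
The workaround looks roughly like this (same placeholders as above; the field names I extract are illustrative):

```python
# Manually parse the tool result and keep only what the final message needs.
raw_result = run_find_associate_place(args["base_place"])

# Extract just the place names (field names are illustrative).
place_names = [p["name"] for p in raw_result["associated_places"][:5]]

messages.append({
    "role": "user",
    "content": "Associated places for " + args["base_place"] + ": "
               + ", ".join(place_names)
               + ". Now produce the final output.",
})
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(final.choices[0].message.content)
```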

The Question:

• What could be causing this distortion or hallucination when the model’s raw function-calling results are used directly in the final message? And why does manually parsing them and then constructing the final message eliminate the issue?

My guess is that GPT-4o-mini pays more attention to the user’s message than to raw tool output.

I’m curious whether anyone has faced a similar issue, or whether there’s an underlying mechanism in GPT-4o-Mini that might explain this behavior. Any thoughts or recommendations would be greatly appreciated!