Function Calling bugs when used to get multiple results with one prompt or quantities with decimals

When I use the function calling feature, for example to get the stock quantity of an item in my application with the question “What is the stock quantity of item Item1?”, ChatGPT returns the quantity without decimal digits (the quantity is actually a float), whereas if I ask for a customer’s balance it keeps the decimals (it seems to treat stock quantities as integers, while balances are fine). Another problem: if I ask the stock quantity of two items in the same prompt, it sometimes swaps the quantities, i.e. the quantity of Item1 is displayed for Item2 and vice versa.

Current-generation LLMs should not be used to track numeric values like stock levels, prices, or any information that may have a significant effect on your business; use traditional code to handle that kind of data. LLMs are statistically based. They are fantastic at taking human-centric unstructured data and putting it into structured formats, and they even handle numbers reasonably well during that task, but you should not rely on them to manipulate numeric values, as it seems you may be doing.

I forgot to add an important detail: I’m not returning a number, I’m returning a number converted to text, so ChatGPT should take the text as it is, like any other kind of string. I wouldn’t expect ChatGPT to cut off part of my name if I returned it from a function call, so I’d expect the same behavior with strings that contain already-formatted numbers with decimals. For example, when I ask for the balance of a customer, ChatGPT even adds the currency symbol, something I neither asked for nor returned from my function. What if it did the same to names? The function calling feature was implemented to let developers add information that ChatGPT cannot collect on its own (like tomorrow’s weather at a specific location). I’ve seen published examples of functions processing math expressions, so I see no reason why it shouldn’t work with my data, which is much simpler: just a string containing a number like “98.4”, which ChatGPT inserts in the next response as 98, dropping the decimal part. That is wrong.
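For reference, this is roughly what my function returns, as a simplified sketch (the item names and quantities here are made up, and the real data comes from a database):

```python
import json

# Made-up stock data standing in for a real database lookup.
STOCK = {"Item1": 98.4, "Item2": 12.75}

def get_stock_quantity(item_name: str) -> str:
    """Return the stock level already formatted as text, so the model
    should have no reason to re-interpret it as a number."""
    qty = STOCK[item_name]
    return json.dumps({
        "item": item_name,
        # The value is converted to a string before it ever reaches the model.
        "quantity": f"{qty:.2f}",
    })

print(get_stock_quantity("Item1"))
```

The model receives “98.40” as plain text inside the JSON result, yet still replies with 98.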

As I see the issue: it’s wrong for your use case. In everyday parlance, dropping decimals is not considered wrong. Have you set the model up with a system prompt describing its persona as an assistant that always includes decimal places? If not, you will get whatever the generalised default persona is, and that will almost certainly not give decimals every time, just as the median human you meet on the street will say it’s 105 out, not 104.96.
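For instance, a system message along these lines might help; the exact wording here is just an illustration, not a tested recipe:

```python
# Sketch of a chat-completions messages list with an explicit instruction
# to preserve decimals. The wording is an example, not a guaranteed fix.
messages = [
    {
        "role": "system",
        "content": (
            "You are an inventory assistant. When a function returns a "
            "numeric value as text, repeat it exactly as given, including "
            "all decimal digits and any currency symbols."
        ),
    },
    {"role": "user", "content": "What is the stock quantity of item Item1?"},
]

for m in messages:
    print(m["role"], "->", m["content"])
```

Even with this, the model may still paraphrase occasionally; it biases the behaviour rather than enforcing it.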

Hey, can you describe your exact flow? IIUC you’re doing something similar to the example in the documentation (completion with user input → local function call → completion with the function call response)?
If so, wouldn’t dropping the last step and parsing the eventual response yourself solve the issue? (I’m guessing you have the context according to the function called.)

Sorry in advance if I misunderstood the flow/use case!
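To illustrate what I mean by dropping the last step, here is a rough sketch; the function, item names, and quantities are placeholders, and the `function_call` dict stands in for what the first completion would return:

```python
import json

# Placeholder local implementation of a function exposed to the model.
def get_stock_quantity(item_name: str) -> str:
    stock = {"Item1": 98.4, "Item2": 12.75}
    return f"{stock[item_name]:.2f}"

DISPATCH = {"get_stock_quantity": get_stock_quantity}

# Stand-in for the function_call object the first completion returns.
function_call = {
    "name": "get_stock_quantity",
    "arguments": json.dumps({"item_name": "Item1"}),
}

# Dispatch locally and build the final answer in code, skipping the
# second completion, so the model never gets a chance to round the value.
args = json.loads(function_call["arguments"])
result = DISPATCH[function_call["name"]](**args)
print(f"The stock quantity of {args['item_name']} is {result}")
```

The trade-off is that you lose the model’s natural-language phrasing of the final answer, but the numeric value can never be altered.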