gpt-3.5-turbo returning vastly different results

Hello community,

I’m trying to use the chat/completions API to make some seemingly simple calculations, for instance…

what is the volume of a cylinder with a height of 34cm and a radius of 8cm

I’ve noticed that the returned results are always different and vastly inaccurate. I don’t really understand why, as it’s quite a simple equation…
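For reference, the calculation itself is deterministic; a few lines of JavaScript (just a sketch, nothing OpenAI-specific) give the figure the model keeps missing:

```javascript
// Volume of a cylinder: V = π * r² * h
function cylinderVolume(radius, height) {
  return Math.PI * radius ** 2 * height;
}

// radius 8 cm, height 34 cm
console.log(cylinderVolume(8, 34).toFixed(2) + " cm³"); // 6836.11 cm³
```

A language model predicts tokens, so there is no guarantee it lands on this exact number, which is why the answers vary between calls.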

Would there be any reason for the variations in the results of this prompt, and would anyone be able to suggest a model that would categorically return the correct result every time?

Thanks for your help and time.


EDIT: Digging a little, I discovered this…
This, IMO, is a major pitfall in the infrastructure of ChatGPT… is this going to be revised and accounted for at any stage?


These are language models.



As @ruby_coder said, they are language models. Hallucinations are inherent in the way they function.

There are AIs that function exactly as you’d like. You can combine the two to perform some amazing tasks.

You can even play around with the synergy here:


Yes, I understand that, and it makes sense, but would it be an idea to include a mathematical evaluation model alongside the rest of the OpenAI models? It’s weird, because you can ask ChatGPT to write JavaScript code that comprehends and generates rather sophisticated equations, yet it can’t resolve a simple mathematical sum. I find that odd…
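That code-writing strength does suggest a workaround: ask the model for an expression instead of a number, and evaluate it locally. A rough sketch (the `modelOutput` string is an assumed response, not a real API call, and evaluating untrusted model output needs at least a character whitelist):

```javascript
// Hypothetical: the chat model is asked to answer with a bare JavaScript
// expression for the cylinder volume. This string stands in for whatever
// the API might return.
const modelOutput = "Math.PI * 8 ** 2 * 34";

// Whitelist the characters we expect before evaluating model output.
if (!/^[\d+\-*/().\sMathPI]+$/.test(modelOutput)) {
  throw new Error("unexpected characters in model output");
}

const volume = Function(`"use strict"; return (${modelOutput});`)();
console.log(volume.toFixed(2)); // 6836.11
```

The arithmetic is then done by the JS engine, which is deterministic, rather than by the token sampler, which is not.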

ChatGPT-generated code often looks more “elegant” than it actually is. It’s often “DALL·E”-like: looks cool, but is a surreal hallucination.

In fact, much of generative AI code is simply hallucinated crap code, especially if it is longer than 10 or 20 lines.

It’s “quite spammy” code, TBH; but it does fool a lot of novice coders and non-software engineers!



Hmm, I’m not sure I agree with this. I’d say I’m just below senior in terms of JS, and in my experience it resolves some complex calculations very well. For example, just recently it perfectly calculated the ratio variables needed to distribute a canvas texture onto a 3D sprite from a variable-length text input, as well as the scale of the sprite so the text always fills 100% of the sprite’s width, in a very clean and sophisticated way… bearing in mind I did feed it the function of around 80 lines I had already written to generate the canvas and update the sprite texture…

@RonaldGRuckus, the links you’ve shared are really interesting; thanks for dropping those into the convo, I’ll have to look into these!

Well, I am “way beyond senior”, and here in these forums I can easily spot ChatGPT-generated crap code related to the OpenAI API.

Sure, if you ask for a (static) method that has been around for ages, then you can get lucky; but anything more complex that depends on changing and upgrading libs is full of deprecated code.

I code professionally and use Copilot daily to save keystrokes, but it is more entertaining than accurate.

Having said that, I bought a yearly Copilot subscription because of the keystrokes saved, and it’s fun to have a psychotic AI auto-completing nonsense half the time.



To avoid hallucinations from the LLM, you need to use a framework like LangChain to break the input into parts suitable for the LLM and parts suitable for deterministic calculation, and then synthesize the correct response.
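The core idea can be sketched without the library (illustrative only; the function names here are made up for the example and are not the real LangChain API):

```javascript
// Illustrative router: pure arithmetic goes to a deterministic
// calculator tool; everything else would go to the chat model
// (stubbed out here, since no API call is made).
function evaluateExpression(expr) {
  // Allow only digits, basic operators, parentheses, and whitespace.
  if (!/^[\d+\-*/().\s]+$/.test(expr)) throw new Error("not pure arithmetic");
  return Function(`"use strict"; return (${expr});`)();
}

function route(input) {
  if (/^[\d+\-*/().\s]+$/.test(input)) {
    return { tool: "calculator", result: evaluateExpression(input) };
  }
  // A real chain would call the LLM here to parse the question,
  // extract the numbers, and hand them to the calculator tool.
  return { tool: "llm", result: null };
}

console.log(route("3.14159 * 8 * 8 * 34")); // calculator branch
console.log(route("what is the volume of a cylinder?")); // llm branch
```

LangChain packages this pattern (plus the LLM-driven parsing step) so the model never has to do the arithmetic itself.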

Quick intro video:


Oof, I never got into it; I love to code manually just to keep my WPM up 🙂

The function in question I’d also written very recently, but in all fairness it is built around common JavaScript canvas API methods. I do see that when asking for code using niche little libraries such as THREE.js, ChatGPT is trained on revision <r87 (the latest being r150), so yes, there are heaps of deprecated references. Nonetheless, for calculating unknown variable values derived from other parameters, it seems to do a decent job…

Thanks for the link, @curt.kennedy, I’ll have a look at this.
