Is this common? I found that it has trouble doing simple math. For example, it was supposed to do a simple modulo operation, but it gave the wrong answer three times before I finally told it the answer:
When we calculate 2499 % 512, it indeed should not yield 467. The correct calculation is as follows:
**Offset = 2499 % 512 = 475**
So the offset for the virtual address 2499 is 475, not 467.
Still wrong; it is 451.
Also:
address 152 is (4 * 512) + 152 = 2152
Wrong; (4 * 512) + 152 = 2200.
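For reference, here is a quick Python check of both values (the 512-byte page size and the page-number × page-size + offset translation are taken straight from the exchange above):

```python
# Sanity-check the two calculations the model kept getting wrong.
PAGE_SIZE = 512

offset = 2499 % PAGE_SIZE
print(offset)  # 451 -- not 467 or 475

physical = (4 * PAGE_SIZE) + 152
print(physical)  # 2200 -- not 2152
```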
This worries me, because I am trying to convert some code. If it can't even do simple math, can I trust it with that?
Language models are notoriously poor at mathematics, because they’re not actually doing mathematics. They’re predicting plausible next tokens.
You can improve the results by:
- Asking the model to slow down, explain the process it plans to employ, break the problem into discrete components, and check its work
- Giving it 2–5 complete examples of the above (a sketch follows this list)
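For instance, a minimal sketch of such a prompt. The exact wording and the worked examples here are hypothetical, not a tested template:

```python
# A hypothetical few-shot prompt: show the work, then check it.
prompt = """Work step by step: subtract the largest multiple of the
divisor first, state the remainder, then check your work.

Q: 1300 % 512
A: 512 * 2 = 1024. 1300 - 1024 = 276. Check: 1024 + 276 = 1300. Answer: 276.

Q: 700 % 256
A: 256 * 2 = 512. 700 - 512 = 188. Check: 512 + 188 = 700. Answer: 188.

Q: 2499 % 512
A:"""
```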
When converting code, I would break the conversion down into the smallest parts possible, usually having it translate individual functions one at a time. That way I can evaluate the accuracy more easily, as opposed to doing it all at once.
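As a rough sketch of that workflow (assuming the source happens to be Python; the file name is made up), you can split a module into its functions and feed them to the model one by one:

```python
# Split a source file into individual functions so each one can be
# translated, and reviewed, separately.
import ast

source = open("module_to_convert.py").read()  # hypothetical file
tree = ast.parse(source)

for node in ast.walk(tree):
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
        print(f"--- {node.name} ---")
        print(ast.get_source_segment(source, node))  # one chunk per prompt
```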
It is bad at calculations, but it is not necessarily bad at the logic of the calculations, if that makes sense.
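Concretely: the model may fail to evaluate 2499 % 512 itself, yet reliably produce the code that does, so you can have it write the calculation and let the interpreter do the arithmetic:

```python
# The logic is what the model tends to get right; run it to get the numbers.
def translate(virtual_address, page_size=512):
    page_number = virtual_address // page_size
    offset = virtual_address % page_size
    return page_number, offset

print(translate(2499))  # (4, 451)
```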