Self-correcting code from multiple wrong inputs:
I noticed an interesting behavior that is more pronounced with Codex.
It appears to be a self-correcting behavior in multi-head-attention-based models.
As observed below, the first code generation (output 1) is wrong, but the model corrects itself in output 2.
When I ran a test on LeetCode and other problem databases, it happened a few times. It still needs to improve a lot, since I only got roughly a 45% success rate at solving problems.
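The evaluation loop I used can be sketched roughly as follows. This is a minimal illustration, not the actual harness: `fake_model` is a hypothetical stand-in for the real model (e.g. Codex) that returns a buggy solution on the first attempt and a corrected one on the second, mimicking the output-1 / output-2 behavior described above.

```python
def fake_model(problem: str, attempt: int) -> str:
    # Hypothetical stand-in for a code model: attempt 0 is buggy,
    # attempt 1 is the self-corrected version.
    if attempt == 0:
        return "def solve(x):\n    return x + 2"   # wrong: off by one
    return "def solve(x):\n    return x + 1"       # corrected

def passes_tests(code: str, tests) -> bool:
    # Execute the generated solution and check it against (input, expected) pairs.
    ns = {}
    exec(code, ns)
    return all(ns["solve"](inp) == out for inp, out in tests)

def solve_with_retries(problem, tests, max_attempts=2):
    # Ask the model repeatedly until a generation passes, up to a budget.
    for attempt in range(max_attempts):
        code = fake_model(problem, attempt)
        if passes_tests(code, tests):
            return attempt + 1          # which generation succeeded
    return None                         # no success within the budget

tests = [(1, 2), (5, 6)]
print(solve_with_retries("add one", tests))  # → 2 (corrected on output 2)
```

Counting, per problem, which attempt (if any) passed its tests is what gives a success rate like the ~45% figure above.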