A very good point brought up by Wojciech Zaremba during an interview with Lex Fridman at 1:41:22 is that CODEX [1] can self-evaluate while it codes. So there is room for automating how we get better results.
For example, we could use any algorithm already applied to OpenAI Gym [2] problems to automate this loop of self-evaluation and learning and make CODEX better.
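A minimal sketch of that idea, using only the standard library: a Gym-style episode loop where the "reward" is the model's own self-evaluation, here the fraction of unit tests a generated candidate passes. `generate_candidate` is a hypothetical stand-in for a call to a code model such as Codex; a real setup would feed the reward back into the generator rather than just keep the best sample.

```python
import random

def generate_candidate(rng):
    # Placeholder for a code model: sample one of a few hand-written
    # candidate implementations of abs(), some of them buggy.
    candidates = [
        "def my_abs(x): return x",                     # buggy for x < 0
        "def my_abs(x): return -x",                    # buggy for x > 0
        "def my_abs(x): return x if x >= 0 else -x",   # correct
    ]
    return rng.choice(candidates)

def self_evaluate(source):
    # Self-evaluation signal: run the candidate against unit tests
    # and return the fraction that pass (this plays the role of the
    # Gym reward).
    namespace = {}
    exec(source, namespace)
    f = namespace["my_abs"]
    tests = [(-3, 3), (0, 0), (5, 5)]
    passed = sum(1 for x, want in tests if f(x) == want)
    return passed / len(tests)

def search(episodes=50, seed=0):
    # Gym-style loop: generate, evaluate, keep the best-scoring sample.
    rng = random.Random(seed)
    best_source, best_reward = None, -1.0
    for _ in range(episodes):
        source = generate_candidate(rng)
        reward = self_evaluate(source)
        if reward > best_reward:
            best_source, best_reward = source, reward
    return best_source, best_reward

best, reward = search()
print(reward)
```

The point of the sketch is only the shape of the loop: any policy-search algorithm used on Gym environments could replace the random sampler, as long as "reward" comes from the code evaluating itself.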
An interesting point is that Transformer models become even more powerful when embedded in a larger framework, for example one that uses Reference Frames to organize their knowledge. Numenta [3] has a series of videos about bio-inspired architectures for intelligence. Maybe CODEX can be the first model to come closer to true intelligence: it could self-evaluate and evolve on its own while building dynamic reference frames that relate its code to other information, all based on Transformers, which are mathematically very flexible.
As shown by Aditya Prakash et al. [4], Transformers can be used for multi-modal data fusion. This would allow CODEX to develop a deeper understanding as it evolves and to efficiently produce extremely reliable code with zero-shot (or close to zero-shot) prompting; as it builds a project it would get better and better, no matter the field.
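The core trick in [4] is simple to sketch: tokens from different modalities are concatenated into one sequence (each tagged with a modality embedding) and mixed with self-attention, so every token can attend across modalities. Below is a toy pure-Python single-head attention over two invented token sets; real systems use learned Q/K/V projections and many stacked layers, so this is only an illustration of the fusion step, not the paper's architecture.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(tokens):
    # Minimal single-head attention with identity Q/K/V projections:
    # each output token is a score-weighted average of all tokens.
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = softmax([dot(q, k) / math.sqrt(d) for k in tokens])
        out.append([sum(w * v[i] for w, v in zip(scores, tokens))
                    for i in range(d)])
    return out

# Two made-up modalities; the last two dims are a one-hot
# "modality embedding" (1,0 = image-like, 0,1 = text-like).
image_tokens = [[0.9, 0.1, 1.0, 0.0], [0.4, 0.6, 1.0, 0.0]]
text_tokens = [[0.2, 0.8, 0.0, 1.0]]

# Fusion = attention over the concatenated sequence, so image tokens
# can attend to text tokens and vice versa.
fused = self_attention(image_tokens + text_tokens)
print(len(fused), len(fused[0]))  # 3 4
```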
Notes: Sorry for the lack of visual examples, I'm still figuring out Jupyter.
References:
[1] Wojciech Zaremba talking about OpenAI Codex with Lex Fridman [1:31:42]
[2] OpenAI Gym
[3] Numenta
[4] Aditya Prakash, et al. Multi-Modal Fusion Transformer for End-to-End Autonomous Driving