Hey all,
I’ve been messing around with Codex for a while, and some of the biggest use cases that have come to mind so far have been:
- Code Review
- Generating Test Cases
Has anyone here had success with either of these use cases? I know that OpenAI provides an endpoint to perform a semantic search over a set of documents, but I’m not sure how that stacks up against Codex.
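For context, I’ve been assuming that the semantic-search side boils down to: embed each code snippet once, embed the query, and rank snippets by cosine similarity. A minimal sketch of just the ranking step (the toy 3-d vectors stand in for real embedding vectors, and the file names are made up):

```python
from math import sqrt


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def rank_snippets(query_vec, snippet_vecs):
    """Return snippet ids sorted by similarity to the query, best first."""
    scored = [(cosine_similarity(query_vec, vec), sid)
              for sid, vec in snippet_vecs.items()]
    return [sid for _, sid in sorted(scored, reverse=True)]


# Toy 3-d vectors standing in for real embeddings of each file.
snippets = {
    "auth.py": [0.9, 0.1, 0.0],
    "db.py":   [0.1, 0.8, 0.3],
    "ui.py":   [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # e.g. an embedded query like "login validation"
print(rank_snippets(query, snippets))  # auth.py should rank first
```

In a real setup the vectors would come from an embeddings API and have hundreds of dimensions, but the retrieval logic is the same.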
When performing code review, a proposed diff should be reviewed in the context of the issue it is trying to tackle, as well as the pros and cons it brings to the rest of the codebase. For example, a change that produces a 200ms query speedup could also introduce a security bug, which could be a major concern depending on the context of the application.
In a similar way, a proposed diff could introduce a new feature or alter an existing one, so generating test cases that adequately cover the new diff requires contextual knowledge of the codebase as a whole.
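For the test-generation case, what’s worked best for me so far is stuffing that context into the prompt myself before handing it to the model. A rough sketch of how I assemble such a prompt (the function and the section headers are just my own convention, nothing Codex-specific):

```python
def build_test_prompt(diff, issue_summary, related_code):
    """Assemble a plain-text prompt asking a model for unit tests.

    The "##" section markers are my own convention for separating
    context; the model just sees one big string.
    """
    parts = [
        "## Issue",
        issue_summary,
        "## Related code",
        related_code,
        "## Proposed diff",
        diff,
        "## Task",
        "Write unit tests that cover the behaviour changed by the diff above.",
    ]
    return "\n".join(parts)


prompt = build_test_prompt(
    diff="- return a + b\n+ return a + b + c",
    issue_summary="add() should accept an optional third operand",
    related_code="def add(a, b, c=0): ...",
)
print(prompt)
```

The interesting (and unsolved, for me) part is choosing *which* related code to include, which is where the semantic-search question above comes back in.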
My experience with performing code review with Codex has been that it works, but it could be greatly improved by providing context, and maybe some way to let the model comment on a selected block of code, similar to GitHub PR reviews or comments on Google Docs.
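On the “comment on a selected block” idea, one workaround I’ve tried is numbering the lines myself and marking the selection in the prompt, then asking the model to key its comments to those numbers. The marker format below is entirely my own invention:

```python
def format_selection(lines, start, end):
    """Prefix source lines with 1-based numbers, marking a selected range.

    `start` and `end` are inclusive 1-based line numbers; ">>" flags the
    lines the review comment should target. This is a prompting
    convention of my own, not a Codex feature.
    """
    out = []
    for i, line in enumerate(lines, start=1):
        marker = ">>" if start <= i <= end else "  "
        out.append(f"{marker} {i:3d} | {line}")
    return "\n".join(out)


source = [
    "def query(db):",
    "    rows = db.run(SQL)",
    "    return rows",
]
print(format_selection(source, 2, 2))
```

It’s crude, but it at least gives the model a stable way to refer back to specific lines, the way a GitHub PR comment would.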
If anyone here has also played around with these cases, I’d love to hear about your experience, what worked, what didn’t, etc.