When you find yourself in that rare air of asking a question on coding forums that seems either never to have been asked, or to have no recommended answer, you know you are in a special probability space. I too would like to do exactly what you are describing, with the idea that the OpenAI interface I'm building (for me) will have a good handle on my current code base and where I'm looking to take it. It is a task perfectly suited for one or two example tutorials. Hello super-devs, please consider throwing us an example of fine-tuning Codex with a proprietary code base. Please, pretty-please, with a cherry on top!
I don’t think fine-tuning is the answer for this. I have worked on an AI that builds its own source code and expands its own features. My take is that to accomplish something as complex as this, you would need the AI to:
read through all the source code
generate an understanding of all the classes and methods, and of the way they are connected and depend on each other
have a general understanding of what the app is about and how it works
then you could make requests for new features or changes. The challenge is how to select and pass on the required information collected in the previous steps. It would have to be a chain of multiple prompts, allowing the AI to expand its research on certain parts when needed, draw conclusions, and proceed with its main task.
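The second step above, building a map of classes, methods, and their dependencies, can be sketched with nothing but the standard library. This is just an illustration of the idea, not production code: it uses Python's `ast` module to walk one module's source and record which names each method calls, the kind of structural summary you would later select from when assembling prompts. The `summarize_module` function and the `Cart` sample are my own invented names for the sketch.

```python
import ast

def summarize_module(source: str) -> dict:
    """Build a rough map of classes, their methods, and the names each
    method calls -- the kind of structural summary an AI assistant would
    need before reasoning about a codebase."""
    tree = ast.parse(source)
    summary = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            methods = {}
            for item in node.body:
                if isinstance(item, ast.FunctionDef):
                    # Collect every call made inside the method body,
                    # whether it's a plain name (charge) or an attribute
                    # access (self.total).
                    calls = sorted({
                        n.func.attr if isinstance(n.func, ast.Attribute)
                        else getattr(n.func, "id", "?")
                        for n in ast.walk(item)
                        if isinstance(n, ast.Call)
                    })
                    methods[item.name] = calls
            summary[node.name] = methods
    return summary

sample = """
class Cart:
    def total(self):
        return sum(i.price for i in self.items)

    def checkout(self):
        t = self.total()
        charge(t)
"""

print(summarize_module(sample))
# {'Cart': {'total': ['sum'], 'checkout': ['charge', 'total']}}
```

A real version would run this over every file in the repo and also track imports and class inheritance, then feed only the relevant slice of the summary into each prompt in the chain, rather than the whole codebase.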
I code with Copilot as a VSC extension daily, and Copilot does suggest auto-completions based on my private repo code. Not sure about PHP, as I’m a Ruby person; I gave up PHP coding years ago.
However, if you want to check code for syntax errors, it is easier (at this time) to copy-and-paste modules, methods, and subroutines into ChatGPT, which is pretty good at finding syntax errors and the like. I have not yet tried this with the OpenAI API; I just copy-and-paste into ChatGPT (modules and methods, not entire large pages of code) and check for syntax errors that way.
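For pure syntax errors, at least in Python, you can do a cheap local pre-check before pasting anything into ChatGPT, so you only spend the model's attention on snippets that already parse. A minimal sketch, assuming the snippet is Python (the `find_syntax_error` helper is my own name for illustration; the exact error message varies by interpreter version):

```python
import ast

def find_syntax_error(source: str):
    """Return (line_number, message) for the first syntax error found,
    or None if the snippet parses cleanly -- a quick local pre-check
    before sending code off for a deeper AI review."""
    try:
        ast.parse(source)
        return None
    except SyntaxError as exc:
        return (exc.lineno, exc.msg)

broken = "def add(a, b)\n    return a + b\n"  # missing colon on line 1

print(find_syntax_error(broken))   # e.g. (1, "expected ':'")
print(find_syntax_error("x = 1"))  # None -- parses fine
```

Other languages have equivalent lint-only modes (`php -l`, `ruby -c`), so the same pre-filter idea applies; ChatGPT then stays useful for the errors a parser can't catch, like logic mistakes.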