There’s been a lot of interest, but not many answers from what I’ve seen. I believe the answer, currently, is no.
Can I fine-tune a Codex model? I’m trying to use davinci-codex as the base model for a fine-tuning job, but I get this error: “Error: Invalid base model: davinci_codex (model must be one of ada, babbage, curie, davinci) (HTTP status code: 400)”
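Not a workaround, just a sketch of what the error text is telling you: the fine-tuning endpoint only accepts the four GPT-3 base models, so a client-side check like this (hypothetical helper, names are made up) fails fast before you submit a job:

```python
# Allowed fine-tuning base models, per the API error message above.
# Codex variants such as davinci-codex are not in this set.
ALLOWED_BASE_MODELS = {"ada", "babbage", "curie", "davinci"}

def validate_base_model(model: str) -> str:
    """Raise locally instead of waiting for the API's HTTP 400."""
    if model not in ALLOWED_BASE_MODELS:
        raise ValueError(
            f"Invalid base model: {model} "
            f"(model must be one of {', '.join(sorted(ALLOWED_BASE_MODELS))})"
        )
    return model

validate_base_model("davinci")          # accepted
# validate_base_model("davinci-codex")  # would raise ValueError
```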
I’m ultimately trying to create a model that will write code for a robot, so I would also appreciate any help with increasing Codex’s knowledge of this GitHub repo: GitHub - FIRST-Tech-Challenge/FtcRobotController, as well as its forks. Thanks.
Is it possible to fine-tune either of the codex models? I’d love to play with some block-based coding datasets. The stock davinci model seems to know a bit about the structure/internals of blockly, but doesn’t seem to have many samples of blocks and what they do in various contexts.
I could try a really long prompt with them, but I’ve had such good outcomes with fine-tuning that I’d love to try that as well.
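For the long-prompt route, one common approach is few-shot prompting: pack a handful of block/behavior pairs into the context before the block you want described. A minimal sketch (the example pairs and the prompt format are placeholders, not drawn from any real Blockly dataset):

```python
# Hypothetical few-shot prompt builder for block-based coding examples.
def build_few_shot_prompt(examples, query):
    """Concatenate (block, description) pairs, then the new block to describe."""
    parts = []
    for block, description in examples:
        parts.append(f"Block: {block}\nDescription: {description}\n")
    parts.append(f"Block: {query}\nDescription:")
    return "\n".join(parts)

# Placeholder examples; real ones would come from your own dataset.
examples = [
    ("controls_repeat_ext", "Repeats the enclosed blocks a given number of times."),
    ("logic_compare", "Compares two values and returns true or false."),
]
prompt = build_few_shot_prompt(examples, "math_arithmetic")
```

The model then continues the text after the final “Description:”, imitating the pattern established by the examples.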
This last one might be worth looking into…
We can fine-tune language models like BERT and GPT-3. Can I fine-tune the GitHub Copilot model? I have already looked at https://copilot.github.com/ but can’t find the details. I would really appreciate it if someone has fine-tuned GitHub Copilot.
The Codex model that powers the Copilot product is not open-sourced. However, there are a few similar models available on the Hugging Face Hub, such as InCoder or CodeGen.
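As a starting point, either of those can be loaded with the Hugging Face `transformers` library. A minimal sketch (the checkpoint name and generation settings are illustrative defaults, not a recommendation):

```python
# Sketch: code completion with an open Codex-like model from the Hub.
def complete(prompt: str,
             model_id: str = "Salesforce/codegen-350M-mono",
             max_new_tokens: int = 64) -> str:
    # Deferred import so the function can be defined without the
    # dependency installed; requires `pip install transformers torch`.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example (downloads the checkpoint on first call):
# print(complete("def fibonacci(n):"))
```

These checkpoints can also be fine-tuned on your own code corpus with the usual `transformers` training tooling, which is the closest open substitute for fine-tuning Copilot itself.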
Welcome to the forum!