Codex Prompt Engineering & Finetuning?

Question:
Is there any workaround to do some sort of fine-tuning with Codex?

More context:
Let’s assume my use case is translating Python to JavaScript and that such translation isn’t supported well yet by the current version of Codex. To support this use case better, I start the prompt with a few examples of Python functions converted to JavaScript. But the prompt input has limited space, which makes every call expensive and doesn’t let me feed in many examples.
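For anyone else reading, here's a minimal sketch of that few-shot approach using the legacy `openai` Python SDK's `Completion.create` call. The model name `code-davinci-002` and the prompt layout are assumptions; adjust them to whatever you actually have access to:

```python
import os

import openai  # legacy openai-python SDK (pre-1.0 interface)

openai.api_key = os.environ["OPENAI_API_KEY"]

# A few-shot prompt: each example pairs a Python function with its
# JavaScript translation, then leaves the final translation blank
# for the model to complete.
FEW_SHOT_PROMPT = """\
# Python:
def add(a, b):
    return a + b

// JavaScript:
function add(a, b) {
    return a + b;
}

# Python:
def greet(name):
    return f"Hello, {name}!"

// JavaScript:
"""

response = openai.Completion.create(
    model="code-davinci-002",  # assumed model name; swap in your own
    prompt=FEW_SHOT_PROMPT,
    max_tokens=150,
    temperature=0,
    stop=["# Python:"],  # stop before the model invents a new example
)
print(response["choices"][0]["text"])
```

Every example in `FEW_SHOT_PROMPT` is billed on every call, which is exactly the cost problem described above.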

Because OpenAI has not released a Codex fine-tuning capability, your workaround for now would be to fine-tune the text models instead. The latest generations of text models also have substantial programming code in their training corpora and can produce code. Maybe try it out and let us know how it goes?
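If you go that route, the legacy fine-tunes API takes a JSONL file of prompt/completion pairs. A rough sketch of preparing translation data is below; the separator, stop token, and file name are just illustrative choices, not requirements:

```python
import json

# Each training example pairs a Python function (prompt) with its
# JavaScript translation (completion). A fixed separator at the end of
# the prompt and a stop token at the end of the completion help the
# fine-tuned model know where to stop.
examples = [
    {
        "prompt": "def add(a, b):\n    return a + b\n\n###\n\n",
        "completion": " function add(a, b) {\n    return a + b;\n}\nEND",
    },
    # ... many more pairs; fine-tuning pays off with a large dataset
]

with open("py_to_js.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Then start a fine-tune on a base text model (not Codex) with the
# legacy CLI, e.g.:
#   openai api fine_tunes.create -t py_to_js.jsonl -m davinci
```

Unlike the few-shot prompt, the examples live in the model's weights after training, so each call only needs the function you want translated.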
