Fine-tune Davinci to write a programming language

A use case I’m trying to achieve is translating natural language into SAIL program code. People should be able to write a command in plain English and get code written in SAIL (a programming language similar to C or C++). I tried Codex, but it doesn’t seem to write SAIL code naturally, and there is no support for fine-tuning the Codex series. ChatGPT was not accurate enough for my purposes. I am planning to fine-tune the Davinci model, and I would like to know how large the dataset should be and which hyperparameters to use. Could you please let me know whether fine-tuning the Davinci model will accomplish my needs?
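For reference, legacy Davinci fine-tunes expect a JSONL file of prompt/completion pairs, with a fixed separator at the end of each prompt and a stop sequence at the end of each completion so the model learns where the code ends. Below is a minimal sketch of preparing such a file; the separator and stop strings follow OpenAI's common recommendations, and the SAIL snippet is an illustrative placeholder, not real SAIL.

```python
import json

# Conventions commonly recommended for legacy prompt/completion fine-tunes:
# a fixed suffix marking the end of the prompt, and a stop sequence
# appended to every completion.
PROMPT_SUFFIX = "\n\n###\n\n"
STOP = " END"

def to_records(pairs):
    """Turn (description, code) pairs into fine-tuning records."""
    return [
        {
            "prompt": description.strip() + PROMPT_SUFFIX,
            # Leading space before the completion helps tokenization.
            "completion": " " + code.strip() + STOP,
        }
        for description, code in pairs
    ]

# Placeholder example pair; real data would be English -> SAIL.
pairs = [
    ("Declare an integer variable named total and initialize it to 0",
     "local int total = 0;"),
]

with open("sail_train.jsonl", "w") as f:
    for rec in to_records(pairs):
        f.write(json.dumps(rec) + "\n")
```

At inference time you would append the same `PROMPT_SUFFIX` to the user's request and pass `" END"` as the stop sequence, so generations terminate cleanly.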


I would like to share an update on this topic. I recently fine-tuned the Davinci model and achieved a satisfactory level of accuracy. However, when analyzing the results, I observed that the validation score did not improve significantly, whereas the training score curve looked promising. What concerns me most is sequence accuracy, since generating correct code sequences is crucial for this use case. I would therefore appreciate advice on how to improve sequence accuracy. To fine-tune the model, I used 300-400 prompt/completion pairs, each consisting of a code description and the corresponding code.
Thank you.
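If it helps others comparing runs: a simple way to track sequence accuracy yourself is exact-match scoring of generated code against reference code, optionally normalizing whitespace so formatting differences don't count as errors. This is a minimal sketch with made-up SAIL-like strings, not output from an actual model.

```python
def normalize(code: str) -> str:
    """Collapse whitespace so formatting differences don't count as mismatches."""
    return " ".join(code.split())

def sequence_accuracy(references, predictions):
    """Fraction of generated sequences that exactly match their reference."""
    assert len(references) == len(predictions)
    matches = sum(
        normalize(ref) == normalize(pred)
        for ref, pred in zip(references, predictions)
    )
    return matches / len(references)

# Illustrative placeholder data (not real SAIL or model output).
refs = ["local int x = 1;", "local int y = 2;"]
preds = ["local int  x = 1;", "local int y = 3;"]
print(sequence_accuracy(refs, preds))  # 0.5
```

Exact match is a strict metric; if SAIL tolerates semantically equivalent variants, a token-level or execution-based comparison would be more forgiving.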

Following this topic, as I am looking into this too.

For my use case, I am debating whether to continue using GPT-4 or to fine-tune. I have been using GPT-4 and it is fairly accurate, but it can definitely be improved. It appears ChatGPT doesn't allow fine-tuning at the moment, so I am very curious to learn whether fine-tuning would work better than GPT-4 and, if so, how many examples would be needed.