Hello, I’m looking to build a chatbot to help developers on a gaming platform that lets you create custom servers within it, using the Lua language to develop those servers.
Will fine-tuning the AI on the documentation let it give accurate answers to my customers’ questions? I want to build a chatbot for this, but I need to know whether fine-tuning allows the AI to generalize beyond the prompts it is shown during training.
For example, if someone sends the AI code with a problem, will it know how to correct it even if it was never instructed to correct that specific code? Will it use the knowledge of the documentation it was given during the fine-tuning process?
Or would it be better to provide the documentation through the Assistants API’s Retrieval tool? That approach seemed a bit costly to me, so I’d like to know which route is best both in terms of cost-benefit and in terms of quality.
Hey there and welcome to the community!
So, this is a great project to work on, but it is going to be a bit hefty.
Fine-tuning can indeed work well if the focus is on code, but what it really excels at is formatting and enforcing specific structures of conversations (or instructs). That said, Code Interpreter with a GPT-4 model might still work better here.
That being said, I think answering some of your other questions might help elaborate what needs to be done.
No, it does not. What it allows is for you to create your own prompting structure if needed.
That is an extremely difficult question to answer with certainty. I will say that the more Lua code you feed it, the better it might get at correcting Lua code in a single shot. Keep in mind that stuffing in documentation will not help as much as example code itself. Now, you could go for more of an Auto-GPT approach, or something like wolverine, where it basically loops on itself until the code works, but this can plow through credits quickly.
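The loop-until-it-works idea can be sketched in a few lines of Python. Everything here is illustrative: `run_code` uses Python’s own `compile()` as a stand-in for actually executing a Lua snippet, and `fix_code` is a hypothetical placeholder for a chat-completion API call that would send the error message back to the model.

```python
MAX_ATTEMPTS = 3  # cap the loop so it can't burn credits forever

def run_code(source: str):
    """Stand-in for running the user's snippet; returns an error string or None.

    A real version would execute the Lua code (e.g. via a sandboxed
    interpreter) and capture its error output.
    """
    try:
        compile(source, "<snippet>", "exec")
        return None
    except SyntaxError as exc:
        return str(exc)

def fix_code(source: str, error: str) -> str:
    """Hypothetical model call: ask the model to repair `source` given `error`.

    In practice this would be a chat-completion request with the broken
    code and the error in the prompt; here a toy repair closes a paren.
    """
    return source + ")"

def repair_loop(source: str) -> str:
    """Wolverine-style loop: run, feed the error back, retry until clean."""
    for _ in range(MAX_ATTEMPTS):
        error = run_code(source)
        if error is None:
            return source  # code runs cleanly; stop here
        source = fix_code(source, error)
    raise RuntimeError("gave up after MAX_ATTEMPTS repair attempts")
```

Each pass through the loop is another model call, which is exactly why this approach eats credits: the cost scales with the number of repair attempts, not the number of user questions.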
Retrieval would be ideal for helping human Lua programmers look up documentation more easily, but not for the AI.
When you fine-tune models, it’s good to think about it from a “show don’t tell” philosophy. Show the model what you want from it, using training data as examples of how you want it to act. GPT-4 at least already knows Lua, so the issue would be reducing the amount of calls necessary to generate good code. I use GPT-4 to help me with conky scripting.
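To make the “show don’t tell” point concrete, here is what a single training example might look like in OpenAI’s chat fine-tuning format (one JSON object per line of a `.jsonl` file). The system prompt, the Lua snippet, and the assistant’s answer are all made up for illustration; a usable dataset needs many such examples demonstrating the behavior you want.

```python
import json

# One "show don't tell" training example: instead of describing how the
# bot should act, demonstrate it fixing a common Lua mistake (0-based
# iteration over a 1-based array).
example = {
    "messages": [
        {"role": "system",
         "content": "You fix Lua scripts for custom game servers."},
        {"role": "user",
         "content": "for i = 0, #players do players[i]:kick() end -- why does this error?"},
        {"role": "assistant",
         "content": "Lua arrays are 1-based, so players[0] is nil. Use `for i = 1, #players do` instead."},
    ]
}

# Each training example becomes one line of the .jsonl upload.
line = json.dumps(example)
```

Collecting a few hundred lines like this, each pairing broken platform-specific Lua with a good correction, is the kind of training data that shapes how the model responds, far more than pasting raw documentation pages into the file.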