With OpenAI, I don't fine-tune; I rely on prompt engineering and embedding-based retrieval.
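A minimal sketch of what that retrieval step can look like: rank documents by cosine similarity to the query embedding, then paste the top hits into the prompt. The `embed()` here is a toy bag-of-letters stand-in purely for illustration (a real setup would call an embeddings API), and all names are made up:

```python
import math

def embed(text):
    # Toy stand-in: bag-of-letters vector. A real pipeline would call an
    # embeddings API (e.g. OpenAI's embeddings endpoint) instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Return the k documents most similar to the query.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "How to reset a password",
    "Billing and invoices overview",
    "Password strength requirements",
]
context = retrieve("password reset help", docs)
prompt = (
    "Answer using only this context:\n"
    + "\n".join(context)
    + "\n\nQ: password reset help\nA:"
)
```

The point is that the model never needs fine-tuning for this: the relevant facts are fetched at query time and supplied in the prompt.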
For the custom query language I'm working with (which wasn't on the web in 2021), however, the models aren't smart enough to pick up all the nuances of the language from prompting alone, so I have to fine-tune. When I tried fine-tuning davinci it didn't perform very well, so I'm now using a fine-tuned MPT-7B model for this special case.