Embeddings and Fine-tuning on one model with Llama-index

Hi,

I use Llama-index to build a vector index over unstructured documents and query them with the base model “text-davinci-003” (see part of the code below).

Is there a way to additionally train this model for specific prompts, using a fine-tuning approach?
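
Roughly, what I have in mind is something like the sketch below, based on my reading of the OpenAI fine-tuning docs (the JSONL file name is just a placeholder):

import openai

# upload a JSONL file of {"prompt": ..., "completion": ...} pairs
# ("my_prompts.jsonl" is just a placeholder name)
training_file = openai.File.create(file=open("my_prompts.jsonl", "rb"), purpose="fine-tune")

# start a fine-tuning job -- but "model" seems to accept only OpenAI's
# own fine-tunable base models (e.g. "davinci"), not my indexed setup
openai.FineTune.create(training_file=training_file.id, model="davinci")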

I went through the documentation for both OpenAI and Llama-index.
It looks like OpenAI’s fine-tuning only works on models deployed in an OpenAI account, so basically only on their own base models.
And in the Llama-index documentation I’ve seen how they fine-tune with “BAAI/bge-small-en” as the base embedding model, but I couldn’t find how to use my own local pre-trained model instead.
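
For reference, this is the pattern from the Llama-index embedding fine-tuning example as I understand it (train_dataset and the output path here are placeholders); it fine-tunes the embedding model, and I don’t see where a local pre-trained model of my own would plug in:

from llama_index.finetuning import SentenceTransformersFinetuneEngine

# train_dataset would be question/context pairs generated from my documents
finetune_engine = SentenceTransformersFinetuneEngine(
    train_dataset,
    model_id="BAAI/bge-small-en",               # the base model used in the docs
    model_output_path="finetuned_embeddings",   # placeholder output path
)
finetune_engine.finetune()
embed_model = finetune_engine.get_finetuned_model()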

from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex, set_global_service_context
from llama_index.llms import OpenAI

# LLM used to synthesize answers over the indexed documents
llm = OpenAI(model="text-davinci-003", temperature=0.6)

service_context = ServiceContext.from_defaults(llm=llm)
set_global_service_context(service_context)

# load the files from the input directory and build a vector index
documents = SimpleDirectoryReader(input_dir=input_dir).load_data()
index = VectorStoreIndex.from_documents(documents)
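
For context, this is roughly how the index gets queried afterwards (the question string is just an example):

# ask a question against the indexed documents
query_engine = index.as_query_engine()
response = query_engine.query("What are the main topics covered in these documents?")
print(response)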