Fine-tuned Model used with a private GPT

My target use case is to create a personal assistant that responds in its owner's voice and, ideally, picks up not only the owner's tone but also similar knowledge. My thinking was to combine a custom GPT with a fine-tuned model. The fine-tuned model would contain various input/output examples of how the owner communicates in a couple of languages, and so on. The bigger picture might be to include specialized knowledge as well.
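For context, here is a minimal sketch of what those input/output examples could look like in OpenAI's chat fine-tuning JSONL format. The system prompt and the sample exchanges are purely illustrative, not real training data:

```python
import json

# Hypothetical training examples capturing the owner's voice.
# The system prompt and message contents are made up for illustration.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a personal assistant that writes in the owner's voice."},
            {"role": "user", "content": "Can you reply to this email for me?"},
            {"role": "assistant", "content": "Sure thing, keeping it short and friendly, just like I would."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a personal assistant that writes in the owner's voice."},
            {"role": "user", "content": "¿Puedes confirmar la reunión de mañana?"},
            {"role": "assistant", "content": "¡Claro! Confirmado para mañana, nos vemos entonces."},
        ]
    },
]

# Fine-tuning expects JSONL: one JSON object per line.
jsonl = "\n".join(json.dumps(e, ensure_ascii=False) for e in examples)
print(jsonl)
```

Each line becomes one training example; including examples in each language the owner uses is how the multilingual tone would be captured.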

My question: is this possible without coding? Or is the plugin route the only way to achieve this? A knowledge base seems to approximate this use case, but if I go by OpenAI's guidance on the difference between the two tools, fine-tuning is the better fit for capturing tone.

I may be crazy and going about it in a less efficient way, but then again we are all crazy to be working with these new tools. Thank you for your thoughts!