This is the way.
It would cost the same to use gpt-4-32k as it would to use a fine-tuned davinci model.
There is so much you can do to improve model responses with in-context few-shot examples and by using embeddings to pull key information into the context that I would start there well before considering fine-tuning.
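For anyone who hasn't tried it, here's a rough sketch of what I mean, combining both techniques in one call. It assumes the older `openai` Python SDK (0.x), an `OPENAI_API_KEY` in the environment, and a made-up handful of documents; the model names and docs are just placeholders, not a recommendation.

```python
import numpy as np
import openai  # assumes the 0.x openai SDK and OPENAI_API_KEY set in the environment

# A few documents we want the model to be able to draw on (placeholder content).
DOCS = [
    "Refunds are available within 30 days of purchase with a receipt.",
    "Premium support is included with the enterprise plan only.",
    "Shipping to Canada takes 5-7 business days.",
]

def embed(texts):
    """Embed a list of strings with text-embedding-ada-002."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

doc_vectors = embed(DOCS)

def answer(question):
    # Retrieve the most relevant document by cosine similarity.
    q_vec = embed([question])[0]
    sims = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = DOCS[int(np.argmax(sims))]

    # Retrieved context plus in-context few-shot examples go straight into the prompt.
    messages = [
        {"role": "system", "content": f"Answer using this context: {context}"},
        # Few-shot example showing the tone/format we want.
        {"role": "user", "content": "Do you ship to Mexico?"},
        {"role": "assistant", "content": "I don't have that information on hand, sorry!"},
        {"role": "user", "content": question},
    ]
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return resp["choices"][0]["message"]["content"]

print(answer("How long do I have to return something?"))
```

No training run, no waiting, and you can swap the documents or the examples any time the requirements change.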