Fine-tuning vs. Embeddings

My understanding, and please keep in mind it is based only on my reading over the past couple of weeks and I am not an expert, is that with fine-tuning you are trying to overpower the knowledge the model already has, which is enormous. For the cities-on-Mars example, I guess you would need to focus heavily on that topic and fine-tune with thousands of examples; since that is knowledge the model did not have before, it would probably work in this case. It would be difficult in most other cases, because the model's existing knowledge about almost any topic is huge.

I would also guess that fine-tuning can work for shaping how the model forms a reply, for example fine-tuning it to write replies in the manner of a basketball commentator if that is your segment. Not the content, but the sentence structure and phrasing.
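
To make that more concrete, here is a rough sketch of what a style fine-tuning dataset might look like. I am assuming the JSONL chat format used by OpenAI's fine-tuning API; the questions, answers, and file name are made up, and in practice you would need far more examples than this.

```python
import json

# Made-up examples: a plain question paired with the same answer rewritten
# in the voice of a basketball commentator. The content stays the same;
# only the phrasing and sentence style differ.
style_examples = [
    {
        "question": "What does the new update include?",
        "answer": "And here it comes, folks, a buzzer-beater of an update: "
                  "dark mode, faster sync, and a brand-new dashboard!",
    },
    {
        "question": "How do I reset my password?",
        "answer": "He drives to the settings page, clicks 'Forgot password', "
                  "checks his email, and scores with a brand-new password!",
    },
]

# Write the examples as JSONL, one training example per line, in the chat
# format that OpenAI's fine-tuning API expects.
with open("style_finetune.jsonl", "w") as f:
    for ex in style_examples:
        record = {
            "messages": [
                {"role": "system", "content": "Answer like an excited basketball commentator."},
                {"role": "user", "content": ex["question"]},
                {"role": "assistant", "content": ex["answer"]},
            ]
        }
        f.write(json.dumps(record) + "\n")
```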

For everything else, I believe embeddings are the way to go.
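
By embeddings I mean the retrieval approach: embed your documents once, embed the question at query time, pull out the closest chunks, and paste them into the prompt. A minimal sketch of that idea, assuming the sentence-transformers library and a made-up three-line knowledge base:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# Made-up knowledge base; in practice these would be chunks of your own documents.
chunks = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The premium plan includes priority support and unlimited storage.",
    "Offices are closed on public holidays and weekends.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")

# Embed every chunk once and normalize, so a dot product equals cosine similarity.
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the chunks whose embeddings are closest to the question."""
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q_vec
    best = np.argsort(scores)[::-1][:top_k]
    return [chunks[i] for i in best]

question = "Can I get my money back?"
context = "\n".join(retrieve(question))

# The retrieved chunks get pasted into the prompt of whatever model answers.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The point is that the model's weights never change; the knowledge lives in your own documents and gets injected into the prompt at question time.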