I am not 100% sure what your question is, but I assume it concerns the difference between putting examples in the prompt, which OpenAI calls "in-context learning", and fine-tuning GPT-3.
In short, fine-tuning updates the model's parameters (its weights) based on the examples, whereas in-context learning leaves the weights unchanged; the examples only condition the model's output at inference time. So using the same set of examples in-context versus for fine-tuning will generally lead to different results.
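
To make the distinction concrete, here is a minimal sketch of how the same three labelled examples would be packaged in each approach. The sentiment task is just an illustration, and the `{"prompt": ..., "completion": ...}` JSONL layout follows the format the legacy GPT-3 fine-tuning endpoint expected; treat the exact strings as assumptions, not a definitive recipe.

```python
import json

# Three labelled examples we want the model to learn from.
examples = [
    ("I loved this movie!", "positive"),
    ("The food was cold and bland.", "negative"),
    ("What a fantastic concert.", "positive"),
]

# --- In-context learning: the examples live inside the prompt itself. ---
# No training happens; the model's weights are untouched, and the examples
# must be resent with every request.
few_shot_prompt = "".join(
    f"Review: {text}\nSentiment: {label}\n\n" for text, label in examples
)
few_shot_prompt += "Review: The plot dragged on forever.\nSentiment:"
print(few_shot_prompt)

# --- Fine-tuning: the examples become a training file. ---
# Each line is one training pair; running a fine-tune job on this file
# actually changes the model's weights, so later prompts no longer need
# to include the examples.
with open("train.jsonl", "w") as f:
    for text, label in examples:
        f.write(json.dumps({"prompt": f"Review: {text}\nSentiment:",
                            "completion": f" {label}"}) + "\n")
```

The practical upshot is the same as the point above: the few-shot prompt only steers a frozen model for that one call, while the JSONL file is used to produce a new set of weights, so identical examples can behave quite differently between the two setups.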