Fine-tuning, few-shot learning prompt

Model: gpt-3.5-turbo-1106
Can an appropriate performance comparison be made by finding the optimal prompt through prompt engineering for few-shot learning, and then using that same system prompt for fine-tuning?

I think it's a yes for specific use cases. Based on the latest announcement, there's nothing indicating that the optimal prompt for few-shot learning will also be the optimal prompt for fine-tuning.
Ultimately, everything is new, so the only way to determine whether an appropriate performance comparison can be made is to experiment with both approaches!
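To make that experiment a fair comparison, you'd hold the system prompt fixed and vary only whether the task examples are passed in-context (few-shot) or baked in via fine-tuning. Here's a minimal sketch of how the two message payloads might be built; the system prompt, example data, helper names, and the fine-tuned model ID are all illustrative, not from any official guide:

```python
# Sketch: the same system prompt used two ways — with in-context few-shot
# examples (base model) vs. without them (fine-tuned model).
# All prompts and examples below are hypothetical placeholders.

SYSTEM_PROMPT = "Classify the sentiment of the user's message as positive or negative."

# Hypothetical few-shot demonstrations as user/assistant pairs.
FEW_SHOT_EXAMPLES = [
    {"role": "user", "content": "I love this product!"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "This was a waste of money."},
    {"role": "assistant", "content": "negative"},
]

def few_shot_messages(query: str) -> list:
    """Messages for the base model: system prompt + examples + query."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        *FEW_SHOT_EXAMPLES,
        {"role": "user", "content": query},
    ]

def fine_tuned_messages(query: str) -> list:
    """Messages for the fine-tuned model: same system prompt, no examples."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": query},
    ]

# You would then send each payload to the matching model and score the
# outputs on the same evaluation set, e.g. (pseudocode, IDs illustrative):
#   client.chat.completions.create(model="gpt-3.5-turbo-1106",
#                                  messages=few_shot_messages(q))
#   client.chat.completions.create(model="ft:gpt-3.5-turbo-1106:...",
#                                  messages=fine_tuned_messages(q))
```

The key design choice is that both payloads share the identical system prompt, so any performance difference on the evaluation set can be attributed to few-shot examples vs. fine-tuned weights rather than to prompt wording.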

Thanks for your kind reply :slight_smile:
