- Freezing of results: I am researching Predicted Outputs for our use case. I feel Structured Outputs, which constrain the response to a schema rather than relying on prompt instructions, also guarantee this to some degree, at least for the shape of the output.
- I understand why fine-tuning can be a straightforward way, but I feel it may be overkill, too expensive for our use case just yet. I can see it as a must-have for coding, scientific research, or other use cases with a constant stream of new or synthetic data, i.e. more frequent data updates.
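To make the distinction in the first bullet concrete, here is a minimal sketch of the two request shapes as I understand the Chat Completions API: Predicted Outputs pass the expected text so unchanged spans can be reused, while Structured Outputs constrain the response to a JSON Schema. The model name, messages, and schema below are placeholder assumptions, not anything from this thread, and the payloads are built as plain dicts rather than sent over the network.

```python
# Predicted Outputs: supply the anticipated completion so the model can
# reuse unchanged spans, which speeds up near-identical regenerations.
# (Model name and content are hypothetical placeholders.)
predicted_request = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Update only the docstring."}],
    "prediction": {
        "type": "content",
        "content": "def add(a, b):\n    return a + b\n",
    },
}

# Structured Outputs: constrain the response to a JSON Schema via
# response_format. This guarantees the *shape* of the output, not the
# exact wording. (Schema is a hypothetical placeholder.)
structured_request = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Extract the fields."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "result",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"summary": {"type": "string"}},
                "required": ["summary"],
                "additionalProperties": False,
            },
        },
    },
}
```

Either dict would be passed as keyword arguments to `client.chat.completions.create(...)`; the point is that the two features operate on different axes (token reuse vs. output shape), so "freezing" in the strict sense only comes from the schema constraint.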
https://platform.openai.com/docs/guides/optimizing-llm-accuracy was useful, and I see that it has been updated. I like the separation of context and behavior.