i’m using the chat api (gpt-3.5 at the moment, can’t afford gpt-4) to get detailed responses to my prompts. the problem is that quality is inconsistent: i’m telling the model to follow a specific writing style and structure, and sometimes it works, sometimes it doesn’t. right now i’m not sure what the best way to fix this is. my plan is to take the responses i really like and either fine-tune with them (not sure though if GPT can be fine-tuned), or use them as embeddings (not sure if that’s the right approach, since i want a certain writing style, not new data / info), or include them in the initial prompt.
the easiest solution would be to add them to the prompt (few-shot prompting), but my problem is that these examples are 1–1.5k tokens long. if i use just one example, it doesn’t produce the result i want every time. if i use 3 of them, i run out of tokens for the completion.
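to make the trade-off concrete, here’s a rough sketch of the few-shot setup i’m describing — the system prompt, example texts, the ~4k context figure, and the 4-chars-per-token heuristic are all placeholders / assumptions, not real numbers from my app:

```python
# sketch: pack as many few-shot examples into the prompt as the
# context window allows, reserving room for the completion.
# CONTEXT_LIMIT and COMPLETION_BUDGET are assumed values.
CONTEXT_LIMIT = 4096       # total window (prompt + completion)
COMPLETION_BUDGET = 1500   # tokens reserved for the answer

def rough_tokens(text: str) -> int:
    # crude heuristic: ~4 characters per token for english text;
    # a real tokenizer (e.g. tiktoken) would be more accurate
    return max(1, len(text) // 4)

def build_messages(system_prompt, examples, user_prompt,
                   limit=CONTEXT_LIMIT, reserve=COMPLETION_BUDGET):
    """build a chat-api messages list, adding (prompt, answer) example
    pairs until the next one would overflow the token budget."""
    messages = [{"role": "system", "content": system_prompt}]
    budget = (limit - reserve
              - rough_tokens(system_prompt)
              - rough_tokens(user_prompt))
    for ex_prompt, ex_answer in examples:
        cost = rough_tokens(ex_prompt) + rough_tokens(ex_answer)
        if cost > budget:
            break  # stop before overflowing the context window
        messages.append({"role": "user", "content": ex_prompt})
        messages.append({"role": "assistant", "content": ex_answer})
        budget -= cost
    messages.append({"role": "user", "content": user_prompt})
    return messages
```

with 1–1.5k-token examples (so ~2–3k tokens per prompt/answer pair), this loop stops after one or two pairs, which is exactly where i get stuck.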
what would be the best thing to try in this case?