Less repetitive outputs when using the API at scale

Hi

I am using the API to submit very similar prompts, with only slight differences (such as names, etc.).

Unsurprisingly, the outputs have plenty of similarities and use quite repetitive language (each one is completely fine when read alone, but read collectively there is too much repetition). Is there any way to get each prompt to generate an output with more varied language? Is there a setting within the API, or would fine-tuning help at all?

Thanks in advance for any suggestions.


Welcome to the community!

You could try dynamically setting the temperature for each call, picking a value within a range.
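For example, something along these lines in Python (the model name and the temperature range here are just placeholders):

```python
import random
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate(prompt: str) -> str:
    # Pick a different temperature for every call, within a range
    temperature = random.uniform(0.7, 1.2)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content
```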

Or try asking for more than one result at a time. When the outputs all come from a single API call, they're less likely to end up similar.
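With the Chat Completions endpoint, the n parameter lets you request several completions in one call, roughly like this (reusing the client from the snippet above; model name is again a placeholder):

```python
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    n=5,  # ask for five variations in one request
    temperature=1.0,
)
variations = [choice.message.content for choice in response.choices]
```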


Thanks Paul. I’m currently dynamically altering the temperature setting for each call, but there are still a lot of similarities.

Would fine-tuning make any difference here? I've never used this functionality, so perhaps it's not even relevant in this case.


Fine-tuning likely wouldn’t help.

Maybe also add a variable system prompt that changes slightly?
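For example, something as simple as this, where the style hints are just illustrative examples:

```python
import random

# Hypothetical style hints appended to the system prompt to nudge variety
STYLE_HINTS = [
    "Favour short, punchy sentences.",
    "Use a warm, conversational tone.",
    "Avoid stock phrases and vary your vocabulary.",
    "Open with a concrete detail rather than a general statement.",
]

system_prompt = "You are a helpful writing assistant. " + random.choice(STYLE_HINTS)
```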

Running multiple outputs in a single call is usually the best approach, though.


I’m also experiencing the same issue. I need to generate 500 or so short essays of about 350 words each, for academic research purposes. My prompt is 90% static, but the remaining 10% changes with each generation. I’m also changing the temperature at each generation. I’ve noticed more similarities at the beginning of the texts, and the system_fingerprint values returned with the responses are all the same. Any help or tips here are appreciated. I will definitely try the suggestion of generating more than one per call.
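For what it's worth, here is a rough sketch of what I plan to try next, combining the suggestions above; the model name, temperature range and prompt wording are placeholders rather than anything I've validated:

```python
import random
from openai import OpenAI

client = OpenAI()

OPENING_HINTS = [
    "Open with a question.",
    "Open with a concrete example.",
    "Open with a brief anecdote.",
    "Open with a surprising fact.",
]

STATIC_PROMPT = "Write a 350-word essay on {topic}."  # the ~90% that stays fixed

def generate_essays(topic: str, n_per_call: int = 5) -> list[str]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are an academic essay writer. " + random.choice(OPENING_HINTS)},
            {"role": "user", "content": STATIC_PROMPT.format(topic=topic)},
        ],
        n=n_per_call,                          # several essays per call
        temperature=random.uniform(0.8, 1.2),  # vary temperature per call
    )
    return [choice.message.content for choice in response.choices]
```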