Hi there, that’s an interesting idea, but unfortunately it’s not possible at this time. When you set top_p to 0, you’ll get the same completion every time, even with a high temperature, but I assume that’s not really what you’re looking for.
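(If you want to sanity-check that yourself, here’s a minimal sketch, assuming the legacy pre-1.0 openai Python SDK and a davinci-style engine name, both of which are my assumptions:)

```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

def complete(prompt):
    # top_p=0 keeps only the single most likely token at each step,
    # so temperature has nothing left to randomize.
    resp = openai.Completion.create(
        engine="text-davinci-002",  # assumed engine name
        prompt=prompt,
        temperature=1.0,
        top_p=0,
        max_tokens=32,
    )
    return resp["choices"][0]["text"]

first = complete("Suggest a discussion topic for a chatroom:")
second = complete("Suggest a discussion topic for a chatroom:")
print(first == second)  # expected: True, identical completions
```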
If you are looking to randomly seed the prompt, you may consider building a random (or structured/conditional) generator in your UI stack (Python, JavaScript, etc.) that calls the GPT endpoints and either
a) selects from some pre-determined list of seed topics, or
b) if you are an over-engineer like me, calls a second GPT endpoint for a (depending on the temperature, nearly truly) random topic (a sketch of both options is below).
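Here’s a rough sketch of both options, assuming the legacy openai Python SDK; the engine name, topic list, and prompt wording are all just illustrative:

```python
import os
import random
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")
ENGINE = "text-davinci-002"  # assumed engine name

# Option a) a pre-determined list of seed topics (contents are just examples)
SEED_TOPICS = ["space travel", "retro gaming", "urban gardening", "strange foods"]

def topic_from_list():
    return random.choice(SEED_TOPICS)

def topic_from_gpt():
    # Option b) a second endpoint call for a (nearly) random topic
    resp = openai.Completion.create(
        engine=ENGINE,
        prompt="Name one random discussion topic:",
        temperature=1.0,
        max_tokens=16,
    )
    return resp["choices"][0]["text"].strip()

def main_completion(topic):
    resp = openai.Completion.create(
        engine=ENGINE,
        prompt=f"Write a chatroom icebreaker about {topic}:",
        temperature=0.7,
        max_tokens=64,
    )
    return resp["choices"][0]["text"].strip()

print(main_completion(topic_from_list()))  # option a
print(main_completion(topic_from_gpt()))   # option b
```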
I tried experimenting with this. You can insert some random characters into the prompt that seem to act as a random seed and therefore change the output you get. Basically, just put a bunch of random characters into the Question number (see the prompt below), and you get a different response [edited: sometimes] when you ask the same question, even with temperature set to zero. Here’s the saved prompt I tried it with:
Question: BB2922B43
Here’s a list of discussion topics for chatrooms:
1)
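For anyone who wants to script this, here’s a minimal sketch, assuming the legacy openai Python SDK and a davinci-style engine (both assumptions); it just regenerates the random Question string on each call:

```python
import os
import random
import string
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

def seeded_prompt():
    # The random characters in the Question line act as a makeshift seed.
    seed = "".join(random.choices(string.ascii_uppercase + string.digits, k=9))
    return (
        f"Question: {seed}\n"
        "Here’s a list of discussion topics for chatrooms:\n"
        "1)"
    )

resp = openai.Completion.create(
    engine="text-davinci-002",  # assumed engine name
    prompt=seeded_prompt(),
    temperature=0,  # deterministic decoding; only the seed string varies
    max_tokens=32,
)
print(resp["choices"][0]["text"].strip())
```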
I wasn’t able to replicate that. I ran six generations each on five different questions, and in those 30 experiments I never got a different response within the same prompt.
Sorry about that. I tried it twice and got two different results, so I assumed it would keep generating different ones over and over. It turns out it doesn’t: the two different results from my first two tries were the ONLY two results I got across further tries. …back to the drawing board.
Interesting – I incremented each character of that Question to see if that changes the seed, but I got the same question both times: “What is the best way to get a job in the video game industry?”
Raising the temperature (to 0.25) gave me the same response as above the first time, and then two different questions on subsequent tries:
@sridhar1 - To some extent this defeats the point of temp 0 (max likelihood); you may want to instead experiment with finding the threshold top_p and temperature that just barely generate variations.
For temperature 0, you can try prepending the generation with random numbers or neutral text; this will influence the completion. You can also randomize a neutral starting word for each list item and then complete with temp 0, as in the sketch below.
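A minimal sketch of that second idea, again assuming the legacy openai Python SDK (the engine name and the list of neutral openers are mine):

```python
import os
import random
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Neutral openers chosen for illustration; they nudge the continuation
# down different branches without changing the task.
NEUTRAL_STARTS = ["The", "A", "One", "Some", "How"]

prompt = (
    "Here’s a list of discussion topics for chatrooms:\n"
    f"1) {random.choice(NEUTRAL_STARTS)}"
)

resp = openai.Completion.create(
    engine="text-davinci-002",  # assumed engine name
    prompt=prompt,
    temperature=0,  # temp 0: the randomized opener supplies all the variation
    max_tokens=32,
)
print(resp["choices"][0]["text"].strip())
```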