Hi there, that’s an interesting idea, but unfortunately it’s not possible at this time. Setting top_p to 0 makes the model always pick the most likely token, so you’ll get the same completion even with a high temperature, but I assume that’s not really what you’re looking for.
a) select from a pre-determined list of seed topics
b) if you’re an over-engineer like me, call a second GPT endpoint to generate a (depending on the temperature, nearly truly) random topic
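Option (a) can be sketched in a few lines. This is a hypothetical illustration, not code from the thread: the `SEED_TOPICS` list and `build_prompt` helper are made-up names, and the idea is simply that a randomly chosen seed topic changes the prompt, so even a deterministic (temp 0) completion varies per call.

```python
import random

# Hypothetical sketch of option (a): pick a seed topic from a
# pre-determined list and splice it into the prompt, so each call
# starts from a different context even at temperature 0.
SEED_TOPICS = [
    "space exploration",
    "favorite books",
    "cooking disasters",
    "travel stories",
]

def build_prompt(seed_topics=SEED_TOPICS):
    topic = random.choice(seed_topics)
    return f"Suggest a chatroom discussion topic related to {topic}:"

print(build_prompt())
```

The returned string would then be sent to the completion endpoint as usual; only the prompt construction is randomized.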
I tried experimenting with this. You can insert some random characters into the prompt that seem to act as a random seed and change the output you get. Here’s the saved prompt I tried it with:
Basically, just put a bunch of random characters into the Question number (see prompt below), and you get a different response [edited: sometimes] when you ask the same question, even with temperature set to zero. Prompt:
Here’s a list of discussion topics for chatrooms:
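The random-characters trick described above might look something like this. It is a minimal sketch under my own assumptions: the `noisy_prompt` helper and the 8-character seed length are invented for illustration, and whether the noise actually changes the model’s output is, as the posts note, hit-or-miss.

```python
import random
import string

# Hypothetical sketch of the random-seed trick: stuff random characters
# into the "Question" number so the prompt text differs on every call,
# which can nudge the model into a different completion even at temp 0.
def noisy_prompt(question: str, seed_len: int = 8) -> str:
    noise = "".join(random.choices(string.ascii_letters + string.digits, k=seed_len))
    return (
        "Here\u2019s a list of discussion topics for chatrooms:\n\n"
        f"Question {noise}: {question}"
    )

print(noisy_prompt("Suggest a new discussion topic."))
```

Each call produces a different `Question` label, so two otherwise identical requests no longer share an identical prompt.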
Sorry about that. I tried it twice and got two different results, so I assumed it would keep generating different ones over and over. It seems it doesn’t: those two results from my first two tries were, unfortunately, the ONLY two results further tries produced. …back to the drawing board.
@sridhar1 - To some extent it defeats the point of temp 0 (max likelihood); you may want to instead experiment with finding the threshold top_p and temperature that just begin to generate variations.
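One way to run that experiment is a small parameter sweep. This is a hypothetical harness, not from the thread: the grid values and the `sweep` helper are made up, and the actual model call is left as a commented placeholder so you can plug in whichever client you use.

```python
from itertools import product

# Hypothetical sweep for finding the lowest temperature / top_p that
# just starts producing varied completions. The model call itself is a
# placeholder; substitute your own client.
temperatures = [0.0, 0.2, 0.4, 0.6]
top_ps = [0.1, 0.3, 0.5, 1.0]

def sweep():
    grid = list(product(temperatures, top_ps))
    # for temperature, top_p in grid:
    #     completions = {call_model(prompt, temperature, top_p) for _ in range(5)}
    #     record len(completions) to see where variation first appears
    return grid

print(len(sweep()))  # 16 combinations
```

Counting distinct completions at each grid point shows where outputs stop being identical, which is the threshold the post is suggesting you look for.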
For temperature 0, you can try prepending the generation with random numbers or neutral text; this will influence the completion. You can also randomize a neutral starting word for each list item and then complete with temp 0.
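The neutral-starting-word idea can be sketched as follows. Again a hypothetical example: the `NEUTRAL_STARTERS` list and `start_item` helper are my own names, assuming each list item is seeded with a random starter and the model then completes the rest deterministically at temp 0.

```python
import random

# Hypothetical sketch of the neutral-starter idea: begin each list item
# with a randomly chosen neutral word, then let the model complete the
# rest at temperature 0. A different starter gives a different context,
# so the otherwise deterministic completion differs per item.
NEUTRAL_STARTERS = ["Perhaps", "Maybe", "Consider", "How about", "Possibly"]

def start_item() -> str:
    return random.choice(NEUTRAL_STARTERS)

# Each "..." would be filled in by the temp-0 completion.
items = [f"- {start_item()} ..." for _ in range(3)]
print("\n".join(items))
```

The starters themselves stay neutral so they don’t steer the topic, only break the tie between identical prompts.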