Different temperatures for different parts of the prompt

I have a prompt that requires strict formatting with examples, but also demands a broad and creative approach to topics. When I set the temperature to 1, GPT starts improvising with the formatting, which ruins further processing of the response. However, when I set the temperature to 0, it repeatedly provides the same topic (I asked about HTML tags, and it provides me with an example only using the strong tag).

Is there a workaround to request formatting with a temperature of 0, but request topics with a temperature of 1 in the same prompt?

You indeed have a conundrum with no obvious solution: you want more random, unexpected token choices for creative writing, but unpredictable tokens will completely trash the strict formatting of your required container.

Temperature has some very unexpected effects. It is about probabilities, and only the re-weighting of them, not the complete elimination of low-probability tokens. “1” may be the default, unmodified temperature, but the way the language model weighs possible continuations, when given any latitude, doesn’t often reflect our use cases for unpredictable output.

Logits are converted into probabilities, and a token is then selected by a multinomial (weighted random) draw. That means if a “{” to start your JSON has a 99% probability of being correct and appearing, it’s still not set in stone; you roll a 100-sided die, and “100” means you lose.
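To make that concrete, here is a minimal sketch of temperature-scaled sampling (logit values are made up for illustration; real models work over a vocabulary of tens of thousands of tokens):

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Sample one token index from raw logits at a given temperature."""
    # Softmax with temperature: dividing logits reshapes the distribution,
    # but every token keeps a nonzero probability.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Multinomial draw: even a 99%-likely token loses about 1 roll in 100.
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical logits for the first JSON token: "{" dominates, but at
# temperature 1 the alternatives are still occasionally drawn.
logits = ["{", "Sure", "Here"]
choice = sample_token([6.0, 1.5, 1.0], temperature=1.0)
```

Lowering the temperature toward 0 sharpens the distribution until only the top token survives, which is why a temperature-0 run returns the same topic every time.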

Top-p is another parameter you can experiment with to improve the constraints. A value like top-p = 0.2 will discard that 1% chance against near-certain tokens, while still leaving a few word choices in ambiguous situations. It can even be combined with a temperature higher than 1 to make "I have a cute [bunny, baby, kitten]" an almost random choice among those top logits, while more certain positions are denied the alternatives.
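A minimal sketch of that top-p (nucleus) filter, using made-up probabilities to show the two behaviors described above:

```python
def top_p_filter(probs, top_p=0.2):
    """Keep the smallest set of tokens whose cumulative probability reaches
    top_p; renormalize the survivors. Sampling then happens only among them."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

# A near-certain "{" (99%): top-p = 0.2 discards the 1% tail entirely,
# so the format token becomes a guaranteed choice.
print(top_p_filter([0.99, 0.007, 0.003], top_p=0.2))

# An ambiguous spot (bunny / baby / kitten are all plausible): several
# tokens survive the same filter, preserving creative variety.
print(top_p_filter([0.35, 0.33, 0.30, 0.02], top_p=0.9))
```

This is why the combination works: top-p clamps down wherever the model is already confident (your formatting), while high temperature flattens the distribution among the survivors wherever the model is uncertain (your topics).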