Presence_penalty and frequency_penalty parameters

The API docs indicate that the range of possible values for presence_penalty and frequency_penalty is from -2.0 to 2.0. Has anybody tried to use negative values for those coefficients?

I’m using GPT-4 for technical translations, so I actually do want repetitions: a certain word might occur multiple times in the source string, and that’s perfectly fine. But whenever I tried going negative with these coefficients, I got garbage results.
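For what it's worth, here is a minimal sketch (not my actual code) of what a request with mildly negative penalties could look like with the Python SDK; the model name, prompt, and penalty values are just placeholders. In my experience anything far below zero starts producing the garbage I mentioned, so staying close to zero seems safest.

```python
# Sketch only: a Chat Completions call that nudges penalties slightly negative
# so repeated terminology in a technical translation is not suppressed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Translate the user's text, preserving terminology exactly."},
        {"role": "user", "content": "The adapter connects the adapter plate to the adapter housing."},
    ],
    # Both parameters accept -2.0 to 2.0. Negative values make already-used
    # tokens slightly MORE likely; large negative values tend to cause
    # degenerate, looping output.
    presence_penalty=-0.2,
    frequency_penalty=-0.2,
)

print(response.choices[0].message.content)
```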

Zero should be sufficient for most use cases. The exception is text where the same rare name has to appear many times: such a term is unlikely to be common in the training data, so the model may need a little help to keep repeating it.


I found the following information:

When the presence_penalty is increased, the output is more likely to branch into new topics related to the initial topic, with less likelihood of repeating tokens already chosen for the output. This can be particularly useful in research or brainstorming scenarios, where the goal is to generate output that provides related information without redundantly using common terms.
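The API documentation describes both penalties as adjustments applied to the logits before sampling. A small sketch of that adjustment (variable names are illustrative, not from any SDK) helps explain the behavior described above, and also why negative values can backfire:

```python
def apply_penalties(logits, token_counts, presence_penalty, frequency_penalty):
    """Adjust raw logits roughly as the docs describe:
    mu[j] -> mu[j] - c[j] * frequency_penalty - (c[j] > 0) * presence_penalty
    where c[j] is how often token j has already appeared in the output."""
    adjusted = dict(logits)
    for token, count in token_counts.items():
        if count > 0:
            adjusted[token] -= count * frequency_penalty  # grows with each repetition
            adjusted[token] -= presence_penalty           # one-time hit once a token has appeared
    return adjusted

# With positive penalties, tokens that already appeared are pushed down, so the
# model drifts toward new words and topics. With negative penalties the
# subtraction becomes a boost, which is why strongly negative values can make
# the model loop on the same tokens.
```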

May I ask whether anyone has checked if this claim is true or false?