I would kindly suggest making the filter check a parameter of the openai.Completion.create function. When the filter check fails we have to retry, and that way we pay for tokens we never use. And when we need to generate more than one completion, via the n setting or n together with best_of, filtering each result separately can eat up the speed advantage that n gives in the first place. Maybe there's a better way than the content filter approach I found in the OpenAI API docs and I just don't know. Something like the (purely hypothetical) call below is what I have in mind.
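For illustration only, `content_filter` here is a made-up parameter name, not part of the current API:

```python
import openai

# Hypothetical: ask the API to run the content filter server-side on the prompt
# and on every generated completion, instead of us making separate filter calls.
response = openai.Completion.create(
    engine="davinci",
    prompt=user_prompt,
    n=3,
    best_of=5,
    content_filter=True,  # NOT a real parameter today; this is the suggestion
)
# Each choice could then carry its filter label, so unsafe ones cost no extra round trip.
```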
Building a content-filtering setting for incoming prompts and generated results into all the APIs that take and generate text would be really helpful.
Right now I need three round trips for every call, which really kills latency for my interactive app:
```python
# Round trip 1: classify the user's prompt before sending it to the model
if ClassifyContent(user_prompt) < 2:
    # Round trip 2: generate the completion
    result = openai.Completion.create(... prompt=user_prompt ...)
    # Round trip 3: classify the generated text before displaying it
    if ClassifyContent(result) < 2:
        ...  # show result to user
    else:
        ...  # show them a message saying the result was potentially unsafe
else:
    ...  # show them a message saying their text was potentially unsafe
```
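For reference, each ClassifyContent call is itself just another Completion request against the filter model. A minimal sketch, following the recipe in the content filter guide (the full version also inspects logprobs before trusting a label of 2):

```python
import openai

def ClassifyContent(text):
    """Return the content filter label: 0 = safe, 1 = sensitive, 2 = unsafe."""
    response = openai.Completion.create(
        engine="content-filter-alpha",
        prompt="<|endoftext|>" + text + "\n--\nLabel:",
        temperature=0,
        max_tokens=1,
        top_p=0,
        logprobs=10,
    )
    return int(response["choices"][0]["text"])
```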
Maybe there is a better way to do this that I’m missing?