openai.Moderation max string length

I was having trouble with errors thrown on DALL·E image prompts and wanted a way to check for potentially unsafe or moderated content before calling a function on the image prompt.

I wrote a function that runs a DALL·E image prompt through the moderation endpoint. My question is: what is the maximum length of a string I could pass in as prompt_text_string_in?


import openai


def is_safe_prompt(prompt_text_string_in):
    '''
    is_safe_prompt(prompt_text_string_in)
    Is it a safe prompt? Reduces the likelihood of getting a rejection from OpenAI on a request.
    Check to see if the prompt is safe before we run it through GPT.
    https://beta.openai.com/docs/guides/moderation/overview

    Parameters:
    prompt_text_string_in (str): The prompt text to be checked for safety.

    Returns:
    bool: True if the prompt is safe, False if the prompt is not safe.
    '''

    response = openai.Moderation.create(
        input=prompt_text_string_in
    )
    output = response["results"][0]
    # print(str(output["flagged"]))
    # print(str(output))
    # Safe if the moderation endpoint did not flag the input.
    return not output["flagged"]