openai.Moderation max string length

I was having trouble with errors thrown on DALL-E image prompts and wanted a way to check for potentially unsafe or moderated content before calling a function on the image prompt.

I have a function I wrote that runs a DALL-E image prompt through moderation. My question is: what's the maximum length of the string I can pass in as prompt_text_string_in?

import openai

def is_safe_prompt(prompt_text_string_in):
    """
    Is it a safe prompt? Reduces the likelihood of getting a rejection from OpenAI on a request.
    Check to see if the prompt is safe, before we run it through GPT.

    Args:
        prompt_text_string_in (str): The prompt text to be checked for safety.

    Returns:
        bool: True if the prompt is safe, False if the prompt is not safe.
    """
    response = openai.Moderation.create(input=prompt_text_string_in)
    output = response["results"][0]
    # print(str(output["flagged"]))
    # print(str(output))
    if output["flagged"] == False:
        return True
    else:
        return False
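Since I don't know the endpoint's documented input limit, one defensive approach I've considered is splitting long prompts into chunks below a conservative size and moderating each chunk separately. This is only a sketch: the 2000-character limit below is my own assumption, not a documented API maximum, and split_for_moderation is a hypothetical helper name.

```python
def split_for_moderation(text, max_chars=2000):
    """Split text into chunks no longer than max_chars characters.

    max_chars=2000 is an assumed conservative limit, not a documented
    maximum for the moderation endpoint.
    """
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

# Usage sketch: a prompt is safe only if every chunk passes moderation.
# def is_safe_long_prompt(prompt_text):
#     return all(is_safe_prompt(chunk) for chunk in split_for_moderation(prompt_text))
```

Moderating each chunk means a flagged passage anywhere in a long prompt still gets caught, at the cost of one API call per chunk.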