What's the issue with gpt-4-vision-preview prompt?

I was roasting different websites for fun, and it was working fine until recently. For the past two days, however, I keep getting the error “I’m sorry, I can’t provide assistance with that request” or something similar. I checked the same prompt against the moderation endpoint, and it was not flagged (flagged came back false), so I’m not sure what’s going wrong here.

import base64
import requests

# OpenAI API Key
api_key = ""


# Function to encode the image
def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


# Path to your image
image_path = "/Users/website.jpeg"

# Getting the base64 string
base64_image = encode_image(image_path)

headers = {"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"}

payload = {
    "model": "gpt-4-vision-preview",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Roast the website depicted in the image with humor, targeting its design, copy, alignment, color choices, and how it appeals to its audience. only in bullet points",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
                },
            ],
        }
    ],
    "max_tokens": 1200,
}

response = requests.post(
    "https://api.openai.com/v1/chat/completions", headers=headers, json=payload
)

print(response.json())
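For reference, the moderation check mentioned above can be sketched like this. This is a minimal sketch: the helper name and the shortened prompt text are my own, while the /v1/moderations endpoint and the shape of its response come from the public API docs.

```python
def moderation_request(prompt, api_key):
    """Build the URL, headers, and JSON body for a POST to the
    OpenAI moderation endpoint checking `prompt`."""
    url = "https://api.openai.com/v1/moderations"
    headers = {"Authorization": f"Bearer {api_key}"}
    body = {"input": prompt}
    return url, headers, body


url, headers, body = moderation_request(
    "Roast the website depicted in the image with humor ...", "sk-your-key"
)
# Sending it (requires the requests library and a valid key):
# resp = requests.post(url, headers=headers, json=body)
# flagged = resp.json()["results"][0]["flagged"]
```

If `flagged` is false for the prompt, the refusal is coming from the model itself rather than the moderation layer.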

Hi and welcome to the Developer Forum!

See if the system works when not being asked to roast.

It’s working for other prompts like “What’s this image about?”, and uploading the same image with the same prompt in ChatGPT (GPT-4) gives me results.

Tell the AI it’s a comedian writing jokes about a site by finding things that are unusual, wrong, or abnormal about it.

If you present your task in a positive way and let it know you’ll be using the information in a harmless way, it will likely help. But if it senses your motive is to be mean, or if you’re simply asking it to be mean itself, it will probably refuse.
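That reframing can be dropped into the original script by swapping out the prompt text in the payload. A minimal sketch, assuming the same model and image encoding as the question’s code; the helper name and the exact “comedian” wording are my own suggestion, not a tested fix:

```python
def build_review_payload(base64_image, model="gpt-4-vision-preview", max_tokens=1200):
    """Build a chat-completions payload with a positively framed prompt
    instead of a direct request to 'roast' the site."""
    prompt = (
        "You are a friendly comedian writing light-hearted jokes about a "
        "website. Point out anything unusual, inconsistent, or surprising "
        "about its design, copy, alignment, color choices, and how it "
        "appeals to its audience. Answer only in bullet points."
    )
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
                    },
                ],
            }
        ],
        "max_tokens": max_tokens,
    }
```

In the original script you would then call `requests.post(...)` with `json=build_review_payload(base64_image)` instead of the hand-built `payload` dict.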

There are probably some deeply embedded instructions in the OpenAI “safety and alignment layer” that prevent it from being mean.