GPT-4o not able to view or analyze images

Hi, I'm using WeWeb and Supabase (no-code tools) to try out the OpenAI API with GPT-4o. I want to build a simple service where a user can upload an image and have the AI view and analyze it, just to test the API. Supabase (the backend no-code tool) wants me to use something called an edge function, and I have set that up correctly. But when I run the JS script that calls OpenAI (GPT-4o) on an image, it replies: “I’m not able to view or analyze images or open external URLs.” And according to the docs, GPT-4o needs a URL to be able to view images.

Has anyone else experienced this, or found a solution for using the OpenAI API with the GPT-4o model so it can view and analyze images from a URL?

The code below is the index.ts code; there is also a deps.ts file that imports the OpenAI client (sketched below, after the main code).

```
import { serve } from './deps.ts';
import { Configuration, OpenAIApi } from './deps.ts';

// Initialize the OpenAI API client with the API key from environment variables
const configuration = new Configuration({
  apiKey: Deno.env.get('OPENAI_API_KEY'),
});
const openai = new OpenAIApi(configuration);

// Define the handler function for incoming HTTP requests
async function handler(req: Request): Promise<Response> {
  if (req.method !== 'POST') {
    console.error('Method not allowed');
    return new Response('Method Not Allowed', { status: 405 });
  }

  try {
    const { imageUrl } = await req.json();

    if (!imageUrl) {
      console.error('Missing imageUrl in request');
      return new Response('Missing imageUrl', { status: 400 });
    }

    console.log('Received imageUrl:', imageUrl);

    const requestBody = {
      model: "gpt-4o",
      messages: [
        {
          role: "user",
          content: `Please analyze the following image: ${imageUrl}`
        }
      ]
    };

    console.log('Sending request to OpenAI API:', requestBody);

    const response = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${Deno.env.get('OPENAI_API_KEY')}`,
      },
      body: JSON.stringify(requestBody),
    });

    if (!response.ok) {
      console.error('OpenAI API error:', response.status, response.statusText);
      const errorDetails = await response.text(); // Get text response for detailed error
      console.error('Error details:', errorDetails);
      return new Response(`Failed to analyze the image: ${response.status} ${response.statusText}`, { status: 500 });
    }

    const responseData = await response.json();
    console.log('OpenAI API response:', responseData);

    const result = responseData.choices[0].message.content;

    // Prettify the result
    const prettifiedResult = result.split('\n').map(line => line.trim()).join('\n');

    return new Response(JSON.stringify({ result: prettifiedResult }), {
      status: 200,
      headers: { 'Content-Type': 'application/json' },
    });
  } catch (error) {
    console.error('Failed to analyze the image:', error);
    return new Response(`Failed to analyze the image: ${error.message}`, { status: 500 });
  }
}

// Start the server and listen for requests
serve(handler);
```
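
For context, deps.ts only re-exports serve and the OpenAI client, something along these lines (the exact module URLs and versions are examples and may differ):

```
// deps.ts (sketch): re-export the std serve helper and the legacy OpenAI v3 client.
// Module URLs and versions here are examples only.
export { serve } from 'https://deno.land/std@0.177.0/http/server.ts';
export { Configuration, OpenAIApi } from 'https://esm.sh/openai@3.3.0';
```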

To answer your question: yes, many people every day say “look at this image”, paste a link into the message text, and find that it doesn’t work.

The ones who have found a “solution” are the ones who read the API documentation for chat completions or assistants and send the image_url object, attached to a user message in the correct format.
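
For reference, here is a minimal sketch of what that looks like against the chat completions endpoint, with the image passed as an image_url content part instead of being pasted into the prompt text (the example URL and the "detail" setting are placeholders):

```
// Minimal sketch: pass the image as an image_url content part, not as text.
const imageUrl = "https://example.com/photo.jpg"; // replace with the URL from your request

const requestBody = {
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Please analyze the following image." },
        {
          type: "image_url",
          image_url: {
            url: imageUrl,   // must be a publicly reachable URL (or a base64 data URL)
            detail: "auto",  // "low", "high", or "auto"
          },
        },
      ],
    },
  ],
};

const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${Deno.env.get('OPENAI_API_KEY')}`,
  },
  body: JSON.stringify(requestBody),
});
```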

No worries, I got it to work!