Send prompt error code 500

Hi guys, I’m new to using OpenAI in workflows.

I’m currently building one in Zapier that takes answers from a Google Form > sends them to a Google Sheet > and from there sends the answers to OpenAI to generate, for example, business ideas for the person who filled out the form. But every time I do a test run, it gives this error when sending the prompt.

This is the prompt I’m using:
You are a social media expert. Create 30 social media posts for a company. Deliver the output in JSON so that I can process it automatically.

Use the following input data: Company name: {{Company name}} Description: {{What_does_the_company_do}} Target audience: {{Target audience}} Tone of voice: {{Tone_of_voice}} Colors/corporate identity: {{Colors}} Products/services to promote: {{Services}}

Rules:

Create exactly 30 posts, named post1 through post30.

Maximum of 40 words per post.

Each post contains: 1 short hook (brief), 1 key sentence, and 1 CTA at the end.

Vary formats: tips, lists, educational, storytelling, promo, testimonial.

Add relevant hashtags (2–4) per post.

Deliver the result STRICTLY as a JSON object without additional text, in this format: {"post1": "…", "post2": "…", … "post30": "…"}

How can I fix this?


Your displayed model is an -instruct model, which only works on the /v1/completions API endpoint. Try gpt-4.1 or gpt-4o instead.
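For reference, here is a minimal sketch of the request body a step like this should be sending to the Chat Completions endpoint (https://api.openai.com/v1/chat/completions). The model and message contents are placeholders; the point is that the `model` field must name a chat model, since sending an -instruct model here is what produces the "not a chat model" error:

```python
# Sketch of a Chat Completions request body. Assumes a chat model (gpt-4o);
# "gpt-3.5-turbo-instruct" in the "model" field would be rejected by this
# endpoint. The message contents are placeholders for the form data.
import json

payload = {
    "model": "gpt-4o",  # a chat model, NOT an -instruct model
    "messages": [
        {"role": "system", "content": "You are a social media expert."},
        {"role": "user", "content": "Create 30 social media posts ..."},
    ],
}

# This JSON string is what gets POSTed to /v1/chat/completions
# with an Authorization: Bearer <API key> header.
body = json.dumps(payload)
print(body)
```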

It is unlikely that this Zapier node uses the older endpoint rather than some form of "chat". The node also mislabels the OpenAI API as ChatGPT, which it is not. However, Chat Completions returns the following error for "gpt-3.5-turbo-instruct", which Zapier might not forward correctly:

404 - 'This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?'

(The same suggestion is also returned, incorrectly, for o1-pro and some other models, by the way.)

Using the legacy completions endpoint requires a solid understanding of the underlying text-completion behavior and real "prompt engineering". Instead, pick a modern chat model and start with the Chat Completions API endpoint. Note that you will not be able to use a reasoning model such as gpt-5 if the UI sends a temperature parameter.
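Since your prompt asks for strict JSON, it's also worth validating the model's output before handing it to the Sheets step. A sketch, assuming the response text is the bare JSON object your prompt requests (`validate_posts` and the sample string are illustrative, not part of any SDK; with Chat Completions you can additionally set `response_format={"type": "json_object"}` to enforce JSON output):

```python
# Sketch: check that the model's reply parses as JSON and contains
# post1..postN before passing it downstream. validate_posts is a
# hypothetical helper, not an OpenAI SDK function.
import json

def validate_posts(raw: str, expected: int = 30) -> dict:
    """Parse the model output and verify post1..postN all exist."""
    posts = json.loads(raw)  # raises ValueError if extra text surrounds the JSON
    missing = [f"post{i}" for i in range(1, expected + 1) if f"post{i}" not in posts]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return posts

# Tiny two-post example in place of a real 30-post response:
sample = '{"post1": "Tip: ... #growth", "post2": "Story: ... #brand"}'
print(validate_posts(sample, expected=2))
```

If parsing fails, re-running the step (or tightening the prompt) is safer than pushing malformed text into the sheet.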

The API also works best when you have prepaid a credit balance to your organization to fund services.
