GPT-4 from within MAKE (Integromat)

This is what I found on the OpenAI site:

“If you’re a Pay-As-You-Go customer and you’ve made a successful payment of $1 or more, you’ll be able to access the GPT-4 API (8k).”

I fall under that category: I am a Plus subscriber and a pay-as-you-go API customer, and I have paid more than $1. So I should have access to the GPT-4 API.

Yet MAKE does not offer me GPT-4 when I add a module.
Where am I going wrong here?

Is there any extra step I need to take on the OpenAI account side to enable the GPT-4 API? A total bill > $1 and a new API key didn’t seem to cut it.


So what happens when you request all available models from the API, or when you go to the model selector in the playground?
Can you see GPT-4 there?
This used to be an issue but it has been resolved for good.
And finally, what do you mean by “adding a module in MAKE”?
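For the first check suggested above, here is a minimal sketch of listing the models your API key can use and looking for GPT-4. The client call is shown in comments because it needs a real key; `model_ids` below is a made-up example list for illustration, not real output.

```python
# The actual call with the openai-python v1.x client would be (requires a key):
#
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   model_ids = [m.id for m in client.models.list()]
#
# `model_ids` here is a hypothetical example of what that list might contain:
model_ids = ["gpt-3.5-turbo", "gpt-4", "gpt-4-0613", "text-embedding-ada-002"]

# If any gpt-4 variant appears, the key has GPT-4 API access.
has_gpt4 = any(m == "gpt-4" or m.startswith("gpt-4-") for m in model_ids)
print("GPT-4 API access:", has_gpt4)  # -> GPT-4 API access: True
```

If `gpt-4` is missing from that list, the problem is on the OpenAI side rather than in MAKE.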

Yes, GPT-4 is available in the playground.
However, when trying to connect from “outside” (MAKE Integromat) via the API, I do not have GPT-4 access.

Well, then you know who to contact because you confirmed that you have access to the model on the OpenAI side.

Excuse my ignorance, as I’m not a developer.
I’m a Plus subscriber too… is seeing the GPT-4 model in the playground evidence that the GPT-4 API is working just fine? Could it be that I’m seeing it because I’m a Plus sub, but I don’t have access to the API yet?


Don’t worry about it.
In short: if you can select the model GPT-4 in the playground, then you have API access to the model. MAKE should be the one who can tell you how to proceed.
But I see they also have their own support channel, in case that’s faster for you.

The Plus subscription and API access are actually two different things under one account. But what matters is that you have access to GPT-4 via the API, not just ChatGPT.

I hope that helps.


I’m trying to bypass MAKE and playing with passing code in Google Colab. I’m stuck now when printing the response. Asking ChatGPT what to use gives me this: `print(response.messages[-1]['content'])` or this: `print(response['choices'][0]['message']['content'])`. Both give me error messages like this: `AttributeError: 'ChatCompletion' object has no attribute 'messages'`. When asking ChatGPT about it, it tells me: “It appears there has been a misunderstanding regarding the correct usage of the updated OpenAI Python client. The error `AttributeError: 'ChatCompletion' object has no attribute 'messages'` indicates that we are trying to access an attribute that does not exist in the response object.” Stuck in a loop there. How do I print a response?

Also, just making sure I’m getting this right… gpt-4-turbo has a 128,000-token context window, but the OUTPUT token limit is still 4,096 tokens, same as GPT-3.5?

Without seeing the code, I can guess what’s causing the issue, but it’s likely faster to do a quick Google search for an example. I found this one:

This should at least get you to the point where you can start working on your own implementation.

Note that you have to update the model name to gpt-4 after you get it to run initially.
And yes, the number of output tokens is still 4096 regardless of the context size.
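To make the two limits concrete, here is a small sketch of how they interact; the figures are the ones discussed in this thread (model limits can change), and the variable names are my own:

```python
# Context window is the TOTAL budget (prompt + completion); the output cap
# is a separate, smaller per-response limit on generated tokens.
CONTEXT_WINDOW = 128_000  # e.g. gpt-4-turbo total context
MAX_OUTPUT = 4_096        # cap on tokens generated per response

prompt_tokens = 120_000   # hypothetical size of the prompt you send

# The most the model can generate is the smaller of the output cap and
# whatever room is left in the context window after the prompt:
max_completion = min(MAX_OUTPUT, CONTEXT_WINDOW - prompt_tokens)
print(max_completion)  # -> 4096
```

So a large context window lets you send much more input, but each response is still capped at 4,096 tokens.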

I hope this helps!



API billing is different from a ChatGPT Plus subscription.

Here’s your history of payments made for API use.

Have you paid $0 in order to use the API? Then you need to add a payment method and purchase credits to unlock GPT-4 model access.