Getting a 429 error from the OpenAI API in the deployed version, but responses work fine in the local environment

I am working on a side project using the OpenAI completions API. I have some data in the backend that is sent as a prompt, with a query attached to it. On localhost my project works just fine (I get proper responses without any issues), but after deploying to Vercel hosting (I did set the OpenAI API key environment variable), the API throws the following error:

```json
{
  "error": "Failed to fetch response from OpenAI",
  "details": "404 The model gpt-4-turbo-preview does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4."
}
```

And when I switch to `gpt-3.5-turbo`, it throws this error instead:

```json
{
  "error": "Failed to fetch response from OpenAI",
  "details": "429 You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors."
}
```

Note that I do have around $5 in available credits in my account. Localhost works perfectly fine; the only errors appear in the deployed, Vercel-hosted version. For reference, I am using Next.js 14. Any idea how I can fix this?