You send a question or a task for the AI to perform.
The AI responds by drawing on its pretrained knowledge and its ability to write novel compositions.
For example, I “program” the AI with a system message that tells it to be a helpful expert on OpenAI, and I give it some extra information about the new credit system that it couldn’t otherwise know about.
Then I send your questions as the second message within the request:
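A minimal sketch of that request, using the official openai Python library (the model name, the system message wording, and the paraphrased user questions are all placeholders for this example):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any chat model works
    messages=[
        # The system message "programs" the assistant: a role plus extra
        # facts (like the new credit system) it can't know on its own.
        {
            "role": "system",
            "content": (
                "You are a helpful expert on the OpenAI API. "
                "New users must purchase prepaid credits before their API calls will succeed."
            ),
        },
        # The second message carries the user's actual questions (paraphrased here).
        {
            "role": "user",
            "content": "Why am I getting error 429, and what kinds of requests do developers make?",
        },
    ],
)

print(response.choices[0].message.content)
```

The answer the AI came back with: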
Welcome to OpenAI! I’m glad to hear about your enthusiasm for learning and experimenting with AI.
To answer your first question, the error code 429 you’re seeing means that you’ve exceeded your current quota. This quota is based on the number of tokens (units of text) that your API calls consume. Even if your requests were unsuccessful, they still consume tokens and count towards your quota. As a new user, you must purchase credits to pay for your use of the API.
To check your usage, you can refer to the OpenAI Dashboard. It provides detailed information about your token usage and quota.
As for why you need to make REST requests to OpenAI when building large language model (LLM) apps: OpenAI’s models are hosted on OpenAI’s servers. When you make a request, you’re asking OpenAI’s model to perform a task, such as generating text. The model doesn’t reside on your local machine, so you need to communicate with it via REST requests.
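To make that concrete, here is a rough sketch of such a REST call using Python’s requests library; the endpoint and JSON body follow the chat completions API, the model name is a placeholder, and the status check ties back to the 429 quota error mentioned above:

```python
import os
import requests

# The model runs on OpenAI's servers, so your app talks to it over HTTPS.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",  # placeholder model name
        "messages": [{"role": "user", "content": "Say hello"}],
    },
)

if resp.status_code == 429:
    # Quota exceeded (or rate-limited): add credits, or slow down and retry.
    print("Error 429:", resp.json().get("error", {}).get("message"))
else:
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])
```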
Regarding your second question, the type of requests a developer makes to OpenAI depends on the specific use case. Commonly, developers make requests to generate text, complete prompts, or translate text. The interaction between a user’s app and OpenAI typically involves the app sending a prompt to OpenAI, OpenAI’s model generating a response based on that prompt, and then the app displaying that response to the user.
For example, if you’re building a chatbot, your app might send a user’s message as a prompt to OpenAI. OpenAI’s model would then generate a response, which your app would display to the user.
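As a rough sketch of that flow (assuming the openai Python library and a placeholder model name), the chatbot app simply relays messages back and forth:

```python
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a friendly support chatbot."}]

while True:
    # The app takes the user's message...
    history.append({"role": "user", "content": input("You: ")})

    # ...sends the conversation so far to OpenAI, which generates a response...
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder
        messages=history,
    ).choices[0].message.content

    # ...and the app displays that response to the user.
    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```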
I hope this clarifies your questions. If you have any more, feel free to ask!