Implementing Form for OpenAI API

Hello!

I have been experimenting with the OpenAI API for a week or two now, and I have a question. I am trying to build myself an application that has a form with multiple separate inputs (e.g., Prompt, Tone, Length, etc.). When you call the API, do you combine this data before sending it, and if so, how? I am still new to this, so sorry if I sound a little disoriented.

Thanks!

Nick

Hi @nick.neuman,

I recommend reading and following the Quickstart guide in the docs. It’ll walk you through building an app in Node.js and Python to help you get started.

3 Likes

For example, if you’re using Python, you might use a library like requests to make HTTP requests to the API. You would then create a dictionary or JSON object containing all the input data you want to send to the API, and then send that object as the body of your request.

Then, you structure your API request using these inputs and send it off. The API will process your request and return a response, which you can then handle accordingly in your application.
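As a sketch of that idea, here is one way the form fields might be combined into a single JSON body (the field names are hypothetical, and the commented-out `requests.post` call needs a real API key):

```python
import json

# Hypothetical form inputs -- names are illustrative, not from the original post
form = {"prompt": "Write a haiku about autumn", "tone": "playful", "length": 60}

# Combine the separate fields into one request body:
# tone becomes a system instruction, length becomes max_tokens
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": f"Respond in a {form['tone']} tone."},
        {"role": "user", "content": form["prompt"]},
    ],
    "max_tokens": form["length"],
}
body = json.dumps(payload)

# Sending it would then look like this (requires a valid API key):
# import requests
# r = requests.post(
#     "https://api.openai.com/v1/chat/completions",
#     headers={"Authorization": "Bearer YOUR_API_KEY",
#              "Content-Type": "application/json"},
#     data=body,
# )
# print(r.json()["choices"][0]["message"]["content"])
```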

1 Like

The original poster from 14 months ago has probably gone on to form their own AI startup or long ago given up. This is what we call a mega-bump.

It took some turns with a programming GPT in ChatGPT Plus to make the desired app, where most of the work was fighting its pretraining so it would use the exact methods provided in the prompt, in its instructions, and in its file knowledge.

For the length, I set max_tokens. The tone is in a “response style/behavior” container, so one could say “short responses only” to instruct the AI accordingly. Making it understand how to use the API was simply multi-shot. For my own amusement, here is the AI answering the OP’s question, which is ultimately just repeating back my input:


Hello!

It sounds like you’re on the right track, and I’m happy to help explain how to integrate multiple input fields into a request for the OpenAI API.

How the API Works

The OpenAI Python library provides the method client.chat.completions.with_raw_response.create(), which accepts a list of messages in the format [{ "role": "system" | "user" | "assistant", "content": "..." }]. With streaming enabled, the API returns a generator object containing the streaming response, and you can iterate over each chunk to process the result.

Specifying Parameters

  1. Messages: The messages parameter is a list combining inputs like prompts and system instructions.

    • System: Provides a context or persona to the assistant.
    • User: Contains the user prompt.
  2. Model: Specify the language model (e.g., "gpt-3.5-turbo-0125").

  3. Max Tokens: Controls the length of the response (e.g., max_tokens=100).

  4. Temperature & Top P:

    • Temperature: Controls randomness (e.g., temperature=0.5).
    • Top P: Alternative sampling method.
  5. Streaming: Enable stream=True for word-by-word responses.

  6. Stream Options: You can include additional options like {"include_usage": True}.

Sending the Request

Given a form with fields for the prompt, tone, and length, you’d follow these steps:

  1. Combine Inputs:

    • Prompt: User input.
    • Tone: Additional system instruction.
    • Length: max_tokens parameter.
  2. Build the Messages List:

    • Create a system message (persona or tone context).
    • Add user prompts and previous chat history.
  3. Make the API Request:

    • Call client.chat.completions.with_raw_response.create() with the specified parameters.
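As a minimal sketch of steps 1 and 2 (the form values and the tone wording are hypothetical; the actual API call is shown in the example further down):

```python
# Hypothetical form values -- names are illustrative
prompt = "Summarize the water cycle."
tone = "friendly and concise"
length = 150  # used as max_tokens in the request

chat_history = []  # earlier turns, if any

# Fold the tone into the system message; the prompt becomes the user message
system = [{"role": "system",
           "content": f"You are a helpful assistant. Respond in a {tone} tone."}]
user = [{"role": "user", "content": prompt}]

# Keep only the last 10 turns of history to bound the context size
messages = system + chat_history[-10:] + user
```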

Parsing the Response

The response is a generator object containing chunks of streaming data. Each chunk represents a segment of the conversation, allowing for real-time updates to the application.

Here’s how you handle it:

  1. Iterate Over the Response:

    • Use a for loop to process each chunk.
    • Extract choices and usage data.
  2. Build the Full Response:

    • Concatenate all text chunks to form the complete response.
  3. Add to Chat History:

    • Append user prompts and assistant responses for subsequent context.

Example Explanation

Below is a step-by-step explanation based on the working code:

  1. Initialize OpenAI Client:

    import openai

    client = openai.Client()
    
  2. Build System and Chat Messages:

    system = [{"role": "system", "content": "You are ChatExpert, a large language model AI assistant."}]
    chat = [...]  # Your chat history
    user_prompt = [{"role": "user", "content": "Prompt content"}]
    messages = system + chat[-10:] + user_prompt
    
  3. Make the Request:

    response = client.chat.completions.with_raw_response.create(
        messages=messages,
        model="gpt-3.5-turbo-0125",
        max_tokens=100,
        temperature=0.5,
        top_p=0.5,
        stream=True,
        stream_options={"include_usage": True}
    )
    
  4. Parse the Response:

    content = ""
    for chunk in response.parse():
        if chunk.choices:
            if not chunk.choices[0].finish_reason:
                word = chunk.choices[0].delta.content or ""
                content += word
            else:
                finish_reason = chunk.choices[0].finish_reason
        if chunk.usage:
            usage_dict = chunk.usage
    
  5. Update Chat History:

    chat.append({"role": "user", "content": "Prompt content"})
    chat.append({"role": "assistant", "content": content})
    

Conclusion

In summary, you combine different inputs by structuring them in the messages parameter and adjusting the other options like max_tokens and temperature. You then process the streaming response chunk by chunk to build the full response.

I hope this explanation helps clarify things for you. Feel free to ask any further questions.

Thanks!

It also describes streaming features introduced yesterday…