Multiple outputs using Structured Outputs

Hello everyone,

I hope you’re all doing well. I’m working on an exciting project - a page generator - and I could really use your expertise and advice.

My logic:

  1. The API accepts the number of pages, a prompt, and the type of each page
  2. I generate the pages using Structured Outputs and return the content (a rough sketch of a single-page request is shown after this list)
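
Roughly, a single page request looks like this. This is only a sketch using the openai Python client; the model name and the `page_schema` fields here are placeholders, not my real schema:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder schema for one page; my real schemas are larger and vary by page type
page_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "sections": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "sections"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You generate website pages."},
        {"role": "user", "content": "Generate a landing page about gardening tools."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "page", "schema": page_schema, "strict": True},
    },
)
page_json = response.choices[0].message.content  # JSON string matching page_schema
```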

While the concept seems straightforward, I’ve run into a challenge with Structured Outputs’ limitations. When trying to generate multiple pages with varied structures in a single request, I hit the object field limit. I tried generating pages individually, but then the model loses the context of previous pages, resulting in duplicated and disconnected content.

To solve this, I’m thinking about implementing a ‘conversation-like’ approach: request pages one by one, letting the API keep the context of previously generated pages to avoid repetition. I still need to use Structured Outputs in this process.
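
What I have in mind is roughly the following. Again, just a sketch: the model name, the tiny `page_schema`, and the `page_requests` list are placeholders for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Tiny placeholder schema; in practice each page type would get its own, bigger schema
page_schema = {
    "type": "object",
    "properties": {"title": {"type": "string"}, "body": {"type": "string"}},
    "required": ["title", "body"],
    "additionalProperties": False,
}

page_requests = [
    {"type": "landing", "prompt": "A landing page for a gardening shop."},
    {"type": "about", "prompt": "An 'about us' page for the same shop."},
]

messages = [{"role": "system", "content": "You generate the pages of a single coherent website."}]
pages = []

for page_request in page_requests:
    messages.append({
        "role": "user",
        "content": f"Generate a '{page_request['type']}' page. {page_request['prompt']} "
                   "Do not repeat content from earlier pages.",
    })
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        response_format={
            "type": "json_schema",
            "json_schema": {"name": "page", "schema": page_schema, "strict": True},
        },
    )
    page_json = response.choices[0].message.content  # JSON string matching page_schema
    pages.append(page_json)
    # Feed the generated page back into the history so later requests can see it
    messages.append({"role": "assistant", "content": page_json})
```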

My question is: is this approach even possible, or do you have alternative suggestions? It’s crucial for me to maintain context, avoid duplicates, and ensure a reliable response structure.

Thanks in advance!

It is certainly possible, and it’s a very good idea. ChatGPT actually uses that logic. You have to make it chat-like to keep memory, or you can use an Assistant in a single thread. That second approach may not be the best fit: Assistants are still in beta, and it doesn’t seem like the simplest way.

Start the conversation with an empty list, then append every interaction to that list. Then pass the list into the prompt.

If you need more help, just ask here.

Happy coding!

@danielmedeiros.medei first of all, thanks for the response!

But I have a question: what if my pages have different structures? Then I would have to change the response_format, and I’m not sure whether that’s possible within one chat session.

Also, if you have any examples, I would really appreciate it if you could share them.

You can actually specify in each message how you want the page structured. If you don’t think that works, you can put some page options in the prompt and remind the model what kind of page you want in every interaction.
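
As far as I know, response_format is a per-request parameter, so you can pass a different schema on each call while reusing the same message history. A rough, untested sketch (the model name and the tiny schemas are just placeholders):

```python
from openai import OpenAI

client = OpenAI()

# response_format is passed per request, so the same conversation can use a
# different schema for each page type (schemas below are just tiny placeholders).
schemas = {
    "landing": {
        "type": "object",
        "properties": {"hero_text": {"type": "string"}, "cta": {"type": "string"}},
        "required": ["hero_text", "cta"],
        "additionalProperties": False,
    },
    "about": {
        "type": "object",
        "properties": {"title": {"type": "string"}, "story": {"type": "string"}},
        "required": ["title", "story"],
        "additionalProperties": False,
    },
}

messages = [{"role": "system", "content": "You generate the pages of one coherent website."}]

for page_type in ["landing", "about"]:
    messages.append({"role": "user", "content": f"Generate the {page_type} page."})
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        response_format={
            "type": "json_schema",
            "json_schema": {"name": page_type, "schema": schemas[page_type], "strict": True},
        },
    )
    page_json = response.choices[0].message.content
    # Keep the generated page in the history so the next request can see it
    messages.append({"role": "assistant", "content": page_json})
```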

As for the base conversation loop, the code looks something like this:

filename: chatbot_vscode.py

```python
import openai
import os

# Ensure you replace this with your actual OpenAI API key
openai.api_key = os.getenv("OPENAI_API_KEY")  # Or hardcode 'your-api-key-here'

# Function to get a response from OpenAI's chat completions API
def get_chatbot_response(messages):
    try:
        response = openai.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=messages,
            temperature=0.7,
            max_tokens=150,
        )
        return response.choices[0].message.content.strip()
    except Exception as e:
        print(f"Error: {e}")
        return "Sorry, I couldn't process your request."

# Main function to handle the conversation loop
def chat_with_bot():
    print("Start chatting with the bot! Type 'exit' to stop.\n")

    # Initial system message to set the behavior of the bot
    conversation = [
        {"role": "system", "content": "You are a helpful assistant."}
    ]

    while True:
        user_input = input("You: ")

        if user_input.lower() == 'exit':
            print("Conversation ended.")
            break

        # Append the user's message to the conversation
        conversation.append({"role": "user", "content": user_input})

        # Get the chatbot's response
        bot_response = get_chatbot_response(conversation)

        # Append the bot's response to the conversation
        conversation.append({"role": "assistant", "content": bot_response})

        # Print the chatbot's response
        print(f"Bot: {bot_response}\n")

if __name__ == "__main__":
    chat_with_bot()
```

I’m on the phone, so I can’t test it right now, sorry. But that’s the base model for a simple chatbot.


You can also follow the Streamlit tutorial for chatbots.
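
From memory, the core of that tutorial looks roughly like this; not the exact tutorial code, just the general pattern of keeping the conversation in session state:

```python
import streamlit as st
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Keep the conversation in session state so it survives Streamlit reruns
if "messages" not in st.session_state:
    st.session_state.messages = [{"role": "system", "content": "You are a helpful assistant."}]

# Replay the visible part of the conversation
for message in st.session_state.messages:
    if message["role"] != "system":
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

if prompt := st.chat_input("Say something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=st.session_state.messages,
    )
    reply = response.choices[0].message.content
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.markdown(reply)
```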

@danielmedeiros.medei wow, thank you thank you! This will help me a lot!
