App framework or container for Assistants API

I’ve recently learned the Assistants API and have used it to create several assistants. Now I’m ready to make those assistants available outside of OpenAI.
I know there are a number of ways to integrate assistants into websites and other services, but what I’m interested in is a standalone web app, similar to ChatGPT, that lets you plug in an API key and an Assistant ID and then interact with that particular assistant through the app’s interface.
My goal is an app set up to use the Assistants API, with an interface that can stream responses and format them nicely (like ChatGPT does). The app would work with any OpenAI assistant, turning it into a ChatGPT of its own.
My question is: has anyone developed such an app? I don’t mean a paid service or website that does this; I’m just looking for the code.
I’ve looked at building this myself with either Flask or React, but I don’t want to reinvent the wheel if something like that is already available… although I will build one myself and share it if need be…


I’d recommend Streamlit. Here is some Streamlit code I wrote that pulls the API key and assistant ID from separately uploaded “secrets”, but they could easily come from a user input field. This code is not super polished, but it works and makes a pretty decent front end for an assistant.
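For reference, the “secrets” mentioned above live in a `.streamlit/secrets.toml` file next to the app script. A minimal sketch (both values are placeholders you’d fill in with your own key and assistant ID):

```toml
# .streamlit/secrets.toml — values are exposed to the app via st.secrets[...]
OPENAI_API_KEY = "sk-..."    # your OpenAI API key (placeholder)
ASSISTANT_ID = "asst_..."    # the assistant's ID from the OpenAI dashboard (placeholder)
```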

This code streams responses with markdown formatting.

import streamlit as st
from openai import OpenAI

client = OpenAI(api_key=st.secrets["OPENAI_API_KEY"])
assistant_id = st.secrets["ASSISTANT_ID"]

def ensure_single_thread_id():
    # Reuse one thread per session so the conversation keeps its history
    if "thread_id" not in st.session_state:
        thread = client.beta.threads.create()
        st.session_state.thread_id = thread.id
    return st.session_state.thread_id

def get_filename(file_id):
    try:
        # Retrieve the file metadata from OpenAI
        file_metadata = client.files.retrieve(file_id)
        # Extract the filename from the metadata
        filename = file_metadata.filename
        return filename
    except Exception as e:
        print(f"Error retrieving file: {e}")
        return None
def format_citation(annotation):
    file_id = annotation.file_citation.file_id
    filename = get_filename(file_id)
    if filename:
        # Convert the stored filename back into a URL:
        # '---' stands in for '/' and the '.txt' extension is dropped
        file_url = filename.replace('---', '/').replace('.txt', '')
        if not file_url.startswith('www.'):
            file_url = 'www.' + file_url
        citation_info = f" ({file_url}) "
    else:
        citation_info = "[Citation from an unknown file]"
    return citation_info

def stream_generator(prompt, thread_id):
    # Add the user's message to the thread
    client.beta.threads.messages.create(
        thread_id=thread_id,
        role="user",
        content=prompt
    )

    # Start streaming the response
    with st.spinner("Wait... Generating response..."):
        stream = client.beta.threads.runs.create(
            thread_id=thread_id,
            assistant_id=assistant_id,
            stream=True
        )
        partial_response = ""
        for event in stream:
            if event.event == "thread.message.delta":
                for content in event.data.delta.content:
                    if content.type == 'text':
                        text_value = content.text.value
                        annotations = content.text.annotations
                        if annotations:
                            for annotation in annotations:
                                citation_info = format_citation(annotation)
                                # Swap the raw annotation marker for the citation link
                                text_value = text_value.replace(annotation.text, citation_info)
                        partial_response += text_value
                        words = partial_response.split(' ')
                        for word in words[:-1]:  # Yield all but the last incomplete word
                            yield word + ' '
                        partial_response = words[-1]  # Keep the last part for the next chunk
        if partial_response:
            yield partial_response  # Yield any remaining text

# Streamlit interface
st.title("🌺 Discuss Actualism With ChatGPT")
st.subheader("Be wary that ChatGPT often makes mistakes and fills in the gaps with its own reasoning. Verify its responses using the provided citation links.")

# Chat interface
if "messages" not in st.session_state:
    st.session_state.messages = []

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

prompt = st.chat_input("Enter your message")
if prompt:
    thread_id = ensure_single_thread_id()
    with st.chat_message("user"):
        st.markdown(prompt)

    st.session_state.messages.append({"role": "user", "content": prompt})

    # Display assistant response in chat message container
    with st.chat_message("assistant"):
        response_container = st.empty()  # Create an empty container for the response
        full_response = ""
        for chunk in stream_generator(prompt, thread_id):
            full_response += chunk
            # Update the container with the latest full response, prefixed with the flower emoji
            response_container.markdown("🌺 " + full_response)

    st.session_state.messages.append({"role": "assistant", "content": full_response})

That is exactly what I was looking for, thank you. A simple but effective implementation. I think I was overcomplicating it, because Streamlit seems easy to use for this…