How can I link to an assistant I made in the assistant builder from my web application, or at least replicate it in my app?

I have been spending the last two days trying to make a support chat bot for my company website, but I am struggling on all fronts.

  1. I built an assistant, but can’t seem to link it by ID from my application
  2. I can’t get the assistant to understand the thread context; every question the assistant is asked seems like a totally new question to it
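From the API reference, I believe the intended way to "link by ID" is through the Assistants API (threads + runs), not chat completions, but I can't get my version working. Here is my understanding as a sketch (helper names are mine, and I may be wrong):

```javascript
// Sketch of how I understand "linking by ID" is supposed to work with the
// Assistants API (openai Node SDK v4+). Helper names are mine.

// Assistant IDs look like "asst_..." and are not chat models, so they
// cannot be passed as `model` to chat.completions.
function isAssistantId(id) {
  return /^asst_[A-Za-z0-9]+$/.test(id);
}

// Add a user message to an existing thread and run the assistant on it.
async function askAssistant(assistantId, threadId, text) {
  const { default: OpenAI } = await import("openai");
  const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

  await openai.beta.threads.messages.create(threadId, {
    role: "user",
    content: text,
  });
  const run = await openai.beta.threads.runs.createAndPoll(threadId, {
    assistant_id: assistantId,
  });
  if (run.status !== "completed") throw new Error(`run ended: ${run.status}`);

  // Messages come back newest first; the assistant's reply is at index 0.
  const messages = await openai.beta.threads.messages.list(threadId);
  return[0].content[0].text.value;
}
```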


import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export async function POST(req) {
  const { messages } = await req.json();

  try {
    const response = await{
      model: "gpt-4o",
      // NOTE: an assistant id is not a chat model, so this does not work here:
      // model: "asst_ROx9RsOPl0DpbJdu6sEP5B6h",
      messages: [
        { role: "user", content: messages[messages.length - 1].content },
      ],
    });

    const message = response.choices[0].message;
    return new Response(JSON.stringify({ message }), { status: 200 });
  } catch (error) {
    console.error("Error communicating with OpenAI:", error);
    return new Response(JSON.stringify({ error: "Internal server error" }), {
      status: 500,
    });
  }
}
export async function PUT(req) {
  const { messages } = await req.json();

  const assistant = await openai.beta.assistants.create({
    name: "Support Agent",
    instructions:
      "You are a customer support agent. Your job is to answer questions specific to both broadcast and products. You should always assume you are talking to a customer.",
    model: "gpt-4o",
  });

  const thread = await openai.beta.threads.create();

  const message = await openai.beta.threads.messages.create(, {
    role: "user",
    content: messages[messages.length - 1].content,
  });

  // Create a run and poll it until it reaches a terminal state.
  let run = await openai.beta.threads.runs.createAndPoll(, {
    assistant_id: "asst_ROx9RsOPl0DpbJdu6sEP5B6h",
    instructions: "Please address the user as Master.",
  });

  let currentMessage = "internal error, apologies";

  if (run.status === "completed") {
    const messages = await openai.beta.threads.messages.list(run.thread_id);
    currentMessage =[0].content[0].text.value;
  } else {
    console.error("Run did not complete:", run.status);
  }

  // Streaming variant (e.g. for a console app): the SDK's stream helper
  // provides event listeners to handle the streamed response.
  // const runStream = openai.beta.threads.runs
  //   .stream(, {
  //     assistant_id: "asst_ROx9RsOPl0DpbJdu6sEP5B6h",
  //   })
  //   .on("textCreated", (text) =>
  //     process.stdout.write("\nassistant > " + text.value + "\n"),
  //   )
  //   .on("textDelta", (textDelta, snapshot) => {
  //     process.stdout.write(textDelta.value);
  //   })
  //   .on("toolCallCreated", (toolCall) =>
  //     process.stdout.write(`\nassistant > ${toolCall.type}\n\n`),
  //   );

  return new Response(
    JSON.stringify({ message: { role: "Bot", content: currentMessage } }),
    { status: 200 },
  );
}


I have been working on this for 5-6 months now! I won't lie, it isn't easy, but here's how I'm doing it!

  1. Create a thread
    const threadId = data.threadId != '' ? data.threadId : (await openai.beta.threads.create({})).id
  2. Create a message in the thread
    const createdMessage = await openai.beta.threads.messages.create(threadId!, {role: 'user' as 'user' | 'assistant', content: data.message.toString() });
  3. Run the stream and forward it to the UI

const runStream = openai.beta.threads.runs.stream(threadId, { assistant_id: assistantId }); // assistantId = your "asst_..." id
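Put together, step 3 can look like the sketch below: pipe the SDK stream's events into a Server-Sent Events response. The `sseChunk` helper and the route shape are mine; the event names ("textDelta", "end", "error") come from the SDK's stream helper.

```javascript
// Sketch: forward an Assistants run stream to the browser as SSE.
// `sseChunk` and `streamRun` are my names; adapt to your framework.

// Encode one piece of text as a Server-Sent Events data frame.
function sseChunk(text) {
  return `data: ${JSON.stringify(text)}\n\n`;
}

async function streamRun(threadId, assistantId) {
  const { default: OpenAI } = await import("openai");
  const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

  const runStream = openai.beta.threads.runs.stream(threadId, {
    assistant_id: assistantId,
  });

  const encoder = new TextEncoder();
  const body = new ReadableStream({
    start(controller) {
      runStream
        .on("textDelta", (delta) => {
          // Each delta is a small piece of the assistant's reply.
          controller.enqueue(encoder.encode(sseChunk(delta.value)));
        })
        .on("end", () => controller.close())
        .on("error", (err) => controller.error(err));
    },
  });

  return new Response(body, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```

On the client, an `EventSource` (or a `fetch` reader) consumes these frames as they arrive.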

To keep the context of your thread, the UI needs to save the thread ID and resend it to the API with every message, instead of recreating the thread on every message like you do in your code.

Also note that in your code you recreate the assistant every time in the PUT handler; you should create it only once and have every user share the same one.
You can look at my full code here:

You can also look at this example: GitHub - vercel/ai-chatbot: A full-featured, hackable Next.js AI chatbot built by Vercel


Thank you so much for taking the time to help me! I tried looking at Vercel's template, but it was so complicated. What files do you think are relevant to my use case?
