Regarding the OpenAI Assistants API V2: is it possible to use an existing thread ID to chat in the Assistant Playground?

Hi everyone,

I have a few questions. Currently, I can create an assistant using Python and save the thread ID (thread_sCXXXXXXXX) for use in future executions. I can also use the OpenAI Playground to modify the assistant created by my code: https://platform.openai.com/playground

  1. Can I use the saved thread ID in the Playground?
    That is, can I return to the conversation context of the saved thread (thread_sCXXXXXXXX)?

  2. For the assistant, is it enough to just set the instructions?
    I remember that previously we could also add a system prompt.
    What is the difference between instructions and system prompt?

  3. I recently saw news about the OpenAI Assistants API V2, which is quite confusing because I just finished writing my assistant’s Python code. Where can I find code samples for the Assistants API V2?
    Task: I want to learn how to attach knowledge files for RAG and get help with long-text content analysis.

Below is the simple assistant Q&A function I wrote.
What modifications are needed to comply with the new V2 settings?

Best regards,

# Define a function to ask a question and receive a response.
# Assumes `client` (an OpenAI() instance), `thread_id`, and `assistant_id`
# are already defined.
import time

def ask_question(question):
    # Send the user's message to the thread
    client.beta.threads.messages.create(
        thread_id=thread_id,
        role="user",
        content=question
    )

    # Run the assistant to process the current question
    run = client.beta.threads.runs.create(
        thread_id=thread_id,
        assistant_id=assistant_id
    )

    # Poll until the run reaches a terminal status
    while True:
        run = client.beta.threads.runs.retrieve(
            thread_id=thread_id,
            run_id=run.id
        )
        if run.status == "completed":
            break
        elif run.status in ("failed", "cancelled", "expired"):
            print("Run ended with status", run.status, "- error:", run.last_error)
            return None
        time.sleep(2)

    # Get the assistant's response; messages.list returns newest-first
    # by default, so data[0] is the latest message
    messages = client.beta.threads.messages.list(
        thread_id=thread_id
    )
    return messages.data[0].content[0].text.value
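The function above assumes a client, an assistant, and a thread already exist. Here is a minimal, hypothetical setup sketch; the model name, file path, and instructions are placeholders, not from the original post. The small save/load helpers persist the IDs so later runs can resume the same thread:

```python
import json

def save_ids(path, assistant_id, thread_id):
    """Persist the assistant and thread IDs so later runs can resume them."""
    with open(path, "w") as f:
        json.dump({"assistant_id": assistant_id, "thread_id": thread_id}, f)

def load_ids(path):
    """Read back the IDs saved by an earlier run."""
    with open(path) as f:
        data = json.load(f)
    return data["assistant_id"], data["thread_id"]

if __name__ == "__main__":
    # One-time setup (requires OPENAI_API_KEY in the environment).
    from openai import OpenAI
    client = OpenAI()
    assistant = client.beta.assistants.create(
        model="gpt-4o",  # placeholder model name
        instructions="You are a helpful analyst.",
    )
    thread = client.beta.threads.create()
    save_ids("ids.json", assistant.id, thread.id)
```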


Yes.

Yes.

No, not that I know of.

When I use Instructions in an Assistant, I personally assume they play the same role as the system prompt.

Here’s the migration guide from V1 to V2. If you need a video for this, I covered the introduction to OpenAI Assistants V2.
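For what it’s worth, the thread/run code in the original post is largely unchanged in V2; the main difference for knowledge files is that the V1 `retrieval` tool became `file_search`, backed by vector stores. A hedged sketch, assuming an SDK version from around the V2 launch (newer SDKs may expose vector stores at `client.vector_stores` instead of `client.beta.vector_stores`; the model name and file name are placeholders):

```python
def v2_assistant_params(model, instructions, vector_store_id):
    """Build the keyword arguments for a V2 assistant with file_search.

    V2 replaces the V1 'retrieval' tool with 'file_search' plus vector stores.
    """
    return {
        "model": model,
        "instructions": instructions,
        "tools": [{"type": "file_search"}],
        "tool_resources": {"file_search": {"vector_store_ids": [vector_store_id]}},
    }

if __name__ == "__main__":
    from openai import OpenAI
    client = OpenAI()
    # Upload a knowledge file into a vector store (file name is a placeholder).
    vs = client.beta.vector_stores.create(name="interview-notes")
    with open("interviews.txt", "rb") as f:
        client.beta.vector_stores.file_batches.upload_and_poll(
            vector_store_id=vs.id, files=[f]
        )
    assistant = client.beta.assistants.create(
        **v2_assistant_params("gpt-4o", "Analyze the attached interviews.", vs.id)
    )
```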


Thank you for your reply.

Have you written or researched any code for the Assistant V2 API?

If I have some long QA interview transcripts (about 10,000 to 20,000 tokens), with data formatted as follows:

Q1:..........
A1:............
Q2:.........
A2:............
..............

If I want to have the assistant perform a specified task analysis, should I format my long text interview data as:

  1. Directly input the original format into the prompt
  2. Split the text and input it into the prompt in batches
  3. Convert the QA data into JSON format and then input it either all at once or in batches
  4. Attach the file (I am not keen on uploading these data)

Which method would make it easiest for the assistant to retain the entire content and then perform the specified analysis and summary task?
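For option 3, converting QA text into JSON is a purely local transformation. A minimal sketch (the field names `question`/`answer` are my own choice; it assumes one-line answers and would need extending for multi-line ones):

```python
import json
import re

def qa_to_json(raw_text):
    """Convert 'Q1: ... / A1: ...' interview text into a list of QA pairs."""
    pairs = []
    current_q = None
    for line in raw_text.splitlines():
        # Match lines like "Q1: text" or "A2: text" (half- or full-width colon)
        m = re.match(r"^\s*([QA])\d*\s*[:：]\s*(.*)$", line)
        if not m:
            continue
        kind, text = m.group(1), m.group(2).strip()
        if kind == "Q":
            current_q = text
        elif kind == "A" and current_q is not None:
            pairs.append({"question": current_q, "answer": text})
            current_q = None
    return pairs

sample = "Q1: What is your role?\nA1: I am a teacher.\nQ2: Years?\nA2: Ten."
print(json.dumps(qa_to_json(sample), indent=2))
```

The resulting JSON can then be sent to the model all at once or in batches, whichever fits the context window.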

Thank you.

If you have an input-output task rather than an ongoing chat that needs server-side thread management or document extraction, you should use chat completions.

Just provide a system message like:

“You are an AI expert at analyzing student tests” (or whatever you’re doing)

then user message:

Quiz to analyze:

{document}

---

Provide a quantitative review of student answers. Then analyze: does this student fail the coursework?
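The system-plus-user pattern above can be sketched as a one-shot chat-completions call. The message-building helper is my own framing; the model name is a placeholder:

```python
def build_analysis_messages(document):
    """Assemble a one-shot chat-completions request: system role, then
    a user message containing the document and the analysis task."""
    return [
        {"role": "system",
         "content": "You are an AI expert at analyzing student tests."},
        {"role": "user",
         "content": (
             "Quiz to analyze:\n\n"
             f"{document}\n\n---\n\n"
             "Provide a quantitative review of student answers. "
             "Then analyze: does this student fail the coursework?"
         )},
    ]

if __name__ == "__main__":
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=build_analysis_messages("Q1: ...\nA1: ..."),
    )
    print(resp.choices[0].message.content)
```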


I just tested it. Yes, you can create a thread in the Playground and continue it from the API; be sure to fetch the message list first so you can continue the conversation. Then you can go back to the Playground, refresh the page, and it will show the conversation from the API. The same works when you create the thread from the API: note the URL parameter on the Playground (i.e. thread=thread_sCXXXXXXXX), enter your API-created thread ID there, and refresh the page.


As @_j said, it all depends on the type of conversation you want to have with the AI model.

Thank you for your reply. I understand that with a stored thread ID, I can call it directly from my Assistants API code to continue using it, and it retains the memory of previous inputs and outputs across executions.

However, my question is the opposite!

Currently, I am using Python with the Assistants API to create an assistant (which appears in the Playground list), create a thread conversation in code, and save that thread ID.

Can I use this thread ID and directly paste it into the web assistant playground at https://platform.openai.com/playground to continue the conversation from that thread?

Thank you.

What does this mean? Sorry, my English isn’t very good, and I didn’t quite understand your answer.
Can you explain in more detail? Thank you.

You want to use the Assistant and Thread created from your code in the Playground? Yes.

Replace the assistant ID and thread ID in the URL below and open it in your browser (you must be logged in).

https://platform.openai.com/playground/assistants?assistant=asst_yyyyyy&thread=thread_xxxxxx
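If you are generating these links from code, the URL is just string formatting over the two saved IDs; a tiny helper, assuming the Playground URL scheme shown above:

```python
def playground_url(assistant_id, thread_id):
    """Build the Playground link that reopens an API-created assistant and thread."""
    return (
        "https://platform.openai.com/playground/assistants"
        f"?assistant={assistant_id}&thread={thread_id}"
    )

print(playground_url("asst_yyyyyy", "thread_xxxxxx"))
```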

You can see the thread in thread view, but you cannot connect it to an assistant and run it within the playground site.

“Assistants” is not the primary method for interacting with AI models.

Instead, that is done by an API endpoint called “completions”, or “chat/completions” for current chat AI models.

You simply send messages and get back AI generated language.

You manage any past chat you’d want the AI to know about in your own code, sending the prior turns as previous messages before the latest input.
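That client-side history management can be sketched in a few lines. This is my own minimal framing, not a library API: you keep a plain list of messages and resend it with each request; the model name is a placeholder:

```python
def make_history(system_prompt):
    """Start a client-side conversation: you keep the messages, not the server."""
    return [{"role": "system", "content": system_prompt}]

def add_turn(history, user_text, assistant_text):
    """Record one completed exchange so the next request carries full context."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

if __name__ == "__main__":
    from openai import OpenAI
    client = OpenAI()
    history = make_history("You are a helpful assistant.")
    question = "Summarize our discussion so far."
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=history + [{"role": "user", "content": question}],
    )
    add_turn(history, question, resp.choices[0].message.content)
```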

For getting AI answers, it is more efficient and direct. Its use can be seen in the “chat” part of the API Playground. Here is a preset demonstrating an AI that performs a backend task instead of chatting with a user.


Thank you very much! This really works. Using the previously saved assistant ID and thread ID, I can resume the assistant’s conversation where it left off.

However, this is also somewhat concerning, as it means the organization owner and other members with access can see your input and output data.

Thank you for your explanation, but I still don’t quite understand. I previously gave up on the chat API because it couldn’t remember the context across multiple question-and-answer exchanges, resulting in disjointed interactions. However, it’s strange that in your example the chat output somehow involved an assistant.

Additionally, is it really possible to use the website Playground to review a specific assistant’s thread and continue the conversation?
(All of these can be queried through the account, but the thread ID might be hidden and require changing a setting.)
https://platform.openai.com/playground/assistants?assistant=asst_XXXXXXX&thread=thread_YYYYYYY