I have a few questions. Currently, I can create an assistant using Python and save the thread ID (thread_sCXXXXXXXX) for use in future runs. I can also use the OpenAI Playground to modify the assistant created by my code: https://platform.openai.com/playground
Can I use the saved thread ID in the Playground?
That is, can I return to the conversation context of that saved thread (thread_sCXXXXXXXX)?
For the assistant, is it enough just to set the instructions?
I remember that previously we could also add a system prompt.
What is the difference between instructions and a system prompt?
I recently saw news about the OpenAI Assistants API V2, which is quite confusing because I just finished writing my assistant’s Python code. Where can I find code samples for the Assistants API V2?
Task: I want to learn how to attach knowledge files for RAG and use them to help analyze long text content.
Below is the simple assistant Q&A function I wrote.
What modifications are needed to comply with the new V2 settings?
Best regards,
# Define a function to ask a question and receive a response
def ask_question(question):
    # Send a message to the thread
    client.beta.threads.messages.create(
        thread_id=thread_id,
        role="user",
        content=question  # Send the current question
    )
    # Run the assistant to process the current question
    run = client.beta.threads.runs.create(
        thread_id=thread_id,
        assistant_id=assistant_id
    )
    # Check the run result
    while True:
        run = client.beta.threads.runs.retrieve(
            thread_id=thread_id,
            run_id=run.id
        )
        if run.status == "completed":
            break
        elif run.status == "failed":
            print("Run failed with error:", run.last_error)
            return None
        time.sleep(2)
    # Get the assistant's response
    messages = client.beta.threads.messages.list(
        thread_id=thread_id
    )
    message = messages.data[0].content[0].text.value
    return message
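From what I can tell from the V2 announcement, the main change for my RAG use case is that knowledge files are now attached through a vector store and the file_search tool. Below is my current (untested) understanding as a rough sketch; the model name, file path, and instructions are just placeholders, so please correct me if this is wrong:

# My rough, untested understanding of the V2-style setup.
# The file path, model name, and instructions below are placeholders.
from openai import OpenAI

client = OpenAI()

# 1) Create a vector store and upload the knowledge file(s) used for file_search (RAG)
vector_store = client.beta.vector_stores.create(name="knowledge-base")
with open("my_document.pdf", "rb") as f:
    client.beta.vector_stores.file_batches.upload_and_poll(
        vector_store_id=vector_store.id,
        files=[f],
    )

# 2) Create the assistant with the file_search tool pointing at that vector store
assistant = client.beta.assistants.create(
    model="gpt-4o",
    instructions="You analyze long documents and answer questions about them.",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)

# 3) The polling loop in my function can apparently be replaced by create_and_poll
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Summarize the attached document."
)
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)
if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)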
If you have a one-shot input-output task, rather than an ongoing chat that needs server-side thread management or document extraction, you should use Chat Completions instead.
Just provide a system message like:
“You are an AI expert at analyzing student tests” (or whatever you’re doing)
then a user message like:
Quiz to analyze:
{document}
---
Provide a quantitative review of the student’s answers. Then analyze: does this student fail the coursework?
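In code, that pattern looks roughly like this (the model name and prompts are just examples):

from openai import OpenAI

client = OpenAI()

document = "...the quiz text to analyze..."  # your long text content

response = client.chat.completions.create(
    model="gpt-4o",  # example model; use whichever you prefer
    messages=[
        {"role": "system", "content": "You are an AI expert at analyzing student tests."},
        {"role": "user", "content": f"Quiz to analyze:\n{document}\n---\n"
                                     "Provide a quantitative review of the student's answers. "
                                     "Then analyze: does this student fail the coursework?"},
    ],
)
print(response.choices[0].message.content)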
I just tested it. Yes, you can create a thread in the Playground and continue it from the API. Be sure to fetch the message list so you can continue the conversation. Then you can go back to the Playground by refreshing the page, and it will show the conversation from the API. The same works when you create the thread from the API: look at the URL parameter on the Playground (i.e. thread=thread_sCXXXXXXXX), enter your API-created thread ID there, and refresh the page.
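For example, resuming a saved thread from the API side looks roughly like this (the thread ID here is the placeholder from your post):

from openai import OpenAI

client = OpenAI()

thread_id = "thread_sCXXXXXXXX"  # the ID you saved earlier (placeholder)

# Retrieve the existing thread and list its messages to see the history so far
thread = client.beta.threads.retrieve(thread_id)
messages = client.beta.threads.messages.list(thread_id=thread.id)
for m in reversed(messages.data):  # list is newest first; reverse for reading order
    print(m.role, ":", m.content[0].text.value)

# New messages and runs added here will also appear in the Playground
# after you refresh the page with the thread ID in the URL parameter.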
Thank you for your reply. I understand that with a stored thread ID, I can call it directly from the Assistants API code to keep using it, and it retains the memory of previous inputs and outputs across different executions.
However, my question is the opposite!
Currently, I am writing a Python program with the Assistants API that creates an assistant (which appears in the Playground list) and creates a thread ID for the conversation, which I save.
Can I take this thread ID and paste it directly into the web Playground at https://platform.openai.com/playground to continue the conversation from that thread?
What does this mean? Sorry, my English isn’t very good and I didn’t quite understand your answer.
Can you explain in more detail? Thank you.
Thank you for your explanation, but I still don’t quite understand. I previously gave up on using the Chat Completions API because it couldn’t remember the context across multiple question-and-answer exchanges, resulting in disjointed interactions. However, it’s strange that in your example the chat output somehow acts like an assistant.
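For context, the pattern I gave up on looked roughly like this: with Chat Completions I had to resend the whole history myself on every call, and that is where my conversations became disjointed (this is just a rough sketch of what I was doing, with an example model name):

from openai import OpenAI

client = OpenAI()

# With Chat Completions there is no server-side thread, so the
# conversation memory has to be carried in the messages list myself.
history = [{"role": "system", "content": "You are a helpful analysis assistant."}]

def ask(question):
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o",      # example model
        messages=history,    # the full history is resent on every call
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer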