Connecting to an existing assistant

Hello! Please advise. I am building a chat on my website and I need to connect an existing assistant to this chat via the API. Please help me with the code. I've already dug through everything, but the documentation only contains code that creates a new assistant. How do I connect to an existing one? Many thanks for the help, and an example of the code would be appreciated!

Welcome @analiticfindchia

If you already have an assistant you created in the Playground or via the API, you simply have to create a thread, add messages to it, and then run the thread on that assistant.
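That flow can be sketched with the official `openai` Python package (a minimal sketch, assuming v1.x of the SDK and `OPENAI_API_KEY` set in the environment; the assistant ID below is a placeholder for your real one):

```python
ASSISTANT_ID = "asst_your_existing_assistant_id"  # placeholder: your assistant's real ID


def latest_assistant_text(messages) -> str:
    """Pull the text of the newest message out of a threads.messages.list result."""
    # messages.data is newest-first; each message holds a list of content parts
    return messages.data[0].content[0].text.value


def ask_existing_assistant(user_message: str) -> str:
    from openai import OpenAI  # requires `pip install openai`

    client = OpenAI()
    # A thread is created on its own; the existing assistant is only
    # referenced later, when the run is started.
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=user_message
    )
    # Run the thread on the EXISTING assistant and wait for it to finish
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread.id, assistant_id=ASSISTANT_ID
    )
    if run.status != "completed":
        raise RuntimeError(f"Run ended with status: {run.status}")
    return latest_assistant_text(
        client.beta.threads.messages.list(thread_id=thread.id)
    )
```

Calling `ask_existing_assistant("Hello!")` returns the assistant's reply as a string. Note that passing `assistant_id=ASSISTANT_ID` when starting the run is the entire "connection" to the existing assistant; no separate connect call exists.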

Could you give an example of how to connect? I'm not getting anywhere.

Hi @analiticfindchia

What have you tried so far? After creating a new assistant, you get back an assistant ID.

Next, you can use that ID: create a thread and then create messages in that thread, or do both in a single call.

If you are just starting out, I'd recommend using the Playground to add files (if you need them) and create assistants. It is much faster that way.

Hope this helps

Here is the code I use; it refuses to work and always returns a 500 error. I no longer know what to do with it. Could you tell me where to look? Thank you for your time! I have changed the API key and the assistant ID, so they are not valid in this example.

from flask import Flask, request, jsonify, send_from_directory
import requests
import logging

app = Flask(__name__)

API_KEY = ''
ASSISTANT_ID = 'asst_VEYREjiCdv0bTBiqfwuyEywb'

logging.basicConfig(level=logging.DEBUG)

def create_thread():
    url = f"https://api.openai.com/v1/assistants/{ASSISTANT_ID}/threads"
    headers = {
        'Authorization': f'Bearer {API_KEY}',
        'Content-Type': 'application/json'
    }
    data = {
        'title': 'New Thread'
    }
    
    try:
        response = requests.post(url, headers=headers, json=data)
        response.raise_for_status()  
        logging.debug(f"Thread Response JSON: {response.json()}")
        return response.json()['id']
    except requests.exceptions.RequestException as e:
        logging.error(f"Thread creation failed: {e}")
        return None

def get_assistant_response(thread_id, user_message):
    url = f"https://api.openai.com/v1/assistants/{ASSISTANT_ID}/threads/{thread_id}/messages"
    headers = {
        'Authorization': f'Bearer {API_KEY}',
        'Content-Type': 'application/json'
    }
    data = {
        'messages': [
            {'role': 'user', 'content': user_message}
        ]
    }
    
    try:
        response = requests.post(url, headers=headers, json=data)
        response.raise_for_status()  
        logging.debug(f"Message Response JSON: {response.json()}")
        return response.json()['choices'][0]['message']['content']
    except requests.exceptions.RequestException as e:
        logging.error(f"Message request failed: {e}")
        return "An error occurred while getting a response from the assistant."

@app.route('/')
def home():
    return send_from_directory('', 'index.html')

@app.route('/chat', methods=['POST'])
def chat():
    user_message = request.json.get('message')
    if not user_message:
        return jsonify({"error": "No user message provided."}), 400

    thread_id = create_thread()
    if not thread_id:
        return jsonify({"error": "Failed to create a thread."}), 500

    assistant_response = get_assistant_response(thread_id, user_message)
    return jsonify({"response": assistant_response})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

@analiticfindchia - Welcome to the community.

For a detailed guide, see OpenAI's Assistants API documentation.

STEP 1 :

Retrieve the assistant ID once the assistant is created:

assistant = client.beta.assistants.create(
    name=name,
    instructions=instructions,
    model="gpt-3.5-turbo",
    temperature=temperature,
    top_p=top_p,
    tools=tools,
)
print(assistant.id)  # prints the ID of the newly created assistant

STEP 2:
Create a new thread

thread = client.beta.threads.create()

STEP 3:
Add messages to the thread:

message = client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="hi!"
)  # change content to the message you receive from your API

STEP 4:
This message on the thread needs to be passed to the model for it to generate output. You can use streaming for a typewriter effect, or run without streaming. Here is sample code without streaming.

Note: handle the different run statuses; I only handled two for demonstration. You can find the run status descriptions in the API reference.

import json

run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)
tool_outputs = []

# If the run requires action, process the tool calls
if run.status == "requires_action":
    # Iterate through the tool calls and execute them
    for tool in run.required_action.submit_tool_outputs.tool_calls:
        # Parse this tool call's arguments
        args = json.loads(tool.function.arguments)
        # If the tool call has a name, execute it
        if tool.function.name:
            print("Invoking tool:", tool.function.name)
            function_name = tool.function.name
            """
            Call your tools here and save the output in `op`
            """
            # Save the tool output
            tool_outputs.append(
                {
                    "tool_call_id": tool.id,
                    "output": json.dumps(op),
                }
            )

    # If there are tool outputs, submit them
    if tool_outputs:
        try:
            # Submit the tool outputs and wait for the run to finish
            run = client.beta.threads.runs.submit_tool_outputs_and_poll(
                thread_id=thread.id,
                run_id=run.id,
                tool_outputs=tool_outputs,
            )
        except Exception as e:
            print("Failed to submit tool outputs:", e)
    else:
        print("No tool outputs to submit.")

# If the run is completed, return the messages
if run.status == "completed":
    # Get the messages from the thread (newest first)
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return [messages.data[0].content[0].text.value]

STEP 5:

Pass this message back as a response from the api as assistant message.
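Tying this back to the Flask app in the question, steps 2 through 5 could look roughly like this. It is a sketch, not drop-in code: it assumes `flask` and `openai` are installed, `OPENAI_API_KEY` is set, the assistant ID is a placeholder, and the `requires_action` branch from step 4 is omitted for brevity.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
ASSISTANT_ID = "asst_your_existing_assistant_id"  # placeholder


@app.route("/chat", methods=["POST"])
def chat():
    user_message = request.json.get("message")
    if not user_message:
        return jsonify({"error": "No user message provided."}), 400

    from openai import OpenAI  # requires `pip install openai`
    client = OpenAI()

    # Create a thread, add the user's message, run the existing assistant
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=user_message
    )
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread.id, assistant_id=ASSISTANT_ID
    )
    if run.status != "completed":
        return jsonify({"error": f"Run ended with status: {run.status}"}), 500

    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return jsonify({"response": messages.data[0].content[0].text.value})
```

Creating a fresh thread per request mirrors the original code's behavior; in a real chat you would persist `thread.id` per user so the conversation keeps its context across messages.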

Cheers :smiley:!

I encountered the same issue, and I believe the method you’re looking for is this:
client = OpenAI()
assistant = client.beta.assistants.retrieve("Your_Assistant_ID")