I'm new to the Assistants API.
I learnt how to upload files and how to ask questions that leverage code_interpreter to generate and run code in a sandboxed environment.
I also learnt how to leverage Function Calling with the Assistants API: run.required_action.submit_tool_outputs.tool_calls lists the functions (and their arguments) to call, so I execute each function locally with its arguments and build a {"tool_call_id": "…", "output": "…"} dictionary per call to feed back into the run.
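For reference, my loop looks roughly like this (the tool name is a placeholder, and the run object is mocked here with SimpleNamespace just to illustrate the shape the Assistants API returns):

```python
import json
from types import SimpleNamespace

# Placeholder local tool -- in my real code these are my actual functions
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def handle_required_action(run):
    """Execute each requested tool call locally and build the
    tool_outputs list expected by submit_tool_outputs."""
    outputs = []
    for call in run.required_action.submit_tool_outputs.tool_calls:
        fn = TOOLS[call.function.name]
        args = json.loads(call.function.arguments)
        outputs.append({"tool_call_id": call.id, "output": str(fn(**args))})
    return outputs

# Mocked stand-in for a run in the requires_action state
fake_call = SimpleNamespace(
    id="call_1",
    function=SimpleNamespace(name="get_weather", arguments='{"city": "Rome"}'),
)
fake_run = SimpleNamespace(
    required_action=SimpleNamespace(
        submit_tool_outputs=SimpleNamespace(tool_calls=[fake_call])
    )
)
```

In the real flow, the returned list goes straight into client.beta.threads.runs.submit_tool_outputs, but the key point is that fn(**args) runs on my machine, in the client process.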
So I'm wondering how I can merge the two, i.e. how I can have the Python code of my tools executed in the same sandboxed environment where code_interpreter runs the code that OpenAI generates, rather than executing the functions on the client that is invoking the LLM.