Different answers using the Assistants API vs the Playground

I have this code:

import os
import time
from dotenv import load_dotenv
from openai import OpenAI
from rich.console import Console

def load_environment_variables():
    # Load variables from a local .env file into the process environment
    load_dotenv()

def get_api_key():
    return os.getenv("OPENAI_API_KEY")

def initialize_openai_client(api_key):
    # Pass the key directly; setting OpenAI.api_key has no effect in the v1 SDK
    return OpenAI(api_key=api_key)

def verify_api_key(api_key, console):
    if api_key is None:
        console.print("Error: OPENAI_API_KEY not found in environment variables.", style="bold red")
        raise SystemExit(1)

def create_thread(client):
    return client.beta.threads.create()

def create_message(client, thread_id, user_input, file_ids="xxxxx"):
    # file_ids is a placeholder; attach real file IDs if the message should reference uploaded files
    return client.beta.threads.messages.create(
        thread_id=thread_id,
        role="user",
        content=user_input,
    )

def create_run(client, thread_id, assistant_id, instructions):
    return client.beta.threads.runs.create(
        thread_id=thread_id,
        assistant_id=assistant_id,
        instructions=instructions,
    )

def retrieve_run(client, thread_id, run_id):
    return client.beta.threads.runs.retrieve(
        thread_id=thread_id,
        run_id=run_id,
    )

def list_messages(client, thread_id):
    return client.beta.threads.messages.list(thread_id=thread_id)

def print_messages(messages):
    console = Console()
    if messages.data:
        console.print(f"\nAssistant: {messages.data[0].content[0].text.value}\n")
    else:
        console.print("\nAssistant: No response received.", style="bold red")

def main():
    console = Console()
    load_environment_variables()
    api_key = get_api_key()
    verify_api_key(api_key, console)
    client = initialize_openai_client(api_key)

    # Example usage
    thread = create_thread(client)
    message = create_message(client, thread_id=thread.id, user_input="What is your pricing?")
    # Assuming you have an assistant_id
    assistant_id = "xxxx"
    instructions = "Be a friendly chatbot"
    run = create_run(client, thread_id=thread.id, assistant_id=assistant_id, instructions=instructions)
    retrieved_run = retrieve_run(client, thread_id=thread.id, run_id=run.id)
    # Poll until the run finishes; the assistant's reply is not in the thread until then
    while retrieved_run.status in ("queued", "in_progress"):
        time.sleep(1)
        retrieved_run = retrieve_run(client, thread_id=thread.id, run_id=run.id)
    messages = list_messages(client, thread_id=thread.id)
    print_messages(messages)


if __name__ == "__main__":
    main()
Here’s the problem: given this exact prompt, using retrieval and the same file in the Assistants playground, I get a great response, exactly what I’m looking for. When I run it through my code with the exact same prompt, I get terrible answers, sometimes literally just repeating my question back to me. Why would this happen? Has anyone else had this issue? Could anyone provide some insight on the problem, or even my code? Thanks.

Hey there!

What is the specific prompt you’re testing?
That sounds like (a) the temperature may be too low if it’s parroting responses, and (b) I see no evidence that you’re using retrieval in this code, which is likely affecting the results.


Hey! Good catch about the retrieval; I had that turned on only in the playground. The prompt is “What is your pricing?”


Okay, yeah, then the retrieval is definitely the bottleneck here.

There’s some good info about implementing the code for that here through the API:


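Roughly, enabling retrieval when creating the assistant looks like the sketch below. `build_assistant_params` is a hypothetical helper (it just assembles the keyword arguments you would pass to `client.beta.assistants.create`), and the name, model, and file ID are placeholders, not values from this thread:

```python
# Sketch only: the retrieval tool must be listed in the assistant's tools.
def build_assistant_params(name, instructions, model, file_ids):
    """Assemble keyword arguments for client.beta.assistants.create,
    enabling the retrieval tool so the assistant can search attached files."""
    return {
        "name": name,
        "instructions": instructions,
        "model": model,
        "tools": [{"type": "retrieval"}],  # without this, uploaded files are never searched
        "file_ids": list(file_ids),
    }

params = build_assistant_params(
    name="Pricing bot",
    instructions="Be a friendly chatbot",
    model="gpt-4-turbo-preview",
    file_ids=["file-xxxx"],
)
print(params["tools"])  # [{'type': 'retrieval'}]
```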
Keep in mind, you may still need to tinker with the prompt eventually for optimal results, but for now, this should alleviate the immediate problems you’re facing 🙂.

Let us know if you need anything else!


Thanks! I do have a question, though. In the code I’m using an assistant that I created via the dashboard, so wouldn’t the settings I have on the dashboard carry over to the API? I remember now why I didn’t enable retrieval in the code: I’m not actually creating the assistant via the code.

While I can’t verify completely, I think that as long as you have some knowledge files attached in the playground and the assistant can access them, you should still be okay. I personally prefer building it all via code, but only to specifically rule out potential bottlenecks like this. You’re free to continue as is.

As the API docs explain, the big thing is enabling the retrieval tool. Without it, the assistant can’t use the stored knowledge, because it has no tool to do so.
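One way to check from code whether your dashboard-created assistant actually has retrieval enabled is to fetch it with `client.beta.assistants.retrieve(assistant_id)` and inspect its tool list. Here is a small sketch of such a check; `has_retrieval` is a hypothetical helper, and it handles both plain dicts and SDK objects with a `.type` attribute since either shape can show up depending on how you got the data:

```python
def has_retrieval(tools):
    """Return True if a 'retrieval' tool appears in an assistant's tool list."""
    def tool_type(t):
        if isinstance(t, dict):
            return t.get("type")
        return getattr(t, "type", None)  # SDK tool objects expose .type
    return any(tool_type(t) == "retrieval" for t in tools)

# Example with plain-dict tools:
print(has_retrieval([{"type": "code_interpreter"}, {"type": "retrieval"}]))  # True
print(has_retrieval([{"type": "code_interpreter"}]))  # False
```

If that returns False for the tools on your assistant, the playground toggle did not carry over, and you would need to enable retrieval on the assistant itself.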