The OpenAI chat completion API always gives me the same result when I use Python, but it sometimes doesn't give me the correct result in JavaScript

import OpenAI from "openai";

const openai = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

const basePrompt = `Hey chatgpt, please analyze this call and give answers against these categories:
            1. "What Questions Were Asked"
            2. "And How They Were Answered"
            3. "Pain Points Of The Customer"
            4. "Understand Any Objections"
            5. "Any pricing discussed"
            6. "Results Of The Call"
            7. "Overall Summary Of The Call"
            Please give the response in a proper JSON format that will give a director of sales full context by simply glancing at your JSON response.

            Here is the example JSON output:
            {
            "questions": ["1st question", "2nd question"],
            "answers": ["answer 1", "answer 2"],
            "pain_points": ["Pain point 1", "Pain point 2"],
            "objections": ["objection 1", "objection 2"],
            "pricing_discussed": ["pricing 1", "pricing 2"],
            "call_results": "The call was so good",
            "call_summary": "This is the overall summary of the call"
            }`

      const completion = await openai.chat.completions.create({
        messages: [
          { role: "system", content: basePrompt },
          {
            role: "user",
            content: transcription,
          },
        ],
        model: "gpt-4",
      })

This sometimes doesn't give me the correct result.
But when I use the same code in Python, it always gives me the correct result.

import openai

def get_transcription_response_from_chatgpt(message_chunks: list, model_name):
    base_prompt = """
    Hey chatgpt, please analyze this call and give answers against these categories:
    1. "What Questions Were Asked"
    2. "And How They Were Answered"
    2. "Pain Points Of The Customer"
    3. "Understand Any Objections"
    4. "Any pricing discussed"
    5. "Results Of The Call"
    6. "Overall Summary Of The Call"
    Please give response in a proper JSON format that will give a director of sales full
    context by simply glancing at your JSON response.
    
    Here is the example JSON output:
    {
    "questions": ["1st question", "2nd question"],
    "answers": ["answer 1", "answer 2"],
    "pain_points": ["Pain point 1", "Pain point 2"],
    "objections": ["objection 1", "objection 2"],
    "pricing_discussed": ["pricing 1", "pricing 2"],
    "call_results": "The call was so good",
    "call_summary": "This is the overall summary of the call"
    }
    """
    system_prompt = {
        "role": "system",
        "content": base_prompt
    }

    chat_gpt_chunks_responses = ""
    for chunk_message in message_chunks:

        user_prompt = {
            "role": "user",
            "content": chunk_message
        }
        messages = [system_prompt, user_prompt]
        assistant_response = openai.ChatCompletion.create(
                model=model_name,
                messages=messages
            )
        assistant_response_message = assistant_response.choices[0].message.content.strip()
        chat_gpt_chunks_responses = assistant_response_message

    return chat_gpt_chunks_responses

Not sure if it makes a difference, but I notice you didn't specify a temperature, and you're not pinning a model version. Can you inspect the request that actually gets sent out?
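
Something like this, for example (a minimal sketch against the v4 Node SDK; gpt-4-0613 is one of the dated snapshots, and logging the params object shows exactly what you're about to send):

const params = {
  model: "gpt-4-0613", // pin a dated snapshot instead of the floating "gpt-4" alias
  temperature: 0,      // lowest randomness; repeated calls should match far more often
  messages: [
    { role: "system", content: basePrompt },
    { role: "user", content: transcription },
  ],
};
console.log(JSON.stringify(params, null, 2)); // inspect the request before it goes out
const completion = await openai.chat.completions.create(params);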

The official API libraries never felt super reliable to me, so I don't really use them; hitting the REST endpoint directly with axios or a plain POST isn't much different. That probably isn't the issue here, though.
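
If you want to rule the library out, a bare POST is only a few lines. A sketch using Node 18+'s built-in fetch (it assumes OPENAI_API_KEY is set in the environment):

const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  // same payload shape the SDK builds for you
  body: JSON.stringify({
    model: "gpt-4",
    temperature: 0,
    messages: [
      { role: "system", content: basePrompt },
      { role: "user", content: transcription },
    ],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);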

I also notice that you seem to be overwriting chat_gpt_chunks_responses on every iteration, so you only ever keep the reply for the last chunk. Are you sure you're comparing apples to apples here?
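
If you do want every chunk's reply, collect them instead of reassigning. A sketch of the loop (assuming the rest of your function stays the same):

chat_gpt_chunks_responses = []
for chunk_message in message_chunks:
    messages = [system_prompt, {"role": "user", "content": chunk_message}]
    assistant_response = openai.ChatCompletion.create(
        model=model_name,
        messages=messages,
    )
    # append rather than assign, so earlier chunks are not silently discarded
    chat_gpt_chunks_responses.append(
        assistant_response.choices[0].message.content.strip()
    )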

Thanks for your reply.
But what does the temperature parameter do?
Also, I think I did set the model, to gpt-4.