GPT-3.5-turbo API produces different results than ChatGPT

I am using the same prompt on gpt-3.5-turbo (API) and ChatGPT (app), and the result produced by the ChatGPT app is noticeably better than what I get through the API.

"write an outline for article blog titled "can dog eat bananas" in bullet point"

ChatGPT Response

- The Nutritional Value of Bananas for Dogs
- Benefits of Feeding Bananas to Dogs
- Risks and Concerns of Feeding Bananas to Dogs
- How to Safely Feed Bananas to Your Dog
- Serving Suggestions for Bananas
- Preparing Bananas for Dogs
- Appropriate Portion Sizes for Dogs
- Alternatives to Bananas for Treating Dogs


gpt-3.5-turbo (API) Response

- Dogs can eat bananas
- Bananas are a healthy treat for dogs
- Bananas contain vitamin C, potassium, and fiber
- Dogs should only eat bananas in moderation
- Too many bananas can cause upset stomach and diarrhea in dogs
- Always remove the banana peel before giving to your dog
- Banana chips are not recommended for dogs as they may contain added sugars and preservatives
- Consult with your veterinarian before introducing any new foods to your dog's diet.

The result produced by ChatGPT (app) is fairly good; the API's output, however, is off here: it is not really an outline, but more of a summary. Why is that? Is there some setting I have to tweak for gpt-3.5-turbo, or is ChatGPT (app) fundamentally not using the gpt-3.5-turbo API?

I call gpt-3.5-turbo this way:

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": prompt},
    ],
)

Any help is appreciated. Thank you!


Maybe you can try tweaking the temperature parameter in your API call to see whether a lower or higher value produces better results for your use case?
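A minimal sketch of what that would look like: the request parameters are assembled in a dict so they are easy to inspect, and the actual API call (which needs the `openai` package installed and a valid API key) is left commented out. The specific prompt and temperature value here are just illustrative assumptions.

```python
# Requires `pip install openai` and OPENAI_API_KEY set to run the real call.
prompt = 'write an outline for article blog titled "can dog eat bananas" in bullet point'

# temperature controls randomness: lower values (e.g. 0.2) give more
# focused, deterministic output; higher values give more varied output.
request = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": prompt}],
    "temperature": 0.2,
}

# import openai
# response = openai.ChatCompletion.create(**request)
# print(response["choices"][0]["message"]["content"])
```

Trying a few values (e.g. 0.2, 0.7, 1.0) with the same prompt should show whether the flat, summary-like output is just sampling variance or something else.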