Differences between the API and the ChatGPT end-user app

I am working on a project for my company. My testers are complaining that what I provide them is not a good alternative to the end-user app (ChatGPT).

I am using the gpt-4o model.

Missing/different items:

  1. The API does not answer questions like “what date is it” correctly. It does not know the actual date. How can I adjust that, other than giving an “instructions” parameter like “today is …”? What is the official method?

  2. The API does not answer questions like “how is the weather” correctly. Of course it doesn’t know the location of the user. Would it provide a correct answer if I provided the user’s location in an instructions parameter? Is there a suggested method for this?

  3. I am using the speech-to-text endpoint for microphone input. Some words or sentences sound similar across languages. For example, “merhaba” sounds the same in Turkish and Arabic. When the user is speaking Turkish, the endpoint usually transcribes it as Turkish, but occasionally it interprets it as Arabic. I don’t want to force the input language from the user’s locale, because then when the user wants to speak another language, it will not be interpreted correctly. Will it? What is the best approach here?

  4. On the API, we have 6 voices to use for text-to-speech. But in the app there are 5 voices, and some of them are really better than the API voices. Will they be added to the API later?

If you use “Assistants”, you don’t have 100% control over the input that is sent every time.

There is an “additional_instructions” parameter that you can include; that is the logical remaining place for a computer-generated date, which you can then improve with the locale so it switches per user.
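A minimal sketch of that approach, assuming the Python `openai` SDK and the Assistants API; the assistant and thread IDs below are placeholders, and the helper function is our own:

```python
from datetime import date

def build_additional_instructions(today: date, locale: str) -> str:
    """Compose a computer-generated date line, adjusted per user locale."""
    return f"Current date: {today.isoformat()}. User locale: {locale}."

# Passed on every run, so the date is always fresh (IDs are placeholders):
# from openai import OpenAI
# client = OpenAI()
# run = client.beta.threads.runs.create(
#     thread_id="thread_abc123",
#     assistant_id="asst_abc123",
#     additional_instructions=build_additional_instructions(date.today(), "tr-TR"),
# )
```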

The AI is not a personal assistant that will answer requests such as “play the latest Taylor Swift song”. If you need a weather bot, though, it can call programmed tools you provide via function calling, using some separate service that provides that information. All its knowledge is pretrained in a long process, and fixed unless you input more text when you call the AI.

As the programmer, you can give the user interface multiple remembered settings, such as the language the user wants to speak and have recognized. That can then be passed to Whisper as an ISO code.
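For example, a remembered UI setting could map to the ISO-639-1 code that the transcription endpoint’s `language` parameter accepts; the settings mapping here is hypothetical, while the commented call is the standard `openai` SDK shape:

```python
from typing import Optional

# Hypothetical mapping from a remembered UI setting to an ISO-639-1 code.
LANGUAGE_CODES = {"Turkish": "tr", "Arabic": "ar", "English": "en"}

def whisper_language(user_setting: Optional[str]) -> Optional[str]:
    """Return the ISO code for a chosen language, or None to let Whisper auto-detect."""
    if user_setting is None:
        return None  # no saved preference: fall back to auto-detection
    return LANGUAGE_CODES.get(user_setting)

# from openai import OpenAI
# client = OpenAI()
# with open("mic_input.wav", "rb") as f:
#     transcript = client.audio.transcriptions.create(
#         model="whisper-1",
#         file=f,
#         language=whisper_language("Turkish"),  # "tr" biases decoding toward Turkish
#     )
```

Leaving the setting unset keeps auto-detection for users who switch languages mid-session.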

The voice actors seem to have signed particular agreements for how their voices would be used, and OpenAI wants to keep ChatGPT separate from its other products. So the voices seem likely to remain separate, as there is no technical reason for the current separation.


@_J thanks for your comment.
Actually, the problem is not what I can or cannot do in the user interface.
I opened this thread to discuss the differences between API and the ChatGPT app.
For example, I don’t understand the reason why API does not know the current date while ChatGPT knows.
Of course we can use the instructions parameter; it is very simple. But I asked here because I might be missing something, perhaps a misconfiguration in how I create the assistant and/or thread.
And the API really has too few female voice options compared to the app. That’s not fair…

Use tools for these (“function calls”).

https://platform.openai.com/docs/assistants/tools/function-calling


ChatGPT knows because OpenAI provided it a system message to let it know.

You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture. Knowledge cutoff: 2023-12
Current date: 2024-05-16

ChatGPT now has one less female voice…

Here’s the Nova voice, where I previously showed what having the AI write some thinking pauses would sound like.


I could not understand what you meant here. Is the Plus version acting similarly to what the API provides?
ps. English is not my native language. So maybe you wrote something meaningful but I did not understand :slight_smile:

I can assure you: nothing meaningful was written.


How come?
On the API, which voices do you consider female?
I only see nova and shimmer, so 2 of them.
In the app, we have Juniper, Sky and Breeze (3).

Sky has been removed due to controversy. It’s now just one of the other voices.

I hadn’t noticed that. Interesting…

Prior to calling the API, create a variable with the client’s date and location, then concatenate that variable into your call to the API.
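A sketch of that concatenation for a plain chat call; the location value is a placeholder you would gather client-side:

```python
from datetime import date

# Values gathered before the call; the city here is a placeholder.
today = date.today().strftime("%A, %B %d, %Y")
location = "Istanbul"

system_message = (
    "You are a helpful assistant. "
    f"Current date: {today}. User location: {location}."
)

# messages = [
#     {"role": "system", "content": system_message},
#     {"role": "user", "content": user_input},
# ]
```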

The voice issue is a separate matter; I haven’t played with that feature.

Of course this is slightly out of date now; use the Responses API, not Assistants, if you can.

https://platform.openai.com/docs/guides/function-calling

And I agree with @_j , for date/time just add it to the system (“developer”) message, a function is overkill and less reliable.

For weather, use a function though.
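A weather tool could look like the sketch below; the `get_weather` name and its fields are illustrative, while the outer shape follows the Chat Completions `tools` parameter:

```python
# Illustrative tool schema; `get_weather`, `city` and `unit` are our own names.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'San Francisco'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=messages,
#     tools=[weather_tool],
# )
# If the model returns a tool call, run your weather service, send the result
# back in a "tool" role message, and call the model again for the final answer.
```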

(I’ve necrobumped because someone just clicked my share and I received a badge - they should be aware of new knowledge and the changing API.)


If your AI doesn’t have to ask where the user is, because location is part of their customer account, it may still be worthwhile to throw that blip of info into a message automatically, rather than have a tool loop double the context cost when they ask.

That’s wasted context and attention distraction, in my book. Time is a no-brainer though.

I think this could fall under one of the classic tenets of UI design: if it can be pre-calculated, fill it in for the user.

Same thing really: if you know anything ahead of time, fill it in as part of the prompt; don’t use function calling for the sake of it. Anything you can do deterministically and simply by including it as context is better than asking for a function call to be made.


Let’s say that “San Francisco: 57F Sunny, 08:33pm” took you even 10 extra tokens to load the context.

How long is a function specification in tokens?

How long is the average chat when someone is curious about the weather? 1000 tokens?

Even with only 1 in 100 questions being about the weather, injection is cheaper in total, and the AI can offer the information without the user asking directly.

Stick it in your UI for free, though.
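The back-of-envelope arithmetic can be written out; the 10-token injection and ~1000-token chat figures come from the thread above, while the ~70-token function-spec cost is our own assumption:

```python
# Illustrative comparison of always-injected context vs. a weather tool.
# Assumptions: 10 tokens injected per chat; a ~70-token function spec sent
# with every chat; one tool-call round trip (roughly an extra ~1000-token
# context load) on the 1-in-100 chats that actually ask about the weather.
CHATS = 100
AVG_CHAT_TOKENS = 1000

inject_cost = CHATS * 10                      # 10 extra tokens, every chat
tool_cost = CHATS * 70 + 1 * AVG_CHAT_TOKENS  # spec every chat + 1 round trip

print(inject_cost, tool_cost)  # 1000 vs 8000: injection wins at these rates
```

The break-even point shifts as the injected blob grows or as weather questions become more frequent, so the numbers matter more than the conclusion.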


There is still a trade-off.

There are many things you can know ahead of time. You could inject every user’s bio.

With transformers it’s always going to be a trade-off until attention is, for all practical purposes, unlimited.

Then again, I guess adding a function to the context is also chewing up tokens and attention.

Either way it’s a cost-adding exercise, and the assistant would need a very generic purpose if you want to add personal weather at all.


I see where you’re heading with that, and I agree: if you don’t “NEED” a specific piece of context in a prompt, don’t include it. A pet hate of mine is seeing irrelevant junk in prompts, but if it is needed, then deterministically prefilling it is better than letting the AI decide to make a function call, if that can at all be avoided.

Sure, for some things it can’t be avoided, but there is a lot of function calling being done that could fairly simply be avoided with careful prompt flow. I lament the now-common UI element of an empty text box for user input; humans have never been very good with open-ended inputs :smiley:


Yep, point taken. Agreed in that case.

But "18 degrees at 12 falling to 6 degrees later on with mild showers and wind rising to 20 mph … "


I let the AI decide what’s important…

:globe_showing_americas: Weather Forecast for San Francisco, San Francisco County, California:

:round_pushpin: Current Weather at 2025-05-26 05:30 UTC:

  • Temperature: 54.6 °F
  • Wind: 13.0 mph at 289°
  • Cloud Cover: 100%
  • Precipitation: 0.0 inch

:thermometer: Temperature every 6 hours:

  • 2025-05-26 00:00 UTC: 62.7
  • 2025-05-26 06:00 UTC: 53.3
  • 2025-05-26 12:00 UTC: 51.6
  • 2025-05-26 18:00 UTC: 58.9
  • 2025-05-27 00:00 UTC: 62.1
  • 2025-05-27 06:00 UTC: 52.1
  • 2025-05-27 12:00 UTC: 47.2
  • 2025-05-27 18:00 UTC: 66.8
  • 2025-05-28 00:00 UTC: 64.4
  • 2025-05-28 06:00 UTC: 54.1
  • 2025-05-28 12:00 UTC: 52.1
  • 2025-05-28 18:00 UTC: 61.8
  • 2025-05-29 00:00 UTC: 61.5
  • 2025-05-29 06:00 UTC: 56.1
  • 2025-05-29 12:00 UTC: 53.2
  • 2025-05-29 18:00 UTC: 65.8
  • 2025-05-30 00:00 UTC: 64.9
  • 2025-05-30 06:00 UTC: 55.2
  • 2025-05-30 12:00 UTC: 54.6
  • 2025-05-30 18:00 UTC: 78.7
  • 2025-05-31 00:00 UTC: 73.5
  • 2025-05-31 06:00 UTC: 62.7
  • 2025-05-31 12:00 UTC: 56.6
  • 2025-05-31 18:00 UTC: 68.8
  • 2025-06-01 00:00 UTC: 64.2
  • 2025-06-01 06:00 UTC: 57.7
  • 2025-06-01 12:00 UTC: 53.5
  • 2025-06-01 18:00 UTC: 62.7

:umbrella_with_rain_drops: Precipitation Probability every 6 hours:

  • 2025-05-26 00:00 UTC: 0
  • 2025-05-26 06:00 UTC: 0
  • 2025-05-26 12:00 UTC: 0
  • 2025-05-26 18:00 UTC: 1
  • 2025-05-27 00:00 UTC: 0
  • 2025-05-27 06:00 UTC: 0
  • 2025-05-27 12:00 UTC: 1
  • 2025-05-27 18:00 UTC: 0
  • 2025-05-28 00:00 UTC: 0
  • 2025-05-28 06:00 UTC: 0
  • 2025-05-28 12:00 UTC: 0
  • 2025-05-28 18:00 UTC: 0
  • 2025-05-29 00:00 UTC: 0
  • 2025-05-29 06:00 UTC: 0
  • 2025-05-29 12:00 UTC: 0
  • 2025-05-29 18:00 UTC: 0
  • 2025-05-30 00:00 UTC: 0
  • 2025-05-30 06:00 UTC: 0
  • 2025-05-30 12:00 UTC: 0
  • 2025-05-30 18:00 UTC: 0
  • 2025-05-31 00:00 UTC: 0
  • 2025-05-31 06:00 UTC: 0
  • 2025-05-31 12:00 UTC: 0
  • 2025-05-31 18:00 UTC: 0
  • 2025-06-01 00:00 UTC: 0
  • 2025-06-01 06:00 UTC: 0
  • 2025-06-01 12:00 UTC: 1
  • 2025-06-01 18:00 UTC: 1

The function has AI-powered geolocation…
