What's a typical time to first token for GPT-4 when streaming?

The time to first token seems to have noticeably increased in the last few weeks. Now it almost always takes more than 1 second to get the first token for GPT-4 Turbo preview when we stream the output. Is that what everyone else observes too?

I’m seeing similar latency. My time-to-first-token is hovering around 2000 ms using gpt-4-0125-preview, but I was also seeing 1000-1500 ms for gpt-3.5-turbo-16k.
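
In case it helps compare numbers: a minimal sketch of how time-to-first-token can be measured with the Python SDK (assuming openai >= 1.x and streaming chat completions; the prompt here is just a placeholder):

```python
import time
from openai import OpenAI  # assumes openai python SDK >= 1.x

client = OpenAI()  # reads OPENAI_API_KEY from the environment

start = time.perf_counter()
stream = client.chat.completions.create(
    model="gpt-4-0125-preview",  # swap in whichever model you're testing
    messages=[{"role": "user", "content": "Say hello."}],  # placeholder prompt
    stream=True,
)

first_token_ms = None
for chunk in stream:
    # the first chunk may carry only the role, so wait for actual content
    if chunk.choices and chunk.choices[0].delta.content and first_token_ms is None:
        first_token_ms = (time.perf_counter() - start) * 1000
        break

print(f"time to first token: {first_token_ms:.0f} ms")
```

Note this measures client-side wall time, so network latency and your own region are included in the number, not just model queueing time.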