Is chunking still needed with large context windows?

Hi fellow developers
I tested that gpt-4-turbo can hold the entire transcript I feed it. I wonder whether chunking is still needed given its large context window these days.
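For context, here is a rough stdlib-only sketch of how I currently decide whether to chunk. It assumes a ~4-characters-per-token rule of thumb (a common but inexact heuristic; a real tokenizer like tiktoken would be more accurate) and gpt-4-turbo's 128k-token context window; the function name and the reserved-output budget are just my own choices:

```python
# Rough heuristic: decide whether a transcript still needs chunking.
# Assumes ~4 characters per token (rule of thumb, not a real tokenizer)
# and gpt-4-turbo's 128,000-token context window.

CONTEXT_WINDOW = 128_000
CHARS_PER_TOKEN = 4  # rough estimate; use a tokenizer for exact counts

def needs_chunking(text: str, reserved_for_output: int = 4_000) -> bool:
    """Return True if the estimated token count exceeds the usable window.

    reserved_for_output leaves room for the model's response, since input
    and output share the same context window.
    """
    estimated_tokens = len(text) // CHARS_PER_TOKEN
    return estimated_tokens > CONTEXT_WINDOW - reserved_for_output

# A short transcript fits; a very long one would still need chunking.
print(needs_chunking("short transcript"))
print(needs_chunking("x" * 1_000_000))
```

Even with this check passing, I'm unsure whether one giant prompt is actually as reliable as chunked calls, which is really what my question is about.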
Any feedback or comments are welcome.
Thanks
Loe