The Turbo preview is the model that specifically targets API "laziness":
MODEL | DESCRIPTION | CONTEXT WINDOW | TRAINING DATA |
---|---|---|---|
gpt-4-0125-preview (New: GPT-4 Turbo) | The latest GPT-4 model, intended to reduce cases of "laziness" where the model doesn't complete a task. Returns a maximum of 4,096 output tokens. | 128,000 tokens | Up to Dec 2023 |
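For context, here is a minimal sketch of pointing a request at this model with the Python client (this assumes the `openai` package is installed and `OPENAI_API_KEY` is set in your environment; the model name comes straight from the table above):

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Ask the 0125 preview for a reply; max_tokens can be set no higher
# than the 4,096 output-token ceiling listed in the table.
response = client.chat.completions.create(
    model="gpt-4-0125-preview",
    max_tokens=4096,
    messages=[{"role": "user", "content": "Summarize what GPT-4 Turbo changed."}],
)
print(response.choices[0].message.content)
```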
As to speed: better understanding always comes at a cost of speed. There was a note, though, that with GPT-5 around the corner something faster was mentioned, so let's hope.
This is where you can run into issues: the 4,096 output token limit. So if you are feeding it large code, you will have to do it in smaller sections or at the function level. It can look at all of it (the full 128k context), but it can only respond with 4k; a rough sketch of that section-by-section approach follows below.
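Here is one way that function-level splitting could look (the helpers `function_chunks` and `review_in_sections` are made-up names for illustration, and it assumes the Python `openai` client plus the standard `ast` module): each top-level function goes out as its own request, so every reply stays under the 4,096-token cap.

```python
import ast
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def function_chunks(source: str):
    """Yield the source text of each top-level function in a Python file."""
    tree = ast.parse(source)
    lines = source.splitlines()
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno requires Python 3.8+
            yield "\n".join(lines[node.lineno - 1 : node.end_lineno])

def review_in_sections(source: str, model: str = "gpt-4-0125-preview"):
    """Send one request per function so each reply fits within 4,096 tokens."""
    reviews = []
    for chunk in function_chunks(source):
        resp = client.chat.completions.create(
            model=model,
            max_tokens=4096,  # the hard ceiling on the response
            messages=[
                {"role": "system", "content": "You review Python code."},
                {"role": "user", "content": f"Review this function:\n\n{chunk}"},
            ],
        )
        reviews.append(resp.choices[0].message.content)
    return reviews
```

The model still sees only one function per request here; if a review needs surrounding context, you would prepend a short summary of the rest of the file rather than the whole thing.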