The latest GPT-4 model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. This preview model is not yet suited for production traffic.
So, I assume the large input context is there so it can read the whole book, but only 4K tokens are available to analyze or summarize it? OK.
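That asymmetry can be sketched numerically. A minimal illustration, assuming the preview model's widely reported 128K-token context window (the 128K figure is not stated in the quote above, only the 4,096-token output cap is):

```python
# Hypothetical budget calculation for the GPT-4 Turbo preview.
# CONTEXT_WINDOW (128K) is an assumed figure for the preview model;
# MAX_OUTPUT (4,096) is the cap stated in the model description.
CONTEXT_WINDOW = 128_000  # total tokens shared by input + output
MAX_OUTPUT = 4_096        # hard cap on generated tokens

def summary_budget(input_tokens: int) -> int:
    """Tokens left for the model's reply after the 'book' fills the context."""
    remaining = CONTEXT_WINDOW - input_tokens
    return min(remaining, MAX_OUTPUT)

# A short prompt still cannot get more than the 4,096-token cap back:
print(summary_budget(1_000))    # 4096
# A 127K-token book leaves even less room than the cap allows:
print(summary_budget(127_000))  # 1000
```

In other words, no matter how much of the window you spend on input, the summary coming back is at most about 3% of what went in.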