Base model input/output context length

Hi, I want to know the context length of the base model davinci, but the pricing page does not list that information.

Does anyone have this information? E.g., if I want to fine-tune davinci, what is the maximum input length? And what is the maximum output once it's fine-tuned?

Thank you

You can find the details here: OpenAI Platform

text-davinci-003 is 4k (4,097 tokens).
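One detail worth knowing: the prompt and the completion share a single context window, so the number of tokens you can generate shrinks as your prompt grows. A minimal sketch of that budget arithmetic (the 4,097 figure is text-davinci-003's limit; the function name is just illustrative):

```python
def max_completion_tokens(context_limit: int, prompt_tokens: int) -> int:
    """Largest max_tokens value that still fits in the context window.

    prompt_tokens + max_tokens must not exceed context_limit,
    so the remaining completion budget is the difference (never negative).
    """
    return max(context_limit - prompt_tokens, 0)

# Example: a 4k model (4,097 tokens) with a 1,500-token prompt
# leaves 2,597 tokens for the completion.
print(max_completion_tokens(4097, 1500))
```

If the prompt alone exceeds the limit, the request fails regardless of `max_tokens`, so it is worth counting prompt tokens (e.g., with a tokenizer such as tiktoken) before sending the call.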