Which model to use for high token input?

I’m looking to deploy a model that can ingest anywhere from 2,000 to 70,000 tokens of input and produce an output of 1,000 to 10,000 tokens. This is my first deployment and foray into OpenAI, and it looks like GPT-4-1106-preview would be the best fit for this use case, but it isn’t yet ready for production. The model will be deployed on Bubble as a prototype. I’m located in California; what are my best options for region and model type for this kind of build?
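For context on the token budget: a quick sanity check like the sketch below can confirm whether a worst-case request fits a candidate model's context window before committing to one. This is an illustrative Python sketch, not an official tool; the context sizes are assumptions based on published model specs at the time, and the chars-per-token heuristic is a rough approximation.

```python
# Rough capacity check: does input + requested output fit a model's context?
# Context window sizes below are assumptions from published specs and may change.
CONTEXT_WINDOWS = {
    "gpt-4-1106-preview": 128_000,  # 128k context (preview at the time)
    "gpt-4": 8_192,
    "gpt-4-32k": 32_768,
}

def estimate_tokens(text: str) -> int:
    """Crude heuristic: roughly 4 characters per token for English prose."""
    return max(1, len(text) // 4)

def fits(model: str, input_tokens: int, output_tokens: int) -> bool:
    """Check that input plus requested output fits within the model's context."""
    return input_tokens + output_tokens <= CONTEXT_WINDOWS[model]

# Worst case from the question: 70,000 tokens in, 10,000 tokens out.
print(fits("gpt-4-1106-preview", 70_000, 10_000))  # 80k <= 128k -> True
print(fits("gpt-4-32k", 70_000, 10_000))           # 80k > 32,768 -> False
```

Under these assumptions, only the 128k-context preview model covers the 70k-in / 10k-out worst case; the 8k and 32k GPT-4 variants do not.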