Host OpenAI models locally - token limit

Is it possible to host OpenAI models on my own GPU server?
If so, is it possible to get rid of the token limit?

Thanks!


Not really possible. OpenAI's production models (GPT-3.5, GPT-4, etc.) are proprietary and the weights aren't released, so there's nothing to download and host on your own server. And the token limit is the model's context window, which is a property of the model itself, so self-hosting wouldn't remove it anyway.
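
If the goal is just "an LLM on my own GPU," the usual route is an open-weight model instead. Here's a rough sketch using Hugging Face transformers; the model id is only an example, pick whatever open model fits your hardware. Note the context window limit still applies:

```python
# Minimal sketch: running an open-weight model locally (NOT OpenAI's models,
# whose weights are not available). Model id and prompt are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-weight model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on a single GPU
    device_map="auto",          # spread layers across available GPU(s)
)

prompt = "Explain what a context window is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation is still bounded by the model's context window;
# hosting it yourself does not remove that architectural limit.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

To go past the built-in limit you'd need a model trained (or fine-tuned) for a longer context, not a config flag on the server.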