Looking for a way to use a large system prompt context with GPT-4

Hello everyone.
I am going to upload a PDF document that will be over 50K tokens. I don’t want to use embeddings; I just want to insert the full document into the system prompt. Is there a way to implement this?

You are bound by the context window of whatever model you are using. Unfortunately, 50K tokens exceeds the context window of every GPT model currently available, so you won’t be able to fit the whole document into the system prompt.

Any solution here will necessarily be a hack of some sort or a workaround.
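One common workaround is chunking: split the document into pieces that each fit within the context window, send each piece in its own request, then combine the per-chunk answers in a final call (a map-reduce pattern). Below is a minimal sketch of the splitting step; the function name and the words-per-token ratio are illustrative assumptions, and in practice you would count tokens with a real tokenizer such as tiktoken rather than this rough word-based estimate.

```python
def chunk_text(text: str, max_tokens: int = 6000,
               words_per_token: float = 0.75) -> list[str]:
    """Split `text` into chunks of roughly `max_tokens` tokens each.

    Uses a crude words-per-token heuristic; swap in a real tokenizer
    (e.g. tiktoken) for accurate counts.
    """
    max_words = int(max_tokens * words_per_token)
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# Each chunk would then go into its own request (e.g. as the system
# prompt of a separate chat-completion call), and the per-chunk
# answers combined in one final request.
chunks = chunk_text("lorem ipsum " * 30000, max_tokens=6000)
print(len(chunks))  # number of requests you would need
```

This trades one oversized prompt for several smaller calls, so it works for tasks that decompose per chunk (summarization, extraction) but not for questions that need the entire document in view at once.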