Realtime caching between sessions?

I’m wondering whether there is a way to keep cached tokens between gpt-realtime sessions. For something like instructions or a large system message that is identical for every realtime session, can a new session reuse the cached text tokens from a previous session to cut costs?
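To make the scenario concrete, here is a minimal sketch of what I mean: each session sends the same large instructions string via a `session.update` event right after connecting. The event shape follows the Realtime API docs; the instructions text and the helper function are just placeholders for illustration. The question is whether the second session's byte-identical prefix can be billed at the cached input-token rate.

```python
import json

# Placeholder for a large system prompt reused verbatim across sessions.
SHARED_INSTRUCTIONS = "You are a support agent. (imagine several thousand tokens here)"

def session_update_event(instructions: str) -> str:
    """Build the session.update event sent after a realtime session opens."""
    return json.dumps({
        "type": "session.update",
        "session": {"instructions": instructions},
    })

# Two separate sessions send a byte-identical instructions prefix;
# the open question is whether session B gets cached-token pricing
# for the prefix that session A already paid for.
event_session_a = session_update_event(SHARED_INSTRUCTIONS)
event_session_b = session_update_event(SHARED_INSTRUCTIONS)
assert event_session_a == event_session_b  # identical prefix across sessions
```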