When I pasted a prompt of more than 10,000 characters, the system kept responding indefinitely and couldn’t handle the very large text. Why is this? What is the maximum number of characters a prompt can support? Does a longer prompt make the response slower, and will there be a future upgrade to support very long text input?
Every model has a context length, counted in tokens. A model cannot generate beyond its context length.
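To get a feel for this, a common rule of thumb is that one token is roughly 4 characters of English text. Here is a minimal sketch estimating whether a prompt fits in a context window; the 4096-token limit and 512-token reply reserve are illustrative assumptions, not the limits of any particular model, and for exact counts you would use the model's actual tokenizer rather than this heuristic:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Real tokenizers can differ significantly, especially for
    # non-English text or code.
    return max(1, len(text) // 4)

def fits_in_context(prompt: str,
                    context_limit: int = 4096,     # assumed limit, varies by model
                    reserve_for_reply: int = 512) -> bool:
    # The prompt and the generated reply share the same context window,
    # so some tokens must be reserved for the model's answer.
    return estimate_tokens(prompt) + reserve_for_reply <= context_limit

prompt = "x" * 10_000  # a 10,000-character prompt, like the one in the question
print(estimate_tokens(prompt))  # -> 2500
print(fits_in_context(prompt))  # -> True (under the assumed 4096-token limit)
```

So a 10,000-character prompt is on the order of 2,500 tokens, which may or may not fit depending on the model's context length and how much room is left for the reply.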
As @sps explains, you need to be aware of the token limit that applies to a conversation with the model.
Here is a pretty comprehensive explainer, including limits by model, costs, etc.: What Is the ChatGPT Token Limit and Can You Exceed It?