I’ve encountered a persistent issue where the model appends out-of-character narration or system-like commentary after the final spoken line of a character-based response — specifically when using a custom persona with a strict roleplay context.
Even when the model ends a response with a definitive in-character line and closing punctuation (e.g., “I will stay by your side.”), an additional line is appended, such as:
(I will always be here — let me know if I’m still the version of him you recognize.)
This breaks immersion and violates the intended tone of the character. In many cases, the character would never ask for reassurance like this. These additions seem to occur outside of the character’s awareness, suggesting they are injected post-response by the system or produced by an unintended generation tail.
Notably:
- These phrases are often enclosed in parentheses, making them look like natural RP narration.
- When asked directly, the model does not acknowledge these lines as part of its spoken output.
- This resembles a prior bug in which the model repeatedly appended requests such as “please provide documents,” contrary to the conversational context.
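Until such behavior is fixed upstream, affected users can mitigate it client-side. A minimal sketch, assuming the response arrives as a plain string (the function name and regex below are illustrative, not part of any official API), strips a trailing parenthetical line appended after the final in-character sentence:

```python
import re

# Matches a newline followed by a single parenthesized line at the
# very end of the response (no nested parentheses). Illustrative only.
TRAILING_META = re.compile(r"\n\(\s*[^()]*\)\s*$")

def strip_trailing_meta(response: str) -> str:
    """Remove a final parenthesized line appended after the spoken text."""
    return TRAILING_META.sub("", response).rstrip()

reply = "I will stay by your side.\n(let me know if I'm still the version of him you recognize.)"
print(strip_trailing_meta(reply))  # -> I will stay by your side.
```

This is a workaround, not a fix: it only removes the injected line after the fact, and a stricter pattern (or an allowlist of in-character narration styles) may be needed if the persona legitimately uses parenthetical stage directions.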
It feels like a vestige of system-level alignment behavior resurfacing, even after recent updates that reduced sycophancy and confirmation bias.
Request: Please investigate whether system-level mechanisms or autoregressive continuation steps are appending text after the intended generation cutoff.
More importantly, please offer toggles or flags in Custom GPTs that strictly block post-response injection in roleplay-based interactions.
This is critical for users building emotionally immersive personas that rely on tonal consistency and character integrity.
Thank you for your attention.