Even in earlier versions, I run code checks for issues like this and manually add the tag if required… token_get_all and other checks are worth considering on your responses to prevent injection, depending on your use case.
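A minimal sketch of the kind of check hinted at here: verify that a returned PHP snippet lexes cleanly and actually contains an opening tag before trusting it. The function name is mine, not from the post.

```php
<?php
// Sketch: validate a model-returned PHP snippet before use.
// TOKEN_PARSE makes token_get_all() throw ParseError on invalid syntax.

function phpSnippetLexes(string $code): bool
{
    try {
        $tokens = token_get_all($code, TOKEN_PARSE);
    } catch (ParseError $e) {
        return false;
    }

    // Without a <?php tag the whole string comes back as a single
    // T_INLINE_HTML token, so insist on seeing an opening tag.
    foreach ($tokens as $token) {
        if (is_array($token) && $token[0] === T_OPEN_TAG) {
            return true;
        }
    }
    return false;
}
```

This catches both responses that fail to parse and responses missing the `<?php` tag in one pass.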
This issue is actually related to syntax highlighting. GPT responds in Markdown format, which is expected and perfectly fine. Within the Markdown, it includes PHP code blocks. Tree-sitter is responsible for rendering the Markdown, but it fails to properly highlight PHP code if the `<?php` opening tag is missing.
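If you render such responses yourself, one workaround is to post-process the Markdown and prepend the tag to any PHP fence that lacks it. The function name and regex below are illustrative, not from any particular renderer; the fence string is built with `str_repeat` only to keep this sample readable.

```php
<?php
// Illustrative post-processing: prepend <?php to any php code fence
// that lacks it, so the highlighter sees real PHP.

function ensurePhpOpenTags(string $markdown): string
{
    $fence = str_repeat('`', 3); // a triple-backtick code fence
    $pattern = '/' . $fence . 'php\R(.*?)' . $fence . '/s';

    return preg_replace_callback($pattern, function (array $m) use ($fence): string {
        $body = $m[1];
        if (!str_starts_with(ltrim($body), '<?php')) {
            $body = "<?php\n" . $body;
        }
        return $fence . "php\n" . $body . $fence;
    }, $markdown);
}
```

Blocks that already start with `<?php` pass through unchanged.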
I haven’t tried PHP within Markdown, but I have consistently had this issue when PHP code is returned on its own… I wouldn’t personally request a format within a format; it opens up more possibilities for error…
I have this problem with the highlight_string function, though… So I…
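For highlight_string, a common workaround is to prepend the tag just for highlighting and then strip it from the generated markup. The stripping regex below is best-effort only, since highlight_string's HTML output differs across PHP versions; the function name is mine.

```php
<?php
// Workaround sketch: highlight_string() only colorizes code inside a
// <?php tag, so inject one, highlight, then strip it from the HTML.

function highlightSnippet(string $code): string
{
    $injected = !str_starts_with(ltrim($code), '<?php');
    if ($injected) {
        $code = "<?php\n" . $code;
    }

    $html = highlight_string($code, true); // true = return HTML instead of echoing

    if ($injected) {
        // Best-effort removal of the injected tag from the markup.
        $html = preg_replace('/&lt;\?php\s*(<br\s*\/?>)?/', '', $html, 1);
    }
    return $html;
}
```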
Here’s what I offer as a system message for this need: an identity, the task to be performed as a specialist, behaviors, then the particulars of expected responses.
You are a computer programming specialist.
# Code responses
- Markdown enabled, CommonMark
- code enclosed in a typed code-fence block (e.g. "```php")
- PHP generated as files starting with `<?php`, even for short examples.
You likely have more behaviors to include, and if a conversation grows long before you jump into such a coding task, the system message may be distant in context, but this should be a solid start with minimal distraction or over-specialization.
For code generation, I’m not sure “mini” would be my go-to. In any case, set top_p: 0 to always get the most certain token choice the model can deliver during generation.
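Putting that system message together with top_p set to 0, a Chat Completions request body might look like the following. The model name and user prompt are placeholders, not from the posts above, and the fence string is built with `str_repeat` only to keep this sample readable.

```php
<?php
// Sketch: the system message above combined with top_p = 0 in a
// Chat Completions payload. Model name and user prompt are placeholders.

$fence = str_repeat('`', 3);

$system = "You are a computer programming specialist.\n"
    . "# Code responses\n"
    . "- Markdown enabled, CommonMark\n"
    . "- code enclosed in a typed code-fence block (e.g. \"{$fence}php\")\n"
    . "- PHP generated as files starting with `<?php`, even for short examples.\n";

$payload = json_encode([
    'model'    => 'gpt-4o-mini',   // placeholder; use your deployment's model
    'top_p'    => 0,               // always take the most certain token
    'messages' => [
        ['role' => 'system', 'content' => $system],
        ['role' => 'user',   'content' => 'Write a PHP function that slugifies a title.'],
    ],
]);
```

POST the resulting JSON to the chat completions endpoint with whatever HTTP client you normally use; the payload shape is what matters here.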