GPT-4.1 and GPT-4.1-mini system instructions via API

I am used to giving system instructions.
This is part of the instructions I give:

> When providing PHP code blocks, always use the PHP open tag `<?php` to start the code block.

gpt-4.1: follows the instructions and starts the code block with the `<?php` open tag.

gpt-4.1-mini: does not follow the instructions and returns the PHP code without the `<?php` open tag.

Do you experience the same?

Even with earlier versions I ran code checks for things like this and manually added the tag if required… `token_get_all` and other checks are worth considering on your responses to prevent injection, depending on your use case.
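A minimal sketch of that kind of check, assuming the snippet arrives as a plain string (the `ensure_open_tag` helper name is mine):

```php
<?php

// Sketch: prepend the open tag only when it is actually missing.
function ensure_open_tag(string $code): string
{
    $code = trim($code);

    // Without a leading <?php, token_get_all() treats the whole
    // string as a single T_INLINE_HTML token.
    $tokens = token_get_all($code);
    $first  = $tokens[0] ?? null;

    if (!is_array($first) || $first[0] !== T_OPEN_TAG) {
        $code = "<?php\n" . $code;
    }

    return $code;
}
```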

@phyde1001

This issue is actually related to syntax highlighting. GPT responds in Markdown format, which is expected and perfectly fine. Within the Markdown, it includes PHP code blocks. Tree-sitter is responsible for rendering the Markdown, but it fails to highlight the PHP code properly if the `<?php` opening tag is missing.

I haven’t tried PHP within Markdown format, but I have consistently had this issue when PHP code is returned on its own… I wouldn’t personally request a format within a format; it opens up more possibilities for error…

I have this problem with the `highlight_string` function though… So I…

Trim it, `strpos` for `<?php`, replace it, and concat it back:

```php
highlight_string("<?php\r\n" . $Code . "\r\n?>", 1);
```
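Spelled out, that trim/locate/replace/concat sequence might look like this (a sketch; `$Code` holds the raw snippet from the model):

```php
<?php

// Strip whatever tags the model did or did not emit,
// then re-wrap the snippet with known-good ones.
$Code = trim($Code);

if (($pos = strpos($Code, '<?php')) !== false) {
    $Code = substr_replace($Code, '', $pos, strlen('<?php'));
}
$Code = str_replace('?>', '', $Code);

// The second argument makes highlight_string() return the
// HTML rather than echoing it directly.
echo highlight_string("<?php\r\n" . trim($Code) . "\r\n?>", true);
```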

Here’s what I offer as a system message for this need: an identity, the specialist task to be performed, behaviors, and then the particulars of expected responses.

You are a computer programming specialist.

# Code responses

- Markdown enabled, CommonMark
- Code enclosed in a typed code fence block (e.g. "```php")
- PHP generated as files starting with `<?php`, even for short examples.

Performs an initial task:

You likely have more behaviors to include, and if a conversation grows too long before jumping into such a coding task the system message may become distant, but this should be a solid start with minimal distraction or over-specialization.

For code generation, I’m not sure “mini” would be my go-to. In any case, set `top_p: 0` to always get the most certain token choice the model can deliver during generation.
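For what it’s worth, a minimal sketch of wiring that up in PHP with cURL; the file name, user prompt, and environment variable are assumptions for the example:

```php
<?php

// Sketch: one chat completions request with top_p pinned to 0.
// Assumes the system message above is saved as system_prompt.md
// and that OPENAI_API_KEY is set in the environment.
$payload = [
    'model'    => 'gpt-4.1',
    'top_p'    => 0,
    'messages' => [
        ['role' => 'system', 'content' => file_get_contents('system_prompt.md')],
        ['role' => 'user',   'content' => 'Write a PHP function that slugifies a title.'],
    ],
];

$ch = curl_init('https://api.openai.com/v1/chat/completions');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . getenv('OPENAI_API_KEY'),
    ],
    CURLOPT_POSTFIELDS     => json_encode($payload),
]);

$response = json_decode(curl_exec($ch), true);
curl_close($ch);

echo $response['choices'][0]['message']['content'] ?? '';
```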
