Here’s a tip:
- OpenAI: expose the model's top_p and temperature (or reconfigure them lower) so the sampler doesn't introduce mistakes when the model recites code back verbatim to the apply_patch tool (or whatever else it takes for reliability).
This assumes the code base in context is always the current state of the full write-enabled files, and that the model knows they are the live versions intra-patch:
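As a sketch of what "expose or reconfigure lower" could mean in practice, here is a request payload pinned to near-greedy sampling. `temperature` and `top_p` are the real sampling parameters; the model name, tool schema, and helper function are illustrative, not OpenAI's actual configuration:

```python
# Sketch: pin sampling down for tool calls that must recite code verbatim.
# temperature and top_p are standard sampling parameters; everything else
# here (model name, tool schema, helper) is invented for illustration.

def build_patch_request(messages: list[dict]) -> dict:
    """Build a request payload with near-greedy sampling for patching."""
    return {
        "model": "codex-5.2",          # illustrative model name
        "messages": messages,
        "temperature": 0.0,            # greedy: no sampling noise in recited code
        "top_p": 0.1,                  # belt-and-suspenders nucleus cutoff
        "tools": [{
            "type": "function",
            "function": {
                "name": "apply_patch",
                "description": "Apply a patch to a file",
                "parameters": {"type": "object"},
            },
        }],
    }

payload = build_patch_request([{"role": "user", "content": "Fix the import."}])
print(payload["temperature"], payload["top_p"])
```

The point of the low values: when the task is copying existing lines exactly, any sampling spread only adds ways to mangle the context block.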
Codex 5.2 (via the API) says:
…
- Patch one change at a time (imports, then a single function signature, then a single return statement, etc.).
- When a patch failed, I located the actual current text (as shown in the file content) and rewrote the hunk to match that exact text.
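A single-change hunk, in roughly the envelope shape apply_patch expects, looks like this (the file name and lines are invented for illustration):

```
*** Begin Patch
*** Update File: app/utils.py
@@ def read_config():
-    return parse(path)
+    return parse(path, strict=True)
*** End Patch
```

Keeping each envelope to one hunk like this makes an "Invalid Context" failure cheap to diagnose and retry.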
Concrete examples from our history:
- The patch that tried to change
xxx and ...
failed because the file already had a different expression. Fix: patch the exact existing line (or patch the return tuple instead, which did exist).
- The patch that expected a `return True` in `_xxx` failed because the function didn't have that exact line in that exact place. Fix: patch the exact block around the lazy-bq return that did exist, and adjust the function signature and returns accordingly.
Did the apply_patch tool fail to behave as its description says?
From what we observed: it behaved exactly as described.
- It is strict about matching the provided context.
- It does not do approximate matching, AST-based matching, or whitespace-insensitive matching.
- When it says “Invalid Context”, it really means “I can’t find this exact sequence of lines”.
So: no, it didn’t behave unexpectedly; the failures were due to my patch hunks not matching the file’s exact current text.
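That strictness is easy to model. A minimal sketch (my code, not OpenAI's) of what "Invalid Context" effectively means: the hunk's context lines must appear in the file as an exact line sequence, or the patch is rejected:

```python
def find_context(file_lines: list[str], context: list[str]) -> int:
    """Return the index where `context` appears verbatim, or raise.

    Sketch of the observed behavior: no fuzzy, AST-based, or
    whitespace-insensitive matching -- only an exact line-sequence match.
    """
    n, m = len(file_lines), len(context)
    for i in range(n - m + 1):
        if file_lines[i:i + m] == context:
            return i
    raise ValueError("Invalid Context")  # exact sequence not found

lines = ["def f(x):", "    y = x + 1", "    return y"]
assert find_context(lines, ["    y = x + 1", "    return y"]) == 1
try:
    find_context(lines, ["    return  y"])  # extra space: no match
except ValueError as e:
    print(e)  # Invalid Context
```

A single stray space in the context block, as in the last call, is enough to fail the whole hunk.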
These errors go beyond what "fuzzy" matching could fix.
I have partly automated a linter, but re-sending the file's lint report to the AI on every tool call (or injecting its text) doesn't seem as productive as letting the AI record patch-progress status in this report, in the space reserved for per-issue updates, within a context where the tool-call history makes sense.
One moar tip:
Implement a file-normalization workflow for the non-binary files the apply_patch tool can understand: decide what you want the AI to see and what you want the patch tool to accept or reject, and track each input file's original end-of-line sequence (LF, or the CR+LF typically seen in Windows files) so patches apply correctly regardless.
So… you then don't have to write detectors, guards, and repairs for past patches the AI model made, such as this in action:
[warning] Mixed line endings detected:
- /functions_apply_patch_schema.txt (LF=34, CRLF=1, CR=0)
You will be prompted to convert to: UNIX (LF), Windows (CRLF), or use as-is when tracking.
Actions:
0 - Track a new file name (permitted destination; not created yet)
<number(s)> - Track file(s) (e.g., '1' or '1,3,5')
all - Track all existing files
clear - Untrack all files
[Enter] - Return to chat
> 4
[warning] /functions_apply_patch_schema.txt has mixed line endings (LF=34, CRLF=1, CR=0)
Choose how to proceed:
1 - Convert to UNIX (LF)
2 - Convert to Windows (CRLF)
3 - Use as-is
4 - Skip tracking this file
Select [1/2/3/4]: 2
[workspace] Tracking: /functions_apply_patch_schema.txt
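The LF/CRLF/CR counts in that warning come from straightforward byte-level detection, and the conversion step is a single rewrite of every terminator. A minimal sketch (the function names are mine, not the tool's):

```python
import re

def count_line_endings(data: bytes) -> dict:
    """Count CRLF, lone LF, and lone CR terminators in raw file bytes."""
    crlf = data.count(b"\r\n")
    lf = data.count(b"\n") - crlf        # LF not preceded by CR
    cr = data.count(b"\r") - crlf        # CR not followed by LF
    return {"LF": lf, "CRLF": crlf, "CR": cr}

def normalize(data: bytes, eol: bytes = b"\n") -> bytes:
    """Rewrite every line terminator to one chosen sequence.

    The lambda sidesteps re.sub's replacement-escape processing, so the
    chosen eol bytes are inserted literally.
    """
    return re.sub(rb"\r\n|\r|\n", lambda _: eol, data)

sample = b"a\nb\r\nc\rd\n"
counts = count_line_endings(sample)
print(counts)                     # {'LF': 2, 'CRLF': 1, 'CR': 1}
print(normalize(sample, b"\r\n"))  # b'a\r\nb\r\nc\r\nd\r\n'
```

Counting CRLF first and subtracting it from the raw `\n` and `\r` counts is what keeps a mixed file like the one in the transcript (LF=34, CRLF=1) from being double-counted.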