This probably isn’t directly related to your book, but I thought I’d point out that it’s worth capturing all of these pseudocode->code example pairs so we can fine-tune better coding models. The current generation of coding models is trained on only the code half of the equation; they’re missing the intent, which is captured in the pseudocode.
If you start capturing these intent->code mappings, I suspect you’ll end up with a model that’s not only SoTA at coding tasks but likely SoTA at reasoning as well, because the two tasks are closely intertwined.
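To make the capture step concrete, here’s a minimal sketch of logging each pair as a JSONL record — the schema, field names, and filename are all my own assumptions for illustration, not anything from your setup:

```python
import json

def record_pair(pseudocode: str, code: str,
                path: str = "intent_code_pairs.jsonl") -> None:
    """Append one intent->code training example as a JSONL record.

    The field names ("intent", "code") are a hypothetical schema; any
    format that keeps the two halves aligned per-example would work.
    """
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"intent": pseudocode, "code": code}) + "\n")

# Example pair: the pseudocode carries the intent, the code its realization.
record_pair(
    pseudocode="for each item in the cart, add price times quantity to the total",
    code="total = sum(item.price * item.qty for item in cart)",
)
```

JSONL keeps each example self-contained on one line, which makes it easy to stream or shuffle later when assembling a fine-tuning set.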