Some of the stuff I run across isn’t necessarily ‘foundational’, and the depth isn’t always that extensive (at least math-wise), but it explores an idea I’m interested in which perhaps doesn’t lend itself to formal exploration, when few other papers will touch it. Probably because it’s hard to measure precisely and conclude anything concretely.
This is one example:
It’s particularly relevant, I think, because of GPT-4’s visual capabilities, where you can generate code directly from UML modeling.
The biggest takeaway I got from this was how it used the Object Constraint Language (OCL) to enhance GPT-4’s code generation capability. When I go to experiment with it, I’ll need to read that part more carefully to see what my skimming missed.
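To make the OCL idea concrete for myself, here’s a rough sketch of what a constraint buys you. This is my own illustration, not code or class names from the paper: a hypothetical `Account` class from a UML diagram, with the OCL invariant `context Account inv: self.balance >= 0` translated into a runtime check. The point is that the invariant gives the model a precise spec the diagram alone can’t express.

```python
# Hypothetical example (not from the paper): a UML class Account whose
# OCL invariant is `context Account inv: self.balance >= 0`. Supplying
# that invariant alongside the diagram lets the generated code enforce
# it directly, e.g. as a check after every state change.

class Account:
    def __init__(self, balance: float = 0.0) -> None:
        self.balance = balance
        self._check_invariant()

    def deposit(self, amount: float) -> None:
        self.balance += amount
        self._check_invariant()

    def withdraw(self, amount: float) -> None:
        # Validate before mutating so a failed withdrawal leaves
        # the object in a consistent state.
        if self.balance - amount < 0:
            raise ValueError("OCL invariant violated: balance must be >= 0")
        self.balance -= amount
        self._check_invariant()

    def _check_invariant(self) -> None:
        # Direct translation of the OCL invariant: self.balance >= 0
        if self.balance < 0:
            raise ValueError("OCL invariant violated: balance must be >= 0")
```

Without the OCL line, the model only sees attributes and method signatures; with it, the negative-balance rule is explicit instead of something it has to guess.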
They used PlantUML (text-based UML), so some of that is less useful now given image support, but it’s still interesting. Maybe converting UML diagrams to PlantUML first would work better than going straight to code, especially if the diagrams are extensive and disparate.
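The two-step idea above could be sketched as simple prompt assembly: normalize the diagram into PlantUML text, then ask for code from that text plus the OCL constraints, rather than image-to-code in one shot. This is a minimal sketch with my own prompt wording and a made-up `Account` model, not the paper’s actual prompts:

```python
# Sketch of the diagram -> PlantUML -> code idea (my wording, not the
# paper's prompts). The PlantUML carries structure; the OCL pins down
# behavior the diagram alone can't express.

PLANTUML = """\
@startuml
class Account {
  +balance : float
  +deposit(amount : float)
  +withdraw(amount : float)
}
@enduml
"""

OCL = "context Account inv: self.balance >= 0"

def build_codegen_prompt(plantuml: str, ocl: str) -> str:
    """Assemble a single code-generation prompt from the two text inputs."""
    return (
        "Generate Python classes implementing this UML model.\n\n"
        f"PlantUML:\n{plantuml}\n"
        f"OCL constraints to enforce at runtime:\n{ocl}\n"
    )

prompt = build_codegen_prompt(PLANTUML, OCL)
```

One appeal of the intermediate PlantUML step: if the source diagrams are extensive and disparate, the textual form can be merged and de-duplicated before any generation happens.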