Raising something I’ve noticed when using ChatGPT for Python development work.
When ChatGPT generates code that uses third-party libraries, it consistently recommends outdated version pins, often a year or more behind the latest release. When I check those pins against osv.dev, many turn out to have known CVEs.
For example, if I ask for a Flask-based API, I frequently get flask==2.3.3 or flask==3.1.2, both of which have CVEs. Same pattern with requests==2.31.0 and django==6.0.1. These show up reliably across different prompts and project types.
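For anyone who wants to reproduce the check, a pinned version can be looked up against the osv.dev query API with a few lines of stdlib Python. This is a minimal sketch, not an official client; which versions come back flagged will change as new advisories are published:

```python
import json
import urllib.request

# Public OSV API endpoint for single-package queries
OSV_URL = "https://api.osv.dev/v1/query"

def osv_payload(name: str, version: str) -> dict:
    """Build the query body for a single PyPI package pin."""
    return {"package": {"name": name, "ecosystem": "PyPI"}, "version": version}

def known_vulns(name: str, version: str) -> list:
    """Return OSV advisory IDs affecting the given pin (makes a network call)."""
    req = urllib.request.Request(
        OSV_URL,
        data=json.dumps(osv_payload(name, version)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # OSV returns an empty object when no advisories match
    return [v["id"] for v in result.get("vulns", [])]

# Usage (requires network access):
#   known_vulns("flask", "2.3.3")  returns a list of OSV advisory IDs
```

The same payload shape works for any PyPI pin, so it's easy to loop over a whole requirements file.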
I understand this is likely a training data effect: versions that were popular in tutorials and Stack Overflow answers years ago are overrepresented, so they get recommended more often. But it creates a practical problem: the generated code installs fine, there's no visible error, and developers may not think to run a security audit on a dependency list the AI just produced for them.
Question: Is there any planned work on making version suggestions more security-aware? For example, checking against an advisory database before recommending a specific version pin?
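To make the suggestion concrete, here's a toy sketch of what I mean by "security-aware": before emitting a pin, filter the candidate versions through an advisory lookup and take the newest clean one. The advisory data below is a stand-in with placeholder IDs; a real implementation would query osv.dev or a similar database:

```python
# Toy sketch: pick the newest candidate version with no known advisories.
# The advisory map here is hypothetical illustration data, not real CVE data.

def safest_pin(candidates, advisories):
    """Return the newest candidate with no known advisories, else None.

    candidates: version strings sorted oldest-to-newest
    advisories: dict mapping version -> list of advisory IDs
    """
    for version in reversed(candidates):
        if not advisories.get(version):
            return version
    return None  # every candidate is affected by something

# Hypothetical data for illustration only
candidates = ["2.3.3", "3.0.0", "3.1.2", "3.1.3"]
advisories = {"2.3.3": ["GHSA-xxxx"], "3.1.2": ["GHSA-yyyy"]}
print(safest_pin(candidates, advisories))  # → 3.1.3
```

Even this naive version would avoid the failure mode above, since a pin with a published advisory would never be the top suggestion.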
A broader check showed the same behavior in every major LLM I tested, so this is a shared problem rather than something specific to ChatGPT. But since ChatGPT is the most widely used, addressing it here would have the largest practical impact.