Too many docstrings in code

I don't know what's going on, but lately ChatGPT has been generating code with far too many unnecessary docstrings. Even when there's enough context to produce complete functions, the responses feel unfinished: they're padded with placeholder docstrings and "examples" and need tedious revision before they're ready to run (see the sketch at the end of this post). Despite having all the necessary information, such as uploaded files and project details, the assistant often fails to integrate directly with the provided context. Instead, it makes assumptions that don't align with the project, returns placeholder responses that add no value, or produces generalized examples that don't match the structure of the provided files.

OpenAI needs to focus on generating complete, ready-to-run code tailored to the given context, avoiding unnecessary abstractions and placeholders, and processing uploaded files intelligently instead of padding responses with filler. They should also consider letting users disable verbose docstrings entirely, especially in well-understood projects where they only slow down development.

Has anyone else noticed this? It feels like a step back in usability.
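To make it concrete, here's a hypothetical sketch (not actual model output, and the function names are made up) of the kind of placeholder-heavy stub I keep getting back, next to the complete function I'd expect when the file layout has already been provided:

```python
import csv

# What I keep getting: a stub padded with a long docstring and TODOs
# (hypothetical illustration, not verbatim ChatGPT output)
def load_user_records_stub(path):
    """
    Load user records from a file.

    Example:
        records = load_user_records_stub("users.csv")

    TODO: Implement parsing logic based on your file format.
    """
    # Placeholder: replace with your actual implementation
    pass


# What I'd expect instead, given that the CSV layout was already uploaded:
# a complete, runnable function with no filler.
def load_user_records(path):
    # Read each row of the CSV into a dict keyed by the header row
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```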