GPT-4o - Python code analysis -> Mermaid - garbage response

I have a simple prompt that analyzes source code and generates a flowchart diagram in Mermaid syntax.
– For C# source code the prompt works fine.
– For Python source code the prompt takes 2 minutes on average and returns just a bunch of \r\n sequences.

Any idea what I am doing wrong?

Prompt text:

Analyze source code of a project.
The project consists of one or more source code files.
Combine the logic of all source code files.
Generate a high-level flow diagram of what the project does.
The diagram must be in the syntax of mermaid flow diagram.
The diagram must be short, not more than 20 steps in total.

The diagram must be high-level, without technical details such as Garbage Collection (GC) or exit functions.
The diagram must not include names of classes.
The diagram must not include the source code file name.
The diagram must not include comments.

you’re using GPT-4o :laughing:

you can try using logit bias to push the probability of those tokens down

do know that the model can get crafty and construct "\r\n\r\n\r\n" in several different ways - so biasing a single token probably won’t be enough; you’ll have to use tiktoken’s encoding_for_model("gpt-4o") to find all the variants. GitHub - openai/tiktoken: tiktoken is a fast BPE tokeniser for use with OpenAI's models.
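For illustration, a minimal sketch of building such a logit_bias map (the helper name is hypothetical; the token ids would come from tiktoken at runtime):

```python
def build_logit_bias(token_ids, bias=-100):
    """Map token ids to a bias value for the OpenAI `logit_bias` parameter.

    Values range from -100 to 100; -100 effectively bans a token.
    The API expects the keys to be token ids (as strings in JSON).
    """
    return {str(tid): bias for tid in set(token_ids)}

# Collecting the ids for the various newline spellings would look roughly
# like this (assumes tiktoken is installed; it fetches the encoding on
# first use):
#
#   import tiktoken
#   enc = tiktoken.encoding_for_model("gpt-4o")
#   ids = []
#   for s in ("\n", "\r", "\r\n", "\n\n", "\r\n\r\n"):
#       ids.extend(enc.encode(s))
#   bias = build_logit_bias(ids)
#
# `bias` is then passed as the `logit_bias` argument of the chat
# completions request.
```

Note this only discourages the specific newline tokens you enumerate; as mentioned above, the model can still assemble whitespace from other token combinations.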

one other thing you can try is to add the following to the very bottom of your prompt:

start your response with “```flowchart\n”.

Thanks for the reply.
Added the suggested line:

start your response with “```flowchart\n”.

Did not help - still garbage in the response.
Tried stripping the comments - still garbage.

Presented the code as text (wrapped into """ … """).
At least I received an error:
- "message": "We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON. If you have trouble figuring out how to fix this, please contact us through our help center at

Dropped the idea with OpenAI and switched to Claude 3 Opus.
Claude Opus handles it well, with no unexplainable errors.

Update: after switching to GPT-4 (not 4o), the error is resolved.
The root cause lies somewhere in the new model's specifics.