Although I can’t offer insight into the why, I’ve successfully instructed the API to return all results formatted in HTML. That said, showing the API how you want each response to look may do the trick. In my case, I provided numerous examples where each example response from “assistant” is formatted with heading, paragraph, and list tags.
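To illustrate, here’s a minimal sketch of that few-shot approach. The example exchange, model, and temperature values below are my assumptions for the sketch, not my exact prompts; adapt them to your own content:

```python
# Sketch: few-shot prompting the chat completions API so replies come back
# as HTML. The example documents here are placeholders.

few_shot_messages = [
    {"role": "system",
     "content": "Format every response as HTML using <h2>, <p>, and <ul> tags."},
    # One example exchange showing the desired output shape:
    {"role": "user", "content": "Describe the water cycle."},
    {"role": "assistant",
     "content": "<h2>The Water Cycle</h2><p>Water moves through stages:</p>"
                "<ul><li>Evaporation</li><li>Condensation</li>"
                "<li>Precipitation</li></ul>"},
]

def build_request(question: str) -> dict:
    """Assemble the payload you would send to the chat completions endpoint."""
    return {
        "model": "gpt-3.5-turbo",  # the model discussed in this thread
        "temperature": 0.2,        # lower values keep formatting more consistent
        "messages": few_shot_messages + [{"role": "user", "content": question}],
    }

request = build_request("Describe photosynthesis.")
```

The more examples you stack in `few_shot_messages`, the more reliably the model mirrors the format.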
I’m not using GPT-4 yet, but while playing around with ChatGPT-4, I found that asking it for content formatted in markdown produced consistent results.
Asking the API whether it knows how to respond as desired may simply return an undesirable response, in contrast to instructing the API how to format its response. That is, unless I have misread the issues you’re running into.
It would appear that the problem arises because markdown isn’t being considered a programming language. That may be accurate, but it doesn’t help in correctly identifying the type of content in a code block.
I asked the following question where I explicitly asked ChatGPT to set the programming language to markdown:
give 3 examples of markdown and always set the programming language in the code blocks to markdown
It’d be interesting to look into. Watching it label some very orthodox TypeScript as SCSS was enjoyable. I wonder whether they’re using a separate AI for classification, or simply asking GPT to guess?
It’s strange: I’m certain that if I copied and pasted the same code it declared to be SCSS, it would say, “No, this is not SCSS.”
ChatGPT correctly identifies that the code is markdown, but because markdown is not a programming language but rather a markup language, it prevents itself from setting it as the “programming language” for the code block, and the result ends up being more or less random!
1. Give it a code block in a language (for example, Python).
2. Ask it to write the code in another programming language (don’t name any).
3. It will “guess” the language based on the prompt; in this case it will say it’s Python regardless of the language it actually types out.
Apparently some of my TypeScript-related questions are more suitable for SCSS… yikes…
Still would need more testing to know if it’s actually true.
Actually, it will update itself live.
I gave it JS, told it to rewrite it into another language, and it chose Python.
I was suggesting that if your goal is to receive responses in markdown, showing the API what you want by providing detailed examples (and experimenting with temperature, etc.) has proven successful for me. This, of course, is based on my experience using the gpt-3.5-turbo API, not the ChatGPT UI.
This is not GPT’s doing; I’ve run into it on multiple other sites. The “bug” in this instance is actually just part of how the code-block feature in the extended markdown syntax is implemented. When you start a multiline code block with three backticks, you can specify the language like this:
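For example, a code block with an explicit language key looks like this (the Python line is just a placeholder):

````markdown
```python
print("Hello, world!")
```
````

If the `python` key after the opening backticks is omitted, the interpreter has to guess.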
The markdown interpreter (not GPT) will try to guess the programming language if a language key isn’t specified. It’s not very good at this and will often fail, as we can see if we prompt ChatGPT:
please phrase 10 code blocks without a language key, each containing 1 line of code in a different language. Above every code block there should be a headline in bold stating the language within the subsequent code block.
I suspect you’re correct, although I haven’t done any further testing. Markdown is a typesetting language, like TeX but simpler; it needs an interpreter to output a specific format like a website or a PDF. Markdown doesn’t have color support; that is all handled by the interpreter, and we can’t really know how the guessing is performed without knowing which interpreter the ChatGPT site is running.
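We can at least illustrate why naive guessing misfires. Here is a toy keyword-scoring detector, purely my own illustration and not the real highlighter’s logic, where weak signals such as braces and `$` let SCSS outscore TypeScript-looking code:

```python
# Toy illustration: a naive substring-scoring language guesser.
# The hint lists are invented for this sketch; real highlighters are far more
# sophisticated, but the failure mode is similar.

LANGUAGE_HINTS = {
    "python": ["def ", "import ", "print("],
    "javascript": ["function ", "const ", "console.log"],
    "scss": ["$", "{", "}"],  # braces and $ are weak signals matching lots of code
}

def guess_language(code: str) -> str:
    """Score each language by how many of its hint substrings appear."""
    scores = {lang: sum(hint in code for hint in hints)
              for lang, hints in LANGUAGE_HINTS.items()}
    return max(scores, key=scores.get)

# TypeScript-looking code trips the scss hints (template literal + braces):
snippet = "const greet = (name: string) => `${name}!`"
```

Here `snippet` matches three SCSS hints but only one JavaScript hint, so the guesser labels TypeScript as SCSS, much like the mislabeling reported above.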