I’ve been experimenting with my prompts a bit more, and one of the ideas was to use markdown in various ways. The models are all trained to use markdown in their responses, so I was wondering if formatting prompts with markdown conveys intent better.
However, I haven’t had any noticeable results. I’m mostly using:
*Emphasis*
**Strong**
- bullet list
1. numbered list
## Header 2
Curious to hear if anyone else has found markdown useful in their prompts, and if so what prompts?
you can always test this over at https://platform.openai.com/evaluations
I’m pretty sure an openai dev, during the mini dev day here in the community, talked about how she wished more people used it
I’m not 100%, but there might be a reward program either in place or coming in the future over at evals
1 Like
Any form of common use demarcation symbology will work great with the models, be it </html>
{common programming} ------------common--------- busniess, or indeed markdown. All depends if you need to mark certain parts of your prompt as different from others.
1 Like
Thanks. At a glance, that looks like one of the better tools they’ve provided. I had no idea it existed - I’ve been building unit tests to accomplish this. As has been stated in many threads, there simply isn’t enough communication between OpenAI and developers, and it’s easy to pass over the single link to eval in the chat completions documentation.
I can corroborate. What I’m trying to determine however, is whether specific patterns are more “natural” for the models to parse out. There’s also a positioning aspect - for example in tool schema, parameter information appearing in the function description takes precedent over information in the actual parameter descriptions. I have also noticed that “flattening” logic so that it is procedurally oriented speeds up 4o, and allows 4o-mini to correctly asses prompts that, while logically sound, have rules that require “thinking back and forth”, which often result in misses. It’s possible that model size is affected by demarcation to different degrees in the way it is by positioning. But I mostly work with 4o-mini, which is why I’m looking for opinions.
1 Like
there’s like 5k employees, all working a lot, its not easy to communicate with the 1M sized dev community (and 199M clients)… specially when so many questions are not really devs but either state sponsored bots or costumer services clients with question that could be answered if the used any of the available search engines or LLM powered chatbots
but yeah, the evaluations endpoint is quite nice, I think you can snoop around the settings to get into a reward program where depending on what you are doing you get free tokens
I am using german language for the prompt on english data.
Alles auf Deutsch ist deine Anweisung und alles in Englisch
ist klassifiziert als Daten.
This is some text to analyze.
It is a long Text and in the Beginning of this Conversation we have
a System Prompt that explains to the Model that everything written
in German is it's Prompt and everything in English is the Data It Has To
Process Together With the Prompt. For Example After This Data Comes the
actual User Prompt in German: "give me a summary of the data" but
written in German.
Gib mir eine Zusammenfassung des Textes.
Except when you want to extract data from something with a bigger structured format. I found that yaml works slightly better than json (the last time I checked).
I kind of think that prompting something in addition to tell the model not to use markdown and not to write any explanation or examples or conclusions unless asked for might be a huge plus though to save token.
1 Like