Get a formatted response from gpt-3.5-turbo

I’m using the gpt-3.5-turbo API in my Python application. My prompt looks like this:

Use the following pieces of information to answer the question.
----------------

Information about {company_name} website:
{context}

Question:
----------------
{question}

To display a more readable format to the user, your answer must be formatted in HTML.

I want to get answers in HTML format so I can display them nicely (bold, italic, and link text styled with Tailwind CSS `prose`) instead of spitting everything out in a single `p` tag, but the model isn’t following the prompt and isn’t returning a formatted response.

How should I tell it to always format the response in HTML (whenever it’s necessary)?

Thanks

Few-shot learning will be your friend here.

Give it a few examples in the prompt.

Thanks bro
Should I give the examples in the system message or in every prompt I send?

You can put them in the system message, but you may have better luck including them as the final user message right before the prompt.
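
For example, something along these lines (a rough sketch with the v0.x `openai` Python package; the system text, the example content, and the `prompt` variable are all placeholders):

```python
import openai

# One way to lay the messages out: the examples go in as a user message
# placed right before the actual prompt (placeholder content throughout).
few_shot_examples = (
    "Here are examples of the format I want:\n\n"
    "Q: how to contact openai?\n"
    "A:\n"
    '<h2>Website</h2>\n<p><a href="https://www.openai.com/">OpenAI Official Website</a></p>\n'
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You answer questions about the company website. "
                                      "Format every answer as an HTML fragment (no wrapping div)."},
        {"role": "user", "content": few_shot_examples},
        {"role": "user", "content": prompt},  # the real prompt built from context + question
    ],
)
answer_html = response["choices"][0]["message"]["content"]
```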

Just note that you’ve told us more about how you want the table to look than you’re telling GPT (how is it supposed to know you want Tailwind?). Treat GPT like a dumb assistant: be specific in your instructions and give it an example or two.

Thanks for replying, novaphil.
GPT doesn’t need to do anything about Tailwind; I didn’t say that. What I meant is that if GPT can return its response in HTML format, then I can set the “prose” class name on the parent element and Tailwind CSS will take care of the styling (for tables, heading tags, links, code, etc.).
Hope you got my point.

Here’s what you should do now.

If you want more help, provide at least two examples of a prompt and precisely the ideal result you want the model to produce.

Then we will be able to help guide you to a solution.

Okay, here are two examples.

Example 1

Question:
how to contact openai?

Result should look like this (no wrapping div):

  <p>You can contact openai using the following options</p>
  <h1>Contact OpenAI</h1>
  <p>Here are some ways to get in touch with OpenAI:</p>
  
  <h2>Email</h2>
  <p>As of my last update in September 2021, OpenAI does not publicly provide an email address for contact. Please visit the official OpenAI website for updated contact details.</p>
  
  <h2>Website</h2>
  <p><a href="https://www.openai.com/">OpenAI Official Website</a></p>
  
  <h2>Physical Address</h2>
  <p>3180 18th St, San Francisco, CA 94110</p>

  <h2>Social Media</h2>
  <p>Follow or get in touch with OpenAI on social media:</p>
  <ul>
    <li><a href="https://twitter.com/OpenAI">Twitter</a></li>
    <li><a href="https://www.linkedin.com/company/openai/">LinkedIn</a></li>
  </ul>

Example 2

Question:
how to create a twitter account?

Result should look like this (no wrapping div):

<p>Follow these steps to create twitter account:</p>
<ol>
  <li>Go to <a href="https://www.twitter.com">www.twitter.com</a></li>
  <li>Click on 'Sign Up'</li>
  <li>Enter your name in the 'Name' field</li>
  <li>Enter your phone or email in the 'Phone or email' field</li>
  <li>Enter your date of birth in the 'Date of Birth' field</li>
  <li>Click on 'Next'</li>
  <li>Verify your phone number or email address</li>
  <li>Choose a password</li>
  <li>Choose a unique username</li>
  <li>Upload a profile picture (optional)</li>
  <li>Write a short bio about yourself (optional)</li>
  <li>Follow some accounts to get started</li>
  <li>Congratulations, you've created your Twitter account!</li>
</ol>

But that’s not HTML like you said you wanted.

To format your post as a code block so Discourse doesn’t try to render it for you, use “fences” (three backticks) alone on the lines above and below your text, like so:

```
<p>
Some text...
<table>
  <tr>
    <th>Header 1</th>
    <th>Header 2</th>
  </tr>
  <tr>
    <td>Data 1</td>
    <td>Data 2</td>
  </tr>
  <tr>
    <td>Data 3</td>
    <td>Data 4</td>
  </tr>
</table>
```

which will be rendered like this:

<p>
Some text...
<table>
  <tr>
    <th>Header 1</th>
    <th>Header 2</th>
  </tr>
  <tr>
    <td>Data 1</td>
    <td>Data 2</td>
  </tr>
  <tr>
    <td>Data 3</td>
    <td>Data 4</td>
  </tr>
</table>

When I say give us an example of the precise output you want, I mean precisely that: character for character, what exactly do you want the body of the response from the API to be?

Okay, I have edited my post with ``` fences.
Those are the precise results I want to produce.

Some suggestions:

  1. Enclose your requests in ``` fences.
  2. Experiment with Markdown, if you can, instead of HTML. GPT seems to love Markdown 🙂

Great!

Now, try feeding those examples into ChatGPT (include the fencing as part of the prompt) at the end of your prompt, in this form:

Here are some examples:

Q: <first example prompt goes here>
A:
<first answer goes here>

Q: <second example prompt goes here>
A:
<second answer goes here>

Q: <actual prompt>
A:
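
If it helps, here’s a rough Python sketch of assembling that form; the example pairs and the `question` variable below are placeholders:

```python
# Each entry is (example question, ideal HTML answer) with placeholder content.
examples = [
    ("how to contact openai?", "<h1>Contact OpenAI</h1>\n<p>...</p>"),
    ("how to create a twitter account?", "<p>Follow these steps:</p>\n<ol>...</ol>"),
]

few_shot = "Here are some examples:\n\n"
for q, a in examples:
    few_shot += f"Q: {q}\nA:\n{a}\n\n"

# Append the actual question in the same Q/A form.
full_prompt = few_shot + f"Q: {question}\nA:\n"
```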

Depending on the complexity of some of the questions and answers you may need to generate additional examples.

One thing which might help speed things along: as you’re doing this, whenever ChatGPT returns an ideal response, grab the prompt and response and add them to your examples.

This is called few-shot learning and it can be a very powerful tool—especially if you are meticulous with your examples.

Now, depending on how many of these you need to do and whether you really need the capabilities of gpt-3.5-turbo, you might consider ultimately doing a fine-tuning instead.

You’d need at least 500 examples (though 2–3 times that would definitely be better) but then you wouldn’t need to include the learning examples every time.

Because, with such short prompts and long responses, doing even 3-shot learning will basically quadruple your API costs (which is why a fine-tuning might actually be more economical).
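
If you do go the fine-tuning route, the training data is a JSONL file of example pairs. Here’s a rough sketch of writing one (the prompt/completion field names follow the legacy fine-tuning format, so check the current fine-tuning guide for exactly what your target model expects):

```python
import json

# Placeholder training pairs; a real file would need hundreds of them,
# plus whatever separator/stop tokens the fine-tuning docs recommend.
training_examples = [
    {"prompt": "how to contact openai?", "completion": " <h1>Contact OpenAI</h1>\n<p>...</p>"},
    {"prompt": "how to create a twitter account?", "completion": " <p>Follow these steps:</p>\n<ol>...</ol>"},
]

with open("training_data.jsonl", "w") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")
```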

But, you should experiment with this.

Maybe just one perfect example will be enough to keep the model on rails?

To better help others, I’d ask that you experiment a little bit and, if it’s something you can do (no data/privacy concerns), please report back your findings with successful screenshots (or even better the full text of a prompt and response).

Also, describe anything you discovered which worked particularly well, anything which surprised you, or any examples which were especially difficult to coerce the model into behaving well with.

This way your topic here can become a more valuable resource for others struggling and learning.

Okay, thanks so much bro.
I will experiment and share the results.

To get the best results, ask ChatGPT to output Markdown, then convert the Markdown to HTML with a Python library.
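
For instance, the `markdown` package (one option; `markdown2` or `mistune` would also work) does the conversion in a couple of lines. This sketch assumes `response` is the object returned by your chat completion call:

```python
import markdown  # pip install markdown

# Take the model's Markdown answer and convert it to HTML before rendering.
markdown_answer = response["choices"][0]["message"]["content"]
html_answer = markdown.markdown(markdown_answer, extensions=["tables", "fenced_code"])
```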

I have tried Markdown but I couldn’t make it work. Here is my prompt; what am I missing?

Use the following pieces of information about Twitter Help Center website to answer the users question, You answer users question using the same language user used to ask.
If the answer is not present in the sources, you may use general knowledge and logic to answer questions, but notify the user your answer is not from the sources.

Begin!
do not say or mention things like "based on the information provided..." just answer as if you already know(have knowledge) about Twitter Help Center website.

--------
Information about Twitter Help Center website:
{context}

Question:
--------
{question}

Answer the question in the same language as the language of the question.
You must format answer in markdown without mentioning the format.

Answer:

To be honest, your prompt is a bit chaotic and has too many similar instructions, which causes confusion; that’s why ChatGPT is ignoring parts of it.

Try to optimise your prompt with ChatGPT 🙂
Use the system field to describe how the chatbot should act and that it should output Markdown.
More dynamic things can be included in the prompt itself.

Repetitive instructions just cause instability in the response.
Ah, and try lowering the temperature to 0.2 or so.
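
Something along these lines (a rough sketch with the v0.x `openai` package; `context` and `question` stand in for your own variables):

```python
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0.2,  # lower temperature for more stable formatting
    messages=[
        {
            # Static behaviour lives in the system message.
            "role": "system",
            "content": "You answer questions about the Twitter Help Center website "
                       "using the provided context, in the same language as the question. "
                       "Always format your answer in Markdown.",
        },
        {
            # Dynamic content goes in the user message.
            "role": "user",
            "content": f"Information:\n{context}\n\nQuestion:\n{question}",
        },
    ],
)
```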

I have updated my prompt (asking GPT-4), set the temperature to 0.2, and prompted it to output Markdown, but it’s not doing it. No Markdown; it only uses \n characters.

Here is the updated prompt:

Use the following pieces of information about {chatbot_name} website to answer the users question.
If the answer is not present in the sources, you may use general knowledge and logic to answer questions, but notify the user your answer is not from the sources.

Begin!
Avoid phrases such as "according to the given information..." instead, respond as if you are already well-versed with the {chatbot_name} website.

--------
Information about {chatbot_name} website:
{context}

Question:
--------
{question}

Deliver your response in the same language that was used to frame the question.
Your answer should be structured in markdown format without acknowledging the formatting.

Answer:

One other note:

You will almost always get better results telling the models what to do rather than what not to do.

Give one or two precise examples of question/answer pairs and that should be sufficient to get it to do what you want.

Hi guys, I want the GPT model to produce output in a tabular format the way ChatGPT does. Though I’ve mentioned it in the prompt, it doesn’t seem to work. Has anybody tried anything similar? Please help.