Differences between the text-davinci-003 model and ChatGPT

The API responses are simple, but the ChatGPT ones are very smart. Can that be improved? Can the API answers be improved?

code:

const axios = require("axios");

// API_KEY is assumed to hold your OpenAI API key, e.g. loaded from an environment variable.
const API_KEY = process.env.OPENAI_API_KEY;

// Send the user's prompt to the completions endpoint.
const response = await axios.post("https://api.openai.com/v1/completions", {
        prompt: req.body.prompt,
        max_tokens: 3000,
        model: "text-davinci-003",
        temperature: 0.5,
        n: 1,
        stream: false
    },
    {
        headers: {
            "Content-Type": "application/json",
            "Authorization": `Bearer ${API_KEY}`
        }
    });
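
For reference, the generated text comes back in the choices array of the response body:

// The completion text is in the first choice.
const answer = response.data.choices[0].text;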

Prompt: How I can get a good picture of Mars?

ChatGPT:

To get a good picture of Mars, you can use one of the following options:

  1. Use a telescope: You can use a telescope with a camera attachment to take a picture of Mars. A telescope with a large aperture will give you a clearer image, but you need a clear night sky to get the best results.

  2. Check online sources: There are many websites and online resources that have images of Mars taken by NASA spacecraft and telescopes. Some popular websites include NASA’s Mars Exploration website (mars.nasa.gov) and the European Space Agency’s Planetary Science Archive (psa.esac.esa.int).

  3. Use a planetarium software: There are many planetarium software programs available that allow you to view and capture images of Mars and other celestial objects. These programs often use images taken by telescopes and spacecraft to generate realistic views of the planets.

Remember, Mars is a small and faint object in the night sky, so it can be challenging to get a clear and detailed image without the right equipment or conditions.

API:

The best way to get a good picture of Mars is to use a telescope. Telescopes can capture detailed images of the planet and its features, allowing you to get a better view of the planet than you could with just a camera. You can also use a camera with a long lens to get a good picture of Mars, but the quality won’t be as good as with a telescope.

Welcome to the forum.

ChatGPT likely has a hidden prompt in front, with whatever the user enters added to the end.

Your low temperature could have something to do with it too.

If you add something like “You are a large language model who is helpful…” and so on, the results will be closer.
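
For example, here is a minimal sketch of that idea, prepending an assumed persona preamble and nudging the temperature up (the preamble wording is a guess; ChatGPT's actual hidden prompt is not public):

// Assumed preamble -- ChatGPT's real hidden prompt is not public.
const preamble =
    "You are a large language model trained by OpenAI. " +
    "You are helpful, friendly, and answer in detail.\n\n";

const response = await axios.post("https://api.openai.com/v1/completions", {
        prompt: preamble + req.body.prompt, // hidden prompt + user input
        max_tokens: 3000,
        model: "text-davinci-003",
        temperature: 0.7, // a bit higher than 0.5 for livelier answers
        n: 1,
        stream: false
    },
    {
        headers: {
            "Content-Type": "application/json",
            "Authorization": `Bearer ${API_KEY}`
        }
    });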

Long story short, the inputs to the two are not the same, which causes the differences in the output.

Hope this helps!

This is an idea that I am constantly exploring myself. Until the ChatGPT API gets released, exploring parameters that help GPT-3 approximate ChatGPT is definitely an undertaking.

Temperature, presence_penalty, frequency_penalty, and model are the parameters that seem to influence the output the most. Below are some models and their descriptions, which lead me to believe that these may be of use; a request sketch using these parameters follows the list.

text-ada-001
Text completion and answer generation, trained on a diverse internet text and generating fluent human-like text

curie
Text generation and question answering, focused on knowledge-based and conversational tasks

text-curie-001
Text generation and language understanding, performs well on a wide range of natural language tasks
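
As a sketch of dialing those parameters in a request (the values are just starting points to experiment with, not known ChatGPT settings):

const response = await axios.post("https://api.openai.com/v1/completions", {
        prompt: req.body.prompt,
        max_tokens: 3000,
        model: "text-davinci-003", // or "text-curie-001", etc.
        temperature: 0.9,          // more varied, conversational output
        presence_penalty: 0.6,     // discourage revisiting the same topics
        frequency_penalty: 0.5,    // discourage repeating the same phrases
        n: 1,
        stream: false
    },
    {
        headers: {
            "Content-Type": "application/json",
            "Authorization": `Bearer ${API_KEY}`
        }
    });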

Not really; the same type of completion is returned even with the temperature set to 1:

Example 1 (similar to @elhumbertoz’s concern)

Yes, you need more prompt engineering with the text-davinci-003 model.

Example 2

Here are some examples (prompt is at the bottom):

Example 3 (Much Better)

Example 4 (Even Better?)

Final Example (haha)

You can easily see from these many examples that, for text-davinci-003, the “secret sauce” is in the prompt engineering.
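
To illustrate, here is a minimal sketch of that kind of chat-style prompt engineering for text-davinci-003 (the framing text is one possible wording, not the exact prompt from the screenshots above):

// Frame the completion as a dialogue so the model answers like a chat assistant.
const chatPrompt =
    "The following is a conversation with a helpful, knowledgeable AI assistant " +
    "that answers in detail.\n\n" +
    `User: ${req.body.prompt}\nAssistant:`;

const response = await axios.post("https://api.openai.com/v1/completions", {
        prompt: chatPrompt,
        max_tokens: 3000,
        model: "text-davinci-003",
        temperature: 0.7,
        stop: ["User:"], // stop before the model writes the next user turn
        n: 1,
        stream: false
    },
    {
        headers: {
            "Content-Type": "application/json",
            "Authorization": `Bearer ${API_KEY}`
        }
    });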

Coding and trial and error are king 🙂

Hope this helps you @elhumbertoz