API does not answer correctly according to my prompts

If I use a prompt in ChatGPT, it gives me a result that follows my prompt's instructions. But in the API request from my website, it never follows the instructions and just answers the question plainly…

The prompt is for content creation that should read as human-generated…
This prompt works most of the time in ChatGPT, but through the API the result is detected as AI-generated.
I'm building a website like JasperAI/CopyAI.
Example Prompt: “Hey ChatGPT, regarding generating writing content, two factors are crucial to be in the highest degree, “perplexity” and “burstiness.” Perplexity measures the complexity of the text. Separately, burstiness compares the variations of sentences. Humans tend to write with greater burstiness, for example, with some longer or more complex sentences alongside shorter ones. AI sentences tend to be more uniform. Therefore, generated text content must have the highest degree of perplexity and the highest degree of burstiness. The other two factors are that writing should be maximum contextually relevant and maximum coherent.”

There are some differences between ChatGPT and the API, and they can be summarized as follows:

| Feature | ChatGPT models | API access |
| --- | --- | --- |
| Purpose | General use | Development, deep knowledge, task-dedicated |
| Audience | Public | Professionals, researchers, businesses, experts, etc. |
| GPT base | GPT-3/3.5 | User's selection: GPT-2, -3, -3.5, -4, etc. |
| Task selection | None | User's selection: text, code, math, etc., according to the model |
| Main interface | Chat interface | API, customized modes: panel, chat, etc. |
| Role fields | User prompt | User prompt, System role, Assistant/completion |
| Interface interference | Chat interface may select another model, may change context | User-controlled customized settings over a single model |
| Context | By conversation (short) | Conversation and System role (long) |
| External links | No access | Datasets containing large amounts of data in text format (free format, JSONL, etc.) |
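
For reference, the role fields in the table map directly onto the messages array of a chat completion request. A minimal sketch in Python, assuming the pre-1.0 openai library and the gpt-3.5-turbo model (both are assumptions; adjust to your own setup):

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load your real key securely

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the user's selection of GPT base
    messages=[
        # System role: long-lived context and instructions
        {"role": "system", "content": "You write human-sounding blog posts."},
        # User prompt: the actual request
        {"role": "user", "content": "Write a blog post on remote work."},
        # Assistant/completion messages can be appended here to carry the conversation
    ],
)
print(response["choices"][0]["message"]["content"])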
| Setting | ChatGPT models | API access | Description |
| --- | --- | --- | --- |
| temperature | 0.7, fixed | [0 to 1] official, [0 to 2] unofficial | Controls output randomness; higher values give more random replies. |
| top_p | Unknown | [0 to 1], user control | Nucleus sampling: the model considers the token subset whose cumulative probability exceeds top_p. |
| max_tokens | Fixed, limited, cost: free | User control, according to account; cost applies | Maximum tokens in the response; one token is roughly 2/3 to 3/4 of an English word (approx.). |
| stop | Stop button | User control | String sequence(s) at which the response stops. |
| presence_penalty | Unknown | Numeric, user control | Penalizes tokens already present in the response. |
| frequency_penalty | Unknown | Numeric, user control | Penalizes frequently repeated tokens in the response. |
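
Each of these settings is simply a request parameter. A sketch of the parameter block only (the values are illustrative, not recommendations from the table):

# Illustrative values; the valid ranges are in the table above.
sampling_settings = {
    "temperature": 0.7,        # output randomness
    "top_p": 1.0,              # nucleus sampling threshold
    "max_tokens": 500,         # hard cap on tokens in the reply
    "stop": ["###"],           # string sequence(s) that end the response
    "presence_penalty": 0.0,   # penalize tokens already present in the reply
    "frequency_penalty": 0.3,  # penalize frequently repeated tokens
}
# Passed alongside model and messages, e.g.:
# openai.ChatCompletion.create(model=..., messages=..., **sampling_settings)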

In this specific case, to use the API efficiently and obtain results that can be much better than those ChatGPT provides, use the resources the API offers. For example (a complete request that puts these pieces together is sketched after the tips below):

  • Strategic use of the System role for context maintenance:
System:

Provide text according to the topic and type of text instructed in the
`User` prompt.
Please follow the seven instructions below:
1. Write in an informative tone as much as possible;
2. Use rephrasing techniques whenever possible;
3. Use paraphrasing techniques whenever possible;
4. Provide section separation and headings;
5. Make the text as human-generated content as possible;
6. Make the text unique whenever possible;
7. Take into consideration the temperature and top_p settings.
  • And in the User prompt:
User:

Please, follow the instructions provided in the `System` role.
Keep them in context all the time.
Please, write a blog post on {Topic}
...
Settings:

# Start at 0.5; increase for more uniqueness, reduce if the content
# contains too many mistakes;
temperature = 0.5

top_p =... # according to the thread;

# Estimate the desired length of the text in English words (EnW).
# For languages other than English the word/token ratio differs, so adjust. Tips:
# max_tokens = 3/2 * EnW + 20%  (if one token is about 2/3 of a word);
# max_tokens = 4/3 * EnW + 15%  (if one token is about 3/4 of a word);
# Do not reduce max_tokens excessively to save token cost,
# otherwise the model will return short summaries and truncated text;
max_tokens =... 
 
# Use frequency_penalty to control uniqueness
frequency_penalty = ...
...
  • Tips for the System role and the User prompt:
    • Mind punctuation and delimiters; the models love them;
    • Itemize instructions, rules, lists, etc. - numbered lists are better;
    • Be concise (and robotic) in the instructions, especially in the System role;
    • Put details and explanations used for tuning in the User prompt; once a new instruction produces the desired behavior, add it to the System role - there is no need to repeat it during the conversation;
    • For better textual results, consider fine-tuning the model with a training conversation;
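
Putting the System role, the User prompt, and the settings above together, a minimal sketch in Python (assuming the pre-1.0 openai library and gpt-3.5-turbo; the model name, topic, and target length are placeholders):

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

system_msg = (
    "Provide text according to the topic and type of text instructed in the `User` prompt.\n"
    "Please follow the seven instructions below:\n"
    "1. Write in an informative tone as much as possible;\n"
    "2. Use rephrasing techniques whenever possible;\n"
    # ... instructions 3-7 from the System block above ...
)

user_msg = (
    "Please follow the instructions provided in the `System` role. "
    "Keep them in context all the time. "
    "Please write a blog post on remote work."  # {Topic} filled in as an example
)

desired_words = 600                             # target length in English words
max_tokens = int(desired_words * 4 / 3 * 1.15)  # ~4/3 tokens per word plus a 15% margin

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ],
    temperature=0.5,         # start at 0.5; raise for uniqueness, lower if mistakes appear
    top_p=1.0,               # adjust according to the thread
    max_tokens=max_tokens,
    frequency_penalty=0.5,   # use to control uniqueness / repetition
)
print(response["choices"][0]["message"]["content"])

Keeping the seven instructions in the System message means they persist for the whole conversation, which is the main practical difference from pasting them into every ChatGPT prompt.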

I hope this helps. Please let us know the results. Enjoy.


I think @elmstedt meant sharing the prompt and the API call. Without the two, he probably can’t help.
