There are some differences, which can be summarized as follows:
FEATURE | ChatGPT | Models API access
---|---|---
Purpose | General use | Development, deep knowledge, task-dedicated
Audience | Public | Professionals, researchers, businesses, experts, etc.
GPT base | GPT-3/3.5 | User's selection: GPT-2, -3, -3.5, -4, etc.
Task selection | None | User's selection: text, code, math, etc., according to the model
Main interface | Chat interface | API, customized modes: panel, chat, etc.
Role fields | User prompt | User prompt, System role, Assistant/completion
Interface interference | Chat interface may select another model or change the context | User-controlled, customized settings over a single model
Context | By conversation (short) | Conversation and System role (long)
External links | No access | Datasets containing large amounts of data in text format (free format, JSONL, etc.)
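
To make the role fields concrete, here is a minimal sketch of the message structure the API accepts (the role contents are illustrative assumptions; ChatGPT's web UI exposes only the user prompt):

```python
# The three role fields available through the API.
messages = [
    # System role: long-lived context and instructions
    {"role": "system", "content": "You write informative, well-structured text."},
    # User prompt: the actual request
    {"role": "user", "content": "Explain nucleus sampling in two sentences."},
    # Assistant/completion: a prior model reply, kept for context
    {"role": "assistant", "content": "Nucleus sampling keeps only the most likely tokens whose cumulative probability reaches top_p."},
]
```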
SETTING | ChatGPT | Models API access
---|---|---
temperature | 0.7, fixed | [0 to 1] official, [0 to 2] unofficial. Controls output randomness: higher values give more random replies.
top_p | Unknown | [0 to 1], user-controlled. Nucleus sampling strategy: the model considers only the subset of tokens whose cumulative probability mass falls within top_p.
max_tokens | Fixed, limited, free of cost | User-controlled, according to account and cost. Maximum number of tokens in the response; one token is roughly 2/3 to 3/4 of an English word.
stop | Stop button | User-controlled sequences of strings that stop the response.
presence_penalty | Unknown | Numeric, user-controlled. Penalizes tokens already present in the response.
frequency_penalty | Unknown | Numeric, user-controlled. Penalizes frequent tokens in the response.
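
As a rough sketch of where each setting goes, here is a hypothetical call using the pre-1.0 `openai` Python package (the model, key handling, and parameter values are assumptions, not recommendations):

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is in the environment

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",      # user's selection of model
    messages=[
        {"role": "system", "content": "You write informative blog posts."},
        {"role": "user", "content": "Write a short post on nucleus sampling."},
    ],
    temperature=0.5,            # randomness: higher means more random replies
    top_p=1.0,                  # nucleus sampling cutoff
    max_tokens=600,             # cap on response length, billed per token
    stop=["\n###\n"],           # illustrative stop sequence
    presence_penalty=0.0,       # penalize tokens already present
    frequency_penalty=0.5,      # penalize frequently repeated tokens
)
print(response["choices"][0]["message"]["content"])
```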
In this specific case, to use the API efficiently and obtain much better results than ChatGPT provides, use the resources the API offers. For example:
- Strategic use of the `System` role for context maintenance:

  ```
  System:
  Provide text according to the topic and type of text instructed in the
  `User` prompt.
  Please follow the seven instructions below:
  1. Write in an informative tone as much as possible;
  2. Use rephrasing techniques whenever possible;
  3. Use paraphrasing techniques whenever possible;
  4. Provide section separation and headings;
  5. Make the text as close to human-generated content as possible;
  6. Make the text unique whenever possible;
  7. Take into consideration the temperature and top_p settings.
  ```
- And in the `User` prompt:

  ```
  User:
  Please, follow the instructions provided in the `System` role.
  Keep them in context all the time.
  Please, write a blog post on {Topic}
  ...
  ```
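
  A minimal sketch of how the `System` role and `User` prompt fit together over a conversation, assuming the pre-1.0 `openai` Python package (the `ask` helper, model choice, and prompts are illustrative assumptions):

  ```python
  import os
  import openai

  openai.api_key = os.environ["OPENAI_API_KEY"]

  SYSTEM_ROLE = (
      "Provide text according to the topic and type of text instructed "
      "in the `User` prompt. Please follow the seven instructions below: ..."
  )

  # The System message is re-sent on every request, so its instructions
  # stay in context; assistant replies are appended to keep the thread.
  history = [{"role": "system", "content": SYSTEM_ROLE}]

  def ask(user_prompt):
      history.append({"role": "user", "content": user_prompt})
      response = openai.ChatCompletion.create(
          model="gpt-3.5-turbo",
          messages=history,
          temperature=0.5,
      )
      reply = response["choices"][0]["message"]["content"]
      history.append({"role": "assistant", "content": reply})
      return reply

  print(ask("Please, write a blog post on {Topic}"))
  ```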
- Use the settings according to your goals. For a better experience, read the thread Cheat Sheet: Mastering Temperature and Top_p in ChatGPT API (a few tips and tricks on controlling the creativity/deterministic output of prompt responses):
  ```
  Settings:
  # Start with 0.5; increase for uniqueness, reduce if there are excessive
  # content mistakes;
  temperature = 0.5
  top_p = ...  # according to the thread;
  # Calculate the approximate length in English words (EnW) of the desired
  # text. If the text is in another language, adjust the estimate. Tips:
  # max_tokens = 3/2 * EnW + 20%;
  # max_tokens = 4/3 * EnW + 15%;
  # Do not reduce max_tokens excessively to save token cost, otherwise
  # the model will return short summaries and truncated text;
  max_tokens = ...
  # Use frequency_penalty to control uniqueness
  frequency_penalty = ...
  ...
  ```
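
  A small sketch of the token-budget arithmetic above (the function name and the 4/3 words-to-tokens ratio are assumptions derived from the approximation that one token is about 3/4 of an English word):

  ```python
  def estimate_max_tokens(en_words, tokens_per_word=4/3, margin=0.15):
      """Rough max_tokens budget for a text of en_words English words.

      Assumes one token is about 3/4 of an English word, plus a safety
      margin so the reply is not cut short.
      """
      return int(en_words * tokens_per_word * (1 + margin))

  # e.g. a 600-word blog post needs roughly 920 tokens
  print(estimate_max_tokens(600))
  ```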
- Tips for the `System` role and `User` prompt:
  - Mind punctuation and delimiters; the models love them;
  - Itemize instructions, rules, lists, etc.; numbered lists are better;
  - Be concise (and robotic) in the instructions, mainly in the `System` role;
  - Put details and explanations for fine-tuning in the `User` prompt; once a desired new instruction is achieved, add it to the `System` role, so there is no need to repeat it during the conversation;
  - For better textual results, consider fine-tuning: a training conversation with the model.
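
As a sketch of what such training data could look like, here is a hypothetical preparation script using the legacy prompt/completion JSONL fine-tuning format (the file name and example pairs are assumptions):

```python
import json

# Hypothetical training pairs distilled from a conversation with the model.
examples = [
    {"prompt": "Write a blog post intro on {Topic} ->",
     "completion": " An informative, well-structured introduction...\n"},
    {"prompt": "Write a blog post conclusion on {Topic} ->",
     "completion": " A concise, unique closing section...\n"},
]

# Fine-tuning datasets are plain JSONL: one JSON object per line.
with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```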
I hope this helps. Please let us know the results. Enjoy.