Providing context to the Chat API before a conversation

No, the bold text is not mine; that is how the forum's markdown editor renders code. The words in capital letters, however, are mine, intended to draw more focus from the model.

From now on please consider the following sentence: “In my humble opinion (IMHO),…”

The OpenAI documentation is a detailed technical reference, but it doesn't contain all of these "tips". Most of these AI-grammar rules came from the experimental results of members of this community, and from the models themselves, e.g. "What is the best way to present you a list?" or "What is the best way for an OpenAI model to understand an instruction?". It is cumulative knowledge: at this time we could consider this forum, together with the OpenAI documentation, the most complete body of knowledge about the relationship between AI language models and humans, waiting for someone with enough patience to compile all the information.

For example:

The use of delimiters is the first tactic taught in the (free) "Prompt Engineering" training course; the use of punctuation was advised to me by @EricGT here in the forum, while the itemization and number sequencing are advised by the models themselves.
Please notice that the last instruction (number 6) is terminated by a period ".".
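As a minimal sketch of these conventions (the instructions and the delimiter choice below are illustrative examples, not from any official guide), a system message for the Chat API can number each instruction, terminate every instruction with a period, and fence the user-supplied text with delimiters:

```python
# Hypothetical system message illustrating the conventions above:
# numbered instructions, each terminated by a period, and a
# triple-backtick delimiter around the text to be processed.
instructions = [
    "1. Answer only questions about the delimited text.",
    "2. Reply in the same language as the question.",
    "3. If the answer is not in the text, say that you do not know.",
]
system_message = "\n".join(instructions)

user_text = "The quick brown fox jumps over the lazy dog."
user_message = f"Summarize the text delimited by triple backticks.\n```{user_text}```"

# This is the `messages` list you would pass to the Chat Completions endpoint.
messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": user_message},
]
print(system_message)
```

Every instruction line ends in a period, and the delimiters make it unambiguous to the model where the text to be processed begins and ends.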

Large Language Models mainly use embeddings (multi-dimensional mathematical vectors applied to words) and tokenization to understand natural written human language in context. I consider these the best sources for embeddings:

  1. Word embedding - Wikipedia
  2. Cosine similarity - Wikipedia
  3. Embeddings - OpenAI
  4. @curt.kennedy - I read everything he writes about embeddings. His quote: Embeddings = Knowledge;
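The cosine-similarity idea from sources 1 and 2 above fits in a few lines of pure Python. The three toy vectors here are made up for illustration; real OpenAI embeddings have over a thousand dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (real ones are far larger).
v_cat = [0.9, 0.1, 0.2]
v_kitten = [0.85, 0.15, 0.25]
v_car = [0.1, 0.9, 0.3]

print(cosine_similarity(v_cat, v_kitten))  # close to 1.0: similar meaning
print(cosine_similarity(v_cat, v_car))     # noticeably lower
```

Semantic search over embeddings is exactly this: embed the query, then rank stored vectors by cosine similarity.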

Prompt engineering best sources:

  1. ChatGPT Prompt Engineering for Developers - DeepLearning.AI
  2. Text completion - OpenAI Docs
  3. @ruby_coder - developed an integrated support system for OpenAI model users;
  4. @PaulBellow - the most active and experienced supporter in the forum;
  5. @EricGT - experienced developer with a strong detailed view of prompt engineering. He made a thread about ChatGPT prompting that I now consider mandatory: Helpful hints about using ChatGPT
  6. @ruv - maybe one of the best developers, with extensive knowledge of integrating language models into programming languages. He made a very good topic, which I also consider mandatory: Cheat Sheet: Mastering Temperature and Top_p in ChatGPT API (a few tips and tricks on controlling the creative vs. deterministic output of prompt responses.)
  7. @Luxcium - all-in-one challenging prompt engineering;
  8. ChatGPT itself
  9. Learn Prompting - advised by @EricGT;
  10. and many more.
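On the temperature and top_p topic mentioned in item 6, here is a hedged sketch of how those parameters appear in a Chat API request payload. The model name and the specific values are only illustrative; the API reference recommends altering temperature or top_p, but not both at once:

```python
# Illustrative request payloads for the Chat Completions endpoint.
# Model name and parameter values are examples only; the API docs
# recommend changing temperature OR top_p, not both together.
factual_request = {
    "model": "gpt-3.5-turbo",  # assumed model name
    "messages": [{"role": "user", "content": "List the planets in order."}],
    "temperature": 0.0,        # near-deterministic, repeatable answers
}

creative_request = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Write a short poem about vectors."}],
    "top_p": 0.9,              # nucleus sampling: keep the top 90% probability mass
}

print(sorted(factual_request.keys()))
```

Low temperature suits factual or structured tasks; a reduced top_p trims the unlikely tail of the token distribution while leaving some creativity.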

What I was able to compile from the above, for which I am very grateful:

  1. Struggling with ChatGPT-3.5 and Seeking Help
  2. Force api response to be in non English language. How?
  3. Do you also have this problem, or maybe you found the solution? Does OpenAI have official information about this situation? Is anyone aware of OpenAI's position on the topic?
  4. Surprising spelling and grammar issues → turned out a jailbreak vector
  5. Fake quotes being generated for summaries
  6. Random response appended at task completion - importance of delimiters and punctuation.

These sources are a very good starting point. I am sure you will enjoy this new view of AI language models.