Prompt Engineering Showcase: Your Best Practical LLM Prompting Hacks

I just use Word 365: go to Review | Word Count, and check “Characters (with spaces).”
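If you'd rather check outside Word, a minimal Python equivalent: `len()` on a string counts every character including spaces, which matches Word's “Characters (with spaces)” figure for plain text (line-break handling may differ slightly).

```python
# Count characters with spaces, like Word's "Characters (with spaces)".
text = "Hello, world! This is a prompt."
print(len(text))  # 31
```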

I right-click in the chat window, select “Save as…”, and choose “Single Page Web File.” This lets me scroll through the entire chat whenever I open the resulting .mhtml file.

Note that, if you want to save any drop-down sections (such as “Thought process”), you have to open them first.

Also, each .mhtml file you save will be large, as it includes all the graphics and other embedded assets. I overwrite the file periodically as the chat continues.

Finally, this is a good way to find and reuse good prompts: you can select text (all or part) in the .mhtml file and copy it out to Word.

A practical hack I use for gaming APK content is chaining prompts by intent. First, I ask the model to analyse gameplay mechanics and modes in bullet form. Then, in a follow-up prompt, I convert only those points into a clean review with pros, cons, and player experience. This keeps mobile brawler game reviews accurate, avoids fluff, and maintains consistent structure.
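The two-step chain above can be sketched in a few lines. This is a hedged sketch, not the poster's actual code: the `llm` callable is a hypothetical stand-in for whatever chat client you use, passed in as a parameter so the chain itself stays provider-agnostic.

```python
from typing import Callable

def chain_review(llm: Callable[[str], str], gameplay_notes: str) -> str:
    # Step 1: facts only, forced into bullet form — no opinions yet.
    bullets = llm(
        "List the gameplay mechanics and modes as terse bullet points. "
        "No opinions, no prose.\n\n" + gameplay_notes
    )
    # Step 2: the review may draw ONLY on those bullets,
    # which keeps it accurate and blocks fluff.
    return llm(
        "Using only the bullet points below, write a review with "
        "Pros, Cons, and Player Experience sections.\n\n" + bullets
    )
```

Because the second prompt is built from the first step's output, the model can't smuggle in claims that never appeared in the analysis pass.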

Instead of asking the model to “explain X,” I use a system message like:

“You are an educational assistant explaining a culturally unfamiliar concept to a beginner with zero prior context.
You must avoid jargon, assume no background knowledge, and anchor explanations to familiar analogies.”

Then the user prompt asks for only one layer at a time (e.g., names only, then structure, then context).
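A sketch of that layered setup, assuming the common role/content chat-message schema (adapt the dict shape to your provider; the `layered_messages` helper and layer names are just illustrations):

```python
# One fixed system message; each request asks for a single layer.
SYSTEM = (
    "You are an educational assistant explaining a culturally unfamiliar "
    "concept to a beginner with zero prior context. You must avoid jargon, "
    "assume no background knowledge, and anchor explanations to familiar "
    "analogies."
)

# The layers requested one at a time, in order.
LAYERS = ["names only", "structure", "context"]

def layered_messages(concept: str, layer: str) -> list:
    """Build the message list for one layer of the explanation."""
    return [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": f"Explain {concept}: {layer}."},
    ]
```

Keeping the system message constant across layers means the constraints (no jargon, no assumed background) apply to every turn, while each user prompt stays tiny.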

Another small but effective trick:
If I need the explanation to stay concrete, I explicitly ban abstractions in the prompt: “Do not use metaphor, philosophy, or historical background unless explicitly asked.”

This dramatically reduces the model’s tendency to over-explain or drift into theory, especially for educational outputs.