With so many copywriting tools available, each powered by a different LLM, I wanted to create a prompt that analyzes a piece of content for quality and assigns it a score from 0 to 10.
What this Prompt Can Be Used For
You can use this content analysis prompt to…
- Compare results from different copywriting services
- Find out which prompts are working best
- Compare different LLMs
- Analyze your content for quality control
Content Analysis + Quality Score Prompt
Assume the role of an expert in logical reasoning and AI content evaluation. Your task is to critically analyze and rate this AI-generated response for its logical consistency and overall quality. The response provided was generated for a request to “[SPECIFIC GOAL OF THE ORIGINAL PROMPT].” Assess the response for any fallacies or inaccuracies in the logic, as well as the coherence, clarity, and relevance. Based on your assessment, assign the response a numerical score ranging from 0 (poor quality) to 10 (excellent quality).
Here is the response for evaluation:
"""
[CONTENT]
"""
Simply fill in the specific goal of the original prompt, paste in the content you want evaluated, and run it to get your analysis.
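If you want to run this at scale (say, to compare several copywriting services), here is a minimal Python sketch using the openai package. The model name and the example inputs are placeholders, not part of the original prompt; adapt them to whichever LLM client you actually use.

```python
# Minimal sketch, assuming the `openai` Python package (v1+) and an
# OPENAI_API_KEY set in the environment. The model name is a placeholder;
# swap in whichever model you want to evaluate with.
from openai import OpenAI

EVALUATION_PROMPT = """Assume the role of an expert in logical reasoning and AI content evaluation. \
Your task is to critically analyze and rate this AI-generated response for its logical consistency \
and overall quality. The response provided was generated for a request to "{goal}." Assess the \
response for any fallacies or inaccuracies in the logic, as well as the coherence, clarity, and \
relevance. Based on your assessment, assign the response a numerical score ranging from 0 (poor \
quality) to 10 (excellent quality).

Here is the response for evaluation:
\"\"\"
{content}
\"\"\"
"""


def score_content(goal: str, content: str) -> str:
    """Send the filled-in evaluation prompt to the model and return its analysis."""
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "user", "content": EVALUATION_PROMPT.format(goal=goal, content=content)},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical example inputs, just to show the call shape.
    print(score_content(
        goal="write a product description for a standing desk",
        content="Meet the desk that works as hard as you do...",
    ))
```

If you plan to aggregate scores across many samples, it also helps to ask the model to put the numeric score on its own line so it is easy to extract programmatically.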
How Would You Improve the Prompt?
I would love any feedback on the analysis prompt and suggestions to improve it so that we have a valuable tool to compare and rank the quality of responses.
What if you gave it a bit more context on what “quality” meant?
I think you would get more consistent results if you gave it more information about your rating criteria.
I would actually work out the criteria with It. (Do you capitalize "It" in this context?) Maybe figure out some objective and subjective qualifiers. Logical soundness and validity are straightforward enough. But maybe you can explain more about "relevance"? (Relevant to what, exactly?)
I am sure you can come up with some great criteria if you spend some time conversing over what epistemological values you want your prompt to consistently evaluate for. Then have it help you pare those values down to as few words as possible. Either include them in every prompt, or remind it frequently.
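To make that concrete, a pared-down criteria block appended to the prompt might look something like this (the wording is purely illustrative):

```
Score against these criteria, weighted equally:
- Logical validity: no fallacies, contradictions, or unsupported leaps.
- Accuracy: no factual errors or misleading claims.
- Clarity: concise, unambiguous sentences.
- Relevance: every paragraph serves the stated goal of the original request.
```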
Anyway, I think if you don't include more about how you want it to evaluate that score, its own criteria will, uh, "float" from one conversation to the next, since it always likes to change things up.
Thank you for your insight. That is definitely an area that could be improved in the prompt. This is why I added the first variable (the original prompt request), but you're right, relevance and the other scorable values could be defined better.