To give you an idea of how to best prompt a chat AI model:
system
You are a developer relations expert for OpenAI's API AI products.
user
# context
A question is asked by someone who is using the OpenAI API to chat with an LLM in an automated manner for productivity. The solution is ultimately that “system” guides overall identity, behavior, and specialization, while “user” describes the task to be performed.
An answer is best presented in the OpenAI API “messages” parameter format for clarity about how to send the context.
# question
I am trying to create keywords and a meta description. The code creates a perfect article; however, when trying to create keywords and a meta description, it just gives content. If I paste the same content provided to the model into ChatGPT, it gives me great keywords and a meta description. Is this a code problem, a model issue, or do I need to provide more details?
# example API call
Model - gpt-4o
role: system
content: You are a skilled SEO assistant. Given the article text, extract the most relevant primary, secondary, and long-tail keywords, separated by commas
role: user
content: Article text.
If demonstration by example, and the language within the example itself, isn’t enough, we can expand this into a tutorial on how best to use chat messages, including “system”, by sending those two messages to an AI model.
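For reference, the failing call from the question might look roughly like this with the official Python SDK. This is a hypothetical reconstruction (the actual code was not shared); article_text is a placeholder, and OPENAI_API_KEY is assumed to be set in the environment:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

article_text = "..."  # placeholder for the generated article text

# Hypothetical reconstruction of the call described above
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a skilled SEO assistant. Given the article text, "
                "extract the most relevant primary, secondary, and long-tail "
                "keywords, separated by commas"
            ),
        },
        {"role": "user", "content": article_text},
    ],
)
print(response.choices[0].message.content)
```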
Prompting and Usage suggestions, with examples
You’re on the right track! The issue you’re experiencing is likely due to a combination of how you’re structuring your prompt and potentially the difference in how gpt-4o handles these types of tasks within the API compared to the ChatGPT interface. ChatGPT’s interface implicitly adds some extra framing and instruction that you need to explicitly provide when using the API.
Here’s a breakdown of the problem and a refined solution using the OpenAI API messages format:
The Problem:
- Lack of Specificity: While your system message sets the general role (“skilled SEO assistant”), your user message is too broad (“Article text”). The model needs more explicit instructions on how to process the article text to generate keywords and a meta description. It defaults to continuing the “article text” provided.
- Missing Output Formatting: You haven’t specified how you want the output formatted. LLMs excel when given clear examples or instructions for the desired output structure.
- Separate Tasks: You’re asking for three distinct tasks (primary keywords, secondary keywords, long-tail keywords) and implicitly a fourth (meta description). It’s often more effective to separate these tasks when using the API, or at the very least, to clearly delineate them in your prompt.
The Solution:
Here’s how I recommend structuring your messages to gpt-4o using the OpenAI API format. I’ll demonstrate two approaches:
Approach 1: Combined Prompt with Clear Instructions and Examples
[
{
"role": "system",
"content": "You are a skilled SEO assistant, expert at crafting concise and compelling meta descriptions and identifying highly relevant keywords from any given text."
},
{
"role": "user",
"content": "Please analyze the following article and perform these tasks:\n\n1. Generate a meta description (around 150-160 characters) that accurately summarizes the article's content and includes relevant keywords.\n\n2. Extract the following keywords, separated by commas:\n - Primary Keywords (1-2 highly relevant, broad terms)\n - Secondary Keywords (3-5 related terms, more specific than primary keywords)\n - Long-Tail Keywords (2-3 longer phrases, very specific to the article's topic)\n\n**Example Output Format:**\n\nMeta Description: [Your meta description here]\n\nPrimary Keywords: keyword1, keyword2\nSecondary Keywords: keyword3, keyword4, keyword5, keyword6, keyword7\nLong-Tail Keywords: long tail phrase 1, long tail phrase 2, long tail phrase 3\n\n**Article:**\n\n[Insert your article text here]"
}
]
Explanation of Approach 1:
- System Message: Reinforces the role and expertise.
- User Message:
- Clearly outlines the tasks (meta description and keyword extraction).
- Provides specific constraints (meta description length).
- Specifies the desired output format for each keyword type.
- Gives an example of the output format, which significantly improves the model’s ability to deliver the desired results.
- Clearly separates the instructions from the article content.
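If it helps, here is a minimal sketch of sending the Approach 1 messages with the official Python SDK (openai v1.x). It assumes OPENAI_API_KEY is set in your environment; the user content is abbreviated here, so in practice paste the full prompt shown above, including the example output format:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

article_text = "..."  # replace with the full article text

messages = [
    {
        "role": "system",
        "content": (
            "You are a skilled SEO assistant, expert at crafting concise and "
            "compelling meta descriptions and identifying highly relevant "
            "keywords from any given text."
        ),
    },
    {
        "role": "user",
        # Abbreviated here; use the full Approach 1 instructions above,
        # including the example output format.
        "content": (
            "Please analyze the following article. 1) Write a meta description "
            "of around 150-160 characters. 2) Extract primary, secondary, and "
            "long-tail keywords, separated by commas, each group on its own "
            "labeled line.\n\n**Article:**\n\n" + article_text
        ),
    },
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```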
Approach 2: Separate Prompts for Each Task (More Control)
You could also break down the task into separate API calls for even finer control:
2.1 Meta Description Prompt:
[
{
"role": "system",
"content": "You are a skilled SEO assistant, expert at crafting concise and compelling meta descriptions that accurately summarize the content of an article while incorporating relevant keywords."
},
{
"role": "user",
"content": "Generate a meta description (around 150-160 characters) for the following article:\n\n[Insert your article text here]"
}
]
2.2 Keyword Extraction Prompt:
[
{
"role": "system",
"content": "You are a skilled SEO assistant, expert at identifying highly relevant primary, secondary, and long-tail keywords from any given text."
},
{
"role": "user",
"content": "Analyze the following article and extract the following keywords, separated by commas:\n\n- Primary Keywords (1-2 highly relevant, broad terms)\n- Secondary Keywords (3-5 related terms, more specific than primary keywords)\n- Long-Tail Keywords (2-3 longer phrases, very specific to the article's topic)\n\nExample Output Format:\n\nPrimary Keywords: keyword1, keyword2\nSecondary Keywords: keyword3, keyword4, keyword5, keyword6, keyword7\nLong-Tail Keywords: long tail phrase 1, long tail phrase 2, long tail phrase 3\n\nArticle:\n\n[Insert your article text here]"
}
]
Explanation of Approach 2:
- System Message: Tailored to each specific task.
- User Message: Focuses on a single task per prompt, making it easier for the model to understand and execute.
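Here is a sketch of Approach 2 as two sequential calls with the Python SDK, under the same assumptions as before (prompts abbreviated for space; use the full prompts shown above):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

article_text = "..."  # replace with the full article text

def ask(system_prompt: str, user_prompt: str) -> str:
    """Run one focused chat completion per task."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

# 2.1: meta description only
meta_description = ask(
    "You are a skilled SEO assistant, expert at crafting concise and compelling "
    "meta descriptions that accurately summarize the content of an article.",
    "Generate a meta description (around 150-160 characters) for the following "
    "article:\n\n" + article_text,
)

# 2.2: keywords only
keywords = ask(
    "You are a skilled SEO assistant, expert at identifying highly relevant "
    "primary, secondary, and long-tail keywords from any given text.",
    "Extract primary, secondary, and long-tail keywords from the following "
    "article, each group separated by commas:\n\n" + article_text,
)

print(meta_description)
print(keywords)
```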
Key Takeaways & Best Practices
- Be Explicit: Clearly define your instructions and desired output format.
- Provide Examples: Output examples are extremely helpful for guiding the model.
- Separate Tasks (Optional): Breaking down complex tasks into smaller ones can improve results.
- Iterate and Refine: Experiment with different prompt structures and phrasing to find what works best for your specific use case and the gpt-4o model.
- Temperature and top_p: For tasks like these, a lower temperature (e.g., 0.3-0.5) and top_p (e.g., 0.5-0.7) can help encourage more focused and relevant output. You may want to start with 0 for these parameters to get the most probable, deterministic responses, and then experiment from there.
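As a quick illustration, sampling parameters are passed alongside model and messages in the Python SDK (the values below are just a starting point, not a recommendation):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a skilled SEO assistant."},
        {"role": "user", "content": "Extract primary, secondary, and long-tail "
                                    "keywords from this article:\n\n..."},
    ],
    temperature=0.3,  # lower values keep output focused; try 0 first
    top_p=0.7,        # narrows token sampling; lower is more deterministic
)
print(response.choices[0].message.content)
```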
By implementing these strategies, you’ll be able to leverage the power of gpt-4o through the OpenAI API to effectively generate high-quality keywords and meta descriptions for your articles. Remember that prompt engineering is an iterative process, so don’t be afraid to experiment and fine-tune your prompts to achieve optimal results!
If you want lists of things, and even multiple lists, you can investigate Structured Outputs, an API feature that mandates a JSON response conforming to a schema you supply. This can guide the AI into producing every part of the response without omission.
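A minimal sketch of that idea with the Chat Completions response_format parameter, using the json_schema type; the field names in the schema are my own choice, not a required format:

```python
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

article_text = "..."  # replace with the full article text

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a skilled SEO assistant."},
        {
            "role": "user",
            "content": "Write a meta description (150-160 characters) and "
                       "extract primary, secondary, and long-tail keywords "
                       "for this article:\n\n" + article_text,
        },
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "seo_metadata",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "meta_description": {"type": "string"},
                    "primary_keywords": {"type": "array", "items": {"type": "string"}},
                    "secondary_keywords": {"type": "array", "items": {"type": "string"}},
                    "long_tail_keywords": {"type": "array", "items": {"type": "string"}},
                },
                "required": [
                    "meta_description",
                    "primary_keywords",
                    "secondary_keywords",
                    "long_tail_keywords",
                ],
                "additionalProperties": False,
            },
        },
    },
)

# The model is constrained to return JSON matching the schema above.
seo = json.loads(response.choices[0].message.content)
print(seo["meta_description"])
print(seo["primary_keywords"])
```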