Rating RSS feeds with gpt-3.5-turbo

I am using gpt-3.5-turbo to rate articles from my RSS feeds. I pass my interests plus each article's title and short description, and expect a score between 0 and 10. Sometimes the scores are way off, especially if I keep the default temperature.

My current prompt is:

 const prompt = `I am interested in ${userInterests}.
    Am I going to like this article?
    Article Title: ${article.title}.
    Article Description: ${article.description}.
    Give a score from 0 to 10. Respond with the score only.`

My interests:

const userInterests = `Web development, AI, global news, Ukraine, Russia, China
  tech news, geopolitics, Greek news, financial news, stock market, crypto
  cyber security, software updates, movies`;

This article keeps getting a 7 even though it has nothing to do with my interests:

  {
    title: '‘Help me’: fans watching bear camera help save Alaska hiker’s life',
    pubDate: 'Sat, 09 Sep 2023 18:57:01 GMT',
    description: 'Wildlife enthusiasts watching live feed from remote national park spot hiker in distress and alert authorities to rescue him',
    img: 'https://i.guim.co.uk/img/media/a0ef32988a2442dc92ed5cbf5a94c3f739a2f9a0/0_313_6000_3600/master/6000.jpg?width=140&quality=85&auto=format&fit=max&s=57781635342923699a4436df7bef9a0c',
    link: 'https://www.theguardian.com/us-news/2023/sep/09/bear-camera-hiker-rescued-alaska-national-park',
    score: 7
  },

If I ask the same question in ChatGPT (3.5) I get a consistent 2:

Given your listed interests, this article appears to be quite unrelated to your main areas of interest, which include web development, AI, global news, geopolitics, and other topics. It focuses on a wildlife rescue incident and doesn’t directly align with your preferences. I would give this article a score of 2 out of 10 for its likely relevance to your interests. However, individual preferences can vary, so you may find it mildly interesting even if it’s not directly related to your primary interests.

Why is there such a big difference? How can I improve this prompt? I’ve noticed that if I ask “Why did you give this score?” the results are much better.

Hi and welcome to the Developer Forum!

Can you post a code snippet of your API call and any setup it relies on? Specifically, I'm looking for the temperature setting and the system prompt used.

Hi, thanks! Right now the temperature is 0.5.

const rateArticleRelevance = async (userInterests, article) => {
  try {
    const prompt = `I am interested in ${userInterests}.
    Am I going to like this article?
    Article Title: ${article.title}.
    Article Description: ${article.description}.
    Give a score from 0 to 10. Why did you give this score? Respond with the score only.
  `;

    const response = await openai.chat.completions.create({
      messages: [{ role: "user", content: prompt }],
      model: "gpt-3.5-turbo",
      temperature: 0.5,
    });

    // parseInt only reads a leading integer, so any preamble in the
    // reply (e.g. "Score: 7") makes this NaN and throws below.
    const score = parseInt(response.choices[0].message.content, 10);
    if (isNaN(score)) {
      throw new Error("Failed to parse the score as a number.");
    }

    return score;
  } catch (error) {
    // Any API or parsing failure is treated as "not relevant" (score 0).
    return 0;
  }
};

  const scoredArticles = await Promise.all(
    articles.map(async (article) => {
      const score = await rateArticleRelevance(userInterests, article);
      return { ...article, score };
    })
  );

  const sortedAndFilteredArticles = scoredArticles
    .sort((a, b) => b.score - a.score)
    .filter((article) => article.score > 0)
    .slice(0, 20);
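One way to keep the observed improvement from asking "Why did you give this score?" without breaking the numeric parse (a sketch, not from the thread; the instruction text and the `extractScore` helper are hypothetical) is to let the model explain first and finish with a `SCORE:` line, then parse only that line:

```javascript
// Hypothetical prompt tail: reasoning first, score last, so parsing stays easy.
const scoringInstructions =
  "Briefly explain whether the article matches my interests, " +
  'then on the last line write "SCORE: <number 0-10>".';

// Parse only the SCORE line, ignoring the free-text reasoning around it.
const extractScore = (reply) => {
  const match = reply.match(/SCORE:\s*(10|\d)\b/);
  return match ? parseInt(match[1], 10) : NaN;
};
```

Appending `scoringInstructions` to the prompt instead of "Respond with the score only" lets the model reason before committing to a number, which often improves consistency.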

Ok, I don't see a system prompt in your messages array. Please add a `role: "system"` message with the content: "You are a helpful assistant who rates RSS feed data for possible interest to a user, they will list their interests and you produce a score, further detail will be provided in the user prompt".

Also, raise the temperature to 0.7-0.9.
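Putting both suggestions together, here is a minimal sketch of the revised call shape. It assumes the official `openai` Node SDK already configured as in the snippet above; `buildMessages` and `parseScore` are illustrative helper names, not part of the SDK:

```javascript
// The system prompt text suggested above.
const systemPrompt =
  "You are a helpful assistant who rates RSS feed data for possible interest " +
  "to a user, they will list their interests and you produce a score, further " +
  "detail will be provided in the user prompt";

// Prepend the system message to every request.
const buildMessages = (userPrompt) => [
  { role: "system", content: systemPrompt },
  { role: "user", content: userPrompt },
];

// Pull the first integer 0-10 out of the reply, tolerating wrappers like
// "Score: 7" that models sometimes add.
const parseScore = (reply) => {
  const match = reply.match(/\b(10|\d)\b/);
  return match ? parseInt(match[1], 10) : NaN;
};

// Usage inside rateArticleRelevance:
// const response = await openai.chat.completions.create({
//   messages: buildMessages(prompt),
//   model: "gpt-3.5-turbo",
//   temperature: 0.7,
// });
// const score = parseScore(response.choices[0].message.content);
```

Using a regex instead of a bare `parseInt` also means a reply like "Score: 7" no longer falls through to the `catch` and gets silently scored 0.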
