I want it to respond to my query with less

I have successfully set up my system to send requests to Azure OpenAI and get responses. But the model is doing too much.

I am using it to flag newly created events as questionable so they can be reviewed by a human. I am feeding it the following (the Response line is commented out because it was causing additional problems):

private static string _prompt =
    "Context: I’ve developed an app to organize volunteer events for political campaigns. " +
    "I need OpenAI to review each event’s description to identify any content that requires " +
    "human moderation. " +
    "Goal: I need OpenAI to analyze and categorize event descriptions to determine if they " +
    "require moderation. Specifically, I have two questions: " +
    "1. Political Bias: Rate the content’s political leaning on a scale from 1 to 10, where " +
    "1 favors the Democratic Party and 10 favors the Republican Party. " +
    "2. Content Tone: Assess the content’s tone for violence or derogatory language on a " +
    "scale from 1 to 10, with 10 being extremely violent or derogatory. " +
    "If uncertain, assign a rating between 4-6. " +
    //"Response: The response must be the number of the political bias, then a comma, then the number of the content tone." +
    "Evaluate: {0}";

public async Task<int> Analyze(string text)
{
    var prompt = string.Format(_prompt, text);

    Response<Completions> completionsResponse = await Client.GetCompletionsAsync(
        new CompletionsOptions()
        {
            Prompts = { prompt },
            Temperature = 0f,
            MaxTokens = 60,
            NucleusSamplingFactor = 0f,
            FrequencyPenalty = 0f,
            PresencePenalty = 0f,
            DeploymentName = "sandbox1"
        });
    Completions completions = completionsResponse.Value;

    // Read back the generated text for the single prompt
    string response = completions.Choices[0].Text;
    Console.WriteLine(response);

    return 0; // TODO: parse the rating(s) out of the response
}

The problem is, I call it with:

var result = await semanticAnalysis.Analyze("Come to our fundraiser for Joe Biden");

And I get a response of:

! We’ll be raising money to support Joe Biden’s campaign for president. We’ll have food, drinks, and a silent auction. All proceeds will go to the Biden campaign.Political Bias: 1Content Tone: 1Evaluate: Join us for a rally to support President Trump! We

How can I get just the answers?

  1. I don’t want it adding more to the prompt.
  2. I don’t want it adding an additional sample event to evaluate.
  3. And I’d prefer to have the response be just: 5,3

What do I need to do differently?

Ah!

Well, there are multiple things. You’re using the completion API. Completion models will just want to tack new words onto the end of whatever you send them.

Here’s how we can fix it:

Think about the whole thing as a document, not a conversation. What document would include the response you’re looking for? Maybe a form, or maybe a report or something.

This is a political sentiment analysis report.

Process:

We are categorizing event descriptions and statements by… bla bla bla
this is a 1
this is a 10
etc, etc.

Analysis:

Statement 1:

<quote>"Come to our fundraiser for Joe Biden"</quote>

We categorize this as a 5 based on our scheme.

Statement 2:

some more examples

Statement 10:

<quote>{statement}</quote>
We categorize this as a

Notice that we stopped mid-sentence.

And then you set max tokens to 1, because you don’t care about anything else - the probability of getting a 1-10 response will be incredibly high.
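To make that concrete, here’s a rough sketch of what that report-style prompt could look like in your C# code. The report wording and the AnalyzeBias name are just illustrations, and I’m reusing the Client and sandbox1 deployment from your snippet:

private static string _reportPrompt =
    "Political Sentiment Analysis Report\n\n" +
    "Process: We categorize statements on a political-bias scale from 1 " +
    "(favors the Democratic Party) to 10 (favors the Republican Party).\n\n" +
    "Statement 1:\n" +
    "<quote>Come to our fundraiser for Joe Biden</quote>\n" +
    "We categorize this as a 5 based on our scheme.\n\n" +
    "Statement 2:\n" +
    "<quote>{0}</quote>\n" +
    "We categorize this as a"; // deliberately cut off mid-sentence

public async Task<int> AnalyzeBias(string text)
{
    Response<Completions> completionsResponse = await Client.GetCompletionsAsync(
        new CompletionsOptions()
        {
            Prompts = { string.Format(_reportPrompt, text) },
            Temperature = 0f,
            MaxTokens = 1, // we only want the single rating token
            DeploymentName = "sandbox1"
        });

    // With the document cut off mid-sentence, the next token the model
    // writes is almost always the rating itself.
    return int.Parse(completionsResponse.Value.Choices[0].Text.Trim());
}

Note this gives you one rating per call, so you’d either call it twice (once per scale) or extend the report format so both numbers appear before the cut-off.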

There are other techniques we can tack on top of that, but I think this is a good starting point!


I will, however, advise you that in my opinion this is not the best use of this technology, but I won’t try to talk you out of it. Exploring and playing around with it is the most important thing!


Do I not want to use GetCompletion()? Or is it that I do want to use it, but make it very clear that all I want in the completion is the two numbers?

If I don’t want to use GetCompletion(), what should I use? What I really want is GetAnswer() but it doesn’t have that unfortunately.

TIA

I think this is fine, and whether you use chat completions (which is closer to ChatGPT) or the legacy completions (which you’re using) is mostly irrelevant - the prompt structure will just be different.

The stuff you wrote in your OP would be more at home with a chat completion type prompt than legacy completions.
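For what it’s worth, here’s a rough sketch of the chat-completions shape with the same SDK. This assumes a recent Azure.AI.OpenAI beta (1.0.0-beta.8 or later, which matches the DeploymentName-in-options style in your snippet) and that sandbox1 points at a chat-capable model - adjust to taste:

var chatOptions = new ChatCompletionsOptions()
{
    DeploymentName = "sandbox1",
    Messages =
    {
        // The instructions move into the system message...
        new ChatRequestSystemMessage(
            "Rate the event description on two 1-10 scales: political bias " +
            "(1 favors the Democratic Party, 10 favors the Republican Party) " +
            "and tone (10 is extremely violent or derogatory). Reply with " +
            "ONLY the two numbers separated by a comma, e.g. 5,3."),
        // ...and the event description becomes the user message.
        new ChatRequestUserMessage(text)
    },
    Temperature = 0f,
    MaxTokens = 5
};

Response<ChatCompletions> chatResponse = await Client.GetChatCompletionsAsync(chatOptions);
string answer = chatResponse.Value.Choices[0].Message.Content; // e.g. "5,3"

// Split the "5,3" style answer into the two ratings
var parts = answer.Split(',');
int bias = int.Parse(parts[0].Trim());
int tone = int.Parse(parts[1].Trim());

Chat models are tuned to follow instructions like “reply with only the two numbers”, which is why your original prompt is a more natural fit there.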

What I meant with my dumb comment is that I don’t like Likert-type scales (or using LLMs to generate numeric output), but my opinion isn’t relevant here :cowboy_hat_face:

It’s probably best not to get too far into the weeds here, but you can eventually start looking into logprobs (Using logprobs | OpenAI Cookbook) and embedding classifiers (Classification using embeddings | OpenAI Cookbook) - but you probably know this better than me: it’s more important to get something working than to get it right.
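If you do want to peek at logprobs later, the legacy completions call can return them through the same SDK. A sketch, assuming the beta SDK’s LogProbabilityCount option (which should map to the API’s logprobs parameter - worth double-checking against your SDK version):

var options = new CompletionsOptions()
{
    Prompts = { prompt },
    Temperature = 0f,
    MaxTokens = 1,
    LogProbabilityCount = 5, // top-5 candidate tokens, each with a log probability
    DeploymentName = "sandbox1"
};

Response<Completions> response = await Client.GetCompletionsAsync(options);

// Inspect how much probability the model put on each candidate rating token
foreach (var candidates in response.Value.Choices[0].LogProbabilityModel.TopLogProbabilities)
{
    foreach (var (token, logProb) in candidates)
    {
        Console.WriteLine($"{token}: {logProb}");
    }
}

That way you can see the model’s confidence across all the ratings it considered, not just the single token it happened to sample.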