Batch API prompt chaining: if the response is “No Significant Changes”, I want to be able to execute a follow-on prompt within that same request

In the Batch API, we send a list of requests and get the responses back once the batch is processed. But is there any way to apply conditions in the Batch API to the prompt in each request?

For example: if the response is “No Significant Changes”, I want to be able to execute a follow-on prompt within that same request.

I need this to happen for every request in the batch. Is there any way?

Welcome to the Community!

No, this is currently not possible. Each request in a submitted batch is treated individually, and there is currently no configuration that lets you specify the order in which the requests are executed or chain them based on their custom IDs. Executing a follow-up API request based on the outcome of a specific individual request within the same batch is therefore not an option at this time.

But: Theoretically, you might be able to design your prompt to incorporate the additional step as part of the same request (if it’s not too complex).

2 Likes

@jr.2509 How do you suppose I can incorporate that?

If the request gives “No Significant Changes” as the output, I just want to run another prompt like “Please look into it again”, that’s it.

Is it possible? If yes, could you please help me with a sample prompt? I will try it out; it would be of great help to me.

I am happy to take a look, but to see how it could be incorporated into your existing prompt, I would ideally need the full prompt.

@jr.2509

Prompt format is like below:
"""
The first text below is an extract of certain accounting policies for {company_name} for the prior year (set out under “PRIOR YEAR POLICY”). The second text is the equivalent policies for the current year (set out under “CURRENT YEAR POLICY”). Compare these two policies and create a json list for each significant policy change. Only include changes that may represent a more aggressive approach - i.e. could result in lower expenses and higher earnings being recognized in the current year. The fields for each json list should be:

  1. policy_change_description: Description of the policy or wording change (1-2 sentences)

  2. prior_year_policy: Prior year policy. Reproduce relevant text from the policy verbatim. Keep as short as possible: Only print enough for the reader to understand the change.

  3. current_year_policy: Current year policy. Reproduce relevant text from the policy verbatim. Keep as short as possible: only print enough for the reader to understand the change. Highlight key changes with html ‘bold’ tags around the relevant words or sentences (use <b> and </b>)

  4. potential_impact_description: Description of the potential impact

  5. expected_significance_change: Expected significance of change (High, Medium, Low or Unknown).

Be concise. If there are no significant changes, do not generate a json list; instead generate the string “No Significant Changes”

In addition to explicit policy changes, you should also consider changes to methodology, definitions or wording that might allow higher earnings to be recognized. Even where the basic principles remain consistent, these types of changes can represent a more aggressive approach.

Typically, where absolute currency values are provided for each year they concern year-to-year fluctuations in the business, not policy changes.

Ignore changes to the wording that simply relate to a new business or revenue stream. This is not a change in policy but simply a description of the accounting policy for this new stream.

Take a methodical approach. Where the disclosure is broken down into identifiable sections or paragraphs, perform a side-by-side comparison of each section or paragraph from the prior year to the current year. You should also perform a direct comparison of sentences which start with the same structure (e.g. Revenue from components is recognized at a point in time when…) Has the wording changed from one year to the next?

Key Areas of Focus:

  • Changes to the methodology or approach. Focus particularly on changes which result from the company’s own choices.

  • A change that may result in more costs being capitalized to the balance sheet rather than expensed to the income statement. E.g. a change that results in more production costs being capitalized as inventory or fixed assets on the balance sheet.

  • A change that delays the point at which expenses are recognized.

  • A change that may result in assets being depreciated or amortized over a longer period. E.g. where the company extends the “useful life” of an asset.

  • Changes to underlying assumptions or estimates (even if small and justified by historical patterns) which may result in lower expenses/ higher earnings being recognized.

Exclusions:

  • Changes that are due to a change in the business model, changing end markets, new revenue streams, changes in products or services sold or acquisitions made.

  • Changes in amounts which are due to year-to-year fluctuations in the operation of the business or are simply due to the progression of time from one year to the next

  • New policies or methodologies that cover new products or services or revenue streams.

  • Changes that result from commercial terms rather than changes to accounting policies or estimates.

  • Additional clarification or explanation of a policy or methodology, rather than a change in that policy or methodology.

  • Additional descriptive detail of a policy where there is no substantive change to that policy

  • Additional description of policies which cover new segments or geographies

  • Changes that result in costs being recognized faster.

  • Changes that result in costs being recognized earlier

  • Changes that are not likely to result in lower expenses/ higher earnings being recognized in the current year.

  • Changes that represent a more conservative policy

Provide the output under the json key “data”

PRIOR YEAR POLICY:

{previous_year_policy}

CURRENT YEAR POLICY:

{current_year_policy}

"""

1 Like

Thanks for sharing. I understand the ‘context’ better now.

If your goal is to basically re-run the above prompt whenever no significant changes are detected in the accounting policies, then it would indeed need to be a separate request and could not be embedded into the existing one. That said, if you are in doubt, you could consider submitting two separate requests as part of the batch and then either combine the two results or only take the one that indicates changes. You can set the custom IDs of the two requests so that you know they belong to the same comparison, which makes it easy to reconcile the results once the batch is completed.
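If helpful, the duplicate-request idea can be sketched as building the batch input file (one JSON line per request) with paired custom IDs. This is a minimal sketch, assuming a chat completions batch; the model name, company list, and prompt text are placeholders, not from this thread:

```python
import json

def build_batch_lines(companies, prompt_template, model="gpt-4o"):
    """Emit two identical requests per company, paired via a custom_id
    suffix ("-a" / "-b") so results can be reconciled after the batch."""
    lines = []
    for name in companies:
        for suffix in ("a", "b"):
            lines.append(json.dumps({
                "custom_id": f"{name}-{suffix}",
                "method": "POST",
                "url": "/v1/chat/completions",
                "body": {
                    "model": model,
                    "messages": [{
                        "role": "user",
                        "content": prompt_template.format(company_name=name),
                    }],
                },
            }))
    return lines

lines = build_batch_lines(["acme"], "Compare policies for {company_name} ...")
# Both lines share the "acme" prefix, so the two results for the same
# comparison are easy to match up in the batch output file.
```

The resulting lines would be written to a `.jsonl` file and uploaded as the batch input; only the custom-ID pairing scheme is the point here.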

Another thought I have on this: sometimes it is not optimal to simply run the same prompt again as validation. Instead, you could look into a slightly different approach that achieves a similar outcome. One option would be to first programmatically compare the two policies and extract the areas that have changed. You could then feed the identified changes, along with the original policies, to the model and ask it to evaluate them from a materiality/significance perspective.
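The pre-processing idea above could be sketched with Python's standard `difflib`, which extracts the changed passages before any model call. The policy texts below are made up for illustration:

```python
import difflib

def changed_passages(prior, current):
    """Return only the added/removed lines between two policy texts,
    skipping the unified-diff file headers and hunk markers."""
    diff = difflib.unified_diff(
        prior.splitlines(), current.splitlines(),
        fromfile="prior_year", tofile="current_year", lineterm="")
    return [d for d in diff
            if d[:1] in "+-" and not d.startswith(("+++", "---"))]

prior = "Revenue is recognized at a point in time.\nUseful life: 5 years."
current = "Revenue is recognized at a point in time.\nUseful life: 10 years."
print(changed_passages(prior, current))
# → ['-Useful life: 5 years.', '+Useful life: 10 years.']
```

Only the extracted passages (plus the originals for context) would then go to the model for the materiality assessment, which keeps the model's task narrower than a free-form comparison.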

1 Like

@jr.2509 The problem is that when I use the normal Playground GPT, we get the output, but when we send the same prompt through the API, we only get the data on the follow-on prompt, not on the first prompt. This seems to be the case most of the time.

1 Like

One of the general challenges is that it is a very complex prompt / task and there is room for optimization.

For example, through the exclusions you are introducing a lot of nuances and subjectivity. Instead, I would aim to consolidate the list of key areas and exclusions into a single list where you positively formulate the changes you would like the model to detect as specifically as you can.

For example: changes in methodology if they meet the following criteria: criterion A, criterion B, criterion C.


Besides this, there are also other areas in the prompt where you could further streamline the instructions. This might help you achieve more consistency in your responses.

1 Like

Will try to incorporate the suggestions into the prompt. Thanks!

2 Likes

Let us know how it goes. Good luck!

1 Like

Many have noted that results in the Playground often differ from what we see with actual API responses.

If you’re also experiencing this, could you please check the attached screenshot and confirm that you’re using the same code as in the Playground?

This could help us identify if the issue is with the API or something else🙂

The way I see it, you have two options:

  1. When the batch finishes running, collect all the responses that were “No Significant Changes” and run those again, either directly against the chat/completions endpoint or in another batch if it’s not super time-sensitive.
  2. Just set n to some arbitrary number for all of your requests. Depending on the size of your inputs, the expected responses, and the proportion of requests you expect to re-run, it might actually be much cheaper to generate, say, 5 responses for each and aggregate them than to resubmit the requests that came back with no significant changes.
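Option 1 could be sketched roughly like this. The field paths mirror the Batch API output format (each output line has a `custom_id` and a `response.body` holding the chat completion), but do verify them against your actual output file:

```python
def needs_rerun(results, marker="No Significant Changes"):
    """Collect the custom_ids whose responses came back with the
    no-changes marker, so they can be queued for a follow-up run."""
    rerun_ids = []
    for r in results:
        content = r["response"]["body"]["choices"][0]["message"]["content"]
        if marker in content:
            rerun_ids.append(r["custom_id"])
    return rerun_ids

# Stand-in for the parsed batch output file (one dict per line).
results = [
    {"custom_id": "acme-2023",
     "response": {"body": {"choices": [
         {"message": {"content": "No Significant Changes"}}]}}},
    {"custom_id": "globex-2023",
     "response": {"body": {"choices": [
         {"message": {"content": '{"data": [...]}'}}]}}},
]
print(needs_rerun(results))
# → ['acme-2023']
```

The returned IDs would then drive either direct chat/completions calls or a second, smaller batch.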