Output matched against incorrect rule

I am using the gpt-4-vision-preview model and doing the following:

1 - Pass a system message telling the model to act as a compliance officer and identify any issues with the marketing material.
2 - Pass a user message with the rules and the input. The rules are passed as embeddings.
3 - The image with the content is passed as the input.
4 - The output identifies a certain problem and matches it to one of the rules passed in step 2.
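
For context, the call looks roughly like this (a simplified sketch; the file name and rule text are placeholders, and the rules are shown inline rather than retrieved via embeddings):

```python
import base64
from openai import OpenAI

client = OpenAI()

# Placeholder ruleset -- the real rules are longer and retrieved separately.
rules = """Rule 1: Identify a company name that appears in the input.
Rule 2: ..."""

# Encode the marketing image for the vision model.
with open("marketing_material.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "system",
            "content": "You are a compliance officer. Identify any issues "
                       "with the marketing material against the rules provided.",
        },
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Rules:\n{rules}\n\nCheck the image against these rules."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        },
    ],
    max_tokens=500,
)

print(response.choices[0].message.content)
```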

The OpenAI Chat Completions API's response states that a violation occurred and identifies both the text that caused the violation and the rule the API used to make this decision.

The rule and the text do not appear related.
For instance, the rule states “… identify a company name that appears in the input …”.
The text the API identifies as causing the violation is “… manage all accounts in one place”. This text appears in the image next to the company name.

The violation is stated as “Misleading Representation”. The explanation given is that not all accounts may be supported, so the claim is a misleading representation.

The violation and its explanation do make sense when you look at the text; however, I am not sure why it matched that particular rule. Is it because there is no other rule given to identify a company name?

I am trying to understand how the Chat Completions API approaches the task when it is instructed to identify a problem and match it against a specific rule.
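
One variation I am considering (hypothetical rule IDs and simplified wording, not my production prompt) is giving every rule an explicit ID and asking the model to cite the ID and quote the offending text, so a mismatch like this one is easier to spot:

```python
# Hypothetical rule IDs and simplified wording, for illustration only.
rules = [
    ("R1", "Identify a company name that appears in the input."),
    ("R2", "Flag statements that could be a misleading representation."),
    # ... the remaining rules
]

# Number the rules so the model has to commit to a specific one.
rules_text = "\n".join(f"{rule_id}: {text}" for rule_id, text in rules)

# Response instructions appended to the user message.
instructions = (
    "For each violation, reply with three lines:\n"
    "rule_id: the ID of the single rule that was violated\n"
    "quote: the exact text from the image that violates it\n"
    "explanation: why the quoted text violates that rule\n"
    "If no rule covers the issue, say so rather than picking the closest rule."
)
```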

Welcome to the community!

Do you wanna give us an example of a prompt where this happened? There are a bunch of things that could be going on, but it's difficult to analyze without a concrete example 😕

I'm curious to hear more about this too. I have built an application that involves compliance assessments.