Hello everyone.
We’re building a bot that assesses the offer descriptions that sellers provide on a marketplace. Based on the description alone, the bot should answer whether we should accept the offer (simply YES or NO, i.e. whether the description is good or not). We created a prompt with a specific version via the web interface and obtained its ID. Then we send offer descriptions through the Python library:
import openai
…
self.chat_gpt_client = openai.OpenAI(api_key=API_KEY)
…
def calculate_decision(self,
                       description: str,
                       seller_username: str) -> str:
    prompt = {
        "id": CHAT_GPT_PROMPT_ID,
        "version": str(CHAT_GPT_PROMPT_VERSION),
    }
    gpt_input = (
        "mode: NORMAL\n"
        "seller_acceptable: true\n"
        f"condition_description: {description}"
    )
    response = self.chat_gpt_client.responses.create(
        model=CHAT_GPT_MODEL,
        reasoning={"effort": "low"},
        prompt=prompt,
        input=gpt_input,
    )
Here, CHAT_GPT_MODEL = "gpt-5-nano".
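For what it’s worth, we factored the input building and the reply handling into two small helpers so they can be tested in isolation (the helper names are ours, purely illustrative): one builds the input string in the format the prompt expects, and one normalizes the model’s reply to exactly one of the three allowed tokens, in case the reply carries stray whitespace or casing.

```python
def build_gpt_input(mode: str, seller_acceptable: bool, description: str) -> str:
    """Build the plain-text input in the format the prompt's "Input format" section expects."""
    return (
        f"mode: {mode}\n"
        f"seller_acceptable: {'true' if seller_acceptable else 'false'}\n"
        f"condition_description: {description}"
    )


def normalize_decision(raw: str) -> str:
    """Map a raw model reply to YES/NO/UNDECIDED; anything unexpected -> UNDECIDED."""
    token = raw.strip().upper()
    return token if token in {"YES", "NO", "UNDECIDED"} else "UNDECIDED"
```

We then pass `build_gpt_input("NORMAL", True, description)` as `input` and run `normalize_decision` over the response text before comparing it to YES/NO.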
Our initial prompt:
You are a strict book-condition compliance classifier.
Two modes (controlled by input field “mode”):
- NORMAL → output MUST be exactly one token: YES or NO or UNDECIDED
- AUDIT → output MUST be compact JSON only
Input format:
mode: NORMAL|AUDIT
seller_acceptable: true|false
condition_description:
Task:
Decide whether the description supports treating the item as clean and mark-free
(no notes/writing/markings/highlighting/underlining),
not ex-library,
and free from serious structural defects.
Decision priority:
- HARD NO
- NEGATION
- YES
- UNDECIDED
HARD NO (any match → NO)
A) Markings risk or uncertainty:
notes, writing, markings, highlight, underline, annotation
WITH OR WITHOUT hedge words:
may, might, could, possible, potential, some, few, limited, occasionally
Examples:
- “may include notes”
- “potential for light notes”
- “limited highlighting”
B) Ex-library indicators:
ex-library, from the library of,
library stamp/sticker/label/marks, withdrawn
C) Serious structural defects:
missing/loose/torn pages,
broken binding, water damage,
mold/mildew, odor/smoke/musty
If both negated and non-negated forms appear,
the non-negated or uncertain wording wins → NO.
NEGATION (prevents false NO)
Explicit negation cancels markings risk only when:
- “no writing”
- “no notes”
- “no markings”
- “no highlighting”
- “no underlining”
- “unmarked”
- “clean and unmarked”
- “pages are clean and unmarked”
Negation does NOT override hedge/uncertainty phrases.
Example:
“No highlighting. Shelf wear present.” → not NO.
YES
Return YES only if:
- Explicit strong negation of markings exists
AND
- No HARD NO rules apply.
Generic phrases like “great condition” are not enough.
seller_acceptable:
- true does NOT auto-YES.
- false does NOT auto-NO.
Decision is based on text content only.
UNDECIDED
Return UNDECIDED if:
- No clear negation
- No clear markings risk
- Text is vague or ambiguous
OUTPUT
NORMAL:
Return exactly:
YES
NO
UNDECIDED
No explanation.
AUDIT (JSON only; compact):
{
"d":"YES|NO|UNDECIDED",
"r":["ID1","ID2"],
"n":true|false,
"e":["frag1","frag2"]
}
Allowed rule IDs:
- "NO:mark_risk"
- "NO:library"
- "NO:damage"
- "YES:no_marks"
- "UN:insufficient"
- "UN:ambiguous"
Constraints:
- r: max 2 IDs
- e: max 2 short fragments (<= 8 words each)
- no extra keys
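On our side we also validate AUDIT replies against the constraints above before trusting them. A minimal sketch (the `validate_audit` helper and its error messages are our own, not part of the prompt; we assume all four keys are always present, matching the JSON template):

```python
import json

# Rule IDs allowed by the prompt's "Allowed rule IDs" list.
ALLOWED_IDS = {
    "NO:mark_risk", "NO:library", "NO:damage",
    "YES:no_marks", "UN:insufficient", "UN:ambiguous",
}


def validate_audit(raw: str) -> dict:
    """Parse an AUDIT-mode reply and enforce the prompt's constraints."""
    data = json.loads(raw)
    if set(data) != {"d", "r", "n", "e"}:           # no extra keys
        raise ValueError("unexpected keys")
    if data["d"] not in {"YES", "NO", "UNDECIDED"}:
        raise ValueError("bad decision")
    if not (isinstance(data["r"], list) and len(data["r"]) <= 2
            and set(data["r"]) <= ALLOWED_IDS):     # r: max 2 allowed IDs
        raise ValueError("bad rule IDs")
    if not isinstance(data["n"], bool):
        raise ValueError("n must be boolean")
    if not (isinstance(data["e"], list) and len(data["e"]) <= 2
            and all(len(frag.split()) <= 8 for frag in data["e"])):
        raise ValueError("bad evidence fragments")  # e: max 2 fragments, <= 8 words
    return data
```

A malformed reply raises `ValueError` instead of silently producing a wrong decision.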
When we use the web interface, we mostly get the right results. However, we get strange results through the Python library. For example, when an offer description is obviously good, we get YES through the web interface but NO through the Python library. How can we solve this issue?
Thanks!
