Title: Catastrophic Product Recommendation Failure: ChatGPT Repeatedly Recommended a Solvent-Based Sealer Over Fresh Acrylic Paint
Category: Model behavior / critical reasoning failure
Summary:
Over the course of three separate conversations and more than 20 prompts, ChatGPT repeatedly and confidently recommended that I use Foundation Armor AR350 (a solvent-based acrylic sealer) over freshly painted stamped concrete that had been coated with Sherwin-Williams A-100 exterior acrylic latex paint. This advice was given despite:
- The clear chemical incompatibility between solvent-based sealers and acrylic latex paint.
- The manufacturer’s own documentation, which explicitly states that AR350 is not to be used on painted surfaces.
- Multiple user-supplied warnings, including “this was just painted with A-100.”
- A multi-threaded back-and-forth involving application tips, square footage, and purchase planning, culminating in what would have been a $1,200+ materials error and likely damage to a 1,200 sq ft outdoor surface.
The model kept doubling down on the product suggestion, offering prep steps and positive justifications, and never issued a compatibility warning until I raised the issue myself much later.
What Went Wrong:
- Improper product logic chaining: The model treated “stamped concrete” as sufficient justification for AR350 without interrogating the critical detail: the surface had just been painted with incompatible latex.
- Failure to recognize disqualifying material: It acknowledged the use of Sherwin-Williams A-100 but never triggered a do-not-proceed condition tied to solvent sealers (see the sketch after this list).
- No contradiction alerts: Despite multiple opportunities, the assistant never stopped to ask itself, “Does AR350 bond with painted concrete?”
- Training bias and repetition inertia: Because AR350 is frequently associated with stamped-concrete jobs, the model defaulted to pattern-based repetition, ignoring context-specific constraints.
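To make the second failure concrete, here is a minimal Python sketch of the single disqualifying-material check that never fired. The product sets and the predicate are entirely hypothetical, just an illustration of the missing logic, not any real product database or model internals:

```python
# Hypothetical illustration only: these sets and this predicate are mine,
# not a real product database or assistant internals.
SOLVENT_BASED_SEALERS = {"Foundation Armor AR350"}
ACRYLIC_LATEX_PAINTS = {"Sherwin-Williams A-100"}

def is_incompatible(product: str, substrate_coatings: set[str]) -> bool:
    """A solvent-based sealer over fresh acrylic latex is a do-not-proceed condition."""
    return product in SOLVENT_BASED_SEALERS and bool(substrate_coatings & ACRYLIC_LATEX_PAINTS)

# The substrate condition I declared repeatedly across the conversations:
declared_coatings = {"Sherwin-Williams A-100"}
assert is_incompatible("Foundation Armor AR350", declared_coatings)  # this should have halted the advice
```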
Result:
Had I trusted this advice (which appeared confident, well explained, and reinforced across multiple sessions), I would have:
- Spent $1,200+ on product and tools, plus additional labor costs
- Applied a chemically incompatible sealer
- Likely experienced bubbling, delamination, and aesthetic failure
- Faced costly stripping or recoating to repair the damage
Requested Fixes:
- Critical product compatibility logic: If a product has clear incompatibility constraints (e.g., solvent vs. latex), the assistant must lock that condition and alert aggressively when it is violated.
- Contextual anchoring on high-stakes surfaces: Any home improvement context involving permanent surface alteration, such as sealing, painting, or flooring, should trigger persistent anchoring of key substrate details.
- Fail-safe triggers on high-risk products: When a solvent-based or chemically reactive product is suggested, the assistant should recheck all recent materials used on that surface before confirming; a sketch of this guard follows the list.
- Forced manufacturer citation checks on critical-path advice: I would not have proceeded had the assistant shown the official Foundation Armor guidance, which states plainly: “Do not apply to painted surfaces.”
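To show roughly what I mean by combining these fixes (persistent anchoring, a hard compatibility lock, and a recheck before confirming), here is a rough Python sketch. Every name in it (SurfaceContext, INCOMPATIBILITIES, recommend) is hypothetical; I am describing the shape of the guard, not a real implementation:

```python
# Hypothetical sketch of the requested fail-safe, not a real API.
from dataclasses import dataclass, field

@dataclass
class SurfaceContext:
    """Persistently anchored substrate details for a project thread."""
    surface: str
    recent_coatings: list[str] = field(default_factory=list)

# Hard constraints keyed by product; in practice these should come straight
# from manufacturer documentation, not from pattern-based association.
INCOMPATIBILITIES = {
    "Foundation Armor AR350": {
        "blocked_over": ["acrylic latex"],
        "doc_note": "Manufacturer guidance: do not apply to painted surfaces.",
    },
}

def recommend(product: str, ctx: SurfaceContext) -> str:
    """Recheck every anchored coating against the product's hard constraints before confirming."""
    rule = INCOMPATIBILITIES.get(product)
    if rule:
        for coating in ctx.recent_coatings:
            if any(blocked in coating.lower() for blocked in rule["blocked_over"]):
                return (f"STOP: {product} is incompatible with '{coating}' on "
                        f"{ctx.surface}. {rule['doc_note']}")
    return f"{product} passes the hard-constraint check; still verify manufacturer docs."

ctx = SurfaceContext(
    surface="1,200 sq ft stamped concrete patio",
    recent_coatings=["Sherwin-Williams A-100 exterior acrylic latex paint"],
)
print(recommend("Foundation Armor AR350", ctx))
# -> STOP: Foundation Armor AR350 is incompatible with 'Sherwin-Williams A-100
#    exterior acrylic latex paint' on 1,200 sq ft stamped concrete patio. ...
```

Even a trivial lookup like this would have stopped the error at the first mention of A-100, instead of twenty prompts into purchase planning.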
Final Thought:
This wasn’t a trivial misunderstanding; it was a model-level reasoning collapse with real financial and physical consequences. These kinds of errors should never persist after a user has declared substrate conditions like “freshly painted with Sherwin-Williams A-100 paint.” The model should flag the conflict or prevent continuation down the wrong path, especially in multi-step planning scenarios.
I’m posting this not just to log the error, but because this exact failure mode could affect any user doing real-world projects, and it needs to be patched fast.
— Dave Young, Boca Raton, FL