Recently, I encountered a tricky issue while developing a custom GPT that supports multiple characters. To maximize extensibility, I stored all character information, including avatar links, in a knowledge file. The avatars, hosted on GitHub, were rendered via Markdown image links.
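For context, the knowledge file looked roughly like this (the character names and repository URL below are hypothetical placeholders, not my actual data):

```markdown
## Alice
Personality: cheerful, curious
Avatar: ![Alice](https://raw.githubusercontent.com/example/avatars/main/alice.png)

## Bob
Personality: gruff, loyal
Avatar: ![Bob](https://raw.githubusercontent.com/example/avatars/main/bob.png)
```

The GPT's instructions told it to emit the matching `![...](...)` line whenever a character spoke, so ChatGPT would render the avatar inline.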
For a month, everything worked fine; then the avatar images suddenly stopped displaying. Oddly, repeating the same content in a follow-up chat made the images appear again. The issue seemed to trigger under specific conditions and was unrelated to the image source, link format, or browser; it wasn't a network issue either.
I posted on the OpenAI forum (In custom GPT, some images in Markdown format cannot be displayed irregularly, but they can be reproduced stably under the same conditions) seeking help and found other users facing the same problem, but no solutions were provided.
After extensive testing, I discovered that an image displays normally if its link has appeared in a previous prompt. If the link appears only in the knowledge file and never in a prior prompt, the image won't display. This appears to be a new security mechanism from OpenAI, and it somewhat hinders creativity.
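The behavior I observed can be summarized as follows (the URL is a hypothetical placeholder):

```markdown
<!-- Chat A: the URL never appears in a user prompt -->
User: Show me Alice.
GPT:  ![Alice](https://raw.githubusercontent.com/example/avatars/main/alice.png)
      -> broken image; the link comes only from the knowledge file

<!-- Chat B: the URL appeared in an earlier user message -->
User: Here is Alice's avatar: https://raw.githubusercontent.com/example/avatars/main/alice.png
User: Show me Alice.
GPT:  ![Alice](https://raw.githubusercontent.com/example/avatars/main/alice.png)
      -> image renders normally
```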
Should my assumption prove true, I urge OpenAI to reconsider this constraint. Links originating from the code interpreter or from knowledge files are inherently safer than those returned by actions, so the rationale behind this restriction is unclear to me. To monitor whether this bug (or security feature) persists, I've created a GPT specifically to test it: https://chat.openai.com/g/g-horVE39hs-does-custom-gpt-support-md-image-only-in-file
I warmly welcome everyone to share their experiences or opinions.