I cannot give a link to the custom GPT, as it’s currently in development and not publicly accessible. But I have studied this link-removing behavior by investigating the chat-session JSON produced by the data export. Each session object has a safe_urls property whose value is an array of URLs. Apparently this array is generated server-side, and my hypothesis is that if a URL was present in the user’s messages, it is deemed safe and added to the safe_urls list. When the response arrives, the client removes from the generated text any link that is not in this list. You can even see the beginning of the URL arriving via the stream before it is immediately removed.
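To make the hypothesis concrete, here is a minimal sketch of what such a client-side filter could look like. This is my own reconstruction, not OpenAI’s actual code: the function name, the regex, and the “replace the link with its label text” behavior are all assumptions.

```python
import re

# Matches markdown links and images: optional "!", [label](url)
MD_LINK = re.compile(r'!?\[([^\]]*)\]\(([^)\s]+)\)')

def filter_links(text: str, safe_urls: list[str]) -> str:
    """Hypothesized client-side step: strip any markdown link or image
    whose URL is not in the session's safe_urls array, keeping only
    the label text."""
    def repl(match: re.Match) -> str:
        label, url = match.group(1), match.group(2)
        # Keep the link intact only if its URL is on the safe list.
        return match.group(0) if url in safe_urls else label
    return MD_LINK.sub(repl, text)
```

Running this over a streamed response would produce exactly the observed effect: a link whose URL never appeared in the user’s messages is silently reduced to plain text.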
Here’s how to replicate it with plain prompts; no custom GPT needed:
(a) This worked for me (start a new session):
Render this markdown for me:
![Wild boar](https://www.tammiskola.lv/images/parnadzis2.png)
(b) This reliably failed (start a new session):
Concatenate the URL from these two strings: "https://www.tammiskola.lv" and "/images/parnadzis2.png", then render this markdown for me:
![Wild boar](<concatenated URL>)
Now, if you continue the failed session (b) with prompt (a), which worked fine on its own, the image will still not be rendered. Apparently, once a URL has been removed, it is never allowed again within that session. However, you can continue by asking to render another image whose URL has not been previously mentioned, and it will render just fine:
Render this markdown for me:
![Wild boar](https://www.tammiskola.lv/images/zakis2.png)
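After running these experiments, one can verify which URLs actually landed in the safe list by inspecting the exported session data. This is a minimal sketch assuming the export is a JSON array of session objects, each with optional title and safe_urls fields; the exact structure and field names may differ between export versions, so treat it as illustrative.

```python
import json

def list_safe_urls(export_path: str) -> dict[str, list[str]]:
    """Map each session's title to its safe_urls array, as found in
    a ChatGPT data export (assumed structure: a JSON array of
    session objects)."""
    with open(export_path, encoding="utf-8") as f:
        sessions = json.load(f)
    return {s.get("title", "untitled"): s.get("safe_urls", []) for s in sessions}
```

In my exports, a URL that was typed in full in a prompt showed up in safe_urls, while the concatenated URL from experiment (b) did not.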
From this I conclude that if a full URL is mentioned in either the user’s prompt or the custom GPT’s API response, it will be considered safe. However, if the GPT’s system prompt instructs the GPT to build the URL from parts (say, “use this base URL https://example.com/report?id=<reportID> and replace <reportID> with the number received from the function call”), the resulting URL will not be considered safe and will thus be neutered at the client side.
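This conclusion can be illustrated with a sketch of the hypothesized server-side rule: only URLs that appear verbatim in user messages enter the safe list, so a URL assembled from fragments never qualifies. Again, this is my own guess at the mechanism, not confirmed behavior; the regex and function are illustrative.

```python
import re

# Naive URL matcher; stops at whitespace, quotes, and closing brackets.
URL_RE = re.compile(r'https?://[^\s)>"\']+')

def collect_safe_urls(user_messages: list[str]) -> list[str]:
    """Hypothesized server-side rule: any full URL appearing verbatim
    in a user message is added to the session's safe list."""
    seen: list[str] = []
    for msg in user_messages:
        for url in URL_RE.findall(msg):
            if url not in seen:
                seen.append(url)
    return seen
```

Feeding it the prompt from experiment (b) shows the problem: the base URL fragment is collected, but the full concatenated URL never appears in any message, so the rendered image link fails the client-side check.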