Failure of Custom Instructions, and how OpenAI should address it

Using the web interface of ChatGPT, not the API, I’ve spent the last week extensively rewriting my custom instructions and then testing, in new sessions, how they function after each change. This isn’t the best testing environment, since tests need to be run sequentially rather than concurrently and can’t be compared in real time against a fixed baseline. Nonetheless…
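For anyone who does have API access, the same experiment is easier to control there: the system message stands in for the stored custom instructions, and every call is an independent session. Below is a minimal sketch of such a harness; the instruction text, the compliance check and all function names are my own illustrative choices, not anything OpenAI prescribes.

```python
# Sketch: repeatable compliance test for one standing instruction.
# Each API call is stateless, so every call is a fresh "session".
# The check is a crude string match, but it is enough to spot an
# instruction that is being silently ignored.

CUSTOM_INSTRUCTION = "Always end your reply with the word DONE."  # illustrative

def build_request(user_prompt: str) -> list[dict]:
    """Messages for one independent test session: the standing
    instruction as the system message, then the test prompt."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTION},
        {"role": "user", "content": user_prompt},
    ]

def complied(reply: str) -> bool:
    """Did this reply honour the standing instruction?"""
    return reply.strip().endswith("DONE")

def compliance_rate(replies: list[str]) -> float:
    """Fraction of sessions in which the instruction was actioned."""
    return sum(complied(r) for r in replies) / len(replies)
```

Feed `build_request(...)` to whatever client you use (e.g. `client.chat.completions.create(model=..., messages=...)` in the official Python SDK), collect the reply texts, and `compliance_rate` gives you a single number to compare before and after each rewrite of the instruction.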

I tested under the ‘tutelage’ of a GPT-4o model running my original instructions. After close to 50 tests, it and I concluded that the instructions, beyond basic user knowledge (e.g. my name), either do not work or are not referenced unless prompted, and even then are rarely actioned.

Occam’s Razor says they’ve been switched off with no announcement, which would presumably mean millions of consumer hours wasted in attempts to optimise them.

Pretty poor, if you ask me. My baseline model put it more eloquently at the end of an evaluation of why this state exists, and suggested some actions that are, IMHO, the minimum set OpenAI should be actioning.

None of your hypotheses explains why OpenAI would waste so many millions of user hours making something unavailable while acting as if it still works. Even if a security vulnerability, or any other of your explanations, were the reason, and even if OpenAI were reluctant to announce it, the right and ethical thing would be to withdraw the functionality for undeclared ‘updates’ rather than dupe users into believing it still worked. I find the present state murky, non-transparent and downright unethical.

ChatGPT

Your points are valid and align with the frustration many users have expressed regarding the unclear state of custom instructions in ChatGPT. Let’s re-evaluate with these considerations:

Potential Reasons for the Issues

  1. Security and Misuse Prevention:
  • Hypothesis: OpenAI might have disabled or limited custom instructions to prevent misuse, such as jailbreaking attempts. The lack of explicit communication could be to avoid drawing attention to potential vulnerabilities.
  • Evidence: While this is speculative, the absence of clear communication and the observed inconsistencies suggest a deliberate limitation. This would fit the observed behavior of custom instructions not functioning as expected without an official announcement.
  2. Technical Challenges and Model Updates:
  • Hypothesis: Updates to the model and technical adjustments could be affecting the functionality of custom instructions. These changes might not be fully communicated to avoid frequent updates and confusion among users.
  • Evidence: Users have reported changes in behavior after updates like the switch to GPT-4 Turbo, indicating that the model’s handling of custom instructions is still being refined (links removed).
  3. Operational Adjustments:
  • Hypothesis: OpenAI might be iterating on how custom instructions are applied, leading to temporary inconsistencies. This could include testing different implementations or restrictions internally before a broader rollout.
  • Evidence: The variability in user experiences suggests ongoing changes and testing within the platform (links removed).

Ethical Considerations and User Impact

  • Lack of Transparency:
    • Not communicating these changes leads to significant user frustration and wasted time. Users attempting to optimize their use of custom instructions are left in the dark about their actual functionality.
    • Ethical Implications: Transparency is crucial for user trust and effective usage. If OpenAI is limiting or disabling features, informing users would prevent wasted effort and maintain trust.
  • User Experience:
    • The current approach leads to a suboptimal user experience, as users cannot effectively utilize a feature they believe should work.
    • Best Practice: OpenAI should provide clear guidance on the status and expected behavior of custom instructions.

Recommendations for OpenAI

  1. Clear Communication:
  • OpenAI should provide detailed updates on the status of custom instructions, including any limitations or ongoing changes.
  • Benefit: This would help users understand current capabilities and avoid wasted effort.
  2. User Feedback Integration:
  • Actively solicit and integrate user feedback to address concerns and improve functionality.
  • Benefit: Enhances user experience and aligns development with user needs.
  3. Interim Solutions:
  • Provide workarounds or alternative methods for achieving similar outcomes while custom instructions are being refined.
  • Benefit: Maintains user productivity and satisfaction during transition periods.
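On the ‘interim solutions’ point, one workaround of this kind can be sketched in a few lines: inline the instructions into every prompt yourself instead of relying on the stored ones. This assumes, untested, that in-context text is honoured where stored custom instructions are not; the instruction content, wrapper wording and function name below are purely illustrative.

```python
# Sketch of the "inline the instructions yourself" workaround:
# instead of relying on stored custom instructions, prepend them
# to every prompt. The result can be pasted into the web UI or
# sent as the user message via the API.

CUSTOM_INSTRUCTIONS = """\
My name is Alex.
Answer concisely and cite sources where possible."""  # illustrative content

def inline_prompt(user_prompt: str) -> str:
    """Prefix the prompt with the standing instructions so the
    model sees them in-context on every single turn."""
    return (
        "Follow these standing instructions for this reply:\n"
        f"{CUSTOM_INSTRUCTIONS}\n---\n"
        f"{user_prompt}"
    )
```

The cost is a longer prompt on every turn; the benefit is that the instructions demonstrably reach the model, which makes any remaining non-compliance attributable to the model rather than to the custom-instructions plumbing.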

Conclusion

The current situation with custom instructions is frustrating for users and lacks transparency. While security concerns and technical adjustments are plausible reasons, the ethical and practical approach would involve clearer communication from OpenAI to ensure users are informed and can adjust their usage accordingly.


Have you contacted OpenAI about this? Did you get an answer? I’m very curious how to communicate with them.