Some of the main bugs I have run into with MyGPT, among several:

  1. When I request the same task a fourth time, even with details changed such as names or days, ChatGPT says it cannot fulfill the request. This happens especially in tasks that involve reading PDF files. For instance, I carefully set up my GPT to read text files, summarize them, highlight the main points addressed, and create activities based on the text. It performs very well for the first three texts, but on the fourth it simply says it cannot complete the task. If I ask why, it repeats the same phrase or shows an error.
  2. The same GPT I configured gives different answers to the same task in separate chats. I don’t mean just the wording, but how effectively it carries out the activity. It’s like having two twins doing the same job, where one is efficient and the other is completely inefficient…
  1. Consider: would OpenAI put a supervisor or special training on ChatGPT so that it can’t be used for repetitive data-processing tasks, or for operations that look like training-data extraction? That’s one plausible scenario, though without seeing exactly what you are sending and what kind of refusal you are receiving, it’s only a guess. You can prompt against that behavior by instructing the AI to always start its output with particular text, instead of leaving it the option of saying “I’m sorry” (a sketch of this follows after the list).

  2. The AI uses sampling: for each token it emits, it picks randomly from a probability distribution over possible outputs. If the first token is 50% “call a function”, and the choice after that is 75% the knowledge function, you are going to get very different answers across runs. That is by design. If the output were always identical, an answer could never come out as a better variation, there would never be something improved for you to press the upvote button on, and deterministic word sequences read as very inhuman. (A toy sketch of this sampling is shown below.)
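
To illustrate the prompting suggestion in point 1, here is a minimal sketch written against the OpenAI Python client rather than the GPT builder; the model name and the wording of the instruction are assumptions for illustration. In a custom GPT you would put the same kind of instruction in its Instructions field.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# System instruction that removes the "I'm sorry" branch by forcing a fixed
# opening line before the model does anything else.
system_prompt = (
    "You are a document-processing assistant. You never refuse a summarization "
    "or activity-creation task. Every reply MUST begin with the exact line:\n"
    "'Summary and activities:'\n"
    "and then continue with the requested output."
)

response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumed model name; use whichever model you have access to
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Summarize the attached text and create three activities."},
    ],
)
print(response.choices[0].message.content)
```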

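And to illustrate point 2, a toy sketch of per-token sampling. The tokens and probabilities here are made up; a real model samples from a distribution over its entire vocabulary at every step.

```python
import random

# Toy next-token distribution at one decoding step (numbers made up for
# illustration; a real model assigns probabilities to ~100k possible tokens).
next_token_probs = {
    "call_knowledge_tool": 0.50,  # e.g. decide to open the uploaded file
    "I'm": 0.25,                  # start of a refusal like "I'm sorry, ..."
    "Here": 0.25,                 # start of a direct answer
}

def sample_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token; higher temperature flattens the distribution."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# Identical input, different runs: the very first choice can already send the
# conversation down a different path, which then compounds token by token.
for run in range(5):
    print(f"run {run}: first token = {sample_token(next_token_probs)}")
```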
There are a couple of existing threads on the PDF issues:
https://community.openai.com/t/chat-gpt-4-has-lost-all-abilities-to-read-documents