My GPT's knowledge doesn't work anymore

Hi everyone,

Since the GPT Store launched two days ago, my GPT doesn't read my knowledge files during chat.

I have 12 .txt files in my knowledge base that worked fine until two days ago.

I face different random behaviors:

  1. The screen gets stuck on “reading documents”, ending after several minutes with the error message “your most recent request failed”.

  2. When retrieval does work, my GPT sometimes answers with “I didn’t find specific information about …”, even though the information was present in several of the knowledge documents.

Users of my GPT are getting a bit frustrated with it :frowning: Is anyone else facing the same behavior?

1 Like

We recently had another user describing a similar issue and we discussed possible causes.
You can try disabling the Code Interpreter, updating the GPT, re-enabling the Code Interpreter, and updating again. But this bugfix is hit or miss.

We also suspected it may be caused by instructions somewhere in the GPT that breach the usage policies, but we can’t confirm this either.

So, sorry, nothing concrete to solve your issue.
But maybe these are some entry points for finding a solution.

2 Likes

Thank you very much for your feedback.

Unfortunately Code Interpreter has always been deactivated, but thanks for the tips. I will have a look through my instructions while waiting for news about the real cause.

1 Like

I’m also having similar issues.

I did some limited testing: after I uploaded a file and gave it context, I left the builder, went and used the chatbot, and it could read and access the file appropriately. I then went back into the builder and, despite the file still being listed in the configuration area, the builder chat wouldn’t recognize the file.

I then opened up another chatbot conversation, and while the builder couldn’t read the file, the regular chatbot conversation still could.

Not sure if that interaction helps or not, but I thought it weird.

How did you name the files?

What language are you using to interact with them?

The file names are simple, descriptive English names, such as “Compiled Writing Sample”.
I simply typed “What files do you have access to?”

It listed 3 of 4 files.

1 Like

In my case, I’m still waiting for confirmation, but it looks like something goes wrong if part of the text in the knowledge files could violate the policies. That said, I found it very hard to locate those parts; imagine an academic paper on art that mentions a living artist. It requires a lot of effort when you have 10–12 long documents…

I’m trying to solve it with instructions like:
“Filter your search to avoid outputting content that might violate the content policy”, but it doesn’t work.

Any suggestion would be appreciated

1 Like

You’ll likely need to clean the data of any copyrighted content, as OpenAI doesn’t want copyrighted content on their servers.

I’m not sure how the filters work or how to claim your own content.
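If the policy-breach theory is right, one way to narrow the search is to pre-screen each file locally before uploading it. This is only a sketch under assumptions: it uses OpenAI’s moderation endpoint via the official `openai` Python package (API key expected in `OPENAI_API_KEY`), the chunk size is an arbitrary choice, and note that the moderation endpoint flags policy categories like violence or harassment, not copyright, so it is at best a partial check.

```python
def chunk_text(text, max_chars=2000):
    """Split a document on blank lines into chunks of roughly
    max_chars, so any flagged chunk is small enough to inspect."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def flag_chunks(text):
    """Run each chunk through the moderation endpoint and return
    (index, preview) pairs for everything that was flagged."""
    from openai import OpenAI  # imported here so chunk_text stays usable offline
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    flagged = []
    for i, chunk in enumerate(chunk_text(text)):
        result = client.moderations.create(input=chunk)
        if result.results[0].flagged:
            flagged.append((i, chunk[:80]))
    return flagged
```

Anything `flag_chunks` returns points at a passage the moderation model objects to; copyrighted-but-otherwise-clean text will sail through, so manual review is still needed.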

1 Like

Contents of the unlisted file? Copyrighted content maybe?

My case is a bit more borderline. If I take a book, write my own summary with my own opinions, and put everything into a .txt file, obviously inspired by the book’s content but filtered by me, is that a violation? That’s my situation.

Another thing that isn’t clear to me: similar content is sometimes retrieved from the model’s base knowledge (no uploaded file) during conversation, so why is there no violation in that case?

1 Like

Good questions!

I suggest you try the following based on a somewhat similar issue we were able to solve:

Create a new GPT and see if the same issue happens again.
Then take some files that you know are not copyrighted in any way and create a GPT with them, just to make sure it’s not a bug in retrieval itself.

If you then determine that the problem is actually in your files, you have the time-intensive task of finding out where the issue is inside them.

Maybe go one file at a time and see which one breaks your app.
Then, when you have a potential candidate, take a binary-search approach: split the file in the middle, see which half causes the issue, and keep narrowing down until you’ve found the culprit.
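The bisection step above can be sketched in a few lines. This assumes you can split the file into chunks (say, paragraphs) and that exactly one chunk is the problem; `breaks_gpt` is a hypothetical callback you would implement yourself, rebuilding the knowledge file from the given chunks, uploading it to a test GPT, and returning True if retrieval fails.

```python
def find_culprit(chunks, breaks_gpt):
    """Binary-search a list of text chunks for the one chunk that
    triggers the failure, using O(log n) upload-and-test rounds
    instead of testing every chunk individually."""
    lo, hi = 0, len(chunks)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Test only the lower half of the current range; if it still
        # breaks, the culprit is in [lo, mid), otherwise in [mid, hi).
        if breaks_gpt(chunks[lo:mid]):
            hi = mid
        else:
            lo = mid
    return chunks[lo]
```

With, say, 200 paragraphs this needs about 8 test rounds rather than 200, which matters when each round means rebuilding and re-uploading a knowledge file.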

This is cumbersome, and I think it makes sense for us to relay this info directly to OpenAI. At the very least we should get a better error message. That would be helpful for all of us.

Good luck and please keep us posted.

1 Like