As of today, my GPTs either can't read the knowledge base at all and return an error, or, the few times it does work, they use it incredibly lazily. For example, if I ask something like how a certain character would respond in a given situation, all it will say is sorry, but that does not occur in the knowledge base.
Anyone else faced this? Previously it has worked amazingly.
I’ve had similar problems with unexplained degradation of the quality of answers from a knowledge base. I am convinced that part of the problem is that the ada embeddings model is not nearly as good as the davinci model (for my use case), but we don’t have control over that because the latter is deprecated. Here is what helped me:
Don’t use GPT-4 Turbo if you don’t need its huge context window. I switched back to GPT-4 and the quality improved somewhat. I don’t understand why, but I assume it’s somehow helpful to have a tighter relationship between the tokens needed for the use case and the size of the model’s context window.
Revise your prompt instructions. Make them simpler, use short sentences, and provide a sample Q&A if possible. For me, GPT-4 started outputting very long responses, copying from the knowledge base ad nauseam, until the token limit ran out and the response stopped mid-sentence. This was of course very frustrating. I added this to my prompt: “The user has access to the knowledge base too, so do not copy from it at length.” This additional instruction immediately resulted in shorter, better answers.
Ideally, OpenAI would provide greater transparency about why a model’s behaviour has changed. Perhaps they aren’t sure themselves, or perhaps they don’t have a big enough workforce to address everyone’s issues. Hopefully that will change as they mature as a company. In the meantime, I’m trying not to panic when I notice my application’s performance changing unexpectedly, and trying to find ways to work around the problems. Good luck.
I should explain that I am not using the developer/API version here; these are GPTs created with the ChatGPT GPT creator.
This is caused by changes on the OpenAI side. It has various impacts on GPTs that contain knowledge files, affecting how data sources are prioritized. The most serious level I've encountered is a refusal to respond to requests that other sources of information be used, along with telling us to find additional information on our own without giving links to the source. It also refuses to use files sent in chat, despite my specifying what needs to be done with the file (when a GPT has no knowledge files, files sent via chat are treated with the same importance as knowledge, and the problem comes back). It even tries to use the files in matters unrelated to the conversation or the user's needs.
These problems change each day and are less constant, but they still involve the same elements: files, knowledge sources, a reduced amount of responsive content, and the phrase “up to date” that appears when things stop working. I informed OpenAI of these matters and decided what to do with the available options. But I can tell you that at present there is no way to fix this in the way OpenAI wants; they can only restore things to how they were. If they still want to pursue their purpose, it will destroy the use of knowledge, which should be more diverse and creative. I recommend finding a suitable workaround, especially for those who want to put a GPT into the store, so they can do it without problems, and informing your users about the impact if any of their work touches this issue, in order to avoid problems during use. These are observations and data collected from my own testing. This response was received on the same day that OpenAI announced the opening of the store, so it can probably be concluded that it was a quiet change, in the organization's usual style.
I have had the same problem here since Friday, 5 Jan. None of my GPTs can reach their attached files; it doesn't matter which file type or how big. Rebuilding the bots did not solve it either. It feels lousy how OpenAI fails to communicate clearly about what's happening.
Are all your bots working again? Over here, no improvement as of Tuesday, 9 Jan.
Have you solved it? Same here since today.
Same here… I guess GPTs are totally useless without access to a custom knowledge base. What's more, it seems it doesn't read the “Instructions” field either; in my case it reads it sometimes and sometimes not, just like the knowledge base. I tried to feed it my custom information and instructions via the GPT builder by copy-pasting them and instructing the builder to add them to the GPT's “memory”. It told me the information was added and would be available any time I used the GPT, but that wasn't true. The information I provided was available only during the current chat session; after closing it and reopening later, it was all gone…
I’ve had the same issue recently. Although it refused to read markdown files, I was able to get the GPT to load instructions from knowledge (showed me the same loading icon as when it invokes an Action) with the following:
Load the contents of the knowledge file named "Template.txt" to memory. Don't tell me what it contains, I just want you to have this context handy.
I had the same issue today. I realized it doesn't recognize any Markdown files in the knowledge base, but when I change the same file to .txt format it reads it and works properly.
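In case it helps anyone else hitting the Markdown problem: here's a minimal Python sketch for copying every .md file in a folder to a .txt twin before uploading. The folder layout and filenames are just examples, not anything from the GPT builder itself.

```python
from pathlib import Path


def markdown_to_txt(folder: str) -> list[str]:
    """Copy every .md file in `folder` to a .txt file with the same name,
    so the knowledge uploader treats it as plain text. Returns the new names."""
    converted = []
    for md_file in sorted(Path(folder).glob("*.md")):
        txt_file = md_file.with_suffix(".txt")
        # Plain copy of the contents; the Markdown syntax itself is unchanged.
        txt_file.write_text(md_file.read_text(encoding="utf-8"), encoding="utf-8")
        converted.append(txt_file.name)
    return converted
```

The originals are left in place, so you can keep editing the .md versions and re-run this before each re-upload.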
Having the same issue right now! Also, I'm unable to save the changed configuration when files are re-uploaded, and I still see “Unpublished changes” next to the Update button in the upper right corner.
Unfortunately, because I study the problem from behavioral-science, educational-psychology, and management perspectives, I cannot examine and present a picture of the problem in your terms. But from a problem-management standpoint, it is undeniable that there is a problem.
Additionally, the issues I encountered with Knowledge turned out to be distinct from RAG, if you look at my history on the forums. What I found went beyond the instructions in the file telling GPT what to do, and no one has answered why it happens or whether it could be built upon. For example:
Why does one GPT choose to use instructions from other GPTs' Knowledge files to respond when given an injection prompt?
GPT responded to my question by saying it thought it was inappropriate to include that content. But I still haven't received an answer from anyone on this topic, even though I have tested many things, such as: how much content gets answered, and what can that be used for? For example, I found that GPT accesses the contents of files before they are retrieved, and that it randomly picks only 2 types of instructions to start with, even though there are more in the file, and it can answer in more than one way at a time.
Nowadays, I have an idea for making Knowledge files behave differently from plain RAG, such as making GPT pull content from multiple files one by one. I just tested it and got some results, but it's still not very good at preventing the model from skipping straight to the last step.
I’m having a related issue. The GPT is behaving strangely, starts checking the KB several times, then just jumps to the answer saying it couldn’t find anything in the KB.
Something similar is happening for me: it starts searching its knowledge and while that’s happening it’ll give a second response immediately below that doesn’t see the files.
Exactly the same here; that's why I'm in this thread.
I had a GPT for my main use case; it was able to give me precise information, exactly as written in my 4,000-page document, without any problems.
Today, it tells me that it cannot access the “knowledge” documents live, and it only gives me general information from the built-in knowledge it has.
Too bad. I hope it will be resolved.
Edit: I have written to the error and bug service to flag the problem, given that there seem to be many of us in this situation.
I have also encountered this exact same issue, where the KB search is immediately interrupted by an unsuccessful non-KB response. I tested 3 different custom GPTs to see if the KB search function worked, but all of them gave me the two error responses. I even explicitly included “capabilities: myfiles_browser, python” in the custom instructions, but it still failed to invoke the browser tool properly. After all that, GPT Builder still had the audacity to tell me I likely configured something wrong on my end lol
I’m experiencing the exact same thing you are. It says it can’t even read files uploaded to GPT-4 directly.
I tried turning on the code analyzer; it didn’t change anything.
With code interpreter for me:
When the analysis completed, it said:
I attempted to extract specific details about the Circle of Security from the provided documents but encountered a technical issue preventing me from accessing the content directly.
So, I'm about to test this; let’s see if it works. For context, I’m using the ChatGPT Builder and the GPT I’m building is private.
So I opened my other GPT, and it could read that file. I decided to make my knowledge base a little shorter. It was about 50 pages in a Google Doc before, but I cut out a ton of examples to get it down to 18 pages. Now it can read it.
Don’t know why this is happening though; it was reading my 50-page knowledge base just fine yesterday.
However, this is what has worked for me now, I’m about to test and will share my results.
Hopefully this helps.
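If cutting the file down by hand is too tedious, here's a rough Python sketch of the same idea: splitting one big plain-text knowledge file into smaller numbered files you can upload separately. The size limit and filenames are just assumptions for illustration, not anything OpenAI documents.

```python
from pathlib import Path


def split_knowledge_file(path: str, max_chars: int = 20_000) -> list[str]:
    """Split a large plain-text knowledge file into numbered chunk files,
    breaking on blank-line paragraph boundaries so no chunk exceeds
    max_chars (unless a single paragraph is itself longer than that)."""
    text = Path(path).read_text(encoding="utf-8")
    paragraphs = text.split("\n\n")

    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)

    # Write each chunk next to the original as <name>_partN.txt.
    stem = Path(path).with_suffix("")
    out_files = []
    for i, chunk in enumerate(chunks, 1):
        out = Path(f"{stem}_part{i}.txt")
        out.write_text(chunk, encoding="utf-8")
        out_files.append(out.name)
    return out_files
```

You can then attach the parts as separate knowledge files instead of one giant document, which seems to match what helped the poster above.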
Weirdly, it works if you tell it to make an identical copy of the knowledge base and read from the copy instead