Don’t forget to check the “Code Interpreter” checkbox.
If you don’t, your GPT will probably work, but I’m not sure if it’s really checking the knowledge files.
I solved most of my issues (I’m using CSV file format)
GPT can see the knowledge with code interpreter disabled.
I checked and you’re right.
My problem was that I was using CSV files, which seem to require Code Interpreter to be enabled.
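For what it's worth, with Code Interpreter enabled the model handles a CSV roughly like this (a sketch with a made-up file name; custom GPT uploads are mounted under /mnt/data):

```python
import pandas as pd

# Hypothetical knowledge file; uploads appear under /mnt/data
df = pd.read_csv('/mnt/data/knowledge.csv')

# Inspect the structure before answering questions from it
print(df.columns.tolist())
print(df.head())
```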
Foo-bar, davidthomasheider, OpenAI, et al.,
I am working with regulatory code, trying to find the best file format, preparation, and cleaning approach to improve the quality and speed of knowledge files.
Regulatory codes are usually difficult for humans to understand, and it's time-consuming to find related provisions. They are text-based sentence structures organized by chapter, section, and subsection, with references embedded in the sentences throughout the code pointing to other specific provisions.
As you can see, these intricacies make reading the code a complex web that's difficult for the GPT or API to understand.
I'm thinking maybe the code needs to be formatted into an .xlsx file with the section references in one column and the text/sentences in the next column… it just seems like too much prep, and OpenAI might just have to increase capabilities on their end??
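To make that concrete, here's a minimal sketch of the two-column layout I have in mind, using pandas (the section labels and text are invented):

```python
import pandas as pd

# Invented example rows: one provision per row, reference and text separated
rows = [
    ("Ch. 3 §3.1", "Structures shall comply with the requirements of §3.2."),
    ("Ch. 3 §3.2", "Load-bearing walls are subject to the limits in Ch. 5 §5.4."),
    ("Ch. 5 §5.4", "Maximum allowable loads are listed in Table 5-1."),
]

df = pd.DataFrame(rows, columns=["section", "text"])

# Writing .xlsx output requires the openpyxl package
df.to_excel("regulatory_code.xlsx", index=False)
```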
Any input on this matter would be greatly appreciated!!
Zakaria, my regulatory code is similar to academic papers, but maybe it's harder to follow without the code references… or do the code references make it harder?? IDK how the back end works, but I would like to know so we can get this working better.
Indeed, gaining more insight into the backend is essential. When I asked ChatGPT about its training and which formats are most effective, I learned that .xlsx files are generally preferred due to their structured nature. In contrast, PDF files can be cumbersome, especially when extracting information from tables, images, and graphs. You could try the .xlsx route, but like you said, it requires a tremendous amount of work, so it's counterproductive, imo.
I modified some PDFs to isolate only the essential information (but that was also too much prep), and this approach seemed to enhance the performance of custom GPT models. Also, the way prompts are configured plays a significant role. It's often beneficial to set them up so that the model first searches through its knowledge database, sometimes even referencing the title of a specific PDF.
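For example (the file name is made up), an instruction along the lines of: "Before answering, always search the attached knowledge files first; for structural questions, open 'Chapter_3_Structures.pdf' and quote the relevant section."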
Could you elaborate on this please? Are the articles hard coded into the spreadsheet or linked? If hard coded, what kind of file size are we looking at?
Markdown works for me, but only when saved as a .txt file
Yes, it always seems to fail, saying that it has encountered a problem accessing and reading the contents of .md files:
```python
# Attempting to read the .md file directly via Code Interpreter
with open('/mnt/data/MyFile.md', 'r') as file:
    MyFile_content = file.read()
MyFile_content  # display the contents in the notebook-style output
```
Yes. I had the same problem. Any explanation for this behavior?
No idea. Not too happy with the quality of retrieval anyway, so I’m working around it
I'm currently building a GPT for my newsletter. I'm feeding the GPT with .txt files (a conclusion from this topic) containing previous articles, along with the source that led to each result.
I have a lot of data, and I'm wondering whether it's better to split it up per previous article or to make longer documents. Any idea if this has any effect on the learning process of a GPT, guys?
If you have the ``` symbols or other unusual characters in your MD file, it throws an error when uploading the file. Try to sanitize your MD file first.
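A minimal sanitization sketch (assuming the culprits really are code fences and control characters; adjust as needed):

```python
import re

with open("MyFile.md", "r", encoding="utf-8") as f:
    text = f.read()

# Remove code-fence markers and strip control characters that can break uploads
text = text.replace("`" * 3, "")
text = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)

with open("MyFile_clean.md", "w", encoding="utf-8") as f:
    f.write(text)
```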
When you say you are putting these in .xlsx form in an excel spreadsheet… are you putting actual articles with headings and such? How are you laying this out/formatting it in a .xlsx file?
Is JSON a good option?
If so, could you please give some kind of best case structure of JSON file because I have no idea how to train GPTs more effectively.
Thanks in advance.
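For reference, a structure like this is what I had in mind (completely made up; I have no idea whether it's what retrieval actually prefers):

```python
import json

# Invented example: one flat, self-describing record per article
knowledge = [
    {
        "id": "article-001",
        "title": "How to format knowledge files",
        "tags": ["formatting", "custom-gpt"],
        "text": "Full body text of the first article...",
    },
    {
        "id": "article-002",
        "title": "CSV vs Markdown for retrieval",
        "tags": ["csv", "markdown"],
        "text": "Full body text of the second article...",
    },
]

with open("knowledge.json", "w", encoding="utf-8") as f:
    json.dump(knowledge, f, ensure_ascii=False, indent=2)
```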
I am exploring ways to handle this as well. My ideal scenario is that I could feed it technical documentation and it would be able to tell me where to find certain content. This is probably more suitable for other approaches involving some kind of indexing and search, but I'm interested in whether it's possible to get the best of both worlds for cases where you don't even know what to search for and are only able to describe your requirement.
I have noticed that for structured formats like JSON and HTM/HTML, it may choose to either “look at it” or parse it via python. If it goes with the former option it seemingly has to scan over the whole thing, and starts talking about adjusting the increments parameter for its scroll tool to handle timeouts and precision.
I can explicitly tell it to parse the data using python, but then it needs to know exactly how it’s structured in advance. If I can fit this into the instructions it can be fast and precise, and looking at the debug output it appears to print sections of the data to itself before giving its response.
However, with this structured approach it loses the ability to search and traverse the file to discover unknown aspects. I’ve messed a little with teaching it how to list out properties first and using these results to drill further, making a separate ToC file, etc, but at this point the instructions get far too contrived and verbose.
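For reference, the two-step drill-down I tried to teach it amounts to something like this (file name and keys are hypothetical):

```python
import json

with open("/mnt/data/docs.json", "r", encoding="utf-8") as f:
    data = json.load(f)

# Step 1: list top-level properties so the next step knows where to drill
print(list(data.keys()))

# Step 2: drill into one branch chosen from the listing above
section = data.get("installation", {})
print(json.dumps(section, indent=2)[:2000])  # cap output to avoid flooding the reply
```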
Ultimately I am still seeing the best results by pasting the relevant documentation in the actual conversation, ideally in markdown. Other formats are usually fine, but like others have mentioned markdown is a good middle-ground between plain text and a semantic structure.
As an aside, I’ve noticed that information fetched via actions or knowledge are persisted “in the background” for the conversation. This is different from when it uses the browsing tool where it forgets everything it saw and only has its own summary to go by after that point.
Hi all, after a ton of testing, I have found that taking a PDF, uploading it to ChatGPT, and telling it to convert the PDF to Markdown syntax saved in a .txt file format is what works best for me.
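If the model paraphrases instead of converting faithfully, an alternative is to extract the text yourself, e.g. with pypdf, and save it as .txt (a rough sketch; tables and complex layouts will need manual cleanup):

```python
from pypdf import PdfReader

reader = PdfReader("manual.pdf")  # hypothetical input file

# Concatenate the raw text of every page; extract_text() can return None
pages = [page.extract_text() or "" for page in reader.pages]

with open("manual.txt", "w", encoding="utf-8") as f:
    f.write("\n\n".join(pages))
```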
Hi,
How do you do that in a GPT? Do you upload the PDF, ask it to convert to TXT, then upload the txt file to your GPT?
I tried that but it gives me a “summarized” version of the PDF, which is unusable, of course.
L
I cleaned it up a bit:
To effectively format ‘knowledge’ documents for GPT models like ChatGPT, you can integrate the following comprehensive strategies:
- Clear and Concise Language
- Structured Format
- Contextual Information
- Avoid Ambiguity
- Data Accuracy
- Relevant Keywords
- Summary Sections
- Use of Examples
- Regular Updates
- Accessibility and Readability
For GPT models specifically, consider these additional formatting tips:
Thanks. CSV for text seems… ill-advised. I converted some DOCX files to Markdown semi-manually (Pandoc), and queries on the same text give better results than with the equivalent PDF file.
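For anyone who wants to reproduce the conversion, the Pandoc step is a one-liner (file names made up; shown here via Python to match the rest of the thread):

```python
import subprocess

# Convert DOCX to Markdown with Pandoc (pandoc must be installed and on PATH)
subprocess.run(["pandoc", "notes.docx", "-t", "markdown", "-o", "notes.md"], check=True)
```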
I agree that more documentation on this topic from the folks at OpenAI would be welcome. ChatGPT's answer doesn't look like a hallucination, but who knows. Maybe humans know better.
Thanks,
L