Uploading Google Docs (directly from Google Docs/Drive): silently fails. ChatGPT falls back on memory, and that memory is fragmented. Google Docs are not parsed properly and get quoted and summarized inaccurately (hallucinations). Analysis via the python tool also fails, because of silent failures in backend features such as .msearch, indexing, and backend parsing.
Local file uploads: manual skims and text extraction (when prompted properly) work, but the backend tools still fail to read and accurately quote the document. Backend parsing still fails (it has NOT been functioning since late May!).
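For reference, the kind of manual extraction that still works when you prompt for it looks roughly like this. This is only a sketch, assuming a .docx upload and the python-docx library; the file path is a hypothetical placeholder:

```python
# Minimal sketch of manual text extraction in the python tool.
# Assumes a .docx upload and the python-docx library; the path is hypothetical.
from docx import Document

doc = Document("/mnt/data/my_upload.docx")  # hypothetical uploaded file
text = "\n".join(p.text for p in doc.paragraphs)
print(text[:2000])  # skim the first couple thousand characters
```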
All the issues around backend document parsing/syncing/text extraction that happen without manually invoking the python tool would be nice to see cleanly fixed. If nothing else works, what I might suggest is a streamlined OAuth integration that links directly to the documents we want read through Google Docs (not like the one we have now). We would select the documents we want read and skimmed in a session, without an actual upload, if that is doable (see the sketch below). That way it bypasses the token consumption of uploads and gives ChatGPT a direct pathway to the document, with no "upload" required for it to see the file and extract its text. Uploading documents, small or large, consumes a lot of tokens per session, even though they remain usable well within the current token limits. Something along these lines would, I think, make document hallucinations and silent tool failures less of an issue.
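To make the idea concrete, here is a rough sketch of what that direct pathway might look like on the client side. It is purely illustrative: it uses the real Google Drive v3 export endpoint via google-api-python-client, but token.json and DOC_ID are placeholders, and nothing here reflects how OpenAI's backend actually works:

```python
# Illustrative sketch: fetching a Google Doc's text directly via OAuth,
# instead of uploading a copy. Uses the real Google Drive v3 API;
# token.json and DOC_ID are placeholders.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]

# Assumes the user already granted read-only access via an OAuth consent flow.
creds = Credentials.from_authorized_user_file("token.json", scopes=SCOPES)
drive = build("drive", "v3", credentials=creds)

# Export the selected Google Doc as plain text: no upload, no copy,
# just the text the model needs in order to quote accurately.
data = drive.files().export(fileId="DOC_ID", mimeType="text/plain").execute()
print(data.decode("utf-8")[:2000])
```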
I don’t know, it is just something I have been thinking about; it has been an ongoing issue since May and seems like it could be corrected/improved.