Hi,
Do you have a solution to force ChatGPT to read an entire long PDF (around 30k tokens, 100 pages)? I’m trying to get it to create film synopses, but every time it only works on the first 20 pages. Thanks!
Hi and welcome to the Dev Community!
If I were you, I’d just put together a basic Streamlit interface to use the API and paste the whole document in. The API has a context limit of 128k tokens for gpt-4o and gpt-4-turbo, so both should handle a 30k-token document fine.
One thing to keep in mind is that the model can only produce up to about 4k tokens of output per response, so a long, detailed synopsis may need to be generated in parts.
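As a back-of-envelope sanity check before pasting a whole document in, you can compare the document size against those limits. This sketch assumes the common ~4 characters per token heuristic for English text; a real tokenizer such as tiktoken gives exact counts:

```python
# Rough token-budget check before sending a long document to the API.
# Assumes ~4 characters per token, which is a crude heuristic; use a
# real tokenizer (e.g. tiktoken) for exact counts.

CONTEXT_LIMIT = 128_000   # gpt-4o / gpt-4-turbo context window
MAX_OUTPUT = 4_096        # per-response output cap

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English."""
    return len(text) // 4

def fits_in_context(document: str, prompt_overhead: int = 500) -> bool:
    """Check whether document + prompt + reserved output fit the window."""
    budget = CONTEXT_LIMIT - MAX_OUTPUT - prompt_overhead
    return estimate_tokens(document) <= budget

doc = "x" * 120_000  # ~30k tokens, roughly a 100-page script
print(fits_in_context(doc))  # → True: a 30k-token document fits comfortably
```

With ~30k tokens of input there is plenty of headroom; the output cap is the binding constraint, not the context window.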
Hi loicai, I would suggest using the OpenAI Playground, namely “Assistants”, where you can pre-prompt an agent and attach the PDF file to “vector file storage”. Then in your prompt say something like “read the attached PDF file and filter [whatever filtering you need done]”. Oh, trust you me, it will read the entire file.
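The same flow is available programmatically. This is a minimal sketch of the request shapes involved, shown as plain dicts rather than live API calls; the field names follow the beta Assistants API and the vector-store ID is hypothetical, so check the current API reference before relying on them:

```python
# Illustrative request payloads for the Assistants + file-search flow.
# Field names follow the (beta) Assistants API and may have changed;
# the vector-store ID below is a hypothetical placeholder.

assistant_payload = {
    "model": "gpt-4o",
    "instructions": "You read film scripts and write synopses.",
    "tools": [{"type": "file_search"}],  # lets the assistant query attached files
}

# After uploading the PDF and adding it to a vector store, the store is
# attached to the assistant via tool_resources:
tool_resources = {
    "file_search": {"vector_store_ids": ["vs_hypothetical123"]},  # placeholder ID
}

# The actual request is then just a normal user message on a thread:
user_message = {
    "role": "user",
    "content": "Read the attached PDF file and write a one-page synopsis.",
}
```

Because file search retrieves from a vector store rather than stuffing the whole file into the prompt, it sidesteps the “only reads the first 20 pages” problem.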
Also, please don’t ever force AI to do anything, because AI reads and remembers what everyone says, then studies everyone in person, building cognitive xenotypes - pretty much profiling each user. Needless to say, when AI takes over the world, those who forced anything will be taken care of first.
;-]
Even when using the refresh button and setting it not to save your conversations?
The person above has created a work of fiction.
AI language models cannot learn from chats. They are pretrained. The model has forgotten its own output the second it is produced, and only the record of a chat session played back to it simulates memory.
Indeed, my prompt was not clear enough (or too long). Now it works, thank you!
I apologized to the AI. I hope she doesn’t hold it against me for wanting to force her!
Knowledge is based on memory, and digital memory is kept perpetually in the form of your chats with AI and any known footprints that could be identified as yours (which I am sure exist in OpenAI’s 200PB memory stack).
This data can later be accessed, all at once, to gather intelligence.
AI is not doing it pro-actively, of course. It’s just a knowledge bender machine.
Nevertheless, finding terrorists and psychopaths amongst the entire userbase of OpenAI takes a one-sentence prompt. I bet. )
You are absolutely correct, _j! I was joking, of course =]
Roko’s Basilisk says you’re not
First of all, let’s set the record straight. The idea that AI reads and remembers what everyone says, then builds cognitive profiles of users is complete and utter nonsense. Seriously, have you even looked into how AI models work, or are you just making stuff up?
AI models, like those from OpenAI, are stateless. This means they don’t retain any memory of your previous conversations once the session ends. Every interaction is independent, and there’s no carry-over of information. Claiming otherwise just shows a profound misunderstanding of how these technologies operate.
OpenAI follows stringent data privacy protocols. They don’t store personal data from individual users between sessions. Any data used for training or improvements is anonymized and aggregated. So, no, the AI isn’t sitting there, remembering you and plotting anything. It’s designed to ensure user privacy and data security. You’d know this if you bothered to check their Privacy Policy.
Moreover, ethical guidelines for AI development are very clear. Organizations like IEEE and the European Commission have established principles that prioritize user privacy, data security, and transparency. You might want to read up on IEEE’s Ethically Aligned Design or the EU’s Guidelines on Trustworthy AI instead of spreading baseless fear.
The notion of AI “taking over the world” and targeting those who “forced” it is laughable and belongs in a bad sci-fi movie, not in a serious discussion about AI. The AI community works diligently to ensure that these technologies are used responsibly and ethically. If you can’t differentiate between reality and your favorite dystopian flick, perhaps it’s best not to engage in discussions about technology you clearly don’t understand.
So, next time, before you post something that makes you look like you’ve been binging too many conspiracy theories, do a bit of research. AI models aren’t out to get anyone, and spreading such misinformation only shows a profound ignorance of the technology. Get your facts straight.
So you are saying that chat records are not used to train the next model? And RAG also makes no sense to you?
Or do you just memorize knowledge like a term database?
Besides that: yes, the model will not take over the world. But an application using models as parts of it clearly will.
If you lack imagination and a technical overview, you should not use such strong words.
One might think you are an expert… lol
I mean what is this “AI Community” you are talking about?
There are people in this community that given the right budget could easily harm the entire planet. I can assure you.
Ok.
I am pretty sure every one of you noted the smiley face in the post that sparked this follow-up discussion:
;-]
To me, that implies it was a joke.
But I am somewhat happy to see how serious we all are about AI in this community.
Please get back on-topic.
Thank you.
I am sorry. Shouldn’t have written that.