OpenAI embedding token limit

Hi everyone, I'm building a RAG system and I'm currently converting my data to embeddings and ingesting them into a vector DB. However, I'm running into an issue with the embedding large model when using the API. After 145 minutes of converting the data to embeddings, I got an error saying that an index exceeded the token limit. The problem is that I wrote a small tool to identify any indexes/nodes that exceed the 8191-token limit, and after running it, it reported that no index had more than 8190 tokens. I know the tool works, because earlier I had a lot of issues with exceeding the token limit and it correctly identified every offending index. Now I don't know what's happening: I did include metadata in the count, but the tool says no index is over the limit, while the OpenAI API still stopped partway through because of one index. Please reply if you've run into token-limit issues or know anything about this. My current guess is that maybe the request to the API adds extra tokens, but I'm not sure, since I've done this many times and it never happened before.
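For reference, here's a minimal sketch of the kind of pre-flight check I mean, assuming each node is a plain (text, metadata) pair — the node structure, the `embedding_payload` concatenation, and the `count_tokens` helper are all assumptions, not my actual code. The whitespace-based `count_tokens` is just a stand-in; in practice you'd swap in `len(tiktoken.get_encoding("cl100k_base").encode(text))`, since `cl100k_base` is the encoding the embedding models use, and a mismatch between your counter's encoding and the API's is one possible source of the discrepancy. Another is counting the raw text but not the exact string the framework actually sends (metadata prepended, separators, etc.):

```python
TOKEN_LIMIT = 8191  # per-input limit for text-embedding-3-large


def count_tokens(text: str) -> int:
    # Naive whitespace stand-in. Replace with:
    #   len(tiktoken.get_encoding("cl100k_base").encode(text))
    # to count tokens the same way the OpenAI embedding models do.
    return len(text.split())


def embedding_payload(text: str, metadata: dict) -> str:
    # Assumption: metadata is flattened and prepended to the text,
    # mirroring how many RAG frameworks build the string they embed.
    # Check what your framework actually sends -- it may differ.
    meta = "\n".join(f"{k}: {v}" for k, v in metadata.items())
    return f"{meta}\n\n{text}" if meta else text


def find_oversized(nodes):
    """Return (index, token_count) for every node whose full
    embedded payload (metadata + text) exceeds TOKEN_LIMIT."""
    oversized = []
    for i, (text, metadata) in enumerate(nodes):
        n = count_tokens(embedding_payload(text, metadata))
        if n > TOKEN_LIMIT:
            oversized.append((i, n))
    return oversized
```

The key point is to run the check on `embedding_payload(...)`, the string that actually goes over the wire, rather than on the node text alone — that's where a few extra metadata tokens can push a node just past the limit.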