Is there a difference between embedding text and tiktoken tokens?

Is there a difference between embedding text and embedding tokens (res1 vs res2)? The embedding result is not deterministic, so even the same code run twice will produce slightly different vectors. Do these two requests produce similar embedding vectors?

import openai
import tiktoken

EMBEDDINGS_MODEL = "text-embedding-ada-002"  # assumed; cl100k_base is the ada-002 tokenizer

# Request 1: embed the raw text string
res1 = openai.Embedding.create(input=['Hi! How are you?'], model=EMBEDDINGS_MODEL)
res1 = res1['data'][0]['embedding']

# Request 2: embed the same text as a list of token IDs from tiktoken
encoding = tiktoken.get_encoding("cl100k_base")
res = encoding.encode('Hi! How are you?')
res2 = openai.Embedding.create(input=[res], model=EMBEDDINGS_MODEL)
res2 = res2['data'][0]['embedding']
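
One way to check empirically is to compare the two vectors directly. A minimal sketch, assuming numpy is installed and this runs in the same session as the snippet above:

import numpy as np

# Cosine similarity between the two embeddings; a value very close to 1.0
# means the text input and the token-ID input were embedded near-identically
# (small differences can come from non-deterministic inference on the API side).
v1, v2 = np.array(res1), np.array(res2)
print(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))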