How can I use embeddings with ChatGPT (gpt-3.5-turbo)?

OK, another update. I think my example above was a little too simple, so here is another variation: I take a passage out of a book (retrieved via the embedding), put it in the “system” message, and then have the “user” ask a question, including asking where the information came from. It gave good responses in both cases, and it cited where the info came from, so I knew it was actually using the embedded passage.
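In case it helps anyone follow along, here is a rough sketch of what I mean. The chunk texts, the question, and the prompt wording are just placeholders, and it assumes the plain `openai` Python library (legacy interface) with `text-embedding-ada-002` for the embeddings; adapt to whatever client and embedding model you actually use.

```python
import numpy as np
import openai  # assumes the legacy openai-python interface; adjust for newer clients

# Hypothetical book chunks you have already split up; in practice you would
# embed these once and store the vectors somewhere.
chunks = [
    "Descartes resolved to doubt everything that could possibly be doubted...",
    "Sartre argued that existence precedes essence...",
]

def embed(texts):
    # text-embedding-ada-002 is the usual embedding model for this workflow
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [np.array(d["embedding"]) for d in resp["data"]]

question = "What did Descartes set out to doubt, and why?"

chunk_vecs = embed(chunks)
q_vec = embed([question])[0]

# Cosine similarity; ada-002 vectors are unit length, so a dot product is enough
best = max(range(len(chunks)), key=lambda i: float(q_vec @ chunk_vecs[i]))
context = chunks[best]

# Put the retrieved passage in "system" and ask the question in "user"
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {"role": "system",
         "content": "Answer using only the following passage, and say where "
                    "the information came from:\n\n" + context},
        {"role": "user", "content": question},
    ],
)
print(response["choices"][0]["message"]["content"])
```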

This seems like a more canonical approach (maybe); I'm wondering what others’ thoughts are. Here are the results using this approach in the Playground:

GPT-3.5 (on Descartes from the embedding):

GPT-4 (on Descartes from the embedding):

Both GPT-3.5 and GPT-4 give good results with this approach.

Here is another shot at the same data:

GPT-3.5 (on Sartre from the embedding):

GPT-4 (on Sartre from the embedding):

It looks like GPT-4 is much more aware of where the passage sits in the bigger picture, whereas GPT-3.5 gets lost and seems to regurgitate the text.

Finally, and I think I am done here, one thing that is important to all “embedders” out there is getting the AI to admit when it doesn’t know. GPT-3.5 has a very hard time with this, whereas GPT-4 seems more willing to follow the system instructions. This is already known (3.5 is bad at following instructions in “system”). So this could be a consideration for you, and you may want to move things into “user”/“assistant” messages for 3.5 until it gets patched; a sketch of that variation is below.
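A minimal sketch of that swap, again assuming the legacy `openai` Python interface; the reference text, the instruction wording, and the off-topic probe question are all just placeholders:

```python
import openai  # legacy interface again; adjust for newer client versions

context = "...the embedding-matched passage goes here..."

# gpt-3.5-turbo tends to under-weight "system", so pass the reference text and
# the "admit you don't know" rule through a user/assistant exchange instead
messages = [
    {"role": "user",
     "content": ("Here is some reference text:\n\n" + context +
                 "\n\nOnly answer from this text. If the answer is not in it, "
                 "say you don't know.")},
    {"role": "assistant",
     "content": "Understood. I will only answer from the reference text, and "
                "say so if the answer is not there."},
    # Off-topic probe to see whether the model sticks to the instruction
    {"role": "user", "content": "Who won the 1986 World Cup?"},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo", temperature=0, messages=messages)
print(response["choices"][0]["message"]["content"])
```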

GPT-3.5 (answers the off-topic question even though instructed not to):

GPT-4 (stays on-topic and says “I don’t know” when challenged):