Can I match a GPT with an assistant?

Hi. I’ve created a GPT that follows my instructions, reads an uploaded file, and gives me responses I’m happy with. For context, I’ve told it to use ONLY the file for its responses.

I want to make a “matching” assistant within the API. I’ve tried to match the settings, the instructions, and the prompt as closely as possible, but the responses are completely different.

I’ve also tried using the Chat API, and the response is different still!

Does anyone have any ideas? Otherwise I can’t replicate the functionality of my GPT for users outside of ChatGPT+.

Also… is the “chat” API now deprecated? Are Assistants the new version of “chat”?

I would love this feature as well. It reduces redundancy and maintains a single source of truth if I’m supporting both an Assistant and a GPT.

I’ve seen that a lot. Considering that GPTs have Vision, DALL-E, and Voice while Assistants don’t, I’m thinking they are currently either different versions, or possibly forked. Historically, API users get the short end of the stick and have to wait longer for features.

No.

Thanks for your response, much appreciated! So if Assistants aren’t the “new” version of Chat, that raises the question… what’s the difference?

Assistants are the glue that connects all of OpenAI’s functionality together: Chat, Vision (soon), Voice (soon), DALL-E (soon), RAG, and Code Interpreter, without you having to hard-code it all yourself.

Chat is a single component.

They take away a lot of the heavy lifting that was required before.
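To make that concrete, here’s a rough side-by-side using the Python SDK (v1.x). This is a sketch, not production code: the model names are illustrative and the assistant id is a placeholder.

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Chat Completions: a single stateless call; history and tools are your problem
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarise chapter one."}],
)
print(chat.choices[0].message.content)

# Assistants: the platform holds the thread state and dispatches tools for you
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Summarise chapter one."
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id="asst_...")
while run.status in ("queued", "in_progress"):
    time.sleep(1)  # poll until the run settles
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)  # newest message comes first
```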

For my use case I use Assistants for their simple RAG as a public-facing chatbot (an assistant that can fully interact with my application). It actually has a function that calls the Completions endpoint to modify a string.
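Roughly, that pattern looks like this (the rewrite_string function and its schema are illustrative, not my actual code): the assistant declares a function tool, and when a run stops with requires_action the app makes the Completions call and hands the result back.

```python
import json
from openai import OpenAI

client = OpenAI()

# Declared on the assistant at creation time, alongside the retrieval tool
rewrite_tool = {
    "type": "function",
    "function": {
        "name": "rewrite_string",  # hypothetical name
        "description": "Rewrite a string in a given style",
        "parameters": {
            "type": "object",
            "properties": {
                "text": {"type": "string"},
                "style": {"type": "string"},
            },
            "required": ["text", "style"],
        },
    },
}

def handle_required_action(thread_id: str, run):
    """When the run pauses for tool output, do the work with a Completions call."""
    outputs = []
    for call in run.required_action.submit_tool_outputs.tool_calls:
        args = json.loads(call.function.arguments)
        completion = client.completions.create(
            model="gpt-3.5-turbo-instruct",
            prompt=f"Rewrite this in a {args['style']} style:\n{args['text']}",
            max_tokens=200,
        )
        outputs.append({
            "tool_call_id": call.id,
            "output": completion.choices[0].text.strip(),
        })
    # Hand the results back so the run can resume
    return client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread_id, run_id=run.id, tool_outputs=outputs
    )
```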

I’d argue that Assistants are not production-ready yet, though. Compared to Chat Completions / Completions they are hella slow and hella expensive. Plus the RAG is brutally restrictive. No control and no insights.

Interesting. When you say “insights”, what do you mean? As in, which part of the retrieval doc are you using?

By insights I mean we can’t see how the document is being chunked, how the query is being modified, and what is being passed.

I, among others, have noticed that our token count gets blown up with retrieval, which leads me to believe they are not only performing a semantic search but also stuffing the prompt with results until the model has enough to respond with.

This is the issue. There is no documentation about how their retrieval system works. Unless I’m mistaken (they change their documentation and don’t bother with changelogs), they don’t even suggest the best ways to prepare/pre-process our documents for retrieval.
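One crude way to sanity-check the blow-up (a sketch, assuming you know exactly what you sent): count your own tokens with tiktoken and compare them against the prompt tokens the run is billed for in the usage dashboard.

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

instructions = "ONLY use the attached file to answer."  # whatever you configured
user_message = "Summarise chapter one."

sent = len(enc.encode(instructions)) + len(enc.encode(user_message))
print(f"Tokens I actually wrote: {sent}")
# Compare this with the billed prompt tokens for the run. On retrieval-heavy
# runs the billed figure can be many times larger; the gap is whatever the
# retrieval step stuffed into the context.
```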

Hmm. Not quite sure how to proceed then, if I can’t reliably verify the outputs!

Why are all the settings different? I.e. there’s no temperature, top_p, etc. with Assistants OR GPTs, so it’s all kinda… different!
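For what it’s worth, Chat Completions does expose those knobs, while Assistants and GPTs currently don’t. A minimal sketch of pinning them down (seed is best-effort reproducibility, not a guarantee):

```python
from openai import OpenAI

client = OpenAI()

# Chat Completions lets you pin the sampling behaviour explicitly
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarise chapter one."}],
    temperature=0.2,  # lower = less random
    top_p=1.0,
    seed=42,          # best-effort determinism across calls
)
print(response.choices[0].message.content)

# There is no equivalent temperature/top_p parameter on
# client.beta.assistants.create, and GPTs expose no such settings either.
```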

we can’t see how the document is being chunked, how the query is being modified, and what is being passed.

This is such a big deal.

I would also like control of the conversation history. My personal system works night-and-day better for me. Their system often creates a hard “break” where the conversation falls apart. Their approach is really opaque to me; I haven’t been able to find any information on how they do it.
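For comparison, this is all it takes to keep the history under your own control with Chat Completions. The trimming rule here is arbitrary and purely illustrative; the point is that you decide where the conversation gets cut, not the platform.

```python
from openai import OpenAI

client = OpenAI()

history = [{"role": "system", "content": "You are a helpful assistant."}]
MAX_MESSAGES = 20  # arbitrary cap; swap in token-based trimming if you prefer

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    # Drop the oldest turns (keeping the system message) instead of letting
    # the platform decide where the conversation "breaks"
    while len(history) > MAX_MESSAGES:
        del history[1]
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```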

I mean, I think it’s just important to get some consistency, so you can reliably say “hey, that GPT I made is also available here”.

But there seem to be three methods of creating “bots”, and they’re all a little different, with no explanation of what does what.

@RonaldGRuckus all I’m doing is a real simple book GPT. We’ve written a book, and I want to GPT-ify it.

So I’ve built a functional GPT with every setting off except retrieval, then attached the book as a file, and we’re good.

Are you saying it’s best to use an assistant to recreate that, as it’s simple RAG?

If you only want to use GPT to transform text, then Completions / Chat Completions is your best bet.

If you need some sort of retrieval system, then it’s really up to you. I definitely think it’s worth trying Assistants out first; they are very easy to set up (see the sketch below), and you can try them out in the Playground.
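For a book-with-retrieval setup like yours, the whole thing is roughly this (a sketch against the current beta; the file name and instructions are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the book once; the "assistants" purpose makes it usable for retrieval
book = client.files.create(file=open("our_book.pdf", "rb"), purpose="assistants")

# An assistant with only retrieval enabled, mirroring the GPT's setup
assistant = client.beta.assistants.create(
    name="Book assistant",
    instructions=(
        "Answer ONLY from the attached book. "
        "If the answer is not in the book, say so."
    ),
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[book.id],
)
print(assistant.id)  # use this id in the Playground or in your app
```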

It’s also all in its infancy. OpenAI has been moving incredibly fast and steamrolling anybody who tries to improve on or beat their systems. So my money is on Assistants, if that means anything.

I’m excited to see what can be done once they scale up their systems and raise the rate limits. A price decrease would be nice too, but let’s go one step at a time.

The possibility of using the Vision API for the thousands of images my company generates per day gets me all tingly inside.

At the risk of re-railing this thread back on topic: the current answer is “no, there’s no way to match a GPT to an Assistant or the Chat API so that the responses are similar”.

That seems a shame, but I guess it’s early days.

I will go with an assistant for now. Thanks @RonaldGRuckus, I appreciate your advice!

You could call an assistant from a GPT… but not the other way around.
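That works because a GPT can call Actions, i.e. your own HTTP endpoints, so you can put a small service in front of an assistant. A rough sketch of that middle layer (Flask; the /ask route and response shape are invented for illustration, and you’d still need to host it and describe it in the GPT’s OpenAPI schema):

```python
import time
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()
ASSISTANT_ID = "asst_..."  # the assistant the GPT's Action should reach

@app.post("/ask")
def ask():
    question = request.json["question"]
    # One fresh thread per request; persist thread ids if you want memory
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=question
    )
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=ASSISTANT_ID
    )
    while run.status in ("queued", "in_progress"):
        time.sleep(1)  # poll until the run settles
        run = client.beta.threads.runs.retrieve(
            thread_id=thread.id, run_id=run.id
        )
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return jsonify({"answer": messages.data[0].content[0].text.value})
```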
