Are Assistants the future, or a toy?

Are assistants the future?

Currently they have some serious limitations (cost, opaque handling of context, little control over threads in general). But the basic structure of building context management into the API is an appealing one.

Anyone know which it is: (1) Assistants will always lack control over threads and context or (2) they will evolve to provide a lot of built in control over context?

4 Likes

Obviously they will evolve since right now they are in beta.
I already find them very useful and use them internally every day for a lot of different tasks.

I feel the biggest missing features right now are around files (I would love to be able to upload JSON, CSV, etc.) and filenames. But the concept and implementation come off as pretty solid.
I think we are facing a reality where we will never have enough context tokens or output tokens for ‘all our dreams’, but having a persistent threaded API that handles function calling pretty well makes a lot of practical tasking applications easy to implement, and lets you focus on a) connecting the systems you want to get data from or to, and b) perfecting the prompts to execute the tasks.

So I am pretty sure that (2) is the answer: they will evolve.

7 Likes

They feel rushed with way too many oversights to be considered production-ready.

As @jlvanhulst has said, they will evolve. I imagine they are collecting a lot of information from their GPTs (conceptually the same as Assistants) to determine how to properly implement these features, and will hopefully soon include GPT-4 Vision, DALL·E, TTS, Whisper, and some sort of control over retrieval & token management.

I think the fact that there’s no documentation regarding any of these features is simply because they don’t know what to implement just yet.

100%. It’s going to get better, and they (hopefully) will ensure that they always play well with future tool changes/upgrades.

So far according to their documentation they “plan” to “explore” better control:

We plan to explore the ability for you to control the input / output token count beyond the model you select, as well as the ability to automatically generate summaries of the previous messages and pass that as context.

There have also been undocumented endpoints for deleting messages, so I’m hoping that in the near future we can have better control by deleting messages and adding new assistant messages.

4 Likes

Yeah, I really hope you guys (@ jlvanhulst, @RonaldGRuckus) are right. Assistants feel like the natural next step, as does an API where threads are built in (and over which there is considerable control for managing, pruning, and shaping how they are used to form context).

My bet is that the days of rolling your own management of conversation history are numbered (or at least limited to special cases).
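For context, "rolling your own" with the chat completions endpoint means keeping the full message list client-side and resending it on every call. A minimal sketch of what that looks like (the `client.chat.completions.create` call is the standard OpenAI Python SDK; the `Conversation` class and its method names are illustrative, not part of any library):

```python
class Conversation:
    """Client-side conversation history, as chat completions requires.

    The caller owns the message list and must resend it with every
    request; nothing is persisted server-side between calls.
    """

    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user(self, text: str) -> None:
        self.messages.append({"role": "user", "content": text})

    def add_assistant(self, text: str) -> None:
        self.messages.append({"role": "assistant", "content": text})


# Hypothetical usage (network call, shown for illustration only):
# from openai import OpenAI
# client = OpenAI()
# convo = Conversation("You are a helpful assistant.")
# convo.add_user("Hello!")
# reply = client.chat.completions.create(
#     model="gpt-4", messages=convo.messages
# )
# convo.add_assistant(reply.choices[0].message.content)
```

The Assistants API moves exactly this bookkeeping server-side into threads, which is why control over how that list is pruned and assembled matters so much.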

2 Likes

Check out my post Right Semantics for Assistants API

2 Likes

I think they are a great way to get validation of a product. I’ve used them to build something that started generating revenue. Now I’m planning to move on to my own RAG model, so I really appreciate the API for what it is.

The Assistants API gave me what I needed for my project/app; I like them.

2 Likes

What I’d really like to see is more management of how messages are used to build context (control, visibility, etc.). The basic idea of having context held at the backend, accessible through the API, makes good sense (for some scenarios) and is one I’d like to see elaborated.

1 Like

Agreed. I choose to believe that OpenAI is aware of this as well and is not consciously preventing us from having any sort of insight.

I have my fingers crossed for a big update soonish (with the GPT marketplace, maybe?) that does all that we’re asking for… hopefully a little bit more.

For now… :hamster:

Assistants are cool, in theory. But this constant polling to find out whether there is a response is an issue, mostly because we don’t know whether each poll costs anything; it’s also super inefficient in computing costs. Pretty much the whole issue in general is cost. Sure, the basic in/out costs are in the docs, but there’s no way to know what’s actually going on. Having to grab the entire message history just to get the new messages is also ridiculous. For now, the chat completions endpoint is way more functional and understandable.
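To make the polling complaint concrete, here is a sketch of the loop you currently end up writing. The helper itself is generic and illustrative; in practice `get_status` would wrap a real SDK call like `client.beta.threads.runs.retrieve(thread_id=..., run_id=...).status`, and the terminal state names match the Assistants run statuses documented at the time:

```python
import time


def poll_until_done(get_status, *, interval=0.5, max_interval=8.0,
                    timeout=120.0):
    """Poll `get_status` with exponential backoff until it returns a
    terminal run state, or raise TimeoutError.

    `get_status` is any zero-argument callable returning a status
    string; for Assistants it would wrap runs.retrieve(...).status.
    """
    terminal = {"completed", "failed", "cancelled", "expired"}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in terminal:
            return status
        time.sleep(interval)
        interval = min(interval * 2, max_interval)  # back off between polls
    raise TimeoutError("run did not reach a terminal state in time")
```

Exponential backoff at least bounds the number of requests, but it is still a workaround; the streaming and notification support quoted from the docs further down this thread is meant to remove the loop entirely.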

1 Like

They have indicated in the docs that they are planning to introduce better options:

During this beta, there are several known limitations we are looking to address in the coming weeks and months. We will publish a changelog on this page when we add support for additional functionality.

  • Support for streaming output (including Messages and Run Steps).
  • Support for notifications to share object status updates without the need for polling.
  • Support for DALL·E or Browsing as a tool.
  • Support for user message creation with images.

I’m not sure what DALL·E or Browsing means here. Time will tell.

1 Like

Nice! Can you share a link to where you found this? Is it in the blog or something?

Yeah, sorry about that

https://platform.openai.com/docs/assistants/how-it-works/limitations

@Orome You make a solid point about the current limitations of Assistants.

AI assistants definitely seem like they could be part of our future. While they have some issues now with cost and keeping up conversations, technologies tend to improve rapidly over time. Something like ChatGPT has come so far in just a short while.

As long as companies keep refining how assistants understand context and multiple discussions, I could see us being able to have really natural back-and-forths with them eventually. There’s a lot of potential there if the technical challenges get addressed.

Quite frankly, a lot of what can be achieved with Assistants can also be done with regular prompting or just custom instructions.

Where Assistants really shine is in the narrative and the developer experience. A few things they have succeeded in:

  • Anyone who knows how to prompt but does not know how to code can make their own “apps” in an “app store” now
  • Developers who are not AI-trained can use larger amounts of contextual data without needing to learn how to “compress” it to fit into the context window
  • Helping people understand that AI applications can be like apps on app stores, and encouraging experimentation

1 Like

@raymondyeh
I believe you are confusing Assistants with GPTs.

Totally. In all the dimensions. A user could voice call an Assistant, and discuss what they are seeing on a page, and have the assistant perform functions without having to code an insane amount of logic. It seems like this is the current race in the multi-modal LLM industry.

It reminds me of Firebase (Google). One could create their own authentication, live database, server for function calling, and low-cost storage, but why? They have done an awesome job putting it all together to get things going. Then, of course, detachment is still a possibility in the future when the money starts rolling in.

@RonaldGRuckus
I did not - Assistants are the equivalent of GPTs for devs.

If you notice, most use cases for Assistants could be achieved with existing chat completions already. However, the ‘concept’ of Assistants led to a mindset shift amongst developers: that they can add additional knowledge and custom instructions to make GPT-wrapper apps, something they could already do with chat completions anyway.

Assistants are not available in any app store so I’m not sure what you mean here.

I don’t think anybody needs to be AI-trained to understand the concepts behind truncation or summarizing. I’m hoping Assistants come out with both of these options so we can just specify a token limit.
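Truncation to a token limit really is the simple part. A minimal sketch of the client-side version people write today, to show what a built-in option would replace (the function name and the word-split token counter are illustrative; a real implementation would count tokens with something like tiktoken):

```python
def truncate_history(messages, max_tokens,
                     count_tokens=lambda m: len(m["content"].split())):
    """Keep the system message (if any) plus the most recent messages
    that fit under `max_tokens`.

    `count_tokens` defaults to a crude word-count stand-in; swap in a
    real tokenizer for production use.
    """
    # Always preserve a leading system message.
    head = messages[:1] if messages and messages[0]["role"] == "system" else []
    budget = max_tokens - sum(count_tokens(m) for m in head)
    kept = []
    # Walk backwards from the newest message, keeping what fits.
    for m in reversed(messages[len(head):]):
        cost = count_tokens(m)
        if cost > budget:
            break
        kept.append(m)
        budget -= cost
    return head + list(reversed(kept))
```

The summarization variant is the same shape, except the dropped prefix gets condensed into one synthetic message instead of discarded; that is essentially what the docs quote above says OpenAI plans to explore server-side.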

For sure. I’m thinking that their current form of retrieval is by no means what they intend to be considered usable. The more I have been aboard the OpenAI ship, the more I have learned that they release things, gather a lot of statistics, and then make decisions.

I don’t think developers have gone through any “mindset shifts” in the way you describe though. Personally, Assistants has completely expanded my horizon from simple text-based chat to now match the multi-modality that GPT offers.

I think people who can scrape together some Python and over-extend themselves get bopped (naturally) and wonder wtf went wrong. After getting bopped they can have respect for Assistants and either move back to GPTs (not a bad decision), or keep coming back and learn a new skill & maybe a passion.

I can agree with this for now. It’s important to remember that it’s in beta and more features are planned for its full release. If it had all the functionality that OpenAI offers, this would be the same argument as for not using a library or service such as Firebase.

It’s in the name: Chat Completion. If your agent is simply for text, then of course you would use Chat Completion. It’s important to be forward-looking for Assistants by what they intend to be.

1 Like

Bruh, if you don’t want to participate in the conversation, then don’t. There is no need for ChatGPT-generated responses here. Pushing AI responses here really is uncouth, and totally unnecessary.

2 Likes

Indeed, any service can be built from lower-level components, and any API can be created from lower-level APIs, just as we could all still be writing in assembly. The question is really about what the most effective and powerful abstractions are. It feels to me like Assistants point the way to some important ones, particularly managing conversation history and its use in forming context. Those seem like things that are best handled as part of the same service that performs inference, rather than by the caller.

5 Likes