Conversational app that generates Conversational Apps, BUT

I’m working on a Conversational App Engine-based app to generate conversational apps built on the same engine, but I’m hitting the wall of the token limit.

As you know, GPT-4 has a larger maximum token limit, but I have not gotten access yet. I have tried several times since March 20th to get access to GPT-4, with no luck. I also tried to reach OpenAI support and employees to help get me access, with no luck.

Is there any way to increase the token limit for the gpt-35-turbo model, or is there any available model that supports more tokens and is not in limited beta like GPT-4?

The aim of this app (and the open-source Conversational App Engine) is to make GPT app development more accessible to more innovators.

If you have any advice on getting rid of this obstacle, it would be appreciated.

1 Like

Can you implement a “moving window” in the conversation? i.e. omit or truncate past messages when you approach the token limits? Even though the chat.openai.com conversations can go on endlessly, the earlier content in the chat is eventually “forgotten.”
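Here is a minimal sketch of that idea, assuming OpenAI-style `{ role, content }` messages and a rough 4-characters-per-token estimate (the helper names are mine, not part of any library):

```js
// Rough sliding-window trimmer for chat messages (illustrative only).
// Assumes messages look like [{ role: 'system' | 'user' | 'assistant', content: '...' }].
// Token counts are approximated as content.length / 4; a real tokenizer is more accurate.

function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function trimToWindow(messages, maxPromptTokens) {
  // Always keep the setup/system messages at the start of the conversation.
  const setup = messages.filter(m => m.role === 'system');
  const rest = messages.filter(m => m.role !== 'system');

  let budget = maxPromptTokens - setup.reduce((sum, m) => sum + estimateTokens(m.content), 0);

  // Walk backwards from the most recent message, keeping as many as still fit.
  const kept = [];
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i].content);
    if (cost > budget) break; // older messages get "forgotten"
    kept.unshift(rest[i]);
    budget -= cost;
  }
  return [...setup, ...kept];
}
```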

1 Like

Thanks for your reply.
The issue is with the first message sent to the API (along with the setup messages). The prompt is complex, at about 3K tokens.

1 Like

It sounds like you’re a little out of your depth here, because from what you wrote later, it becomes clear that whatever it is you’re trying to accomplish, this isn’t the right way to go about it.

I imagine most (if not all) of the logic should be moved away from the LLM.

3 Likes

Thanks @anon22939549
I don’t think I got your idea.

The Conversational App Engine allows developers to prototype GPT-based apps as a JS class, just by implementing predefined methods that 1) define the initial prompts of the conversation (other than the user input), and 2) define the code that processes the API response and generates the presentation view of the response. The app does not have to deal with chat management or with communicating with the chat API.
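To illustrate the shape (the method names below are hypothetical, not the engine’s actual API), such a class might look roughly like this:

```js
// Hypothetical sketch of an app class for a conversational-app engine.
// The real Conversational App Engine's method names and signatures may differ.

class TodoListApp {
  // 1) Define the initial prompts of the conversation (other than the user input).
  getInitialMessages() {
    return [
      {
        role: 'system',
        content: 'You are a to-do list assistant. Reply only with a JSON array of task strings.'
      }
    ];
  }

  // 2) Process the API response and generate the presentation view.
  renderResponse(responseText) {
    const tasks = JSON.parse(responseText);
    const items = tasks.map(task => `<li>${task}</li>`).join('');
    return `<ul>${items}</ul>`;
  }
}

module.exports = TodoListApp;
```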

This is a simple thing to do for developers with knowledge of prompt engineering, and the project has step-by-step instructions for it.
The result from one example app included in the project looks like this:

The idea of the Conversational App Creator app is to allow non-developers to implement their ideas by utilizing an LLM to ‘understand’ the idea of the app and generate the needed JS class.

The initial prompt of the app has detailed instructions about generating the initial prompts and the response-processing code, in addition to examples of app classes.
Considering this, plus the expected response tokens, 4K tokens will not be enough to experiment with different prompts or to fit such a need.
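As a back-of-the-envelope check (the numbers below are illustrative assumptions, not the app’s exact counts):

```js
// Rough prompt-budget arithmetic for a 4,096-token context window.
const instructionTokens = 3000;  // detailed instructions plus example app classes
const userIdeaTokens = 300;      // the non-developer's description of their app idea
const responseTokens = 1500;     // the generated JS class the model must return

const needed = instructionTokens + userIdeaTokens + responseTokens; // 4800
console.log(`~${needed} tokens needed vs. a 4,096-token window`);   // already over budget
```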

I thought about generating the parts of the app separately, but this could lead to losing the context.

I hope this gives more context about the challenge that I’ll win :grin:.

1 Like

Google “GPT prompt management.” You’re going to need to get creative or pay $$$$ to use 4K tokens every time.

Thanks @martinrobson ,

I agree. I’m working in this direction and building knowledge of advanced prompting techniques; mainly I’m reading the Learn Prompting course, and I think I’ll also benefit from the new deeplearning.ai courses that I’ve started to watch.

But I think advanced models like GPT-4 will need fewer prompt tokens to achieve the same results as less advanced models.

I’m planning to contribute this to the open-source community so it can be used to build consumer-ready app generators on different platforms. That way, I won’t pay a lot, since the cost of innovating new apps will be distributed across many providers.

Thanks

Even if it does, you need to think about breaking this into many inference steps. Have you considered LangChain, for example?
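A hedged sketch of that kind of decomposition, calling the Chat Completions endpoint directly (the step prompts are made up; LangChain and similar frameworks wrap this pattern). It assumes Node 18+ for the global `fetch` and an `OPENAI_API_KEY` environment variable:

```js
// Break app generation into sequential inference steps, passing a compact
// spec forward so each step stays well under the context limit.

async function chat(messages) {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify({ model: 'gpt-3.5-turbo', messages })
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

async function generateApp(idea) {
  // Step 1: condense the user's idea into a short spec.
  const spec = await chat([
    { role: 'system', content: 'Summarize this app idea as a short spec: inputs, outputs, view.' },
    { role: 'user', content: idea }
  ]);

  // Step 2: generate the initial prompts from the spec alone.
  const prompts = await chat([
    { role: 'system', content: 'Write the system prompts for the app described by this spec.' },
    { role: 'user', content: spec }
  ]);

  // Step 3: generate the response-processing code from the spec alone.
  const viewCode = await chat([
    { role: 'system', content: 'Write a JS function that renders this app\'s API response as HTML.' },
    { role: 'user', content: spec }
  ]);

  return { spec, prompts, viewCode };
}
```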

1 Like

Thanks @bill.french ,

I totally agree. I’m trying to gain the knowledge needed to make the right decisions about the design required to support different prompting techniques in the Conversational App Engine itself.
I have not reached LangChain yet; it is part of the courses I’m learning from, mentioned in my previous reply.

I’ll consider pushing the implementation of the Conversational App Creator app when it reaches decent reliability, and I’ll accept contributions to enhance it.

Thanks

1 Like

@bocchesegiacomo01, thanks for helping me test the Conversational App Creator with GPT-4; it is sincerely appreciated.

We tested the Conversational App Creator app with GPT-4 and, as expected, we got better results, from both a token-count and an accuracy perspective.

Here is an example; the screenshots are from the Conversational App Creator app and from the generated app, called AWS SAM Generator:

First attempt:

Tested the app (I did not change a single character of the generated code):

Requested updates on the app:

Tested the updated app (again, I did not change a single character of the generated code):

I requested an update to the app to render the graph of the resources:

I think this demonstrates the concept.

I will try to push this app to the Conversational App Engine repo ASAP.

And I will work on enhancing this app to reduce the need for any manual changes to the generated code, so it can be used by non-developer innovators.

I will consider your advice regarding enhancing the prompting to reduce token usage, @dliden, @martinrobson, @bill.french.

Thanks

1 Like

Very cool! One day I would like to try it.

Thanks @bocchesegiacomo01

Dear all,

I pushed the new Conversational App Creator app to the Conversational App Engine project.
I’ll keep improving it, considering your advice, @dliden, @martinrobson, @bill.french.

While testing, I created a Project Planner app. I did not modify a single character of the code; it is all GPT-made :slight_smile:


The conversation that led to generating the Project Planner app:

Wishing you happy GPT innovation :wink:

Thanks a lot

2 Likes

Google Assistant notified me about a video discussing “LLMs as Tool Makers,” a paper that was published recently (actually, it was published on the same day I started this topic :slight_smile:).

It talks about the same concept of letting an LLM create tools that can be run on an LLM.
It shows that GPT-4 can be used as the tool maker.

A quick update on this topic: I started to use the gpt-3.5-turbo-16k-0613 model with this app, and it showed good performance at a lower cost.
I took further steps by generating a sample user input, testing the generated class at runtime, and displaying all of that to the user. I also moved the generated code to a separate tab.
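For anyone curious how such a runtime check can work in Node, here is a hedged sketch (the generated-class shape and method name are assumptions, not the app’s actual implementation):

```js
// Load a GPT-generated class from its source text and smoke-test it at runtime.
// Assumes the generated source is CommonJS and ends with `module.exports = <Class>`.

function loadGeneratedClass(sourceCode) {
  const sandbox = { exports: {} };
  // Evaluate the generated source in a function scope that provides module/exports/require.
  const factory = new Function('module', 'exports', 'require', sourceCode);
  factory(sandbox, sandbox.exports, require);
  return sandbox.exports;
}

function smokeTest(GeneratedApp, sampleResponse) {
  const app = new GeneratedApp();
  try {
    // Hypothetical method name: render the sample API response as a view.
    const html = app.renderResponse(sampleResponse);
    return { ok: true, html };
  } catch (err) {
    return { ok: false, error: err.message };
  }
}
```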

Currently I’m working on enhancing the accuracy of handling modification requests from the user. Once I’m done, I’ll publish this app.

It will not be too long until we move to requirements-based software development with the help of LLMs like GPT.

1 Like