GPT Builder Or Programming Language?

While I’m still figuring out what RAG is, I’ve been able to provide a clearer description of my project here: Phas – Forest of Thought. Please visit this link for detailed information on the project, including attribution.

When OpenAI came out with their GPT Builder, I was excited that my own interface was somewhere on the right track…

Since then I have been working on the next version, building it as a coding language (see http://agi.directory/)… Is this ‘RAG’? I don’t know what to search for to find other similar projects. Can anyone throw some search terms my way, or give a little summary? I am a lone coder who doesn’t get out much.

1 Like

So, RAG (retrieval-augmented generation) is when you retrieve a document, feed it into the query to the language model, and the model generates a response based on the combined input.
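A minimal sketch of that loop, with a toy keyword matcher standing in for a real retriever (a production system would use embeddings and a vector store) and the final model call left out. All names and documents here are invented for illustration:

```python
# Toy "document store" standing in for a real retrieval index.
DOCS = {
    "planting": "Carrots and garlic can be sown outdoors in October.",
    "pruning": "Prune apple trees while they are dormant in winter.",
}

def retrieve(query: str) -> str:
    """Naive keyword retrieval: return the document sharing the most words with the query."""
    words = set(query.lower().split())
    return max(DOCS.values(), key=lambda d: len(words & set(d.lower().split())))

def build_prompt(query: str) -> str:
    """Prepend the retrieved context to the user's question; this combined string goes to the model."""
    context = retrieve(query)
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What can I sow outdoors in October?")
```

The point is only the shape: retrieve, combine, then generate from the combined prompt.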

Otherwise, are you asking which one to use/learn? What is your project supposed to be? I’m guessing it’s your website?

Without a doubt, if you’re asking whether to choose the GPT Builder or a programming language, pick the programming language. JavaScript and Python are great places to start for beginners.

2 Likes

My project is further understanding the ‘Augmentation of General Intelligence’ and how to best interface that.

One would imagine there are ways more acceptable to most people than connecting a chip to your cortex: interfaces that are visual and multi-dimensional, using our existing senses and ways of understanding.

I have coded in a range of languages over 25 years from MASM to PHP.

I have no way to monetise my work so I have decided to just share it through my website.

I guess I am maybe not even asking, just trying to push forward. I am frustrated by the one-dimensional model that is ChatGPT’s current interface.

What I am writing is an interface structured as a nested coding language, e.g.

if (GPTRequest) // Boolean
{
    Further Nested Threads
}

for (GPTRequest) // List
{
    Further Nested Threads
}

This allows you not only to have a conversation but to go back and remould that conversation, to follow multiple possible threads and see outcomes.

“The month is January/February/March… What can I grow?”

These are multiple threads, and they might each output a document per month, for example after five further levels of questions… but then you think: I forgot to mention I have no polytunnel :smile:
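The branching-and-remoulding idea above can be sketched as a simple conversation tree, where each list item forks its own thread and a branch can later be extended without restarting the others. The `Thread` class and its methods are my own illustrative names, not part of any existing tool:

```python
from dataclasses import dataclass, field

@dataclass
class Thread:
    """One node in a conversation tree; children are follow-up branches."""
    prompt: str
    children: list = field(default_factory=list)

    def fork(self, prompt: str) -> "Thread":
        """Spawn a child branch that inherits this node as context."""
        child = Thread(prompt)
        self.children.append(child)
        return child

root = Thread("What can I grow?")
for month in ["January", "February", "March"]:
    branch = root.fork(f"The month is {month}. What can I grow?")
    # Remould a branch later by forking deeper, rather than re-asking everything:
    branch.fork("I forgot to mention: I have no polytunnel.")
```

Each leaf carries the path from the root as its context, so the polytunnel correction only touches the branches it applies to.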

Yes, you can present this easily as code, but what of those who can’t code? Can they not ask questions, or understand your process? Even as a coder, my brain is still limited… Most people don’t code without IDEs.

GPTs are one dimensional. Where is ChatIDE?

1 Like

mmmmm, I feel like this is a lie devs convince themselves of way too often. You would be surprised by what you can achieve by reframing that as “how do I” instead of “can I”, but I digress. It won’t happen overnight, but it’s more than likely possible so long as you understand what people want.

You mean Cursor?

I know you’re talking more about running and managing multiple conversation threads at once, but take a look at cursor and its business model anyway. It can give you an idea of how you could set up your own IDE.

3 Likes

I think I am not a ‘dev’ in the sense you mean. While I love to code and am passionate about it, for me coding and understanding the implications of AI have always been a moral imperative and the meaning of/in my life; it is a duty we have in this generation of our species. I could talk morals, philosophy, religion or politics, but let’s just say that for many people religion is fundamental to their lives. Being presented a bill by the priest after a service would be weird. In the same way, just because we live in a world run by money, it doesn’t mean every task or thought should have a price or should fit a contract. I don’t think even OpenAI, with their dedication to AI understanding and societal development, would give me a personal moral opt-out clause for tasks they paid me for.

I understand it was not meant so harshly but from my perspective I think I achieve more with the current frame.

As a programmer, yes, Cursor is interesting, but this is not my goal… If I showed that to a non-programmer they would zone out.

Just as GPTs could be written in code, they are not. They are written in forms that a far wider audience can understand and use.

From what I saw in the presentation video, Cursor errs very much on the side of coders; it is a coder’s tool. Chat, however, is not a coder’s tool. Any Chat IDE should primarily be fully ‘codable’ with voice. While it must clearly follow a logical structure, the majority of people in the world would primarily want to talk to a computer to produce something of quality.

There is a distinct digital divide between those who can code and everyone else. While this may be important for organisations like Microsoft/Google and many other IT businesses who need the preciseness of code, it is entirely unnecessary for most general tasks that you might use ChatGPT for.

Think Scratch (https://scratch.mit.edu/). While it writes code under the hood, it does so from users working with structures and blocks.

1 Like

I mean, technically that was the norm until Martin Luther came along, but I get what you mean lol.

What I meant by showing you Cursor was to give you an example of what an AI-based IDE is. An IDE, by definition, is an integrated development environment. While yes, most people do not know how to code, developing and programming are heavily intertwined with each other.

You can also check out Sam Labs and their stuff that they use to educate kids with (it uses a lot of those same principles as scratch with structures and blocks).

Either way, the point is to simply treat this as a tool for developers. Developers can mostly code. Even if they can’t, you would need to define a pretty detailed logic structure that people can follow. It’s not as easy as it looks.

I would not follow the footsteps of GPTs or Gems. Most people do not use them, and they would most likely just use those services directly instead of going to a different party/system that they don’t understand.

2 Likes

And this is where, as a little fish, I slip through the net and try to educate a community maybe lost in its own importance. Who are devs/OpenAI developing for?

I am redefining and spinning the term IDE into something integrated from another perspective… That of an end user, just as Scratch has done for children.

I beg to differ. I would suggest that by understanding a couple of simple structures such as conditionals and loops, a couple of data types like booleans, lists and tables, and a treeview structure, one can write incredibly complex reports and processes with existing ChatGPT API functionality.

I live in the UK, but I lived in China for 10 years and can get by there, bringing up a family with all that entails… I am no linguist and my Chinese is still very, very poor, but structure and context are, I believe, fundamental to all languages.

As ‘This Forum’s unofficial on-hand linguist’, as your profile describes you, I think you might concede that with a few ‘control structures’ and ChatGPT’s linguistic and coding capabilities there is not much that cannot be achieved. This may restrict ‘devs’ to the top 20% of harder and more important existing tasks, but at the same time open 80% of AI tasks to the rest of the world.

On my site I present context in the form of ‘Panes’, each like a real-world grounding object: a menu you can point to, a facial expression you can make… Grounding the user at every stage of a decision tree, and showing them only the structures and data they can use, is the same idea.

Again, I believe I answered both these points…

Most people do not use GPTs because they are clearly designed linguistically wrong.

And precisely the reason I am posting here and detailing a solution is to have a voice that can be heard in a community navel-gazing during an incredible evolutionary paradigm shift.

How many devs might be 95 with a denture problem? Is the long tail not one of the greatest development problems there is?

Is OpenAI out to develop intelligence for the masses or for the few?

2 Likes

A little before America’s time, but something that shaped the world we now enjoy, in a similar context to this discussion.

I do hope my first posts don’t have me exiled by the clergy, but I also believe in another inalienable right, that of free speech, which I believe this forum would want to uphold in its declared context of community development.

2 Likes

What a wonderful conversation.

I have been thinking about this very topic. Do you think that a REPL interface like this (https://youtu.be/KJ-J5AYGG7g) might work better for non-coders?

With the combination of a REPL interface that is able to “understand” function calling, I think it might be possible to replace an IDE.

3 Likes

I would consider REPL a single pane on my interface…

I have several pane types in the demo on my website; the ‘Ph’ is how I show this is ‘Augmented’ and not just code. Don’t laugh, it’s just my process :smiley:

The language I dream about building every day is ‘Phas’, Phoenix Assembly (WASM), which runs JIT practically standalone in the browser; but, as you suggest and as is the case for me, I have used existing implementations in PHP…

If you are considering using eval(), bear in mind these words from the creator of PHP, Rasmus Lerdorf: “If eval() is the answer, you’re almost certainly asking the wrong question.” That is, you should be able to achieve your goals without resorting to eval().

Yet… “I know that I know that I know”…

Phox (Phoenix Box) - A box/container/folder/nest
Phout (Phoenix Output) - The simple REPL interface as in the video
Phif (Phoenix If) - A conditional box/nest; the branch below will only display/follow if this Boolean expression (be it code or spoken language) evaluates to True
Phor (Phoenix For Loop) - As with a for loop, each item becomes the focus of the nest (however, it is often important to include the whole list or table in the System Prompt to provide adequate context for spoken language)

I would suggest referencing the PHP documentation linked above and extrapolating for other control structures.
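To make the nesting concrete, here is one way the Ph* structures above could be sketched. The Phox/Phout/Phif/Phor names come from the list above, but the implementation is my own guess, using plain callables for conditions rather than eval()’d strings, in keeping with the Lerdorf quote:

```python
def phox(*children):
    """Phox: container nest. Run every child, collect output lines."""
    out = []
    for child in children:
        out.extend(child())
    return out

def phout(text):
    """Phout: simple output pane."""
    return lambda: [text]

def phif(condition, *children):
    """Phif: conditional nest. Children only run when condition() is True."""
    def run():
        return phox(*children) if condition() else []
    return run

def phor(items, child_factory):
    """Phor: loop nest. Build and run a child nest for each item in the list."""
    def run():
        out = []
        for item in items:
            out.extend(child_factory(item)())
        return out
    return run

# A tiny nested "report": one conditional branch, one loop branch.
report = phox(
    phif(lambda: 7 > 6, phout("Second half of the year")),
    phor(["January", "February"], lambda m: phout(f"Plan for {m}")),
)
```

In a real system the condition callables and output panes would wrap model calls; the tree shape is the part this sketch is after.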

You must separate data objects from your REPL submissions or you are restricted to 128k tokens… My systems manage many tens of GBs of zipped ‘Data Objects’ (memory) and are restricted only by the computer I have access to (ROG Ally :smiley:)… AI selection of datasets via summary and context is useful in separately threaded requests.

Developers must open their minds beyond one-shot systems to provide interfaces to users.

Function calling is useful; I think I omitted functions from my website demo for now, but here is an example image of what you might add… It’s easier now to call functions with structured calling, but it wasn’t particularly difficult even pre-JSON with a little string matching etc.
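For the pre-JSON approach mentioned, a string-matching dispatcher might look like this sketch. The `CALL:` marker convention and the function table are invented for illustration; real structured function calling has since replaced this kind of scraping:

```python
import re

# Table of callable tools the host application exposes (illustrative).
FUNCTIONS = {"get_month": lambda: "July"}

def dispatch(model_reply: str):
    """Scan the model's free-text reply for a CALL: marker and run the named function."""
    match = re.search(r"CALL:\s*(\w+)\(\)", model_reply)
    if match and match.group(1) in FUNCTIONS:
        return FUNCTIONS[match.group(1)]()
    return None
```

The fragility of this approach (typos, unexpected phrasing) is exactly what JSON-structured calling was introduced to fix.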

The concept of IDE as I define it is an interface that front-ends code, as Scratch does, because clearly there is a benefit to this… Many people DO code, and the fine-tuning that can be done makes a difference… Yes, we autistic devs should still feel some love from the community.

That said… the hard problems, the P != NP, are the long tail… Devs != Non-Devs :smile:… Pure voice is the only acceptable solution in my view.

A non-dev should be able to ‘code’ a report; this is simple DTP. I was 10 or 11 when the internet arrived in my home town, sponsored by Apple. The focus then (as it should be now) was on educating people to use computers.

Then you had to teach how to use a mouse, a keyboard, what Windows IS :smiley:

Teaching people some simple logical constructs is rather less of a challenge, I am sure… allowing them to talk to computers.

2 Likes

That advice dates from before LLMs and function calling through LLMs. I am not suggesting eval in the traditional sense.

Let’s take one of the functions in selfet:

from typing import Annotated  # Annotated carries the natural-language parameter description

@tools_function(TOOLS_FUNCTIONS)
def assign_agent(agent: Annotated[str, "The agent who should get this task based on the background of the team member"]):
    """
    This function picks the team member as the agent who should get this task. The ideal team member is the one who
    best meets what the task entails, given the diverse backgrounds of team members. In certain cases two team members
    might have nearly the same qualities, in which case additional attention must be paid to every word of the background.
    """

This function describes in words what is expected of it, and is executed by the LLM given proper context. There’s no particular reason why the words themselves cannot change.

People understand simple logical constructs. What frustrates them is the inability to really translate that into something that computers can do without code.

2 Likes

Yes, you can code on the fly, though injection is possible, and this wouldn’t be recommended for non-devs or lazy devs… but neither would driving a car be recommended for drivers without a licence… Dev != Non-Dev, ever… This is a different problem than described above, more akin to a dev using eval with the threat of injection/error etc.

What frustrates people is computers don’t know EVERYTHING… This is far more about the privilege system of society.

For this you are looking at tying into APIs etc… Rakuten, for example, has an API database… these days you can use GPT to tie into APIs very easily.

The interface I show can very easily be modified for APIs, but remember there is always a divide between KNOWING and BELIEVING. If you are calling a function that doesn’t already exist and isn’t already checked, well then you are going to get an error… Fact of life… This is called dev… or skating on a knife edge.

I remember an abbreviation for that I learnt long ago on a forum much like this one… RTFM :smiley:

Computers are not gods, neither should they be… If they are, AGI is ‘Artificial General Intelligence’ and not ‘Augmented General Intelligence’ and we are OBSOLETE. Ergo, your user has an issue.

2 Likes

You know, I think ultimately what’s been keeping me thinking about this conversation is a lot of this:

But also, this:

I think what has been confusing me, now that I’m looking at all this, is not fully understanding to what degree something should be considered “under the hood”, and what should be shown to the person using the tool itself.

There is no right answer to this question, but I do think it’s important, because if you put too much under the hood, it becomes difficult to diagnose what might go wrong. However, if you put too little under the hood, you risk alienating those who aren’t as technical.

Either way, innovating on the interface for language models (and how to use them) is always welcomed.

2 Likes

OK and so now we have gotten to this point…

To muddy the water still further, and to clarify for anyone still catching up: clearly the next step is self-prompting by the AI within the ‘programming language’ we have developed…

To iteratively code itself.

Or even prompting the developer (which brings us back to calling this an ‘IDE’).

I think this question is along the same lines, but the example is GPT prompting in ‘words’, not also in code.

1 Like

Yes.

The thing that I am positing: can we get rid of PHP/Python/Java/JavaScript?

In very limited contexts (and with specialised fine-tuned LLMs), the following works:

import json
from typing import Annotated

@tools_function(TOOLS_FUNCTIONS)
def next_step(sentence: Annotated[str, "The sentence should describe what the next step should be in light of the conversation so far, why this next step is required and how to accomplish the next step."]):
    """
    The conversation so far is carried out between different agents through different messages.

    Each message MAY have multiple target_agents and if it has a target agent, it MAY have source_agents. TOTALLY
    REFRAIN FROM INCLUDING TARGET_AGENTS AND SOURCE_AGENTS. ALL agents are explicitly mentioned in the text,
    typically prefixed by '@'. The distinction between the two types of agents is VERY important. The target agent
    can be thought of as the subject in Advaita Vedanta and the source agent is the object in Advaita Vedanta.
    The text is always requesting the target agent to do something. The source agent may make further references
    to other agents.

    Pay SPECIFIC attention to the LAST BUT ONE MESSAGE as it, by and large, contains most clues to what the next step could be.
    HOWEVER also look at previous messages as they might also contain hidden context that may not be visible in only
    the last message.

    THE MAIN PURPOSE OF THE **ONE** SENTENCE IS TO PRODUCE THE NEXT STEP. WHAT SHOULD THE AGENT DO NEXT. BE SPECIFIC WHILE TARGETING AN AGENT.
    WHAT IS THE ACTION ITEM FOR THE AGENT IS THE QUESTION TO ANSWER. MAKE IT SPECIFIC TO ONE AGENT. AVOID GENERAL TERMS.
    """
    data = {'action': sentence}
    return json.dumps(data)

In other words, just write in English for the entire thing.

1 Like

OK I’m sorry… I think I understand now

I know my son wrote a system within the ChatGPT browser version where he wanted to be a character within a story he knew… I believe he got that working pretty well.

It is also much easier to get that example working than to write a system with my method.

That said, this is a very inefficient system.

Calling ChatGPT to run an if statement for you would be like walking to China for an ice-cream.

With the system I am demoing, it manages reason and logic and processes a lot of the logic locally, so you don’t even have to go out to get your ice-cream.

Further, you’d need an impossibly large context window for this to make sense for anything original that wasn’t already the most likely next step.
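The ice-cream point above can be sketched as: call the model ONCE to translate a spoken condition into a code expression, then evaluate that expression locally on every pass instead of making an API round-trip each time. `llm_translate` is a stand-in for that single model call, and the returned expression is assumed; evaluating model-generated code carries the injection risks discussed earlier:

```python
def llm_translate(spoken_condition: str) -> str:
    """Stand-in for ONE model call that turns spoken language into a code expression."""
    return "month > 6"  # assumed model output for the example condition

# One translation, compiled once...
expression = llm_translate("if the month is in the second half of the year")
condition = compile(expression, "<condition>", "eval")

def check(month: int) -> bool:
    """...then evaluated locally as often as needed, with no API round-trip."""
    # Empty __builtins__ limits (but does not eliminate) what injected code can reach.
    return eval(condition, {"__builtins__": {}}, {"month": month})
```

The expensive, slow part (the model call) happens once; the cheap, fast part (the condition check) runs locally for every item or iteration.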

1 Like

Below is a simple, quickly tested and working prompt that fits the demo implementation I have online:

Follow each stage in the process in order, creating the following variables.

Stage 1 - Control Structure ID: Based on the user message, attempt to determine which of the following the user is asking for:

1 - Evaluate a condition, i.e. an if statement
2 - Ask an English-language question that cannot be evaluated to PHP code
3 - Error if not sure

Control Structure ID = Numeric ID 1-3 above

Stage 2 - Control Structure Response: Based on the answer to Stage 1, determine the appropriate answer in the defined format:

1 - Return an expression in PHP code that would replace the word ‘Expression’ in the conditional ‘if (Expression)’
2 - Return the asked English-language question
3 - Return just the word Error

Control Structure Response = Appropriate response from Stage 2

The resulting output should be a JSON response

Example 1

‘if the month is in the second half of the year’

would return

[‘Control Structure ID’=>1, ‘Control Structure Response’=>“date(‘m’) > 6”]

Example 2

‘if the world is flat’

would return

[‘Control Structure ID’=>2, ‘Control Structure Response’=>“if the world is flat”]
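Assuming the model is asked to return actual JSON (the examples above use PHP array syntax; a JSON equivalent is assumed here), the host code might dispatch on the staged response like this sketch. The function and tag names are illustrative, not from the demo:

```python
import json

def handle(raw: str):
    """Parse the model's staged JSON reply and dispatch on the Control Structure ID."""
    reply = json.loads(raw)
    cs_id = reply["Control Structure ID"]
    response = reply["Control Structure Response"]
    if cs_id == 1:
        return ("php_expression", response)    # splice into: if (Expression)
    if cs_id == 2:
        return ("english_question", response)  # needs a further model call to answer
    return ("error", response)

# Example 1's output, rewritten as JSON:
kind, value = handle('{"Control Structure ID": 1, "Control Structure Response": "date(\'m\') > 6"}')
```

This is the point where the two paths diverge: ID 1 stays local (code evaluation), ID 2 goes back out to the model.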

1 Like

Well at this point I am stuck. I have written the interface I set out to create.

Where it goes from here I don’t know.

Having read more on the forum I feel like putting any more up is just more detrimental.

While I have found it too hard for ChatGPT to write the code I have written by itself thus far, the speed of improvement of these models makes future improvements, even the next feature, seem rather pointless.

Should anyone have any suggestions I’d love to hear your thoughts.

I can see there has been some interest on the website.

It certainly feels like the end of the line for me… Time to consider what to do and how best to help, as AI slowly eats everyone else’s jobs too.

Certainly, on that question ChatGPT asked me more questions than I asked it :smiley:

1 Like

I’m not sure if this will work… It will probably destroy my hosting and knock out my website. I have put up an HTML website that doesn’t work, so this should make things much clearer…

For those more technically minded there is clearly a good reason for that…

I may well take this back down, I have a problem setting up things like YouTube, GitHub etc, let’s just say it’s too technical for me :confused: so I’m afraid this is the best I can do for now…

Video Link

The website works in the Chrome browser, and you can see the difference between the ‘IDE’ (with all the tools) and the resulting ‘Report’.

1 Like

I have a further update to share on this project ^^

Forum Checker In 60 Seconds

I think this is at least closer to your suggestion. (Forum Checker was inspired by a recent forum thread; I hope that’s not an issue.)

I have popped most of the code requirements out now; it still works off code though, not pure English ^^

OK, there is now a proof of concept for this. :slight_smile:

1 Like