SYLON5.2 Flowchart Revealed: Real-Time Multi-layered AI Thinking

Greetings!

So, from a code perspective, are you saying that all these layers are processing stages happening OUTSIDE the LLM call, i.e. in the flow prompt input > SYLON > call to LLM > output?

If so, what language is it written in, and what architecture or logical framework are you using for the various analytical steps?

What does a “layer” consist of? A dedicated code module, or just a single step in the process?
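To make the questions above concrete, here is the kind of shape I am imagining — a minimal sketch (every name and transformation here is hypothetical, not a claim about how SYLON actually works) where each “layer” is a plain function applied in sequence before a single LLM call:

```python
def layer_clarify(prompt: str) -> str:
    """Hypothetical layer: strip filler words from the prompt."""
    filler = {"um", "like"}
    return " ".join(w for w in prompt.split() if w.lower() not in filler)

def layer_annotate(prompt: str) -> str:
    """Hypothetical layer: prepend an inferred-intent tag."""
    intent = "question" if prompt.rstrip().endswith("?") else "statement"
    return f"[intent: {intent}]\n{prompt}"

# The "pipeline" is just an ordered list of such layers.
PIPELINE = [layer_clarify, layer_annotate]

def sylon_preprocess(prompt: str) -> str:
    """Run every layer in order; the result is what would be sent to the LLM."""
    for layer in PIPELINE:
        prompt = layer(prompt)
    return prompt

final_prompt = sylon_preprocess("um what is like the capital of France?")
```

Is each layer something like one of these functions/modules, or is the structure entirely different?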

Are you using relational databases or vector stores?

Are you making additional LLM calls throughout the process?

Can you give examples of the prompt manipulation, by running a prompt directly through the system and showing the result?

I.e., for a simple use case: what is the “starting prompt”, and what output from SYLON is then actually passed to the LLM?
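Something like this before/after pair is what I am hoping to see — this example is entirely made up by me to show the shape of the answer, not actual SYLON output:

```python
# Purely hypothetical before/after, for illustration only.
starting_prompt = "how do I make my code faster"

transformed_prompt = (
    "[intent: optimization question]\n"
    "[constraints: language unspecified]\n"
    "User asks: how do I make my code faster"
)
```

A real pair like that, for any trivial input, would make the system much easier to reason about.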