Hi folks! Please let me know if this is not the appropriate place to post this.
I'd like to get some feedback from this community on a prompt playground tool with visual prompt chaining that I built recently. It's still very early in development, and I built it as a side project.
I know there are still a lot of missing features and UX gaps, but it's reasonably useful now. I want to see whether it can actually be useful to other people before I invest more time in it.
The tool is hosted at https://promptplay.xyz/ (you can play without signing up).
Some examples:
Generate a poem, just like you can in the OpenAI Playground.
I know there are a bunch of other prompt tools out there, and I'm learning from them as well. If you find gaps, or things this tool does better, please let me know! Thanks!
In a sentence or two, could you explain what the app does? I know you say it's a "prompt playground", but I'm not quite sure I can fill in all the blanks about what that means. I can tell it's some kind of flowchart, but I'm not sure what to actually do.
That's a good question. I'm still figuring out how to present it. Let me give it a try.
Essentially, PromptPlay.xyz lets users experiment with prompt chaining, i.e. chaining multiple OpenAI completion API calls together with little or no code.
It does this with a handful of primitives that work like Lego blocks. Users can drag, drop, and edit them on the web page using a visual editor.
The most powerful feature it offers that the OpenAI Playground doesn't is chaining multiple OpenAI completion calls together. Each completion call can have its own system prompt and user prompts, and prompts can take input from the output of previous completion calls.
All of this is done in the visual editor on the web page instead of in code. There are 6 primitives so far: Databag, Message, LLM, Parser, Append to List, and Get Attribute (sketched just after this list). I hope these features help people achieve two goals:
Experiment quickly with new prompt chaining ideas without writing code.
Have a lightweight way to share their prompt chaining solutions with others.
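To make the primitives a bit more concrete, here is one hypothetical way to picture the six block types as TypeScript types. This is purely illustrative and not the tool's actual data model; all names and fields below are my own shorthand.

```typescript
// Illustrative sketch only -- NOT PromptPlay's real data model.
// Each variant is a hypothetical stand-in for one of the six primitives.
type Block =
  | { kind: "Databag"; values: Record<string, string> }             // static key/value inputs
  | { kind: "Message"; role: "system" | "user"; template: string }  // prompt text with placeholders
  | { kind: "LLM"; model: string; temperature: number }             // one OpenAI completion call
  | { kind: "Parser"; code: string }                                // JavaScript post-processing of an output
  | { kind: "AppendToList"; listName: string }                      // accumulate outputs across steps
  | { kind: "GetAttribute"; path: string };                         // pick one field out of a structured output

// A chain is just an ordered list of blocks; later blocks can read earlier blocks' outputs.
type Chain = Block[];
```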
Very cool concept! I think everyone would agree we need generalized solutions like this. Reminds me of Jupyter Notebooks, which are an extremely popular way to have "blocks" of content and are widely used in the AI field, especially among Python devs. I don't think Python can do this, but you could make that one of your integrations in the future.
Two ideas I had looking at it:
Maybe make the "system" prompt assignable for each GPT query.
Also a hook to post-process each output by hand (just by calling a function) before feeding it back as input again.
Also reminds me of how I've heard "Baby GPT" explained, but I've never seen BGPT myself. I'll be following you and looking for your GitHub.
Thanks. It's very encouraging that you found these ideas could be useful.
On the two ideas you mentioned:
Maybe make the "system" prompt assignable for each GPT query.
This is supported. E.g. in https://promptplay.xyz/spaces/20b1fa41-6e46-4d5a-b72c-854068d29e44, the first GPT query returns a poet's name, which is then passed as poet_name into the second GPT query's system message. The two GPT queries use separate system prompts.
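For anyone curious what that chain corresponds to in plain code, here is a rough TypeScript sketch using the OpenAI Node SDK. The prompts are placeholders I made up, not the ones in that space; only the shape of the chain matches (two calls with separate system prompts, with poet_name flowing from the first into the second).

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

async function poemChain(): Promise<void> {
  // Step 1: the first GPT query returns a poet's name.
  const first = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "Answer with a poet's name only." }, // placeholder prompt
      { role: "user", content: "Name a famous poet." },               // placeholder prompt
    ],
  });
  const poet_name = first.choices[0].message.content ?? "";

  // Step 2: the output is injected into the second query's separate system prompt.
  const second = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: `You write poems in the style of ${poet_name}.` },
      { role: "user", content: "Write a short poem about the sea." }, // placeholder prompt
    ],
  });
  console.log(second.choices[0].message.content);
}

poemChain();
```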
Also a hook to post-process each output by hand (just by calling a function) before feeding it back as input again.
This is exactly what the Parser block is for. I'm still thinking about how to balance flexibility with ease of use, but in this example, https://promptplay.xyz/spaces/69563a91-2382-48d2-adad-f82cc6197b52, the completion call generates a numbered list, then the content of the message is passed to a Parser block containing some JavaScript code. The code splits the text by newline, then uses a regex to extract the string inside the double quotes.
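Roughly, the kind of code a Parser block like that might contain, written here as a standalone TypeScript function (the actual code in that space may differ):

```typescript
// Sketch of a Parser step: split the completion's numbered list by newline,
// then use a regex to extract the text inside the double quotes on each line.
function parseQuotedItems(completionText: string): string[] {
  return completionText
    .split("\n")
    .map((line) => line.match(/"([^"]*)"/)) // capture the quoted part, if any
    .filter((m): m is RegExpMatchArray => m !== null)
    .map((m) => m[1]);
}

// e.g. '1. "The Road Not Taken"\n2. "Fire and Ice"' -> ["The Road Not Taken", "Fire and Ice"]
```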
The way you're extracting JSON and then using it to formulate a new question made me think up this test to run (in the image), and as you can see, it worked. Wow. I'm continually amazed by the high-order reasoning.
EDIT: I forgot to add one more line to the prompt at the end: "Please provide your answer as JSON", but I tested that and it works. So it is a pattern for JSON in and JSON out.
It looks like you are using GPT-3.5? I'm surprised by the reasoning capability. On a separate note, I've often found that reasoning is not the hardest part; ensuring stable output formatting usually is.
It's really just a hierarchical CMS/wiki kind of thing. It has Fediverse support, so it's partly a social media app. But I think an AI conversation repository is its current "pivot".
The idea is just the bomb. I tried opening your project directly in the chatbot environment I'm building for exactly the same purpose, and it's exactly what was missing to make it useful and efficient. As shown in the screenshot, I can open your project in Canvas, but I can't authorize due to security policy. I asked the same question to the developers of the ChatGPT system and the MindOS platform on Discord; you can check the discussion thread, and maybe you will get new ideas on how to make your system more efficient and better known.
Interesting use case. Are you trying to create an agent in MindOS that can control elements on promptplay.xyz, so the MindOS agent can construct prompt chains itself?
Lightning has just struck my brain. I will try to build a familyfeudGPT decision engine with a hot family at temperature 0.9 versus a cold family at 0.1, taking turns arguing their answers to the 0.4 adjudicator's questions, arguing their points until some consensus is reached and potentially a concatenated solution is achieved. Anyone else wanna build it in a race?
Unfortunately, interaction is limited; I can explore images and text in Canvas using screenshots and page parsing. I am trying to train the chatbot to generate "actual" data to populate parameters in components (nodes, modules, skill forms, etc.) from the Workflow Visualizer built into the platform, where you can create workflows and skills from scratch, using engineering blocks that are in the library or created from scratch. I have not yet had time to explore all the possibilities, as there is a list of programs that I have managed to open in Canvas, and I am proceeding systematically, choosing what can help me in learning a new profession. Thank you for your work; even without trying it yet, I'm pretty sure this is what I need to finally understand how the processes work and how to build skills.