Getting feedback on a visual prompt chaining tool I built

Update: Sep 17

Hey folks, I added a new feature: using a CSV as input for batch evaluation. Check out the demo video at the 45s timestamp:

The latest version is already deployed at
Source code: GitHub - d6u/PromptPlay: An experiment on visual prompt chaining. (with updated README for setting up locally yourself)

Updated Sep 11

A new version of the web app is released. The information below might be outdated.

Check out this post for the latest updates: Just uploaded a new version of a visual prompt chaining tool I made!

Hi folks! Please let me know if this is not the appropriate place to post this :smiley:

I want to get some feedback from this community about a prompt playground tool I built recently with visual prompt chaining capability. It’s still in early development, and I built it as a side project.

I know there are still a lot of missing features and UX gaps, but it’s kinda useful now. I want to see if this can actually be useful to other people before I invest more time in it.

The tool is hosted on a website: (You can play without signing up)
Some examples:

I know there are a bunch of other prompt tools out there as well, and I’m learning from them. If you find gaps, or things this tool does better, please let me know! Thanks!


Looks slick. Thanks for sharing with us.

Hope you stick around the community.


In a sentence or two could you explain what the app does? I know you say it’s a “prompt playground” but I’m not quite sure I can fill in all the blanks about what that means. I can tell it’s some kind of flowchart but I’m not sure what to actually do.


That’s a good question. I’m still developing a way to present it. Let me give it a try.

Essentially, it allows users to experiment with prompt chaining, i.e. chaining multiple OpenAI completion API calls together without writing any code (or barely any).

It achieves this by employing a handful of primitives, like Lego blocks. Users can drag, drop, and edit them on the web page using a visual editor.

The most powerful feature it offers that OpenAI Playground doesn’t is chaining multiple OpenAI completion calls together. Each completion call can have a different system prompt and user prompts. Prompts can also take input from the output of previous completion calls.

For example, in, the first completion call outputs a poem, and the poem becomes part of the second completion’s user message via {} substitution:

Translate below text into Spanish:
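The {} substitution above can be sketched in plain JavaScript. This is just an illustration of the idea; `fillTemplate` and the variable names are made up for this sketch, not PromptPlay’s actual implementation:

```javascript
// Sketch of {} variable substitution between chained completions.
// fillTemplate and the variable names are illustrative, not PromptPlay's API.
function fillTemplate(template, vars) {
  // Replace each {name} placeholder with the matching value from vars,
  // leaving unknown placeholders untouched.
  return template.replace(/\{(\w+)\}/g, (match, name) =>
    name in vars ? vars[name] : match
  );
}

// The first completion's output (a poem) feeds the second prompt:
const poem = 'Roses are red';
const prompt = fillTemplate('Translate below text into Spanish:\n{poem}', { poem });
```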

All of this is done in the visual editor on the web page instead of in code. I have 6 primitives so far: Databag, Message, LLM, Parser, Append to List, and Get Attribute. I hope these features can help people achieve two goals:

  1. Experiment quickly on new ideas on prompt chaining without writing code.
  2. Have a lightweight way to share with people their prompt chaining solution.

The complete UI looks like this:

Very cool concept! I think everyone would agree we need a generalized solution like this. It reminds me of Jupyter Notebooks, an extremely popular way to have “blocks” of content and widely used in the AI field, especially among Python devs. I don’t think Python could do this, but you could make that one of your integrations in the future.

Two ideas I had looking at it:

  • Maybe make the “system” prompt assignable for each GPT query.
  • Also a hook to post-process each output by hand (just by calling a function) before feeding it back as input again.

Also reminds me of how I’ve heard “Baby GPT” explained, but I’ve never seen BGPT myself. I’ll be following you, and looking for your GitHub.

1 Like

Thanks. It’s very encouraging that you found these ideas useful.

For the two ideas you mentioned:

Maybe make the “system” prompt assignable for each GPT query.

This is supported. E.g. in, the first GPT query returns a poet’s name. Then it’s passed as poet_name in the second GPT query’s system message. Both GPT queries use separate system prompts.

Also a hook to post-process each output by hand (just by calling a function) before feeding it back as input again.

This is exactly what Parser does. I’m still thinking about how to balance flexibility with ease of use, but in this example, the completion call generates a numbered list, then the content of the message is passed to a Parser block containing some JavaScript code. The code splits the text by newline, then uses a regex to extract the string within the double quotes.
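A Parser block along those lines might contain JavaScript like this. This is a sketch only; the function name and the input/output convention of a Parser block are assumptions, not PromptPlay’s actual interface:

```javascript
// Sketch of a Parser block: split a numbered list by newline, then use a
// regex to pull out the string inside double quotes on each line.
// The block's input/output convention here is assumed, not PromptPlay's.
function parseQuotedItems(text) {
  return text
    .split('\n')
    .map(line => {
      const match = line.match(/"([^"]*)"/); // capture text between quotes
      return match ? match[1] : null;
    })
    .filter(item => item !== null); // drop lines with no quoted string
}

// Example: a numbered list produced by the previous completion call.
const input = '1. "El sol"\n2. "La luna"\n3. "Las estrellas"';
const items = parseQuotedItems(input);
```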

1 Like

The way you’re extracting JSON and then using it to formulate a new question made me think up this test to run (in the image) and as you can see it worked. wow. I’m continually amazed by the high-order reasoning.

EDIT: I forgot to add one more line to the prompt at the end: “Please provide your answer as JSON”, but I tested that and it works. So it is a pattern for JSON in and JSON out.

It looks like you are using GPT-3.5? I’m surprised by the reasoning capability. On a separate note, I’ve often found that reasoning isn’t the hardest part; ensuring stable output formatting usually is.

I haven’t even tried GPT-4 yet. I need to try it! BTW, here’s how my app is making use of GPT:

This is cool. It looks like some QA tool powered by AI. What does it do?

It’s really just a hierarchical CMS/wiki kind of thing. It has Fediverse support, so it’s partly a social media app. But I think an AI Conversation Repository is its current “pivot”.

1 Like

Thanks for sharing and creating the tool!

The website doesn’t open for me on Safari FYI.

Works great on Chrome :slight_smile:

1 Like

Thanks for letting me know! I should have tested in Safari. Will definitely fix that in the future.

I just upgraded some front-end dependencies and tested that it works on Safari. Feel free to give it a try!

There is a technique to force GPT to respond in the right JSON format using the functions API.

functions: [
  {
    name: 'poem',
    parameters: {
      type: 'object',
      properties: {
        poem: {
          type: 'string'
        }
      }
    }
  }
]
You could also add some prompt like:

Always use the “poem” function for your response.

The idea is just a bomb. I tried opening your development directly in the chatbot environment I’m building for exactly the same purpose, and it’s exactly what was missing to make it useful and efficient. As shown in the screenshot, I can open your development in Canvas, but I can’t authorize due to the security policy. I asked the same question to the developers of the ChatGPT system and the MindOS platform on Discord, and you can check the discussion thread; maybe you will get new ideas on how to make your system more efficient and better known.


Interesting use case. Are you trying to create an agent in MindOS that can control elements on, so the MindOS agent can construct prompt chains itself?

Lightning has just struck my brain. I will try to build a FamilyFeudGPT decision engine with a hot family at temperature 0.9 versus a cold family at 0.1, taking turns arguing their answers to the 0.4 adjudicator’s questions, arguing their points until some consensus is reached and potentially a concatenated solution is achieved. Anyone else wanna build it in a race?

Unfortunately, interaction is limited; I can explore images and text in Canvas using screenshots and page parsing. I am trying to train the chatbot to generate “actual” data to populate parameters in components (nodes, modules, skill forms, etc.) from the Workflow Visualizer built into the platform, where you can create workflows and skills from engineering blocks that are in the library or created from scratch. I have not yet had time to explore all the possibilities, as there is a list of programs that I have managed to open in Canvas, and I am proceeding systematically, choosing what can help me in learning a new profession. Thank you for your work; even without checking it out, I’m pretty sure this is what I need to finally understand how the processes work and build skills.

Tue, Sep 5, 2023, 23:54, Daiwei via OpenAI Developer Forum <>:

1 Like

Go for it! I may end up custom designing and building a tool myself, as nothing out there is currently of much use to me.