More specifics (this is a hobby project of mine): I’ve built a RAG over about 500 PubMed papers on a very rare kidney disease (apparently only a couple of thousand people on Earth have it, and I am helping their foundation with various gen AI use cases).
The problem is: instead of just Q&A over this RAG (which is easy), I need the chatbot to act as an interviewer, so it can hold a conversation with someone wondering whether they have the disease. So basically, the bot asks the questions (instead of the other way around).
I put up a quick demo on a Saturday morning (for more context).
On the face of it, it seems fairly trivial to embed the interview questions and expected ideal answers, add programmatic flow control to ask a set of questions in order, and build prompts from that flow… Is the point of this to give a score at the end, or just to collect the data? Also, is it expected to function as a pre-screening element in a larger recruitment pipeline?
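To make the "programmatic flow control" idea concrete, here is a minimal sketch: a fixed, ordered question list drives the conversation, the LLM only phrases the question and records the answer, and a toy score is computed at the end. All names, questions, and weights here are illustrative, not the poster's actual flow:

```python
# Minimal sketch of flow-controlled interviewing: an ordered question
# list drives the conversation; prompts are built from that flow.
# Question ids, texts, and weights are hypothetical.
QUESTIONS = [
    {"id": "q1", "text": "What symptoms led you here?", "weight": 2},
    {"id": "q2", "text": "Has a relative been diagnosed with this disease?", "weight": 3},
]

def build_prompt(question, answers_so_far):
    """Assemble the prompt for the next interview turn."""
    history = "\n".join(f"- {qid}: {ans}" for qid, ans in answers_so_far.items())
    return (
        "You are a medical pre-screening interviewer.\n"
        f"Answers so far:\n{history}\n"
        f"Ask the user, in a friendly tone: {question['text']}"
    )

def score(answers, questions):
    """Toy scoring: sum the weights of questions answered 'yes'."""
    by_id = {q["id"]: q for q in questions}
    return sum(by_id[qid]["weight"] for qid, ans in answers.items()
               if ans.strip().lower() == "yes")
```

The point of the sketch is that the *flow* stays in plain code, where it is deterministic and testable; the model is only asked to do what it is good at (phrasing and interpreting a single turn).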
Indeed. I’m building out a project I call “CustomGPT Forms” - a simple [dynamic] mechanism for generating CustomGPT models that gather information. I didn’t want to say much about it just yet, but it is very relevant to this thread.
Forms, by and large, are laden with issues. They are rigid and unwavering, and they cannot accommodate branches that are unpredictable at design time. My approach allows “data form builders” to define a universe of data the AI needs to gather, build the corpus, create the CustomGPT project, generate a custom chat UI, and then run the data-collection conversation, pushing the collected data to a webhook.
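To illustrate, the "universe of data" a form builder defines could be as simple as a declarative schema that the AI walks conversationally. This is purely a hypothetical shape, not CustomGPT's actual format; every field name and the webhook key are made up:

```python
# Hypothetical "universe of data" definition for a data-gathering GPT.
# Field names, the branch syntax, and the webhook key are illustrative only.
FORM_UNIVERSE = {
    "objective": "Pre-screen users for a rare kidney disease",
    "fields": [
        {"name": "symptoms", "type": "free_text", "required": True},
        {"name": "family_history", "type": "yes_no", "required": True,
         # Branch: only pursued if the prior answer warrants it.
         "follow_up_if_yes": {"name": "relative_diagnosed", "type": "free_text"}},
    ],
    "webhook": "https://example.com/collect",  # where collected data is pushed
}

def required_fields(universe):
    """List the field names the conversation must always cover."""
    return [f["name"] for f in universe["fields"] if f.get("required")]
```

Unlike a rigid form, the AI can pursue the optional branches in any order and in natural language, as long as it eventually covers the required fields.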
This – I believe – is probably the future of forms. I want to implement this in CustomGPT for a number of reasons that I can’t explore here, but it’s starting to work well in my limited prototyping and tests.
The flow might be quite complicated, so I am wondering whether all of it can be jammed into the system prompt in one go.
For this specific case, yes: it would be like a score (or a suggestion based on the score). I have other use cases where the interviewer wants to kick off other workflows based on the path taken (e.g. a large financial planning firm that wants to ask 8 questions and then take the user down a particular path).
This would be a wonderful use case too. In this specific case, it would be a pre-screening to urge the user to maybe get further testing.
For example, here is the pre-screening flow for this specific case:
In the context of generative AI, it prompts the question -
What comes after forms as we know them?
The forms providers are busy making their legacy approach more productive with AI. This signals that disruption lies ahead for them, but what will it look like?
CustomGPT appears to have all the ingredients to demonstrate a better way to collect information. If your support team would help me get through ticket #5316, I might be able to demonstrate this next week.
In my experience, relying solely on a system prompt and letting it operate autonomously produces inconsistent results. The process can be made more robust with an agent-driven approach, as outlined below:
Agent 1 is provided with the flowchart you posted, in Mermaid chart format. The fundamental question it must present to the user is: “What leads you to believe you have ___ disease?” Agent 1 should also have access to a specific function (via function calling) that lets it consult about symptoms much as one would consult a doctor, except it will be consulting another agent.
Agent 2’s role is to translate the inquiry into three separate questions that may either help answer Agent 1’s question or facilitate further investigation. Agent 2 directs these questions to Agent 3, using a multithreaded approach. (This is done so that each question contains enough text to match relevant chunks.)
Agent 3, having access to research paper embeddings, will extract relevant information from these sources.
Agent 2, armed with the data obtained from Agent 3, will formulate additional questions for Agent 1.
Agent 1 will then assess whether further questions are required, or if a conclusion can be reached.
This approach necessitates a specific level of technical expertise for successful implementation.
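The loop described above can be stubbed out to show the control flow. In this sketch each agent is a plain function, and `search_embeddings` is a placeholder you would wire to your vector store; the places where a real LLM call would decide something are marked as stand-ins:

```python
# Sketch of the three-agent loop described above. search_embeddings is a
# placeholder for the vector store; the branching decisions marked
# "stand-in" would be LLM calls in a real implementation.
def agent3_retrieve(question, search_embeddings):
    """Agent 3: pull relevant chunks from the research-paper embeddings."""
    return search_embeddings(question)

def agent2_expand(inquiry, search_embeddings):
    """Agent 2: fan one inquiry out into three retrieval questions
    (in practice generated by an LLM), gather evidence via Agent 3."""
    sub_questions = [f"{inquiry} (angle {i})" for i in range(3)]
    evidence = []
    for q in sub_questions:  # a thread pool here gives the multithreading
        evidence.extend(agent3_retrieve(q, search_embeddings))
    return evidence

def agent1_interview(first_answer, search_embeddings, max_rounds=3):
    """Agent 1: loop round by round until it can conclude, or the
    round budget runs out. Returns the evidence gathered per round."""
    inquiry, transcript = first_answer, []
    for _ in range(max_rounds):
        evidence = agent2_expand(inquiry, search_embeddings)
        transcript.append(evidence)
        if len(evidence) < 3:      # stand-in for the LLM's "can I conclude?"
            break
        inquiry = evidence[0]      # stand-in for the next follow-up question
    return transcript
```

The value of this structure is that each agent has one narrow job, so each prompt stays short, and the overall loop is bounded by `max_rounds` rather than left to the model's discretion.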
For prototyping without writing code, you can try superagent.sh
Also agreed: trying to pile everything into one prompt yields increasingly poor results, because a large number of tasks in a single prompt spreads the attention heads in too many directions at once, leading to missed key points, hallucination, and generally low-quality context referencing.
If the use case can stand the additional cost (latency can be handled with parallelism/streaming), then multiple prompts covering multiple topics and tasks is the way to go.
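Since the prompts cover independent topics, the parallelism mentioned above is straightforward; a minimal sketch with `concurrent.futures`, where `call_model` is a stand-in for your actual LLM API call:

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt):
    """Stand-in for a real LLM API call (network-bound, so threads overlap well)."""
    return "response to: " + prompt

def run_prompts(prompts):
    """Fire all topic-specific prompts at once; results return in input order."""
    with ThreadPoolExecutor(max_workers=max(1, len(prompts))) as pool:
        return list(pool.map(call_model, prompts))
```

With this shape, total latency is roughly that of the slowest single prompt rather than the sum of all of them.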
You can mitigate some of the cost by running some agents locally; not all tasks need state-of-the-art models. I’m personally using the en_core_web_sm model from spaCy to find similar text; it’s about 26 MB and runs on a laptop.
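As a rough illustration of the same "cheap local similarity" idea with zero dependencies, a bag-of-words cosine similarity works for quick prototyping (this is not what spaCy does internally, which relies on trained vectors, but it captures the spirit):

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts (0.0 to 1.0)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[token] * vb[token] for token in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```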
The system prompt should only be used to initiate and explain the interview.
Hey, good to know that someone is trying to resolve the hiring issues.
I’m also building something similar to gather information based on candidates’ resumes, but the challenge is that the system goes in a cycle, repeatedly asking questions based on the same content.
I built a medical follow-up plan conductor based on a similar idea. In a medical scenario, professionals need to collect patients’ info while they are on a follow-up plan, either for prospective research or for the purposes of a given treatment.
Hi @bill.french, @alden et al - I am willing to volunteer to help test your products/projects and to showcase/promote them on the social platform I am building and to my linkedin followers.
I am interested in helping people liberate their data from social platforms and have a safe space to work on their life story, including a bot to interview people. I could see a bot that could help people with some issues that toxic social media has worsened, such as self-image problems and self-harm. I haven’t been able to find funding and am considering going open source. The fundamental feature set has validated demand.
The idea of using technology to capture personal histories will certainly help people if it gets off the ground.
Regarding angel funding, it’s a common misconception that all investors will see the broader societal value in every pitch. While your project has a societal impact, many investors prioritize financial returns, exit strategies, and profit margins.
Considering you’ve mentioned the idea of open-sourcing the project, this could indeed be a strategic move. By going open-source, you can garner community support, contributions, and possibly find collaborators who share your vision and can help bring it to fruition.
If you’re open to advice, I’d say try connecting with organizations or groups dedicated to mental health and storytelling. They might provide you with insights or partnerships that can fuel your journey. Additionally, consider creating a separate topic on this forum dedicated to your project. It’ll not only provide a dedicated space for discussions but also make your initiative more searchable and accessible to those interested.