Add user feedback to the output to improve future outputs

Hi, I hope you're all doing well. This is my first post in this community.

What am I doing?
I am working on a project based on the transcripts that Zoom generates from meetings between cyber security experts. Suppose there is an ongoing conversation in the discussion; my project processes the transcript every 5 minutes.

What I am looking for is to extract the keywords from the transcript that are related to the cyber security domain. Based on those keywords, I output the challenges/situations being discussed in the transcript. From those challenges/situations, I then output the solutions that the cyber security experts might consider to mitigate/resolve what they are facing.

Therefore, there are 3 steps in my project:

  1. Listing keywords
  2. Describing the challenges/situations based on the keywords
  3. Providing the necessary solutions
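The three steps above could be chained roughly like this sketch. It is not LangChain-specific: `callModel` is a hypothetical stand-in for whatever LLM call you use (LangChain JS, the raw OpenAI API, etc.), injected so the pipeline itself can run and be tested without a live model; the prompt wording is illustrative, not a tuned template.

```javascript
// Minimal sketch of the 3-step pipeline run every 5 minutes.
// `callModel` is a hypothetical async (prompt) => string LLM call,
// injected so the pipeline stays testable without network access.
async function runPipeline(transcript, callModel) {
  // Step 1: list cyber security keywords found in the transcript.
  const keywords = await callModel(
    `List the cyber security keywords in this transcript, comma-separated:\n${transcript}`
  );

  // Step 2: describe the challenges/situations behind those keywords.
  const challenges = await callModel(
    `Given the keywords [${keywords}], describe the challenges/situations ` +
      `discussed in this transcript:\n${transcript}`
  );

  // Step 3: propose solutions the experts might consider.
  const solutions = await callModel(
    `Suggest solutions the experts might consider for these challenges:\n${challenges}`
  );

  return { keywords, challenges, solutions };
}
```

Because each step only depends on the previous step's output, you can later splice user feedback into any of the three prompts without restructuring the chain.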

What do I need?
Now, the thing is, I need a feedback mechanism in this process for each specific user. How can I add human interaction such that, if users don't like certain keywords being listed, they can change some of them?
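One simple per-user mechanism (a sketch, not a LangChain feature): record each user's rejected keywords and filter them out before showing the next keyword list. The class and its in-memory `Map` are illustrative; in a real deployment you would persist this per user.

```javascript
// Sketch of a per-user keyword feedback store. In-memory only here;
// persist it (database, file, etc.) in a real deployment.
class KeywordFeedbackStore {
  constructor() {
    this.rejected = new Map(); // userId -> Set of disliked keywords
  }

  // Record that this user rejected a keyword.
  reject(userId, keyword) {
    if (!this.rejected.has(userId)) this.rejected.set(userId, new Set());
    this.rejected.get(userId).add(keyword.toLowerCase());
  }

  // Drop every keyword this user has previously rejected.
  filter(userId, keywords) {
    const disliked = this.rejected.get(userId) || new Set();
    return keywords.filter((k) => !disliked.has(k.toLowerCase()));
  }
}
```

Filtering after extraction (rather than retraining the model) keeps the feedback per-user: two users watching the same meeting can see different keyword lists.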

When the challenges/situations are listed and a user thinks some of them are useless, the model should learn from that, so that for future meetings between the cyber security experts it does not list the useless challenges/situations and instead provides a better response based on the previous meetings' feedback.
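Rather than actually retraining the model after each meeting, a lighter-weight option is to inject the accumulated negative feedback into the prompt for the next 5-minute run. A sketch, where the function name and prompt wording are illustrative:

```javascript
// Build a challenges prompt that carries forward the user's negative
// feedback from earlier meetings, steering the model away from
// challenges the user marked as useless. Wording is illustrative.
function buildChallengesPrompt(transcript, keywords, uselessChallenges) {
  let prompt =
    `Keywords: ${keywords.join(', ')}\n` +
    `Describe the challenges/situations discussed in this transcript:\n` +
    `${transcript}\n`;
  if (uselessChallenges.length > 0) {
    prompt +=
      `\nThe user previously marked these challenges as useless; ` +
      `do NOT list them or close variants of them:\n` +
      uselessChallenges.map((c) => `- ${c}`).join('\n');
  }
  return prompt;
}
```

Prompt injection like this takes effect immediately on the next interval; actual fine-tuning would need many more feedback examples before it changes behavior.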

The same applies to the solutions part: if a solution is not feasible, the model should produce improved solutions next time.

Note: I am using the OpenAI API through the LangChain JS library to avoid processing the huge transcript myself, and the steps listed above are repeated at every 5-minute interval.

Thank you for reading this whole problem. If you know of anything that might be useful to me, I would love to explore it.

Can I ask why you care what the keywords are? It seems like you could just tell the model to “describe the challenges/situations in the transcript and recommend necessary solutions.”

Yes, we can, but that misses some important talks in the transcript. Say the members are discussing the disconnection process for orphan servers; by directly prompting for challenges/situations, the model won't pick up that specific conversation.

Whereas by first showing the keywords to the users, the pipeline only processes further if they find the keywords helpful; otherwise they can regenerate new keywords that they might be looking for, and the keywords a specific user didn't like would not be shown in upcoming runs.
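That approve-or-regenerate loop could look like the sketch below. Both `extractKeywords` (the LLM call, which here is assumed to accept a list of keywords to exclude) and `askUser` (the UI prompt) are hypothetical injected callbacks, so the loop itself runs without a real model or UI attached.

```javascript
// Sketch of the keyword approval loop: show keywords, and if the user
// rejects some, record them and regenerate, excluding everything the
// user has disliked so far. `extractKeywords(transcript, excluded)` and
// `askUser(keywords)` are hypothetical async callbacks; askUser resolves
// to { approved: boolean, rejected: string[] }.
async function approveKeywords(transcript, extractKeywords, askUser, disliked = new Set()) {
  for (;;) {
    const keywords = (await extractKeywords(transcript, [...disliked]))
      .filter((k) => !disliked.has(k)); // belt-and-braces local filter
    const { approved, rejected } = await askUser(keywords);
    rejected.forEach((k) => disliked.add(k));
    if (approved) return { keywords, disliked };
  }
}
```

The returned `disliked` set is what you would persist per user, so the next 5-minute interval starts the loop already excluding everything that user rejected before.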