LLM feedback to the user who submitted the request - Best practices

I’m currently working on a text-to-SQL project that combines metadata and LLMs to generate SQL queries. My question is about how to communicate the generated SQL back to the user. During development I’m working in VS Code, but in production the users will be the ones submitting the text. While I’m familiar with the LLM side, I’m unsure how to relay the SQL query back to the user effectively. As I’m new to frontend development, I would really appreciate any suggestions or guidance on how to do this. Thank you.

Topic I am working on: Text-to-SQL fine-tuning and metadata

Welcome to the community!

I would say it depends on what you’re familiar with. If you’re working in Python, and especially if you use LangChain, I think a common choice is simply using Streamlit (GitHub - streamlit/streamlit: Streamlit — A faster way to build and share data apps.) to set up a quick UI.

Here’s a tutorial I found on Google (Build a basic LLM chat app - Streamlit Docs).

I personally wouldn’t recommend either Streamlit or LangChain in the long run, but they can be a quick solution in a pinch (see the sketch below).
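
To make the idea concrete, here’s a minimal sketch of what that could look like: a Streamlit chat page that takes the user’s question and shows the generated SQL back to them. The `generate_sql` function here is a stub standing in for whatever text-to-SQL pipeline you already have, so swap it out for your own call.

```python
# Minimal Streamlit sketch for returning generated SQL to the user.
# Run with: streamlit run app.py
import streamlit as st


def generate_sql(question: str) -> str:
    # Placeholder: replace with your actual LLM + metadata pipeline.
    return "SELECT * FROM orders WHERE created_at >= DATE '2024-01-01';"


st.title("Text-to-SQL assistant")

question = st.chat_input("Describe the data you need")
if question:
    with st.chat_message("user"):
        st.write(question)

    sql = generate_sql(question)
    with st.chat_message("assistant"):
        # st.code renders the SQL with syntax highlighting so the user can copy it.
        st.code(sql, language="sql")
```

That’s roughly the whole app; Streamlit reruns the script on each interaction, so for a real chat you’d keep the message history in `st.session_state` as the linked tutorial shows.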

Is it possible to fine-tune the SQL generation using a similar approach? Also, if my users are working in Toad, how can I send the optimized SQL back to them? Is there a way to integrate this process with Toad? Any guidance would be greatly appreciated.