So I’m starting to think about a whole new semi-SQL-like query language that LLMs can generate to surface information, and thought I’d share a single inspirational screenshot of gpt-3.5-turbo output:
The core idea is to use semantic search to surface a list of human-authored query fragments and then let the model combine them into a novel query that answers the user’s question. In this case I didn’t tell turbo how to compose the fragments, so you’d probably want to make that less random and provide some examples of composition, but you get the idea…
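Here’s a minimal sketch of the loop I have in mind, in Python against the OpenAI API. The fragment library, prompt wording, and embedding model here are placeholders I made up for illustration, not what produced the screenshot:

```python
# Sketch: embed human-authored query fragments, retrieve the ones closest
# to the user's question, then ask gpt-3.5-turbo to compose them into a
# single query. All fragment text and prompt wording below is hypothetical.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Hypothetical fragment library; in practice these would be maintained by
# people who know the schema.
FRAGMENTS = [
    "SELECT order_id, total FROM orders",
    "WHERE created_at >= DATE('now', '-30 days')",
    "JOIN customers ON customers.id = orders.customer_id",
    "GROUP BY customers.region",
    "ORDER BY total DESC LIMIT 10",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def top_fragments(question, k=3):
    # Rank fragments by cosine similarity to the question.
    frag_vecs = embed(FRAGMENTS)
    q_vec = embed([question])[0]
    sims = frag_vecs @ q_vec / (
        np.linalg.norm(frag_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    return [FRAGMENTS[i] for i in np.argsort(sims)[::-1][:k]]

def compose_query(question):
    fragments = top_fragments(question)
    prompt = (
        "Combine the query fragments below into one query that answers "
        f"this question: {question}\n\nFragments:\n" + "\n".join(fragments)
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(compose_query("Which regions had the biggest orders last month?"))
```

In a real setup you’d pre-compute the fragment embeddings once and add a few worked composition examples to the prompt, which is the “less random” part mentioned above.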
Seems interesting…