NL to SQL, very large database: new tools?


I am building an NL to SQL bot for my company that should be able to query our database. The problem is that the database is huge: the section I have to query spans over 100 tables.

With the old gpt-3.5 context window, I managed to get a working version by working in layers: I first have the bot select the relevant tables from a list of table names, and then provide the full schema in context for only the selected tables. Without fine-tuning, I got decent accuracy that seemed promising for fine-tuning.
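For context, the layered approach can be sketched roughly like this (table names and DDL here are hypothetical, and the actual model calls are stubbed out; in practice each prompt would go through `client.chat.completions.create`):

```python
# Hypothetical schema store: table name -> DDL string.
SCHEMAS = {
    "orders":    "CREATE TABLE orders (id INT, customer_id INT, total DECIMAL)",
    "customers": "CREATE TABLE customers (id INT, name TEXT)",
    "invoices":  "CREATE TABLE invoices (id INT, order_id INT)",
}

def table_selection_prompt(question: str) -> str:
    # Layer 1: only table names go into the context; the model picks
    # which tables are relevant to the question.
    names = ", ".join(sorted(SCHEMAS))
    return (
        f"Available tables: {names}\n"
        f"List only the tables needed to answer: {question}"
    )

def sql_generation_prompt(question: str, selected: list[str]) -> str:
    # Layer 2: full DDL is included only for the tables selected in layer 1,
    # keeping the context small even with 100+ tables overall.
    ddl = "\n".join(SCHEMAS[t] for t in selected)
    return f"{ddl}\n\nWrite a SQL query that answers: {question}"

question = "What is the total spent by each customer?"
print(table_selection_prompt(question))
print(sql_generation_prompt(question, ["orders", "customers"]))
```

The token savings come from layer 2 never seeing the DDL of unselected tables.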

But then the release of gpt-4-turbo happened, together with the release of GPTs and similar tools. This is quite exciting, but also very confusing. I am wondering how to leverage those new tools to improve my bot (either better accuracy or fewer tokens used; both would be improvements).

I’ve seen that GPTs and assistants are used as an ‘alternative’ to fine-tuning a model to a desired behaviour. In particular, one can pass documents to them, and they will draw their information from those documents.

Naturally, my first instinct was to ask myself: “Should I now pass the database schema as a file to an assistant/GPT?” But then I did some searching, and it looks like files are put into the context anyway, so I don’t know whether that would help.

Other threads suggested vectorizing the db schema and giving it to OpenAI as embeddings. Apart from the fact that I don’t know how to do that (but I can research it), I was under the impression that embedding-based retrieval allows more “freeform” interpretation by the AI. For example, I doubt the bot would get table and field names exactly right, and it would be a headache to correct it. Am I right?
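To be clear about what I understand by “vectorizing the schema”: each table description gets an embedding vector (e.g. from an embeddings endpoint such as text-embedding-3-small), and at query time the question’s embedding is compared against them to retrieve the most relevant tables. A minimal sketch, using toy 3-d vectors in place of real embeddings:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# One vector per table description, precomputed offline.
# (Toy values; real embeddings have hundreds of dimensions.)
table_vectors = {
    "orders":    [0.9, 0.1, 0.0],
    "customers": [0.8, 0.3, 0.1],
    "inventory": [0.0, 0.2, 0.9],
}

def top_tables(question_vector: list[float], k: int = 2) -> list[str]:
    # Rank tables by similarity to the question and keep the top k;
    # only these tables' schemas would then go into the prompt.
    ranked = sorted(table_vectors,
                    key=lambda t: cosine(question_vector, table_vectors[t]),
                    reverse=True)
    return ranked[:k]

print(top_tables([0.85, 0.2, 0.05]))  # → ['orders', 'customers']
```

Note that retrieval only selects which schemas to show the model; the exact table and field names still come from the DDL you put in the prompt, which is why getting the retrieval step right matters.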

I’d like to know if you have already applied the new tools to an NL to SQL problem, and with what results.