How to impart a large, specific instruction set -- prompting, fine-tuning, or other?

Need a little guidance and/or pointers on how to customize my GPT-4-based agent: prompting, fine-tuning, or something else?

The agent is embedded in an app builder (think bubbler, Adalo, xeno, etc.). It guides users through using the builder UI, but it also has extensive (50+) embedded function calls to control the UI and every aspect of the internals. I’ve been noodling with this quite a bit but am still not clear on how best to train or customize the LLM. This is the information I need to impart:

  1. Basic top-level role, approach, etc. This is straightforward and easy to manage within the recommended ~250 lines of system prompt.

  2. Detailed specs of each function call. Large, but again it’s already well defined what I need to do here.

  3. Detailed instructions on how to use those functions to interact with users, when to call which function, how to add/delete code snippets, and much nuance <== this is easily 100x the recommended 250 lines of prompt.

  4. Refinement of interaction based on captured assistant chats; here it seems clear I should use fine-tuning (but correct me if I’m wrong).
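For item (2), this is roughly the shape each function spec takes in the OpenAI tools format. The function name `add_ui_component` and its parameters are hypothetical examples for illustration, not functions from my actual builder:

```python
# A minimal sketch of one function-call spec in the OpenAI tools format.
# "add_ui_component" and its parameters are hypothetical placeholders.
add_component_tool = {
    "type": "function",
    "function": {
        "name": "add_ui_component",
        "description": "Add a UI component to the current app page.",
        "parameters": {
            "type": "object",
            "properties": {
                "page_id": {"type": "string", "description": "Target page"},
                "component": {
                    "type": "string",
                    "enum": ["button", "text", "image", "list"],
                },
                "label": {"type": "string", "description": "Visible label"},
            },
            "required": ["page_id", "component"],
        },
    },
}
```

Multiply that by 50+ functions and you can see why (3), the usage instructions around them, blows past any prompt budget.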

So the question is about (3): how do I impart/train this body of specific instructions that is way, way larger than the recommended prompt size (and prompting doesn’t feel like the right approach or make $ sense)? When I read the fine-tuning docs, they seem to caveat that I should exhaust prompting first, so I’m not sure.

Thanks for any input; the question is as short as I can make it.

-J

I don’t think fine-tuning is a solution for something that is about complex workflows, i.e., how to respond to detailed sections of work.
Fine-tuning is really (IMO) about input-output tuning.

Have you tried adding the different ‘manuals’ you have as attachments, so that they can be used to trigger specific use cases (and thus function calls)?
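As a sketch of what that could look like with the Assistants API’s `file_search` tool: the manuals go into a vector store, and the Assistant retrieves from them at run time. The IDs below are placeholders, and this is just the payload; in a real setup you would pass it to `client.beta.assistants.create(...)`:

```python
# Sketch: an Assistant whose "manuals" live in a vector store and are
# retrieved via the file_search tool. IDs are placeholders.
assistant_payload = {
    "model": "gpt-4-turbo",
    "name": "builder-guide",
    "instructions": (
        "Consult the attached manuals before choosing a function call. "
        "Each manual describes one use case and the functions it requires."
    ),
    "tools": [{"type": "file_search"}],
    "tool_resources": {
        "file_search": {"vector_store_ids": ["vs_PLACEHOLDER"]},
    },
}
```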

Another path could possibly be ‘nested assistants’: for example, I have an Assistant with several function calls that are themselves assistants.

And lastly, assuming you use thread/run as your foundation, you can also use different subsequent Assistants on a thread. Which Assistant to use next on a particular thread could be determined by an Assistant as well 🙂


Yeah, “nested assistants” or “experts” under a “manager/task router” is how I would approach this.

Yes, I’ve encountered those ideas, but thanks for calling them out as potentially better options.

Yes, different manuals: that feels right. So in the prompt I’d point the LLM at instruction docs I’ve previously uploaded; is that how I’d reference them?

Yes, I use the assistant/thread/run APIs, and DALL·E as well.

Thanks a lot for the reply. I’ll google around those things.
-J


Yes, I think that is worth trying. In the RAG version you’d go the ‘opposite’ way from being token-conservative: you want to be super precise and discuss all the different scenarios in detail, so that the RAG search can easily surface the path you envision. You might also want to be smart about defining many different Assistants, each with a distinct subset of your documents (and functions) attached to them.
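Concretely, that partitioning might look like a mapping from each expert Assistant to its own docs and function subset. The expert names, file names, and function names below are hypothetical:

```python
# Sketch: several Assistants, each with a distinct subset of documents and
# functions attached. All names here are hypothetical placeholders.
ASSISTANT_SUBSETS = {
    "layout_expert": {
        "docs": ["layout_manual.md"],
        "functions": ["add_ui_component", "move_component"],
    },
    "data_expert": {
        "docs": ["data_model_manual.md"],
        "functions": ["create_table", "add_field"],
    },
}
```

Keeping the subsets disjoint keeps each Assistant’s retrieval and tool space small and focused.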

Even after working with runs/threads for more than a year, the realisation a few months ago that I could actually start a new run WITH A DIFFERENT ASSISTANT every time I please was an eye-opener.
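A sketch of what that looks like: successive runs on one thread, each with a different assistant. The IDs are placeholders, and each payload would go to `client.beta.threads.runs.create(thread_id=..., assistant_id=...)`:

```python
# Sketch: successive runs on ONE thread, each with a DIFFERENT assistant.
# Thread and assistant IDs are hypothetical placeholders.
thread_id = "thread_PLACEHOLDER"
run_sequence = [
    {"thread_id": thread_id, "assistant_id": "asst_intake_PLACEHOLDER"},
    {"thread_id": thread_id, "assistant_id": "asst_builder_PLACEHOLDER"},
    {"thread_id": thread_id, "assistant_id": "asst_reviewer_PLACEHOLDER"},
]
# Every run shares the thread's message history, so each assistant
# sees everything that came before it.
```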

Feel free to share more - always fun to think along and share experiences!
JL

Yeah this seems like the way. ty!
