I started from scratch and re-wrote my Assistant runner without any of the Django/Celery requirements. Full project repo, ready to deploy on AWS (Elastic Beanstalk), with a demo.py containing several examples and a tutorial (no paywall) on Medium.
It's a server application meant to run many Assistants concurrently.
Curious to hear feedback!
And what a funny coincidence that it all happened on my one-year anniversary on the forum.
Hi, as the only contributor to your previous Django/Celery version of this, are you abandoning that?
You said it's for running a lot of assistants, but the Medium article mentions a config file with a single system prompt. Don't different assistants need their own system prompt (or "instructions") specific to that assistant?
I'm curious what the advantages of this are compared to your previous Django/Celery solution (ignoring the streaming response for now; I get why that's an improvement).
What is replacing Celery, I guess, is what I'm asking. How are multiple "workers" managed now?
Looks promising though … I might get around to trying it soon.
Richard
Not discarding that one, since it is currently in production. This one is more lightweight. Since it is fully async, that is what is "replacing" Celery. The "queue" is simply the Python threads.
It's essentially the same in terms of functionality, but a much easier "from scratch" process.
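For anyone wondering what "fully async, no Celery" looks like in practice, here is a minimal sketch using only asyncio. The run_assistant coroutine and the assistant names are hypothetical stand-ins for real Assistants API calls, not the actual project code; the point is just that each request becomes a task on the event loop rather than a job on a Celery queue.

```python
import asyncio


async def run_assistant(name: str, prompt: str) -> str:
    # Hypothetical helper, not the author's code: in a real runner this
    # would create a thread, start a run, and poll/stream until complete.
    await asyncio.sleep(1)  # stand-in for network I/O
    return f"[{name}] reply to: {prompt}"


async def main() -> None:
    # Each request is an asyncio task; the event loop interleaves them
    # while they wait on I/O, so no broker or worker processes are needed.
    prompts = {
        "support-bot": "How do I reset my password?",
        "sales-bot": "What plans do you offer?",
        "docs-bot": "Summarize the changelog.",
    }
    tasks = [asyncio.create_task(run_assistant(n, p)) for n, p in prompts.items()]
    for reply in await asyncio.gather(*tasks):
        print(reply)


if __name__ == "__main__":
    asyncio.run(main())
```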
I see, that makes sense. Thanks for clarifying!
I think you might find https://github.com/SylphAI-Inc/AdalFlow of interest.
It was called LightRAG before.
https://adalflow.sylph.ai/tutorials/lightrag_design_philosophy.html
(I only discovered it yesterday)