In an economic context*: every GPT-4 session should have a gpt-3.5-turbo engine available as an assistant/tool.

Personal context*: So… while I've been in IT for a very long time (20+ years: Windows and Linux deployment and repair, and more recently iOS/Android dev/mods, etc.), I've done very little coding. Even in data recovery, very little coding. With that disclosed:

I've only been playing with AI for about 3 months. I went from zero idea how AI worked to understanding the intricate construction, function, strengths, and limitations of LLMs, as well as the existence of other model types. (Too busy to read about this, I suppose. Sorry.)

However, from what I've learned, training a more capable GPT-4 with the context (by the other definition, which is why every time I write "context" I add an *) of GPT-3.5's strengths and limitations (turbo in particular) seems like an ideal pairing for the sake of computational resources. The most predictable and routine functions of whatever is requested could be delegated to its… "assistant" /shrug, which is very competent at summarizing to exact requirements, efficiently.
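To make the pairing concrete, here is a minimal sketch of the kind of routing I mean. Everything in it is a hypothetical illustration: `call_model` is a stub standing in for a real chat-completion API call, and the task categories are just examples of "well-delineated" work a cheap assistant model could take on.

```python
def call_model(model: str, prompt: str) -> str:
    """Stub for an LLM API call; a real version would hit the provider's
    chat-completion endpoint. Here it just tags the reply with the model name."""
    return f"[{model}] response to: {prompt[:40]}"

# Hypothetical list of routine, well-specified subtasks worth delegating.
ROUTINE_TASKS = {"summarize", "extract", "reformat"}

def route(task: str, prompt: str) -> str:
    """Send routine, well-delineated tasks to the cheap assistant model;
    everything open-ended stays with the expensive front model."""
    model = "gpt-3.5-turbo" if task in ROUTINE_TASKS else "gpt-4"
    return call_model(model, prompt)
```

The point of the sketch is only the split itself: the expensive model never spends its compute on work the cheap one can do to spec.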

(Additionally, semi-related: the idea of blacklisting less efficient models unless their usage is explicitly requested through a segregated API tunnel. But that's not THAT important here.)

That said, once again, I am new to this game, and thus I am, as they say… "spitballing"

  • respectfully so.

Hmm… I hope this isn't a controversial topic. It was not intended to be.

Hmm. I've been thinking about this a lot and reading a lot.
I'm still convinced this is a good idea for reducing OpenAI's costs.
It has to do with the nature of inference computation: every token the big model doesn't have to process is compute saved.
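A back-of-envelope illustration of why the delegation matters economically. The per-1K-token rates below are assumptions based on publicly listed prices circa 2023 and will be out of date; the arithmetic, not the exact figures, is the point.

```python
# Illustrative per-1K-token input prices in USD (assumed, circa-2023 list prices;
# check current pricing before relying on these numbers).
PRICE_PER_1K = {"gpt-4": 0.03, "gpt-3.5-turbo": 0.0015}

def cost(model: str, tokens: int) -> float:
    """Cost in USD of feeding `tokens` input tokens to `model`."""
    return tokens / 1000 * PRICE_PER_1K[model]

# Summarizing a 10,000-token document:
gpt4_cost = cost("gpt-4", 10_000)         # 0.30
cheap_cost = cost("gpt-3.5-turbo", 10_000)  # 0.015
# Delegating just this one summarization saves ~95% of the cost of that step.
```

Multiplied across every session that includes a summarize/extract step, that asymmetry is the whole argument.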