Model Sliding: A Logical Approach to AI Model Selection

Model Sliding is a proof-of-concept application that logically transitions between different AI models based on the nature of the prompt and the task at hand. This dynamic model selection strategy optimizes AI performance by matching each task with the model best suited to handle it: GPT-3.5-Turbo or GPT-4.

The application uses a dictionary of keywords associated with each task type to determine the most suitable model. If the prompt doesn’t match any of the keywords, the application defaults to a predetermined model.

The Model Sliding method offers an interactive and user-friendly interface. Users can interact with different models by simply typing their prompts or tasks. The assistant avatar and model information dynamically update in response to the chosen model, providing real-time feedback and enhancing user experience.

You can also include your own fine-tuned model; the selection logic is built in to accommodate it and refine responses.

Meta Model Sliding: A Proof of Concept for Dynamic Model Selection

In an era where artificial intelligence is rapidly advancing, there is an abundance of AI models, each with its unique strengths and specializations. Leveraging these strengths for specific tasks can significantly enhance performance. However, the challenge lies in switching seamlessly between these models, ensuring that each task is handled by the most competent model.

To overcome this challenge, we introduce the concept of “Model Sliding”. This proof-of-concept application demonstrates a novel approach to logical model selection, capitalizing on the strengths of each model for optimal responses and efficient token usage.


Here are the roles the Assistant will use, along with example responses for each:

Meta Model Sliding Assistant Examples


The Logical Challenge

  • Is there a more cost-efficient and logical way to use different GPT models for various tasks? And can the switching be seamless?

Model Sliding: An Overview

Model Sliding allows for the transition between different AI models based on the task at hand. The underlying principle of this approach is to maximize the potential of each model for specific tasks.

  • GPT-3.5-Turbo could be favored for generating Python code.
  • GPT-4 might be the go-to for crafting engaging social media content.

The Meta Model uses a set of keywords to determine which model is best suited for the task. The keywords are associated with specific models and tasks, enabling the Meta Model to “slide” between models as per the requirements of the task.
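
As a rough illustration, the selection logic can be as simple as a keyword-to-model dictionary with a default fallback. The sketch below is minimal, and the keyword lists and routing are illustrative assumptions, not the exact mapping used in the Model-Sliding repository.

```python
# Minimal sketch of keyword-based model selection.
# The keywords and routing here are illustrative, not the repository's exact mapping.

MODEL_KEYWORDS = {
    "gpt-3.5-turbo": ["code", "python", "function", "script"],
    "gpt-4": ["story", "essay", "social media", "tweet"],
}

DEFAULT_MODEL = "gpt-3.5-turbo"  # used when no keyword matches


def select_model(prompt: str) -> str:
    """Return the model whose keywords appear in the prompt, else the default."""
    prompt_lower = prompt.lower()
    for model, keywords in MODEL_KEYWORDS.items():
        if any(keyword in prompt_lower for keyword in keywords):
            return model
    return DEFAULT_MODEL


print(select_model("Write a Python function to reverse a list"))  # gpt-3.5-turbo
print(select_model("Draft an engaging tweet about our launch"))   # gpt-4
```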

Visualizing Model Sliding

To better understand the concept of Model Sliding, we have visualized it as a network of nodes, where each node represents a model, task, role, or keyword. The edges connecting these nodes signify the relationships between these elements. The various task dimensions are represented by different colors. A neural network of layers can be seen at the core of the network, representing the models.

The network visualizations below show the Model Sliding network in different layouts. The models (GPT-3.5-Turbo and GPT-4) are at the core, connected to various tasks like ‘code’, ‘story’, ‘essay’, and ‘social media’. These tasks are further linked to other keywords and the roles they play.

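For readers who want to reproduce a figure like this, here is a minimal NetworkX sketch; the node and edge lists are illustrative rather than the exact graph behind the original visualizations.

```python
# Minimal sketch of the Model Sliding network (illustrative nodes and edges,
# not the exact graph behind the original figures).
import matplotlib.pyplot as plt
import networkx as nx

G = nx.Graph()

# Models at the core, linked to tasks; tasks linked to a few example keywords.
model_tasks = {"GPT-3.5-Turbo": ["code"], "GPT-4": ["story", "essay", "social media"]}
task_keywords = {"code": ["python", "function"], "social media": ["tweet", "post"]}

for model, tasks in model_tasks.items():
    for task in tasks:
        G.add_edge(model, task)
for task, keywords in task_keywords.items():
    for keyword in keywords:
        G.add_edge(task, keyword)

pos = nx.spring_layout(G, seed=42)  # one of several possible layouts
nx.draw(G, pos, with_labels=True, node_color="lightblue", node_size=1200, font_size=8)
plt.show()
```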

Advantages of Model Sliding

Model Sliding offers several benefits:

  • Task-specific Optimization: By allocating the most appropriate model for each task, Model Sliding enhances performance across a wide array of tasks.
  • Efficient Resource Utilization: Instead of concurrently operating multiple models, Model Sliding leverages one model at a time, ensuring efficient use of resources.
  • Adaptability: The system can be effortlessly updated to incorporate new models and tasks as they emerge.
  • Optimal Token Usage: By dynamically selecting a model based on the task, Model Sliding ensures optimal use of tokens. This is especially advantageous when multitasking or working with large language models (LLMs), as effective token allocation leads to better performance and cost efficiency.

The project can be found here: GitHub - AdieLaine/Model-Sliding: Enables the application to transition seamlessly between different OpenAI models, choosing the one that is most suitable for the task described in the user's prompt.

Nice, I didn’t know that’s what it was called, but I have been using this method in a project I’m working on. I call each model/algorithm a “subsystem” which is responsible for some particular unique task, like voice recognition, language understanding, voice synthesis, etc.

I think I’d be tempted to feed every request into GPT-3.5 and ask for a “complexity” score out of 100; anything over some threshold goes to GPT-4.
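
As a minimal sketch of that idea using the OpenAI Python client (the scoring prompt and the threshold of 60 are arbitrary assumptions, not tested values):

```python
# Minimal sketch: ask GPT-3.5 for a complexity score, then route the request.
# The scoring prompt and the threshold of 60 are arbitrary, untested assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
COMPLEXITY_THRESHOLD = 60


def score_complexity(prompt: str) -> int:
    """Ask GPT-3.5 to rate the prompt's complexity from 0 to 100."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system", "content": "Rate the complexity of the user's request "
                                          "from 0 to 100. Reply with only the number."},
            {"role": "user", "content": prompt},
        ],
    )
    return int(response.choices[0].message.content.strip())


def route(prompt: str) -> str:
    """Answer simple requests with GPT-3.5; escalate anything above the threshold to GPT-4."""
    model = "gpt-4" if score_complexity(prompt) > COMPLEXITY_THRESHOLD else "gpt-3.5-turbo"
    response = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content
```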

That’s a pretty good idea actually. I wonder if there is a way to get a complexity score more reliably. Maybe something like comparing top_k choice embeddings and seeing if they are above a certain spread. Any thoughts on this?

You could certainly one-shot it with two questions of predetermined (experimentally established) complexity levels, each paired with GPT-4 or 3.5 as the expected result… maybe? Or multi-shot it with more examples. I think I’d actually just use a series of questions from, say, various year levels of exams, as they have usually been carefully graded for difficulty, and get an idea of the complexity score returned. You could even have a hysteresis zone where, even if the complexity only justifies 3.5, you select 4 out of caution.
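
Reading that “hysteresis zone” loosely as a caution band below the nominal GPT-4 cutoff, a tiny sketch might look like this (the numbers are made up, not tuned values):

```python
# Illustrative caution band for model routing; the numbers are made up, not tuned values.
GPT4_THRESHOLD = 70   # nominal complexity score that clearly justifies GPT-4
CAUTION_MARGIN = 20   # widen the GPT-4 zone downward, erring on the side of caution


def choose_model(complexity: int) -> str:
    """Route borderline prompts to GPT-4 even when the score only justifies 3.5."""
    if complexity >= GPT4_THRESHOLD - CAUTION_MARGIN:
        return "gpt-4"
    return "gpt-3.5-turbo"
```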

I “think” what this is attempting to do is answer the no-brainer stuff with 3.5 and anything above that with 4.

I always consider answers or results from GPT’s scoring as “subjective”. I know they’re not, because it’s a statistical model. But I feel like it is too much of a black box to trust. Not to mention new versions seem to change the results drastically.

Lately I’ve been trying to create concrete metrics to get definitive answers in my unit tests. I’ve failed to come up with anything useful, so for now I have fallen back on some of my earliest efforts: using yes/no/multiple-choice options for model steps.

Butttt, I’d really like a more metric-like approach that is not “AI subjective”.

Yea, you have hit the nail on the head there: evals, evals, evals, evals. There is a reason you get access to all the toys if you create a great eval for OAI to use; they are literally superpowers.

For example, let’s say you take this proposed list of exam questions with idealised answers and run them against your model daily to get a benchmark for what the “complexity” score is. As you say, it’s a stats model, so now you have a statistically modelled score that is evaluated against a set of known standards. You can even get the model to score its own answers to the evals, knowing what the last set of evals was! It’s evals all the way down.
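
As a very rough sketch of that kind of daily benchmark (the question set, expected scores, and tolerance are invented for illustration, and score_complexity is the hypothetical scorer from the earlier sketch):

```python
# Rough sketch of a daily complexity-score benchmark against a fixed question set.
# The questions, expected scores, and tolerance are invented for illustration;
# score_complexity() is the hypothetical scorer from the earlier sketch.
EVAL_SET = [
    {"question": "What is 7 + 5?", "expected_complexity": 5},
    {"question": "Prove that the square root of 2 is irrational.", "expected_complexity": 80},
]
TOLERANCE = 15  # how far a score may drift before we flag it


def run_daily_benchmark() -> float:
    """Return the fraction of eval questions whose score stays within tolerance."""
    within = 0
    for item in EVAL_SET:
        score = score_complexity(item["question"])
        if abs(score - item["expected_complexity"]) <= TOLERANCE:
            within += 1
        else:
            print(f"Drift on: {item['question']!r} (got {score})")
    return within / len(EVAL_SET)
```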

I’ll be 100% honest, I haven’t really taken a look at evals or how they are even being used. Maybe I should be investing more time in checking those out. Thanks for the summary though. Probably just saved me like 2 months of experiments on things that have already been done xD.
