Help with our chatbot project for school. We are making an AI advisor for CS students

Hi guys,

My group is researching and building a chatbot for a school project, and I would love your input. Here are the details, along with the issues we have faced so far:


  • It will use GPT-3.5 Turbo.
  • It is a chatbot for the Computer Science department at my school.
  • The chatbot acts as an advisor for students.
  • It should be able to answer students’ questions regarding CS courses.
  • A dictionary of CS courses will be provided: CourseID, Title, Desc, Prereq, Units.
  • We will not implement “schedule” features until we get this basic idea working first.
  • The model is instructed to reply only with information from the supplied list of courses.
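For concreteness, the course "dictionary" described above might look like the sketch below. Every course ID, title, description, and prerequisite here is an invented example, not the real catalog:

```python
# Hypothetical sample of the course dictionary: CourseID -> Title, Desc,
# Prereq, Units. All entries are invented for illustration.
COURSES = {
    "CS 3500": {
        "Title": "Software Practice",
        "Desc": "Team-based development of a large software project.",
        "Prereq": ["CS 2420"],
        "Units": 4,
    },
    "CS 4000": {
        "Title": "Senior Capstone Design",
        "Desc": "Plan and begin the senior capstone project.",
        "Prereq": ["CS 3900"],
        "Units": 3,
    },
}
```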

Issues we have faced, and solutions we have attempted so far:

  • Our course catalog for the CS department far exceeds the 4,096-token context limit.
  • It doesn’t always get the right courses. If I ask “What classes do I need to take to get into CS 4000 if I’m in CS 3500 right now?”, it will also include classes that are not in the prerequisite chain. We kept revising the prompts to improve it, and we got it to return only the classes going down a chain of prereqs with the following instructions:


  1. Only reply with courses from the provided Course List.
  2. If a course or course information is not found in the Course List, output an empty Python list only.
  3. If you are asked to recommend classes, base your answer only on the Prerequisite(s) field and nothing else.
  4. You are required to cycle down a chain of Prerequisite(s) to get all the classes for the user.
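Assembled into an actual system prompt, those rules might look like the sketch below. The rules text is from this post; the prompt-builder function and the JSON layout of the course list are my own assumptions:

```python
import json

# The four instructions from the post, lightly reworded, as the rules block.
RULES = """\
1. Only reply with courses from the provided Course List.
2. If a course or course information is not found in the Course List, \
output an empty Python list only.
3. If you recommend classes, base your answer only on the Prerequisite(s) \
field and nothing else.
4. Cycle down the chain of Prerequisite(s) to get all the classes for the user."""

def build_system_prompt(course_list):
    # The model sees the rules followed by the course data serialized as JSON.
    return RULES + "\n\nCourse List:\n" + json.dumps(course_list, indent=2)
```

The resulting string would be sent as the system message of the chat-completion request.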

Chaining prompt attempt:

  • We tried chaining prompts. We split the course catalog into two datasets; one contains just the CourseID and Prereq fields. We supply that stripped-down dataset to the model in the initial prompt.
  • We restricted the first prompt to scan for courses mentioned in the user’s message and to complete with just a list of the courses found. Then we go into the full list, grab the detailed information for each mentioned course, and feed it back to the model to respond to the user.
  • If a student asks a question like “What classes do I need to take to get into CS 4000 if I’m in CS 3500 right now?”, the model can read down a chain of prereqs and complete with a list (CS 4000, CS 3900, CS 3850, CS 3500). We then pull detailed information for those four classes and return it to the model to produce a final response for the user.
  • This method works well if the user’s message mentions a course. It breaks miserably if the user’s message is “I’m interested in cyber security. What classes are related to that?”
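For what it’s worth, the “cycle down a chain of prereqs” step doesn’t have to be done by the model at all: once the first prompt has extracted the course IDs, ordinary code can walk the stripped-down CourseID → Prereq dataset deterministically, so the model can never invent a class outside the chain. A sketch with invented course data:

```python
from collections import deque

# Hypothetical stripped-down CourseID -> Prereq dataset (invented entries).
PREREQS = {
    "CS 4000": ["CS 3900"],
    "CS 3900": ["CS 3850"],
    "CS 3850": ["CS 3500"],
    "CS 3500": ["CS 2420"],
    "CS 2420": [],
}

def prereq_chain(target, completed):
    """Return the courses still needed to reach `target`, skipping any
    the student has already completed (or is taking now)."""
    needed, queue, seen = [], deque([target]), set(completed)
    while queue:
        course = queue.popleft()
        if course in seen:
            continue
        seen.add(course)
        needed.append(course)
        # Follow the prereq chain one level deeper.
        queue.extend(PREREQS.get(course, []))
    return needed
```

With this data, `prereq_chain("CS 4000", {"CS 3500"})` walks CS 4000 → CS 3900 → CS 3850 and stops when it reaches the course the student is already in.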

Embedding attempt:

  • So now we’re trying embeddings. We used text-embedding-ada-002 to convert the entire course catalog into vectors. Each course is one vector, and we store everything in Pinecone.
  • If the student’s message is “I’m interested in cyber security. What classes are related to that?”, Pinecone returns the correct courses related to cyber security. It’s very sweet.
  • But now questions like “What classes do I need to take to get into CS 4000 if I’m in CS 3500 right now?” go nowhere.
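Under the hood, that Pinecone query is nearest-neighbour search over the ada-002 vectors. Here is a toy version with made-up 3-d vectors and invented courses (real ada-002 embeddings are 1,536-dimensional), just to show why “cyber security” ranks the security courses first:

```python
import math

# Toy stand-ins for text-embedding-ada-002 vectors. Both the course names
# and the numbers are invented for illustration.
COURSE_VECTORS = {
    "CS 4600 Network Security":  [0.9, 0.1, 0.0],
    "CS 4610 Cryptography":      [0.8, 0.2, 0.1],
    "CS 3100 Computer Graphics": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, k=2):
    # Rank every course by similarity to the embedded user message.
    ranked = sorted(COURSE_VECTORS,
                    key=lambda c: cosine(query_vec, COURSE_VECTORS[c]),
                    reverse=True)
    return ranked[:k]
```

A “cyber security” message embeds close to the first axis here, so the two security courses come back first; a prereq question embeds close to nothing in particular, which is why this approach alone goes nowhere for it.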

So, our group is thinking… maybe we need to use both a vector database and a relational database for this application?

For the first prompt completion, we would have the model determine whether the user’s message is about a topic or interest, in which case we would use the vector database, or about a specific course, in which case we would pull the information from a relational database. Do you guys think that would work? What is your recommendation and advice for us?
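That routing decision can even be prototyped without a model call. The sketch below uses a regex as a stand-in for the classifier (illustrative only; in the real design the first chat completion would make this decision):

```python
import re

# Cheap first-pass router: a regex stand-in for the classifier prompt.
COURSE_ID = re.compile(r"\bCS ?\d{4}\b")

def route(message):
    ids = COURSE_ID.findall(message)
    if ids:
        # Specific courses mentioned: look them up in the relational store.
        return ("relational", ids)
    # No course IDs: treat the message as a topic/interest for vector search.
    return ("vector", message)
```

The first branch feeds the SQL/dictionary lookup (and a prereq-chain walk), the second feeds the Pinecone similarity query.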

Thank you for your time!!!

If I were given this as a commercial project, I think I’d first have the AI create a number of embedding queries to run (the AI could choose how many), and tell it to craft them so that those retrievals will best answer the user’s question when used as context.

Then I would include that context along with the user’s original question and inform the model that the query now contains the required context and that it should use it to answer the question. I would also use GPT-4; I realise there may be budget constraints, but this seems like a task that requires the best performance and accuracy (also the 8k context).
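That two-step flow, sketched with the model and the vector store passed in as plain callables (every function name and prompt string here is mine, invented for illustration):

```python
def answer_with_retrieval(question, llm, search):
    """llm(prompt) -> completion text; search(query) -> list of course snippets."""
    # Step 1: ask the model to propose retrieval queries, one per line.
    queries = llm(
        "Write one short search query per line whose results would best "
        "answer this question:\n" + question
    ).splitlines()
    # Step 2: run every query against the vector store and pool the hits.
    context = []
    for q in queries:
        context.extend(search(q))
    # Step 3: answer the original question using only the retrieved context.
    return llm(
        "Context:\n" + "\n".join(context) +
        "\n\nThe query above contains the required context. Use it to "
        "answer: " + question
    )
```

`llm` would wrap the GPT-4 chat-completion call and `search` the Pinecone query; keeping them as parameters makes the orchestration easy to test with stubs.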

Here is an example: Chaining prompt attempt CyberSecurity

Thank you for your response!

Do you mean like:

  • Supply the model a list of courses and their prerequisites.
  • Instruct the model to complete with either a list of courses or a topic/interest.
  • If the response is a list of CourseIDs, pull those courses from Pinecone by ID.
  • If the response is a topic/interest, pull courses from Pinecone by vector similarity.
  • Supply the retrieved courses to the model as reference content for a final response to the student.

something like that?

OK, so… if the user asks “What do I need to move from Level A to Level B?”, I would create a function named something like “List_of_vector_embedding_queries” that returns a set of strings containing what the model thinks would be the best text to match against your vector database to get the ideal context. For this example I’d expect GPT to create two strings, something like “What are the requirements for Level A” and “What are the requirements for Level B”. I’d then run both of those as vector retrievals, and use the results as context in a new GPT query that lists that context, asks the user’s question again, and says to use the new context to answer it.
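If you are on a model that supports it, that helper could be declared via OpenAI function calling so the completion comes back as structured arguments rather than free text. A sketch of the schema (only the function name comes from the post above; the description and parameter names are my own):

```python
# Sketch of an OpenAI function-calling declaration for the helper described
# above. The schema follows OpenAI's JSON-Schema parameter format.
list_queries_fn = {
    "name": "List_of_vector_embedding_queries",
    "description": "Return the search strings most likely to retrieve the "
                   "context needed to answer the student's question.",
    "parameters": {
        "type": "object",
        "properties": {
            "queries": {
                "type": "array",
                "items": {"type": "string"},
                "description": "One embedding query per needed retrieval, "
                               "e.g. 'What are the requirements for Level A'.",
            },
        },
        "required": ["queries"],
    },
}
```

Each string in the returned `queries` array then becomes one vector retrieval against Pinecone.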
