I’m working on integrating an AI assistant into my web app using the OpenAI API. I’d like to know the process for uploading files to enhance the assistant’s knowledge base through the API. How can I manage these files, i.e., delete or update them? Any insights or guidance would be greatly appreciated!
You’ll primarily be working with the file management features of the OpenAI API to upload and delete files. You’ll then use vector stores to create a searchable index of those files, and attach one or more vector stores to your assistant.
1. Uploading Files
- Prepare the File: Ensure your file is in a supported format and contains the data you want your assistant to use.
- Use the OpenAI API to Upload the File:
import OpenAI from "openai";
import fs from "fs";

const openai = new OpenAI({ apiKey: "your-api-key" });

async function uploadFile(filePath) {
  const fileStream = fs.createReadStream(filePath);
  const file = await openai.files.create({
    file: fileStream,
    purpose: 'assistants' // Specify 'assistants' to use this file for an AI assistant
  });
  // files.create resolves to the File object itself, so the ID is file.id
  console.log("Uploaded file ID:", file.id);
  return file;
}

uploadFile("/path/to/your/file.pdf");
2. Managing Files
Once files are uploaded, you can manage them by listing, updating, or deleting as needed to keep the assistant’s knowledge base relevant and up-to-date.
Listing Files:
To view or retrieve the files you’ve uploaded:
async function listFiles() {
  const response = await openai.files.list();
  console.log(response.data);
}
listFiles();
Updating Files:
Currently, OpenAI does not support directly updating an uploaded file. If you need to make changes, you should delete the existing file and upload a new version.
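Since “updating” means delete-then-reupload, you can wrap both steps in one helper. A minimal sketch, assuming an openai client constructed as above is passed in (in the v4 Node SDK, deletion is openai.files.del); the file ID and path are placeholders:

```javascript
import fs from "node:fs";

// Replace an outdated knowledge-base file: delete the old one, upload a new one.
// `openai` is an already-constructed OpenAI client.
async function replaceFile(openai, oldFileId, newFilePath) {
  // If the old file is attached to a vector store, detach it there first.
  await openai.files.del(oldFileId);
  const newFile = await openai.files.create({
    file: fs.createReadStream(newFilePath),
    purpose: "assistants",
  });
  return newFile.id; // use this new ID when re-attaching to the vector store
}
```

Note the new upload gets a fresh file ID, so any vector store referencing the old ID needs the new file added to it.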
3. Using Files with the Assistant
To make these files useful, link them to your AI assistant via the tool_resources parameter when you create or update your assistant.
Creating an Assistant with File Resources:
async function createAssistantWithFiles(vectorStoreId) {
  const assistant = await openai.beta.assistants.create({
    model: "gpt-4-turbo",
    instructions: "You are a knowledgeable assistant that uses the provided files to answer questions.",
    tools: [{ type: "file_search" }],
    tool_resources: {
      file_search: {
        vector_store_ids: [vectorStoreId] // Attach the vector store containing your files
      }
    }
  });
  console.log("Assistant ID:", assistant.id);
}
createAssistantWithFiles("your-vector-store-id");
4. Vector Stores
Since files must be organized into vector stores to be used by an assistant, you need to manage these as part of your file handling.
Creating a Vector Store:
async function createVectorStore() {
  const vectorStore = await openai.beta.vectorStores.create({
    name: "My Knowledge Base",
    // Vector stores take a name and optional metadata, not a top-level description
    metadata: { description: "A store of documents for the assistant to use." }
  });
  console.log("Vector Store ID:", vectorStore.id);
  return vectorStore.id;
}
Adding Files to Vector Store:
async function addFilesToVectorStore(vectorStoreId, fileIds) {
  await openai.beta.vectorStores.fileBatches.createAndPoll(vectorStoreId, {
    file_ids: fileIds
  });
  console.log("Files added to Vector Store:", vectorStoreId);
}
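Putting the pieces together, the full flow — upload, index, attach — looks roughly like this sketch. The model name and store name are placeholders, and the client is passed in rather than constructed here:

```javascript
import fs from "node:fs";

// End-to-end sketch: upload files, index them in a vector store,
// and attach that store to a new assistant. `openai` is a constructed client.
async function buildKnowledgeBase(openai, filePaths) {
  // 1. Upload each file with the 'assistants' purpose
  const fileIds = [];
  for (const path of filePaths) {
    const file = await openai.files.create({
      file: fs.createReadStream(path),
      purpose: "assistants",
    });
    fileIds.push(file.id);
  }

  // 2. Create a vector store and index the uploaded files
  const store = await openai.beta.vectorStores.create({ name: "My Knowledge Base" });
  await openai.beta.vectorStores.fileBatches.createAndPoll(store.id, {
    file_ids: fileIds,
  });

  // 3. Attach the store to a new assistant via tool_resources
  const assistant = await openai.beta.assistants.create({
    model: "gpt-4-turbo",
    instructions: "Answer using the provided files.",
    tools: [{ type: "file_search" }],
    tool_resources: { file_search: { vector_store_ids: [store.id] } },
  });
  return assistant.id;
}
```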
Remember to manage the lifecycle of these files properly, updating or deleting them as your application requires, to avoid excessive storage charges if you expect a lot of data to be uploaded.
Happy building!
Hi Jorge.
What would be a good approach, or best practices, for preparing the files that will serve as the knowledge base for a customer support or help desk bot?
I mean, simple plain txt (I’ve read here that it’s not so good) versus Markdown, which is preferred for unstructured info. But Markdown has those formatting characters (for example, the ‘#’ for titles); could they cause noise when embedding into the vector store?
Also, would it be better to have one single big file, or several files each dedicated to a theme/topic?
Is it necessary to tell the assistant to look into the vector store or the provided files? Could I reference a specific file by name in the prompt?
Welcome to the deep end.
So… you could use a vector store directly in your code. But as you pointed out, you’re going to get noise and lots of irrelevant boilerplate. (500 copies of #include <stdio.h> ?)
So, maybe ask GPT-3.5, as a subordinate model, to summarize your content and then embed that summary? You can still point back to the original content to load into the prompt context for the executive model to deal with. That way your RAG results will be more insightful.
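That summarize-then-embed idea might be sketched like this — the model names, prompt, and return shape are all assumptions about one possible implementation, with the client passed in:

```javascript
// Sketch of "summarize, then embed the summary": a cheap model condenses each
// document before embedding, so retrieval matches meaning rather than boilerplate.
// `openai` is an already-constructed OpenAI client.
async function embedSummary(openai, rawDocument) {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "Summarize this document, keeping key facts and terms." },
      { role: "user", content: rawDocument },
    ],
  });
  const summary = completion.choices[0].message.content;

  const embedded = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: summary,
  });
  // Keep a pointer back to the original, so the executive model can load the
  // full text into context after retrieval hits the summary's vector.
  return { summary, vector: embedded.data[0].embedding, original: rawDocument };
}
```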
@cshamis shows a good idea. “Agentic RAG” is really the best path when the reliability of the information is key.
Now, with the Assistants API there are limits to what you can do. What we do is list everything in the indexes the assistant has access to, and give that list to a GPT-3.5 model to select, for the user query, which documents to use and which queries to run against those docs (query expansion).
Then we send the result (extracting only the relevant bits) to the Assistants API. It still sometimes uses docs we don’t want, but this conditions it to use the right documents more often and to find the information better. This layer of document review and query expansion is some of what you can do even when dealing with the black box that is the Assistants API.
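The document-selection and query-expansion layer described here might be sketched like this — the prompt, model name, and JSON response shape are assumptions about one possible implementation, not the poster’s actual code:

```javascript
// Sketch: before hitting the Assistants API, ask a cheap model which documents
// are relevant to the user query and what expanded queries to search them with.
// `openai` is an already-constructed OpenAI client.
async function planRetrieval(openai, userQuery, docList) {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content:
          "Given the user query and this list of available documents, reply as JSON " +
          '{"documents": ["..."], "queries": ["..."]}.\n' +
          "Documents:\n" + docList.map((d) => "- " + d).join("\n"),
      },
      { role: "user", content: userQuery },
    ],
  });
  const plan = JSON.parse(completion.choices[0].message.content);
  // plan.documents → which files to search; plan.queries → expanded search terms
  return plan;
}
```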
Hope that helps, and happy building!
Please tell me: I have to send a large amount of text file data (about 1,000 pages of text) to the OpenAI LLM as input. Will the assistant code above help with this?