I am looking at trying to use the retrieval plugin as model memory in a gpt4-based custom application. Is there a current flow for doing something like that, or has anyone tried something similar?
Thanks!
Plugins aren't available through the API, only ChatGPT. You can look at projects like LangChain that accomplish similar things through the API.
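Since the retrieval plugin is just a self-hosted REST service, another option is to skip the plugin mechanism entirely and call it directly as a memory layer for your app. Here is a minimal sketch, assuming you are running the open-source chatgpt-retrieval-plugin locally on port 8000 with bearer-token auth; the request shapes follow that repo's /upsert and /query endpoints, while the helper names, URL, and env vars are placeholders for illustration:

// Assumed: chatgpt-retrieval-plugin running locally; adjust URL and tokens to your setup.
const PLUGIN_URL = "http://localhost:8000";
const PLUGIN_TOKEN = process.env.PLUGIN_BEARER_TOKEN!;
const OPENAI_KEY = process.env.OPENAI_API_KEY!;

const pluginHeaders = {
  "Content-Type": "application/json",
  Authorization: `Bearer ${PLUGIN_TOKEN}`,
};

// Persist a piece of conversation into the plugin's vector store.
async function remember(text: string): Promise<void> {
  await fetch(`${PLUGIN_URL}/upsert`, {
    method: "POST",
    headers: pluginHeaders,
    body: JSON.stringify({ documents: [{ text }] }),
  });
}

// Fetch the top-k stored chunks most relevant to the new message.
async function recall(query: string, topK = 3): Promise<string[]> {
  const res = await fetch(`${PLUGIN_URL}/query`, {
    method: "POST",
    headers: pluginHeaders,
    body: JSON.stringify({ queries: [{ query, top_k: topK }] }),
  });
  const data = await res.json();
  return data.results[0].results.map((r: { text: string }) => r.text);
}

// Answer with gpt-4, injecting recalled memories as system context,
// then store the new turn so it can be recalled later.
async function chatWithMemory(userMessage: string): Promise<string> {
  const memories = await recall(userMessage);
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [
        { role: "system", content: `Relevant memories:\n${memories.join("\n")}` },
        { role: "user", content: userMessage },
      ],
    }),
  });
  const data = await res.json();
  const answer: string = data.choices[0].message.content;
  await remember(`User: ${userMessage}\nAssistant: ${answer}`);
  return answer;
}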
Is there any plan to add this as an API call parameter, e.g. model name gpt-3.5-turbo, use_plugin=true, plugins_enabled=plugin1, plugin2, etc.?
Can anyone share the roadmap?
This is all we have:
You can use LangChain to create an agent and add your plugin. Here is an example:
import { ChatOpenAI } from "langchain/chat_models/openai";
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import {
  RequestsGetTool,
  RequestsPostTool,
  AIPluginTool,
} from "langchain/tools";

export const run = async () => {
  const tools = [
    // Generic HTTP tools so the agent can call the plugin's endpoints
    new RequestsGetTool(),
    new RequestsPostTool(),
    // Load the plugin's ai-plugin.json manifest so the agent knows its API
    await AIPluginTool.fromPluginUrl(
      "-- plugin link"
    ),
  ];
  const agent = await initializeAgentExecutorWithOptions(
    tools,
    new ChatOpenAI({ temperature: 0 }),
    { agentType: "chat-zero-shot-react-description", verbose: true }
  );
  const result = await agent.call({
    input: "what t shirts are available in klarna?",
  });
  console.log({ result });
};
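With the "chat-zero-shot-react-description" agent type, the chat model reasons step by step about which tool to use: it reads the plugin manifest via AIPluginTool, then issues GET/POST requests against the plugin's API on its own, and temperature: 0 keeps that tool selection deterministic. The same pattern should work for the retrieval plugin's manifest, with the agent calling its /query endpoint whenever it decides it needs a memory lookup.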