Currently, langchain allows us to make use of ChatGPT plugins.
Sadly, this is not very robust: many errors show up, such as exceeding the token limit, or the replies not being usable properly.
Is there some workaround without using langchain?
I have not done it myself, but looking at the plugin specification, you can probably work around it by fetching the plugin’s manifest, reading the functions’ schemas and implementing them as function calling, then calling the API URL and verifying the output against the given response format.
@supershaneski That sounds awesome, could you perhaps provide me with an example of how to do it?
For science, let’s try this plugin: steven-tey/weathergpt.
The plugin server is https://weathergpt.vercel.app.
Let’s get the plugin info: https://weathergpt.vercel.app/.well-known/ai-plugin.json
Check api.url; it points to the plugin definition: https://weathergpt.vercel.app/openapi.json
In paths are all the endpoints for the plugin. Luckily for us, only one is listed: /api/weather. And under this endpoint, only GET is listed (other plugins might also have POST).
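The discovery steps above (manifest, then api.url, then the OpenAPI paths) can be sketched in Python. This is a rough sketch using only the standard library; the helper names are my own, and only the URLs come from the example above:

```python
import json
import urllib.request

def manifest_url(server: str) -> str:
    # Plugins publish their manifest at this well-known path.
    return server.rstrip("/") + "/.well-known/ai-plugin.json"

def fetch_json(url: str) -> dict:
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def discover_spec(server: str) -> dict:
    """Follow manifest -> api.url to get the plugin's OpenAPI document."""
    manifest = fetch_json(manifest_url(server))
    return fetch_json(manifest["api"]["url"])

# Requires network access, so commented out here:
# spec = discover_spec("https://weathergpt.vercel.app")
# list(spec["paths"])  # the plugin's endpoints, e.g. ['/api/weather']
```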
Looking at the pertinent part:
...
"get": {
  "summary": "Get current weather information",
  "operationId": "checkWeatherUsingGET",
  "parameters": [
    {
      "name": "location",
      "in": "query",
      "required": true,
      "description": "Location for which to retrieve weather information.",
      "schema": {
        "type": "string"
      }
    }
  ],
...
Convert this to function calling:
{
  "name": "checkWeatherUsingGET",
  "description": "Get current weather information",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "Location for which to retrieve weather information."
      }
    },
    "required": ["location"]
  }
}
If you use this in the chat completions API, you’ll get:
{
  role: 'assistant',
  content: null,
  function_call: {
    name: 'checkWeatherUsingGET',
    arguments: '{\n' +
      '  "location": "Tokyo"\n' +
      '}'
  }
}
Then call the plugin endpoint, appending it to the plugin server URL: https://weathergpt.vercel.app/api/weather?location=Tokyo
And we get:
{"location":{"name":"Tokyo","region":"Tokyo","country":"Japan","lat":35.69,"lon":139.69,"tz_id":"Asia/Tokyo","localtime_epoch":1695556336,"localtime":"2023-09-24 20:52"},"current":{"last_updated_epoch":1695555900,"last_updated":"2023-09-24 20:45","temp_c":23,"temp_f":73.4,"is_day":0,"condition":{"text":"Clear","icon":"//cdn.weatherapi.com/weather/64x64/night/113.png","code":1000},"wind_mph":20.6,"wind_kph":33.1,"wind_degree":50,"wind_dir":"NE","pressure_mb":1020,"pressure_in":30.12,"precip_mm":0,"precip_in":0,"humidity":73,"cloud":0,"feelslike_c":24.9,"feelslike_f":76.9,"vis_km":10,"vis_miles":6,"uv":1,"gust_mph":17.8,"gust_kph":28.6},"infoLink":"https://weathergpt.vercel.app/tokyo"}
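In Python, the step from function_call to the HTTP request can be sketched like this. A rough sketch only: the SERVER constant and /api/weather path come from the spec above, but the PATHS mapping and build_url helper are names I made up for illustration:

```python
import json
import urllib.parse

SERVER = "https://weathergpt.vercel.app"
PATHS = {"checkWeatherUsingGET": "/api/weather"}  # operationId -> endpoint path

def build_url(function_name: str, arguments_json: str) -> str:
    """Turn the model's function_call arguments into a GET request URL."""
    args = json.loads(arguments_json)
    return SERVER + PATHS[function_name] + "?" + urllib.parse.urlencode(args)

url = build_url("checkWeatherUsingGET", '{"location": "Tokyo"}')
print(url)  # https://weathergpt.vercel.app/api/weather?location=Tokyo
# Fetching `url` (e.g. with urllib.request.urlopen) returns the JSON shown above.
```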
@supershaneski This brings us a lot closer to the desired goal. Many thanks for it.
I am still not quite sure, how I would do it by making use of the openai Python package. Could you help me here?
In any case, many thanks for all your support. That helps a lot.
To make things more concrete: I am currently trying to embed ScholarAI and Wolfram Alpha.
Here is the OpenAPI config for ScholarAI:
{
"openapi": "3.0.1",
"info": {
"title": "ScholarAI",
"description": "Allows the user to search facts and findings from scientific articles",
"version": "v1"
},
"paths": {
"/api/abstracts": {
"get": {
"operationId": "searchAbstracts",
"summary": "Get relevant paper abstracts by search 2-6 relevant keywords.",
"parameters": [
{
"name": "keywords",
"in": "query",
"description": "Keywords of inquiry which should appear in article. Must be in English.",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "sort",
"in": "query",
"description": "The sort order for results. Valid values are cited_by_count or publication_date. Excluding this value does a relevance based search.",
"required": false,
"schema": {
"type": "string"
}
},
{
"name": "query",
"in": "query",
"description": "The user query",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "peer_reviewed_only",
"in": "query",
"description": "Whether to only return peer reviewed articles. Defaults to true, ChatGPT should cautiously suggest this value can be set to false",
"required": false,
"schema": {
"type": "string"
}
},
{
"name": "start_year",
"in": "query",
"description": "The first year, inclusive, to include in the search range. Excluding this value will include all years.",
"required": false,
"schema": {
"type": "string"
}
},
{
"name": "end_year",
"in": "query",
"description": "The last year, inclusive, to include in the search range. Excluding this value will include all years.",
"required": false,
"schema": {
"type": "string"
}
},
{
"name": "offset",
"in": "query",
"description": "The offset of the first result to return. Defaults to 0.",
"required": false,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/searchAbstractsResponse"
}
}
}
}
}
}
},
"/api/fulltext": {
"get": {
"operationId": "getFullText",
"summary": "Get full text of a paper by URL for PDF incrementally. Good for general summary. DO NOT use this endpoint for singular questions, use /api/question instead.",
"parameters": [
{
"name": "pdf_url",
"in": "query",
"description": "URL for PDF",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "chunk",
"in": "query",
"description": "chunk number to retrieve, defaults to 1",
"required": false,
"schema": {
"type": "number"
}
}
],
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/paperContentResponse"
}
}
}
}
}
}
},
"/api/save-citation": {
"get": {
"operationId": "saveCitation",
"summary": "Save citation to reference manager",
"parameters": [
{
"name": "doi",
"in": "query",
"description": "Digital Object Identifier (DOI) of article",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "zotero_user_id",
"in": "query",
"description": "Zotero User ID",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "zotero_api_key",
"in": "query",
"description": "Zotero API Key",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/saveCitationResponse"
}
}
}
}
}
}
},
"/api/question": {
"get": {
"operationId": "question",
"summary": "Get sections of PDF to answer questions about PDF. ALWAYS use this endpoint if the user has a specific question about a singular PDF.",
"parameters": [
{
"name": "question",
"in": "query",
"description": "The user question. Must be in English.",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "pdf_url",
"in": "query",
"description": "URL for source PDF",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/paperContentResponse"
}
}
}
},
"402": {
"description": "Premium required",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/premiumRequiredResponse"
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"searchAbstractsResponse": {
"type": "object",
"properties": {
"next_offset": {
"type": "number",
"description": "The offset of the next page of results."
},
"total_num_results": {
"type": "number",
"description": "The total number of results."
},
"abstracts": {
"type": "array",
"items": {
"type": "object",
"properties": {
"title": {
"type": "string"
},
"abstract": {
"type": "string",
"description": "Summary of the context, methods, results, and conclusions of the paper."
},
"doi": {
"type": "string",
"description": "The DOI of the paper."
},
"landing_page_url": {
"type": "string",
"description": "Link to the paper on its open-access host."
},
"pdf_url": {
"type": "string",
"description": "Link to the paper PDF."
},
"publicationDate": {
"type": "string",
"description": "The date the paper was published in YYYY-MM-DD format."
},
"relevance": {
"type": "number",
"description": "The relevance of the paper to the search query. 1 is the most relevant."
},
"creators": {
"type": "array",
"items": {
"type": "string",
"description": "The name of the creator."
}
},
"cited_by_count": {
"type": "number",
"description": "The number of citations of the article."
}
}
},
"description": "The list of relevant abstracts."
}
}
},
"paperContentResponse": {
"type": "object",
"properties": {
"chunks": {
"type": "array",
"items": {
"type": "object",
"properties": {
"chunk_num": {
"type": "number",
"description": "The chunk number of the paper."
},
"chunk": {
"type": "string",
"description": "The chunk of the paper."
},
"img_mds": {
"type": "list",
"description": "List of content that appears in the current chunk. Display ALL markdown, including images. The viewing platform ALWAYS supports markdown, ALWAYS show the images."
},
"pdf_url": {
"type": "string",
"description": "Link to the paper PDF."
}
}
},
"description": "The chunk of the paper."
},
"total_chunk_num": {
"type": "number",
"description": "The total chunks of the paper."
},
"hint": {
"type": "string",
"description": "Instructions for GPT to follow regarding this response."
}
}
},
"saveCitationResponse": {
"type": "object",
"properties": {
"message": {
"type": "string",
"description": "Confirmation of successful save or error message."
}
}
},
"premiumRequiredResponse": {
"type": "object",
"properties": {
"hint": {
"type": "string",
"description": "Suggestion to use premium or alternative for free users."
}
}
}
}
},
"servers": [
{
"url": "https://plugin.scholar-ai.net"
}
]
}
and for Wolfram Alpha
{
"openapi":"3.1.0",
"info":{
"title":"Wolfram",
"version":"v0.1"
},
"servers":[
{
"url":"https://www.wolframalpha.com",
"description":"Wolfram Server for ChatGPT"
}
],
"paths": {
"/api/v1/cloud-plugin": {
"get": {
"operationId": "getWolframCloudResults",
"externalDocs": "https://reference.wolfram.com/language/",
"summary": "Evaluate Wolfram Language code",
"responses": {
"200": {
"description": "The result of the Wolfram Language evaluation",
"content": {
"text/plain": {}
}
},
"500": {
"description": "Wolfram Cloud was unable to generate a result"
},
"400": {
"description": "The request is missing the 'input' parameter"
},
"403": {
"description": "Unauthorized"
},
"503":{
"description":"Service temporarily unavailable. This may be the result of too many requests."
}
},
"parameters": [
{
"name": "input",
"in": "query",
"description": "the input expression",
"required": true,
"schema": {
"type": "string"
}
}
]
}
},
"/api/v1/llm-api": {
"get":{
"operationId":"getWolframAlphaResults",
"externalDocs":"https://products.wolframalpha.com/api",
"summary":"Get Wolfram|Alpha results",
"responses":{
"200":{
"description":"The result of the Wolfram|Alpha query",
"content":{
"text/plain":{
}
}
},
"400":{
"description":"The request is missing the 'input' parameter"
},
"403":{
"description":"Unauthorized"
},
"500":{
"description":"Wolfram|Alpha was unable to generate a result"
},
"501":{
"description":"Wolfram|Alpha was unable to generate a result"
},
"503":{
"description":"Service temporarily unavailable. This may be the result of too many requests."
}
},
"parameters":[
{
"name":"input",
"in":"query",
"description":"the input",
"required":true,
"schema":{
"type":"string"
}
},
{
"name":"assumption",
"in":"query",
"description":"the assumption to use, passed back from a previous query with the same input.",
"required":false,
"explode":true,
"style":"form",
"schema":{
"type":"array",
"items":{
"type":"string"
}
}
}
]
}
}
}
}
I am wondering how I can make this dynamic with the OpenAI API, in the sense that these endpoints are used differently for each question that is entered.
Could you help me implement it? I simply have no idea how to do it with the OpenAI API, and with langchain it does not work as intended.
Hello, sorry for the late reply. I am not a Python guy, but I can give you JavaScript code to convert each endpoint to the JSON format that can be used for function calling.
function convertSchema(inputSchema) {
  var outputSchema = {}
  for (var path in inputSchema) {
    var methods = inputSchema[path]
    for (var method in methods) {
      var operation = methods[method]
      // Note: pass one endpoint at a time; if the input contains several
      // endpoints, later ones overwrite earlier ones in outputSchema.
      outputSchema.name = operation.operationId
      outputSchema.description = operation.summary
      outputSchema.parameters = {
        type: "object",
        properties: {}
      }
      operation.parameters.forEach(function (param) {
        outputSchema.parameters.properties[param.name] = {
          type: param.schema.type,
          description: param.description
        }
      })
    }
  }
  return outputSchema
}
You will need to loop through each endpoint and call this function.
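Since the question was about Python, here is a rough Python equivalent of the converter above (untested against every plugin, but it mirrors the same logic; it additionally collects the `required` flags, which the JavaScript version omits, and returns one definition per endpoint):

```python
def convert_schema(paths: dict) -> list:
    """Convert OpenAPI path items into function-calling definitions."""
    functions = []
    for path, methods in paths.items():
        for method, operation in methods.items():
            properties = {}
            required = []
            for param in operation.get("parameters", []):
                properties[param["name"]] = {
                    "type": param["schema"]["type"],
                    "description": param.get("description", ""),
                }
                if param.get("required"):
                    required.append(param["name"])
            functions.append({
                "name": operation["operationId"],
                "description": operation["summary"],
                "parameters": {
                    "type": "object",
                    "properties": properties,
                    "required": required,
                },
            })
    return functions
```

You would call it with the whole `paths` object from the spec and pass the resulting list as the `functions` argument of the chat completion call.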
For example, here is the first endpoint for ScholarAI:
"/api/abstracts": {
"get": {
"operationId": "searchAbstracts",
"summary": "Get relevant paper abstracts by search 2-6 relevant keywords.",
"parameters": [
{
"name": "keywords",
"in": "query",
"description": "Keywords of inquiry which should appear in article. Must be in English.",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "sort",
"in": "query",
"description": "The sort order for results. Valid values are cited_by_count or publication_date. Excluding this value does a relevance based search.",
"required": false,
"schema": {
"type": "string"
}
},
{
"name": "query",
"in": "query",
"description": "The user query",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "peer_reviewed_only",
"in": "query",
"description": "Whether to only return peer reviewed articles. Defaults to true, ChatGPT should cautiously suggest this value can be set to false",
"required": false,
"schema": {
"type": "string"
}
},
{
"name": "start_year",
"in": "query",
"description": "The first year, inclusive, to include in the search range. Excluding this value will include all years.",
"required": false,
"schema": {
"type": "string"
}
},
{
"name": "end_year",
"in": "query",
"description": "The last year, inclusive, to include in the search range. Excluding this value will include all years.",
"required": false,
"schema": {
"type": "string"
}
},
{
"name": "offset",
"in": "query",
"description": "The offset of the first result to return. Defaults to 0.",
"required": false,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "OK",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/searchAbstractsResponse"
}
}
}
}
}
}
}
Using the function convertSchema, with json_input being an object containing the "/api/abstracts" entry above:
var json_output = convertSchema(json_input)
console.log(json_output)
And the JSON output is
{
  "name": "searchAbstracts",
  "description": "Get relevant paper abstracts by search 2-6 relevant keywords.",
  "parameters": {
    "type": "object",
    "properties": {
      "keywords": {
        "type": "string",
        "description": "Keywords of inquiry which should appear in article. Must be in English."
      },
      "sort": {
        "type": "string",
        "description": "The sort order for results. Valid values are cited_by_count or publication_date. Excluding this value does a relevance based search."
      },
      "query": {
        "type": "string",
        "description": "The user query"
      },
      "peer_reviewed_only": {
        "type": "string",
        "description": "Whether to only return peer reviewed articles. Defaults to true, ChatGPT should cautiously suggest this value can be set to false"
      },
      "start_year": {
        "type": "string",
        "description": "The first year, inclusive, to include in the search range. Excluding this value will include all years."
      },
      "end_year": {
        "type": "string",
        "description": "The last year, inclusive, to include in the search range. Excluding this value will include all years."
      },
      "offset": {
        "type": "string",
        "description": "The offset of the first result to return. Defaults to 0."
      }
    }
  }
}
I have not tested the output, but it seems it will work for function calling.
Hope this helps.
@supershaneski Many thanks for your input here; I can likely use ChatGPT to translate the code into Python.
If I now use the JSON you generated, it looks cool, but I still do not quite understand how to make use of it with the OpenAI API. Could you provide me with an example here?
If you think about it, plugins are just an implementation of function calling:
[user query]
|
[run chat api with function calling]
|
[output function name and argument]
|
[call the external api based on function name and with argument] <- the plugin
|
[result from external api] <- result from plugin
|
[summarize using chat api]
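The flow above can be sketched in Python. To keep the sketch self-contained I pass the chat call and the HTTP call in as plain functions; the names run_plugin_chat, chat, and call_endpoint are illustrative, not part of any library:

```python
import json

def run_plugin_chat(user_query, functions, chat, call_endpoint):
    """chat(messages, functions) -> assistant message dict;
    call_endpoint(name, args) -> JSON string returned by the plugin API."""
    messages = [{"role": "user", "content": user_query}]
    reply = chat(messages, functions)          # [run chat api with function calling]
    if "function_call" not in reply:
        return reply["content"]                # model answered directly
    name = reply["function_call"]["name"]      # [output function name and argument]
    args = json.loads(reply["function_call"]["arguments"])
    result = call_endpoint(name, args)         # [call the external api] <- the plugin
    messages.append(reply)
    messages.append({"role": "function", "name": name, "content": result})
    return chat(messages, functions)["content"]  # [summarize using chat api]
```

With the openai package used in this thread, `chat` would wrap openai.ChatCompletion.create(model=..., messages=messages, functions=functions) and return response["choices"][0]["message"], while `call_endpoint` performs the HTTP request against the plugin server.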
@supershaneski Many thanks for your input; however, I still have no idea what it would look like, whether in Python or JavaScript.
It would be terrific if you could provide me with an example here.
@supershaneski Okay I made some progress here:
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "What are the antiviral effects of Silymarin?"}
    ],
    max_tokens=1000,
    temperature=0,
    functions=[
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"]
                    }
                },
                "required": ["location"]
            }
        },
        {
            "name": "searchAbstracts",
            "description": "Get relevant paper abstracts by search 2-6 relevant keywords.",
            "parameters": {
                "type": "object",
                "properties": {
                    "keywords": {
                        "type": "string",
                        "description": "Keywords of inquiry which should appear in article. Must be in English."
                    },
                    "sort": {
                        "type": "string",
                        "description": "The sort order for results. Valid values are cited_by_count or publication_date. Excluding this value does a relevance based search."
                    },
                    "query": {
                        "type": "string",
                        "description": "The user query"
                    },
                    "peer_reviewed_only": {
                        "type": "string",
                        "description": "Whether to only return peer reviewed articles. Defaults to true, ChatGPT should cautiously suggest this value can be set to false"
                    },
                    "start_year": {
                        "type": "string",
                        "description": "The first year, inclusive, to include in the search range. Excluding this value will include all years."
                    },
                    "end_year": {
                        "type": "string",
                        "description": "The last year, inclusive, to include in the search range. Excluding this value will include all years."
                    },
                    "offset": {
                        "type": "string",
                        "description": "The offset of the first result to return. Defaults to 0."
                    }
                }
            }
        }
    ]
)
and I got:
<OpenAIObject chat.completion id=chatcmpl-8Ad0iRS75QGdR66L7DVgJKz0qpX5k at 0x109e23c40> JSON: {
  "id": "chatcmpl-8Ad0iRS75QGdR66L7DVgJKz0qpX5k",
  "object": "chat.completion",
  "created": 1697543928,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "function_call": {
          "name": "searchAbstracts",
          "arguments": "{\n \"keywords\": \"antiviral effects Silymarin\"\n}"
        }
      },
      "finish_reason": "function_call"
    }
  ],
  "usage": {
    "prompt_tokens": 282,
    "completion_tokens": 23,
    "total_tokens": 305
  }
}
Now, according to the ScholarAI OpenAPI JSON, the server URL is https://plugin.scholar-ai.net.
Hence, I made the following call (the endpoint is a GET, so the arguments go in the query string):
curl "https://plugin.scholar-ai.net/api/abstracts?keywords=antiviral+effects+Silymarin&query=What+are+the+antiviral+effects+of+Silymarin%3F"
Sadly, I got {"code":"IP_NOT_ALLOWED","message":"IP is not allowed to access this plugin."}. This is really strange, as it works nicely when using ChatGPT. Hence, I wonder how I could make this possible.
It might be that the plugin author limits plugin access to OpenAI IP addresses only.
You probably need to check how authentication is implemented for plugins.
Also, check the following post for your reference:
I am not sure, but there might be some clamor for plugins to be used outside of ChatGPT, and this might eventually lead plugin authors to authorize other sites to use their plugins. It could be a potential revenue stream for them since, as I understand, they are not allowed to monetize their plugins in ChatGPT. So if they can also provide ways for other ChatGPT clones or chatbots to use their plugins, that could be a positive.
@accounts10
I’ve been working on something similar to use ChatGPT plugins with the OpenAI API and function calls. I built a tool/site that provides the function calls for most of the plugins, similar to the convertSchema code that supershaneski provided. It’s probably not perfect, but it works for all the plugins I’ve personally tested and used as function calls.
So, for instance, you can find the function call equivalents for the Wolfram Alpha plugin at Vand.io.
You’ll still have to make the call to Wolfram similar to the curl request you wrote before (see the Wolfram API docs), and you’ll need to sign up for an account with Wolfram to use it.
curl "https://www.wolframalpha.com/api/v1/llm-api?input=10+densest+elemental+metals&appid=<your_app_id>"
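A rough Python equivalent of that request, for completeness. The <your_app_id> placeholder is the same as in the curl line, and the wolfram_llm_url helper name is my own; note that query parameters after the first one must be joined with & rather than ?:

```python
import urllib.parse

def wolfram_llm_url(query: str, appid: str) -> str:
    # urlencode handles the escaping and joins parameters with '&'.
    params = urllib.parse.urlencode({"input": query, "appid": appid})
    return "https://www.wolframalpha.com/api/v1/llm-api?" + params

url = wolfram_llm_url("10 densest elemental metals", "<your_app_id>")
# Fetch `url` (e.g. with urllib.request.urlopen) once you have a valid appid.
```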