Possibly out of topic, if so, feel free to delete.
You can easily access all three Mistral models with a Custom GPT action schema.
I actually didn’t expect that, but it makes sense. At the end of the day it’s an API endpoint.
It’s pretty nice for comparing answers between different LLMs.
Can you share the code of the action? If I insert the equivalent of a prompt (via INST?), will it persist across invocations of the API, or will I have to repeat it with every request?
Sure, this is one schema I got working:
```json
{
  "openapi": "3.1.0",
  "info": {
    "title": "Mistral AI API",
    "version": "0.0.1"
  },
  "servers": [
    {
      "url": "https://api.mistral.ai/v1"
    }
  ],
  "paths": {
    "/chat/completions": {
      "post": {
        "description": "Ask Mistral a question",
        "operationId": "askMistral",
        "requestBody": {
          "content": {
            "application/json": {
              "schema": {
                "type": "object",
                "properties": {
                  "model": {
                    "type": "string",
                    "default": "mistral-tiny",
                    "enum": [
                      "mistral-tiny",
                      "mistral-small",
                      "mistral-medium"
                    ]
                  },
                  "messages": {
                    "type": "array",
                    "items": {
                      "type": "object",
                      "properties": {
                        "role": {
                          "type": "string",
                          "default": "user"
                        },
                        "content": {
                          "type": "string"
                        }
                      },
                      "required": [
                        "content"
                      ]
                    }
                  },
                  "temperature": {
                    "type": "number",
                    "default": 0.7
                  },
                  "top_p": {
                    "type": "number",
                    "default": 1
                  },
                  "max_tokens": {
                    "type": "integer",
                    "default": 1000
                  },
                  "stream": {
                    "type": "boolean",
                    "default": false
                  },
                  "safe_mode": {
                    "type": "boolean",
                    "default": false
                  },
                  "random_seed": {
                    "type": "integer",
                    "default": null
                  }
                },
                "required": [
                  "model",
                  "messages",
                  "safe_mode"
                ]
              }
            }
          }
        },
        "responses": {
          "200": {
            "description": "Successful response",
            "content": {
              "application/json": {
                "schema": {
                  "type": "object",
                  "properties": {
                    "choices": {
                      "type": "array",
                      "items": {
                        "type": "object",
                        "properties": {
                          "message": {
                            "type": "object",
                            "properties": {
                              "role": {
                                "type": "string"
                              },
                              "content": {
                                "type": "string"
                              }
                            }
                          },
                          "finish_reason": {
                            "type": "string"
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```
You can play around with the parameters; for example, you could mark only “model” and “messages” as required. All other parameters are handled by the API’s defaults:
see Mistral Documentation
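For reference, this is the shape of a request body the action ends up sending under that schema (the question text is just an illustrative placeholder):

```json
{
  "model": "mistral-tiny",
  "messages": [
    { "role": "user", "content": "What is the capital of France?" }
  ],
  "safe_mode": false
}
```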
Regarding your second question, can you elaborate? Instructions are for instructing the Custom GPT, the user prompt would be the actual message you send to the Mistral endpoint.
By the way, welcome to the community
If you have any more questions or need further clarification on using Custom GPT actions or anything else, feel free to ask. There are plenty of knowledgeable folks here ready to help out. Happy coding!
Thanks for the ‘solution’! As for the second question … well, first, I’m a total noob on the matter :). I’m using a Custom GPT where I’ve put some knowledge of the structure of a database I want to query, plus some context knowledge (as a prompt). When I use the GPT, it relies on the files I’ve provided. As I understand prompting and an AI’s working context, I could obtain similar results even with standard ChatGPT by providing the full information up front and then asking my questions, without repeating the ‘prompt’ before each one, provided I stay within the managed context length. I was wondering how this kind of thing works with endpoints such as the Mistral ones (is there session persistence of the context, or …?). Sorry if I’m talking nonsense. I think my next reading will be a thorough analysis of the APIs.
Do you mean that you upload files in the Knowledge section for describing the API format? I’d use the Instructions section.
In a Custom GPT you’d put this in the Instructions I guess.
In my Mistral GPT tests, memory is not preserved throughout the conversation with the API: each call sends a single query, and the chat history is not appended. For that, I recommend writing a simple script instead of a Custom GPT.
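Since the endpoint is stateless, such a script has to resend the full message history with every request. A minimal sketch of that pattern (the endpoint URL matches the schema above; the function names and the env-var key are my own assumptions, and error handling is omitted):

```python
import json
import os
import urllib.request

API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_payload(history, user_message, model="mistral-tiny"):
    """Append the new user turn and build the request body.
    The whole history is resent on every call because the API keeps no session."""
    history.append({"role": "user", "content": user_message})
    return {"model": model, "messages": history}

def ask_mistral(history, user_message, api_key):
    """Send one turn and store the assistant's reply back into history."""
    payload = build_payload(history, user_message)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]
    history.append(reply)  # keep the assistant turn for the next call
    return reply["content"]

# Offline illustration: the second request carries the earlier turns along.
history = []
build_payload(history, "Hello")
history.append({"role": "assistant", "content": "Hi!"})
payload = build_payload(history, "What did I just say?")
print(len(payload["messages"]))  # 3
```

The point is the `history` list: a Custom GPT action sends each query in isolation, whereas the script above accumulates turns and replays them, which is what gives the model its “memory”.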
No worries!
Have a look at one of the community leaders’ tips on thanking; references are provided in the thread. I’ll link it here:
Thanks for this. What is the authentication method? Is it API Key? As I’m still on the waiting list for access to the Mistral API.
hi @pantaleone and welcome!
Yes, it’s an API key (sent as a Bearer token).
Yes, thank you for sharing this. It gave me an idea that I just acted on – and I love it!
I have a RAG knowledge base application that I developed (using OpenAI models). Because of the sheer volume of documents, I didn’t see any way to use Assistants / GPTs economically. Until your post.
Because my application has an API, I was able to easily create a GPT that accesses my application via the API. In short, I can now use the GPT as another user interface to my application. Sweet!