Hi everyone,
I recently created a bot using the GPT builder (the GPTs app store), and now I’d like to connect it to Telegram.
I figured out how to integrate a Telegram bot using Python and the ChatGPT API, but I can’t find where to get an API for the GPT I created. Is it even possible to use a GPT in Telegram?
If anyone has experience or knows how to do this, I’d really appreciate your help and any suggestions on how to make this work.
Thank you in advance!
In the OpenAI API development environment, you can create new assistants, which are equivalent to the GPT you created.
In the Dashboard, you have access to the Assistant creation area. Copy the prompt you wrote for your GPT into the assistant’s instructions, create the assistant, and start testing it in the Playground.
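For what it’s worth, the same step can also be done from code: the `openai` Python SDK exposes `client.beta.assistants.create`. A minimal sketch, assuming placeholder values (the assistant name and model below are my assumptions, not values from this thread):

```python
def create_assistant(client, instructions: str, model: str = "gpt-4o"):
    """Create an Assistant whose instructions mirror the prompt from the GPT editor."""
    return client.beta.assistants.create(
        name="My GPT as an Assistant",  # placeholder name, pick your own
        instructions=instructions,      # paste the GPT's prompt here
        model=model,                    # assumed model, use whichever you configured
    )

# Usage (requires `pip install openai` and a real key):
#   from openai import OpenAI
#   client = OpenAI(api_key="***")  # key redacted, as elsewhere in this thread
#   assistant = create_assistant(client, "<your GPT's prompt>")
#   print(assistant.id)  # an ID like asst_..., needed by later API calls
```

The ID printed at the end is what the later replies in this thread pass as `assistant_id`.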
Best regards
Hello,
Thank you for your response. I attempted to replicate my GPT setup in Python by using the same prompt I had created in the GPT editor as the system message for my API requests. While the text generation partially worked, I faced the following issues:
- After generating the first scene, the bot stopped responding to further inputs. It seems the logic for progressing through the story (e.g., from one scene to the next) didn’t function as intended, even though the structure of the system message was designed to handle this.
- Despite providing specific instructions in the prompt (e.g., themes and settings), the generated images often didn’t match the story context. For example, random elements like cakes appeared instead of visuals related to the narrative. Is there a difference in how images are generated through the OpenAI API compared to the GPT editor? How can I ensure that the generated images better align with the text, as they do in the GPT editor?
Additionally, I tried creating an assistant through the OpenAI Dashboard as suggested. While I could configure it successfully, I’m unsure how to deploy this assistant. I write my code in Python and connect it via the API, but the responses seem to come from the default assistant instead of the custom one I created. Could you clarify how to link a Python implementation to a custom assistant so it behaves exactly as configured in the dashboard?
I’d appreciate any guidance on these points. Thank you for your support!
Best regards,
Could you please explain how to properly connect an assistant created in the Assistants section on the OpenAI platform to an API? I currently have a Telegram bot connected to the OpenAI API through my Python code, but it only produces standard ChatGPT responses instead of utilizing the custom behavior and settings of my assistant.
From this, I assume that my assistant is not being correctly linked to the API. Is there a way to ensure that the assistant I created in the OpenAI platform is used via the API for generating responses? If so, I would appreciate detailed guidance on how to set this up.
Thank you for your help!
Best regards,
I found a step-by-step guide in the documentation on how Assistants work. Remember to create the Assistant and note its Assistant ID. The procedure is simple: first create a Thread, then add an initial message to the Thread. Then start a Run, passing both the assistant ID and the ID of the Thread that contains the message. For example:
const run = await openai.beta.threads.runs.stream( content.threadid, { assistant_id: 'xxxxxxxxxxx' } )
    .on('textDelta', (textDelta, snapshot) => process.stdout.write( textDelta.value ));
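Since the question is about Python, here is a rough Python sketch of the same Thread → Message → Run sequence, assuming a recent `openai` SDK where `runs.create_and_poll` is available (it waits for the run to finish instead of streaming); the assistant ID is a placeholder:

```python
def ask_assistant(client, assistant_id: str, question: str) -> str:
    """Create a thread, send the user's message, run the assistant, return its reply text."""
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=question
    )
    # create_and_poll blocks until the run reaches a terminal state (no streaming)
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread.id, assistant_id=assistant_id
    )
    if run.status != "completed":
        return f"Run ended with status: {run.status}"
    # Messages come back newest first; content is a list of blocks, not a string
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    return messages.data[0].content[0].text.value

# Usage (requires `pip install openai` and a real key):
#   from openai import OpenAI
#   client = OpenAI(api_key="***")
#   print(ask_assistant(client, "asst_xxxxxxxx", "Hello!"))  # placeholder ID
```

Note that the reply text lives at `content[0].text.value`; the content list itself is not a plain string.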
Best regards,
Hello everyone,
I’m trying to integrate a Telegram bot with a custom assistant I created in OpenAI’s Playground. The assistant is fully configured, and I have its ID: asst_********. I already have the Telegram bot set up, but I can’t get it to work properly with the assistant.
From the documentation, I understand that I need to create a thread (Thread) and run interactions via a Run. I wrote the code with the help of ChatGPT, but there seems to be an error in the syntax or logic, and I can’t figure out what’s wrong.
Issues I’m Facing:
- On the first message, the bot responds: “Could not get a response from the assistant.”
- On the second message, it responds: “An error occurred while processing your message. Please try again later.”
- In the program console, I get the error:
Error while sending message: Unknown error in HTTP implementation: TypeError('Object of type TextContentBlock is not JSON serializable')
What I’ve Tried:
- I set up the assistant and ensured the assistant ID is correct.
- The thread seems to be created successfully, but messages do not get processed properly. I suspect there’s an issue with how I’m adding messages or running the thread with the assistant.
Here’s the code I’m currently using (API keys are replaced with ***):
import openai
from openai import OpenAI
from telegram import Update
from telegram.ext import Application, CommandHandler, MessageHandler, ContextTypes
from telegram.ext.filters import TEXT

# OpenAI setup
openai.api_key = "***"
client = OpenAI(api_key=openai.api_key)

# Custom Assistant ID
ASSISTANT_ID = "asst_********"

# Telegram API Key
TELEGRAM_API_KEY = "***"

# Storing threads for each user
user_threads = {}

async def start(update: Update, context: ContextTypes.DEFAULT_TYPE):
    user_id = update.effective_user.id
    try:
        thread = client.beta.threads.create()
        thread_id = thread.id
        user_threads[user_id] = thread_id
    except Exception as e:
        await update.message.reply_text("Could not establish a connection with the assistant. Please try again later.")
        print(f"Error creating thread: {e}")

async def handle_message(update: Update, context: ContextTypes.DEFAULT_TYPE):
    user_id = update.effective_user.id
    user_message = update.message.text
    if user_id not in user_threads:
        await update.message.reply_text("Please start with the /start command.")
        return
    thread_id = user_threads[user_id]
    try:
        client.beta.threads.messages.create(thread_id=thread_id, role="user", content=user_message)
        client.beta.threads.runs.create(thread_id=thread_id, assistant_id=ASSISTANT_ID)
        messages = client.beta.threads.messages.list(thread_id=thread_id)
        assistant_response = next((msg.content for msg in messages if msg.role == 'assistant'), "Could not get a response from the assistant.")
        await update.message.reply_text(assistant_response)
    except Exception as e:
        await update.message.reply_text("An error occurred while processing your message. Please try again later.")
        print(f"Error while sending message: {e}")

def main():
    app = Application.builder().token(TELEGRAM_API_KEY).build()
    app.add_handler(CommandHandler("start", start))
    app.add_handler(MessageHandler(TEXT, handle_message))
    app.run_polling()

if __name__ == "__main__":
    main()
I don’t have much experience with Python; I chose to develop in JavaScript, running it through Node.js. Below is the example code I used in my first integration tests with the assistant. In my case, I started from example code at https://cookbook.openai.com/, where I found great examples and solutions that helped me a lot.
Regarding your code, I noticed small differences compared to my example: in mine, the message is created with the assistant role, and the run method is also different, since I chose to run with a stream. However, the documentation also describes an option to run without a stream, that is, with a direct return.
import OpenAI from "openai";
import { input } from '@inquirer/prompts';

const openai = new OpenAI({
    apiKey: 'XXXXXXX'
});

async function main() {
    let answer = "";
    answer = await input({ message: 'What is your question?' });
    const thread = await openai.beta.threads.create();
    const message = await openai.beta.threads.messages.create(
        thread.id,
        {
            role: "assistant",
            content: answer
        }
    );
    const run = openai.beta.threads.runs.stream( thread.id, { assistant_id: 'XXXX' } )
        .on('textDelta', (textDelta, snapshot) => process.stdout.write( textDelta.value ));
}

main();
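As a footnote on the TypeError reported above (TypeError('Object of type TextContentBlock is not JSON serializable')): in the Python code from the question, the raw message content, which is a list of content blocks, is passed straight to reply_text, and the run is never waited on before listing messages. A hedged sketch of a helper that extracts plain text instead (the helper name is mine, not from the SDK):

```python
def latest_assistant_text(messages) -> str:
    """Return the plain text of the newest assistant message, or a fallback.

    `messages` is what client.beta.threads.messages.list(...) returns; each
    message's content is a list of blocks, and text blocks keep the string
    at block.text.value. Passing a block itself to Telegram is what raises
    TypeError('Object of type TextContentBlock is not JSON serializable').
    """
    for msg in messages.data:  # the list comes back newest first
        if msg.role == "assistant":
            parts = [block.text.value for block in msg.content if hasattr(block, "text")]
            if parts:
                return "\n".join(parts)
    return "Could not get a response from the assistant."
```

The handler would also need to wait for the run to finish (for example with runs.create_and_poll, if the installed SDK provides it) before calling messages.list; otherwise the assistant’s reply may not exist yet, which matches the “Could not get a response from the assistant” symptom.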
I hope I have helped, best regards,