Hello,
I took a few days to play around with GPT Actions and Airtable. I expose a CRUD API for TAGS, CONTACTS, MEMOS, … and it works VERY well, but there are a few annoying limitations:
- ChatGPT asks for confirmation before each action. Thanks to the workaround of setting "x-openai-isConsequential": false, it changed my life !!!
- The voice assistant is the old one; we can't interrupt it. Why not move to the new voice model, like the default ChatGPT?
- The UI is also the old one, so I have no visual feedback.
- If I take a photo, analyse it, and build a memo, it works, for instance a shopping list or an event badge. But GPTs are not allowed to send media!
- So I can't push the photo I take to Airtable.
- So I can't ask it to generate an image and store it in Airtable.
- If I ask my GPT to get the photo of a CONTACT, it works and finds the URL, but GPTs need the actual image file to work with, not a URL.
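For anyone hitting the same confirmation prompts: here is a minimal sketch of where the flag mentioned above goes in the Action's OpenAPI schema. The path and operationId are placeholders, not my actual schema:

```yaml
paths:
  /memos:
    post:
      operationId: createMemo
      summary: Create a MEMO record (placeholder operation)
      # Marks this action as non-consequential, so ChatGPT skips
      # the "Allow / Deny" confirmation prompt on every call.
      x-openai-isConsequential: false
      responses:
        "200":
          description: The created record
```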
Is there a roadmap to upgrade GPTs to the same UI as ChatGPT? It would be a game changer. Here is what I do right now:
Hello SARAH, can you search for a Chocolate Cake recipe and build a Task (aka Memo) with the steps. Then build a Shopping Memo with all the ingredients to buy, then make an Event (aka Memo) with title “Eating the Cake” for tomorrow 9am-10am with Elise. And can you link this event with the two others please.
It works GREAT with GPT-4o! It queries my database through the CRUD API, gets the tags, contacts, …, builds and links the info, no errors, no confirmations !!! It's a little bit slow, of course.
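For the curious, the linking step the GPT triggers is just an Airtable PATCH that writes the linked record IDs into a "Link to another record" field. Here is a minimal sketch in Python; the base ID, token, table name MEMOS, and field name "Linked Memos" are placeholders from my own schema, not Airtable defaults:

```python
import json
import urllib.request

AIRTABLE_BASE = "appXXXXXXXXXXXXXX"   # placeholder base ID
AIRTABLE_TOKEN = "patXXXXXXXXXXXXXX"  # placeholder personal access token

def build_link_payload(event_id: str, memo_ids: list[str]) -> dict:
    """Build the PATCH body that links an Event record to other Memo records.

    In Airtable, a "Link to another record" field is updated by writing
    the array of target record IDs into that field.
    """
    return {
        "records": [
            {
                "id": event_id,
                "fields": {
                    # "Linked Memos" is a placeholder field name from my base
                    "Linked Memos": memo_ids,
                },
            }
        ]
    }

def link_event_to_memos(event_id: str, memo_ids: list[str]) -> int:
    """Send the PATCH to the Airtable REST API (MEMOS table assumed)."""
    payload = build_link_payload(event_id, memo_ids)
    req = urllib.request.Request(
        f"https://api.airtable.com/v0/{AIRTABLE_BASE}/MEMOS",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {AIRTABLE_TOKEN}",
            "Content-Type": "application/json",
        },
        method="PATCH",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

So linking the Event to the Task and the Shopping Memo is one call with both record IDs in the array.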
So adding the new UI and handling images would be awesome … like taking the picture of Elise and generating a cover image with a cake in the foreground and Elise in the background looking at the cake, and attaching it to the event.