Trying to create the same map the guys built at DevDay: what's the idea behind it?

Hey guys,
At DevDay, the presenters created a map that changes dynamically as the API returns results.
What's the architecture and flow behind it?

Let's say I'm creating an Assistant. What kind of instructions can I give it so that when the result is a set of points (lon/lat), it answers in a format that I can parse and plot on the map?

Or how can the API signal me to plot something on screen?
If you can share your thoughts, that would be great. Thanks!


As I understand it, the code interpreter should be able to generate an HTML file using Python, JS, or whatever. So maybe just saying "annotate these locations on the map" would be straightforward.

You are going to want to maintain some sort of live connection to a database (RealTime Database is a good option here)

You can use function calling to make edits to the database. If you're using ReactJS (which is what they were using in the presentation), then whenever something is updated in the database it is immediately reflected on the website/application.
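
Roughly, that flow could look like this on the backend (just a sketch, not the presenters' actual code: it assumes Firebase's Realtime Database and a hypothetical add_marker function declared in the Assistant's tools):

// Sketch: handle the Assistant's tool call by writing the marker into the database.
const admin = require("firebase-admin");
admin.initializeApp({ databaseURL: "https://<your-project>.firebaseio.com" }); // placeholder URL

async function handleToolCall(toolCall) {
    if (toolCall.function.name === "add_marker") {
        const { title, lat, lon } = JSON.parse(toolCall.function.arguments);
        // Push the marker; any React client subscribed to /markers re-renders immediately
        await admin.database().ref("markers").push({ title, lat, lon });
        return JSON.stringify({ status: "ok" });
    }
}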

I am working on this right now using Assistants and it’s fucking awesome (not the maps though, just the same idea).

I just took a look at Apple's MapKit JS API and it's pretty straightforward:

// Assumes mapkit.js is loaded and mapkit.init(...) has already been called
var portland = new mapkit.Coordinate(45.5231, -122.6765);
var customMarker = new mapkit.MarkerAnnotation(portland, {
    color: "green",
    glyphColor: "brown",
    glyphImage: { 1: "glyphImage.png" },
    selectedGlyphImage: { 1: "detailedIcon.png", 2: "detailedIcon_2x.png", 3: "detailedIcon_3x.png" }
});
map.addAnnotation(customMarker); // "map" is your mapkit.Map instance

So you could have this almost perfectly represented in a database. When an update is noticed, you create the marker from the new record, wham bam.
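
On the frontend, a subscription to that record could build the marker whenever a new one shows up. A sketch, assuming Firebase on the client, an already-initialized MapKit JS map instance called "map", and records shaped like { title, lat, lon }:

import { getDatabase, ref, onChildAdded } from "firebase/database";

const db = getDatabase();
// Fires once per existing record and again for every marker the backend pushes later
onChildAdded(ref(db, "markers"), (snapshot) => {
    const { title, lat, lon } = snapshot.val();
    const annotation = new mapkit.MarkerAnnotation(new mapkit.Coordinate(lat, lon), {
        color: "green",
        title: title,
    });
    map.addAnnotation(annotation);
});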

You can get the coordinates here:

So you could probably bundle these two together into a single function call.

Let's say this is the conversation:

user: I want to visit Tokyo Tower in Japan!
backend:
- function call: extract_location: { location: "tokyo tower" }
- map: https://www.google.com/maps/search/?api=1&query=tokyo%20tower
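
A function definition along these lines could cover it (a sketch in the Chat Completions / Assistants tools format; the names extract_location and buildMapUrl are just illustrative):

const tools = [{
    type: "function",
    function: {
        name: "extract_location",
        description: "Extract the place the user wants to see on the map",
        parameters: {
            type: "object",
            properties: {
                location: { type: "string", description: "Place name, e.g. 'tokyo tower'" }
            },
            required: ["location"]
        }
    }
}];

// When the model calls extract_location, the backend builds the map URL itself:
function buildMapUrl(location) {
    return "https://www.google.com/maps/search/?api=1&query=" + encodeURIComponent(location);
}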

Code Interpreter calls itself to run code, not external functions that I prepare. Also, with Code Interpreter I don't have a way to instruct it which functions to call.
Function calling is the solution here; I'm just trying to figure out the mechanism.

Thanks, I get your idea, but I don't think it's the right thing to do.
I'm not going to save every user action in some kind of DB just to be able to put it back on the site.
I came up with 2 possible solutions:

  1. Sockets: whenever the backend knows something needs to run on the frontend, it communicates that to the frontend via a socket (see the sketch after this list).
  2. Responding to the frontend, in my final response, with the same parameters OpenAI asked me for.
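
Option 1 could look roughly like this (a sketch with Socket.IO on a Node backend; plot_points and addMarkerToMap are hypothetical names):

const { Server } = require("socket.io");
const io = new Server(3001, { cors: { origin: "*" } });

// Call this from the tool-call handler once the model returns coordinates
function pushToFrontend(socketId, toolCall) {
    if (toolCall.function.name === "plot_points") {
        const { points } = JSON.parse(toolCall.function.arguments); // [{ title, lat, lon }]
        io.to(socketId).emit("plot_points", points);
    }
}

// Frontend side:
// const socket = io("http://localhost:3001");
// socket.on("plot_points", (points) => points.forEach(addMarkerToMap));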

Yes, but all of this is happening on the backend. The final response to the frontend comes only after all of this has already been triggered.

The idea behind the database is that the information survives a page refresh. Otherwise it's lost.

I guess you could just hold a live connection (SSE would make more sense here, since it's one-way communication) and then save the data to local storage. That would only show up on one device, though. So yeah, a database is pretty essential for any real-world scenario.
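
The SSE variant could look like this (a sketch with Express on the backend and localStorage on the client so markers survive a refresh; addMarkerToMap is again a placeholder):

const express = require("express");
const app = express();

app.get("/events", (req, res) => {
    res.set({ "Content-Type": "text/event-stream", "Cache-Control": "no-cache", "Connection": "keep-alive" });
    res.flushHeaders();
    // In practice this write happens whenever a function call produces a marker;
    // one example marker is pushed after two seconds just to show the format.
    setTimeout(() => {
        res.write(`data: ${JSON.stringify({ title: "Tokyo Tower", lat: 35.6586, lon: 139.7454 })}\n\n`);
    }, 2000);
});

app.listen(3000);

// Frontend side:
// const source = new EventSource("/events");
// source.onmessage = (e) => {
//     const marker = JSON.parse(e.data);
//     const saved = JSON.parse(localStorage.getItem("markers") || "[]");
//     localStorage.setItem("markers", JSON.stringify([...saved, marker]));
//     addMarkerToMap(marker);
// };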

Yeah something like that, thanks buddy.