GPT Vision API for Running App data extraction

Dear All,

This Jupyter Notebook is designed to process screenshots from health apps paired with smartwatches, which are used to monitor physical activities such as running and biking. The goal is to convert these screenshots into a dataframe, since these apps often lack a way to export exercise history. To do that, we iterate over each picture with the “gpt-4-vision-preview” model.
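To make the idea concrete, here is a minimal sketch of that loop. It is not the notebook's actual code: the folder name, prompt, reply format, and column names are all assumptions for illustration, and you need the `openai` and `pandas` packages plus an `OPENAI_API_KEY` in your environment.

```python
import base64
import os

# Illustrative prompt: ask the model for one machine-readable line per screenshot.
PROMPT = (
    "Extract the run stats from this screenshot. "
    "Reply with exactly one line: date;distance_km;duration;avg_pace"
)


def encode_image(path):
    """Base64-encode an image so it can be sent inline to the API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def parse_reply(text):
    """Turn the model's semicolon-separated reply into one dataframe row."""
    date, distance_km, duration, avg_pace = [p.strip() for p in text.split(";")]
    return {
        "date": date,
        "distance_km": float(distance_km),
        "duration": duration,
        "avg_pace": avg_pace,
    }


def extract_all(folder="screenshots"):
    """Call gpt-4-vision-preview on each screenshot and collect the rows."""
    from openai import OpenAI  # requires the openai package and an API key
    import pandas as pd

    client = OpenAI()
    rows = []
    for name in sorted(os.listdir(folder)):
        b64 = encode_image(os.path.join(folder, name))
        resp = client.chat.completions.create(
            model="gpt-4-vision-preview",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": PROMPT},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
            max_tokens=100,
        )
        rows.append(parse_reply(resp.choices[0].message.content))
    return pd.DataFrame(rows)
```

Asking the model for a fixed delimited format (rather than free prose) makes the parsing step trivial, though in practice you would want to catch and log replies that do not match it.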

Github link to Notebook

Example screenshot (input):

Dataframe (output):

Link to OpenAI documentation

Hope it was helpful :slight_smile: . Any feedback will be much appreciated (not on my 10k performance though, I know, I am working on improving it :smiling_face_with_tear:).

Regardz.

Sorry to ask, but it does scare me when we use these fuzzy models to interpret visualisations of numbers, especially on health apps!

What’s wrong with using an old school API to provide the exact data or have you no access?

Maybe I’m just old fashioned :sweat_smile:


Hi there!

The app I am using doesn’t have an API to extract data :confused: that’s why I went with the model. (It is also my first try with the OpenAI API, so I kind of forced a use case :sweat_smile:)
There is also the option of using OCR (optical character recognition), but it didn’t work well for me, though I did not try very hard to implement it.

Hope I understood your question well.

Thankz.
