gpt-4o-mini generates thousands of whitespace tokens

I'm extracting text from images with gpt-4o-mini, using a Pydantic model to return a structured response. However, on every request it spends thousands of tokens emitting whitespace. I tried exactly the same structure and prompt with Gemini, which does not do this. Any clues as to why this happens or what can be done?

For example (information redacted):
{
  "company_information": {
    "name": "some name",
    "email": null,
    "address": {
      "street_name": "some street name",
      "house_number": "some house",
      "city": "some city",
      "postal_code": "some postal",
      "region": null,
      "country": "Some country"
    }

(hundreds of additional newlines omitted here - thousands of tokens)

    ,
    "phone": null,
    "bank_details": null
  }
}
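
For reference, a minimal sketch of the kind of setup I'm using - the OpenAI Python SDK's parse helper with a Pydantic response model. The field names mirror the redacted example above; the prompt and image URL are placeholders, and my real call may differ in minor details:

from openai import OpenAI
from pydantic import BaseModel


# Simplified placeholder models - the real schema is larger and redacted.
class Address(BaseModel):
    street_name: str | None
    house_number: str | None
    city: str | None
    postal_code: str | None
    region: str | None
    country: str | None


class CompanyInformation(BaseModel):
    name: str | None
    email: str | None
    address: Address | None
    phone: str | None
    bank_details: str | None


class ExtractionResult(BaseModel):
    company_information: CompanyInformation


client = OpenAI()

# Structured-output request via the SDK's parse helper; prompt and image
# URL below stand in for the real (redacted) ones.
completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Extract the company information from this document."},
                {"type": "image_url", "image_url": {"url": "https://example.com/document.png"}},
            ],
        }
    ],
    response_format=ExtractionResult,
)

print(completion.choices[0].message.parsed)
# completion_tokens is where the thousands of whitespace tokens show up
print(completion.usage.completion_tokens)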