ADA embeddings are not able to understand that overweight, underweight, and rating are semantically the same

I am finding this strange. If I ask “what is the rating of my portfolio”, I want it to retrieve a chunk that states whether my portfolio is underweight or overweight. The chunk basically says “Your portfolio is underweight.” The model is not able to semantically match the question (what is the rating of my portfolio) to this chunk. I am using cosine similarity and expected the similarity score for this pair to be more than 0.79, but it is not.
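For reference, a minimal sketch of how this comparison might be reproduced, assuming the OpenAI Python SDK (v1.x) and the text-embedding-ada-002 model; the query, chunk, and 0.79 threshold come from the post, everything else is illustrative:

```python
import numpy as np
from openai import OpenAI  # assumes the v1.x OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def embed(text: str) -> np.ndarray:
    """Return the ada-002 embedding for one piece of text."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(resp.data[0].embedding)


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Plain cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


query = "What is the rating of my portfolio?"
chunk = "Your portfolio is underweight."

score = cosine_similarity(embed(query), embed(chunk))
print(f"cosine similarity: {score:.3f}")  # the post expected this to exceed 0.79
```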

Is there any degree of external knowledge required to make the determination of under or over? Also worth bearing in mind that embedding models are not complex problem solvers with multilevel inference; they simply put goldfish and salmon in the fish category, but they have no opinion on which is the better purchasing option.