Embeddings are generally used to find the similarity between different pieces of text. The notion of ‘similar’ is made precise with some sort of distance measure.
Normally Euclidean distance or cosine distance is used, and both of these require the vectors to have the same dimension.
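For concreteness, here is a minimal sketch of both measures using NumPy, with two made-up 4-dimensional vectors standing in for real (much higher-dimensional) embeddings:

```python
import numpy as np

# Toy embeddings; real ones have hundreds or thousands of components,
# but both vectors must have the same number of them.
a = np.array([0.1, 0.3, -0.2, 0.5])
b = np.array([0.2, 0.1, -0.1, 0.4])

# Euclidean distance: smaller means more similar.
euclidean = np.linalg.norm(a - b)

# Cosine similarity: closer to 1 means more similar
# (cosine distance is just 1 minus this value).
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(euclidean, cosine)
```

Note that both formulas pair up components of the two vectors one by one, which is exactly why a mismatch in dimensions makes them undefined.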
Suppose instead that the size of an embedding scaled with the length of the text. The issue is that it would not be clear how to compare embeddings of different dimensions in a meaningful way, since neither distance measure is defined between them.
That is why embedding models output a fixed-size vector: ideally the model has learned to represent text of any length meaningfully in that fixed-size space, so that any two embeddings can be compared directly.
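As one concrete illustration (this uses the sentence-transformers library and the all-MiniLM-L6-v2 model purely as an example; any embedding model with a fixed output size behaves the same way):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Texts of very different lengths...
short_vec = model.encode("Hi")
long_vec = model.encode(
    "A much longer piece of text spanning several clauses, "
    "which still maps to a vector of the same fixed size."
)

# ...both come out with the same dimensionality, so they can
# be compared with Euclidean or cosine distance directly.
print(short_vec.shape, long_vec.shape)  # e.g. (384,) (384,)
```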
Hope this makes sense.