If your keyword list stays the same, I’d embed each keyword once up front to get a vector. Then, when a text needs classifying, I’d embed the text and compare its vector against all 300 keyword vectors (cosine similarity, for example) to sort the keywords by relevance to the text. Finally, I’d take the top x keywords from the sorted list.
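A minimal sketch of that flow in Python. The `embed` function here is just a toy stand-in (a character-frequency vector) so the example runs on its own — in practice you’d swap in a real embedding model or API; the keyword list and x are made up for illustration:

```python
import math

def embed(text: str) -> list[float]:
    # Toy stand-in for a real embedding model: a 26-dim
    # character-frequency vector, just to keep the sketch runnable.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_keywords(text: str, keyword_vectors: dict[str, list[float]], x: int = 5) -> list[str]:
    # Embed the incoming text, score every keyword vector against it,
    # sort by similarity, and keep the top x keywords.
    text_vec = embed(text)
    scored = sorted(
        keyword_vectors.items(),
        key=lambda kv: cosine_similarity(text_vec, kv[1]),
        reverse=True,
    )
    return [kw for kw, _ in scored[:x]]

# Embed the keyword list once; reuse these vectors for every new text.
keywords = ["finance", "sports", "cooking"]
keyword_vectors = {kw: embed(kw) for kw in keywords}

print(top_keywords("financial markets fell today", keyword_vectors, x=2))
```

The key point is that the keyword embeddings are computed once and cached, so classifying each new text costs only one embedding call plus 300 cheap similarity comparisons.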
You might also check out this post on a different approach to a similar problem: How I cluster/segment my text after embeddings process for easy understanding? - #9 by sergeliatko