The embedding model manages decently with the misspelled text, as the similarity score shows:

Dot Product: 0.8994434485396415
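A score like the one above can be reproduced with NumPy by taking the dot product of two normalized embedding vectors (so it equals cosine similarity). This is a minimal sketch: the short vectors below are placeholders standing in for real embedding-model outputs, which would come from the model's API.

```python
import numpy as np

# Placeholder vectors standing in for real embedding-model outputs.
clean_vec = np.array([0.2, 0.7, 0.1, 0.6])   # embedding of the clean text
noisy_vec = np.array([0.25, 0.65, 0.15, 0.55])  # embedding of the misspelled text

# Normalize so the dot product equals cosine similarity (range -1..1).
clean_unit = clean_vec / np.linalg.norm(clean_vec)
noisy_unit = noisy_vec / np.linalg.norm(noisy_vec)

similarity = float(np.dot(clean_unit, noisy_unit))
print(similarity)
```

With real embeddings, a value near 1.0 means the model treats the noisy and clean texts as nearly equivalent, while a lower score signals that the typos are degrading the representation.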
GPT-4, however, was able to correct the errors, so one option is to use GPT-4 to spell-check the text before embedding it.