Did you feel better afterwards or did it just exasperate you more?
I understand this is a general issue that could hinder the wider adoption of LLMs. I'm curious whether there might be a way to alleviate this frustration. The models won't necessarily perform better, but perhaps there's a way to encourage people not to give up, or to accept the limitations and work around them.
I imagine a model could recognize this pattern, but any attempt by the model to assuage the user at that point might come across as condescending.