Problem training a model to differentiate between flipped and non-flipped images

What kind of images are you trying to classify? Is it easy to tell whether they're flipped (for example, if they're text-based), or could they be confused at a quick glance (for example, graphs)?
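As a starting point, here is a minimal sketch (the function name is hypothetical, and it assumes images are NumPy arrays) of how one might build a labeled flipped/non-flipped dataset from a set of source images, so each image contributes both a positive and a negative example:

```python
import numpy as np

def make_flip_dataset(images):
    """Given a list of images as (H, W) or (H, W, C) NumPy arrays, return
    (samples, labels) where each image appears once unchanged (label 0)
    and once horizontally flipped (label 1). Illustrative sketch only."""
    samples, labels = [], []
    for img in images:
        samples.append(img)                   # original orientation
        labels.append(0)
        samples.append(img[:, ::-1].copy())   # flip left-right along width axis
        labels.append(1)
    return samples, labels
```

With a dataset like this, whether a classifier can learn the distinction depends heavily on whether the images contain orientation cues (text, asymmetric objects); perfectly symmetric images would make the two classes indistinguishable.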