Can someone please share how cognitive biases, emotions, and social influences are accounted for when the system produces results for the user?
Hi and welcome to the Developer Forum!
How do you mean "considered"? Output is shaped by feedback from people across the world through a mechanism called RLHF (Reinforcement Learning from Human Feedback). Because users who interact with the system more contribute more feedback, the output tends to be biased toward their preferences, in this case a Western bias.
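To make that "more feedback, more influence" effect concrete, here is a minimal sketch. It fits a toy Bradley-Terry preference model, the kind of pairwise-comparison model conceptually underlying RLHF reward modelling, on made-up feedback from two rater groups. The item names, group sizes, and hyperparameters are all illustrative assumptions, not anything from OpenAI's actual training pipeline.

```python
import math

def fit_rewards(comparisons, steps=2000, lr=0.1):
    """Fit a scalar reward per item from pairwise preferences
    via gradient ascent on the Bradley-Terry log-likelihood.

    comparisons: list of (winner, loser) item ids.
    """
    items = {i for pair in comparisons for i in pair}
    r = {i: 0.0 for i in items}
    for _ in range(steps):
        grad = {i: 0.0 for i in items}
        for w, l in comparisons:
            # P(winner preferred over loser) under Bradley-Terry
            p = 1.0 / (1.0 + math.exp(r[l] - r[w]))
            grad[w] += 1.0 - p
            grad[l] -= 1.0 - p
        for i in items:
            r[i] += lr * grad[i]
    return r

# Hypothetical scenario: a larger, more active rater group prefers
# response style "A"; a smaller group prefers style "B".
feedback = [("A", "B")] * 9 + [("B", "A")] * 1
rewards = fit_rewards(feedback)

# The reward model ends up favouring the majority group's preference.
print(rewards["A"] > rewards["B"])
```

The point is that the learned reward simply reflects the mix of feedback it receives: the group that rates 9 times out of 10 ends up defining what "good" means, which is the demographic-bias effect described above.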
There are also a number of other internal training methods used to help align the model, but they are not publicly described; they are presumed to be additional layers trained on moderation requirements.