Different output generated for same prompt in chat mode and API mode using gpt-3.5-turbo

Thanks. I agree that a lower temperature will help.
I now have access to the GPT-4 API, and the issue I encountered with GPT-3.5 persists with GPT-4 as well: the API and the chat version are still producing different answers for the same prompt.
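For reference, this is roughly how I am setting the temperature on the API side (a minimal sketch assuming the official `openai` Python SDK, v1+ client style; the model name and prompt are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "My prompt goes here"},
    ],
    temperature=0,  # lower temperature -> less sampling randomness
    top_p=1,
)

print(response.choices[0].message.content)
```

Even with `temperature=0` the API answer does not match what ChatGPT shows for the same prompt, which is the difference I am asking about.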