I am doing user/product research with ChatGPT, and I am trying to test its limits in describing webpages related to the subject I am interested in. It is very good at giving examples of companies/webpages similar to the one I am doing the user study for. I ask it to describe the differences between the current webpages and why some might be better than others. I also ask about user patterns. I understand that this is a lot to ask and that it is not made for this type of analysis, but I want to explore its limitations: how close it comes to reality, and whether the data it provides is "good enough" to draw conclusions from.
It answers this quite well. I would say there are many similarities between its explanation and the real website, with some errors (the site could also have changed since 2021). It of course has a hard time understanding user patterns, i.e. what happens when you interact with the webpage. How does ChatGPT know this information about the websites, and why can it only describe visuals and not user patterns?
Is it that it does not actually know the visuals, but makes good guesses and describes them in a way that could fit any of several websites matching the description?
Or has it seen HTML and CSS code, and therefore knows in a general way what the layout should look like, but lacks information about the code that controls the interactions?
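To illustrate the second hypothesis with a minimal sketch (the markup and identifiers below are hypothetical, not from any real site): static HTML is plain text, so a model trained on it can describe structure and layout directly, while the effect of an interaction lives in script that only reveals itself when executed.

```javascript
// Hypothetical markup of the kind a language model sees as training text.
// Layout facts (a nav bar, a button labeled "Add to cart") are readable
// straight from the string, with no browser needed.
const markup = `
  <nav class="top-bar"><a href="/shop">Shop</a></nav>
  <button id="add-to-cart">Add to cart</button>
`;

// Structure can be inferred from the text alone:
const hasCartButton = markup.includes('id="add-to-cart"');
console.log(hasCartButton);

// The interaction, by contrast, is attached at runtime, e.g.:
//   document.getElementById("add-to-cart")
//           .addEventListener("click", openCartDrawer);
// Reading that line tells you a click triggers *something*, but not what
// the user actually experiences (animation, drawer, redirect, ...).
```

If something like this is right, it would explain why descriptions of visuals are plausible while descriptions of interactive behavior stay vague: the first is written down in the training text, the second mostly is not.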