I am let down by the communication from OpenAI. There is a general lack of specification where there needs to be. A plain example: when I paid for my $20 monthly subscription, I was wholly unaware that GPT-4 was limited to 25 messages every 3 hours. As far as I can tell, this was not communicated anywhere on the "Introducing ChatGPT Plus" page. I still cannot see anything indicating it, and that alone almost feels malicious, since GPT-4 is one of the "new features and improvements" people would pay for.

Additionally, the May 12 ChatGPT release notes article, which announces the rollout of the plugins and web browsing features, does not indicate 1) that web browsing is only available through the usage-limited GPT-4 model, or 2) how to use it. I simply enabled it in the beta settings and went to an existing chat to see if it might start working. While I did quickly figure out how it works, that should not have been a realization I had to make on my own. I read the entire article, which even explains how to use the plugins feature. Nowhere did it mention these limitations.
Further, I feel that the current safeguards are impractical and stifling. Many of the filters and safeguards get in the way of non-malicious content, while most of the malicious content can be accessed regardless, which hampers the experience of using ChatGPT without actually increasing safety. Multiple times ChatGPT has told me "I can't do that because _____," and simply replying "how is that _____?" gets it to generate the content anyway.
Again, not disclosing how limited GPT-4 was before I paid for a subscription I already considered a bit expensive has really damaged my trust. I would like to see more open practices from OpenAI.