- Give users the ability to submit ideas, feedback, and feature requests directly in chat with ChatGPT. This would require ChatGPT to maintain a running feature-request log behind the scenes that OpenAI developers could review, with timestamps and user details attached to each request and visible only to developers. At the same time, ChatGPT could quickly determine whether other users have made similar requests; rather than letting a massive pile of near-duplicate requests build up, it could assess whether a new request fits within a previously catalogued one and simply add a “vote” to it. Each vote would also carry the user info and timestamp, ensuring a single user (or group of users) can’t spam the same request multiple times over. Furthermore, ChatGPT could sort each request into predefined categories set by the dev team to act as a filter/differentiator (e.g. UX requests, ease-of-use features, upgrades/modifications/improvements to GPT, etc.).
Of course, the dev team would need to ensure the framework for such a feature is robust enough that it can’t be abused by either users or ChatGPT itself, that the relevant checks and balances are in place so ChatGPT won’t unintentionally break or flood the system, and that it all stays in keeping with OpenAI’s relevant policies, etc.
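To make the dedupe-and-vote idea concrete, here is a minimal sketch of that request log. The class and method names (`FeatureRequestLog`, `submit`) are invented for illustration, and the word-overlap similarity check is a crude stand-in for the semantic matching ChatGPT itself would perform:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeatureRequest:
    text: str
    category: str                               # e.g. "UX", "ease-of-use", "model-improvement"
    votes: dict = field(default_factory=dict)   # user_id -> timestamp; one vote per user

class FeatureRequestLog:
    def __init__(self, similarity_threshold: float = 0.6):
        self.requests: list[FeatureRequest] = []
        self.similarity_threshold = similarity_threshold

    @staticmethod
    def _similarity(a: str, b: str) -> float:
        # Crude word-overlap score; a real system would use semantic matching.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(len(wa | wb), 1)

    def submit(self, user_id: str, text: str, category: str) -> FeatureRequest:
        now = datetime.now(timezone.utc).isoformat()
        for req in self.requests:
            if self._similarity(req.text, text) >= self.similarity_threshold:
                # Similar request already catalogued: record a vote, but only
                # once per user, so one user can't spam the same request.
                req.votes.setdefault(user_id, now)
                return req
        req = FeatureRequest(text=text, category=category, votes={user_id: now})
        self.requests.append(req)
        return req
```

A second user submitting a near-identical request would land on the existing entry as an extra vote rather than a new list item, which is exactly the “one catalogued request, many votes” behaviour described above.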
- In light of recent improvements to Llama 2 that allow models comparable to GPT-3’s 175-billion-parameter model to run on an average gaming PC, a discussion needs to be had about what competitive edge ChatGPT offers over people running their own LLM at home. Of course, available compute is one advantage, but there comes a point where, for the average end user, this is no longer relevant. Given continued hardware advances in addition to the efficiency gains from Llama’s open-source commits, it won’t be more than 12 months before such models become readily accessible to a much broader, less computer-literate section of users. This brings me back to my first point. By enabling the average user to submit feature requests directly in chat, having those requests largely moderated by ChatGPT within the guidelines set by the OpenAI dev team, and closing the loop well (e.g. if a user’s suggestion has been successfully integrated since their last log-in, ChatGPT could greet them with a big thank-you on their return, along with a notice that new feature X, as suggested by its user community, has gone live), users would gain a sense of ownership, loyalty, and trust in OpenAI’s platform. They would see and feel that their feedback is valuable, that anything is possible, and that the platform has much greater potential than a model run at home.
- In keeping with the first two points, it would be fantastic if users could have a dedicated “cloud drive” that stores their user data, with the user granting (or revoking) read/write permission for ChatGPT to store only information specific to them. The storage would be owned and paid for by the user, and therefore the user would own and control their own data. The drive would only be accessible to ChatGPT while the user is logged in and has given permission for GPT to keep a running, user-specific memory on it. Once the user’s chat has ended, GPT’s access to the storage is terminated for that instance, so there are no privacy concerns. Users could opt in to the kinds of information they wish GPT to keep in that cloud storage: dates and times of conversations, files uploaded for ChatGPT to respond to, or specific personal information they wish to retain control and privacy over but which would be useful for ChatGPT in giving more insightful answers. It would ultimately let users feel like they are running their own LLM, with memory permanence, without actually having to. I’m certain users would be willing to pay for the additional cloud storage if OpenAI required the use of its own data centre, or alternatively, if OpenAI allowed it, users could offer up suitably specced hard-drive partitions to store their user data on. The idea is to offer a more polished alternative to wanting to run your own LLM. At the same time, OpenAI could offer businesses the ability to do the same while still keeping their proprietary information in house, by essentially cutting off GPT’s access to the company’s servers after the completion of each request. When a new request is opened, GPT picks up the running memory log stored on the company server, so it can carry on where it left off and access any relevant company-specific/proprietary information it needs for the new instance.
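The session-scoped access model above can be sketched roughly as follows. This is an illustrative toy, not a real API: the names (`UserDrive`, `session`) and the local-filesystem stand-in for the user’s cloud drive are all assumptions.

```python
import json
from contextlib import contextmanager
from pathlib import Path

class UserDrive:
    """User-owned storage; the assistant can touch it only inside an open session."""

    def __init__(self, root: Path, allowed_keys: set[str]):
        self.root = root                  # storage the user owns and pays for
        self.allowed_keys = allowed_keys  # opt-in categories, e.g. {"chat_history"}
        self._session_open = False

    @contextmanager
    def session(self, logged_in: bool, permission_granted: bool):
        if not (logged_in and permission_granted):
            raise PermissionError("drive access requires login and explicit consent")
        self._session_open = True
        try:
            yield self
        finally:
            # Access is revoked the moment the chat ends.
            self._session_open = False

    def write(self, key: str, value):
        if not self._session_open:
            raise PermissionError("no active session")
        if key not in self.allowed_keys:
            raise PermissionError(f"user has not opted in to storing '{key}'")
        (self.root / f"{key}.json").write_text(json.dumps(value))

    def read(self, key: str):
        if not self._session_open:
            raise PermissionError("no active session")
        return json.loads((self.root / f"{key}.json").read_text())
```

The key property is that any read or write outside the `with drive.session(...)` block raises an error, mirroring the idea that GPT’s access terminates when the chat instance ends.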
This would require a degree of enterprise-level data-storage software that OpenAI may need to provide to enterprise customers, both to keep it uniform across the market space and to make data management as efficient as possible for ChatGPT’s needs (i.e. software that efficiently restructures relevant information into something like a data cube that GPT can readily make sense of, without needing large amounts of compute every time it reconnects).
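As a toy illustration of the “data cube” idea: raw records are pre-aggregated along fixed dimensions once, so that on reconnect the model queries a small summary instead of re-scanning the raw data. The dimension names here are invented examples, not anything prescribed by OpenAI or any OLAP product.

```python
from collections import defaultdict

def build_cube(records, dimensions):
    """Aggregate raw records into counts keyed by each combination of dimensions."""
    cube = defaultdict(int)
    for rec in records:
        key = tuple(rec[d] for d in dimensions)
        cube[key] += 1
    return dict(cube)

records = [
    {"dept": "sales", "quarter": "Q1"},
    {"dept": "sales", "quarter": "Q1"},
    {"dept": "eng",   "quarter": "Q2"},
]
cube = build_cube(records, ("dept", "quarter"))
# Looking up cube[("sales", "Q1")] now answers "how many sales records in Q1?"
# without touching the raw records again.
```

Building the cube once per sync, rather than on every reconnect, is what keeps the per-instance compute cost low.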