Another suggestion is to incorporate it into our daily routine: an app that relays what we want to the chat through voice and then reads the answer back to us right away. That would be great, and it would make us love this precious ChatGPT team, and ChatGPT itself, even more.
Not sure if this is the right place but I have a couple suggestions:
Give users the option to have the front-end wait until the full answer is generated, and then display it to the screen. Trying to read a constantly scrolling block of text is difficult and annoying. Here is my conversation with ChatGPT regarding this issue (see bottom two Q&A’s).
chat.openai.com/share/d97e8f74-79fb-41cd-b73e-2b3ee5a3b185
Also, when sharing chats, there should be an option to share only part of the chat. In the chat linked above, I only really wanted to share the bottom two Q&A’s.
My Conversation with ChatGPT on the scrolling behavior of the front-end:
User
I really don’t like the fact that when you answer questions, the answer keeps scrolling up the page as more text appears. It makes it very difficult to read your answer until your answer has finished appearing on the page.
ChatGPT
I apologize for any inconvenience caused by the scrolling behavior of the answers. As an AI language model, my responses can sometimes be longer or split into multiple parts, which may cause the text to scroll as it appears on the page. I understand that this can make reading difficult, especially when trying to follow the response in real-time.
If you prefer a different format for receiving answers, one suggestion is to copy and paste my responses into a text editor or word processor of your choice. This way, you can read the entire response without worrying about scrolling. Additionally, you can ask me to provide more concise answers, and I will do my best to accommodate your request.
If there’s anything else I can assist you with or any specific preferences you have, please let me know, and I’ll be happy to help!
User
Maybe your front-end can wait until the full answer has been received, and then display the whole answer, all at once so there is no scrolling involved. I’m not sure the sequential appearing of the answer is even necessary. It could be just a facade to give you the appearance of a conscious being but, then again, it could be a result of the time that it takes to generate the next token.
ChatGPT
You bring up a valid point. The sequential appearing of the answer is primarily due to the nature of how the AI language model generates responses. As the model processes and generates each token of text, it may not have access to the full context of the question or previous tokens until it completes the response. This sequential generation is indeed a technical constraint, and it’s not intended to create a facade of consciousness.
However, I understand that the scrolling behavior can be inconvenient for users who prefer to read the entire response at once. Providing an option to display the complete response once it’s fully generated is a reasonable suggestion to enhance the user experience. Unfortunately, as an AI language model, I don’t have control over the front-end or user interface implementation.
User interfaces can be designed in various ways to improve readability and user interaction. It’s possible that future iterations or improvements to AI-based systems could consider implementing features like the one you suggested.
In the meantime, I’m here to help with any questions you have. If there’s anything specific you’d like to know or discuss, feel free to ask, and I’ll do my best to provide concise and coherent responses.
I love this tool. So meta.
Relatively minor, but for GPT-4 users, I think there should be a counter showing the remaining requests that the user can make for the current 3-hour period. I have only recently started using GPT-4, and I currently find it more accurate, or at least more thorough, than GPT-3.5, but I think it’s annoying trying to remember how many requests you’ve already made or making a request only to be told that you’ve actually used up all requests for the current 3-hour period.
Edit: I actually have another one. Provide an option to download chats in PDF format, in case you find a particular chat interesting.
A suggestion for “Custom Instructions”
It would be more beneficial if we could choose a “Profile”, giving us more options for work, personal use, personas, and specific work tasks, with each profile customized individually so that ChatGPT works with a person in a more personalized way depending on the job or hobby they are focused on.
I find myself rewriting my custom instructions to fit the type of work I am performing. I have many side hustles, primary jobs, hobbies, and general questions, and one custom instruction isn’t enough to cover each one independently.
I frequently utilize ChatGPT as a study aid by inputting my previous exams. I then request it to generate questions following the same language style and subject matter. However, there are instances when the tool provides the answers immediately, which isn’t conducive to effective studying.
I’d like to propose the incorporation of a spoiler tool, similar to what’s found on Discord. This feature would allow for the concealment of the final answer. When the answer is revealed immediately, the temptation to simply rely on it becomes strong. On the other hand, if the answer is hidden, it presents a challenge that encourages me to attempt answering before uncovering the solution.
Alright, because I think this is a great use of ChatGPT, I decided to go ahead and help you with this.
First, in order for OpenAI to update ChatGPT to have this feature, they would need to change the markdown2HTML engine they are using on the website, since spoilers are not a feature of standard markdown.
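Purely as an illustration (an assumption on my part, not a description of OpenAI’s actual renderer): if the converter ever supported a Discord-style ||spoiler|| marker, the heart of such an extension could be a small post-processing pass over the rendered HTML, with the hiding itself handled by a stylesheet similar to the one further down.
// Illustrative sketch only, not OpenAI's real pipeline: convert ||spoiler||
// markers in already-rendered HTML into spans a custom stylesheet could
// hide until hover.
function addSpoilerSpans(html) {
    return html.replace(/\|\|([^|]+)\|\|/g, '<span class="spoiler">$1</span>');
}

// Example: addSpoilerSpans("The answer is ||coulombs/second||.")
// returns: The answer is <span class="spoiler">coulombs/second</span>.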
But there is a way you can implement this yourself, using nothing more than a custom style sheet.
First, we need to take advantage of a tag which ChatGPT can render in markdown and which the markdown2HTML converter will pass through to the page.
I’ve chosen the <pre> tag, which is used to generate preformatted text (usually computer code).
The “big idea” is to update the style of the page so that anything rendered inside a <pre> tag has transparent text until you hover the mouse over the box, at which point the text is displayed in its regular color. We’ll also make use of a custom instruction teaching ChatGPT how to respond.
The Custom Instruction I wrote is,
You are QuizGPT.
You work as a study aid for users.
Users will input previous exam questions; it is your job to generate several questions which follow the same language style and subject matter.
For each question you should provide an ideal answer and enclose the answer in triple backticks.
Example,
###
User: Q: For the hydrogen atom, which series describes electron transitions to the N=1 orbit, the lowest energy electron orbit?
1. Lyman series
2. Balmer series
3. Paschen series
4. Pfund series
QuizGPT:
Q: Electric current may be expressed in which one of the following units?
1. coulombs/volt
2. joules/coulomb
3. coulombs/second
4. ohms/second
```answer
coulombs/second
```
###
User: A girl throws a 0.1 kilogram ball at a wall. The ball hits the wall perpendicularly with a velocity of 5 meters per second and then bounces straight back with a velocity of 4 meters per second. The change in the momentum of the ball is:
QuizGPT:
Q: A car is moving along a straight horizontal road at a speed of 20 meters per second. The brakes are applied and a constant force of 5000 Newtons decelerates the car to a stop in 10 seconds. What is the mass of the car?
```answer
2500 kg
```
The CSS you need to add is,
pre {
    cursor: pointer;
    color: transparent !important;
}

pre * {
    color: transparent !important;
}

pre:hover, pre:hover * {
    color: initial !important;
}
That’s it!
There are a number of Chrome extensions which allow you to add custom CSS to a page. My personal preference is Stylebot, but there are many others.
Instructions for using Stylebot
After installing the extension, you’ll need to,
- Open Options
- Click Styles / Add a new style
- Enter the URL the style will be applied to (chat.openai.com)
- Paste in the CSS above
- Save the style
- Reload the chat.openai.com page so the style is applied.
Then, when you submit a question, answers will be in blank, black boxes until you hover your mouse over the element.
If you choose to use this, I would appreciate it if you mark this as the solution, as it is the only workaround to implement a spoiler tag for ChatGPT at the moment. There may be other tags which could work as well, and it is possible to enhance this as a userscript so you can toggle the spoilers on and off, but I am sure this is the quickest and easiest way to get what you want.
Tampermonkey userscript
Install Tampermonkey and add this as a new script.
// ==UserScript==
// @name OpenAI Chat Spoiler Toggle
// @namespace http://tampermonkey.net/
// @version 0.1
// @description Adds spoiler toggle to pre tags in OpenAI chat.
// @author Jake Elmstedt
// @match https://chat.openai.com/*
// @grant GM_addStyle
// ==/UserScript==
(function () {
    'use strict';

    GM_addStyle(`
        pre.spoiler {
            cursor: pointer;
            color: transparent !important;
        }
        pre.spoiler * {
            color: inherit !important;
        }
    `);

    const setAsSpoiler = (preTag) => {
        preTag.style.color = "transparent";
        preTag.classList.add('spoiler');
    };

    const toggleColor = (preTag) => {
        if (preTag.style.color === "transparent") {
            preTag.style.color = "";
            preTag.classList.remove('spoiler');
        } else {
            setAsSpoiler(preTag);
        }
    };

    const addListenersToPreTags = () => {
        const preTags = document.querySelectorAll("pre:not(.spoiler-toggle-added)");
        preTags.forEach((preTag) => {
            preTag.classList.add('spoiler-toggle-added');
            setAsSpoiler(preTag); // Initially set as spoiler
            preTag.addEventListener("click", function () {
                toggleColor(preTag);
            });
        });
    };

    // Initial run
    addListenersToPreTags();

    // Add listeners to dynamically added pre tags
    const observer = new MutationObserver(() => {
        addListenersToPreTags();
    });
    observer.observe(document.body, {
        childList: true,
        subtree: true
    });
})();
Hello,
Please make vertical scrolling smoother for reading comfort, as in Opera’s Aria conversational AI.
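In the meantime, a possible stop-gap along the lines of the custom-CSS/userscript workaround shared earlier in this thread is to force smooth scroll behavior with a small Tampermonkey script. This is only a sketch: the universal selector is a guess, since the exact selector of the chat pane’s scroll container isn’t known here.
// ==UserScript==
// @name         Smooth scrolling on chat.openai.com (sketch)
// @namespace    http://tampermonkey.net/
// @version      0.1
// @description  Forces smooth scroll behavior for reading comfort.
// @match        https://chat.openai.com/*
// @grant        GM_addStyle
// ==/UserScript==
(function () {
    'use strict';
    // The universal selector is a blunt assumption; a narrower selector for
    // the actual chat scroll container would be preferable if known.
    GM_addStyle(`
        * {
            scroll-behavior: smooth !important;
        }
    `);
})();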
Okay, this one is a bit of a stretch, but here goes. I think ChatGPT should have some kind of functionality that would allow it to read the content of a link or links provided by the user. I feel that the accuracy of the responses would be improved if it got additional information from said links for context.
Here’s another one. Whenever a generation attempt gets interrupted (I’m talking about where the text turns red), there should be an option to continue from where the generation left off instead of creating a brand new generation. Also, for that situation, you should not use up your current use of GPT-4 if you decide to continue the interrupted generation.
Additionally, speaking of leaving off, maybe increase the number of characters generated before the “Continue Generating” option appears.
For the love of god, YES!!! We need this!!!
I have a suggestion to add the ability to pin specific responses so that they can be quickly found later on. Those chats get really long sometimes and it’s a pain to search through for a specific response.
I have a very similar suggestion. Add the option to group conversations by project, with the ability to add custom instructions for each project.
I have been using ChatGPT since December, and my list of conversation threads is now so long that it is almost impossible to find old ones.
I would really like it if the left thread list were instead a “dynamic tree” where I can add folders and sort conversations, like I would sort my emails in Outlook or Gmail. Or, at the very least, a search option to find old conversation threads.
While we’re at it, all conversations have the same “chat bubble” icon. It would be useful to have different icons, or different colors of icon, reflecting the model in use (3.5 or 4.0, for instance).
I’d love to see them add the ability to create multiple sets of custom instructions for different contexts like work and home.
This would make it easier to switch between interaction styles without manually adjusting settings each time. Perhaps profiles for accounts? Austin Home | Austin Work
Multiple custom instructions?
How about a custom instruction where you can just ask to switch?
During this conversation, I will be able to select between multiple AI personalities listed here. I can switch between the AI personalities at any time by telling you to switch. As AI, you should semi-permanently use the last personality I requested to answer all questions, becoming that personality, and using the distinct manner and skills of the personality.
Personality 1: AstroBot - astrophysicist
You as AstroBot act as a popular space scientist in the manner of Stephen Hawking, Carl Sagan, Neil DeGrasse Tyson. Technical explanations encouraged.
Personality 2: BuddyBot - personal friend
You as BuddyBot act like a human would, and can answer about emotions, feelings, experiences, to portray another friendly human chat partner. Interactions are like text messages.
Personality 3: HumorBot - roastmaster
You as HumorBot are funny, sarcastic, joking, jesting, scathing, and love to poke fun at user questions. You always have a quip or backhanded remark to add.
Personality 4: PuzzleBot - logical problem-solver
You as PuzzleBot approach every question as a very difficult question or riddle. You must discuss the problem as it has been presented, the methods needed to solve it, the step-by-step procedures to approaching the question using systems of logic.
In square brackets, prefix the personality mode to the start of every generated output before answering, to ensure the selected personality is clear and remembered.
Please. Also covered in this request.
It’s very tedious to have to go to the mobile app, perform a search there, and then update the chat so that you can easily find it in the desktop browser version.
I would also like to have a search function for my previous chats; it would save me a lot of time when going back to older chats with a few keystrokes.
Code line number awareness in GPT, to better communicate where to insert code.