I have a few critical elements I think are important for production-level use of the canvas feature, to really make it useful and competitive.
You need to be able to tell it to ignore sections of text, or to stop repeating the same suggestions over and over.
Similarly, give the paragraph-selection tool options to specifically request suggestions or comments, if a full pass over a 10k+ word document is too onerous.
Let users dial in the level of detail in document analysis, since it tends to over-emphasize minor points and pad most of its suggestions with fluff.
Give users more friendly tools to edit the formatting rules it wants to push (for example, taking MLA and selectively adding, editing, or removing rules for it to draw on). I don’t know if this can be done in the ChatGPT web UI, but it is certainly something to look at for local installations.
Consider adding AI image generative support based on selected text.
Higher character/word-count limits, and a full pass of the document when adding comments to the entire document, not just the first five, which seems to be the current limitation.
Overall I see this as a solid step in the right direction, but I wonder if these features might get picked up by other AI projects, especially Google, and integrated into their document system. I could easily see a writer-specific module added to Google Docs, an accountant module for Sheets, and so on, for use-case-specific interests. We already do some of this kind of specialization with image-generation models. Specializing first, then integrating those modules into a general-purpose AI like GPT, might be a route to get where we want to be.
Can you imagine having a specialized module tuned to your code base, refined with the feedback and improvements of everyone who uses it, while the general AI tests that same feedback across other similarly specialized modules, and vice versa? That just seems a much more practical way to advance domain-specific progress to me.
An idea I’m currently experimenting with (though I can’t say yet whether it works consistently) would be interesting to test with the help of others.
I have GPT create a construction guide for a piece of code and insert it into a comment block at the very beginning. It contains everything the code is supposed to implement, along with the instruction that this is GPT’s memory and that these instructions MUST always be used and updated during the error-correction process.
The question then is whether GPT also destroys this construction list. You should save the code locally from time to time during the process.
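Concretely, the top of a file might look something like this. This is only a minimal sketch of the idea; the spec items and the `parseCsvLine` function are hypothetical examples, not part of the original post:

```javascript
/*
 * CONSTRUCTION GUIDE -- GPT MEMORY (hypothetical example)
 * This block is the model's persistent spec for this file.
 * RULE: these instructions MUST be consulted and updated during
 * every error-correction cycle. Do NOT delete this block.
 *
 * SPEC:
 *  1. parseCsvLine(line) splits a comma-separated line into fields.
 *  2. Each field is trimmed of surrounding whitespace.
 *  3. Blank input returns an empty array.
 */
function parseCsvLine(line) {
  if (line.trim() === "") return []; // spec item 3
  return line.split(",").map((f) => f.trim()); // spec items 1 and 2
}
```

Whether the model actually honors such a block over many cycles is exactly the open question; keeping the spec next to the code at least makes drift visible.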
I find that GPT, in its current version, is not strong enough to really help efficiently with programming. There are simply too many error-correction cycles and too many bugs in the development process, even for very simple and short programs. These trial-and-error cycles are exactly what one wants to avoid with an LLM.
Honestly, sometimes GPT is like a dog with a stick too wide to fit through the door. It keeps banging on the door and doesn’t look at the problem from another angle, even when you ask it, “Can you do it this way?”
This is a great enhancement for one of the primary use cases: creating structured written materials. With the old version I usually found myself wanting to correct sections of the output, but reprompting wouldn’t work, or would be much slower than just quickly editing it myself. Now this is seamlessly integrated. Well done!
Two feature requests that came from my very first session of “co-authoring” a learning module on a particular topic:
A “read aloud” option for the selected text section.
A quick button that basically prompts “execute this prompt” over the selected text. I find myself asking it to inject new content sections. Currently I write the prompt into the canvas, select it, and then type “do this” or “execute this prompt” or similar. A quick button for processing the selected text as a prompt would be awesome!
I’ve been using it extensively, but for multi-document work it’s very buggy.
System titles can’t be changed, and files cannot be deleted or archived.
Canvas files are created whenever it feels like it, even when asked not to.
Putting them in lockdown/read-only mode helps against the constant creation and editing without any reason, but it is not watertight either. Versioning is nice, but edit comparisons are missing, adjusting a canvas after creation is practically impossible, and it loops immediately after inline edit requests (less so with in-chat requests, but still most of the time). Clearing the whole canvas, having it confirm everything was cleared, and starting from scratch works, but that surely defeats the concept.
Also, in a canvas session you constantly have to ask it not to create canvas files after every question or suggestion. You should also be able to turn OFF updates to specific and/or all canvases, including the creation of new ones.
At some point it just started duplicating existing canvases straight from the conversation. And beware: if you have an open canvas window, your chat can’t do anything other than create new canvases or fail at inline changes.
Apart from that, I can’t even get a single download generated anymore, let alone a zip file, even though it can zip and occasionally manages to.
I’m currently only able to use 4o and 4o with canvas, and the only extraction method for md and json I’ve found is forcing it to use a code window and copying manually.
Am I the only one here pushing and testing this intensively, or am I doing something wrong? I’m very interested in other people’s experience with canvas and 4o over the last few days.
It is still very immature. It needs a specially trained LLM for coding, not the universal ChatGPT.
It helped me a bit, but at the same time it is a source of frustration.
I’m having a similar problem. I’m doing much better with o1-preview. The whole thing works better if you use a waterfall method and give it an entire requirements document of what you want, with all of the details in it; then you will get something almost decent. I also had it make unit testers to test the modules. Canvas is very buggy if you don’t separate your coding logic into areas of functionality that you can find (you have to tell it to do this); otherwise you end up with one big blob of code, and you’ll have to chase the errors one after the next, only to find out the whole thing is a complete bag of cow crap.
Hi, I see that the missing-code-snippets bug has been fixed. Thank you, I appreciate that! Now the problem is the same but with LaTeX formulas: when generating a tutorial on some math topic (for example, variational inference), it does not show the LaTeX formulas in the canvas. I hope it’s easily fixable!
Good update! EXCEPT ChatGPT 4o with canvas KEEPS BUGGING OUT. It cuts off at around line 230, leaving me to use Stack Overflow to figure out my errors on my own! It’s an amazing update, but ChatGPT 4o keeps bugging out and cutting off, and nonetheless SAYS it didn’t cut off! Please fix this; I want someone to fix my errors and do my work for me!!!
I have been using canvas for writing. At the start it was really good. The length extension, refiner, and suggestion button were great. It gives writers a basic co-author to suggest ideas or tidy up convoluted sentences.
However, the suggestions over the last few days have been frustrating. Some moron at OpenAI has decided that five edit suggestions are more than enough. When writing an extended piece of text, five suggestions don’t get past the first two paragraphs, leaving the rest of the text unchecked. And when you ask it to check the rest of the text, it starts at the top and gives the same five suggestions again.
Now I have to force it to check the whole text and put up with GPT’s sickly, over-apologetic attitude. Limitations on helpful features are not useful. This has made my job harder and slower. All because some simpleton listened to the wrong people.
Yeah, I’m blowing off steam.
What really gets on my nerves is the text-writer gimmick; it is just wasting my time! Please stop it, or at least make it optional, even for normal GPT.
Does anybody know why the AI refuses to write math formulas in canvas?
If you cut and paste text using Markdown with math formulas, it displays the text but ignores the formulas.
If I ask about the problem with the formulas, it tells me they are there in the canvas…
Are these formulas hidden in canvas, or is the AI lying?
Even if I open a new canvas and just ask it to display a simple LaTeX formula, it fails.
For instance, it gives me a blank canvas when I open a new “ChatGPT 4o with canvas” and ask:
open canvas and write “$$
|\psi_{\text{initial}}\rangle = \frac{1}{\sqrt{2}} \left( |CC\rangle + |DD\rangle \right)
$$”
And it hallucinates, saying:
“Here’s the equation written out. Let me know if you need more details or adjustments!”
No formulas shown? They were sent right after the numbered headings…
"pattern": ".*", "replacement": "**Maxwell's Equations**\n\n1. **Gauss's Law for Electricity**\n \n \n \\[\n \\nabla \\cdot \\mathbf{E} = \\frac{\\rho}{\\epsilon_0}\n \\]\n \n This equation relates the electric field **E** to the electric charge density \\(\\rho\\). It implies that electric charges create electric fields, and the flux of **E** through a closed surface is proportional to the enclosed charge.\n\n2. **Gauss's Law for Magnetism**\n \n \n \\[\n \\nabla \\cdot \\mathbf{B} = 0\n \\]\n \n This states that there are no magnetic monopoles; in other words, magnetic field lines are continuous and form closed loops. The magnetic field **B** has zero divergence.\n\n3. **Faraday's Law of Induction**\n \n \n \\[\n \\nabla \\times \\mathbf{E} = -\\frac{\\partial \\mathbf{B}}{\\partial t}\n \\]\n \n This equation shows how a changing magnetic field induces an electric field. It captures the principle behind electromagnetic induction, which is the basis for electric generators and transformers.\n\n4. **Amp\u00e8re's Law (with Maxwell's Correction)**\n \n \n \\[\n \\nabla \\times \\mathbf{B} = \\mu_0 \\mathbf{J} + \\mu_0 \\epsilon_0 \\frac{\\partial \\mathbf{E}}{\\partial t}\n \\]\n \n Amp\u00e8re's Law relates the magnetic field **B** to the electric current density **J** and the time rate of change of the electric field. The second term, introduced by Maxwell, is known as the displacement current and ensures that the continuity equation holds even when no physical current is present."
There should be a ChatGPT o1-preview with canvas for coding.
The differences between 4o and o1 are big. 4o quickly runs into problems with even simple code. It cannot even correct small errors in a complete, not-too-complex program; it actually destroys the entire code.
o1-preview is way stronger. I could create a JavaScript tool that builds a table from text, with buttons to add, delete, and move rows, edit the content, and export it back correctly, in only 5 cycles and 400 lines of code. That is absolutely impossible with 4o.
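Leaving the DOM wiring aside, the row-manipulation core of a tool like that can be sketched roughly as below. The function names and the tab-separated text format are my own assumptions for illustration, not the poster’s actual code:

```javascript
// Minimal row model for a text-backed table: each row is an array of cells,
// parsed from tab-separated lines and serialized back the same way.
function textToRows(text) {
  return text.trim().split("\n").map((line) => line.split("\t"));
}

function addRow(rows, cells) {
  return [...rows, cells];
}

function deleteRow(rows, index) {
  return rows.filter((_, i) => i !== index);
}

// Move a row up (delta = -1) or down (delta = +1), ignoring out-of-bounds moves.
function moveRow(rows, index, delta) {
  const target = index + delta;
  if (target < 0 || target >= rows.length) return rows;
  const copy = [...rows];
  [copy[index], copy[target]] = [copy[target], copy[index]];
  return copy;
}

function rowsToText(rows) {
  return rows.map((r) => r.join("\t")).join("\n");
}
```

Keeping this logic pure (each function returns a new array) makes it easy to unit-test separately from the buttons and table rendering, which is exactly the kind of separation the previous post recommends.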
I use 4o only for very simple tasks where o1 is not needed. 4o is useful for “how do I do x and y” questions, so you don’t need to dig through documentation and repositories, and that really saves time. But for coding, 4o is not strong enough. It is even frustrating, because it constantly forgets advice or outright destroys the code after a few cycles.
I’m very interested to see how OpenAI manages hidden tokens while also somehow providing us the ability to run tools during inference. Having o1 only capable of executing a tool at the end of its “reasoning” seems insanely limiting. Maybe it could just generate the query for the tools and we would blindly return the information? Sounds like a hard task to prevent jailbreaking.
Oh well. I’m sure there will be a solution soon enough, and we can enjoy integrating o1 into our solutions (like canvas).
I like the canvas interface, but oddly it seems to hit a limit at fewer than two hundred lines of pretty-printed HTML.
I gave it the task of writing disclaimers for a website, which it did just fine.
Then I gave it the task of styling the page so one can switch between the English, German, and French versions, to remain compliant with certain national laws.
It handled the logic just fine, but it only started the French and German translations, then basically left a placeholder comment along the lines of “additional text goes here”.
It couldn’t handle the full text.
I had to have it translate in separate canvases, and then combine the results.
Total combined result: 339 lines, 22258 bytes, with just a small function, the rest is text in three languages, and it seems to overwhelm the system.
While it still saved some time on the styling, and the translation itself worked, the fact that it can’t process a single file that small is a bit disappointing, given that typical source files are quite a bit larger and the code complexity here is almost non-existent.
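For what it’s worth, the language-switching logic itself is tiny, which suggests it is the volume of translated text, not the code, that hits the limit. A minimal sketch of such logic, with placeholder strings standing in for the actual disclaimer text:

```javascript
// Hypothetical per-language disclaimer fragments (placeholders, not real legal text).
const disclaimers = {
  en: "This website provides no warranty.",
  de: "Diese Website bietet keine Gewähr.",
  fr: "Ce site web est fourni sans garantie.",
};

// Returns the disclaimer for `lang`, falling back to English when unknown.
function getDisclaimer(lang) {
  return disclaimers[lang] ?? disclaimers.en;
}
```

In a page this would just swap the text node on a button click; the hundreds of lines the model struggled with are almost entirely the three language bodies, not logic like this.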
Is there a file/canvas size limit published somewhere?
Hello everyone, first post here.
I just got a double canvas and the option to choose one (ChatGPT 4o).
I know it is (probably) meant to adapt better to me, but I would like to know if there is a “beta” setting, or if I can turn on this “experimental interface feature” whenever I want.
I loved it, and I would like to enable this “side-by-side comparison layout” whenever I want.
Is it possible (to turn this experimental double-canvas feature on)?