Introducing Insert and Edits Capabilities

I generated some code with a lot (20+) of empty methods, but when I ask Edit to “Remove empty methods” it fails. Reducing the number of empty methods to around 5 does work, though. Maybe too much at once :grin:

Edit Mode has some issues with consistency - is there a way to provide some few-shot examples?
Additionally it would be very helpful to have a multiple insertion mode in Insert Mode.

Further update: I’m fairly frequently getting situations where Edit doesn’t modify the original text at all, and instead outputs it in its entirety before appending more text (which may or may not be edited). In one case, it just repeated my input paragraph verbatim fifty times. That would obviously burn a lot of tokens in a practical setting.
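A cheap client-side guard for this failure mode, as a sketch: before accepting an edit result, check whether the output merely echoes or repeats the input. The function name and heuristics here are my own, not anything from the API.

```python
def looks_like_echo(original: str, edited: str) -> bool:
    """Heuristic: flag edit results that just repeat the input.

    Catches two failure modes described above: the output starting with
    the untouched original before appending more text, and the original
    paragraph being repeated verbatim multiple times.
    """
    original = original.strip()
    edited = edited.strip()
    if not original:
        return False
    # Output is the original with extra text appended.
    if edited.startswith(original) and edited != original:
        return True
    # Output is the original repeated two or more times.
    return edited.count(original) >= 2


paragraph = "The quick brown fox jumps over the lazy dog."
bad_output = (paragraph + "\n") * 3
print(looks_like_echo(paragraph, bad_output))  # True
```

On a flagged result, one could simply retry the request rather than pay for the repeated tokens downstream.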

2 Likes

I wish I could see the diff (green for additions, red for deletions) in edit mode. For large texts it’s hard to check visually for edits.
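In the meantime, a diff like this can be produced client-side; here is a minimal sketch using Python’s standard difflib, with `-`/`+` markers standing in for the red/green highlighting:

```python
import difflib


def show_edits(before: str, after: str) -> str:
    """Return a unified-style diff marking deletions (-) and additions (+)."""
    diff = difflib.unified_diff(
        before.splitlines(),
        after.splitlines(),
        fromfile="before",
        tofile="after",
        lineterm="",
    )
    return "\n".join(diff)


before = "The cat sat on the mat.\nIt was happy."
after = "The cat sat on the mat.\nIt was very happy."
print(show_edits(before, after))
```

Unchanged lines are prefixed with a space, so only the actual edits stand out even in a long text.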

I’ve been trying the edit mode extensively and it truly is amazing, but I’ve noticed some particular behaviors in certain situations, especially when using the model for translation. The model isn’t consistent in its output. To explain what I mean by inconsistency, here is a simple example: when asked to translate a sentence, the model outputs the translation, but sometimes it also includes the original version in the answer, and other times it prepends something like “The translation of that sentence is: …”.

I was thinking some features could be added to improve that consistency. For example, the model could ask for feedback after completing a task (a simple boolean value), and when the feedback is true it could keep that answer as a sort of model answer. It would work much like giving answer examples in the normal mode. I don’t know if this was clear; it obviously only applies when you are running a repetitive task where you want consistent output.

Hey.

In Edit mode it would be great if we could point to the actual span of text we would like edited.
Sometimes we include text before and after just for context, and it is not easy to instruct GPT-3 to edit only a specific sentence or paragraph. Either start/end span positions, or a token inserted before and after the specific span, would be a great way to tell the model: “edit this, but consider the whole text as context”.
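One workaround in the meantime: wrap the target span in sentinel tokens yourself, instruct the model to edit only the marked span, then strip the markers afterwards. A minimal sketch (the marker strings and helper names are my own invention, not an API feature):

```python
MARK_OPEN, MARK_CLOSE = "<<EDIT>>", "<</EDIT>>"


def mark_span(text: str, start: int, end: int) -> str:
    """Wrap text[start:end] in sentinel markers for the edit instruction."""
    return text[:start] + MARK_OPEN + text[start:end] + MARK_CLOSE + text[end:]


def extract_span(text: str) -> str:
    """Pull the (possibly edited) span back out of the model's output."""
    start = text.index(MARK_OPEN) + len(MARK_OPEN)
    end = text.index(MARK_CLOSE)
    return text[start:end]


context = "Intro sentence. Fix me pls. Closing sentence."
marked = mark_span(context, 16, 27)
print(marked)  # Intro sentence. <<EDIT>>Fix me pls.<</EDIT>> Closing sentence.
```

Whether the model reliably confines its edits to the marked span is not guaranteed, but in my experience an explicit instruction like “only edit the text between the markers” helps.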

Thanks

4 Likes

I agree with this post. Actually, I came here to point out the need for highlighting edits. You have a great product, OpenAI.

This is great! I have to add a stop sequence though – I gave it multiple examples with the same suffix and different prefix “prompts”, and it generated its own prompt! It added the suffix into what it inserted, then kept going.

I made a simple request to remove numbering from a list, and I did not even have to specify removing the punctuation or spacing; it just did it. In accounting, we are constantly trying to reformat data given to us in a bad format (PDF) into a good format (Excel). Sometimes the data spans hundreds or thousands of pages, so we have to use tools to automate this, and that requires technical skills beyond what most accountants have. Oddly enough, tables spanning many pages are easy for humans to read and to tell which text belongs to which part of the table, but they are very difficult to parse programmatically, because text often extends into other columns, the text in a column is not left-justified, or the formatting changes slightly. I could see this editor being adapted to make easy work of this complicated and time-consuming process.
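For the specific “remove numbering” task, the same thing can also be done deterministically with a short regex; a sketch that assumes the common `1.`, `1)`, and `(1)` label styles:

```python
import re

# Matches leading list numbering such as "1.", "12)", "(3)" plus spacing.
NUMBERING = re.compile(r"^\s*\(?\d+[.)]\s*")


def strip_numbering(lines):
    """Remove leading numeric labels and their punctuation/spacing."""
    return [NUMBERING.sub("", line) for line in lines]


items = ["1. Cash", "2) Accounts receivable", "(3) Inventory"]
print(strip_numbering(items))  # ['Cash', 'Accounts receivable', 'Inventory']
```

The appeal of the model, of course, is not needing to write the regex at all, which matters for the messier PDF cases where the label styles keep changing.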

I tried something like that. I tested parsing a file, giving it just what I wanted; it didn’t work. Then I gave it a few example lines; it still didn’t work, but it was a lot closer (I just had to change a few characters in the regex it created). I added the examples in the instruction.


I love exploring what Edit mode is capable of, and I’ve gotten some really good results so far.

However, I’m frequently experiencing an issue in the Edit Playground: if the completion runs past the edge of the frame, it squishes the prompt window to the point of being unusable. The only solution I’ve found is to refresh the page, which is of course destructive. It happens often enough that it seems like a pretty critical issue; even just being able to manually resize the panes would help.

I’ve seen this referenced in the comments above, but I didn’t see any screenshots.

Keep up the excellent work!!

Anybody else experiencing empty completions on Inserts?

I try to make text-davinci-002/text-davinci-insert-002 write a third paragraph that fits between two already existing ones.
Sadly the completion is either empty or does tons of repetitions.
Every now and then there is something useful in between.

I am especially interested in writing additional paragraphs in German texts.
What freaks me out most is when the completions suddenly switch to English in the middle.

Any suggestions on how to achieve more robust text generation?

An example of what my approach looks like:

Generiere einen langen Paragraphen, der für Erwachsene leicht zu verstehen ist.
Key Word: Koffein
weitere Informationen: Glasflaschen fritz kola rettet die Umwelt

Fritz-Kola wurde 2002 in Hamburg gegründet. Das Unternehmen setzt bewusst auf eine alternatives Marketingkonzept und vertreibt seine Produkte hauptsächlich über den Direktvertrieb an Clubs, Bars und Restaurants. Fritz-Kola hat sich so in den letzten Jahren zu einer beliebten Marke entwickelt, die vor allem bei jungen Menschen sehr beliebt ist.
[insert]
Sie ist eine koffeinfreie Cola, die in Glasflaschen abgefüllt wird. Fritz-Kola rettet die Umwelt, weil Glasflaschen leicht recycelt werden können. Durch das Recycling von Glasflaschen wird weniger Energie verbraucht und es entstehen weniger Abfälle. Fritz-Kola ist eine gute Wahl für die Umwelt.
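For reference, a template like the one above maps onto the insert endpoint by putting everything before the `[insert]` marker into the prompt and everything after it into the suffix. A sketch of the split (the `[insert]` marker is the playground convention used above; the helper name is my own):

```python
MARKER = "[insert]"


def split_for_insert(template: str):
    """Split a playground-style template into (prompt, suffix) around
    the [insert] marker, as the insert endpoint expects."""
    prompt, _, suffix = template.partition(MARKER)
    return prompt, suffix


template = "First paragraph.\n[insert]\nLast paragraph."
prompt, suffix = split_for_insert(template)
print(repr(prompt))   # 'First paragraph.\n'
print(repr(suffix))   # '\nLast paragraph.'
```

Keeping the whitespace around the marker intact matters: stray or missing newlines at the prompt/suffix boundary are one plausible source of the empty or repetitive completions described above.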

I experienced similar behavior.
I’d love to use this great feature more, but working around these strange, unexpected occurrences of such behavior is quite time-consuming.

I am not even sure if frequency penalty management would work in edit mode, as the task is obviously different from simple text generation.
What do you think about this? Any new directions?

Hi! Thank you for another wonderful update!

I’m playing with the new INSERT feature a lot.
I’m using it to generate static procedural content, for example generating an NPC for a game, with stats, description, background, etc. I’m finding the INSERT feature to be much more reliable for generating this kind of content than just using the standard completion at the end of a prompt.

For example, using the standard completion for the NPC generation I just described, the AI is much more likely to “forget” some parameters, to invent new undesirable parameters, or to generate something completely different from what I was expecting.
Most of these problems practically disappear when I try a similar prompt using INSERT.

The method I use is basically to “ask” the AI to generate a new NPC between some examples (each example being a complete NPC) of what it should generate, by putting an “[insert]” between the examples.

That said, one very important piece of feedback: please add other models besides Davinci for INSERT. Right now, I’m not considering using the INSERT feature for my project, because Davinci is the only available model.
It would be really nice if Curie and Babbage became available for the INSERT feature.

Model used: text-davinci-002

Specific use case: using [insert] in numbered lists

I was generating a list of nonexistent aquatic animals and decided to put an [insert] inside entry #2 to make the entry a bit longer. It generated the end of #2 and the beginning of a new entry #3, despite the list already containing an entry #3.

The bolded parts below were what was generated in the [insert].

  1. A small, crab-like creature with a hard, spiny carapace. The creature had 2 large compound eyes on stalks and no antennae. The creature’s mouthparts were adapted for scavenging and it had 5 pairs of legs, the first pair of which ended in large pincers.

3. A small, shrimp-like creature with a hard, spiny carapace. The creature had 2 large compound eyes on stalks and 2 pairs of antennae. Its mouthparts were adapted for scavenging and it had 5 pairs of legs, the first pair of which ended in large pincers.

  1. A long, worm-like creature with a soft, segmented body. The creature had no eyes, but did have 2 pairs of antennae. Its mouthparts were adapted for filter feeding and it had no limbs. Its body was covered in cilia that it used for locomotion.

It would probably be better if it was able to determine that entry #3 already existed, and to avoid disrupting lists with duplicate numerical entries.
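Until the model handles this itself, duplicate or out-of-order numbers are easy to repair in post-processing; a sketch that renumbers entries sequentially (it assumes entries start with `N.` labels, as in the list above):

```python
import re

# Matches a leading "N." label, capturing any indentation before it.
ENTRY = re.compile(r"^(\s*)\d+\.\s*")


def renumber(lines):
    """Rewrite 'N.' labels so entries count 1, 2, 3, ... in order."""
    out, n = [], 0
    for line in lines:
        m = ENTRY.match(line)
        if m:
            n += 1
            out.append(f"{m.group(1)}{n}. {line[m.end():]}")
        else:
            out.append(line)  # non-entry lines pass through unchanged
    return out


entries = [
    "1. A crab-like creature.",
    "3. A shrimp-like creature.",
    "1. A worm-like creature.",
]
print(renumber(entries))
```

This fixes the numbering after the fact, though it obviously cannot detect the deeper problem of the model generating a duplicate entry in the first place.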

That’s a good idea. I could then see how small changes in the prompt affect the suffix.

I pasted a phone call summary in edit mode and used the default “fix the grammar.” It made no edits but added three exclamation points at the end.

1 Like

Extremely impressed with Edit’s ability to optimize code. The snippet below works with result arrays that often contain millions of items, so naturally it makes sense to iterate them as few times as possible. None of my usual tools seemed to understand that basic rationale, yet code-davinci-001 optimized the code exactly how I would expect, without even knowing the language or having access to the surrounding context.

I’ve been on the waiting list for the code-davinci beta for a few months now and still haven’t been accepted. I would love access to those APIs to try out and provide feedback. Really great thing you’re creating!

Hi! First of all, thank you for your work on the playground and the underlying models. It truly is the future of creation!
I have a suggestion for the Edit (beta) mode of the playground:
It would be nice to have a diff highlighting the changes (in green?) after we submit a request in the playground. It would also be nice to have a notice when the output has not changed.

1 Like

I pasted in a long paragraph and asked GPT-3 to summarize it, but it just recopied the whole text. :thinking: