So I'm not completely sure how to go about it, but I do know that when GPT writes code into a block in the chat it's correct, whereas when it does so with the plugin… it's hit or miss.
If anyone has any suggestions, I'm open. Otherwise my only idea involves a Chrome extension tied to the plugin's API server to invoke the copy-code-to-clipboard element.
I'm quite new to coding, so I'm not well versed in what might otherwise be an easily solved problem.
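For what it's worth, here's a minimal sketch of what that extension idea could look like: a content script that finds and clicks ChatGPT's copy button. The selector is an assumption on my part; the real DOM attributes change often.

```javascript
// content-script.js: hedged sketch of the copy-to-clipboard bridge idea.
// The aria-label selector is a guess; ChatGPT's DOM changes frequently.
function copyLatestCodeBlock() {
  const buttons = document.querySelectorAll('button[aria-label="Copy code"]');
  const latest = buttons[buttons.length - 1];
  if (latest) {
    latest.click(); // puts the fenced block's text on the clipboard
    return true;
  }
  return false;
}

// The plugin's API server could trigger this via extension messaging,
// e.g. a chrome.runtime.onMessage listener (not shown here).
```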
The enclosure used for code is a triple backtick (three ` characters; the key is at the upper-left of most keyboards).
A triple backtick is also how this forum detects a code block.
This is one of several types of markdown. So in your plugin description, you can say “use markdown”, or “avoid code block markdown”, or “display as markdown”, or “no triple-backticks” to get better behavior.
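To make that concrete, here's a hypothetical fragment of the plugin's `ai-plugin.json` manifest; the `description_for_model` field is where that wording would go (the plugin name and description text are just examples):

```json
{
  "schema_version": "v1",
  "name_for_model": "my_code_plugin",
  "description_for_model": "Returns code snippets. Avoid code block markdown; display results as plain text, no triple-backticks."
}
```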
So do you want a new friend?
Too late, you have one.
lol, but seriously, I remember learning something about this. I've only started in this space about 5 months ago, though, and I've been shoving things into my brain as fast as I can.
This is anecdotal, but I feel as though the model tends to do marginally better with code when I specify the language in the fencing. So, giving it some JavaScript, I'll write:
```javascript
// JavaScript code here...
```
This is just an old habit to ensure proper syntax highlighting where it is supported.
I'd need to do some actual tests to verify, but my theory supporting that idea is that, on average, the code available to the model that specifies the language in the markdown might be of marginally better quality than the code that does not.
ChatGPT also favors specifying the language itself (though it often gets it wrong; I see `ruby` a lot), which you can see in the header of the code block.
But, as I said, I’d need to come up with some kind of eval to gather empirical evidence.
The ChatGPT code renderer is just a standard library that interprets what it sees within the AI's un-annotated backticks and puts its own CSS (or what have you) at the top of the box. That's not the language coming from the AI.
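As a sketch of what I mean, assuming a highlight.js-style library (this is illustrative; ChatGPT's actual frontend isn't public):

```javascript
// Illustrative sketch of a renderer labelling code blocks.
// Assumes highlight.js; ChatGPT's real frontend stack is not public.
import hljs from "highlight.js";

function renderFence(code, lang) {
  if (lang && hljs.getLanguage(lang)) {
    // The fence declared a known language (e.g. ```javascript): trust it.
    return { label: lang, html: hljs.highlight(code, { language: lang }).value };
  }
  // No language tag: fall back to statistical auto-detection.
  // This guessing step is where mislabels like "ruby" can creep in.
  const guess = hljs.highlightAuto(code);
  return { label: guess.language ?? "plaintext", html: guess.value };
}
```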
This is also my finding. Whenever you can use wording/syntax that would be in common use in the training data, especially high-quality training data, you tend to get better responses. It's why using larger and more descriptive words tends to produce more accurate and reliable output.
I appreciate everyone who tried to answer this question, but rest assured, I have found an answer. In fact, a few answers, under a few conditions.
Also, a few reasons why.
Since some people are still looking for this: after mad experiments, I've figured out the best way is to one-shot any code writing, which means overwriting whole functions when needed.
It also means GPT, even in the code interpreter, is… so very, very "bad at code".
Not sure how else to say that.
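To make the one-shot idea concrete, here's a rough sketch of the kind of standing instruction I mean, using the openai-node chat completions API (the model name and prompt wording are just examples, not a definitive recipe):

```javascript
// Sketch of the "one-shot" rule: always ask for complete, drop-in functions,
// never partial diffs. API usage per openai-node; the prompt text is my own.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const completion = await client.chat.completions.create({
  model: "gpt-4",
  messages: [
    {
      role: "system",
      content:
        "When writing code, always return the entire function, " +
        "rewritten from scratch, so it can overwrite the old one verbatim. " +
        "Never reply with fragments or '...' placeholders.",
    },
    { role: "user", content: "Refactor my parseConfig function to handle YAML." },
  ],
});

console.log(completion.choices[0].message.content);
```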
That has not been my experience. I've now created going on 50k lines of code with GPT-4. For syntax and logic I'd put it at around 95%; for more complex work that involves thinking around a few corners, at about 80-85%; and for things that require domain expertise, like multithreaded task handling, I'd say it's at the 50%-or-better mark.
I have since learned that the means by which GPT responds via a plugin and via chat are distinct.
This is why GPT can provide markdown blocks in the chat session more accurately than in the plugin JSON response.
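A quick illustration of why (assuming a typical plugin setup): the code block has to survive JSON string escaping on the way through, and then the model has to faithfully re-render it, which gives the fences extra chances to break.

```javascript
// Illustrative: a fenced code block travelling inside a plugin's JSON response.
// Newlines and quotes must be escaped, then re-rendered by the model;
// two extra places for the markdown fences to break.
const pluginResponse = {
  result: '```javascript\nconsole.log("hi");\n```',
};

// What actually goes over the wire:
console.log(JSON.stringify(pluginResponse));
// {"result":"```javascript\nconsole.log(\"hi\");\n```"}
```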
I have decided it is not an exploit to leverage this in a browser extension bridge, in combination with the plugin.
However, I have also realized this is far too complicated a method to employ in a production plugin.
Ergo it will remain in my personal developer plugin.