It is total bullshit. I have tested extensively and thought it would improve. It was 10x worse. Just use Copilot in VS Code. It uses 3.5, works MUCH faster, and gives you accurate answers.
Copilot uses GPT-4 now, I think, after GitHub Universe. I may be wrong, so please correct me if so.
Hell yeah! For real, November has been a MONSTER month for tech! The Microsoft announcements, GitHub, OpenAI, even Apple: all have massive, and very interwoven, platform improvements. We live in a good time for development.
GPT chat gets tiring quickly. If you ask a lot of questions and demand more detail in the answers, the quality of the answers increases for a while and then drops dramatically. Give it a rest. More complicated work needs to be divided into pieces. Sometimes you have to wait 24 hours for it to get back into shape. This is probably deliberate bandwidth throttling: the longer you work with it in one session, the more features get disconnected. That's my conspiracy theory.
100% Agreed, it sucks that I’m not really a developer
At this rate though, I don’t need to be! I’ve made a bunch of different dashboards and demos over the past few months, and all of the code I’ve written and learned has been from GPT 3.5 and 4.
Things aren’t slowing down either. I see people talk about AI like it’s the new crypto or NFTs.
They have no idea.
Hey, I made a GPT for coding and fed it a bunch of coding lessons and books … and wrote some custom instructions … I’m still testing, but for some tasks I can see some improvements … if you like, give it a try: g-H2yUl0Nb3-quillcoder (add it after g/ in the address)
The irony of it all! In the year 2023, as we build applications harnessing the power of “intelligence,” we find ourselves resorting to saving prompts in Notes to secure our AI inputs. Simultaneously, we scratch our heads at the quirks of those GPTs.
In my humble opinion, the GPT-4 chat model, with its code interpreter and file uploads, seems to be the key to achieving better results. It’s as if the GPTs are operating like turbocharged hamster wheels, although this might not be the most efficient setup.
As for BuilderGPT, it’s akin to attempting to teach a cat to do calculus! It appears to have no understanding of GPTs or their actions, and it occasionally goes off on a wild tangent, assuming the role of the GPT itself. Quite the entertaining rollercoaster ride, I must say. The daily struggle with the 50-message limit (now reduced to 40, it appears) every three hours is a real challenge. And when you return, the code interpreter seems to suffer from a bout of amnesia, forgetting its own identity.
On the other hand, they have removed “Threads” from the WebUI, which has made Assistants in the playground a bit chaotic. We hope for improvements in those areas.
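For what it’s worth, even with Threads gone from the WebUI you can still keep track of them yourself through the API. Here’s a rough sketch, assuming the openai v1.x Python SDK and its beta Assistants endpoints (the assistant name, instructions, and message are just placeholders), showing how to create a thread, save its ID, run an assistant on it, and read the replies back:

```python
# Minimal sketch: managing Assistant threads yourself now that the
# WebUI no longer lists them. Assumes the openai v1.x Python SDK and
# an OPENAI_API_KEY in the environment; names/prompts are placeholders.
import time
from openai import OpenAI

client = OpenAI()

# Create an assistant (or reuse an existing assistant_id).
assistant = client.beta.assistants.create(
    name="demo-helper",  # placeholder name
    instructions="You are a helpful coding assistant.",
    model="gpt-4-1106-preview",
)

# Create a thread and keep its ID around (e.g. write it to a file),
# since the API does not offer a "list all my threads" call.
thread = client.beta.threads.create()
print("save this somewhere:", thread.id)

# Add a user message and run the assistant on the thread.
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Say hello."
)
run = client.beta.threads.runs.create(
    thread_id=thread.id, assistant_id=assistant.id
)

# Poll until the run finishes, then read the messages back.
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

for msg in client.beta.threads.messages.list(thread_id=thread.id).data:
    print(msg.role, ":", msg.content[0].text.value)
```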
I just spent 2 hours with GPT-4 on a task and it could not solve it … then I tried my quillcoder GPT-4 instance again and had a solution in 5 minutes … give it a try, it really works.
I actually made a tool you may find useful: just paste your description prompt into this GPT and it will ask you follow-up questions to understand and focus the AI’s context, constraints, approaches, and tone. It then responds with a prompt-engineered prompt for you to copy/paste into the GPT Builder. https://chat.openai.com/g/g-YpNXZjksc-draft-me-blueprints
With any new OS come new security issues, though it’s always back-and-forth between attackers and defenders. Any time the paradigm shifts we are allowed a little peek behind the curtain, into a game that is being played all the time.