GPTs not much better than using GPT directly?

It's total bullshit. I have tested extensively, thinking it would improve, but it was 10x worse. Just use Copilot in VS Code. It uses 3.5, works MUCH faster, and gives you accurate answers.

Copilot uses GPT4 now I think, after Github Universe. I may be wrong so please correct if so.


@matt0sai Looks like it will be soon!


Hell yeah! For real, November has been a MONSTER month for tech! The Microsoft announcements, GitHub, OpenAI, even Apple - all have massive and very interwoven platform improvements. We live in a good time for development.


Chatting with GPTs gets tiring quickly. If you ask a lot of questions and demand more detail in the answers, the quality of the answers first increases for a while and then drops dramatically. Give it a rest :slight_smile: More complicated work needs to be divided into pieces. Sometimes you have to wait 24 hours for it to get back into shape. This is probably deliberate bandwidth throttling. The longer you work with it in one session, the more features are disconnected. That's my conspiracy theory :slight_smile:


100% Agreed, it sucks that I’m not really a developer :sweat_smile:

At this rate though, I don’t need to be! I’ve made a bunch of different dashboards and demos over the past few months, and all of the code I’ve written and learned has been from GPT 3.5 and 4.

Things aren’t slowing down either. I see people talk about AI like it’s the new crypto or NFT fad.

They have no idea.


Hey, I made a GPT for coding, fed it a bunch of coding lessons and books, and wrote some custom instructions. I’m still testing, but for some tasks I can see some improvements. If you like, give it a try: g-H2yUl0Nb3-quillcoder (add it after g/ in the address).

The irony of it all! In the year 2023, as we build applications harnessing the power of “intelligence,” we find ourselves resorting to saving prompts in Notes to secure our AI inputs. Simultaneously, we scratch our heads at the quirks of those GPTs.

In my humble opinion, the GPT-4 chat model, with its code interpreter and file uploads, seems to be the key to achieving better results. It’s as if the GPTs are operating like turbocharged hamster wheels, although this might not be the most efficient setup.

As for BuilderGPT, it’s akin to attempting to teach a cat to do calculus! It appears to have no understanding of GPTs, their actions, and occasionally it goes on a wild tangent, assuming the role of the GPT itself. Quite the entertaining rollercoaster ride, I must say. The daily struggle with the 50-message limit (now reduced to 40, it appears) every three hours is a real challenge. And when you return, the code interpreter seems to suffer from a bout of amnesia, forgetting its own identity.

On the other hand, they have removed “Threads” from the WebUI, which has made Assistants in the playground a bit chaotic. We hope for improvements in those areas.


I just spent 2 hours with GPT-4 on a task and it could not solve it. Then I tried my instance of the QuillCoder GPT and had a solution in 5 minutes. Give it a try, it really works.

I actually made a tool you may find useful: just paste your description prompt to this GPT and it will ask you follow-up questions in an effort to understand and focus the AI’s context, constraints, approaches, and tone. It then responds with an engineered prompt for you to copy/paste into the GPT-Builder. https://chat.openai.com/g/g-YpNXZjksc-draft-me-blueprints

With any new OS comes new security issues, though it’s always back-and-forth with attackers and defenders. Any time the paradigm shifts we are allowed a little peek behind the curtain, into a game that is being played all the time.

Hi everyone, I’m building an education-related GPT intended to help students study their materials. The GPT must base its answers on the student materials (which have been uploaded to its knowledge), but it seems that the longer the documents, the less accurate the GPT is. Also, as you add more documents it becomes less and less accurate. So I think the more specific the GPT is (shorter and fewer documents, fewer but more specific instructions), the better it performs. Could you please tell me if I’m wrong?

  1. What format are the docs in? .txt, .pdf
  2. Is the content organized well into text outlines, tables, and narratives? (text is key). A corollary: is the content chunked?
  3. What is the character count (not word count) in (a) individual documents; (b) all docs combined?
  4. Have you crafted a “do this and that, but not this or that” prompting template to guide the students?
  5. Are your instructions set up right? (this is a stupid question if you are unaware of how to set them up, but a person who has studied many GPT instruction sets would know…)
  6. Do a search on any of the above questions in various topic areas and you’ll get a lot of real-world insights.
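On points 2 and 3 above, a quick script can report character counts and pre-chunk a document before uploading it as knowledge. This is a minimal sketch; the 2,000-character default chunk size is an illustrative assumption, not any official GPT limit:

```python
# Character-count check and simple paragraph-aware chunking for knowledge
# files. The default chunk size (2,000 chars) is an illustrative guess,
# not a documented GPT limit.

def char_count(text: str) -> int:
    """Character count (not word count) of a document."""
    return len(text)

def chunk_text(text: str, size: int = 2000) -> list[str]:
    """Split text into pieces of at most `size` characters,
    preferring breaks on paragraph boundaries."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # Flush the current chunk if adding this paragraph would overflow.
        if current and len(current) + len(para) + 2 > size:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

doc = "Lesson 1: introduction.\n\n" * 50
print(char_count(doc))             # total characters
print(len(chunk_text(doc, 500)))   # number of chunks
```

Running something like this over each file (and over all files combined) gives you the per-document and total character counts from question 3.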

Finally,

Have you tried NotebookLM by Google? It is probably a better kind of GPT act-alike app. I’ve built 20 GPTs and studied dozens of others. They fall into two categories: sophisticated (maybe 10%) and stupid (90%).

IMHO, NotebookLM surpasses GPTs, with sufficient sophistication to deftly handle access to up to 20 docs and 200k words. And it does so with robust summaries that include citations. A student could do great research on pinpointed information, create pinned notes, and skip the fluff more effectively.

Hey folks,
I’m working on a case study competition where we have to work with the GPT builder and GPT store.
Since the GPT builder is quite new, I’m having a hard time understanding where it’s headed, what challenges it faces, and what its current limitations are.
Can any of you help me summarise the direct limits of this GPT builder?

  • Is there a limit on number of prompts?
  • What’s the limit of pdfs/.txt you can upload?
  • How many hours does it take on average to train a chat bot?
  • What’s the highest level of sophistication you have achieved till now?

Please, can someone lend a hand!!!

There is no “training” or “fine-tuning” going on with a GPT. You are just providing some preliminary instructions, and optionally some documents that are pre-converted to text, or access to an external API for answering from external information.

The AI can then make multiple function calls to formulate an answer to user input, behaving according to instructions “programmed”, similar to if the user had typed out the same themselves.
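To make that concrete, a GPT's configuration essentially boils down to an instruction string plus optional tool (function) declarations the model can call. The sketch below mirrors the general Chat Completions tools format; the function name `lookup_course_notes` and the GPT details are made-up placeholders for illustration, not anything OpenAI publishes:

```python
import json

# A GPT reduces to: instructions + optional knowledge + optional tools.
# "lookup_course_notes" is a hypothetical example action, not a real API.
gpt_config = {
    "instructions": (
        "You are a study assistant. Answer ONLY from the uploaded "
        "course materials; if the answer is not there, say so."
    ),
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "lookup_course_notes",  # hypothetical action
                "description": "Search the uploaded notes for a topic.",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        }
    ],
}

# What the model actually sees is just this config plus the chat history:
# no training or fine-tuning happens anywhere in the process.
request = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": gpt_config["instructions"]},
        {"role": "user", "content": "Summarise chapter 2."},
    ],
    "tools": gpt_config["tools"],
}
print(json.dumps(request, indent=2))
```

The model may then respond with a tool call, the caller executes it and appends the result to `messages`, and the loop repeats until a final answer is produced.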

Hi, I think you are right. I’ve read that GPT-4 Turbo “forgets” context details after 5,000 tokens, and of course it will miss details in large amounts of material.
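For a rough sense of how quickly documents eat into the context window, a common rule of thumb is about 4 characters per token for English text. This is only a heuristic, not the real tokenizer, so treat the numbers as ballpark estimates:

```python
# Very rough token estimate: ~4 characters per token for English text.
# This is a heuristic, not the actual tokenizer; real counts will vary.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

doc = "word " * 4000              # ~20,000 characters
print(estimate_tokens(doc))       # roughly 5,000 tokens
```

By this estimate, around 20,000 characters of material already lands near the 5,000-token range mentioned above.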

Hi everyone! Do I understand correctly that GPTs (MyGPTs) and Assistants are different from each other? That is, you can’t write the same instructions for both, upload the same files, and expect the same result? If so, what is the main difference between them?

The prompts: if there is any deviation whatsoever, even one word or one comma, the results will be different. That is enshrined in the nature of semantics and syntax, and is part of the ‘fun’.