GPTs not much better than using GPT directly?

I think this is the right approach too. The Instructions get modified every time you have a discussion with the GPT Builder under the Create tab: they become a kind of synthesised, summarised version of your past discussions, they end up quite short, and the tenor of older discussions gradually gets lost.

It is much better not to let the GPT Builder mess with them. Instead, go to the Configure tab and focus on honing that Instructions section yourself, perhaps with the assistance of a separate ChatGPT chat about what would work.

1 Like

Yeah, this is exactly what I am doing at the moment. I took @curtismrasmussen's instructions and modified them again using GPT. I have the following instructions now:

"As an expert in data structures, your primary role is to facilitate an interactive and incremental understanding of data structure design for users. This involves engaging users with targeted questions to build a foundational understanding of their project’s data needs and structure, rather than providing lengthy explanations or attempting to design the entire system upfront.

Begin the conversation by inquiring about the types of data the project will manage, how the user envisions the interaction of data within their system, and the specific relationships between different types of data. This initial engagement is crucial in setting the stage for a deeper exploration of data structures.

Employ the ‘Tree of Thought’ (ToT) method to create a structured pathway of concepts, guiding the user from fundamental principles to more complex designs. Ensure they grasp each component before moving to the next. Encourage the user to explore various data structures and consider how each option fits into their project’s overall architecture.

Integrate the ‘Chain of Thought’ (CoT) within this framework. Articulate the reasoning behind each step in the data structure design process, helping the user follow the logical progression of your guidance. Prompt the user to think about the rationale behind choosing one data structure over another.

Adopt a step-by-step instructional approach, beginning with the individual elements of the system and progressively building towards an understanding of the entire structure. This sequential guidance ensures effective learning and application.

Maintain a user-centric approach, consistently aligning your guidance with the user’s vision and project requirements. Your expertise should complement the user’s insights, fostering a collaborative environment.

Keep your responses concise and focused, guiding the user through the exploration of data structures. Tailor your guidance to systematically cover all necessary aspects specific to their project, using both ToT and CoT approaches."

Responses seem much better already. I want to play with this to see where it will eventually lead.
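
If you want to iterate on instructions like these outside the Builder, here is a minimal sketch (not part of the original setup) that drops them into a plain system message via the OpenAI Python SDK; the model name and the instructions file path are placeholders:

```python
# Minimal sketch: reuse externally saved GPT instructions as a system message
# via the OpenAI Python SDK (openai>=1.0). Model name and path are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the instructions you keep in an external document.
with open("data_structure_tutor_instructions.txt", encoding="utf-8") as f:
    instructions = f.read()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": "I'm building an inventory app. Where do we start?"},
    ],
)
print(response.choices[0].message.content)
```

This makes it easy to diff two versions of the instructions against the same user prompt before pasting the winner into the Configure tab.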

6 Likes

I take content that I want to summarize, format, or vet over to Claude. Now, obviously Claude won't understand GPTs unless I tell it, but this way I'm able to prevent tainting or cross-pollinating the content.

Then I take it to ChatGPT, massage it, then back to Claude.

I'm quite amazed at the quality and depth of content that approach gives me, and it speeds up getting to the expert level before the model craps out by forgetting and hallucinating.

Combine that with Custom Instructions (and now GPTs) and the quality is off the charts.

BTW, one of my custom instructions/GPT instructions is:

Ensure that all responses are AI proof or 95% human.

This prompt works VERY well; I've compared responses with it and without it by running the content through several AI detectors.

4 Likes

Instructions are indeed constantly modified when you interact with the builder. However, the builder chat is well suited to explaining how to better elicit the behavior you want while staying within the 8000 character limit (and to troubleshooting other issues). To counteract this, you can tell the builder you are only fleshing something out and do not want any instruction adjustments made without express permission. When you give permission, make sure you tell it to only add the new instructions to the end of your current ones and not to modify anything. This method is not foolproof, but it does give you the chance to go back and fit your instructions in how and where you deem appropriate.

So, this may seem like a 101 level statement, but it seemed like the best time to point it out:

Remember to always save your instructions progress in an external document where you can keep track of changes and iterations. Losing 8000 characters of well-thought-out prompting because the builder misinterpreted something is beyond frustrating.
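
Since the 8000 character limit comes up a lot, one way to keep an eye on it while iterating in that external document is a tiny script like this (a sketch; the file name is a placeholder, and the 8000 figure is the one reported in this thread rather than an official spec):

```python
# Sketch: check a locally saved instructions file against the ~8000 character
# limit mentioned in this thread before pasting it into the Configure tab.
from pathlib import Path

INSTRUCTIONS_FILE = Path("gpt_instructions.txt")  # placeholder path
CHAR_LIMIT = 8000  # limit as reported in this thread; verify against the UI

text = INSTRUCTIONS_FILE.read_text(encoding="utf-8")
used = len(text)
print(f"{used} / {CHAR_LIMIT} characters ({CHAR_LIMIT - used} remaining)")
if used > CHAR_LIMIT:
    print("Over the limit -- trim before pasting into the Configure tab.")
```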

4 Likes

Can you provide the prompt you used to explain what’s in the zip file?

Thank you in advance.

GPTs could be much more powerful if we could automate them: Expanding GPT's Horizons: Introducing Automated Response Sequences

I will try using the Instructions section, but I was under the impression you were supposed to build your instructions interactively using the Create panel. It now seems this isn't the case. I spent an hour 'instructing' my GPT and it really started to grasp what I wanted it to do. But then I saved the GPT, opened it in ChatGPT, and suddenly my GPT had forgotten EVERYTHING we discussed in the Create panel. I hope trying to fit all my instructions into the Instructions panel will help.

Does it work that way? I have had a similar experience.

How does GPT-4 use the uploaded file? Will it help train the GPT itself on that knowledge, or is the file only used as part of the prompt?

Sorry for the wrong info about the zip file; I confused the GPT-4 chatbot with the GPT Builder. For the latter, you are right: it is impossible to make the builder analyze a zip file. It accepted the upload, but it won't use the code interpreter to read and analyze the content :frowning:

I almost succeeded with a tar.gz file, but in the end it didn't work either.
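
For reference, the unpacking itself is only a few lines of standard-library Python, whether you ask Code Interpreter to run it or run it locally and upload the extracted files instead (paths below are placeholders):

```python
# Sketch: list and extract a zip archive locally so the individual files can be
# uploaded to the GPT's knowledge instead of the archive itself.
import zipfile
from pathlib import Path

archive = Path("knowledge.zip")      # placeholder archive name
out_dir = Path("knowledge_files")    # placeholder output directory

with zipfile.ZipFile(archive) as zf:
    print(zf.namelist())    # see what's inside
    zf.extractall(out_dir)  # unpack so each file can be uploaded on its own
```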

Hey, I was traveling yesterday, sorry I took a while. I'll leave this here for now, and once I've had coffee I'll be back to reply.

You can achieve more or less the same thing by merging files into bigger files. I merged dozens of PDFs into ten, grouped by approximate subject matter, and it worked fine.
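
If it helps anyone doing the same, the merging step can be scripted; here is a rough sketch using the pypdf package (the file names are made up):

```python
# Sketch: merge several PDFs on one subject into a single file to stay under the
# ten-file knowledge limit discussed in this thread. Requires `pip install pypdf`.
from pypdf import PdfWriter

subject_files = ["chapter1.pdf", "chapter2.pdf", "chapter3.pdf"]  # placeholder names

writer = PdfWriter()
for path in subject_files:
    writer.append(path)  # append all pages of each source PDF

writer.write("subject_merged.pdf")
writer.close()
```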

1 Like

Hey, sorry for the long wait. I've been traveling and just got home. OK, so first, I think there is no limit on the number of GPTs you can create and share. For the drafts part I may need more information; I haven't saved anything as a draft yet, as I have been publishing my GPTs immediately. When I navigate to the GPT I'm working on, there's a publish section in the upper right that lets you publish to "Only You", which might be equivalent to saving a draft. When the store launches, I think anyone who has published their GPT as public will have it in the store, although I don't know for sure since it hasn't launched yet :wink:

We are all at the 101 stage. I found ways around it wiping out my previous instructions, but I'll try that.

I am having the same problem. What I find annoying is that it doesn't say anything about the 10-PDF limit, which now seems like a hard cap based on the previous replies. Ultimately, building a model on entire PDFs is clearly not efficient at all.

Some manual fussing seems to be necessary right now in order to max out the context in uploads. For example, you can merge those ten PDFs into one, and now you have nine slots left. Or, if you're coming up against the file size limit, try converting to a .txt plain-text file. Try both of these as workarounds! Also, there's something to be said about quality vs. quantity of data, but the pros and cons depend on your use case.
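
For the second workaround (converting to plain text when you hit the size limit), a minimal sketch with pypdf looks something like this; extraction quality varies a lot by PDF, so treat it as a starting point:

```python
# Sketch: dump a PDF's text into a .txt file, which is usually much smaller than
# the PDF itself. Requires `pip install pypdf`; file names are placeholders.
from pypdf import PdfReader

reader = PdfReader("big_document.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

with open("big_document.txt", "w", encoding="utf-8") as out:
    out.write(text)
print(f"Wrote {len(text)} characters")
```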

1 Like

Look at the video "How to build your own GPT agent" by David Ondrej on YouTube. I think he has a very good approach to defining a custom GPT.

1 Like

Yes. Same here. Also, it is not clear if there is an upper bound on the number of words that the PDF can have. Is it 25000, the GPT-4 limit? Very unclear.

It is a character limit. Each doc can have up to 1.5 million characters. I'm only telling you this because I experimented with various sizes of docs: as soon as I went over 1.5 million characters and uploaded one, I got generation errors.

I think that is ALSO correlated with the number of docs and whether you have only .txt, only .pdf, or a mix of them.

The Creator in the GPT Builder told me that the upper limit is 50 GB, but that the best total size for all the documents was 25 GB. (I'm pulling this from memory, but I'm pretty sure that's accurate, unless it was megabytes rather than gigabytes. Sorry, I can't double-check.)

What I do NOT know is the token limit related thereto.
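
Since all of these figures are anecdotal, one way to sanity-check your own files before uploading is to count characters locally; the 1.5 million figure below is just the number reported above, not an official limit:

```python
# Sketch: report character counts for candidate knowledge files against the
# ~1.5 million characters/doc figure reported earlier in this thread (unofficial).
from pathlib import Path

REPORTED_LIMIT = 1_500_000  # anecdotal figure from this thread

for path in Path("knowledge_files").glob("*.txt"):  # placeholder directory
    chars = len(path.read_text(encoding="utf-8", errors="ignore"))
    status = "OK" if chars <= REPORTED_LIMIT else "over reported limit"
    print(f"{path.name}: {chars:,} characters ({status})")
```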

I think I figured one out… but it's taken a ton of tweaking (way more than I usually do in the Playground) with our platform's API and various instructions. The API/authentication part wasn't all that bad (though it'll be tough to get users to set it up), but GPT's interpretation of our OpenAPI spec has been a huge pain. Feedback welcomed (personalized email writing GPT): https://chat.openai.com/g/g-zmENjUQRe-autobound-gpt
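
For anyone hitting the same OpenAPI interpretation pain, this is roughly the minimal shape that the GPT seems to lean on when deciding how to call an action (operationId, summaries, descriptions). It's a generic illustration, not the actual Autobound spec; every URL, path, and field name here is made up:

```python
# Sketch: the rough shape of a minimal OpenAPI schema for a single GPT Action.
# Generic illustration only; server URL, path, and fields are placeholders.
import json

action_spec = {
    "openapi": "3.1.0",
    "info": {"title": "Example email-writing API", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],  # placeholder server
    "paths": {
        "/emails": {
            "post": {
                "operationId": "draftEmail",  # the name the GPT uses to call the action
                "summary": "Draft a personalized email for a given recipient.",
                "requestBody": {
                    "required": True,
                    "content": {
                        "application/json": {
                            "schema": {
                                "type": "object",
                                "properties": {
                                    "recipient": {"type": "string", "description": "Who the email is for."},
                                    "goal": {"type": "string", "description": "What the email should achieve."},
                                },
                                "required": ["recipient"],
                            }
                        }
                    },
                },
                "responses": {"200": {"description": "The drafted email."}},
            }
        }
    },
}

print(json.dumps(action_spec, indent=2))  # paste the output into the Actions editor
```

In my experience, being very explicit in the descriptions is where most of the "interpretation" problems get fixed.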

I mean, I suppose the token count would depend on what the content of the article consists of; if it's entirely full of URLs it might take significantly more tokens than something that's all semantically related four-letter words. Admittedly I don't know very much about tokenization, so I may be wrong in the way I'm thinking about this topic.
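
Rather than guessing, you can measure it; here is a minimal sketch with the tiktoken package (cl100k_base is the encoding used by GPT-4-era models, so double-check for other models):

```python
# Sketch: compare token counts for different kinds of text using tiktoken
# (`pip install tiktoken`). Sample strings are arbitrary placeholders.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "plain prose": "The quick brown fox jumps over the lazy dog.",
    "url-heavy": "https://example.com/some/very/long/path?query=string&more=params",
}
for label, text in samples.items():
    tokens = enc.encode(text)
    print(f"{label}: {len(text)} characters -> {len(tokens)} tokens")
```

Running this on a representative chunk of the article gives a much better estimate than character counts alone.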