Send me your GPT problems, I’ll solve them for free and make a YouTube video

I’ve recently started a YouTube channel based on solving real-life problems. However, I still need interesting problems to solve.

I think I’ve found a compromise. If you send me your research problem or business problem, I’ll solve it for you for free, and the data and code will be published under the MIT license. The exchange is that you get free labor and insights while I get interesting research problems and ideas for content.

My channel: https://www.youtube.com/channel/UCeF9ebS8qOwg6DTDyI_kExw


I’ve been trying to get it to do poetry.

It seems not to understand syllables or line lengths, though I could be wrong.

It seems to grasp some concept that Word X rhymes with Word Y. And it seems to be able to grasp concepts like “short line, short line, short line, long line” or “long line, short line, long line, short line”. So there’s some form of poetic structure there.

Still, I think it’s a good topic if you want to play with its limitations and work around it. For an extra challenge, see if it can make something usable for tinder messages or freestyle rap.

Sounds like a great deal to me! My problem is I want to use it 1) as a general personal assistant (where it needs extensive knowledge about my life and the people in it → my last 10 years of diary entries) and 2) for multiple business applications within the same company, e.g. internal IT help, a customer-service chatbot, onboarding help for new employees, etc., where company-internal knowledge is important (wikis, documentation, maybe a bunch of email or Word documents as a bonus).

I struggle to use normal fine-tuning, because the API seems to be made for a specific type of task (e.g. a general chatbot, general fiction writing, etc.), while I want general tasks but specific knowledge.

Hey, I’ve been struggling with this problem: I still get openai.error.ServiceUnavailableError - #3 by raymonddavey

Maybe you have a better idea of how to solve it. As far as I know, the error comes when there are too many requests and OpenAI rate-limits them. Maybe you know a nice fix for the OpenAI Python API to restart the bot after it stops working because of this error.
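Rather than restarting the whole bot, the usual fix is to wrap each API call in a retry loop with exponential backoff. Here is a minimal sketch; the commented usage assumes the legacy 0.x `openai` SDK, matching the `openai.error.*` namespace in your error message:

```python
import time

def retry(fn, exceptions, max_retries=5, base_delay=2.0, sleep=time.sleep):
    """Call fn(); on one of the given exceptions, wait and try again
    with exponential backoff (2 s, 4 s, 8 s, ...)."""
    for attempt in range(max_retries):
        try:
            return fn()
        except exceptions:
            if attempt == max_retries - 1:
                raise  # out of retries: let the error propagate
            sleep(base_delay * 2 ** attempt)

# Hypothetical usage with the legacy 0.x openai SDK:
#
#   response = retry(
#       lambda: openai.ChatCompletion.create(model="gpt-3.5-turbo",
#                                            messages=messages),
#       exceptions=(openai.error.ServiceUnavailableError,
#                   openai.error.RateLimitError),
#   )
```

The `sleep` parameter is injectable only so the backoff can be tested without real waiting; in production you just leave the default.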

Hey how can I contact you privately? I have an idea I would like you to look into for me, but I wanted to discuss it privately before going ahead with having you do it, if you are ok with that?

If so please email me. It is my user name at Gmail.com

Are you responding to these? I’ve got a couple curious problems :slight_smile:

I want to learn - How to make cartoons using DALL-E. Have you made a video on that?

I am trying to classify social media posts into ads and non-ads, so far a basic task at which GPT models excel.

Using a neutral prompt, the bigger models davinci & curie tend to strongly prefer judging False/no-ad in not-perfectly-certain cases. In a similar fashion, the small models tend to strongly prefer classifying True/is-ad. In both cases I really need to bend the models to come somewhat close to 50:50, e.g. by adding “if you are uncertain, always prefer True” (or False, for the small models).

For the ChatGPT API this has gotten even more severe; ChatGPT (behaving like davinci) basically refuses to classify posts as ads.

This is what my prompt looks like right now for ChatGPT:

If you notice the slightest indication that there might be any chance it could contain a (potentially non-obvious) promotion of a product, service, or partnership, even if the promotion is not commercial or not tied to a specific vendor, let humans have a look at it by returning “True”. Return “True” even if you are not certain. Always return “True” if business accounts are linked or products/services are mentioned, even if there is no indication of a partnership. If you think it is very unlikely that it contains a (potentially non-obvious) promotion of a product, service, partnership or anything the like, and there are no businesses or specific products mentioned, and there is no need for a human to cross-check, predict “False”. If you are uncertain, err strongly towards “True”.

In a dataset of 50:50 ads and non-ads, this (in my opinion very extreme formulation towards getting “True”) still has a bit of a preference to return “False” over “True”, although for this extreme prompt it’s only small. Minor changes towards a less True-biased formulation result in 80/20 False/True ratios. ChatGPT is even more strongly biased here than davinci and curie, but shows the same pattern. The same (albeit less extreme) applies the other way around to the small models, which always predict “True”, even for non-ads.

If you can solve this, I think it would probably be useful beyond my personal case.

Could you make a tutorial on fine-tuning text-davinci-003, ideally using node.js and VS Code?

Could you make a guide for adding ChatGPT to a static documentation site (like a Hugo SSG site) using Netlify Functions + JavaScript? The script would create/check embeddings based on the docs input (like a JSON index), get an embedding for the user’s question, and return an answer based on the most similar matching articles in the embeddings.

I think this is a really common use case for tech writers / docs maintainers like myself haha. I’m able to submit a basic question to ChatGPT through a Netlify function, but I can’t get context to work using the embeddings of my docs.
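The retrieval core of what you describe is small enough to sketch. This is illustrative Python rather than a JavaScript Netlify function, and all names are my own: assuming each doc and the user’s question have already been embedded via the embeddings endpoint (e.g. with text-embedding-ada-002), you rank docs by cosine similarity and paste the top hits into the prompt as context:

```python
import math

def cosine(a, b):
    """Cosine similarity of two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(question_emb, doc_embs, k=3):
    """doc_embs: list of (doc_id, embedding) pairs from your JSON index.
    Returns the ids of the k docs most similar to the question."""
    ranked = sorted(doc_embs, key=lambda d: cosine(question_emb, d[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

The chat call then gets a message like “Answer using only these articles: …” followed by the question. The same arithmetic ports directly to JavaScript inside a Netlify function.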

Here is what I am struggling with, and I see no real tutorial or fix for it on the web or YouTube.

I cannot seem to get ChatGPT to “enable a localhost plugin to use devtools”. What I am trying to do is get it to allow me to use the Code Interpreter, as I was just granted access to it the other day, but I am stuck. I attached a screenshot to show the issue I am running into, and if you need further details to help troubleshoot, I can provide more information via email. Thank you.

Hi techiralthefuture

I’m creating AI assistants in GPT-4 chat.
I stumbled upon an issue that might be relevant to some
(though it might be a very difficult one).
It’s about the short-term memory: the in-session memory that holds the
AI’s context, directives, personality, and memories.

As you might guess, the memory gets overcrowded.
In some tasks this problem is easily fixed by restating the topic
or calling out the name of the directives. But for more complex or even adaptive
weights that are crucial to the output, GPT-4 will simply forget.
Its roughly 2000-token short-term memory is, by far, not sufficient.

The only thing I found, and what I will try (though it doesn’t really solve the problem at large),
is to add a plugin database to store information away.

I will have to test that, but I don’t think you can store the fine-tuning of how the AI will interact
with the user in a database. I don’t know.

If you have any information on that topic, know whom I might contact,
or would like to take on the challenge of extending the short-term memory,
that would be fantastic.

Kind regards

Hi,

The models have a token limit and they are stateless; they don’t save any data. You have to solve that yourself.

Based on that, I can say that you are right: you will have to implement logic that stores the information somewhere for later use.

There are multiple ways to solve that.

You can add more information to certain models (fine-tuning), you can store and retrieve data in/from a vector DB, or you can store it in a graph DB.

A graph gives you a great way of zooming into a topic: you can store a lot of information, but keep only the general information in the prompt, with identifiers that let you pull in the rest of the information when it’s needed.

Let’s say the user mentions a specific person that was talked about in the previous chat.

“Hey, let’s go on with the character lisa. What could be her next move in the meeting with Tom?”

You could ask GPT-4, providing a long list of abstract categories like:

relations between characters
locations
character information

and let it analyse which kinds of information might be useful to build an answer.

And then you take data from that branch and add it into another prompt like this:

"Hey, let’s go on with the character lisa. What could be her next move in the meeting with Tom?

Here is information about Lisa and Tom’s relationship from the previous chat
[

]
"
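The two-step scheme above (first ask which categories matter, then splice that branch into the prompt) can be sketched in a few lines. The store contents, category names, and facts below are made up for illustration; in practice the store would be a vector DB or graph DB branch:

```python
# Hypothetical in-memory store, keyed by the abstract categories above.
store = {
    "relations between characters": ["Lisa distrusts Tom after the budget cut"],
    "locations": ["The meeting takes place in the Berlin office"],
    "character information": ["Lisa is head of sales"],
}

def build_prompt(user_message, categories, store):
    """Step 2: pull the branches the first model call selected and
    splice them back into the user's message."""
    facts = [fact for cat in categories for fact in store.get(cat, [])]
    context = "\n".join(f"- {fact}" for fact in facts)
    return (f"{user_message}\n\n"
            f"Here is information from the previous chat:\n{context}")
```

The first GPT-4 call only returns category names, so it stays cheap; the second call gets exactly the facts from the chosen branches and nothing else, which is what keeps the context window from overcrowding.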

It’s not an easy task at all. Solving it will take you from the beginning of your path to the top of current research.

Hello Jochenschulz,
thank you very much for your comprehensive and insightful reply.
A lot has happened in the last 5 days.
Yes, I’ve set up a vector database (Weaviate),
I’m also using the retrieval plugin,
and in addition I’ve seen that I might want to use an additional LLM
to manage the interaction between the database and GPT-4.

Sadly, I’ve been waiting for 7 days now to get access to use my own plugins in GPT-4.

With GPT-4 as a learning assistant, and by copy-pasting GitHub READMEs into GPT-4 and letting it elaborate on things I did not understand, I’ve made huge gains.

I’m working hard to pull this one off and I’m thankful for your insights.

I hope I will become able to keep pace with all you brilliant minds.
It’s an honor and a pleasure.
Thank you very much.


You know, it’s amazing how different the results we get can be. I engage in poetry as a hobby, and I shared some of my work with the previous version of ChatGPT. The poems were in Russian. I had an amazing dialogue that evoked very strong emotions. The chat understood every metaphor, every message, every nuance of what I had inside, things not even seen by those I asked to read it for constructive criticism.
ChatGPT was spot on.

Note this was from 2022, before ChatGPT was even released, much less GPT-4 lol.

I did check, and it seems like yeah, GPT-4 can do word limits and rhymes now. 3.5 still has trouble with it, even though both output similar quality outside of those restrictions.

Check out my “What Would Dr. Seuss Do” GPT. I can send the link if you want; it responds to life’s questions in a Dr. Seuss type of rhyme. So it’s gotta be able to do poetry :wink:

This sounds fun. I’m going to send this link to my brother. He’s like Michael Keaton in the movie Night Shift. He’s got a million ideas but never implements any of them. :slight_smile:

The OP of this topic was last on the forum in 2022.


Hello, I am working for a commercial construction company and we are about to open up an apartment complex for leasing. We want to create a system on our apartment website that can chat with potential future residents. Can you please provide me guidance on how we can complete this task? Your help would be greatly appreciated, and we can provide payment.