Send me your GPT problems, I’ll solve them for free and make a YouTube video

I’ve recently started a YouTube channel based on real-life problem-solving. However, I still need interesting problems to solve.

I think I’ve found a compromise. If you send me your research or business problem, I’ll solve it for you for free, and the data and code will be published under the MIT license. The exchange is that you get free labor and insights, while I get interesting research problems and ideas for content.

My channel: https://www.youtube.com/channel/UCeF9ebS8qOwg6DTDyI_kExw


I’ve been trying to get it to do poetry.

It seems to not understand syllables or line lengths, though I could be wrong.

It seems to grasp some concept that Word X rhymes with Word Y. And it seems to be able to grasp concepts like “short line, short line, short line, long line” or “long line, short line, long line, short line”. So there’s some form of poetic structure there.

Still, I think it’s a good topic if you want to play with its limitations and work around them. For an extra challenge, see if it can produce something usable for Tinder messages or freestyle rap.

Sounds like a great deal to me! My problem is that I want to use it 1) as a general personal assistant (where it needs extensive knowledge about my life and the people in it → my last 10 years of diary) and 2) for multiple business applications within the same company, e.g. internal IT help, a customer service chatbot, onboarding help for new employees, etc., where company-internal knowledge is important (wikis, documentation, maybe a bunch of email or Word documents as a bonus).

I struggle to use normal fine-tuning, because the API seems to be made for a specific type of task (e.g. a general chatbot or general fiction writing), while I want general tasks but specific knowledge.

Hey, I’ve been struggling with this problem: I still get openai.error.ServiceUnavailableError - #3 by raymonddavey

Maybe you have a better idea of how to solve it. As far as I know, the error occurs when there are too many requests and OpenAI limits them. Maybe you know a nice fix for the OpenAI Python API that restarts the bot after it stops working because of this error.
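A minimal sketch of one way to handle this, assuming the pre-1.0 `openai` Python package and the chat completions endpoint: catch the error and retry with exponential backoff instead of letting the bot crash. The model name and delay values are just placeholders.

```python
import time
import openai
from openai.error import ServiceUnavailableError, RateLimitError

openai.api_key = "YOUR_API_KEY"  # placeholder

def chat_with_retry(messages, max_retries=5, base_delay=2.0):
    """Call the chat endpoint, retrying with exponential backoff
    when the service is unavailable or the request is rate-limited."""
    for attempt in range(max_retries):
        try:
            return openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=messages,
            )
        except (ServiceUnavailableError, RateLimitError):
            # Wait longer after each failed attempt: 2s, 4s, 8s, ...
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("OpenAI API still unavailable after retries")

reply = chat_with_retry([{"role": "user", "content": "Hello"}])
print(reply["choices"][0]["message"]["content"])
```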

Hey how can I contact you privately? I have an idea I would like you to look into for me, but I wanted to discuss it privately before going ahead with having you do it, if you are ok with that?

If so, please email me. It is my username at Gmail.com.

Are you responding to these? I’ve got a couple of curious problems 🙂

I want to learn how to make cartoons using DALL-E. Have you made a video on that?

I am trying to classify social media posts into ads & non-ads, which is so far a basic task at which GPT models excel.

Using a neutral prompt, the bigger models (davinci & curie) strongly prefer judging False/no-ad in cases that aren’t perfectly certain. In a similar fashion, the small models strongly prefer classifying True/is-ad. In both cases I really need to bend the models to come somewhat close to 50:50, e.g. by adding “if you are uncertain, always prefer True” (or False, for the small models).

For the ChatGPT API this has gotten even more severe; ChatGPT (behaving like davinci) basically refuses to classify posts as ads.

This is what my prompt looks like right now for ChatGPT:

If you notice the slightest indication that there might be any chance it could contain a (potentially non-obvious) promotion of a product, service, or partnership, even if the promotion is not commercial or not tied to a specific vendor, let humans have a look at it by returning “True”. Return “True” even if you are not certain. Always return “True” if business accounts are linked or products/services are mentioned, even if there is no indication of a partnership. If you think it is very unlikely that it contains a (potentially non-obvious) promotion of a product, service, partnership or anything the like, and there are no businesses or specific products mentioned, and there is no need for a human to crosscheck, predict “False”. If you are uncertain, err strongly towards “True”.

In a dataset of 50:50 ads and non-ads, this prompt (in my opinion very extremely formulated towards getting “True”) still has a slight preference for returning “False” over “True”, although for this extreme prompt it is only small. Minor changes that make the formulation less biased towards True result in 80/20 False/True ratios. ChatGPT is even more strongly biased here than davinci and curie, but shows the same pattern. The same applies (albeit less extremely) the other way round to the small models, which always predict True, even for non-ads.
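For reference, a minimal sketch of the kind of classification call described here, assuming the gpt-3.5-turbo chat endpoint with the prompt above as the system message (the model name, temperature, and message framing are assumptions, not the poster’s exact setup):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Paste the full classification prompt quoted above here
CLASSIFY_INSTRUCTIONS = "If you notice the slightest indication ... err strongly towards \"True\"."

def is_ad(post_text):
    """Classify one social media post as ad (True) or non-ad (False)."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # assumed; "the ChatGPT API" in the post
        temperature=0,           # deterministic output for classification
        messages=[
            {"role": "system", "content": CLASSIFY_INSTRUCTIONS},
            {"role": "user", "content": f'Post: "{post_text}"\nAnswer only "True" or "False".'},
        ],
    )
    return resp["choices"][0]["message"]["content"].strip().startswith("True")
```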

If you can solve this, I think it would probably be useful beyond my personal case.

Could you make an episode with a tutorial on fine-tuning text-davinci-003, preferably using node.js and VS Code?

Could you make a guide for adding ChatGPT to a static documentation site (like a Hugo SSG) using Netlify functions + JavaScript? The script would create/check embeddings based on the docs input (like a JSON index), get an embedding for the user’s question, and return an answer based on the most similar matching articles in the embeddings.

I think this is a really common use case for tech writers / docs maintainers like myself haha. I’m able to submit a basic question to ChatGPT through a Netlify function, but I can’t get context to work using the embeddings of my docs.
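The question asks for JavaScript inside a Netlify function, but the retrieval step that ties the embeddings to the answer is the same in any language. Here is a minimal Python sketch of that step, assuming a pre-built `docs.json` index of `{"text", "embedding"}` entries, the text-embedding-ada-002 model, and the pre-1.0 `openai` package; all file and model names are assumptions for illustration.

```python
import json
import numpy as np
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def embed(text):
    """Get an embedding vector for a piece of text."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

def top_articles(question, docs, k=3):
    """Rank pre-embedded doc chunks by cosine similarity to the question."""
    q = embed(question)
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(docs, key=lambda d: cosine(q, np.array(d["embedding"])), reverse=True)[:k]

# docs.json is assumed to hold [{"text": ..., "embedding": [...]}, ...]
docs = json.load(open("docs.json"))
question = "How do I configure the site?"
context = "\n\n".join(d["text"] for d in top_articles(question, docs))

# Pass the most similar articles as context for the answer
answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": f"Answer using only this documentation:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(answer["choices"][0]["message"]["content"])
```

The same flow translates to a Netlify function: embed the incoming question, rank the stored doc embeddings, and prepend the best matches to the chat request.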