2-shot plus step-by-step prompts for gpt-3.5-turbo performance at gpt-4 level?

I need to read the paper, but I would add that 2-shot is essentially how I’ve been getting GPT-3.5 to behave in my Self-INSTRUCT stuff. I use GPT-4 to generate an example that I feed into 3.5, and it definitely improves 3.5’s reasoning moving forward.
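
A minimal sketch of that pattern, assuming the pre-1.0 `openai` Python SDK (`pip install "openai<1.0"`, with `OPENAI_API_KEY` set in the environment); the sample problems and the final task text are placeholders:

```python
import openai  # pre-1.0 SDK style; reads OPENAI_API_KEY from the environment

# Placeholder sample problems; swap in tasks from your own domain.
sample_problems = [
    "Sample problem 1: ...",
    "Sample problem 2: ...",
]

# Step 1: ask GPT-4 once for a worked, step-by-step solution to each sample.
demonstrations = []
for problem in sample_problems:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"{problem}\nShow every reasoning step."}],
    )
    demonstrations.append((problem, resp["choices"][0]["message"]["content"]))

# Step 2: replay GPT-4's solutions as 2-shot demonstrations for gpt-3.5-turbo,
# then ask it to handle the real task the same way, step by step.
messages = [{"role": "system", "content": "You reason carefully, step by step."}]
for problem, solution in demonstrations:
    messages.append({"role": "user",
                     "content": f"{problem}\nShow every reasoning step."})
    messages.append({"role": "assistant", "content": solution})
messages.append({"role": "user",
                 "content": "New problem: ...\nSolve it the same way, step by step."})

answer = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(answer["choices"][0]["message"]["content"])
```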

You may not find all that much there. It’s a pretty expensive prompt if you don’t need ToM (theory of mind) capability. I’m always looking to understand, from a ToM perspective on LLMs themselves, where their limitations are.

Here’s another paper I just stumbled on. Long, but probably a must-read. Eric Horvitz is chief scientific officer (or something like that) at Microsoft Research, and a very smart guy. Acknowledgement: I followed a link in another topic about prompts; @mstefanec found it before me.

Sparks of Artificial General Intelligence: Early experiments with GPT-4


Yeah, that’s a classic at this point. Really great paper, must read.

Maybe start a new thread (I’ll contribute!) to track great papers in general, though, as it’s somewhat off topic for this one.

Good idea. What should we call it (the new thread)?

Good question. I like asking GPT4 for advice on these things.


Maybe “must read GPT/LLM papers”? heh

Works for me.
I’ll repost that link to start the thread.


The forum rejected the title! It must be at least 25 chars :rofl:

I added a ‘Foundational’ up front


For anyone looking for it, it’s here: Foundational must read GPT/LLM papers

If you post a paper in another thread that would work well in the foundational one, link back so people can find it :slight_smile: It’s a little hidden away in the ChatGPT category, which I think is muted for everyone. Not necessarily a bad thing.


Didn’t realize that! It was the closest category I could find; I couldn’t see any way to create a new category. Maybe I can still edit the original post if you can suggest a better one.

The community category would probably be good. It’s below the fold, but I think it’s most appropriate.

FYI, the API needs you to send the entire session history each time you want a response, as it doesn’t retain any state on its end. So conversations get steadily more expensive as they go on: every new request re-sends the full history, which counts towards token costs each time, so total token usage grows roughly quadratically with the number of turns.
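
A minimal sketch of what that means in practice, again assuming the pre-1.0 `openai` Python SDK; `ask` is a hypothetical helper and the example prompts are placeholders:

```python
import openai  # pre-1.0 SDK; reads OPENAI_API_KEY from the environment

# The API holds no state: this list is the only "memory" the model ever sees.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_message: str) -> str:
    """Append the user's turn, resend the ENTIRE history, store the reply."""
    history.append({"role": "user", "content": user_message})
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    reply = resp["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    # prompt_tokens grows every turn, because the whole history is billed again.
    print("prompt tokens this call:", resp["usage"]["prompt_tokens"])
    return reply

ask("Summarise the Sparks of AGI paper in one sentence.")
ask("Now expand that into three bullet points.")  # re-sends the first exchange too
```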