How can I join the waiting list for GPT-4 fine-tuning?

Hello Community!

I understand it’s still for a small number of users, but is there a way to join that waiting list / shortlist? Any shortcuts?

I am an API user but have not used the fine-tuning models before.

Right now the criterion is being a very heavy user of the fine-tuning system: someone who has trained hundreds of models with a lot of data and has a great deal of experience categorising, evaluating, and creating feedback and reports on model performance against a set of high-quality evaluations.

Or you can help out on the forum and achieve Trust Level 3, which grants access to our exciting Lounge area where lucky members get access to alpha services! It takes just 50 days of visiting, around 10,000 posts read (including 25% of all new posts); then help out new members by bringing enlightenment and joy to all.


Thank you so much for the so-detailed response!

Hi, that does make sense, but it is a shame.

I’ve trained hundreds of open-source models for internal use, have large fine-tuning datasets sitting on my drive, and would love to see if GPT-4 is capable of the tasks I have in mind.
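For anyone in a similar position, it may help to get those datasets into the chat-format JSONL that OpenAI's existing fine-tuning endpoint ingests; presumably GPT-4 fine-tuning will look similar, though that's an assumption. The example rows below (shop-floor request in, gcode out) are entirely hypothetical:

```python
import json

# Hypothetical training examples mapping natural-language requests to gcode,
# in the chat-format JSONL used by OpenAI's current fine-tuning endpoint.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You translate shop-floor requests into gcode."},
            {"role": "user", "content": "Put the head 10 mm above the center of the workpiece."},
            {"role": "assistant", "content": "G90\nG0 Z10"},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You translate shop-floor requests into gcode."},
            {"role": "user", "content": "Switch to millimetre units."},
            {"role": "assistant", "content": "G21"},
        ]
    },
]

def write_jsonl(path, rows):
    """Write one JSON object per line, the layout fine-tuning jobs expect."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

write_jsonl("gcode_train.jsonl", examples)
```

Once a file like this is validated, it would be uploaded and referenced when creating the fine-tuning job.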


You actually sound like a potential candidate. I don’t know what their exact criteria are, but you should drop a message to the support bot: in the bottom-right corner there is an icon; click it to open the bot and leave your contact details along with a message about your skill set and desire to alpha-test GPT-4 fine-tuning.


I’d love to see what you can make, the open source community has been on a roll.

I want to fine-tune GPT-4 on gcode so I can control my CNC machine by talking to it.

“Hey, put the head 10 mil above the center of workpiece please”
“Yo! Grab the bull-nose mill please!”

This is complicated and dangerous, and I’m hesitant to get started without 128k context and fine-tuning.
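To make the idea concrete, here is a toy, rule-based sketch of the kind of mapping a fine-tuned model would be asked to learn. The command grammar and the choice of gcode words are hypothetical, not anyone's actual pipeline:

```python
import re

def command_to_gcode(text: str) -> str:
    """Parse a narrow set of spoken commands into gcode (toy sketch only)."""
    # e.g. "put the head 10 mil above the center of workpiece"
    m = re.search(r"head (\d+(?:\.\d+)?) (?:mil|mm) above", text.lower())
    if m:
        height = float(m.group(1))
        # G90 = absolute positioning, G0 = rapid move on the Z axis
        return f"G90\nG0 Z{height:g}"
    raise ValueError(f"Unrecognised command: {text!r}")
```

A real system would of course use the model rather than regexes; the point is just that the target output per utterance is a short, checkable gcode fragment.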


I assume you’ll be using the Whisper API or open-source code too, then? That sounds like a complex project. I’d love to see what you can come up with though.

Why can’t you just use a fine-tuned Llama 2 70B model instead?

That’s it. Whisper is unreal; it gets context so well.

The little models are cool, but their reasoning abilities are pretty much non-existent.

GPT-4 out of the box will generate valid simple gcode, but for more complex things, fine-tuning might unlock some very powerful abilities.

I do things like get GPT-4 to recite poetry in MIDI, and I’m very curious what a GPT-4 fluent in gcode and tailored to my machine could do.


I use 3D printers, so I’m mildly familiar with how gcode can be used (and how dangerous extra zeros can be…)

You’re right about the inferior logic; that’s why I use GPT-4 for all my coding and everything else.

It’s the dangerous part that is the big thing. With a big CNC router you just do not mess around. Ever.

I’m sure you have your own already, but this is mine; have a go!

It’s for research but it can do some wild stuff if it’s in a good mood, make simple robot heads, plot toolpaths etc…


That’s pretty cool! I’ll keep it, thanks!

Let me know if you have any luck with fine-tuning GPT-4.

By the way, for the open-source models, are you fine-tuning locally or through a third party? I’m curious what the computational requirements are for that kind of thing.

For fine-tuning little models, it’s not bad.

For me, anything over 13B and you start to need real resources. That big model hub makes it pretty easy, and there are great providers out there; you can get gobs of modern GPU and CPU time for pennies an hour.
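A rough back-of-the-envelope sketch shows why sizes above 13B get expensive for full fine-tuning: in 16-bit training with Adam you pay roughly 2 bytes for weights, 2 for gradients, and 8 for the fp32 optimizer moments per parameter. The multipliers are rules of thumb, not exact figures, and ignore activations:

```python
def full_finetune_gib(params_billions: float, bytes_per_param: int = 2 + 2 + 8) -> float:
    """Very rough VRAM estimate for full 16-bit fine-tuning with Adam:
    weights (2 B) + gradients (2 B) + fp32 optimizer moments (8 B) per param."""
    return params_billions * 1e9 * bytes_per_param / 2**30

for size in (7, 13, 70):
    print(f"{size}B model: ~{full_finetune_gib(size):.0f} GiB")
```

Even the 13B estimate (~145 GiB) is multi-GPU territory, which is why parameter-efficient methods like LoRA, which train only small adapter matrices, are so popular for the bigger models.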


There are plenty of CNC emulators out there; I think HASS even has one that’s free.

Correct, you’d have to run the GPT-4 outputs through a simulator and have it check the results before doing anything.

CNC machines are dangerous; there is no room for error. Everything has to be double (triple) checked and simulated, and all safety systems must be functioning, etc.

Fine-tuning GPT-4 for gcode might make it possible to create outputs that can safely control things in the real world, when integrated into a larger system with extremely robust safety protocols.
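One cheap layer in such a system is a pre-flight check that parses each line of model-generated gcode and rejects anything outside the machine envelope before it ever reaches the controller. The envelope limits and the command whitelist below are hypothetical placeholders, and a check like this complements, not replaces, a full simulator:

```python
import re

# Hypothetical machine envelope (mm) and accepted commands.
ENVELOPE = {"X": (0.0, 300.0), "Y": (0.0, 300.0), "Z": (-50.0, 100.0)}
ALLOWED = {"G0", "G1", "G21", "G90"}  # rapid, linear, mm units, absolute mode

def validate_gcode(program: str) -> list[str]:
    """Return a list of problems; an empty list means this check passed."""
    problems = []
    for lineno, line in enumerate(program.splitlines(), 1):
        line = line.split(";")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        words = line.split()
        if words[0] not in ALLOWED:
            problems.append(f"line {lineno}: command {words[0]} not whitelisted")
        for word in words[1:]:
            m = re.fullmatch(r"([XYZ])(-?\d+(?:\.\d+)?)", word)
            if m:
                lo, hi = ENVELOPE[m.group(1)]
                if not lo <= float(m.group(2)) <= hi:
                    problems.append(f"line {lineno}: {word} outside envelope")
    return problems
```

The whitelist approach also catches the "extra zero" failure mode mentioned above: `G0 Z10` passes, while `G0 Z100.0` followed by a stray zero (`Z1000`) gets flagged before the spindle moves.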


I loved your answer so much, you inspired me to write my first reply! Thanks for being so welcoming :orange_heart: