🧑‍💻 Open Model Hackathon

we can look for team members now?

awwwww man - alright bet - im looking for team members!

@dmitryrichard You’re in TX, right? I think you’re required to work alone. I member the Alamo… better check with your wife first too haha… I didn’t realize they had bears that far south either :wink:

I’m in KS though. Any other Midwesterners around?

GOOD SIR,

My wife says I can team up with others! I have her blessing. But you are right, the Alamo was a wash… =(

But don't worry, I've been researching these things, sure, mapping them to my big, burly, hairy, growly self. WE CAN BE FRIENDS, BRO, I'll only go in for the hug on Tuesdays! Besides, I go to OKC a lot, and that's basically Kansas to me lol

1 Like

Great. I signed up. Just saying hi. I am going for Useful Fine-tune.

3 Likes

For those of you looking for teammates:

1. Visit the Participants tab on Devpost to see the other participants in the competition.
2. Indicate at the top of the page that you are looking for teammates, then use the filters to search for a potential teammate.
3. Once found, you can send them a message through Devpost.
4. Once you have a team, hit Start Project and see the Manage Team section to invite your teammates to your project.

4 Likes

Since everyone is introducing themselves: I'm Gurneesh, and I'm looking forward to showing you what I build. Can't wait to see yours.

BTW, for the Best Local Agent category, can we use other OpenAI models, since the OSS model is not multimodal? Only for certain cases, obviously, where text alone would not be enough to answer the question.

Can we use the quantized weights of GPT-OSS-20B in this hackathon?

I would say so, as long as you provide a repo. The primary judging will be on your presentation of the idea, as long as there is that proof (of no fakery). If you can make it work with lower-quality inference, especially a novel adaptation to device constraints, more power to you.

Have your legal eye sift through the rules, so you can’t blame some random internet advice for a disqualification, though :grinning_face:
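For what it's worth, here's a minimal sketch of what running a quantized GGUF build of GPT-OSS-20B locally could look like with llama-cpp-python. The file name and settings below are placeholders and assumptions, not official guidance from the rules:

```python
# Minimal sketch: local inference on a quantized GPT-OSS-20B GGUF file.
# Assumptions: llama-cpp-python is installed and a quantized GGUF has already
# been downloaded; the path below is a placeholder, not an official artifact.
from llama_cpp import Llama

llm = Llama(
    model_path="./gpt-oss-20b-Q4_K_M.gguf",  # hypothetical quantized file
    n_ctx=4096,        # context window; lower it if you run out of RAM/VRAM
    n_gpu_layers=-1,   # offload as many layers as the GPU can hold
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the hackathon idea in one sentence."}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```

Whether a quantized checkpoint counts as "using the model" for your category is exactly the kind of thing to confirm in the official rules.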

2 Likes

I’m a Strategy Consultant in NY and looking for teammates for the OpenAI hackathons. I’ve won 15 global hackathons in 5 countries, hosted by companies such as Google, Amazon, Microsoft, BCG, and the World Bank. Let me know if you would like to team up.

1 Like

Hello, I am new here and I would like to join and find a team member for the OpenAI Hackathon. Please reach me by email to connect.

Thanks for checking! I'm on it for testing.

Ollama / OSS-20B on a 16 GB MSI 2022 GE76 Raider:
IT LIIIIIIIIIVVVVVVEEEEESSSSSSSSSSS :man_zombie::woman_zombie:

[Slowly! It slowly lives!]

This was via the Ollama Windows interface and I haven’t played with the settings yet… but golly… how do you get all that general knowledge down to 12GB? :heart_hands:

It’s pretty slow (no surprises there at all), but it seems serviceable if I limit its thought parameter to something low.

What’s important to me is that the responses look and feel so “familiar,” and it is, after all, running with enough headroom for me to use other stuff.

I’m not sure if this is the right environment for me to dev and test in… And again, this poor laptop flashed the blue screen of death several weeks ago, so it’s absolutely insane to me that OSS runs on it at all, let alone “pretty well.”

It also self-identifies as GPT 4. :face_holding_back_tears:

All of this adds up to something that’s very “Small Business Friendly” to me, which I really appreciate.

Great work OpenAI team. This is just so neat.
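In case it helps anyone on similar hardware, here's a minimal sketch of how I'd poke at it from Python through the Ollama client. The "Reasoning: low" system hint and the option values are assumptions on my part, not something I've benchmarked:

```python
# Minimal sketch: chatting with gpt-oss-20b through a local Ollama server.
# Assumptions: `ollama pull gpt-oss:20b` has already been run, and the
# "Reasoning: low" system hint is how I'm asking the model to keep its
# thinking short on a 16 GB laptop.
import ollama

response = ollama.chat(
    model="gpt-oss:20b",
    messages=[
        {"role": "system", "content": "Reasoning: low"},  # keep thinking brief
        {"role": "user", "content": "Give me three small-business uses for a local LLM."},
    ],
    options={"num_ctx": 4096},  # modest context to stay inside limited memory
)
print(response["message"]["content"])
```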

2 Likes

Hi, I am looking for a partner to join the AI-related hackathon! Please feel free to reach out to me :slight_smile:

I plan on entering a dashboard for building and maintaining AI agents with llama.cpp and llama-cpp-python. I hope to help lower the barrier to entry for running local AI with my custom system. You can learn more by looking up LYRN on GitHub and reading my whitepapers. A rough sketch of the kind of backend call a dashboard like this makes is below.
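Not LYRN itself, but for anyone curious what the llama-cpp-python side of a dashboard backend can look like, here's a minimal hypothetical sketch; the model path, class name, and settings are placeholders:

```python
# Minimal sketch of a local backend a dashboard UI could call per request.
# Assumptions: llama-cpp-python is installed; the GGUF path is a placeholder.
from llama_cpp import Llama

class LocalAgentBackend:
    """Tiny wrapper a dashboard could call for each user request (hypothetical)."""

    def __init__(self, model_path: str):
        self.llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

    def ask(self, system_prompt: str, user_message: str) -> str:
        out = self.llm.create_chat_completion(
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_message},
            ],
            max_tokens=256,
        )
        return out["choices"][0]["message"]["content"]

if __name__ == "__main__":
    backend = LocalAgentBackend("./gpt-oss-20b-Q4_K_M.gguf")  # placeholder path
    print(backend.ask("You are a helpful local agent.", "What can you do offline?"))
```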

2 Likes

Hi, one question: it is only with the local models, right? My computer can’t even run the lighter model… too bad. Even though I have an RTX 3080 Ti, it gives me problems because I don’t have 16 GB of VRAM; I only have 12 :smiling_face_with_tear:

2 Likes

Hiya, not just “local,” though there is a category specifically for them.

You just have to make something using the small models, which run well on AWS, Azure, and Google cloud services—or, if you don’t want to deal with that, Ollama offers a Pro version for $20 a month that sounds very reasonable, though I haven’t had a chance to try it.

Since it seems like a good opportunity to share, here’s some research I did trying to understand the cost landscape. All of the Cloud Operators offer credits for developers to help offset the initial costs.

Down at the bottom, I try to estimate how many hours of cloud compute you can buy for a given model before purchasing the actual hardware makes more sense.

| AWS instance | GPUs & memory | Hourly cost | Suitability |
|---|---|---|---|
| p5.48xlarge | 8 × NVIDIA H100 80 GB (640 GB total), 192 vCPUs, 2,048 GiB RAM | $55.04 | Suitable for training or inference with the 120B model across multiple GPUs. AWS currently only sells H100 in 8-GPU blocks; per-GPU cost ≈ $6.88/hour. |
| p5.48xlarge, per GPU (estimate) | 1 × NVIDIA H100 | $6.88 | Estimate for a single GPU at Amazon. |
| g5.2xlarge | 1 × A10G (24 GB VRAM), 8 vCPUs, 32 GiB RAM | $1.21 | Can run the 20B model using FP4 or BF16. The A10G's 24 GB VRAM provides headroom for context and weights. |

| Azure VM series | GPU (VRAM) | Cost per hour | Suitability |
|---|---|---|---|
| NC40ads H100 v5 | 1 × H100 80 GB | $6.98 | Enough for the 120B model. Multiple VMs can be clustered through InfiniBand. |
| NC24ads A100 v4 | 1 × A100 80 GB | $3.67 | Fits the 20B model comfortably; borderline for the 120B if using MXFP4, but may require multiple GPUs. |
| NC6s v3 | 1 × V100 32 GB | $3.06 | Suitable for the 20B model; cannot fit the 120B model. |
Approximate cloud hours purchased before the cloud spend equals the estimated cost of the hardware. Does not include ongoing utilities.

| Option | OSS model | Hardware price ($) | Amazon hours, 20B | Azure hours, 20B | Amazon 1× hours, 120B | Azure 1× hours, 120B | Amazon 8× hours, 120B |
|---|---|---|---|---|---|---|---|
| Desktop (32 GB) | 20B | 1,500 | 1,238 | 490 | | | |
| Laptop (32 GB) | 20B | 4,999 | 4,125 | 1,634 | | | |
| NVIDIA A10 24 GB | 20B | 2,800 | 2,310 | | | | |
| NVIDIA A100 80 GB | 120B | 11,000 | | | 2,997 | 1,599 | |
| NVIDIA H100 80 GB | 120B | 25,000 | | | 3,582 | 3,634 | 454 |
| NVIDIA H100 80 GB × 8 | 120B | 200,000 | | | | | 3,634 |

The same cloud hours translated into 40-hour work weeks, i.e., how many weeks of work before the cloud spend equals the raw cost of the hardware:

| Option | Amazon weeks, 20B | Azure weeks, 20B | Amazon 1× weeks, 120B | Azure 1× weeks, 120B | Amazon 8× weeks, 120B |
|---|---|---|---|---|---|
| Desktop (32 GB) | 31 | 12 | | | |
| Laptop (32 GB) | 103 | 41 | | | |
| NVIDIA A10 24 GB | 58 | | | | |
| NVIDIA A100 80 GB | | | 75 | 40 | |
| NVIDIA H100 80 GB | | | 90 | 91 | 11 |
| NVIDIA H100 80 GB × 8 | | | | | 91 |
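If it helps, here's the arithmetic behind those figures as a tiny Python sketch. The example prices are the ones from the tables above; rounding accounts for small differences from the listed numbers:

```python
# Break-even sketch: how many cloud hours (and 40-hour work weeks) you can buy
# before the spend equals the up-front hardware price. Prices taken from the
# tables above; ignores electricity and other ongoing costs.
def break_even(hardware_price: float, hourly_rate: float) -> tuple[float, float]:
    hours = hardware_price / hourly_rate
    weeks = hours / 40  # translate into 40-hour work weeks
    return hours, weeks

# Example: a $1,500 desktop for the 20B model vs. an AWS g5.2xlarge at $1.21/hour.
hours, weeks = break_even(1_500, 1.21)
print(f"{hours:,.0f} cloud hours ≈ {weeks:.0f} work weeks")  # roughly 1,240 hours ≈ 31 weeks
```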
3 Likes

Hi there, we are looking to train a multimodal projector for gpt-oss to reduce the training time. However, are there any credits being offered for this type of use case?

Damn. I’m a month late. Am I cooked :sob:

Are there any credits given by OpenAI to fine-tune the LLM?

Good luck to everyone who entered!

I hope entrants will post some projects/details on the forum at some point.

https://community.openai.com/tag/project

I can’t run the Open Models on my hardware yet but excited to see what’s possible when I can :slight_smile:

2 Likes