OpenAI Dev Day 2023 Live Reactions

Wow.

There goes my day. I feel like a kid in a candy store.

1 Like

Come on, you said 11:30?


They updated the DOCS but have not released the DALL-E 3 model yet.

I have escaped!

9 Likes

I am just re-running the new script from the DOCS until the model no longer comes back as "does not exist."

Yes, it's a smart way to encourage usage. I assume they are going to try to monetize certain GPTs somehow, though.

1 Like

Honestly? They announced WAY more than I was expecting. The GPT-4-Turbo model was actually something I expected, along with API access to vision and voice.

They tipped their hand earlier about built-in retrieval, so that wasn’t a surprise.

I’d seen articles about building your own GPT and the unified model.

What was surprising is that it was ALL true. OpenAI was definitely a leaky ship in the run-up to Dev Day! :rofl:

The assistants endpoint is (I think) insanely huge, and will prove to be a much bigger deal than a lot of the stuff they dropped today.

The 128k context window is huge.

I don’t know if they said anything about it on the ChatGPT side of things, other than that as of today ChatGPT will be using gpt-4-turbo. But are they giving ChatGPT Plus subscribers access to the 128K context?

The BIGGEST shock to me though: Revenue sharing!

I think I suggested that in some of the discussions around ads in plugins; I know I talked about it directly with the developer of the Proptate ad platform as what I thought was the best alternative for getting plugin devs paid.

But, I (and I am pretty sure everyone else) thought that was a utopian-magical-thinking-pipe-dream.

Never have I been happier to be so wrong. (Though to be fair, they didn’t mention revenue sharing in the context of plugins, but doing so for GPTs shows they are at least willing to share.)

5 Likes

I think so as well. This is going to completely change how I write my applications. I can’t see why an application wouldn’t have something you can literally talk to, step-by-step. This just opened a whole new dimension.

Being able to use Vision, Code Interpreter, and DALL-E with the assistant just makes it perfect.

SAME! WTF? I really can’t believe this. I think it’s amazing though. Skip nasty third-party providers. Skip nasty ads. Keep it all contained. I’m really excited to see the agents that are created.

It seems to me that plugins will eventually become overshadowed by the GPT marketplace and “Actions”. Maybe? The docs make it a bit unclear (or maybe I just haven’t absorbed it yet; I’m bouncing around like a ping-pong ball).

I don’t think it was mentioned. I would think it’s safe to assume that it’s API-only (for now).

AHHH. I have done so much work that Assistants has completely rendered useless. I love it, and I hate it :sob: Time to get a batch of coffee rolling. *cracks knuckles*

2 Likes

https://github.com/openai/openai-python - code just dropped from a private repo onto main, but:

If you haven’t been staying abreast of the 1.0.0 API beta: it is not a drop-in replacement. You’ll need to do some code conversions to get dictionaries out of the Pydantic objects, or to use generators for streaming.
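If it helps anyone, this is roughly the kind of change involved; a minimal sketch against the 1.0.x beta interface (details may still shift before whatever lands on main is tagged):

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# New client style: the module-level openai.ChatCompletion.create(...) is gone.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Responses are Pydantic objects now, not dicts, so attribute access...
text = response.choices[0].message.content

# ...or dump back to the old dictionary shape for downstream code.
as_dict = response.model_dump()

# Streaming now hands you a generator of chunk objects.
for chunk in client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
):
    print(chunk.choices[0].delta.content or "", end="")
```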

Nothing in the prior code directly disclosed upcoming API features or schemas (no new image model, no context list, or other tips), so it will take some digging to see how much what’s in main differs from the 1.0.x beta.

2 Likes

@Foxalabs

I have some questions for you to pass along if the opportunity presents (you lucky bastard!).

API

  1. How are tokens counted for the new modalities, and is there a way to know how many tokens a file will count as? E.g., is it $x per image upload, number of pixels, tokenized content for a PDF, number of pages, etc.?

EDIT: I can answer my own question here (a little bit, anyway). The pricing page has a calculator for input images of different sizes. It seems they break the image into 512×512 tiles and charge (loosely) based on the number of tiles (85 base tokens + 170 tokens per tile); a rough back-of-the-envelope script follows the notes below.

| Image Size | 512×512 Tile Arrangement | Tokens | Price |
|---|---|---|---|
| 512×512 | 1×1 | 255 (85 base + 170 tile) | $0.00255 |
| 1024×512 | 2×1 | 425 (85 base + 340 tile) | $0.00425 |
| 1536×512 | 3×1 | 595 (85 base + 510 tile) | $0.00595 |
| 1024×1024 | 2×2 | 765 (85 base + 680 tile) | $0.00765 |
| 1536×1024 | 3×2 | 1105 (85 base + 1020 tile) | $0.01105 |

Notes:

  1. It appears we are limited to 3 tiles in any dimension and not more than 6 tiles total.
  2. At the ratio of $\frac{3}{4}$ words per token we get:
$$1\text{ picture} = 1105\text{ tokens} \times \tfrac{3}{4}\text{ words/token} = 828.75\text{ words}$$

Which is substantially less than the 1,000 words (I have on good authority) a picture is worth.
pitchforks
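For anyone who wants to sanity-check an image before uploading it, here's the back-of-the-envelope sketch mentioned above. It assumes simple ceiling division into 512×512 tiles and the $0.01 per 1K input tokens that the table prices imply; OpenAI's actual resizing rules may differ.

```python
import math

def image_tokens(width: int, height: int) -> int:
    # 85 base tokens plus 170 tokens per 512x512 tile (ceiling division).
    tiles = math.ceil(width / 512) * math.ceil(height / 512)
    return 85 + 170 * tiles

def image_cost_usd(width: int, height: int, usd_per_1k: float = 0.01) -> float:
    # Assumes gpt-4-turbo input pricing of $0.01 per 1K tokens.
    return image_tokens(width, height) * usd_per_1k / 1000

print(image_tokens(1536, 1024))    # 1105
print(image_cost_usd(1536, 1024))  # 0.01105
```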

END EDIT


ChatGPT

  1. If ChatGPT is now powered by gpt-4-turbo, does that include the new, massive context length?

(I’ll add more as I think of them.)

Have fun!

Also, I kinda hate you right now. (j/k, not really, but, maybe…)

Also, also, enjoy your $500 credit! You absolutely have earned it for all the great work you do here!

Also, also, also, I’m so jealous of you right now.

3 Likes

Am I correct in thinking that an API version of the assistant is reflected in ChatGPT as well? Like, ChatGPT is basically just a GUI for initialization / configuration / conversation? That they’re synchronized together?

OK, so Assistants can’t use DALL-E or GPT-4 Vision yet, but they come with retrieval (RAG), which is pretty dang sweet. I guess we can easily bypass this for now by just writing a function to call it ourselves? lol?
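Something like this is what I had in mind; a rough sketch assuming the beta Assistants function-calling flow (the `generate_image` tool name and wiring are made up, not anything official):

```python
from openai import OpenAI

client = OpenAI()

# Expose DALL-E 3 to the assistant as a plain function tool.
assistant = client.beta.assistants.create(
    name="Illustrator",
    instructions="When the user asks for an image, call generate_image.",
    model="gpt-4-1106-preview",
    tools=[{
        "type": "function",
        "function": {
            "name": "generate_image",
            "description": "Generate an image from a text prompt",
            "parameters": {
                "type": "object",
                "properties": {"prompt": {"type": "string"}},
                "required": ["prompt"],
            },
        },
    }],
)

def generate_image(prompt: str) -> str:
    # Our side of the bridge: call the Images API directly and return a URL.
    result = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
    return result.data[0].url

# When a run pauses with requires_action, handle the tool call with the
# function above and feed the URL back via
# client.beta.threads.runs.submit_tool_outputs(...).
```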

1 Like

That’s such a realistic image – what prompt did you use?

1 Like

Oh, the other big surprise (but not really once you think about it)…

COPYRIGHT SHIELD!

When you stop to think about it, it makes a certain amount of sense. Big victories are built on top of a series of small victories. OpenAI has a clear interest in ensuring no one using one of their models loses a copyright claim because the moment that happens, the floodgates open.

It’s great to see they are aware of this and took the appropriate action.

It would have been smarter for some of these anti-AI copyright litigants to find a smaller fish to go after initially to establish a precedent before working up to OpenAI itself, but OpenAI just nipped this very real threat in the bud, while simultaneously making their platform much more attractive to others by removing the specter of this legal threat.

So, I think this announcement is the sleeper-bombshell, because it amounts to an earth-shattering declaration of war in the AI-copyright-violation legal landscape.

4 Likes

Found em!

3 Likes

Downtown San Francisco, Fall morning, overcast, from the perspective of one fortunate bastard.

3 Likes

The sun is periodically putting its hat on, but that is accurate :rofl:

1 Like

Best snag so far

4 Likes

Hey, if possible, what do we need to do to be able to follow OpenAIDev on X.com?

1 Like

Here’s a question that wasn’t addressed, and I don’t know if you can still find anyone to answer it: Is gpt-4-1106 (turbo) going to act like API users want, being exclusively for the API, or is it still going to minimize output tokens and produce unsatisfactory response lengths for those willing to pay for them?

Might be one of those “try and see what we’ve got to work with” – like the last three completion models released…

1 Like

This was a baller move by OpenAI

2 Likes