Interesting but do I understand correctly that it still cannot access the internet for current information and cannot provide links to the source of its answers?
One thing to point out, which is different from past models, is that they charge separate rates for input vs. output tokens.
That’s super interesting. Anthropic was already doing the same with Claude. Another aspect: the rate limit at the beginning will be super aggressive. But I guess it makes sense.
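To make the split-pricing point concrete, here is a minimal sketch of how billing input and output tokens separately changes the cost math. The rates below are hypothetical placeholders, not actual published prices:

```python
# Hypothetical illustration of split input/output token pricing.
# INPUT_RATE and OUTPUT_RATE are assumed placeholder values,
# not OpenAI's actual prices.
INPUT_RATE = 0.03   # $ per 1K prompt tokens (assumed)
OUTPUT_RATE = 0.06  # $ per 1K completion tokens (assumed)

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Cost of one API call when input and output are billed separately."""
    return (prompt_tokens / 1000) * INPUT_RATE \
         + (completion_tokens / 1000) * OUTPUT_RATE

# A long prompt with a short answer now costs less than the reverse.
print(round(request_cost(2000, 100), 3))   # long prompt, short completion
print(round(request_cost(100, 2000), 3))   # short prompt, long completion
```

Under this scheme, prompt-heavy workloads (e.g. stuffing lots of context) are billed differently from generation-heavy ones, which is presumably why the rates are split.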
Also wondering how many “tokens” make up an image. Will be watching the YouTube video here shortly.
100 messages per 4 hours in ChatGPT for now.
The API version is similar in price to a fine-tuned GPT-3 davinci model.
Any idea how many parameters GPT-4 has? I remember there was speculation on the order of 100 trillion. But if it costs on the order of a trained davinci, I’m thinking it can’t be more than 1 trillion.
I think 1 trillion was a meme/rumor… I know another recent LLM was around 500B… I think OpenAI is working on other things besides size to improve quality and speed…
Interesting they are releasing it on π day
Oh, the YouTube video music is starting …
It’s available for all users, so thanks to the ChatGPT team.
For more details on GPT-4, here is the GPT-4 Technical Report
Where’s the image upload for it to look at? Why is it so slow? Also, it still has the character limit, yet in the video they said it can produce up to 15,000 characters?
I can’t access the Discord. It says I’m banned, yet I was only in there for 5 minutes. What the hell?
Just got access to the API. I’m considering whether it’s worth it to sleep tonight or not.
Thank you for creating such interesting and helpful models. I’m not sure this is the right place to ask questions, so please let me know if there’s somewhere more appropriate. I’ve been thinking about the agentic behavior of GPT-4 mentioned in your research document. It occurred to me that any agentic behavior would require a loop, and as far as I know, the only loop in a transformer is for next-token generation. Is it possible that GPT-4 has learned to maintain state by passing information through that loop? If so, it would be easy enough to observe what is being passed as input for next-token generation. I hope this is helpful.
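To illustrate the idea in that question, here is a toy sketch (not GPT-4 internals, and `fake_model` is a made-up stand-in for a transformer’s next-token step): the only loop in autoregressive generation is next-token sampling, so any “state” the model maintains has to live in the token sequence it reads back as context.

```python
# Toy sketch: state carried through the next-token generation loop.
# fake_model is a hypothetical stand-in for a transformer forward pass.
def fake_model(context: list) -> str:
    # It "remembers" where it is only by re-reading tokens it
    # previously wrote into the context.
    emitted = sum(1 for t in context if t.startswith("step"))
    return f"step{emitted + 1}"

def generate(prompt: list, steps: int) -> list:
    context = list(prompt)
    for _ in range(steps):           # the next-token loop
        token = fake_model(context)  # state is read out of the context...
        context.append(token)        # ...and written back into it
    return context

print(generate(["plan:"], 3))  # ['plan:', 'step1', 'step2', 'step3']
```

Because the carried state is just tokens, it is externally observable, which is exactly the point raised above: one could inspect what gets fed back in at each generation step.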
As an idiot who plays in the Playground, is GPT-4 coming there anytime soon?