I’ve been following Prafulla Dhariwal on twitter since the “Jukebox” days (2020). He’s definitely involved w/ multiple projects.
Bio: "Co-creator of GPT-3, DALL-E 2, Jukebox, Glow, PPO. Researcher"
Do note I wrote mostly not entirely. There’s absolutely overlap, my point was more to illustrate OpenAI can and is working on multiple products simultaneously.
Yea, sorry I almost edited to add that they can and do definitely multitask with lots of brilliant people. Was just an opportunity to mention Prafulla who I totally admire.
I think it should be open to users who aren't on GPT-4 in ChatGPT as well. My reasoning is that it should already be available for all NLP use cases, not just chat.
I hope it's coming sooner rather than later. I think the building blocks are ready. DALL-E 3 works like that, back and forth… I'm sure GPT can see what DALL-E 3 creates.
I used the smartphones' built-in speech synthesis for the prompts and the responses with the API: a sort of quick translation into user text. But here I think it's direct.
That would be perfect.
I tried just uploading an image a day to code interpreter (“look at this!”) last week because I wasn’t sure if and how I’d know I have GPT-4V access. When it turned out it was blind GPT-4, I pasted the model card and other info from OpenAI and just asked the AI if its GPT-4V version would “basically be like CLIP with an absolutely gigantic text transformer attached to it”.
Seems like I wasn’t the only one with that idea!
Gotta insist that the AI knows, that the AI can, and that the AI is able to.
Granted, Bing is, uh, special: it twisted that approach by suggesting prompts for you to use, and if you tap them, it will say "I'm sorry but I prefer not to continue this conversation."
Can you explain this link?
Got access…
Fed it a DALLE3 image haha
Original DALLE3 prompt…
Thought-provoking digital art capturing a first-person POV from the bridge of a state-of-the-art spaceship, designed by an AI birthed by humans. The intricate control panels, holographic displays, and other advanced tech elements illuminate the bridge in a soft glow. The central console displays the words ‘Quantum Warp Drive Activated’. As the activation sequence commences, the star-studded void of space outside begins to stretch, blur, and tunnel, signaling the ship’s entry into hyperspace. The surreal visuals of stars becoming streaks of light, and the warping of reality around the ship, evoke a sense of wonder and the monumental leap of technology and exploration.
Does anyone know when API documentation will roll out for the GPT-4 vision model? Would love to develop a plugin idea I have in mind for it.
Hope to get the GPT-4V model API working too; I really need this for my project.
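Same here. While we wait for official docs, here's a rough guess at what the request body might look like if it follows the existing Chat Completions message format. To be clear, the model name `gpt-4-vision-preview` and the `image_url` content shape are pure speculation on my part, not anything OpenAI has documented:

```python
# Speculative sketch of a GPT-4 vision chat request payload.
# The model name and the "image_url" content part are assumptions;
# nothing here is from official API documentation.
payload = {
    "model": "gpt-4-vision-preview",  # assumed model name
    "messages": [
        {
            "role": "user",
            # Guess: content becomes a list mixing text and image parts
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/dalle3-output.png"},
                },
            ],
        }
    ],
    "max_tokens": 300,
}

print(payload["messages"][0]["content"][1]["type"])
```

If it does end up looking like this, you'd POST it to the usual chat completions endpoint with your API key; until the docs land, treat the whole shape as a placeholder.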
I hacked the GPT-4 model
It has general intelligence.
I would like a job over there.
proof?