GPT-4 is here! OpenAI's newest language model

I got the same response: "As of my knowledge cutoff date in September 2021, there was no GPT-4." My question: is it possible to upload an image to GPT-4 in ChatGPT?


Not yet :slight_smile:. But hopefully coming soon!!


I joined the waitlist to build with GPT-4, but another member of our company had already joined, so I don't need to join now.

Could you please cancel my entry? However, please do not cancel the entries of the other members of our company.

I'm asking this not only here but also through the “chat with us” support channel, in case one of them doesn't reply.


Is there any information on when to expect general availability for GPT-4?

We signed up for the waitlist within about the first hour, but we're still waiting for any acknowledgement or indication of where we are in the queue.

I understand not everyone on the waitlist will be admitted, but is there an estimate of timing? We're trying to decide whether to continue building (if it's weeks) or hold (if it's hours or days).



@mo0nman, GPT (-3 and -4) can indeed summarize articles, but you first have to scrape them from the web and pre-process the text. One of the important differences with GPT-4 is the amount of text you can include in a single API call. It’s much larger, which means the pre-processing burden is reduced.
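A rough sketch of that pre-processing step, assuming a paragraph-aligned chunker and a crude ~4-characters-per-token estimate (a real pipeline would use a proper tokenizer such as tiktoken; the function names here are just illustrative):

```python
# Split a scraped article into chunks that fit a model's context window.
# Token counts are estimated very roughly (~4 characters per token).

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def chunk_article(text: str, max_tokens: int = 6000) -> list[str]:
    """Split text into paragraph-aligned chunks, each under max_tokens."""
    chunks, current, current_tokens = [], [], 0
    for para in text.split("\n\n"):
        t = estimate_tokens(para)
        if current and current_tokens + t > max_tokens:
            chunks.append("\n\n".join(current))
            current, current_tokens = [], 0
        current.append(para)
        current_tokens += t
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

With GPT-4's larger context window (8K or 32K tokens, versus roughly 4K for gpt-3.5-turbo), `max_tokens` can be raised and many articles fit in a single chunk, which is exactly the reduced pre-processing burden mentioned above.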


Wow… Nice! The responses are absolutely amazing

1 Like

I agree - the wording intimated this possibility, but if you’ve studied how these LLMs are built, you would be sceptical, eh? :wink:

I’ve been using exactly this approach for a while now, even before GPT-4. Using the API, creating an interface that supports this process is relatively trivial, and GPT-4 makes it far easier to pull it off in one API call.

Given a query…

  1. Use GPT to extract entities, keywords, and links.
  2. Blend the key content sections from each referenced link into the prompt.
  3. Send it all to GPT-4 for an amazing few-shot result.
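The three steps above can be sketched as a small pipeline. The extraction and fetch steps are stubbed here; in a real build each stub would be an API call (a chat completion with an extraction prompt, an HTTP GET with HTML cleanup). Every name and URL below is illustrative, not part of any real API:

```python
# Sketch of the sequenced query -> extract -> blend -> prompt pipeline.

def extract_refs(query: str) -> list[str]:
    """Step 1 (stub): ask the model for entities, keywords, and links.
    In practice this would be a chat-completion call."""
    return ["https://example.com/article"]  # placeholder result

def fetch_key_sections(url: str) -> str:
    """Step 2 (stub): fetch the page and keep only the key content
    sections. In practice: an HTTP GET plus HTML-to-text cleanup."""
    return f"[key content from {url}]"

def build_prompt(query: str) -> str:
    """Step 3: blend the fetched sections into one few-shot prompt."""
    sections = [fetch_key_sections(u) for u in extract_refs(query)]
    context = "\n\n".join(sections)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

The final prompt is what gets sent to GPT-4; with the larger context window, all of the blended sections can usually fit in one call.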

But you can give any GPT model the sensation that it does indeed have access to specific Internet content by using few-shot steps in sequenced API calls. GPT-4 may reduce the number of shots while expanding intelligent results involving today’s Internet content.


I actually like the idea that you can build AI applications without dependencies on historical cut-off points. Using multi-shot processes opens the door to factoring in real-time data on the Interwebs with outcomes shaped by the LLM. I’ve also experimented with embedding vectors fabricated with information the LLM has no awareness of, and it works pretty well.

I’m relatively new to embeddings, but they seem very powerful and easily “trained”.
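A toy illustration of that embeddings idea: store vectors for facts the model has never seen, then retrieve the closest one for a query vector and feed it into the prompt. The 3-D vectors and fact strings below are made-up stand-ins; real embeddings (e.g. from an embeddings API) have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def nearest(query_vec: list[float], store: list[tuple[str, list[float]]]) -> str:
    """Return the stored text whose vector is most similar to the query."""
    return max(store, key=lambda item: cosine_similarity(query_vec, item[1]))[0]

# Hypothetical store of (text, embedding) pairs for out-of-training facts.
store = [
    ("Fact the LLM has never seen", [0.9, 0.1, 0.0]),
    ("An unrelated fact",           [0.0, 0.2, 0.9]),
]
```

The retrieved text then goes into the prompt, which is how the "training" stays easy: you only ever add rows to the store, never touch the model.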


Yes, of course we can take current data from the network and feed that information back into an LLM to get results with current data, but that is technically very different from stating that pre-trained LLMs have Internet access. They do not.

We can also fine-tune with current data, but that is creating a new model and that new model will also not have Internet access. It’s just fine-tuned.




Indeed. If it seems like they do, it’s an illusion. I’ve created a very compelling illusion blending real-time streaming highway analytics with LLMs. I had to deflate the users’ euphoria over three separate attempts at explanation.

These are very powerful alchemies, but they can be visually misleading to unsuspecting users.


Very excited about it. I signed up and paid, and I can log in to the old ChatGPT Plus interface and select GPT-4, but nothing really seems to have changed. When I try to insert more than 2,000 words and ask for a summary, it just comes back with: “The message you submitted was too long, please reload the conversation and submit something shorter.”


Enhancing GPT-4 API with Metadata and Segmentation for Versatile Use Cases

I think it could be nice to add an extension to the GPT-4 API that would incorporate metadata and segmentation in the replies generated by the language model. This would allow users to easily adapt the responses for various use cases, ultimately increasing the flexibility and usability of the AI model.

One potential implementation is having the AI generate code blocks with metadata. This would enable users to create new files or perform other tasks, as long as the implementation is handled by the API consumer. While this isn’t the only use case I have in mind, I believe it’s a valuable example of the possibilities this extension could offer.
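One way the consumer side could look, assuming an invented convention where the model puts `key=value` metadata in the code fence's info string (this exact syntax is made up for illustration and is not an existing API feature):

```python
import re

# Build the triple-backtick fence programmatically so this snippet can
# itself be posted inside a fenced code block.
FENCE = "`" * 3

BLOCK_RE = re.compile(
    FENCE + r"(?P<lang>\w+)[ \t]+(?P<meta>[^\n]*)\n(?P<body>.*?)" + FENCE,
    re.DOTALL,
)

def parse_blocks(reply: str) -> list[dict]:
    """Extract every metadata-annotated code block from a model reply."""
    blocks = []
    for m in BLOCK_RE.finditer(reply):
        meta = dict(kv.split("=", 1) for kv in m.group("meta").split() if "=" in kv)
        blocks.append({"lang": m.group("lang"), "meta": meta, "body": m.group("body")})
    return blocks
```

With something like `filename=app.py` in the metadata, the API consumer could write each block's body out to that file, which is exactly the file-creation use case described above.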

Another interesting application would be the ability to output ANSI color sequences for terminal-like displays. This feature would further expand the ways users can interact with and display the AI’s responses in a visually appealing manner.
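A minimal sketch of the ANSI idea, using the standard SGR escape sequences; which segments of a reply get which color would be up to the metadata scheme, and the color names here are just a small illustrative subset:

```python
# Wrap segments of a model reply in ANSI SGR escape sequences so a
# terminal renders them in color.
ANSI = {"red": "\033[31m", "green": "\033[32m", "reset": "\033[0m"}

def colorize(text: str, color: str) -> str:
    """Wrap text in an ANSI color sequence, resetting afterwards."""
    return f"{ANSI[color]}{text}{ANSI['reset']}"
```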

In addition to these examples, I envision numerous other use cases that could benefit from the proposed API extension. By enabling the AI model to output sections in specific configurations with metadata, we can augment the capabilities of GPT-4 and allow users to leverage its full potential in a variety of scenarios.

I believe that enhancing the GPT-4 API with metadata and segmentation features would bring immense value to users and developers alike. I am eager to hear your thoughts and ideas about this proposal, and I hope we can collaborate to make this vision a reality.

Sorry if I don't sound like I wrote this myself as a human, but you know I'm an enthusiastic user of our beloved ChatGPT Plus 3.5/4.


Hi, this is Seong Choe, Family Nurse Practitioner.
I run an outpatient clinic, working as a primary care provider.
I would like to apply my experience and knowledge to ChatGPT for better care of patients.

My email is:



So you’re saying we won’t be having a GPT-4 version of text-davinci @logankilpatrick ?

Same question. Will we be constrained to the fine-tuned conversation model? Is it for safety purposes?



Are there plans to add GPT-4 to the playground?


It’s under the Chat completion tab in the Playground for me…

Can someone please tell me how to access the tool to upload a photo into GPT-4 in ChatGPT?
I think I have everything set up: I log in and select GPT-4, but all I can do is type questions.
What am I missing?

@logankilpatrick It looks like image ingestion is not activated yet. Is that correct, or am I doing something wrong? If it is not activated yet (and I'm fairly sure it isn't), you may not be able to comment, but I thought I'd give it a try :slight_smile:. Any time frame for that?

If the previous reply applies to me too, how can I activate the ability to upload images?