OpenAI has begun training its next frontier model

I am more interested in things that are more complex than speed and memory. I am more interested in its reasoning capabilities, so it could have a better understanding of what I want from it AND pay more attention to everything that was generated within the memory window. So far, all AIs tend to… slip off some of the information, stating the exact opposite of what is in their memory. GPT still, for my taste, has a weaker understanding in long chats.


Couldn’t agree more, even though I find myself wondering more and more often what specific type of reasoning I actually need the most.

For example, when working on problem descriptions based on language alone, it’s necessary for me to clearly define what I am really referring to, instead of assuming the model can infer what I mean AND reason about it.

But I guess no matter what type of reasoning these models put on display there will be a way to leverage it properly.



That’s a cool idea, I suppose you could build something like this already today.


Just for my better understanding: isn’t this already covered by generating a baseline capability measurement based on a diverse set of evals and benchmarks before putting this value into the context of price per token?

@N2U you mean in the sense of few/many-shot prompting instead of actual fine-tuning? Considering that GPT-4 fine-tuning is still available to a select few only, I suppose we have to be very patient here.

But yeah, if we had a few magic wands there would be quite a lot to wish for!


Yeah, fair point. I often use many-shot prompting to achieve a desired result. I can technically copy my initial input prompt and the resulting output after a series of prompts, but it would be great if the model could automatically generate a synthetic dataset from this when prompted to do so. I’m thinking this could be a great extension to the current capabilities of custom GPTs.
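To illustrate the idea: the first step would just be pairing up the prompts and replies from a successful conversation into reusable examples. This is only a sketch under my own assumptions — custom GPTs don’t expose anything like this today, and the message format and helper name are made up for illustration:

```python
# Sketch: turn a conversation log into prompt/completion pairs that could
# seed a synthetic few-shot dataset. The log format mirrors chat-style
# role/content messages; the helper name is hypothetical.

def conversation_to_examples(messages):
    """Pair each user prompt with the assistant reply that followed it."""
    examples = []
    pending_prompt = None
    for msg in messages:
        if msg["role"] == "user":
            pending_prompt = msg["content"]
        elif msg["role"] == "assistant" and pending_prompt is not None:
            examples.append(
                {"prompt": pending_prompt, "completion": msg["content"]}
            )
            pending_prompt = None
    return examples

log = [
    {"role": "user", "content": "Summarise: the cat sat on the mat."},
    {"role": "assistant", "content": "A cat sat on a mat."},
    {"role": "user", "content": "Summarise: dogs bark at night."},
    {"role": "assistant", "content": "Dogs bark nocturnally."},
]
dataset = conversation_to_examples(log)  # two prompt/completion pairs
```

From there, the model itself could be asked to generate variations of each pair to grow the dataset — that’s the part that would need to become a built-in capability.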

Another thing that’s high on my wishlist is having the model use reasoning to actually ask for more information that would improve its output. There are many situations where the model just provides a very generic answer because the user isn’t providing the specific details needed to complete a given task; it would be nice if it would simply ask for those details instead :laughing:
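For now, that behaviour has to be prompted for explicitly. A minimal sketch of how one might nudge the model today — the prompt wording and helper name here are my own assumptions, not an official pattern:

```python
# Sketch: a system prompt asking the model to request missing details
# instead of answering generically. The wording is an assumption.

CLARIFY_SYSTEM_PROMPT = (
    "Before answering, check whether the request omits details you need "
    "(audience, format, constraints). If it does, reply only with numbered "
    "clarifying questions; otherwise, answer normally."
)

def build_messages(user_prompt):
    """Wrap a user prompt with the clarifying-questions system message."""
    return [
        {"role": "system", "content": CLARIFY_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Write me a report.")  # underspecified on purpose
```

The wish in the post above is that the model would do this on its own, without the user having to bolt on a system prompt like this.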


I would like to see improvements in this area as well, including the interpretation of images generated from graphical or tabular data, and of histopathology as a tool for data analysis and teaching.


Yeah, it’s a fair question.

A lot of my work involves deductive reasoning so this is definitely high on my priority list.

The other point for me is abstraction. I frequently find that when I provide models with examples, they take these too literally and don’t tailor their responses enough to the new context, and/or adopt too narrow a perspective. So a push on that front would be helpful.


Understanding and interpreting images in PDFs.


Faster responses. Larger output token limits. And if it’s a Large Language Model, the ability to count at the very least characters and words, or even tokens, within documents.

Lower price?


I want text-to-video, where it can convert a script into a full-color movie. And it has to take direction, meaning it might take me a day per scene — even a couple of days — to input everything and get it just how I want it. I can write my own software to streamline the process and still get it exactly how I want it.

Agreed. For STEM instruction, I’ve been able to interpret images from a PDF after exporting them as PNG, and I may have a solution for images embedded within the PDF itself… Happy to discuss further. -Dr Boutte