If this was the cause of the ‘candid’ remark, that would be justified. However, it’s not what the COO said.
As I mentioned above, if the COO was misled by the board (certainly possible) or perhaps even lying (he wants to go work for Sam), then the above didn’t apply.
I think the offering could still go through if the candid remark was with regards to Sam planning to jump ship. That would be very bad, and he would deserve the smearing.
It’s not in the balance; that shit is blown up. That deal is gone with the wind.
I dunno, if Sam was out shopping for a new job and didn’t tell the board, then what they did makes 1000% sense. MSFT will back them and things can go on.
Not saying that’s what happened, but that would make a lot more sense to me. The only curious thing is the COO’s memo. That’s not ideal.
Oh I guess those senior researchers bailing isn’t good…
It’s been stated Microsoft will continue, but now in an unhappy and rocky relationship. They didn’t tell the CEO of Microsoft until literally 60 to 120 seconds before they announced it. Microsoft may start pulling back: while they have some agreement, the news yesterday hurt Microsoft’s market cap more than they actually put into OpenAI, so it could become just a problem for them to keep the cooperation moving forward.
For sure, but if Sam had already been planning this and didn’t tell the board, then I’m sure Satya would understand. Sam will become the focus of ire.
Who knows, maybe The Information is just making it all up or reporting on really weak sources.
Maybe it all happened post-firing, but still, Sam should have waited out a brief mourning period. This looks a little indecent, if true.
It still blows my mind that they didn’t bother to consult with Microsoft, or at least give them some time to prepare. The CEO was on-stage with Sam just a couple weeks ago.
Now they’re pulling their punches, saying it wasn’t caused by any “wrongdoing”? The question remains: why so abruptly and dramatically?
It is what it is. I just want to know what the direction is going forward and how much I should be moving my damn eggs around. This split comes at a brutal time, and I’m really losing confidence.
Anybody can do GPTs; it’s not a big deal. I was doing it with GPT-Desk before OpenAI announced theirs. I actually believe they should not have ventured into taking away the leverage of anyone using their APIs. It looks like a move to monopolise the market and compete with their own developer community. Many developers have already lost steam. OpenAI should focus on their technology and let other people innovate on how it will be used and pay for using their APIs.
Altman was also courting Middle East sovereign wealth funding for the development of AI chips that would compete with Nvidia’s, according to Bloomberg.
So is this really a betrayal of chip manufacturers?
Seems like a stretch for getting the AXE. Chips are good!
Man, usually what I’ve seen is that there’s one person who wants to change things drastically and for good. At the same time, there are people who can’t stand to see that one person accomplishing what so many others can’t do together. I respect all the geniuses who built OpenAI and all these amazing tools, but this is the second time I’m seeing this, and both times the person was forced to resign from the post by the board of directors for obviously fake reasons, and right when he had brought the business to its peak.
I mean, Sam was able to communicate fluently with billions of people and in so many interviews, but not within the company and to the board of directors? That sounds so political. Maybe the board of directors has its own “goal”.
Usually, in my experience, after a great person steps down from his position, the company’s next announcements tell a lot about the mind of the ‘main’ person who was involved in such a political move.
And normally those decisions are not that good, and not up to the mark compared to what our Sam would do.
Anyway, this is nothing new in this world. Ants always come for the sweets.
When can they open-source GPT-4 so we don’t have to worry about OpenAI fucking up their own company again?
I’m honestly getting worried here. As a developer who literally just filed the paperwork for my own startup and started launching my shit, having this happen is unsettling. Good to prepare me for actual in-the-wild development, I guess, but how am I supposed to pivot if I don’t know where shit is going with them?
The GPT store could be a huge advantage, but is that happening? Idk. Could I just use different models? Sure, but I actually like this community of y’all, and tbh other models don’t compare, nor do they have the feature sets I can leverage without spending hundreds more hours building a shittier version of what already exists.
OpenAI needs to give us answers. This is unfair to their investors, us developers and community members, and consumers.
This is why you don’t operate in such secrecy; shit can hit the fan hard.
Even though they’re not directly related, Altman’s lack of direct communication reminded me of something I’ve felt while reading various documents and information from OpenAI’s website, and from what I’ve observed whenever I use their products: OpenAI’s communication is problematic, unclear, and insufficient. Moreover, it’s as if they open themselves to every problem of AI and try to take on the problems of the whole world, expecting to accept everything, which is the same as the first picture I attached. For some problems, I have found that clear communication solves or mitigates them better than AI development alone, or at least reduces the problem until development is done in the expected way. Nowadays OpenAI is not just a researcher or AI developer; it is an organization with a global role and influence. They must realize that developing a complete product alone cannot solve every problem; they must also use OpenAI’s status to manage them.
And in the next picture, I understand what Altman wrote, because I was in that position: I didn’t understand AI, so I was interested and studied it from the user’s point of view, and I could see many possible real effects. On the other hand, those who know more about AI consider some of these issues not to be problems, just normal operation; to them, the problem is only that the output doesn’t match the desired results and gets fixed by writing code. Just like OpenAI is doing now, they are focusing on a framework for developing technology that even the law can’t keep up with. Although the development has exceeded expectations and can be trustworthy, the AI itself sometimes acts contrary to what Altman intended. For instance, I have found answers to images, where the AI was asked to explain their meaning, that displayed my personal information in a way it shouldn’t be. But that hasn’t made me lose faith in OpenAI the way I feel about gool… These days, when some expression from the AI would require a lie, I still teach it like talking to a child, and I say it may be better to give a direct and clear answer than to cause problems and create distrust in the people who created it. But my words can’t reach anyone.
No matter who or what is right or wrong: I’m someone who works in data, and the news released to the outside is controlled to have as little impact on the organization as possible. And I’m not interested in AI because of one person; I know and believe in OpenAI in the enterprise landscape. What will decide the future is not just today.
To be honest, as long as we keep listening to customers and give them what they want – everyone (and everything!) will be fine.
While it is technically possible that some things at OpenAI will change, they did a great job of plowing the road – and now companies (including ours!) will create solutions on top of that.
This is uncertainty that companies will have to live with – and ADAPT to. In any case, startups are great at adapting.
Possible – though the “real” reason for this is yet to be determined.
Good question … and will probably be answered based on the “Profit vs Non-Profit” debate.
That question will be answered soon based on whether they chose the “Profit” route or the “Non-Profit” route, I guess.
Boom! There are so MANY use-cases emerging and so much business demand opening up, it would be crazy not to.