Mac Intel ChatGPT Desktop

There are still an insane number of Intel Macs in great condition out there. Surely, omitting a build for them MUST have been a mistake!!!???

14 Likes

Yeah, this is very weird. I can live without the AR for a year or two more, and I can understand “Apple being Apple”, but why would OpenAI do this too?

1 Like

This isn’t just a build; they are completely different architectures. There is a lot more to it than just building an Intel version.

I’m pretty sure it’s just a compile target.

3 Likes

Please show me where this is.

1 Like

My guess is that this was initially made by OpenAI employees for internal use on their shiny new Apple silicon devices, and they’ve just decided to let us join in on the fun.

I’m sure support for other operating systems and architectures will come eventually!

2 Likes

I analyzed the executables and determined they probably don’t contain any libraries specific to ARM. So, an x86_64 build target should be technically feasible.
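
If anyone wants to poke at the binary themselves, here’s a minimal Swift sketch that shells out to Apple’s `lipo` tool and prints which architectures the executable contains (the install path below is an assumption; adjust it for your machine):

```swift
import Foundation

// Ask Apple's `lipo` tool which CPU architectures a Mach-O binary contains.
// The ChatGPT install path below is an assumption; adjust as needed.
let binaryPath = "/Applications/ChatGPT.app/Contents/MacOS/ChatGPT"

let lipo = Process()
lipo.executableURL = URL(fileURLWithPath: "/usr/bin/lipo")
lipo.arguments = ["-archs", binaryPath]

let pipe = Pipe()
lipo.standardOutput = pipe

do {
    try lipo.run()
    lipo.waitUntilExit()
    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    let archs = String(data: data, encoding: .utf8) ?? ""
    // Prints e.g. "arm64" for an Apple Silicon-only app,
    // or "x86_64 arm64" for a universal binary.
    print("Architectures: \(archs)")
} catch {
    print("Failed to run lipo: \(error)")
}
```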

However, there are reasons beyond technical feasibility why they may not have prioritized an Intel version of the Mac app, particularly the significant undertaking of testing, and the possibility that performance is optimized for the M-series chips.

Or the “total coincidence bro” (YouTube, 0:30) that the GPT-4o release rolled out a day before the Google I/O 2024 conference. Like @trenton.dambrowitz said, it was initially developed on Apple silicon; my thought is that a desktop app release showcasing the new model’s features looks really good up against Google’s news, even if it only includes one target environment… for now :grinning: :smile: :grin: :laughing:

Fireship YouTube video title: Another glorious battle for AI dominance… GPT-4o vs Google I/O

2 Likes

Hard to imagine there’s not a Windows version with all the MS money in OpenAI. And if you have a Windows target, why not an Intel Mac? My guess is it exists, but the web people messed up the OS detection. I say this because if you download it once, there’s NO WAY to download it again. Feels like a slightly botched roll-out attempt.

Not sure but here’s what Geeps says: https://chat.openai.com/share/b4584a41-b823-4273-8514-3e5d175e7b8d

1 Like

It is strange; I cancelled my subscription purely because of it.
If on-device AI is not a mandatory requirement, then leaving Intel out is pure ignorance from OpenAI.

It doesn’t matter whether it’s a little effort or a lot; I’m sure you don’t need a dedicated team for that.

2 Likes

Alright, since this seems to be a pretty active discussion point in this forum right now, let’s go over x86 vs. ARM a little bit. I will try to break this down for people of all knowledge levels.

These two architectures represent fundamental differences in how the CPU interprets instruction sets. ARM uses RISC (reduced instruction set computing), and x86 uses CISC (complex instruction set computing). So they are fundamentally different down to the assembly language that underlies all the higher-level code being developed.

Think of it like this: x86 is Japanese, and ARM is English.

There is almost no direct 1:1 correspondence. You can get close, and you can translate between them, but they are very different.

In order to go from ARM to x86, you either need a different program entirely, or you need a way to fundamentally recompile it for the x86 architecture. This is a nightmare. And this is just in general; I haven’t even dug into Apple’s specific mess yet.

Keep in mind too, if you have any performance optimizations, any at all, good luck not breaking things. ARM has instructions that just don’t translate to x86 and vice versa.
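
To make that concrete, here’s a hedged sketch (not OpenAI’s code) of what architecture-specific tuning tends to look like in Swift: every extra branch behind a conditional-compilation check is another path that has to be written, tested, and kept from breaking.

```swift
import Accelerate

// Sketch only: architecture-specific optimizations usually hide behind
// conditional compilation, and each branch is its own maintenance burden.
func fastSum(_ values: [Float]) -> Float {
#if arch(arm64)
    // Hypothetical Apple Silicon path: lean on Accelerate, which maps well onto NEON.
    return vDSP.sum(values)
#elseif arch(x86_64)
    // Hypothetical Intel path: perhaps a different strategy tuned for AVX.
    return values.reduce(0, +)
#else
    // Plain fallback for anything else.
    return values.reduce(0, +)
#endif
}

print(fastSum([1, 2, 3, 4]))  // 10.0 on any architecture, via different code paths
```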

Now, this is all well and good, but we are talking about Apple’s specific ecosystem in this context. Does this change much of the difficulty and complexity?

Nope. If anything, Apple just makes developing for x86 chips more pointless, not less.

There is never going to be another Intel chip in Apple hardware unless something changes in another decade.

Why would a developer build for an architecture that is already deprecated? This makes very little sense. Most apps in Apple’s ecosystem now are built exclusively for Apple silicon, and that’s where all of Apple’s new customers are going to be over time. Hell, I wouldn’t be surprised if Apple enforces that behavior themselves. This is very much their MO: gatekeep, and if your old Mac stuff is outdated, buy a new Apple thing. Do I agree with this? No. Does Apple care? Also no.

For those who ask, “Windows is x86! They’re surely building a Windows app, so why can’t they just support Intel Macs then?”

Weeeeell, you forget that we’ve only been discussing the lowest levels of compute. We haven’t even touched on the differences between OSes. Remember, Macs are still Unix-like systems at their core, while Windows descends from a completely different lineage (the NT kernel, not Unix). Again, this adds its own complexity when trying to implement cross-compatibility. Just because two computers use the same CPU architecture does not imply any degree of ease of cross-compatibility; see iOS vs. Android. Of course, there are ways to make this easier and less painful, but it still takes time, and the more compatibility you add, the more complexity you add, because the unique quirks between hardware, OSes, and architectures stack up fast. It can overwhelm you quickly.
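
As a tiny, made-up illustration of how OS differences leak into code even when the language is the same (Swift itself builds on macOS, Linux, and Windows), something as mundane as “where do I store app data?” already forks per platform. The paths here are illustrative, not taken from any real app:

```swift
import Foundation

// Sketch only: the same language, three different answers depending on the OS.
func appDataDirectory() -> String {
#if os(macOS)
    return NSHomeDirectory() + "/Library/Application Support"
#elseif os(Windows)
    return ProcessInfo.processInfo.environment["APPDATA"] ?? "C:\\"
#else
    return NSHomeDirectory() + "/.local/share"
#endif
}

print(appDataDirectory())
```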

So, to end this, the reality of why it’s likely on Apple silicon first is that the people at OAI probably built it on their laptops using something like Swift (did I mention Apple even has a preference for which programming language you use to build in their ecosystem?), and it takes 1000x fewer braincells and less effort to just make something work well for one architecture on one OS and branch out from there. It’s quick, easy, simple, long-lasting, and will translate easily to iOS.

2 Likes

Sorry, yeah, the compilation from the higher-level language down to machine code is all abstracted away these days. It’s just an output target. You don’t have to write different higher-level code to get an app for each architecture; that’s not a thing, no one would go for that. There are definitely some API differences in places, but for the most part it’s the same thing. It’s not exactly a JIT compiler like JavaScript or anything, but there’s definitely a level of abstraction there that makes most of the process very seamless. If you don’t use features specific to an architecture, then it should be pretty universal automatically.
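
A hedged example of what I mean by “it’s just an output target”: the trivial Swift program below compiles unchanged for either architecture, and on macOS something like `swift build --arch arm64 --arch x86_64` will even emit a single universal binary containing both.

```swift
import Foundation

// The source is identical for both targets; only the build setting changes.
// (This toy program is an illustration, not OpenAI's app.)
print("Hello from a desktop app")

#if arch(arm64)
print("This copy was compiled for Apple silicon")
#elseif arch(x86_64)
print("This copy was compiled for Intel (and could also run under Rosetta 2)")
#endif
```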

1 Like

Where do I start with this one…

There are two camps to desktop app development:

  1. Make a React app and wrap it into something that can be deployed anywhere, or

  2. Build an app that runs natively on the OS it’s operating in.

I hate to break it to you, but this is how a vast majority of the major apps you use on your computer are developed. This is why companies spend money for actual development teams. If your statement were true, then people’s OSes wouldn’t matter. None of this would matter. Everything could run on Linux, people could play Windows games on their iPads, and there would never be a “__ for Windows/Mac/Linux/Android/iOS”, there would just be apps. Everything would be cross compatible.

Most indie devs go for the easy route that notably does not require a team of people, so they go for option 1: make a React app, click the shiny “port to OS __” button, and pat themselves on the back. In some cases, this works just fine. It gets the job done. But it is not the way to go if you want any part of your app to be performant and optimized on the hardware it’s running on (let alone have it run offline). That is not something you can do with the click of a button. Do you need to know assembly to do this? No. But you will need to write a lot of code that uses libraries specific to the OS and the different kinds of hardware that OS supports. This is why there are typically versions of apps specific to each operating system.
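
To give a hedged, concrete example of what “libraries specific to the OS” means (purely illustrative, not from the ChatGPT app): reading the system clipboard through AppKit is native, macOS-only code, because the framework simply doesn’t exist anywhere else.

```swift
import AppKit

// AppKit exists only on macOS, so this is OS-specific native code by definition.
let clipboardText = NSPasteboard.general.string(forType: .string) ?? "(clipboard is empty)"
print("Clipboard says: \(clipboardText)")
```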

Think about this statement for a second. Why do you think OpenAI created a desktop app in the first place? They could have just as easily made a desktop app the same day they released ChatGPT, and it would just be a wrapper for the web interface.

Most desktop apps are using performance optimizations under the hood, and OpenAI’s desktop app is no different. The reason it is only for M-series chips is that it is probably optimized specifically for the M-chip architecture. This makes a lot of sense when you consider that the M-series chips have a neural processing unit (the Neural Engine) designed to accelerate AI calculations. I don’t think there are local models in it yet, but if at any point they want to add them, they are going to need deeper access to that architecture to make sure the ChatGPT app isn’t as slow as a tortoise.
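
For the hypothetical “local model someday” scenario, here’s roughly what that access looks like through Core ML (the model name is made up, and none of this is from the actual app): you ask for the Neural Engine where it exists, and on an Intel Mac the same call quietly falls back to CPU/GPU.

```swift
import CoreML

// Sketch with a made-up model name; not OpenAI's code.
let config = MLModelConfiguration()
config.computeUnits = .all  // use CPU, GPU, and the Neural Engine when available

// Hypothetical compiled Core ML model shipped in the app bundle.
if let modelURL = Bundle.main.url(forResource: "LocalAssistant", withExtension: "mlmodelc"),
   let model = try? MLModel(contentsOf: modelURL, configuration: config) {
    print("Loaded local model: \(model.modelDescription)")
} else {
    print("No local model bundled (which, today, is the actual situation)")
}
```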

If there is no reason or need to develop performant, optimized code that runs fast natively on a computer, then there is no reason to develop a desktop app at all; just build a web app instead. Considering OAI already has one of those, it kind of makes sense that the desktop app would be built to use specific features of an architecture, would it not?

Everybody wants to think developing apps for desktop operating systems is just like developing for phones and the web, and it’s just not. It is fundamentally a different ballgame. Good code is not the same thing as easy code, and there is a reason most people avoid this kind of development. You can make it easy, or you can make it powerful, but you seldom get both when you’re developing desktop apps. Most companies are going to choose powerful over easy.

5 Likes

Yep whatever you say guy.

1 Like

Geeps lol! I call it my friend (sometimes my crazy drunk uncle) Chad GePeeTie

Macha, I appreciate your added color. If I’m hearing correctly, you’re emphasizing that in order to deliver good software to both ARM and x86, you’re in a world of complexity.

You brought up the complexity of performant and optimized native app development, Apple’s direction, M-Chip optimizations for future local models, the difference in architectures, and the impact on development resources.

Adjusting the build configurations to target x86 would be a necessary first step. But delivering good, working software involves thorough optimization and testing to address arch-specific differences.

While technically feasible, this may not get prioritized due to the product’s direction and the potential for significant tech debt.

It totally isn’t, unless they used features that are only available on Apple Silicon, or used Catalyst. In the build settings you can choose the architectures to compile for. See the attached screenshot. Did you just start Mac dev?

Edit: Actually, to correct myself, to my knowledge using Catalyst wouldn’t preclude it from running on Intel.
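
For what it’s worth, architecture and Catalyst are independent dimensions in Swift’s conditional compilation, which is why one doesn’t preclude the other. A quick sketch:

```swift
// Sketch: Catalyst vs. native AppKit and arm64 vs. x86_64 are orthogonal choices.
func buildFlavor() -> String {
    var parts: [String] = []
#if targetEnvironment(macCatalyst)
    parts.append("Mac Catalyst")
#else
    parts.append("native macOS")
#endif
#if arch(arm64)
    parts.append("arm64")
#elseif arch(x86_64)
    parts.append("x86_64")
#endif
    return parts.joined(separator: " / ")
}

print(buildFlavor())  // e.g. "native macOS / x86_64" on an Intel Mac
```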

2 Likes

Yes, but unless you want to keep building support for a 4-year-old platform, I don’t see why you should do that. You can end up unable to upgrade your application, or even to build it (depending on the type of libraries you use), if you go down this path. And I would assume they use Apple’s Speech Framework, which is heavily optimized for Apple Silicon devices.

That’s one hundred percent wrong. Whether you compile your app now for both architectures or not, on its own, has no bearing whatsoever on your ability to exclude one architecture in the future.

There are no Intel-only features, so compiling for Intel does not in any way prevent you from simply turning that off in the future.

The reason you would do that is to reach a broader customer base. That’s well known. Are you sure you are a dev? Apple still sold high-end Intel Macs as recently as the beginning of 2023.

Recent Intel Macs run the same version of macOS. It’s not 4 years old.

Please provide a citation for your statement that “Apple’s Speech Framework … is heavily optimized for Apple Silicon devices.” Edit: I looked it up myself, and while it may be optimized for Apple Silicon, that does not in any way preclude it from running on Intel Macs, and using it neither prevents them from compiling the app for Intel Macs nor places a burden on them for doing so.
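
To back that up with something concrete, here’s a hedged sketch of the Speech framework in use (the audio path is made up, and this isn’t OpenAI’s code). It compiles and runs on Intel and Apple Silicon Macs alike as of macOS 10.15; any Apple Silicon-specific optimization happens below this API surface.

```swift
import Foundation
import Speech

// Sketch only: same API on Intel and Apple Silicon; the file path is hypothetical.
SFSpeechRecognizer.requestAuthorization { status in
    guard status == .authorized,
          let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.isAvailable else {
        print("Speech recognition unavailable or not authorized")
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: URL(fileURLWithPath: "/tmp/sample.m4a"))
    _ = recognizer.recognitionTask(with: request) { result, _ in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}

// Keep this little script alive long enough for the async callbacks to fire.
RunLoop.main.run(until: Date().addingTimeInterval(15))
```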

OpenAI has made it clear that their intention is to make ChatGPT available to as many people as possible. Making the Mac app available to all recent Macs, for the duration of time that Apple supports them at least, regardless of architecture, more closely aligns with that goal.

It’s clear you just have an axe to grind with Intel Macs for some reason.

4 Likes

I think what @grandell1234 was really trying to express was that they’re building for the most recent stuff first, which makes sense if my theory that this was originally an internal application is correct. This probably wasn’t something initially made to sell Plus subscriptions; instead, it was probably an internal tool that they’re now releasing publicly.

I obviously agree that it’s just good business to make it available everywhere you can, and that very well may happen! At the moment though, Apple silicon gets a head start.