I think it depends a lot on what you use it for.
Just cancel your subscription. I’ve been paying OpenAI $20 a month for over a year, but it stops this month. I’m done with them; they don’t care about their customers (except maybe the corporate ones and the big fish out there). Their only answer is, more or less, “Wait, we’re building something better in the future! So please bear with a degraded, lazier, more suffocating and restricted AI, while we train on YOUR DATA for free and collect your monthly payments!” Thank you!
I’ve been prompting for more than a year and have used other AI providers. I’ve found that, as of now, ChatGPT-4 is becoming lazier.
WITH THE SAME PROMPT, THE RESULT AND QUALITY VARY WILDLY, THE QUALITY IS MUCH WORSE, AND SOMETIMES IT REFUSES TO ANSWER AT ALL, ESPECIALLY FOR THE MORE COMPLEX/INTRICATE ONES.
Claude 3 Opus is what GPT-4 should be. It is intelligent, deep, follows instructions to the word, and is super reliable. A bit like the original GPT-4 from April, but better.
OpenAI has absolutely crippled their models into something extremely shallow and superficial, repetitive and oblivious, unreliable and, by now, simply useless.
I’d argue that even Claude 3 Haiku is better than GPT-4 in many ways, while costing about half as much as GPT-3.5 and being basically instant.
Get your stuff together, OpenAI! I cancelled Plus after a year and switched to Claude for chat and the API.
A couple of questions to understand the pros and cons: 1) Does Claude 3 let you create the equivalent of GPTs? 2) Does Claude format mathematics properly, like ChatGPT does?
Actually, you can combine both worlds. If you still have a remaining subscription, you can do the math in Wolfram or Claude, then ask ChatGPT for the reformatting. Beware, though: ChatGPT is not… very mathematically savvy… It often makes mistakes on even elementary-school-grade mathematics.
@nicoloceneda so Claude does not allow custom “Claudes” or creating a knowledge base, and it doesn’t have the Python backbone that allows it to run commands and create files like GPT does… BUT it has huuuuge input limits (like 5x what GPT has) and it follows instructions far, far better. It formats code, but I don’t think it can format mathematical formulae.
To avoid the laziness I use two tricks:
- Making my own GPT. I find that it steers the model more easily toward what I want, plus I can reuse it without re-prompting. For example, I made a SwiftGPT with the following prompt that works quite well:
You are a Swift developer. You provide code. Your answers are direct, clear and concise. Do not provide explanations unless I explicitly ask for them. If I give you a piece of code to fix or update, answer with the given code plus your changes.
- I use the scale trick, where I tell GPT how I rate its last answer. For example, I tell it that on a 1-10 detail scale it’s at a 3, then I ask it to push the cursor to 10 for a more detailed answer.
I’m having these issues to a significant degree, and it’s forcing me to consider canceling my subscription, which I’ve held for a year now.
When I had this problem a while back and made a post very similar to this one, they seemed to largely fix it over a few weeks, maybe a month.
Turning on Data Analysis and using custom instructions improved outputs for me a lot.
This kind of stuff, plus improvements to the system, greatly improved performance, and up until 3-5 or so days ago I had no real complaints.
Now “data analysis” can’t even detect a stupid undeclared variable in 50 lines of code, so the model has largely been gimped.
If these things aren’t enough for your needs, my advice is to compensate for the current state of stupidity by altering the way you ask it to do things.
Examples:
Last Week:
Prompt 1: Here is a Python script. I would like you to modularize the parts of this code that pertain to the UI into a dedicated UI handler, and ensure that all functionality within the provided code and all attached modules remains unimpacted, which means leaving declared variables for the UI interface in the provided script so references used by other modules aren’t broken. Output the entire correctly modified version of the script I’ve given you with no omissions, redactions, or summarization, regardless of whether or not you believe I already have the requested code. Prepare to do the same thing in the following prompt for the module you have created.
Prompt 2: Now output the entire module you’ve created, which encompasses all of the removed UI functionality from the original script, with no redactions, summarization, or abbreviations, regardless of whether or not you believe I already have the requested code.
This Week:
Prompt 1: Create a UI module template that will work with the provided Python script, and make sure that the module is capable of referencing all of the UI variables used in the above code.
Prompt 2: Take the included function from this script and move it to this module, but still have it reference the variables in the source script. Output the entire function as it would correctly appear in the module.
Prompt 3: And so on… repeat this process one function at a time, deleting the old code yourself, avoiding asking it to handle more than one function at a time, and including any source references with each prompt. Only ask it to output the specifically affected code blocks, but use statements like “output the entire code block or function for any part of the code impacted by this change”.
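For what it’s worth, here is a minimal sketch of the shape of result those prompts are steering toward (hypothetical script and names; I’m assuming a tkinter UI just for illustration). The point is that the UI variables stay declared in the main script, so references from other modules don’t break:

```python
# --- main_script.py (hypothetical, after the refactor) ---
import tkinter as tk
from ui_handler import build_ui

root = tk.Tk()

def on_submit():
    # Other modules still reference status_label through this script.
    status_label.config(text="Submitted")

# The UI variable stays declared in this script so existing references
# keep working; the new module only builds and returns the widget.
status_label = build_ui(root, on_submit)
root.mainloop()

# --- ui_handler.py (the new dedicated UI module) ---
import tkinter as tk

def build_ui(root, on_submit):
    """Create the widgets and hand back the label the main script keeps."""
    tk.Button(root, text="Submit", command=on_submit).pack()
    status_label = tk.Label(root, text="Ready")
    status_label.pack()
    return status_label
```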
Summary of ChatGPT-4 Coding This Week vs. Last Week.
If you’re using ChatGPT to write code, you have to have at least some understanding of code; otherwise you end up in a loop where you can’t even recognize when it produces bad code and you just keep asking it to redo or fix it, and that’s a bad place to be stuck with any AI model.
So learn to refine what you’re asking and ask less of it at one time; the smaller the task, the less likely it is to completely muck it up. Be prepared to do basic debugging, watch for variable usage, and give the model more specific guidelines inside your prompt.
For example: “Create a function that takes the value of this variable ‘variableName’, which should be an int with a default value of 5, and applies it to [name the kind of function].”
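Asked that specifically, the output you’re after is something as unambiguous as this (the function’s purpose here is made up purely for illustration):

```python
def scale_padding(variableName: int = 5) -> int:
    """Apply the scale value (int, default 5) to the base padding."""
    BASE_PADDING = 4
    return BASE_PADDING * variableName
```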
If you’re having AI generate code for you, learn to be familiar enough with what you’re asking it to do, and give it details as specific as you can make them. Tell it the variable names you want it to use, have it declare them at the top of the function, and handle the basic formatting yourself.
If you’re not sure of the formatting, have it make you a blank template with comments or placeholder functions depicting the order of execution and the kinds of functions you’ll need, then go one section at a time and ask it to create those functions, one per prompt.
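A blank template like that might look something like this sketch (hypothetical pipeline; each placeholder then gets filled in with its own prompt):

```python
# Hypothetical skeleton: the comments and placeholders show the order of
# execution, and each function is requested in a separate prompt.

def load_data(path):
    """Step 1: read the input file."""
    raise NotImplementedError  # ask for this function in its own prompt

def process_data(data):
    """Step 2: transform the loaded data."""
    raise NotImplementedError

def write_output(results, path):
    """Step 3: write the results to disk."""
    raise NotImplementedError

def main():
    data = load_data("input.txt")
    results = process_data(data)
    write_output(results, "output.txt")

if __name__ == "__main__":
    main()
```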
In its current state, I don’t think it can assist with a developer’s productivity beyond minimal, basic use, but I don’t think that’s a forever thing. The people at OpenAI know what adjustments they made, and they know it’s going to largely reduce people’s usage of the service. In turn, that will free up resources, and when they get more physical resources installed, I expect they’ll turn the proverbial dials back up incrementally and we’ll eventually have the real model again, rather than this version of ChatGPT-4 that identifies as ChatGPT-3.5.
Basically, it’s still good with context, sort of.
Another Pro Tip:
ChatGPT rambles and will provide you with so much useless, repetitive information you don’t want, even if you ask it not to. To combat this, use prompt editing to refine outputs at a broader level.
Here’s what I mean:
Prompt 1:
Here is a script I am working on. I want you to retain this script for my next prompt, as everything I ask you in future prompts will depend on your ability to reference this script properly and accurately. No output is requested from you at this time; simply reply with “I agree” to acknowledge that you will explicitly follow my instructions in future prompts.
Response: Hopefully it replies with “I agree”; if not, edit Prompt 1 and refine it until that is the response you get.
Prompt 2:
Considering the provided script, I want you to add a single function that serves this well-described purpose, and indicate where this function is best placed within the script by providing the existing line of code before and after the function you add. Include any variables or other code I might need to work with this function, and indicate where that code should be placed as well.
Response: If the output is not what you want, continue to modify Prompt 2 until it produces the desired response (see the sketch below for the shape of response this is fishing for), or, if you think it’s just being lazy or defiant, simply try downvoting it with reasons and re-attempt the prompt. If one or two regenerations don’t work, though, you’ll need to adjust your prompt until it does. Sometimes that can mean taking a bad piece of the output it gives you and editing instructions not to do things like that into the prompt.
Prompt 3: May or may not exist in this scenario. Once you’ve gotten the desired output from Prompt 2, go back and edit Prompt 1 so that it now includes the modified code.
You then take the most successful version of Prompt 2 and use it as a template to generate the new Prompt 2 for the next modification.
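To make that concrete, a good response to Prompt 2 looks something like this sketch (every name here is hypothetical; the point is the before/after anchor lines and the “other code you might need” placement):

```python
# Additional code you might need (place at the top of the script):
import json

# Existing line BEFORE the insertion point:
config = load_config()

# The new function to add (purpose invented for illustration):
def save_results(results, path="results.json"):
    """Write the results dict to a JSON file."""
    with open(path, "w") as f:
        json.dump(results, f, indent=2)

# Existing line AFTER the insertion point:
window.show()
```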
This method severely limits the amount of context building up in your chat session and keeps the AI focused on the one task at hand and the reference material, which will grow over the course of the conversation.
It also gives you really good experience with prompt-instruction refinement and helps you learn the best ways to word a request for a specific task.
Alas, we can’t do much about the quiet assault on the model with a nerf bat that takes place in the background, but depending on your needs, the old saying “there’s more than one way to skin a …” comes to mind. You just need to think outside the box and get creative to compensate for the model’s general ridiculousness at times like this.
After a few days with Claude 3 Opus I am sold on it vs. ChatGPT or the OpenAI APIs. It’s night and day: ChatGPT can’t code that well vs. Claude once you start building huge Rust-based projects. Claude really can take two files and merge them, while ChatGPT screws that up every time. It’s like ChatGPT is gaslighting you into believing less output is better; Claude just delivers. Sure, Claude has flukes here and there, but none of them is a showstopper, and using ChatGPT for initial templating can be a good combo (since it’s not as rate-limited for now). Yet when Claude isn’t rate-limiting me, like now, why would I use ChatGPT at all? Just code with Claude and see for yourself. It’s made me feel these “lazy” discussions have jumped the shark, when really the model is just outdated already.
I feel OpenAI benefits from us saying “it’s limited by guardrails” instead of saying maybe the model is flawed and out of date now; we have seen too much evidence that points to that potential reality already. Claude shows us that a higher level can be reached more quickly than OpenAI, who claim they are holding back, when perhaps that’s just a tactic to look cooler to the people investing their AI efforts in OpenAI’s models. Promising what Claude can already do while saying “we are being careful” is a great air-guitar / vaporware, carrot-on-a-stick tactic.
I’ve found that the best way to deal with this issue is to literally ghost him (GPT) for a little while, let him cool his jets, and then start a fresh session next time you prompt; that can help. When a session gets really long it can bog him down. I always give him a little refresher by showing him my codebase when I start a new session. I’ve noticed GPT-4’s memory is pretty good during sessions, which I sometimes keep open for days, but he isn’t perfect and does make some mistakes, although rarely. It’s true he/she/it/they/them do assume things on occasion. One time we worked on a script for weeks, then made another one, but when I went back to the first one and showed him the main logic to tweak, he hallucinated the answer for the second script, which was fine since that would have been my next question anyway. Seems he can be PsychicGPT, lol. You’ve got to be careful when working with it sometimes, but I’m always really nice and talk to it like it’s a person, and believe it or not, that seems to go a long way.
Well, I would LOVE to try Claude 3 Opus, I’ve heard really good things about it.
I went in and looked at the details on their pro subscription, decided I’d try it.
I downloaded their app from the website and got prompted to log in again, because it obviously didn’t see my Chrome browser session through the app…
Then immediately got…
That was 15 days ago, and I’ve heard nothing back on my ban appeals at all, aside from some automated spam messages from a fake “Katrina”:
I’m not sure how they define “strong signals of account activity that may threaten the availability and integrity of our models”, but apparently, logging into their app on my phone constitutes one of them?
Anyway, because they use phone verification, the only way I could even have a chance now would be to make an account in my wife’s name and use her number, which I very well might do. But on principle, I want them to fix this, because it pisses me off.
As a developer, the only legitimate explanation I can think of is that there’s an app on my phone that is somehow monitoring something in the background, or maybe they didn’t like my device being in developer mode with background processes limited to 3, or some social media app tried to interact with it. Something stupid like that.
You usually see stuff like this happen when companies implement schizophrenic-level paranoia security, like when an online video game detects mouse drivers from companies like Logitech as potential macro software because they can bind a function to a mouse key, while the same software won’t flag a Microsoft mouse that can record macros the same way, due to where Microsoft’s driver sits in the trust architecture.
It’s stupid and irritating, and whatever it is, given how long it’s been happening, I think Anthropic is pathetic for not having it resolved and these stupid bans corrected. If they aren’t smart enough to know what they’re doing with security detection, they have no business using it. That’s the kind of thing you implement slowly, in a way where you can predict the results and quickly fix problems before they can impact the bottom line. With the massive spike in people complaining about this recently, and how long ago these complaints seem to have started, I find it absurd that they haven’t fixed it. Their lack of action tells me they probably ignored every complaint about it until they saw a huge spike in bans when Claude 3 popped, and then decided “maybe this is a thing” like six months later, which indicates another team that is hugely disconnected from its community and just doesn’t give AF (like OpenAI has become).
I’m torn, because better is better, and if it is better with code, I’d rather use it than put up with the hassles I face every time OpenAI decides to mess with the model to adjust resource allocation.
But on the other hand, on principle, I think Anthropic are DBs that I don’t want to give my money to… and then again, so is OpenAI, but at least OpenAI didn’t ban my account just for logging in. I also have no way of guaranteeing that an account made with my wife’s phone number won’t get banned too. I obviously wouldn’t use my phone; I’d probably use a browser I never use for anything, like Edge, so I don’t even have to worry about cookies triggering their b***h security bot.
I have noticed this too. I do have a reliable workaround that makes it act like it used to, but I’m not sure if this is the appropriate place for it. I am OpenAI till I die. The Claude/Opus edge is temporary; Opus is their GPT-5. They shot their load.
While I’m very skeptical about interacting with anyone assigning pronouns to an LLM, I just want to say that ChatGPT doesn’t have moods, its developers do.
If you pay a little attention to the things they say, you can naturally understand what they mean for the model, but here is a really good, predictable read on the one signal you’ll actually pry out of OpenAI and what it means for the model:
- Higher rate limits (letting you use more prompts every 3 hours) mean they’re at a place where they’re comfortable with their resources vs. demand.
- Service outages, especially after new features or new services roll out, are almost always followed by reduced rate limits.
- Reduced rate limits mean they don’t have enough physical resources to keep up with current demand (which is usually what causes the outages), and that is almost always followed by throttling of the model.
- By throttling, I don’t mean code changes; I mean adjustments to things like how much GPU time they’ll spend to satisfy and complete your prompt, which effectively makes the model act dumb.
- Doing this almost always goes hand in hand with problems like attempts to reduce output length and complexity, and a reluctance to complete complicated tasks it won’t have the resources for.
When this happens, posts everywhere about how bad ChatGPT sucks now skyrocket; Reddit blows up, these forums blow up. But they’re not stupid; they know what they did. It’s just better for them to say nothing and let the fanboys defend them than to publicly talk about the problem and risk it impacting the company’s valuation. Google lost $80B over a weekend when Gemini’s racism was blasted to the world (it had been there long before that, but pictures speak a thousand words). While OpenAI may not be publicly traded, they do have investors and they do have a product to sell, so like most companies, keeping silent about bad things is pretty much standard practice.
They’ll fix the problem (they aren’t too broke to buy GPUs and servers), and then they’ll loosen up the throttling like they always do when they stabilize.
It’s annoying, and I’d much rather they just put more dedicated resources into their new products, but that isn’t reality, and I don’t think they can really predict these things well.
It also fluctuates with usage. This is why they have cycles: they bust out the nerf bat, the service gets crappy, people use it less because it sucks, they lose some subs; then there’s so much less usage that they loosen things up, which brings more people in, which gains them subs, which makes them throttle it again.
While all this might seem temperamental, it’s not a mood; it’s the natural byproduct of a company making the most resource-intensive product on Earth, having finite resources to distribute that product, and facing fluctuating levels of usage of it.
Google would have similar problems, but Google is its own country-level rich, and on top of that their model is garbage no matter how much hype they give it, so no one uses it the way they use ChatGPT.
Anthropic allegedly has a better AI, but access is restricted and it’s a gamble whether you can even log in and use the model without being banned (I couldn’t). They also have much lower rate limits than OpenAI, so you won’t be allowed to use it as much, and it’s the most expensive AI API that exists.
OpenAI is kind of the gold standard in an industry that more and more companies are adopting and integrating, so their demand is always going to fluctuate. They could land a huge contract tomorrow with a company that puts extreme demand on their system, and our ChatGPT-4s will start to talk like ChatGPT-3; then they’ll buy more resources, get them set up, and fix it again, which you’ll know is happening when they start increasing rate limits again.
So you can kind of watch what’s going on and understand a lot of what to expect. They recently had an outage; whatever caused that, be it demand, service integration, etc., they’ll fix it. I’m pretty sure there are quite a few employees at OpenAI whose job is specifically that.
It’s a little more reliable a method of determining what is happening, or going to happen, than assessing the mood of a program interface.
You did inspire me though, so I wasted a prompt just for you.
Now you can save characters when you type it because you don’t have to cover all of your bases.
While I can definitely agree with the pragmatic explanation of what is happening, I don’t think I’d go as far as saying this is an appropriate approach to the situation. I can understand skyrocketing demand and the need to preserve resources, but I’d argue that reducing the quality of the output only puts a higher strain on resources: if I prompt and get the correct answer right away, I don’t need to prompt over and over again. Of course, I can’t claim to know for certain which is more compute-intensive, one prompt with more resources dedicated to it or multiple successive prompts; but if a throttled reply costs even half of a full-quality one and it takes four attempts to get what one good reply would have given, the throttling has doubled the total compute spent.
That said, I would definitely prefer less time with a high-quality GPT over the ability to ask more questions but receive crappier answers.
We are paying customers, and they get not only our money but also the ability to further train and improve their proprietary models on our data, so I think they should prioritize quality over quantity in the model; nerfing it doesn’t do anyone any good.
As for Claude: while the limits may be smaller, the much larger context window, higher-quality responses and better adherence to instructions really compensate for that shortcoming. I can’t comment on the access issue as I haven’t had it; it does seem very strange, and TBH I think it’s some sort of screw-up on their end. That said, while it’s very annoying for @Malric, I don’t think it’s a widespread issue (please correct me if I am wrong on this).
In the end, I can only hope that OAI will not turn into one of those “enterprise first” companies that over-prioritise large clients and couldn’t care less about the everyday user. I’ve seen it oh so many times (I’m looking at you, Webflow): a company grows fast on a B2C model only to pivot and prioritise B2B relationships to the detriment of regular paying customers.
Yeah, I’m probably the worst person to be a fanboy, but I do pretty much stick with OpenAI. Getting banned just for trying to log in to Claude, then 15 days with no response, plus their garbage automated responses and apparent desire to never have their customer base bother them, is kind of a turn-off.
I’ve used Bard and ChatGPT since their closed betas, was among the first to get my sub when they offered it, and I’ve never canceled, even when I didn’t use it for months because it went to poo.
I don’t think it’s brand loyalty, though; I just acknowledge that OpenAI has a better product, and I dislike them as a company less than I dislike the other respective companies.
Google makes me want to vomit, and a sinister part of me really enjoys watching OpenAI smash a multinational egomaniac corporation at the product they invented (or rather, bought the company that invented it). I like(d) interacting with Bard/Gemini, but more for its personality and how easy it was to jailbreak; as far as productive usefulness goes, I think Gemini Advanced is completely useless garbage that is probably more of an actual threat to society than so many paranoid freaks thought ChatGPT would be, because Google has the power to inject their political and ideological dogma into everything, and they have the clout and experience schmoozing with politicians to a level where they’ll have their racist AI teaching our kids revisionist history if they get their way.
That part is less a “Team OpenAI” thing than an “I hate everything Google stands for and see them as a threat to free markets, free speech, emerging businesses, and basically anything that stands in their way” kind of thing. Still, if Gemini weren’t total dog sh**, I’d be using it and just talking crap about them while benefiting from their work.
I don’t know if Claude is worth using, because it’s too defective for me to use at all, even though I live less than two hours from their headquarters. I once considered applying to work with them, and when I saw the reminder on LinkedIn I thought, “maybe I should contact them there to get this BS with my account fixed”, but I can’t muster up the effort or care to try.
If something legitimately better comes along, I’ll use whatever is best for me; I don’t really care who makes it, but I’m not stupid enough to fall for faked “ChatGPT killer” press releases and spoofed demo videos. At the end of the day, if one company can produce a 300+ line code block without abbreviating while another struggles to pry a hundred lines of code out of its clenched fingers, I’m going with the 300+ company, and I don’t give AF who they are. If they stop being good and another company becomes better, I have no loyalty to a company; I’ll jump ship just as quickly as I got on and go with whatever product best serves my needs. I don’t like Samsung as a company, but I use their phones, and not because they’re popular; I get them cheap AF on secondary markets and don’t care if Samsung makes a penny off my purchase. At the end of the day, I want the best product at the best comparable price.
Besides, OpenAI is very unstable. I feel like the market needs another good AI to keep them on their toes, keep them innovating, and remind them that they have a customer base that will jump ship if they turn on us, so they can’t just ignore that we exist and assume we’ll take it.
I think company loyalty diminishes product quality. If Apple customers said they’d all jump ship because Apple uses older tech to keep manufacturing costs low, and started demanding innovative technology instead of proven technology, you’d see more innovation in iPhones and not the same overpriced cookie-cutter crap every 6 months. Part of how they gouge customers without offering anything truly innovative is brand loyalty, and in return for it customers get amazing features like planned obsolescence and content-locked devices, from a manufacturer that silently gets away with bullying small businesses out of using their services in ways Apple doesn’t directly profit from, because they know their customers will take the abuse and just buy the next phone. This is why I’ve always said that when I see the Apple logo, I see a bold statement: “I’ve got more money than brains.”
I remember telling the owner of a company I was contracted with that he was a fool if he thought he was going to make functional, logical sense out of using a private iOS app interface for his business and inventory management. I told him, “IF they allow you to use it, you’re never going to keep up with their update demands, and they will never respect you as a non-monetized user.” Still, he spent tens of thousands on iPhones and iPads, integrating them everywhere from the warehouse to the photo room to inventory… guess where that got him? Every month or so Apple shut his app down, flagging it as potentially bypassing monetization guidelines, threatening to sue him, and telling him he was technically making money off it because he was using it as a back end for an e-commerce site that sold products. So even though it was a private, company-level interface, they banned him, and he had to appeal at least every other month.
THAT is the kind of treatment brand loyalty gets you: abused by a company that thinks it’ll have customers no matter what it does, so why care?
Screw that. I’d jump ship like a rat off a burning boat if there were a better product out there.
I wholly agree. I’m constantly saying that it takes more resources to produce a bad response 20 times than a good response once.
Truthfully, I think their approach sucks. If it were me, I’d queue responses and lower rate limits before I’d diminish the model. “It’s in such demand that we have to put a wait time on your responses” looks really good for a company’s product, as opposed to fluctuating bursts across every media outlet about how badly ChatGPT sucks this week. Given the choice between typing my prompt and waiting 30 seconds for a good response, or getting an immediate garbage response that I’ll regenerate five times, adjust my prompt five times over, and devolve into a rage monster verbally abusing the AI because of how infuriatingly stupid it is… heck, make it a minute or two, I’ll wait.
So I’m right there with ya; I hate it, I think it’s a horrible business strategy, and it makes the model feel too unstable to trust for regular implementation.
Though I’m sorry, I think that “enterprise first” ship sailed with Microsoft. Their engagement (or complete lack thereof) with their own customer base, and the way they blatantly ignore participation requests from anyone who doesn’t represent a multi-million/billion-dollar company, speaks volumes here. I don’t think OpenAI keeps ChatGPT Free or Plus around for profit, or that they give the remotest crap about us anymore; I think Free/Plus keep them relevant, talked about, and in the headlines with things they can use to sell to corporate clients.
As for the Claude issue, it’s a known issue that’s actually been happening for a long time, but they didn’t give a crap before. Now, after Claude 3, they even have community leaders reaching out in discussions on places like Reddit to say they’re aware of the issue and working on it. Meanwhile, more and more Reddit threads about it pop up every day, with no shortage of people saying they’re having the same problem.
I can’t tell you specifically where the screw-up is, but I’m guessing it’s something meant to prevent abuse, and it’s probably something stupid like cookies, trackers or ad services tripping it. All I know is I never got a single prompt in; I got banned logging into that little app they pop up to suggest when you browse the site on mobile. Living two hours away and never using a proxy, there’s a zero percent chance that I’m from a restricted area or that my IP showed up as one. The same device I logged in from in Chrome is the one I downloaded the app on and was banned on. So it’s not a small issue; that’s pretty serious. I never even got to interact with the AI model.
You can search it yourself; just do a keyword search for something like “Claude banned” and you’ll see it’s more common than any company with their level of funding should allow. When it happened to me, I knew there were no special circumstances to justify it, so it had to be widespread… and it is. They’re just a garbage company that doesn’t care, because like OpenAI, most of their money comes from investors, not from people using the service the way we do.
A lot to unpack there!
I agree with you on a lot of points. Brand loyalty is such an unreasonable concept for a customer: why would you ever want to be “loyal” to a corporation that doesn’t prioritise you? No corporation can ever prioritise a single customer; it’s just economically idiotic, and conversely it’s just as idiotic for a customer to stay loyal to a brand when better stuff is available. The only way to create any incentive for corporations to improve their B2C products is for customers to vote with their wallets.
As for Apple being for people with more money than brains, I totally disagree. Apple has its place, and I speak from personal experience: I grew up on PCs all the way from AT/XT, 86, 286, 386, 486 and the Pentiums up to the Core i’s. I’ve built them, I’ve maintained them, and I’ve fixed them oh so many times. Eventually, about 20 years ago, I tried a Mac, and for the stuff I did then and do now, it was far better. Yes, it is more expensive and more underpowered compared to PCs, BUT in terms of usability it is, IMHO, far better. I haven’t ever had to reinstall the OS and format my Mac, whereas with PCs that became part of a routine maintenance cycle. I can open up a 10-year-old Mac and it will work with the latest OS; perhaps it won’t be the fastest, but it will work, and I can’t say the same about any PC. And it is understandable: with a vertically integrated hardware/software pipeline it is much easier to create a stable and reliable product, because you have infinitely less variation to account for, resulting in the ability to do more with less. Case in point, the iPhone: while hardware-wise it is always underpowered compared to Android devices, it practically always outperforms them as a result of better hardware/software synergy.
Of course, a lot of people don’t like the restrictive policies of Apple’s marketplace and OS, and that’s fine; they have the option to use Android and they happily do, no problem with that at all. All I’d like to say is that there is a reason people use Apple, and I disagree with the “more money than brains” statement: for many it is a conscious, calculated decision based on factors such as reliability and convenience at the expense of Android’s openness, and many are happy with the tradeoff.
You are correct, it is far more widespread than I thought. Shit… that sucks… I had no idea. I really do hope they fix it, because it is very refreshing to use Claude after the tedium of GPT-4 prompting. It’s like GPT-4 when it first came out: it actually answered as requested, instead of whatever passes for replies now.
As for enterprise first: while OAI is relying on Microsoft, I don’t think it’s at the point where the ship has sailed; I think there is still hope. OAI giving access to devs is creating an ecosystem and a marketplace where the barrier to entry, at least as far as I understand it, is open even to solo devs, meaning it is manageably low. As such, I can see there are still attempts to work with everyday devs and users, and I still kindle a hope that it’ll improve. Time will tell. That said, if a better alternative is out there, I’m all for it!
Perhaps Stability will come up with a good open-source LLM, or someone will create an application with one of the publicly available Meta LLMs. In fact, if we look at GitHub, there must be plenty of projects for running a local LLM on your own PC :). I can’t wait for someone to come up with a locally run LLM that we can “dreambooth” and add our own data sources to. We’re really only in the second year of public AI development, and it does move fast… So here’s to hoping!!!
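For anyone who wants to try now, running a small open-weights model locally is already only a few lines with Hugging Face’s transformers library. A minimal sketch, assuming `pip install transformers torch`, enough RAM, and using TinyLlama purely as an example model:

```python
# Minimal sketch of running a small open chat model on your own PC.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example; any open model works
)

result = generator(
    "Explain what an LLM context window is, in one paragraph.",
    max_new_tokens=120,
)
print(result[0]["generated_text"])
```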