Is online AI practical long term?

I think it’s worth discussing whether it’s practical to offer all users of GPT-4 and Claude access to their data analysis capabilities. I say this because Code Interpreter has been very glitchy over the last week and unable to answer complex questions, while Claude’s file size limit has been reduced.

If you consider the processing power required to run software like Code Interpreter on your own PC, you’d need a high-end system with lots of RAM.

It is surely not economical for OpenAI or Anthropic to offer all their users virtual PCs worth thousands when they’re only charging them 20 bucks a month, or nothing at all.

Even where OpenAI and others are charging for their APIs, there is surely a problem: there has to be an upper limit to API capacity.

GPT-4 suggested the other day that this limit has yet to be reached when it kept failing to process a series of CSV files for me (ranging from 20KB to 25MB). It said “I’m sorry for the inconvenience. There seems to be a persistent issue with the code execution environment used by this platform. It’s not about traffic volume, but rather a technical glitch. I can assure you that OpenAI is actively working to resolve such issues, but I don’t have a specific timeline for when this particular issue will be resolved. In the meantime, I’m able to provide support and guidance for working with data in various tools like Excel, Google Sheets, and even programming languages like Python and R. While I may not be able to directly execute Python code at the moment, I can still help design algorithms, debug code, and explain concepts.”

I’m not particularly concerned about this problem, as I am more interested in writing than data analysis. However, does this not all suggest an eventual move away from online AI to personal AI, similar to the move from mainframes to personal computers that happened in the 1980s? Why is nobody talking about this?

It will take some time before politics catches up to engineering.

I’d imagine that they are more interested in capturing the market and creating their “moat” before they start to profit.

I believe we will head this way, with households being the last step to become “smart”. It’s a bridge that’s slowly being constructed. GPT does a fantastic job with semantics, but things like Google Home (although frustratingly stupid) have always done a good job routing your request to the proper device.

That, along with Matter, means we can safely build smart home products without tying ourselves down to a single provider.

So, totally. Once we can host our own LLMs, it’s going to spread like wildfire along with smart homes. Which is what I believe Google is waiting for. I was a certified installer for them, and it was embarrassing how little effort they put towards it.

Bit of a tangent, but I really hope we move away from centralization. I’m a huge supporter of the Fediverse for this reason. Not only do we get more control over our data and the licensing rights that come with it, but we can use the existing PCs/mainframes in our houses to contribute to these platforms.

This is a cool Elo-style leaderboard for LLMs. Vicuna has only 13 billion parameters and sits at #6, while GPT-3.5-turbo is #4.

I really don’t get the hype about Code Interpreter.

It literally does the same thing as GPT-4, except that with GPT-4 you run the code yourself on dedicated machines with more memory / GPU / etc.

It was an amusing toy to play with, but I rapidly realized how useless it is for anything I’m doing.


According to an IDC study, there are around 18.5 million people with coding skills on Earth; that’s about 0.2% of the population. For the other 99.8% of people, who are unable to code up an API interface with agent features, Code Interpreter gives an introduction to playing with code in a sandbox environment. I think it’s awesome.

I’m with you on this one. I have no idea how to code and find the abilities of Code Interpreter and Claude really helpful in this area. However, I’ve come to understand that you cannot ask either of them a question involving a large dataset. You need to trim down what you feed them to the absolute minimum. What I find helpful is that for larger datasets GPT-4 and Claude can look at a small sample and then give me Excel formulas to get to the answers I want in the whole dataset. Obviously, neither of them is ever going to outperform someone who knows how to code. The volume of the task would defeat them. Equally obvious, neither is going to perform as well at coding or math as they do in tasks involving language. But, compared to where we were a year ago, the progress is astonishing.
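As a concrete example of that trimming step, here’s a minimal sketch (assuming Python with pandas; the file name and the 100-row sample size are placeholders I made up) of cutting a big CSV down to a sample small enough to upload:

```python
import pandas as pd

# Read only the first 100 rows of a large CSV instead of the whole file.
# "sales.csv" and the sample size are hypothetical placeholders.
sample = pd.read_csv("sales.csv", nrows=100)

# Save the sample; upload this instead of the full dataset, then ask the
# model for Excel formulas that generalise to every row of the original.
sample.to_csv("sales_sample.csv", index=False)
print(sample.head())
```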

Most likely because you already have the skills. I have been introducing Code Interpreter to multiple departments of multiple industries and they love it. It sure beats paying for an ugly, highly opinionated “report” (see: data overload) every month. Now, people such as myself can send raw data and the departments can literally “ask” for the data that they wanted, presented in the way that they want, right before their eyes.

Just like the majority of fields that AI can assist in, the idea is that someone with a vision can implement it without having the necessary skillset. This, in turn, means that the professionals with the skillsets can focus on bigger, higher-level projects.

So, I don’t have to focus on making silly graphs or building dashboards. Even better, I don’t have to consider such low-level concepts, as AI can usually handle them for the user. That alone is a huge win for me.

Have you tried breaking up your files into chunks?
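If it helps, a rough sketch of what I mean (Python with pandas; “data.csv” and the 5,000-row chunk size are arbitrary placeholders), splitting a big CSV into pieces small enough to upload one at a time:

```python
import pandas as pd

# Stream the file in fixed-size pieces rather than loading it all at once.
for i, chunk in enumerate(pd.read_csv("data.csv", chunksize=5000)):
    chunk.to_csv(f"data_part_{i}.csv", index=False)
    print(f"wrote data_part_{i}.csv ({len(chunk)} rows)")
```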

The problem is not “how do I write syntactically correct python.”
The problem is “how do I think about a problem such that a computer can answer it.”
People aren’t going to suddenly acquire that skill just because they can express their thoughts in words. The thoughts are still going to be ambiguous and underspecified.

The history of computing is full of each generation learning this lesson over again. 4GL database systems. Spreadsheets. Low-code environments. COBOL.

I believe the answer is: You make an effort to train your users. Many here disagree with that. But, just as our ability to keyword search evolved over time, I think we could shorten that time with semantic search by empowering users with the knowledge they need. And, I think the first step in that journey is making them understand that AI isn’t some mystical, magical wizard. It’s a big, dumb machine with absolutely no memory or knowledge of anything you’re talking about. Start there, and you immediately change the interaction paradigm.

I have young-adult children who actively resist attempts at training in keyword search, so I wouldn’t say that that’s a “generally solved problem.”


GPT4 is great and levels the playing field for sure, my point is about the code interpreter. What does it do that GPT4 doesn’t other than add a convenience layer? And one that very frequently breaks down…

Better for new folks, I think, would be to ask it for some spreadsheet macros and copy/paste those. Same thing, and you learn something to boot.

It’s able to take your request, use the data you have uploaded, and then iteratively refine its own tool building to provide a solution. That is a fairly impressive demonstration of agency.
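The loop is roughly: write code, run it, read the error, try again. A toy sketch of the idea in Python (the `ask_model` function is purely hypothetical, standing in for whatever LLM API you call; this is not OpenAI’s actual implementation):

```python
import traceback

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM API."""
    raise NotImplementedError("plug in your own model call here")

def solve_with_retries(task: str, max_attempts: int = 3) -> None:
    prompt = f"Write Python code to do this: {task}"
    for _ in range(max_attempts):
        code = ask_model(prompt)
        try:
            exec(code)  # Code Interpreter runs this in a sandbox, not locally
            return      # success: stop iterating
        except Exception:
            # Feed the traceback back so the model can refine its own code.
            prompt = (f"This code failed:\n{code}\n"
                      f"Error:\n{traceback.format_exc()}\nPlease fix it.")
    print(f"gave up after {max_attempts} attempts")
```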


Actually, to be honest, people who use the code interpreter are shooting themselves in the foot.

This is called the enfeeblement risk of AI.

Use GPT4 and try to learn something while you’re doing your work, folks. That is the real payoff of all this tech - I kid you not.

This exact same argument was used during the introduction of the calculator in the ’80s. It was not true then and it’s not true now; it’s a tool for augmenting skills.

Nope, that is a false analogy. Calculator != GPT4.

Also, you’ll notice how everyone is still forced to learn times tables and long division/multiplication. There is a very good reason for that.

I encourage you to read more about enfeeblement risk.

Not learning about the nuances of your work and tasks is just cargo cult. It’s like someone following a recipe without tasting their sauce.

It’s like owning a car and not understanding anything about how it works. If it’s worth shelling out $$$ it’s worth taking the time to learn a little about it.

If a task is worth doing, it’s worth understanding how it was done.

It’s not just GPT4, it’s anything you might do.

Seriously - take the time. Learn. That is your real value, now and always.

For me it’s exactly the convenience & control layer for clients to explore large amounts of data. They don’t want to learn how to code in Python. I hate spending time trying to guess what they want and making my work presentable and… honestly… fashionable :roll_eyes:

They also don’t care about the data that they, well… don’t care for. I can create all these statistics and predictions that are, in my opinion, very cool and insightful, but they just don’t care.

Instead of being opinionated and trying to shape my work for them, I can just give them all the data; they can literally ask for it to be explored in numerous ways and feel like they did it all themselves.

These types of clients dream of telling a program to do what they want, and it does it, and it’s easy.

I think the fact that they renamed Code Interpreter to “Advanced Data Analysis” at the same time as Enterprise kind of hints at who the target is.


I know there have been some studies of who benefits most from AI pointing towards low-skill workers being advantaged, but it’s probably too early to tell. In my experience, the opposite is the case: low-skill workers are rarely able to abstract/understand how an LLM works, and so they struggle to elicit desirable behavior from it. Adding another tool on top of that (Code Interpreter) only makes the problem more acute.

It’s a simple interview question: do you find GPT-4 game-changing, and what is your most important use case?

If they don’t answer “yes” and “to learn”, I filter them out immediately.

I do not think there will be a huge move to locally run models. Everywhere else, the movement seems to be from local processing to cloud processing. Cost can be an issue, but after all we are just sending a bit of text and getting text back. The amount of data users send to ChatGPT or Claude is not actually very much if we compare it to data storage services, which also encrypt everything. Of course encryption is faster than inference, but there the amount of data per month can even exceed hundreds of gigabytes.

Anyway, I think local processing will be available soon. How soon? I think two factors are in play. The first is better small, high-quality models. Let’s say we can go 90% down in 5 years: 220B => 22B. My wild guess is that it should be doable with a smart combination of multiple small models. Just a guess though… In some contexts we already see good performance in local models, but in my experience they are far from the capability of GPT-3.5 in diverse tasks.

The second is GPU capacity. I think we can expect GPU memory to increase faster, but to be safe we can look at the maximum capacity in the Nvidia RTX series: about 2x in 5 years (12 GB => 24 GB).

Someone could calculate more accurate estimates following the above logic, but I think in 5 years we will see GPT-3.5-level performance, probably more polished, on local machines costing the same as a 4090 does today.
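Back-of-the-envelope arithmetic for that guess (my own assumptions: 2 bytes per weight at fp16, 0.5 bytes at 4-bit quantization; activations and KV cache add overhead on top):

```python
# Would a 22B-parameter model fit in a 24 GB consumer GPU?
params = 22e9

fp16_gb = params * 2 / 1e9    # 2 bytes per weight at fp16
int4_gb = params * 0.5 / 1e9  # 0.5 bytes per weight at 4-bit quantization

print(f"fp16:  {fp16_gb:.0f} GB")  # ~44 GB -> does not fit in 24 GB
print(f"4-bit: {int4_gb:.0f} GB")  # ~11 GB -> fits, with room for KV cache
```

So on paper the 22B-in-24-GB scenario only works out if quantization holds up.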

The main question is whether ChatGPT’s virality was a black swan event, and whether GPU manufacturers will compete to bring out consumer devices able to run big models faster.