Question about AI and Python

Dear Anyone.

OK, truth time, I’m NOT a developer, just a computer lover who’s puzzled by this whole AI thing. Y’see, I’ve read AI code’s written in Python. Now Python’s about 20 years old, give or take, so it’s not like it’s a brand-new language designed to write brand-new systems called Artificial Intelligence. It’s a 20-year-old language.

Which tells me - if I’m wrong, tell me - that AI is more of an algorithm than something totally new. An algorithm written in a 20-year-old programming language. So why’s this algorithm not been thought of before in Python’s history? And, if it has been and I didn’t know about it (totally possible, I’m not an expert in any way, shape or form!), why’s it only just started being trumpeted? Why hasn’t it been going for over a decade, allowing Python 10 years to get started?

Yours curiously,

Chris, who’s not a programmer really or anything, just a computer lover who’s puzzled about this point.


As a starting point to re-frame your question, read the linked article (in a private browser window, for a good chance that you don’t get paywalled):

The recent breakthroughs in AI owe little to Python in particular: the real “supercomputer” is native GPU code running directly on specialized hardware, plus someone willing to spend billions in up-front investment on it. The sheer scaling of computational capacity for massively parallel vector mathematics from 2006 to 2026 rivals the leap in computing hardware from 1986 to 2006.

Python has been a staple in scientific computing for a long time, as it allows compiled-code performance behind an interpreted-language frontend. I first heard about a new thing called Python from a friend in CS closer to 30 years ago.
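To make that “compiled-code performance behind an interpreted frontend” concrete, here’s a minimal sketch using NumPy (my choice of example library, not something named in this thread): the same dot product written as a pure-Python loop and as a NumPy call. Both give the same answer, but the NumPy version hands the whole loop to compiled C code instead of stepping through the interpreter one addition at a time.

```python
import numpy as np

def dot_pure(a, b):
    # Pure-Python loop: every multiply and add goes through the interpreter.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_numpy(a, b):
    # Same computation, but the loop runs inside NumPy's compiled C code.
    return float(np.dot(a, b))

a = [0.5] * 1000
b = [2.0] * 1000
print(dot_pure(a, b))                        # 1000.0
print(dot_numpy(np.array(a), np.array(b)))   # 1000.0
```

On large arrays the NumPy version is typically orders of magnitude faster, which is the whole trick: Python acts as the convenient front end while the heavy numeric work runs as native code. The big deep-learning frameworks push the same idea further, dispatching to GPU kernels.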


Dear -j

Didn’t realise that, and thanks for the fascinating article (which I read in Firefox in one of its Private Windows!). It still seems to have jumped out of nowhere over the last few months. I know I shouldn’t say this, maybe, but I love the ‘hallucinations’ aspect of it - am I right in thinking that’s because it can’t REALLY know what reality is, only whether the steps to get it from Human Thought A to Human Thought B are logical and work? Whether or not they’re practical in what humans call Reality is something it can’t know, only whether the logic works. So humans have to prune that logic-step-set away and let it try again, hoping the next set of steps it comes up with IS more real-world practical.

Or the set of steps it comes up with, whilst seeming off-the-wall, is a ‘Hmm. Maybe, just maybe…!’ moment that the human asking the question could try out and, if it works, could open a whole new path. Wonder if that’s happened much.

Thanks for answering me - you’re the only one who has, thus far. Feel free to answer the above semi-question, or not, depending on whether your time allows. The article you showed me was very interesting.

Yours respectfully,

Chris.
