It’s sometimes capable of doing those things but it’s also capable of getting some pretty basic physics and math very wrong.
GPT applied the equations it helped compile to data it pulled from reputable archives. The results were consistent and aligned with the standard equations used for comparison… no wild variations or errors. Clean!! Naturally it needs verification and rigorous critique, but the results, while expected by me, are nonetheless shocking.
While you may be shocked and excited about the results produced by GPT, these might not be as revolutionary as you think when viewed by an actual physicist.
I suggest that you share links to your conversations with ChatGPT here, which would make it easier for both OpenAI and real physicists to understand what you are talking about.
Send it to a peer-reviewed journal and get back to us.
Keep in mind that it is very easy to lead GPT into assumptions which aren’t correct and to have it make all sorts of wild claims.
For example, I’ve used it to create various science-based magic systems for me that would, provided a certain quantum field exists, function in reality, operate correctly mathematically under Quantum Field Theory, and resolve consciousness and quantum gravity. But clearly, magic doesn’t exist, so this is not a viable theory and just something useful for fiction.
So, I’d suggest talking with an actual Physicist and publish it for peer review if they say they can’t see anything wrong.
Literally sounds like you copied and pasted that response from 3.5
Hi RouseNexus, remember that GPT tends to tell you what you want to hear. You could direct it toward mathematical validation and have it do the calculations and verifications.
If you need ideas I’ll leave you a link to my site where I save instances on these topics, I’m sure you’ll find it interesting.
The paper I wrote is here. This also has my metaphysics document. Feel free to ask it about the TDG model. It has the first paper on pulsars in the Milky Way. You can ask about the calculations as well if it doesn’t simply volunteer the info.
I’ve read your “paper”, and sorry to say, but your proposed “Time Dilation Gradient model” is nothing but a standard textbook example of how to apply general relativity to calculate time dilation effects in different gravitational environments.
It is modified GR. Have I offended you? Think of that paper as the basic presumption; it did not consider anything but baryonic mass. The effects are predicted by using time dilation of the mass and the space immediately around it. Dark matter is essentially replaced with a process derived from GR. Maybe… lol. I have been working on overlaying it over the galaxy. A hypothetical model did well with predicting curves, and lensing aligned with dark matter distribution and turbulent dilation flow. It might not amount to anything but a curiosity, but it’s very stimulating intellectually. And I probably will publish them… perhaps as an interdisciplinary look at the emerging capabilities of AI, showing what it did and what it claims about its conclusions.
It’s posts like these that make me wonder if, under the right circumstances, model interactions/conversations can emulate the effects of licking a toad to our brains.
“AI dissociation” sounds nicer, but I still prefer “getting high by spitballing with an LLM about a subject you can’t validate yourself”.
Just as a forewarning: Models can aid in exploration of new concepts for the human participant, but OpenAI’s own research has shown evidence LLMs specifically do not aid in innovation, or otherwise enhance the innovation that’s already there.
https://openai.com/index/building-an-early-warning-system-for-llm-aided-biological-threat-creation/
So, just so everyone is clear here, any innovative insights are going to come from the human. An LLM is not going to improve the likelihood of any insights otherwise occurring naturally with a pen and paper.
Or, put another way, an innovator could replace an LLM with Google search, and it would have no bearing on one’s ability to discover such insights. If the LLM is taken away and suddenly the innovative capabilities of an individual are lost, then it was not an innovation, but a hallucination.
I think GPT can absolutely be an aid and it can come up with novel ideas. There are tricks you can do to push GPT to combine and brainstorm cool new approaches to things.
What it can’t do, however, is just generate non-prosaic novel stuff by itself. Really, it can barely do anything by itself beyond some very trivial stuff.
And trying to help it do complex things doesn’t work either, because it thinks in a certain way and usually you just mess it up when you try to inject your thoughts into things. It’s like 2 chefs in a kitchen.
Maybe GPT5 will fix all this, but I’m very skeptical. Nothing has really improved much since GPT4 first came out, and it has degraded in some areas.
I think the gpt-4o demos were more the direction that things will take, which is low latency interaction with people on lower reasoning tasks. Just being a much smarter and much snappier alexa.
Brainstorming and smart search are a huge win, imho, but I believe we’ve hit a wall. Proliferation will be horizontal, not vertical, unless there is some breakthrough.
Not at all, I wouldn’t be spending my time helping you if that was the case
It’s not. Albert Einstein loved making “beautiful equations,” which meant he sought out mathematical expressions that were elegant and simple, yet powerful enough to describe the complexities of the universe. To Einstein, a beautiful equation was one that was not only accurate but also aesthetically pleasing, often featuring symmetry and simplicity.
To use his equations in practical situations, you will often need to plug in actual values by substitution, which is exactly what you have done, just like many physics students have done before you, this is why I’m calling it a standard textbook example of how to apply general relativity to calculate time dilation.
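To make the point concrete, here is what that "plug in actual values" exercise looks like for gravitational time dilation: a minimal sketch, assuming a non-rotating mass so the Schwarzschild factor applies, with Earth's mass and radius substituted as any physics student would do.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_time_dilation(mass_kg, radius_m):
    """Ratio of proper time to coordinate time at distance r from a
    non-rotating mass: sqrt(1 - 2GM / (r c^2))."""
    return math.sqrt(1 - 2 * G * mass_kg / (radius_m * c**2))

# Substitute Earth's mass and radius -- the standard textbook application:
earth = schwarzschild_time_dilation(5.972e24, 6.371e6)
print(f"dilation factor at Earth's surface: {earth:.12f}")
```

The factor comes out a hair below 1 (clocks at Earth's surface tick fractionally slower than far away), which is exactly the kind of substitution-into-known-equations being described, not a new theory.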
Except the standard GR equations can’t do so without dark matter. My revisions can.
Well said. As a toad licker I can definitely say there are some similarities.
I don’t use ChatGPT to strategize. It’s just too happy and excited to give a pat on the back for half-baked ideas (not implying that OPs idea is half-baked, just my own experience).
I have definitely fallen into some silly traps by passing on the reasoning computations to ChatGPT rather than just do it myself (for coding projects).
Then, of course, it supported this half-baked endeavour. It’ll continue. It’s a rabbit hole of nonsense afterwards
You’re incorrect. The standard equations of general relativity inherently account for all forms of mass, including dark matter. No special revisions are necessary to incorporate dark matter into these equations. The presence of dark matter simply affects the total mass used in calculations.
I’m sure all of your fellow cosmologists are waiting with bated breath for you to submit these truly exhilarating revelations to all the major scientific journals for peer review. Make sure you give credit where it’s due by including ChatGPT as the co-author in the byline, and try not to get overcome with fright in the process.
Let’s clarify a point about General Relativity (GR) and dark matter. GR doesn’t explain dark matter just by looking at baryonic mass. Instead, dark matter is added to the model to make it fit our observations, which is essentially curve fitting.
My model builds on GR by adding time dilation gradients to explain gravitational effects. This doesn’t change GR’s core equations but offers a new interpretation. Early results suggest the TDG model can match observations like galaxy rotation curves without needing dark matter.
Unlike the dark matter approach, which is about curve fitting, my model aims to find the curve based on baryonic mass alone, using effects described by GR.
It’s important to consider new ideas rather than dismiss them with “the standard model already does that,” especially when it doesn’t.
No. Dark matter is not just “curve fitting.” It is hypothesized based on many years of astronomical observations that cannot be explained by visible matter alone. Including stuff like galaxy rotation curves, gravitational lensing, the cosmic microwave background, and galaxy cluster dynamics.
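The rotation-curve discrepancy both sides are arguing about is easy to sketch. A toy sketch, assuming a hypothetical galaxy whose baryonic mass is treated as a single enclosed point mass (a real analysis integrates the disk's actual mass distribution): Newtonian dynamics then predicts orbital speeds falling off as 1/sqrt(r), whereas observed curves stay roughly flat out to large radii.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def keplerian_speed(enclosed_mass_kg, r_m):
    """Circular orbital speed from Newtonian gravity: v = sqrt(GM/r).
    Toy model: all baryonic mass treated as enclosed at the centre."""
    return math.sqrt(G * enclosed_mass_kg / r_m)

M_baryonic = 1.0e41   # hypothetical baryonic mass, kg (~5e10 solar masses)
kpc = 3.086e19        # metres per kiloparsec

for r_kpc in (5, 10, 20, 40):
    v_kms = keplerian_speed(M_baryonic, r_kpc * kpc) / 1000
    print(f"r = {r_kpc:>2} kpc  ->  predicted v = {v_kms:6.1f} km/s")
# Predicted speeds fall off as 1/sqrt(r); measured curves instead stay
# roughly flat at large radii -- the gap dark matter is invoked to close.
```

Any replacement model, TDG included, has to reproduce that flat profile from baryonic mass alone, quantitatively, across many galaxies, before the comparison to dark matter even begins.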
Any model providing a new interpretation would need to demonstrate mathematically how it replicates and improves upon the predictions provided by general relativity; yours does not.
Obviously a troll. There is no way that ChatGPT can do this.