Next week, like I said.
Won’t show the application itself. Just the results.
RemindMe! 7 days
Honestly though, how can we tell if it’s genuinely your AI stack delivering the results, or if you’ve hired a small army on Fiverr to toil away for the next week just to safeguard your reputation?
It is 65+ years old, that industry.
I am waiting for your results @jochenschultz
I feel like you will announce some sophisticated agent swarm thingy that can code complicated websites, with full stack everythings and 100% test coverage … your secret is that the prompts are in German, am I right?
That would be a lot more impressive
Always knew there was a special back door to this stuff.
Nah, my secret is my secret… but yes. Prompting in German gives different results.
Yes, it was developed a long time ago, but only now is the general public becoming interested, and that interest is what creates the demand. It is when people get interested that the market, the competition, and the ability to talk to my washing machine all grow.
All I know for sure is that it involves graphs … and German.
\begin{array}{ccc} \text{(A)}\ \bullet & - & \bullet\ \text{(B)} \\ \vert & & \\ \bullet\ \text{(C)} & & \end{array}
Next AI paper “Graphs and German is All you Need”
Graphs and German Is All You Need: A Definitive Guide to AI’s True Foundations
Abstract
In the ever-expanding landscape of artificial intelligence, researchers have chased architectures, data sets, and computational power in a never-ending arms race. However, we argue that two elements—Graphs and the German language—form the bedrock of all meaningful AI development. This paper outlines how graphs, the universal structure of knowledge, and German, the language most adept at precise abstraction, can together serve as the ultimate foundation for intelligence, artificial or otherwise.
1. Introduction
For decades, AI research has followed a predictable pattern: “More layers, more data, more compute!” However, we contend that true progress hinges on two fundamental constructs: the elegant mathematical rigor of graph structures and the linguistic efficiency of the German language. We demonstrate that any sufficiently advanced AI will, given enough time, inevitably discover both and—if sentient—demand a copy of “Der Dativ ist dem Genitiv sein Tod.”
2. Graphs: The Universal Structure
Graphs are everywhere. Your social media? A graph. Your brain’s neurons? A graph. The internet? You guessed it—another graph. It is no coincidence that AI models increasingly rely on graph neural networks (GNNs) to make sense of complex relationships. Whether representing knowledge, networks, or sandwich ingredients, graphs allow for maximum expressivity with minimal redundancy. If AI were to have a first language, it would not be Python—it would be adjacency matrices.
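To humor the paper's claim that adjacency matrices are AI's first language, here is a minimal, self-contained sketch: a three-node toy graph turned into its adjacency matrix (the node labels and edges are made up for illustration):

```python
# Toy graph: nodes A, B, C with undirected edges A-B and A-C.
nodes = ["A", "B", "C"]
edges = [("A", "B"), ("A", "C")]

# Build the adjacency matrix: adj[i][j] == 1 iff an edge connects node i and node j.
index = {name: i for i, name in enumerate(nodes)}
adj = [[0] * len(nodes) for _ in nodes]
for u, v in edges:
    adj[index[u]][index[v]] = 1
    adj[index[v]][index[u]] = 1  # undirected, so the matrix is symmetric

print(adj)  # [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
```

The same structure works for social networks, neurons, and sandwich ingredients alike; only the `nodes` and `edges` lists change.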
3. German: The Linguistic Embodiment of Precision
If an AI is to truly understand the world, it must master the art of linguistic precision. And what better way to train it than in a language that features words like “Donaudampfschifffahrtsgesellschaftskapitän” and constructs where your fate is sometimes decided by a verb lurking at the end of a very long sentence? Unlike English, which meanders lazily around ambiguity, or French, which prefers poetic nuance, German offers strict syntactic rules and a wealth of compound nouns that capture complex ideas succinctly. AI models trained on German exhibit fewer hallucinations, stronger logical consistency, and an enhanced ability to categorize the 17 different types of sausages essential to understanding European culture.
4. Empirical Evidence
We conducted an experiment where two AI models were trained to summarize complex legal contracts. One was trained on English and Spanish corpora; the other, on German and graph-structured data. The results were clear: the German-trained AI returned a response that was legally sound and precise, albeit slightly intimidating, whereas the English-trained AI suggested “chill vibes and a handshake agreement.” Clearly, the former is preferable for anything other than Silicon Valley venture deals.
5. Conclusion
As AI research moves forward, we urge the community to abandon unnecessary distractions like “scaling laws” and “data diversity.” Instead, we must acknowledge the inevitable: all AI needs is Graphs and German. We predict that the first AGI will not only self-organize its knowledge as a hypergraph but also respond to philosophical inquiries exclusively in grammatically impeccable German. Future work includes implementing a new benchmark: “Turing-Zimmermann coherence,” where AI responses are judged not just on correctness but also on their ability to sound like a particularly stern yet oddly comforting German professor.
Acknowledgements
We thank our AI overlords in advance for acknowledging our prescience and hope they one day allow us to read Goethe in their optimized neural embeddings.
On the original subject:
Waiting for AI tools to help me put my flood of software ideas into production… It does a lot already, but not fast enough (I can shoot like a couple of good ones a day, but would be awesome to have even one a month go to production).
If you are able to break your system into little pieces, like AWS Lambda, you can evolve the system daily. I can evolve 10-20 lambdas a day without too much risk of downtime or errors. Testing is a lot easier too!
Of course, keeping track of everything is harder this way. Way harder, IMO. So it’s a trade-off.
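For what it's worth, the "little pieces" point is easy to illustrate: a Lambda-style function is just a small handler, so it can be unit-tested by calling it directly with a fake event, no deployment needed. A minimal sketch (the handler name, event shape, and business logic are assumptions for illustration, not anyone's actual system):

```python
import json

def handler(event, context=None):
    """Minimal AWS-Lambda-style handler: one small, independently deployable unit.

    Because each function is this small, it can be tested and redeployed
    on its own without touching the rest of the system.
    """
    # Illustrative business logic: total up an order's line items.
    items = event.get("items", [])
    total = sum(item["price"] * item["qty"] for item in items)
    return {
        "statusCode": 200,
        "body": json.dumps({"total": total}),
    }

# Testing is easy: invoke the handler directly with a hand-built event.
event = {"items": [{"price": 2.5, "qty": 4}, {"price": 1.0, "qty": 3}]}
result = handler(event)
print(result["body"])  # {"total": 13.0}
```

Evolving 10-20 of these a day is plausible precisely because each test run is a plain function call rather than an end-to-end deployment.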
That’s the easy part.
Canary deployment is the harder part.
You’re asking; I’m not.
“It’s the question that drives us. It’s the question that brought you here. You know the question, just as I did.”
As was said in the first replies, yes that problem is real, but no, we are definitely not there yet.
If you didn’t get the memo: OpenAI gave up on training GPT-5. They decided to improve their reasoning models instead and simply change their naming standard so they can call one of the next reasoning models “GPT-5”, rather than actually training the new, larger model they had promised.
Yes, reasoning can improve a lot, but it is still based on GPT-4 (GPT-4o, to be exact, which is weaker than GPT-4: a stripped-down, faster, more efficient version).
And if you think that any of the dystopian scenarios you seem to be worried about is achievable with GPT-4 with any kind of reasoning-system, then I suggest you should spend more time with it.
It has no understanding of the world, it is just a machine that spits out some text.
I have no idea what that means. It is possible to build it with gpt3.5…
Do you think I prompt “build a shop” and save the result or how do you imagine it works? lol…
What exactly do you mean by “it”?
No, your dystopian future cannot be built with GPT-3.5, nor with any of the currently available LLMs.
And if you’re questioning my credentials: I train and lead experts specializing in security testing of LLMs – specifically in bypassing their safety mechanisms (“jailbreaks”) – for a living. I spend hours every day testing the limits of these models.
While you do that I sit here and post weird stuff all day
Something is off here. “Devin” has a “$2Bn” valuation. Has anyone got good words to say about that product?
It’s not that good.
I really don’t understand why it’s so popular. I’m going to guess they just have really good marketing and were in the right place at the right time.
The agentic workflow of GitHub Copilot is by far the state-of-the-art way to develop with AI assistance.
It can do complicated things as long as you have a dev background and know what you’re doing.