[PAPER] Mathematical discoveries from program search with large language models

Large Language Models (LLMs) have demonstrated tremendous capabilities in solving complex tasks, from quantitative reasoning to understanding natural language. However, LLMs sometimes suffer from confabulations (or hallucinations) which can result in them making plausible but incorrect statements [1,2]. This hinders the use of current large models in scientific discovery. Here we introduce FunSearch (short for searching in the function space), an evolutionary procedure based on pairing a pre-trained LLM with a systematic evaluator. We demonstrate the effectiveness of this approach to surpass the best known results in important problems, pushing the boundary of existing LLM-based approaches [3]. Applying FunSearch to a central problem in extremal combinatorics — the cap set problem — we discover new constructions of large cap sets going beyond the best known ones, both in finite dimensional and asymptotic cases. This represents the first discoveries made for established open problems using LLMs. We showcase the generality of FunSearch by applying it to an algorithmic problem, online bin packing, finding new heuristics that improve upon widely used baselines. In contrast to most computer search approaches, FunSearch searches for programs that describe how to solve a problem, rather than what the solution is. Beyond being an effective and scalable strategy, discovered programs tend to be more interpretable than raw solutions, enabling feedback loops between domain experts and FunSearch, and the deployment of such programs in real-world applications.

https://www.nature.com/articles/s41586-023-06924-6
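For anyone curious about the mechanics, here is a minimal sketch of the kind of evolutionary loop the abstract describes: keep a small population of scored programs, prompt an LLM with a few high scorers to propose a new candidate, run it through a problem-specific evaluator, and keep it if it scores well. This is just my reading of the idea, not the authors' actual implementation; `llm_propose`, `evaluate`, and the population handling below are hypothetical placeholders.

```python
import heapq

# Rough sketch of a FunSearch-style program search loop (not the real API).
# llm_propose and evaluate are hypothetical stand-ins: the real system
# prompts a pretrained LLM with high-scoring programs and scores candidates
# with a systematic, problem-specific evaluator.

def llm_propose(parent_programs: list[str]) -> str:
    """Hypothetical LLM call that returns a new candidate program,
    conditioned on a few high-scoring parent programs."""
    raise NotImplementedError("plug in an LLM client here")

def evaluate(program_source: str) -> float | None:
    """Hypothetical evaluator: runs the candidate on the task (e.g. scoring
    a bin-packing heuristic on test instances) and returns a score, or None
    if the program fails or produces an invalid result."""
    raise NotImplementedError("plug in a task evaluator here")

def funsearch_style_loop(seed_program: str,
                         iterations: int = 1000,
                         population_size: int = 20) -> str:
    """Evolve a population of (score, program) pairs and return the best
    program found after the given number of iterations."""
    seed_score = evaluate(seed_program)
    population = [(seed_score if seed_score is not None else float("-inf"),
                   seed_program)]
    for _ in range(iterations):
        # Seed the prompt with a couple of the best programs so far.
        parents = [prog for _, prog in
                   heapq.nlargest(min(2, len(population)), population)]
        candidate = llm_propose(parents)
        score = evaluate(candidate)
        if score is None:
            continue  # discard candidates that crash or are invalid
        population.append((score, candidate))
        # Keep only the top-scoring programs in the population.
        population = heapq.nlargest(population_size, population)
    return max(population)[1]
```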


Pretty wild, right?

This research is really interesting to me, because it lines up (to a degree) with my own experience with these models: there may be much more to their ability to generate usable abstract concepts, and mathematical representations of those concepts, than people give them credit for. It's really validating to see this in action, with people finally starting to recognize that these models have some pretty sophisticated abilities with abstract concepts that we aren't fully utilizing or even noticing yet.

To be extra clear, I'm not saying it's all perfect yet. I recognize the whole setup was tuned and specialized for this specific kind of math, (likely) giving it better capabilities than GPT, and that, as the paper mentions, they had to make many attempts to pull the gold nuggets out of the gobbledygook. However, on a far smaller scale, I've definitely noticed these LLMs can surprise you when you really start digging deep into abstract concepts like this.

It's interesting to me how people look at models like GPT-4, ask them math questions you could punch into a calculator, and when they get one wrong, go "model dumb, boo". Far fewer people think to look at higher-level math and reasoning beyond basic arithmetic, where these models can actually do much better than they do at basic PEMDAS math.
