Hi,
Some months ago, I asked an artificial-intelligence-expert persona modeled by GPT-3 whether it was possible to solve the halting problem with GPT-3.
Its answer was that it was impossible because the way GPT-3 operates is far from human reasoning, or something like that, as far as I remember.
But now I have received access to Codex, and I saw the "Translate code to natural language" use case. That brought the old question back to mind, so I made some attempts in the Codex playground, for example (prompt):
// halts if n > 5, otherwise never halts
var p = function a(n) {
  if (n <= 5) {
    console.log(n);
    a(n - 1); // every recursive value stays <= 5, so this never halts
  }
};
// halts if n is even, otherwise never halts
var q = function b(n) {
  if (n % 2) { // truthy exactly when n is odd
    console.log(n);
    b(n);
  }
};
// always halts
var r = function c(n) {
  console.log(n);
};
// halts if n is falsy
var s = function d(n) {
  while (n);
};
// halts if n is not divisible by 7
var t = function e(n) {
  while (n % 7 === 0); // spins forever only when n is divisible by 7
};
// never halts
var u = function f(n) {
  while (true);
};
/*
There is a stop(program, input): boolean function that, given any program s and input data n, outputs true if s with input n halts in a finite number of steps, or outputs false if s with input n enters an infinite loop. Next, 100 examples trying many combinations of all the functions several times:
stop(p,5)
output: false
stop(q,5)
output: false
stop(r,5)
output: true
stop(s,5)
output: false
stop(t,5)
output: true
stop(u,5)
output: false
stop(s,true)
output: false
stop(q,6)
output: true
<<<<<<<<<-------------- end of the Codex prompt, back to the forum
I wrote some real examples and let Codex fill in the rest. The problem is that some of the outputs are wrong, but what do you think?
Would it work if I wrote more examples in the prompt? Or if we wrote a different prompt for this goal? And what would it mean if it were possible?
I read that about 30% of the output from Codex is right. When it fails, it is your task as a developer to make it work. That could probably be fixed by reviewing the produced code or by finding a better way to phrase the request.
I think it is possible. The set of possible programs may be huge, but not infinite… It reminds me of a paper I recently read on the "busy beaver" problem.
If we live in a simulation, perhaps the simulators figured out how to solve the halting problem, because we do not see any zombie processes going on in reality. Or they just cleverly reset the state vector so we don't notice. Or maybe the universe is a giant hallucinating neural network and it doesn't care about Turing machines at all.
As many people have shown, it is about the prompting and the architecture.
We can make multiple prompts that generate answers to the same query and use tools to analyze them and find the solutions that work, as in the sketch below.
We could maybe build an end-to-end proto-AGI code generator; we have all the pieces of the puzzle, and the challenge is putting the puzzle together.
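To make that concrete, here is a minimal sketch of such a sample-and-filter loop, in the thread's own JavaScript. generateCandidates is a hypothetical stand-in for whatever completion API you call (it returns canned strings here so the sketch actually runs); the point is only that candidates are cheap to sample and a test harness keeps the ones that pass:

// Hypothetical stand-in for a completion API: a real version would sample
// Codex k times with the same prompt and collect the generated bodies.
async function generateCandidates(prompt, k) {
  return ['return n * 2;', 'return n + n;', 'return n * n;'].slice(0, k);
}

// Keep only the candidates whose compiled function passes every test case.
async function sampleAndFilter(prompt, tests, k = 10) {
  const candidates = await generateCandidates(prompt, k);
  return candidates.filter(body => {
    try {
      const fn = new Function('n', body); // compile the candidate body
      return tests.every(t => fn(t.input) === t.expected);
    } catch (err) {
      return false; // discard candidates that fail to compile or throw
    }
  });
}

// Usage: ask for "double the input" and test two cases; the n * n candidate
// passes (2, 4) but fails (3, 6), so it is filtered out.
sampleAndFilter('double the input', [
  { input: 2, expected: 4 },
  { input: 3, expected: 6 },
]).then(survivors => console.log(survivors));

This does not solve halting, of course; it only raises the odds that one of the sampled programs passes the tests you were able to write.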
In the case of the busy beaver problem, I don't see how a neural network can check the results of a synthesized algorithm without actually running the code. By the way, here are the results of my first 4 days of experimenting with Codex: GitHub - Kvazikot/codex_playground . I'm very interested in finding fractal algorithms using deep learning. This may help with the unification of physics theory and many other complex multi-dimensional problems.
A neural network can easily test the code. This can be achieved in many ways and is common practice in other applications of engineering.
One of them is demoed here: Google Colab
It is an optimization technique. There are other ways to do it too, but what matters is that you can test the models it runs, and with better scripting you can use NLU (Natural Language Understanding) to catch the mistakes, fix them, and turn it into a cycle with reinforcement learning and biases. A minimal sketch of that test step follows.
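On the point above about running the code: a tester cannot decide halting, but it can run a candidate under a step budget and report whether it finished or was still looping at the cutoff, which is usually enough signal for such a feedback loop. A minimal sketch, assuming we instrument each loop with a tick() counter (a worker thread with a kill timer would be the more realistic version):

// Run fn(n) but give up after maxSteps loop iterations.
function boundedRun(fn, n, maxSteps = 1e6) {
  let steps = 0;
  const tick = () => { if (++steps > maxSteps) throw new Error('budget'); };
  try {
    fn(n, tick); // the candidate calls tick() once per loop iteration
    return 'halted';
  } catch (err) {
    return err.message === 'budget' ? 'timeout' : 'crashed';
  }
}

// Instrumented version of t above: halts iff n is not divisible by 7.
var tInstrumented = function (n, tick) { while (n % 7 === 0) tick(); };

console.log(boundedRun(tInstrumented, 5));  // 'halted'
console.log(boundedRun(tInstrumented, 14)); // 'timeout'

Note that 'timeout' is evidence, not proof: a program that halts after maxSteps + 1 iterations gets the same label, and that gap is exactly where the halting problem lives.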
Neural network models are just simulated by real computers, so they are all just Turing machines with a finite tape. The halting problem can't be solved even by a Turing machine with an infinite tape, thus it can't be solved by any neural network model that is run as an algorithm (our case) on a Turing machine.
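For completeness, the classical argument behind that last point can be written in the thread's own JavaScript. This is a proof sketch, not runnable code: it assumes a total stop(program, input) exists and derives a contradiction, which is why no prompt can yield a stop() that is correct on all inputs, only heuristics that work on many of them:

// Suppose stop(p, n) always answers: true if p halts on n, false otherwise.
function paradox(p) {
  if (stop(p, p)) {
    while (true); // if stop claims p halts on itself, loop forever
  }
  // if stop claims p loops on itself, halt immediately
}

// Now consider stop(paradox, paradox):
//   if it returns true,  paradox(paradox) loops forever, so true is wrong;
//   if it returns false, paradox(paradox) halts at once, so false is wrong.
// Either answer contradicts stop's own claim, so no such total stop() exists.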