In general, it absolutely does. I'm not sure what you're trying to say.
(1) Yes, there are classes of programs for which you can say whether it halts
(2) Yes, there are programs that do not fall into those classes but can nevertheless be shown to halt
But the point remains... there's no 'general' way to show that any program halts.
Part of the point of good PL design is to produce a language that is amenable to analysis, including halting analysis. Some languages do this better than others. Others are okay so long as you make particular assumptions. The halting problem tells us that the languages that are perfectly analyzable are categorically less powerful than the ones that are not. However, this may be 'enough' for us.
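A toy illustration of that trade-off (Python standing in as pseudocode; the function names are mine): a language fragment restricted to bounded `for` loops is trivially total, so halting analysis is free, but it can't express an unbounded search, which is exactly where the extra power and the undecidability both come from.

```python
# Total fragment: only bounded iteration, so every program terminates
# by construction -- the loop bound is fixed before the loop starts.
def add(a, b):
    for _ in range(b):
        a += 1
    return a

# Unrestricted fragment: an unbounded search. Whether this halts depends
# entirely on `pred`, and no general analysis can decide that.
def first_failure(pred):
    n = 0
    while pred(n):  # may run forever
        n += 1
    return n
```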
> In general, it absolutely does. I'm not sure what you're trying to say.
I can see that you don't, so I'll try to be clearer. For any given program one might choose there is no reason in principle why a competent computing scientist can't perform a semantic analysis for every statement and then deduce from that whether it is totally or partially correct. Obviously most programs in the set of all programs are too long for a computing scientist to read in a human lifetime, but that's another issue entirely.
As a matter of practicality though, the working computing scientist is far more likely to first decide on the properties he wants his program to have and then derive a program that has them. This is yet another case of construction being considerably easier than verification.
This points to (but doesn't prove) the possibility that the human computing scientist isn't using a decision procedure to analyze (or construct) the program.
About here is where I expect to see some misinterpretation of Church-Turing as somehow proving that everything, including computing scientists, is a Turing machine brought up. Oddly, rarely is the far more interesting Curry-Howard correspondence mentioned.
I'm sure you know this, but even very simple, short programs can have complex behaviours that put them out of reach of computer scientists with present-day knowledge. Short programs can encode open problems like the Goldbach conjecture, the Collatz conjecture, or the Riemann Hypothesis. For example, does this program (written in Python, but translate it into your computational system of choice) halt?
def isprime(n): return n > 1 and all(n % d for d in range(2, n))
def goldbach(n): return any(isprime(p) and isprime(n - p) for p in range(2, n))  # is n a sum of two primes?
n = 4
while goldbach(n): n += 2  # halts iff Goldbach's conjecture is false
(See also the recent https://www.quantamagazine.org/amateur-mathematicians-find-f... which describes how hard it was to prove things even about puny 5-state Turing machines, and that already with just 6 states there is one "antihydra" that encodes Collatz-like behaviour and will probably be hard to prove anything about.)
Absolutely. And frankly I think that’s very exciting. If I had to lay a wager, I reckon that if those problems are solved by a human intelligence, the most likely method will be taking advantage of that correspondence and proving that some tractable representation of the appropriate program halts.
Like many people I had a casual go at it for Collatz. It was a humbling experience of course, but it did nothing to convince me some more capable person can’t eventually manage it.
If you fix a particular axiom system for deriving your termination proofs (e.g. because you write them all down in Coq), then for Gödelian reasons, there are programs that don't terminate but for which a nontermination proof doesn't exist (and thus can never be found). If Coq is consistent, then the program that enumerates all possible Coq proofs and stops as soon as it finds a contradiction, is one such program.
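A sketch of that program (hypothetical: `is_proof_of_false` stands in for a real Coq proof checker, which this snippet does not implement; the `give_up_after` bound exists only so the sketch can actually be run):

```python
import itertools
import string

def is_proof_of_false(candidate: str) -> bool:
    # Hypothetical stand-in for "candidate is a valid Coq proof of False".
    # If Coq is consistent this is False for every candidate, so the search
    # below never halts -- and Coq itself cannot prove that it doesn't.
    return False

def search_for_contradiction(give_up_after=None):
    # Enumerate every finite string in length order and check each one.
    checked = 0
    for n in itertools.count(1):
        for cand in map("".join, itertools.product(string.printable, repeat=n)):
            if is_proof_of_false(cand):
                return cand  # inconsistency found: halt
            checked += 1
            if give_up_after is not None and checked >= give_up_after:
                return None  # demo-only bound, not part of the argument
```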
> I can see that you don't so I'll try to be clearer. For any given program one might choose there is no reason in principle why a competent computing scientist can't perform a semantic analysis for every statement and then deduce from that whether it is totally or partially correct.
'Correct'? In what sense? 'Correctness' and halting have little to do with each other. For example:
def f(p):
    p()
    return 2
is 'correct' if the definition of f is to evaluate to two. But it's not clear it will ever halt (that depends on whether p halts).
But that being said, actually no, there are many reasons in principle why a competent computer scientist would not be able to perform a semantic analysis for every statement and deduce from that whether the program will halt.
> Obviously most programs in the set of all programs are too long for a computing scientist to read in a human lifetime, but that's another issue entirely.
Irrelevant because even an immortal computer scientist would be unable to determine if the program halts.
> This points to (but doesn't prove) the possibility that the human computing scientist isn't using a decision procedure to analyze (or construct) the program.
The difference between a computer and a person (and I'm going to ignore all the various philosophies of consciousness) is that a human will 'arbitrarily' determine under what system he/she wants to operate, and thus can change axioms on the fly. Of course, one could write a computer system that does this as well. It's really not obvious what you're saying here.
> About here is where I expect to see some misinterpretation of Church-Turing as somehow proving that everything, including computing scientists, is a Turing machine brought up. Oddly, rarely is the far more interesting Curry-Howard correspondence mentioned.
Computer scientists may or may not be Turing machines, but any deterministic procedure they follow can be encoded as a Turing machine, so your distinction is irrelevant.
I'm not sure what Curry-Howard really has to do with this, but in general, for a theorem checker, you should make sure your proofs are total.
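For what it's worth, that's exactly what proof assistants enforce. A sketch in Lean (standing in for Coq here; illustrative only): structural recursion is accepted because termination is evident from the shrinking argument, while an unbounded recursion is rejected unless you supply a termination proof.

```lean
-- Accepted: structural recursion on the list, so termination is evident.
def len : List Nat → Nat
  | []      => 0
  | _ :: xs => 1 + len xs

-- A Collatz-style recursion, by contrast, is rejected without an explicit
-- termination argument: a looping "proof" would let you prove anything.
```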
> 'Correct'? In what sense? 'Correctness' and halting have little to do with each other.
Confident ignorance isn’t a good look. To help you out, here you go from the wiki[1]:
"Within the latter notion, partial correctness, requiring that if an answer is returned it will be correct, is distinguished from total correctness, which additionally requires that an answer is eventually returned, i.e. the algorithm terminates."
To be fair to your time and mine, I stopped reading your reply there, on the principle that nonsense follows nonsense, so you needn’t elaborate further.
This is a non-productive comment. You said 'correctness', not 'total correctness'. Totality absolutely relates to halting. 'correctness' itself doesn't imply totality.
But regardless, it's pretty clear from your ramblings that you're not well-versed in this stuff. As for my part, I implement theorem provers and programming languages, so what do I know
But do we need general ways to show that any program halts? We don't write general programs.
In particular, our programs have a limited size and use a limited amount of memory (or else the OS will make sure they will halt..). And for this specific class of programs, the halting problem is actually decidable!
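A sketch of why (illustrative Python; the toy machine model is mine): with finitely many states, a run either halts or eventually revisits a state, so recording visited states decides halting.

```python
def halts(step, state):
    """Decide halting for a machine with finitely many states.
    `step` maps a state to the next state, or to None to halt."""
    seen = set()
    while state is not None:
        if state in seen:
            return False  # a state repeated: the run cycles forever
        seen.add(state)
        state = step(state)
    return True

# Toy 3-bit machine that counts up and halts at 5.
halting = lambda s: None if s == 5 else (s + 1) % 8
# Toy machine that cycles 1 -> 3 -> 5 -> 7 -> 1 forever.
looping = lambda s: (s + 2) % 8
```

Of course, "finitely many states" for a real program means every bit of RAM and disk, so `seen` is astronomically large; this is decidable in principle, not in practice.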
Theoretical impossibility results like this one are often interesting because they tell you there is no "smart" way even in practice beyond just brute forcing it.
While it's true that the class of programs that terminate in time at most T is decidable for trivial reasons (just run the program for at most time T), that trivial decider is of course not going to run in time T itself (but at least T+1). So if you set T = time until the heat death of the universe, you haven't gained any practical ability to solve the halting problem either.
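The trivial decider in code (illustrative; I'm modelling programs as Python generators that yield once per step):

```python
def halts_within(program, budget):
    # Decide "halts within `budget` steps" by simply running the program.
    # Decidable, but the decider itself spends up to `budget` steps doing it.
    run = program()
    for _ in range(budget):
        try:
            next(run)
        except StopIteration:
            return True  # finished within the budget
    return False  # still running; says nothing about eventual halting

def ten_steps():
    # A program that takes ten steps and then stops.
    for i in range(10):
        yield i
```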