I would find it useful myself; I cannot speak for others.
As a contract similar to no_panic, I would find it more useful than Always Diverges, Never Returns, or Might Return.
Beyond its use as a contract, it has other interesting properties. Mainly, when types are read as propositions, it would admit implementations only of true propositions. In this way it equates programming with theorem proving: a program would be a proof, as in Coq.
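For instance (a minimal Lean sketch; the name and_swap is illustrative), a well-typed program is literally a proof of its type read as a proposition:

```lean
-- Curry–Howard in one line: this function *is* a proof of the
-- proposition "P and Q implies Q and P".
def and_swap {P Q : Prop} : P ∧ Q → Q ∧ P :=
  fun ⟨p, q⟩ => ⟨q, p⟩
```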
We could try, then, to exploit the existence of proofs as objects in the language. This would make the contracts of some structures explicit. I think this would only be genuinely useful if we additionally had dependent types. Then it would be possible to write code like this:
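Something along these lines, sketched in Lean since Rust cannot express it today (hypothetical; BoundedList is an illustrative name, while bound, proof, LESS_THAN, and COMMUTES are the names referred to below):

```lean
-- A structure that carries a proof object alongside its data. Merely
-- possessing a value of this type guarantees the invariant holds.
structure BoundedList where
  data  : List Nat
  bound : Nat
  -- LESS_THAN: every element is at most `bound`.
  proof : ∀ x, x ∈ data → x ≤ bound

-- COMMUTES: commutativity packaged as a reusable proof value, which
-- other code can apply to build proofs of derived facts.
def COMMUTES : ∀ a b : Nat, a + b = b + a :=
  Nat.add_comm
```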
LESS_THAN and COMMUTES would include an always_returns in their definition. The mere presence of bound and proof would already give a guarantee about the integrity of the data. One would only operate on them to build proofs of derived facts.
Would it help someone to have proofs embedded in their data structures? Though the difficulty of building proofs would scare many people away.
Thus, of the properties one could want to check in a function, always_returns is for me the one with the deepest meaning.
Possibly it is senseless to implement all of this in Rust, and there are other languages that already implement these ideas. But I would never call it pretty meaningless.
What keeps me from lying and annotating a function as 'Always Returns'? How would the compiler know (for sure) that I was right (halting problem)? Always Diverges, Might Return, and Never Returns, on the other hand, are solvable by the compiler, so it can verify I'm not lying.
I guess if you could only annotate a function with Always Returns where the compiler can verify that it will actually return (at least at some point), it might be useful, but it seems unlikely. For example: if I had a function that generated a random u128 and then went into a loop and kept generating random u128s until there was a match and then broke out of the loop, that function would technically 'always return'. It might be the heat death of the universe before it does, though. Could the compiler 'know' that it could return at some point? Maybe. Would that be useful? Unlikely.
What keeps me from lying and annotating a function as 'Always Returns'?
A termination checker.
How would the compiler know (for sure) that I was right (halting problem)?
A termination checker, ideally, would be both sound (it never allows a non-terminating function) and complete (it never forbids a terminating function). This is impossible, however; it is, in fact, precisely the statement that the complexity classes R and RE are unequal.
However, you can choose which to give up. Modern termination checkers are sufficiently powerful that they will permit pretty much any deterministically-terminating function you care to come up with. They are sound, and they are close enough to completeness that it is not much of an issue in practice.
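For instance, a structural-recursion checker in the style of Coq, Agda, or Lean accepts the first definition below and rejects the second (a Lean sketch):

```lean
-- Accepted: the argument is structurally smaller at every recursive
-- call, so the termination checker can see the recursion is finite.
def sumTo : Nat → Nat
  | 0     => 0
  | n + 1 => (n + 1) + sumTo n

-- Rejected: nothing decreases, so this is flagged as possibly
-- non-terminating (uncomment to see the error):
-- def spin (n : Nat) : Nat := spin n
```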
For example: if I had a function that generated a random u128 and then went into a loop and kept generating random u128s until there was a match and then broke out of the loop, that function would technically 'always return'.
No, it would not. The probability of returning would approach unity, but it'd never reach it - for example, your RNG might enter a cycle, and never return to that first u128.
"Arbitrarily small possibility of not returning" is still different from "zero possibility of not returning".
Wouldn't marking, for example, LESS_THAN and COMMUTES as 'Pure' accomplish this? Isn't 'PURE' a more easily provable attribute for the compiler?
A pure function depends on no global state (only the inputs)
A pure function only calls other pure functions
Randomness is not pure
I/O is not pure
A pure function provably (by the compiler) returns
So, isn't what we're looking for 'PURE' rather than 'ALWAYS RETURNS', or am I misunderstanding something?
No; these are different concepts. The usual term for 'ALWAYS RETURNS' is 'total' (as opposed to 'partial'), and purity and totality are orthogonal; all four combinations exist (see the sketch after this list).
You can have a pure total function, that accesses only its arguments in a strictly-decreasing way.
You can have an impure total function, that accesses a global in a strictly-decreasing way.
You can have a pure partial function, that accesses only its arguments and in some cases will recurse forever.
You can have an impure partial function, that accesses a global and in some cases will recurse forever.
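A sketch of all four quadrants in Rust (purity and totality judged by inspection, since Rust checks neither; all names are illustrative):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

static COUNTER: AtomicU32 = AtomicU32::new(3);

// Pure and total: reads only its argument, which strictly decreases.
fn sum_to(n: u32) -> u64 {
    if n == 0 { 0 } else { n as u64 + sum_to(n - 1) }
}

// Impure and total: consumes a global in a strictly-decreasing way.
fn drain() -> u32 {
    let v = COUNTER.load(Ordering::SeqCst);
    if v == 0 {
        0
    } else {
        COUNTER.store(v - 1, Ordering::SeqCst);
        1 + drain()
    }
}

// Pure and partial: reads only its argument, but recurses forever on 0.
fn first_nonzero(n: u32) -> u32 {
    if n == 0 { first_nonzero(n) } else { n }
}

// Impure and partial: reads a global; if no other thread ever zeroes
// COUNTER, this never returns.
fn wait_for_zero() {
    while COUNTER.load(Ordering::SeqCst) != 0 {}
}
```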
There are actually two notions of 'total function'. One is based on recursion, and thus must consume a nonzero amount of input per level of recursion. The other is based on corecursion, and thus must produce a nonzero amount of output per level of corecursion.
These constraints, respectively, are described as 'termination' (eventually, it'll run out of input) and 'productivity' (anyone watching it will get some data on a reliable schedule).
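The nearest everyday Rust analogue of a productive corecursive definition is an iterator whose next always yields in bounded time (a loose sketch, not a checked guarantee):

```rust
// An infinite stream of ones: `next` always does a bounded amount of
// work and always produces an element. The stream never "terminates",
// but every consumer reliably receives output: that is productivity.
struct Ones;

impl Iterator for Ones {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        Some(1)
    }
}

// A consumer takes only what it needs from the infinite producer.
fn first_five() -> Vec<u32> {
    Ones.take(5).collect()
}
```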
A notable language that is pure but not total is Haskell.
In this formulation, is 'loops forever' equivalent to 'recurses forever' (given that any loop can be expressed as recursion)?
Yes; totality and partiality are usually expressed in terms of the lambda calculus (so as to have a minimal set of constructs that need handling), hence the focus on recursion.
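For example, the two definitions below compute the same thing; the while loop is just the recursive version with the call turned into a jump:

```rust
// Iterative form: the loop state lives in a mutable local.
fn countdown_loop(mut n: u32) -> u32 {
    while n > 0 {
        n -= 1;
    }
    n
}

// Recursive form: each iteration becomes a (tail) recursive call.
fn countdown_rec(n: u32) -> u32 {
    if n > 0 { countdown_rec(n - 1) } else { n }
}
```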
This seems very wrong to me. Panicking exists because it is often the only safe and efficient way to signal a contract breach in code. Thus you could easily think of it as always being used as an optimization.
One could imagine getting rid of it by always returning Result, but that wastes memory and increases register pressure. (Or you could just trudge on into undefined behavior instead.) In fact, if there were a consistent high-performance way to write correct programs without panicking... it would be fairly revolutionary.
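To make the trade-off concrete (a minimal sketch; OutOfBounds is an illustrative error type):

```rust
#[derive(Debug)]
struct OutOfBounds;

// Panicking on contract breach: callers keep a slim `u32` return type
// and pay nothing on the happy path.
fn get_panicking(v: &[u32], i: usize) -> u32 {
    v[i] // panics if `i` is out of bounds
}

// Returning Result instead: every caller must now carry the wider
// `Result` through its own signature, costing memory and registers.
fn get_checked(v: &[u32], i: usize) -> Result<u32, OutOfBounds> {
    v.get(i).copied().ok_or(OutOfBounds)
}
```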
Thanks for your response, @skysch. I agree with what you said. I'll try to rephrase:
Even though panicking is an efficient and easy-to-use means of contract enforcement, it should not be used as an excuse not to choose the proper contract.
A language with dependent types which also ensures termination can do this. Well, I suppose the jury is still out on high performance: such a language could be highly performant in theory.
Initially, and then, after some amount of venerating sacrifice, these fns would become inapplicable in contexts where the compiler could statically prove a contract breach.
So I'm not sure that it is possible to add these proof obligations after the fact (assuming those functions have been stabilized), because it would cause breakage to existing stable code?
A panicking program is a well-typed program; converting a program which compiles, and panics under certain circumstances at run-time, into a compile-time error should constitute breakage, yes?
Sure; but you have to think about the dependencies of your code, if your code happens to be a library.
All the reverse transitive dependencies of your library will have to be semver-bumped since yours was.