This time, I tried to stick closer to the concrete syntax used in the book.
I'm still using the : and := notations to indicate informal type declarations
and definitions of meta-level named constants. We'll use num_n to denote the
PCF term representing the natural number n. This is written as n with an
overline in the book.
We can implement a tuple of functions f, g, both of type nat -> nat, as a
function p : nat -> nat -> nat such that p(0) behaves as f and p(n) behaves as
g for nonzero n. Then (letting 0 and 1 be shorthand for z and s(z)):
even_odd : nat -> nat -> nat
even_odd := fix eo : nat -> nat -> nat is λ (w : nat) λ (n : nat)
ifz w {z -> ifz n {z -> 1 | s(n') -> eo(1)(n')}
| s(_) -> ifz n {z -> z | s(n') -> eo(z)(n')}}
even : nat -> nat
odd : nat -> nat
even := even_odd(0)
odd := even_odd(1)
This is a pretty direct translation of the following Haskell:
evenOdd =
  let eo = \w n -> if w == 0
                   then (if n == 0 then 1 else eo 1 (n - 1))
                   else (if n == 0 then 0 else eo 0 (n - 1))
  in eo
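For instance, even and odd are just the two partial applications of evenOdd
(a hypothetical GHCi session; even' and odd' are my names):
GHCi> let even' = evenOdd 0; odd' = evenOdd 1
GHCi> (even' 4, odd' 4)
(1,0)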
minimize : (nat -> nat -> nat) -> nat -> nat
minimize := λ (phi : nat -> nat -> nat) λ (n : nat)
(fix min_from : nat -> nat is λ (cur : nat)
ifz phi(cur)(n) {z -> cur | s(_) -> min_from(s(cur))})(z)
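A rough Haskell analogue of minimize, in the same spirit as the translation
above (the names minimize and minFrom are mine; I use Integer for nat, and an
undefined value of φ shows up as nontermination):
minimize :: (Integer -> Integer -> Integer) -> Integer -> Integer
minimize phi n = minFrom 0
  where
    -- search upward from cur for the first argument at which phi yields zero
    minFrom cur
      | phi cur n == 0 = cur
      | otherwise      = minFrom (cur + 1)
For example, minimize (\c n -> abs (c * c - n)) 9 evaluates to 3, while
minimize (\_ _ -> 1) 0 searches upward forever, matching the behavior analyzed
below.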
Let's work with the strict dynamics for PCF, so that any term of type nat is
either nonterminating or num_n for some n; this rules out some edge cases.

Suppose that the partial mathematical function φ : N -> N -> N is definable in
strict PCF by the term phi. Then its minimization function ψ : N -> N is
definable by the term minimize(phi). For all natural numbers n, we have:
minimize(phi)(num_n) ≡
(fix min_from : nat -> nat is λ (cur : nat)
ifz phi(cur)(num_n) {z -> cur | s(_) -> min_from(s(cur))})(z)
Let mf denote this term:
fix min_from : nat -> nat is λ (cur : nat)
ifz phi(cur)(num_n) {z -> cur | s(_) -> min_from(s(cur))}
We need to show that mf(z) ≡ num_m iff φ(m, n) = 0 and φ(m', n) is defined and
nonzero for all m' < m. As a preliminary, we note that for all natural
numbers c:
1. φ(c, n) = 0 implies mf(num_c) |->* num_c;
2. φ(c, n) defined and nonzero implies mf(num_c) |->* mf(num_{c + 1});
3. φ(c, n) undefined implies that mf(num_c) is nonterminating.

Each of these follows easily by simple expansion of mf(num_c) and the
relationship between φ and phi. Note also that each of these |->*s implies an
≡. We now separately consider the case where there exists an m with the
desired property and the case where there does not.
Suppose that there exists such an m. Then since m is the only number that can
satisfy those conditions (it is unique since it is minimal), and since we can
only have mf(z) ≡ num_x for at most one x, it suffices to show that
mf(z) ≡ num_m. We show this by induction; to be precise, we show that for all
d ≤ m, we have mf(num_{m - d}) ≡ num_m, so that our goal follows when d = m
(this is basically doing induction "counting down" instead of "counting up").
In the base case, we need mf(num_m) ≡ num_m. This is immediate from case 1 in
the note above. In the induction step, suppose we have
mf(num_{m - (d - 1)}) ≡ num_m and d ≤ m. Note that φ(m - d, n) is defined and
nonzero (since d > 0 gives m - d < m) and that
num_{m - (d - 1)} = s(num_{m - d}). Then
mf(num_{m - d}) ≡ mf(s(num_{m - d})) ≡ mf(num_{m - (d - 1)}) ≡ num_m.
Finally, we can consider the case where there is no m meeting the necessary
criteria. There are two possibilities: one is that no φ(m, n) = 0, and one is
that there is some minimal m with φ(m, n) = 0 but φ(m', n) is undefined for
some m' < m. Either way, we need to show that there is no m with
mf(z) ≡ num_m. This is equivalent to mf(z) being nonterminating, since
mf(z) ≡ num_m is equivalent (under the strict dynamics) to mf(z) reducing to
num_m, and mf(z) terminates iff it reduces to some numeral. I'll sketch an
outline of the rest of the proof, since this is getting long: if there is an
m, but something is undefined before it, then mf(z) |->* mf(num_{m'}), where
m' is the minimum natural with φ(m', n) undefined, and then we have
nontermination. If there is no m, then we either hit a point where we are
undefined, or we just keep searching upward forever; i.e.,
mf(z) |->* mf(num_x) for all x.
It seems like it would be tricky to rigorously prove this, but I don't think
it's definable. Consider the case where e_1 and e_2 are both nonterminating;
then the only way that e(e_1)(e_2) can be terminating is if it never
eliminates either of its arguments and does not return either. But then it
must be constant, since eliminating the arguments or returning them is the
only way to derive a result from them. Similarly, I don't think there's any
reasonable extension of PCF that would allow this---it seems like it would
give a solution to the halting problem.
ap := λ (f : D) λ (x : D) unfold(f)(x)
k := fold(λ (x : D) fold(λ (y : D) x))
s := fold(λ (f : D) fold(λ (g : D) fold(λ (x : D) ap(ap(f)(x))(ap(g)(x)))))
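For comparison, here is a Haskell sketch of the same combinators, under the
assumption that D is the recursive type rec t is (t -> t) (the newtype below,
with constructor Fold and field unfold, is my rendering of fold/unfold at that
type):
newtype D = Fold { unfold :: D -> D }

ap :: D -> D -> D
ap f x = unfold f x

k :: D
k = Fold (\x -> Fold (\_ -> x))

s :: D
s = Fold (\f -> Fold (\g -> Fold (\x -> ap (ap f x) (ap g x))))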
Suppose that Γ, x : [τ'/t]τ |- e' : τ' and Γ |- e : rec t is τ. Then define
rec(x.e'; e) := (fix r : (rec t is τ) -> τ' is λ (v : rec t is τ)
(λ (x : [τ'/t]τ) e')(map+{t.τ}(v'.r(v'))(unfold(v)))
)(e)
Suppose that Γ, x : τ' |- e' : [τ'/t]τ and Γ |- e : τ'. Then define
gen(x.e'; e) := (fix g : τ' -> (rec t is τ) is λ (st : τ')
fold(map+{t.τ}(st'.g(st'))((λ (x : τ') e')(st)))
)(e)
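Both constructions have a natural Haskell sketch, with fmap playing the role
of map+{t.τ} and ordinary recursion playing the role of fix (the names Rec,
Roll, unroll, recElim, and genIntro are mine; Roll and unroll correspond to
fold and unfold):
newtype Rec f = Roll { unroll :: f (Rec f) }

-- rec(x.e'; e): consume a value of the recursive type
recElim :: Functor f => (f a -> a) -> Rec f -> a
recElim step = r
  where r v = step (fmap r (unroll v))

-- gen(x.e'; e): produce a value of the recursive type from a seed state
genIntro :: Functor f => (a -> f a) -> a -> Rec f
genIntro step = g
  where g st = Roll (fmap g (step st))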
Under the eager interpretation, the dynamics of this rec closely resemble
those of eager M's rec construct, while the dynamics of this gen are not quite
the same---eager M's gen is immediately a value and will only expand
incrementally as it is demanded by unfolds, whereas this gen immediately
expands as far as possible, which may not terminate in some cases that were
fine in M. For example, the ω := gen(x.r·x; ()) given on page 131 is
nonterminating in eager FPC.

Under the lazy interpretation, the dynamics of both this rec and this gen
closely resemble those of lazy M's rec and gen constructs. However, since new
infinite elements of recursive types have been introduced, like the ω above,
there will be cases where usage of rec will not terminate, analogously to
gen's failure in eager FPC.
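To make this concrete with the Haskell sketch above: taking ω to inhabit
rec t is (unit + t), its counterpart is just a lazily built value, which is
why the lazy interpretation tolerates it while the eager one does not:
-- omega is my name; Right plays the role of the injection r·x
omega :: Rec (Either ())
omega = genIntro Right ()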
Assuming that we're representing bool by unit + unit, with l·() true and r·()
false, define:
signal := rec t is bool * t
nor1 : bool -> bool -> bool
nor1 := λ (a : bool) λ (b : bool)
case a {l·_ -> r·() | r·_ -> case b {l·_ -> r·() | r·_ -> l·()}}
nor : signal -> signal -> signal
nor := λ (a : signal) λ (b : signal)
gen(st.(nor1(unfold(st·l)·l)(unfold(st·r)·l),
(unfold(st·l)·r, unfold(st·r)·r));
(a, b))
nor stores a pair of signals as its state, and at each unfolding step, it nors
their heads and stores their tails as the next state. This all only works
under lazy FPC; in eager FPC, the type signal is inhabited only by
nonterminating computations.
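For the Haskell version of the latch below, the corresponding definitions
(mine; a Signal is just an infinite list of booleans, and nor is taken
pointwise) would be:
type Signal = [Bool]

nor1 :: Bool -> Bool -> Bool
nor1 a b = not (a || b)

-- nor two signals tick by tick
nor :: Signal -> Signal -> Signal
nor = zipWith nor1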
Here is a latch:
latch : signal -> signal -> signal * signal
latch := λ (r : signal) λ (s : signal) fix outs : signal * signal is
  (fold((r·(), nor(r)(outs·r))), fold((l·(), nor(s)(outs·l))))
This corresponds to the following somewhat-more-readable Haskell:
latch :: Signal -> Signal -> (Signal, Signal)
latch r s = let outs = (False : nor r (snd outs), True : nor s (fst outs))
            in outs
The conses are to introduce an initial state in the latch, meaning that changes in output will be slightly delayed. Notably, a set or reset signal must last at least two "ticks" in order for the latch to reach a stable state; if a signal is provided for only one tick, the latch will remain in an unstable alternating state. Examples:
GHCi> let preview (q, q') = (take 10 q, take 10 q')
GHCi> preview (latch (repeat False) (False:True:True:repeat False))
([False, False, False, True, True, True, True, True, True, True],
[True, True, False, False, False, False, False, False, False, False])
GHCi> preview (latch (repeat False) (False:True:False:repeat False))
([False, False, False, True, False, True, False, True, False, True],
[True, True, False, True, False, True, False, True, False, True])