Someone's Intermediate Representation


# From E to T

System T combines function types with the type of natural numbers, and provides a general mechanism called primitive recursion.

Primitive recursion captures the essential inductive character of the natural numbers, and hence may be seen as an intrinsic termination proof for each program in the language.

Consequently we may define only total functions in the language: those that always return a value for each argument.

While this proof may seem like a shield against infinite loops, it is also a weapon: it can be used to show that some programs cannot be written in T.

# Statics and Syntax Table

## Syntax Table

\begin{align} \text{Typ}&&\tau&&::=&&\text{nat}&&\text{nat}&&\text{naturals}\\ &&&&&&\to(\tau_1;\tau_2)&&\tau_1 \to \tau_2&&\text{function}\\ \text{Exp}&&e&&::=&&x&&x&&\text{variables}\\ &&&&&&z&&z&&\text{zero}\\ &&&&&&s(e)&&s(e)&&\text{successor}\\ &&&&&&\text{rec}\lbrace e_0;x.y.e_1\rbrace(e)&&\text{rec } e\lbrace z \hookrightarrow e_0 \shortmid s(x) \text{ with } y \hookrightarrow e_1 \rbrace&&\text{recursion}\\ &&&&&&\lambda\lbrace \tau\rbrace(x.e)&&\lambda(x:\tau)e&&\text{abstraction}\\ &&&&&&\text{ap}(e_1;e_2)&&e_1(e_2)&&\text{application} \end{align}

We write $\overline n$ for the expression $s(\dots s(z))$ in which the successor is applied $n$ times to zero.

The recursor represents the $e$-fold application of the transformation $x.y.e_1$ starting from $e_0$.

• The bound variable $x$ represents the predecessor.
• The bound variable $y$ represents the result of the $x$-fold iteration.
• The with clause in the concrete syntax binds $y$ to the result of the recursive call.

The iterator, $\text{iter}\lbrace e_0;y.e_1 \rbrace (e)$, is an alternative to the recursor in which there is no binding for the predecessor.
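The contrast between the two can be sketched at the meta level in Python, with naturals modeled as ints — a sketch of the intended semantics, not of T itself:

```python
# A meta-level sketch (Python, naturals as ints) of the recursor and
# the iterator; this models the intended semantics, not T itself.

def rec(e0, e1, n):
    """rec{e0; x.y.e1}(n): the body e1 receives both the predecessor x
    and the result y of the recursive call."""
    if n == 0:
        return e0
    return e1(n - 1, rec(e0, e1, n - 1))

def iter_(e0, e1, n):
    """iter{e0; y.e1}(n): the body receives only the result of the
    recursive call; there is no binding for the predecessor."""
    if n == 0:
        return e0
    return e1(iter_(e0, e1, n - 1))

# With rec, predecessor is immediate: use x, ignore y.
pred = lambda n: rec(0, lambda x, y: x, n)
# With iter, the body never sees the predecessor; doubling still works.
double = lambda n: iter_(0, lambda y: y + 2, n)
```

The difference shows up exactly where one would expect: `pred` is trivial with the recursor, while the iterator's body has no direct access to the predecessor.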

## Statics

$$\dfrac{}{\Gamma,x:\tau\vdash x:\tau}\\ \dfrac{}{\Gamma\vdash z:\text{nat}}\\ \dfrac{\Gamma \vdash e:\text{nat}}{\Gamma \vdash s(e):\text{nat}}\\ \dfrac{\Gamma \vdash e:\text{nat}\phantom{“”}\Gamma \vdash e_0:\tau\phantom{“”}\Gamma, x:\text{nat},y:\tau \vdash e_1:\tau}{\Gamma \vdash \text{rec}\lbrace e_0;x.y.e_1\rbrace (e):\tau}\\ \dfrac{\Gamma,x:\tau_1\vdash e:\tau_2}{\Gamma \vdash \lambda\lbrace \tau_1\rbrace(x.e):\to(\tau_1;\tau_2)}\\ \dfrac{\Gamma\vdash e_1:\to(\tau_2;\tau)\phantom{“”}\Gamma\vdash e_2:\tau_2}{\Gamma \vdash \text{ap}(e_1;e_2):\tau}$$
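The rules can be transcribed into a small type checker — a Python sketch assuming a hypothetical tuple encoding of the abstract syntax, with types represented as `"nat"` or `("->", t1, t2)`:

```python
# A checker for the statics (hypothetical tuple encoding of the syntax).

def typeof(e, ctx):
    tag = e[0]
    if tag == "var":                       # Γ, x:τ ⊢ x : τ
        return ctx[e[1]]
    if tag == "z":                         # Γ ⊢ z : nat
        return "nat"
    if tag == "s":                         # Γ ⊢ s(e) : nat
        assert typeof(e[1], ctx) == "nat"
        return "nat"
    if tag == "rec":                       # rec{e0; x.y.e1}(e)
        _, e0, x, y, e1, target = e
        assert typeof(target, ctx) == "nat"
        t = typeof(e0, ctx)
        assert typeof(e1, {**ctx, x: "nat", y: t}) == t
        return t
    if tag == "lam":                       # λ{τ1}(x.e)
        _, t1, x, body = e
        return ("->", t1, typeof(body, {**ctx, x: t1}))
    if tag == "ap":                        # ap(e1; e2)
        t = typeof(e[1], ctx)
        assert t[0] == "->" and typeof(e[2], ctx) == t[1]
        return t[2]
    raise ValueError(tag)

# λ(x:nat) rec x { z ↪ z | s(u) with v ↪ s(s(v)) }, the doubling function
double = ("lam", "nat", "x",
          ("rec", ("z",), "u", "v", ("s", ("s", ("var", "v"))), ("var", "x")))
```

As a sanity check, `typeof(double, {})` yields `("->", "nat", "nat")`, matching the rule for abstraction over the recursor's result type.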

### Theorem

If $\Gamma \vdash e:\tau$ and $\Gamma,x:\tau\vdash e’:\tau’$, then $\Gamma \vdash [e/x]e’:\tau’$.

# Dynamics

The judgment $e \text{ val}$ is inductively defined as
\begin{align} &\dfrac{}{z \text{ val}}\\ &\dfrac{[e \text { val}]}{s(e)\text{ val}}\\ &\dfrac{}{\lambda \lbrace\tau\rbrace (x.e)\text{ val}} \end{align}

The rule for $s(e)$ has a bracketed premise that is included for an eager interpretation of the successor and omitted for a lazy interpretation.

\begin{align} &\left[\dfrac{e \longmapsto e’}{s(e) \longmapsto s(e’)}\right]\\ &\dfrac{e_1\longmapsto e_1’}{\text{ap}(e_1;e_2)\longmapsto \text{ap}(e_1’;e_2)}\\ &\left[\dfrac{e_1 \text{ val}\phantom{“”}e_2\longmapsto e_2’}{\text{ap}(e_1;e_2)\longmapsto \text{ap}(e_1;e_2’)} \right]\\ &\dfrac{[e_2 \text{ val}]}{\text{ap}(\lambda\lbrace \tau\rbrace (x.e);e_2)\longmapsto [e_2/x]e}\\ &\dfrac{e\longmapsto e’}{\text{rec}\lbrace e_0;x.y.e_1\rbrace (e) \longmapsto \text{rec}\lbrace e_0;x.y.e_1\rbrace (e’)}\\ &\dfrac{}{\text{rec}\lbrace e_0;x.y.e_1\rbrace (z) \longmapsto e_0}\\ &\dfrac{s(e) \text{ val}}{\text{rec}\lbrace e_0;x.y.e_1\rbrace(s(e))\longmapsto[e,\text{rec}\lbrace e_0;x.y.e_1\rbrace (e)/x, y]e_1} \end{align}

The bracketed rules and premises are included for an eager successor and call-by-value application, and omitted for a lazy successor and call-by-name application.
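The eager transition rules can be animated at the meta level. The sketch below uses a hypothetical tuple encoding of closed terms — `("z",)`, `("s", e)`, `("lam", t, x, body)`, `("ap", e1, e2)`, `("rec", e0, x, y, e1, e)` — and naive substitution, which is safe here because only closed values are ever substituted:

```python
# A meta-level animation of the eager dynamics on closed terms
# (hypothetical tuple encoding; naive substitution of closed values).

def subst(v, x, e):
    tag = e[0]
    if tag == "var":
        return v if e[1] == x else e
    if tag == "z":
        return e
    if tag == "s":
        return ("s", subst(v, x, e[1]))
    if tag == "lam":
        _, t, y, body = e
        return e if y == x else ("lam", t, y, subst(v, x, body))
    if tag == "ap":
        return ("ap", subst(v, x, e[1]), subst(v, x, e[2]))
    _, e0, px, py, e1, tgt = e                  # rec: x, y bound in e1
    e1n = e1 if x in (px, py) else subst(v, x, e1)
    return ("rec", subst(v, x, e0), px, py, e1n, subst(v, x, tgt))

def is_val(e):
    return e[0] in ("z", "lam") or (e[0] == "s" and is_val(e[1]))

def step(e):
    tag = e[0]
    if tag == "s":                              # eager successor
        return ("s", step(e[1]))
    if tag == "ap":
        f, a = e[1], e[2]
        if not is_val(f):
            return ("ap", step(f), a)
        if not is_val(a):                       # call-by-value
            return ("ap", f, step(a))
        return subst(a, f[2], f[3])             # β-reduction
    _, e0, x, y, e1, tgt = e                    # rec
    if not is_val(tgt):
        return ("rec", e0, x, y, e1, step(tgt))
    if tgt[0] == "z":
        return e0
    prev = tgt[1]                               # tgt = s(prev)
    # [prev, rec{e0;x.y.e1}(prev) / x, y] e1
    return subst(prev, x, subst(("rec", e0, x, y, e1, prev), y, e1))

def evaluate(e):
    while not is_val(e):
        e = step(e)
    return e
```

Each branch of `step` corresponds to one of the rules above, with the bracketed (eager) variants included.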

## Theorem

### Canonical Forms

If $e : \tau$ and $e \text{ val}$, then

• If $\tau = \text{nat}$, then either $e = z$ or $e = s(e’)$ for some $e’$.
• If $\tau = \tau_1 \to \tau_2$, then $e = \lambda(x:\tau_1)e_2$ for some $e_2$.

That is, a closed value of type $\text{nat}$ is a numeral $\overline n$ (either $z$ or a successor), and a closed value of type $\tau_1 \to \tau_2$ must be of the form $\lambda (x:\tau_1)e_2$ for some $e_2$.

### Preservation

If $e :\tau$ and $e \longmapsto e’$, then $e’ :\tau​$.

The proof goes by induction on the derivation of $e \longmapsto e’$, appealing to the preceding substitution theorem from the statics in the $\beta$- and recursor cases; hence preservation holds.

### Progress

If $e : \tau$, then either $e \text{ val}$ or $e \longmapsto e’$ for some $e’$.

If $e \text{ val}$, then by canonical forms $e$ is either a numeral or of the form $\lambda\lbrace \tau\rbrace (x.e)$, and no transition rule applies to it.

If $\neg e\text{ val}$, we must show that there exists an $e’$ with $e\longmapsto e’$. The proof goes by induction on the typing derivation: since $e$ is closed, in every case either some subexpression can take a step (by the inductive hypothesis) or an instruction rule ($\beta$-reduction or a recursor rule) applies. Hence from $\neg e\text{ val}$ and $e : \tau$ we get $\exists e’,\, e \longmapsto e’$.

# Definability

## Definable

A mathematical function $f : \mathbb N \to \mathbb N$ is definable in T $\iff$ $\exists e_f : \text{nat}\to\text{nat}$ such that $\forall n \in \mathbb N,\ e_f(\overline n) \equiv \overline{f(n)} :\text{nat}$; that is, applying $e_f$ to the numeral representing the argument $n \in \mathbb N$ is definitionally equal to the numeral representing $f(n)$.

## Definitional Equality

Definitional equality, written $\Gamma \vdash e \equiv e’ : \tau$, is the strongest congruence containing the following axioms:
$$\dfrac{\Gamma,x:\tau_1 \vdash e_2:\tau_2\phantom{“”}\Gamma \vdash e_1 :\tau_1}{\Gamma \vdash \text{ap}(\lambda\lbrace \tau_1 \rbrace (x.e_2);e_1)\equiv [e_1/x]e_2:\tau_2}\\ \dfrac{\Gamma \vdash e_0:\tau \phantom{“”}\Gamma,x:\text{nat},y:\tau\vdash e_1:\tau}{\Gamma \vdash \text{rec}\lbrace e_0;x.y.e_1\rbrace (z) \equiv e_0:\tau}\\ \dfrac{\Gamma\vdash e:\text{nat}\phantom{“”}\Gamma \vdash e_0:\tau \phantom{“”}\Gamma,x:\text{nat},y:\tau\vdash e_1:\tau}{\Gamma \vdash \text{rec}\lbrace e_0;x.y.e_1\rbrace (s(e)) \equiv [e,\text{rec}\lbrace e_0;x.y.e_1\rbrace (e)/x, y]e_1:\tau}$$

For example, let $e_d = \lambda (x:\text{nat}) \text{ rec }x\lbrace z \hookrightarrow z \shortmid s(u)\text{ with } v\hookrightarrow s(s(v)) \rbrace$; then $e_d(\overline 0) \equiv \overline 0 :\text {nat}$ and, by induction on $n$, $e_d(\overline n) \equiv \overline{2\times n}: \text{nat}$, so $e_d$ defines the doubling function.
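Rendered at the meta level (a Python sketch with naturals as ints and `rec` modeling the recursor), the doubling expression reads:

```python
# e_d = λ(x:nat) rec x { z ↪ z | s(u) with v ↪ s(s(v)) },
# with the recursor modeled as a Python function on ints.

def rec(e0, e1, n):
    return e0 if n == 0 else e1(n - 1, rec(e0, e1, n - 1))

# u is the predecessor (unused here); v is the result of the recursive call
e_d = lambda x: rec(0, lambda u, v: v + 2, x)
```

The induction in the text corresponds directly to unfolding `rec` one successor at a time.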

### Ackermann Function

\begin{align} A(0, n) &=n + 1\\ A(m+1,0) &= A(m, 1)\\ A(m + 1, n+1) &= A(m,A(m+1,n)) \end{align}

A first-order primitive recursive function is a function of type $\text{nat}\to\text{nat}$ defined with the recursor but without using any higher-order functions. The Ackermann function is famously not first-order primitive recursive, but it is higher-order primitive recursive, and hence definable in T.

To show that $A$ is definable in T, the key observation is that $A(m+1,n)$ is obtained by iterating the function $A(m, -)$ $n$ times starting from $A(m, 1)$. We can define a higher-order function with type
$$\text{it}: (\text{nat}\to\text{nat})\to\text{nat}\to\text{nat}\to\text{nat}$$
to be the $\lambda$-abstraction
$$\lambda(f:\text{nat}\to\text{nat}) \lambda (n:\text{nat})\text{ rec }n \lbrace z\hookrightarrow \text{id} \shortmid s(\underline{}) \text{ with } g \hookrightarrow f \circ g \rbrace$$
Then we can see that $\text{it}(f)(\overline n)(\overline m) \equiv f ^{(n)}(\overline m) :\text{nat}$, the $n$-fold composition of $f$ applied to $\overline m$.

In this way we can define an expression $e_a :\text{nat}\to\text{nat}\to\text{nat}$ for the Ackermann function.
$$e_a = \lambda (m:\text{nat})\text{ rec }m \lbrace z\hookrightarrow s \shortmid s(\underline{})\text{ with }f\hookrightarrow \lambda (n:\text{nat})\text{ it}(f)(n)(f(\overline 1)) \rbrace$$
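The construction can be checked at the meta level — a Python sketch with naturals as ints, where `it` and `e_a` mirror the T expressions above and `ackermann` is the direct mathematical definition for comparison:

```python
# Meta-level check of the Ackermann construction (naturals as ints):
# it(f, n, m) is the n-fold composition f^(n)(m), and e_a builds
# A(m, -) by iterating f ↦ λn. it(f, n, f(1)).

def it(f, n, m):
    for _ in range(n):
        m = f(m)
    return m

def e_a(m):
    f = lambda n: n + 1                      # A(0, n) = n + 1
    for _ in range(m):
        # capture the current f in g to avoid late binding in the loop
        f = (lambda g: lambda n: it(g, n, g(1)))(f)
    return f

def ackermann(m, n):                         # the direct definition
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

Both agree on small inputs; for instance $A(2,3) = 2\cdot 3 + 3 = 9$.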

# Undefinability

It is impossible to define an infinite loop in T.

## Theorem

If $e : \tau$, then $\exists v \text{ val}$ such that $e \equiv v :\tau​$.

## Gödel Numbering

A Gödel numbering assigns a unique natural number to each closed expression of T.

By assigning a unique number to each expression, we may manipulate expressions as data values in T so that T is able to compute with its own programs.

The essence of Gödel numbering is captured by a construction on abstract syntax trees: consider a general ast $a$ of the form $o(a_1,\dots,a_k)$, where $o$ is an operator of arity $k$.

The main complication in generating Gödel numbers is to ensure that $\alpha$-equivalent expressions are assigned the same Gödel number.

Enumerate the operators so that each operator has an index $i \in \mathbb N$, and let $m$ be the index of $o$ in this enumeration.

Define the Gödel number $\ulcorner a\urcorner$ of $a$ to be $2^m3^{n_1}5^{n_2}\dots p_{k+1}^{n_k}$, where $p_{k+1}$ is the $(k+1)^{st}$ prime number and $n_1,\dots,n_k$ are the Gödel numbers of $a_1,\dots,a_k$.
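A sketch of the prime-power construction in Python, ignoring the complication of binders and α-equivalence, and assuming a hypothetical enumeration `OPS` of the operators:

```python
# ⌜o(a1,…,ak)⌝ = 2^m · 3^n1 · 5^n2 · … , where m is the operator's
# index and n1,…,nk the Gödel numbers of the children.

def primes():
    """Yield 2, 3, 5, 7, … by trial division (fine at this scale)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

OPS = {"z": 0, "s": 1, "rec": 2, "lam": 3, "ap": 4}  # assumed enumeration

def godel(ast):
    op, *children = ast
    ps = primes()
    result = next(ps) ** OPS[op]             # 2^m
    for child in children:
        result *= next(ps) ** godel(child)   # 3^n1, 5^n2, …
    return result
```

Unique factorization makes the assignment injective: the exponents of 2, 3, 5, … recover the operator and the children.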

### Universal Function

Using this representation, we may define a function $f_{univ} : \mathbb N \to \mathbb N \to \mathbb N$ such that $\forall e : \text{nat}\to\text{nat}$,
$$f_{univ}(\ulcorner e \urcorner)(m) =n \iff e(\overline{m}) \equiv \overline {n}:\text{nat}$$
The determinacy of the dynamics, together with the theorem above, ensures that $f_{univ}$ is a well-defined function.

It is called the universal function of T because it specifies the behavior of any expression $e$ of type $\text{nat}\to\text{nat}​$.

### Diagonalization

A technique called diagonalization is used to prove that there are functions on natural numbers that are not definable in T. We define an auxiliary mathematical function, the diagonal function $\delta :\mathbb N \to \mathbb N$, by $\delta(m) = f_{univ}(m)(m)$.

Then it is easy to see that $\delta(\ulcorner e \urcorner) = n \iff e(\overline{\ulcorner e \urcorner}) \equiv \overline{n} :\text{nat}$. Suppose $f_{univ}$ were defined by an expression $e_{univ}$; then $\delta$ would be defined by $e_{\delta} = \lambda(m:\text{nat})\, e_{univ}(m)(m)$.

By definition $e_{\delta}(\overline{\ulcorner e \urcorner}) \equiv e(\overline{\ulcorner e \urcorner})$ for every $e$. Now define $e_{\Delta}$ as $\lambda(x:\text{nat})\, s(e_{\delta}(x))$; then
$$e_{\Delta}(\overline{\ulcorner e_{\Delta} \urcorner}) \equiv s(e_{\delta}(\overline{\ulcorner e_{\Delta} \urcorner})) \equiv s(e_{\Delta}(\overline{\ulcorner e_{\Delta} \urcorner}))$$
which is impossible: by the theorem above, $e_{\Delta}(\overline{\ulcorner e_{\Delta} \urcorner})$ is definitionally equal to some numeral, and no numeral is definitionally equal to its own successor. Hence $e_{univ}$ cannot exist, and $f_{univ}$ is not definable in T.

We say that a language $\mathcal L$ is universal if it is possible to write an interpreter for $\mathcal L$ in $\mathcal L$ itself. The argument above shows that T is not universal: its own universal function is not definable in it.

# Exercises

## 9.3

Attempt to prove that if $e:\text{nat}$ is closed, then $\exists n, e\longmapsto^\ast\overline{n}$ under the eager dynamics, where does the proof break down?

Consider an application $e_1(e_2)$: here $e_1$ has function type, but the theorem is stated only for expressions of type $\text{nat}$, so the induction provides no hypothesis about $e_1$. Termination of the whole application depends on the behavior of $e_1$, which the induction cannot account for.

## 9.4

In an application $e_1(e_2)$, even if $e_1$ and $e_2$ are both terminating, the application may still fail to terminate.

$e_1$ may evaluate to a $\lambda$-abstraction whose body fails to terminate once the value of $e_2$ is substituted into it. The inductive hypothesis, which speaks only of the termination of $e_1$ and $e_2$ themselves, provides no way to rule out this possibility.

## 9.5

• If $e :\tau_2 \to \tau$ is hereditarily terminating at type $\tau_2 \to \tau$, and $e_2 : \tau_2$ is hereditarily terminating at type $\tau_2$, then $e(e_2)$ is hereditarily terminating at type $\tau$; the stronger hypothesis thus eliminates the problem with application observed in 9.4.
• To prove that an expression of type $\tau_1 \to \tau_2$ is hereditarily terminating, a problem arises because the definition is stated only for closed expressions: for $e = \lambda\lbrace \tau_1\rbrace(x.e_2)$ we must show that $e(e_1) : \tau_2$ is hereditarily terminating at type $\tau_2$ whenever $e_1:\tau_1$ is hereditarily terminating at type $\tau_1$, even though the body $e_2$ itself is open.

## 9.6

• If $e:\text{nat}$, then since $e \longmapsto e’$ and $e’$ terminates, $e$ terminates as well, which is hereditary termination at type $\text{nat}$.
• If $e:\tau_1\to\tau_2$, let $e_1 :\tau_1$ be hereditarily terminating at type $\tau_1$. Since $e \longmapsto e’$, the dynamics gives $e(e_1) \longmapsto e’(e_1)$; because $e’(e_1)$ is hereditarily terminating at type $\tau_2$, closure under head expansion at $\tau_2$ shows that $e(e_1)$ is too, and hence $e$ is hereditarily terminating.
• It suffices to show that $[e_1/x]e_2$ is hereditarily terminating at type $\tau_2$. If $\lambda(x:\tau_1)e_2$ is closed, then $[e_1/x]e_2$ is closed; but we cannot apply the inductive hypothesis to $e_2$ itself, since it is still open, with $x$ free.

## 9.7

The final strengthening makes it sufficient to show that every well-typed open term is open hereditarily terminating. For 9.3 it then follows that a closed term of type $\text{nat}$ is hereditarily terminating, and therefore terminating.

## Extra 1

Statics and Dynamics in T for iterator

$$\dfrac{\Gamma \vdash e : \text{nat}\phantom{“”}\Gamma \vdash e_0:\tau \phantom{“”}\Gamma,x:\tau \vdash e_1:\tau}{\Gamma \vdash \text{iter }e\lbrace z \hookrightarrow e_0 \shortmid s(x) \hookrightarrow e_1 \rbrace :\tau}\\ \dfrac{}{\text{iter }z \lbrace z \hookrightarrow e_0 \shortmid s(x) \hookrightarrow e_1 \rbrace \longmapsto e_0}\\ \dfrac{}{\text{iter }s(e) \lbrace z \hookrightarrow e_0 \shortmid s(x) \hookrightarrow e_1 \rbrace \longmapsto [\text{iter }e \lbrace z \hookrightarrow e_0 \shortmid s(x) \hookrightarrow e_1 \rbrace / x]e_1}\\ \dfrac{e \longmapsto e’}{\text{iter } e \lbrace z \hookrightarrow e_0 \shortmid s(x) \hookrightarrow e_1 \rbrace \longmapsto \text{iter }e’ \lbrace z \hookrightarrow e_0 \shortmid s(x) \hookrightarrow e_1 \rbrace}$$

## Extra 2

Define the predecessor using only the iterator.

Since the iterator provides no binding for the predecessor, a direct case analysis such as $\text{iter }e \lbrace s(z) \hookrightarrow z \shortmid s(x) \hookrightarrow x \rbrace$ is not well-formed. One workaround is to iterate on a pair of naturals, encoding the pair as a function of type $\text{nat}\to\text{nat}$ selected by constant-body iteration:
$$\text{pair}(a;b) \triangleq \lambda(k:\text{nat})\text{ iter }k \lbrace z \hookrightarrow a \shortmid s(x) \hookrightarrow b \rbrace$$
so that $\text{pair}(a;b)(z) \equiv a$ and $\text{pair}(a;b)(\overline 1) \equiv b$. Starting from the pair $(z;z)$ and stepping $(p;c) \mapsto (c;s(c))$, after $n$ iterations the first component is the predecessor of $n$:
$$\text{pred} \triangleq \lambda(n:\text{nat})\left(\text{iter }n \lbrace z \hookrightarrow \text{pair}(z;z) \shortmid s(p) \hookrightarrow \text{pair}(p(\overline 1);s(p(\overline 1))) \rbrace\right)(z)$$
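One workable scheme for predecessor with only the iterator is to iterate a (previous, current) pair, encoding a pair of naturals as a function of type $\text{nat}\to\text{nat}$. A meta-level Python sketch (naturals as ints; `iter_`, `pair`, and the invariant are illustration names, not T syntax):

```python
# Predecessor from the iterator alone: iterate (prev, cur) ↦ (cur, cur+1)
# starting from (0, 0); after n steps the first component is pred(n).

def iter_(e0, e1, n):
    return e0 if n == 0 else e1(iter_(e0, e1, n - 1))

# pair(a, b) = λk. iter k { z ↪ a | s(_) ↪ b }: apply to 0 for the
# first component, to 1 for the second (constant-body iteration).
pair = lambda a, b: lambda k: iter_(a, lambda _prev: b, k)

pred = lambda n: iter_(pair(0, 0),
                       lambda p: pair(p(1), p(1) + 1), n)(0)
```

At $n = 0$ the pair is $(0, 0)$, so `pred(0)` is $0$, matching the convention that the predecessor of zero is zero.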