This chapter corresponds to pages 13 to 21 of the source text.

# Framework of Inductive Definitions

This chapter develops the **basic framework of inductive definitions**.

An **inductive definition** consists of **a set of rules** for deriving **judgments**.

**Judgments** are statements about one or more abt’s of some sort.

The rules specify necessary and sufficient conditions for **the validity of a judgment**, hence fully determine its meaning.

# Judgments

Examples:

| Notation | Explanation |
| --- | --- |
| $n$ `nat` | $n$ is a natural number |
| $n_1 + n_2 = n$ | $n$ is the sum of $n_1$ and $n_2$ |
| $\tau$ `type` | $\tau$ is a type |
| $e : \tau$ | expression $e$ has type $\tau$ |
| $e \Downarrow v$ | expression $e$ has value $v$ |

## Judgment Form

A **judgment** states that one or more abt’s have a **property** or **stand in some relation** to one another.

This kind of **property** or **relation** is called a **judgment form**.

## Instance of Judgment Form

The **judgment** that an object or objects have the property or stand in the relation is said to be an **instance of that judgment form**.

## Predicate and Subjects

A **judgment form** is also called a **predicate**, and the **objects** constituting an **instance** are its **subjects**.

## Some Notations

We write $a\space\mathrm{J}$ or $\mathrm J \space a$ for the **judgment** asserting $\mathrm J$ holds for abt $a$.

The **judgment form** $\mathrm J$ is sometimes written $-\space\mathrm{J}$ or $\mathrm{J}\space-$ to indicate that the **argument** is absent.

If it is not important to **stress the subject** of the judgment, we write $J$ to stand for an **unspecified judgment**, i.e., **an instance of some judgment form**.

We freely use prefix, infix, and mixfix notation to enhance readability.

# Inference Rules

## Inductive Definition

An **inductive definition** of a **judgment form** consists of a **collection of rules** of form:

$$\frac {J_1 \space\dots\space J_k}J$$

in which $J, J_1, \dots, J_k$ are all judgments of the form being defined.

## Premises and Conclusion

The judgments above the horizontal line are **premises** of the rule.

The judgment below the horizontal line is the **conclusion**.

## Axiom and Proper Rule

If a rule has **no premises**, it is an **axiom**; otherwise it is a **proper rule**.

## Sufficient Relationship

An **inference rule** can be read as "the **premises** are **sufficient** for the **conclusion**": to show $J$, it is enough to show $J_i$ for all $1 \le i \le k$.

If $k = 0$, the rule is an **axiom**: a rule stating that its conclusion holds **unconditionally**.

There may be many rules with the same conclusion, each specifying its own sufficient conditions for that conclusion.

Hence, if the conclusion holds, it is not necessary that the premises of any particular rule hold.

## Rule Scheme

A finite number of **patterns** or **rule schemes** can specify an **infinite family of rules** by making use of a notational convention.

Example: $$\frac{a \space\text{nat}}{\text{succ} (a) \space \text{nat}}$$ is a rule scheme that determines one rule, called an **instance** of the rule scheme, for each choice of object $a$ in the rule.

We rely on **context** to determine whether a rule is stated:

- for a specific object, or
- as a rule scheme specifying a rule for each choice of objects in the rule.
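As an illustration (a hypothetical Python encoding, not from the source), the rule scheme for `nat` can be read as a recursive derivability check, with one branch per rule:

```python
# Natural numbers as nested tuples: zero = ("zero",), succ(a) = ("succ", a).
# Hypothetical encoding of the judgment `a nat`, defined by the axiom
# `zero nat` and the rule scheme (a nat) / (succ(a) nat).

def is_nat(a):
    """Return True iff the judgment `a nat` is derivable."""
    if a == ("zero",):
        return True          # axiom: zero nat
    if a[0] == "succ":
        return is_nat(a[1])  # one instance of the scheme per choice of a
    return False

two = ("succ", ("succ", ("zero",)))
print(is_nat(two))  # True
```

Each call selects the unique rule whose conclusion matches the object, so the recursion mirrors a backward search for a derivation.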

## “Closed Under” and “Strongest”

A **collection of rules** is considered to define the **strongest** judgment form that is **closed under** or **respects** the rules.

### “Closed Under”

To be **closed under** the rules means the rules are **sufficient** to show the **validity of a judgment**:

$J$ holds if there’s a way to obtain it with given rules.

### “Strongest”

To be the **strongest** judgment form **closed under** the rules means that the rules are **also necessary**:

$J$ holds **only if** there’s a way to obtain it with given rules.

### Sufficiency and Necessity

**Sufficiency** means we may show that $J$ holds by **deriving** it, composing rules into a derivation.

**Necessity** means we may reason about it with **rule induction**.

# Derivation

A **derivation** of a judgment is a **finite composition of rules**, starting with **axioms** and ending with that **judgment**.

We sometimes say a derivation of $J$ is evidence for the **validity of an inductively defined judgment** $J$.

## As a Tree

We depict **derivations** as trees with the **conclusion** at the bottom; the children of a node correspond to the **premises** of the rule applied there, with their derivations appearing above as **evidence** for those premises.

If $$\frac {J_1 \space\dots\space J_k}J$$ is an inference rule and $\bigtriangledown_1,\dots,\bigtriangledown_k$ are derivations of its premises, then $$\frac {\bigtriangledown_1\space\dots\space\bigtriangledown_k}J$$ is a derivation of its conclusion.
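The tree picture can be sketched in code (hypothetical encoding, not from the source): a derivation pairs the conclusion of the rule applied at a node with the subderivations $\bigtriangledown_1,\dots,\bigtriangledown_k$ of its premises:

```python
# Derivations of `a nat` as nested pairs (conclusion, premise_derivations),
# with zero = ("zero",) and succ(a) = ("succ", a).

def derive_nat(a):
    """Return a derivation tree of `a nat`, or None if none exists."""
    if a == ("zero",):
        return ("zero nat", [])           # axiom: no premises, a leaf
    if a[0] == "succ":
        sub = derive_nat(a[1])            # evidence for the premise
        if sub is not None:
            return ("succ(_) nat", [sub])
    return None

d = derive_nat(("succ", ("succ", ("zero",))))
# d stacks two rule applications above the axiom, conclusion at the bottom.
```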

## Find Derivations

Two methods to find derivations:

- **forward chaining**, or **bottom-up construction**
- **backward chaining**, or **top-down construction**

Check Page 16 for these ideas.
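The two strategies can be sketched on a small example (hypothetical, not from the source): the judgment "$n$ even" with the axiom $\mathrm{zero}$ even and the rule that $n$ even entails $\mathrm{succ}(\mathrm{succ}(n))$ even:

```python
# Backward chaining (top-down): decompose the goal toward axioms.
def backward(goal):
    if goal == 0:
        return True                # the axiom proves the goal
    if goal >= 2:
        return backward(goal - 2)  # reduce the goal to the rule's premise
    return False

# Forward chaining (bottom-up): derive judgments from axioms until the
# goal appears or no new judgments (below a bound) can be derived.
def forward(goal, bound=100):
    derived = {0}                  # start from the axiom's conclusion
    while goal not in derived:
        new = {n + 2 for n in derived if n + 2 <= bound} - derived
        if not new:
            return False           # closure reached; goal underivable
        derived |= new
    return True

print(backward(6), forward(6))  # True True
print(backward(7), forward(7))  # False False
```

Backward chaining is goal-directed; forward chaining blindly closes the derivable set under the rules, which is why it needs a bound here.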

# Rule Induction

The principle of **rule induction** states: to show that a **property** $a\space P$ holds whenever $a\space\mathrm{J}$ is derivable, it is enough to show that $P$ is **closed under** the rules defining the judgment form $\mathrm{J}$.

The property $P$ **respects** the rule $$\frac{a_1\space\mathrm{J}\space\dots\space a_k\space\mathrm{J}}{a\space \mathrm{J}}$$ if $P(a)$ holds whenever $P(a_1), \dots, P(a_k)$ all hold.

The assumptions $P(a_1), \dots, P(a_k)$ are called the **inductive hypotheses**, and $P(a)$ is called the **inductive conclusion** of the inference.

The principle of **rule induction** is simply the expression of the definition of an inductively defined judgment form as **the strongest judgment form closed under the rules comprising the definition**.

So the **judgment form** defined by a set of rules satisfies exactly two requirements:

- it is closed under those rules, and
- it is sufficient for any other property also closed under those rules.
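As a sketch (hypothetical example, not from the source), rule induction for the "$n$ even" judgment and the property $P(n) = (n \bmod 2 = 0)$ amounts to checking $P$ at the axiom and across the rule; the code below checks the inductive step only on samples, where a real proof argues for all $n$:

```python
# Property to establish for every derivable "n even".
def P(n):
    return n % 2 == 0

# Axiom `zero even`: the base case of the induction.
base_ok = P(0)

# Rule (n even) / (succ(succ(n)) even): the inductive step. Checked here on
# sample inductive hypotheses; a proof argues P(n) implies P(n + 2) for all n.
step_ok = all(P(n + 2) for n in range(100) if P(n))

print(base_ok and step_ok)  # True
```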

# Iterated Inductive Definitions

**Inductive definitions** are often **iterated**, meaning that one **inductive definition** builds on top of another.

Example: in an **iterated inductive definition** $$\frac {J_1 \space\dots\space J_k}J$$ the **premises of a rule** may be **instances** of:

- a previously defined judgment form, or
- the judgment form being defined.

# Simultaneous Inductive Definitions

Frequently **two or more judgments** are defined **at once** by a **simultaneous inductive definition**.

A **simultaneous inductive definition** consists of a set of rules **deriving instances of several different judgment forms**, any of which may appear as a **premise** of any rule.

None of the **judgment forms** are being defined **prior to the others**.

# Defining Functions by Rules

A common use of **inductive definitions** is to **define a function** by giving an inductive definition of its "graph" relating inputs and outputs, and then showing that the relation **uniquely** determines the outputs for given inputs.

# Exercises

## 2.1

$$\frac{}{\mathrm{max}(q; \mathrm{zero}; q)}$$

$$\frac{}{\mathrm{max}(\mathrm{zero}; \mathrm{succ}(p); \mathrm{succ}(p))}$$

$$\frac{p\space \mathrm{nat}\quad q\space \mathrm{nat}\quad \mathrm{max}(p; q; r)}{\mathrm{max}(\mathrm{succ}(p); \mathrm{succ}(q); \mathrm{succ}(r))}$$

To prove that $\mathrm{max}(m;n;p)$ relates every pair $m, n$ to a unique $p$, we must prove **existence** and **uniqueness**.

**Existence** (let $P(m;n)$ assert that some $p$ with $\mathrm{max}(m;n;p)$ exists):

- By the first and second rules, $P(q;\mathrm{zero})$ and $P(\mathrm{zero};\mathrm{succ}(p))$ hold.
- By the final rule, if $P(p;q)$ holds, then $P(\mathrm{succ}(p);\mathrm{succ}(q))$ holds.
- Thus the **existence** part is proved.

**Uniqueness**:

- We prove that if $\mathrm{max}(m;n;p)$ and $\mathrm{max}(m;n;q)$, then $p$ is $q$.
- Each pair of arguments matches the conclusion of exactly one rule, and the third rule's output is unique whenever its premise's output is; so by rule induction, **uniqueness** is proved.

The proof establishes that the three-place relation defines a total function of its first two arguments.
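The rules can be transcribed as a recursive function (a sketch, assuming ordinary integers stand in for the numerals; each clause mirrors exactly one rule), which makes the totality claim concrete:

```python
def max_rel(m, n):
    """Return the unique p with max(m; n; p) derivable."""
    if n == 0:
        return m                      # rule: max(q; zero; q)
    if m == 0:
        return n                      # rule: max(zero; succ(p); succ(p))
    return max_rel(m - 1, n - 1) + 1  # rule: max(succ(p); succ(q); succ(r))

print(max_rel(3, 5))  # 5
```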

## 2.2

Using the 2.1 solution, we show that every tree is mapped to a unique $n$ `nat`.

Then by rule induction:

- If $t$ is $\mathrm{empty}$, then $n$ `nat`, since $n$ is $\mathrm{zero}$.
- If $t$ is $\mathrm{node}(t_1; t_2)$, then $n_1$ `nat` and $n_2$ `nat` by induction, and so $n$ `nat` by 2.1.
- Thus it is proved that `hgt` is a function from trees to `nat`.

## 2.3

$$\frac{}{\text{nil}\space\mathrm{forest}}$$

$$\frac{f \space\mathrm{forest}}{\text{node}(f)\space\mathrm{tree}}$$

$$\frac{t\space\mathrm{tree}\space\space f\space\mathrm{forest}}{\mathrm{cons}(t;f)\space\mathrm{forest}}$$
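A sketch of this simultaneous definition (assumed tuple encoding, not from the source): the two judgment forms become mutually recursive checks, each calling the other:

```python
def is_tree(t):
    # rule: node(f) tree, given f forest
    return t[0] == "node" and is_forest(t[1])

def is_forest(f):
    if f == ("nil",):
        return True                  # axiom: nil forest
    # rule: cons(t; f) forest, given t tree and f forest
    return f[0] == "cons" and is_tree(f[1]) and is_forest(f[2])

leaf = ("node", ("nil",))
print(is_forest(("cons", leaf, ("nil",))))  # True
```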

## 2.4

$$\frac{}{\mathrm{hgtF}(\mathrm{nil};\mathrm{zero})}$$

$$\frac{\mathrm{hgtF}(f; h)}{\mathrm{hgtT}(\mathrm{node}(f);\mathrm{succ}(h))}$$

$$\frac{\mathrm{hgtT}(t;h_1)\quad\mathrm{hgtF}(f;h_2)\quad\mathrm{max}(h_1;h_2;h)}{\mathrm{hgtF}(\mathrm{cons}(t;f);h)}$$
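These judgments can be sketched as mutually recursive height functions (assumed tuple encoding; Python's built-in `max` stands in for the $\mathrm{max}$ judgment of 2.1):

```python
def hgt_t(t):
    # hgtT(node(f); succ(h)) from hgtF(f; h)
    return 1 + hgt_f(t[1])

def hgt_f(f):
    if f == ("nil",):
        return 0                          # hgtF(nil; zero)
    # hgtF(cons(t; f); h) via max(h1; h2; h)
    return max(hgt_t(f[1]), hgt_f(f[2]))

leaf = ("node", ("nil",))
print(hgt_f(("cons", leaf, ("nil",))))  # 1
```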

## 2.5

$$\frac{}{\mathrm{zero}\space\mathrm{bin}}$$

$$\frac{n\space\mathrm{bin}}{\mathrm{twice}(n)\space\mathrm{bin}}$$

$$\frac{n\space\mathrm{bin}}{\mathrm{twiceplusone}(n)\space\mathrm{bin}}$$
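A sketch of these numerals (assumed tuple encoding, with a hypothetical helper `to_int` added here for interpretation):

```python
def to_int(b):
    """Interpret a bin numeral as an ordinary integer."""
    if b == ("zero",):
        return 0
    half = to_int(b[1])
    return 2 * half + (1 if b[0] == "twiceplusone" else 0)

# 6 = twice(twiceplusone(twiceplusone(zero))) = 2 * (2 * (2*0 + 1) + 1)
six = ("twice", ("twiceplusone", ("twiceplusone", ("zero",))))
print(to_int(six))  # 6
```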

## 2.6

$$\frac{}{\mathrm{sum}(\mathrm{zero};\mathrm{zero};\mathrm{zero})}$$

$$\frac{\mathrm{sum}(m;n;p)}{\mathrm{sum}(\mathrm{twice}(m);\mathrm{twice}(n);\mathrm{twice}(p))}$$

$$\frac{\mathrm{sum}(m;n;p)}{\mathrm{sum}(\mathrm{twiceplusone}(m);\mathrm{twice}(n);\mathrm{twiceplusone}(p))}$$

$$\frac{\mathrm{sum}(m;n;p)}{\mathrm{sum}(\mathrm{twice}(m);\mathrm{twiceplusone}(n);\mathrm{twiceplusone}(p))}$$

$$\frac{\mathrm{sum}(m;n;p)\quad\mathrm{succ}(p; q)}{\mathrm{sum}(\mathrm{twiceplusone}(m);\mathrm{twiceplusone}(n);\mathrm{twice}(q))}$$

With an auxiliary judgment for `succ` in `bin`:

$$\frac{}{\mathrm{succ}(\mathrm{zero}; \mathrm{twiceplusone}(\mathrm{zero}))}$$

$$\frac{}{\mathrm{succ}(\mathrm{twice}(p); \mathrm{twiceplusone}(p))}$$

$$\frac{\mathrm{succ}(p; q)}{\mathrm{succ}(\mathrm{twiceplusone}(p); \mathrm{twice}(q))}$$
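The whole of 2.6 can be sketched as functions (assumed tuple encoding; the explicit zero cases of `sum_bin` and the helpers `of_int`/`to_int` are conveniences added here for checking the rules against ordinary addition):

```python
def succ(p):
    if p == ("zero",):
        return ("twiceplusone", ("zero",))  # succ(0) = 2*0 + 1
    if p[0] == "twice":
        return ("twiceplusone", p[1])       # succ(2p) = 2p + 1
    return ("twice", succ(p[1]))            # succ(2p + 1) = 2(p + 1)

def sum_bin(m, n):
    if m == ("zero",):
        return n                            # convenience zero cases; the
    if n == ("zero",):
        return m                            # rules reach these via twice(zero)
    p = sum_bin(m[1], n[1])
    if m[0] == "twice" and n[0] == "twice":
        return ("twice", p)                 # 2m + 2n = 2(m + n)
    if m[0] == "twice" or n[0] == "twice":
        return ("twiceplusone", p)          # 2m + (2n + 1) = 2(m + n) + 1
    return ("twice", succ(p))               # (2m + 1) + (2n + 1) = 2(m + n + 1)

def of_int(k):
    if k == 0:
        return ("zero",)
    return ("twiceplusone" if k % 2 else "twice", of_int(k // 2))

def to_int(b):
    if b == ("zero",):
        return 0
    return 2 * to_int(b[1]) + (1 if b[0] == "twiceplusone" else 0)

print(to_int(sum_bin(of_int(5), of_int(7))))  # 12
```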