IndProp: Inductively Defined Propositions

Set Warnings "-notation-overridden,-parsing".
Require Export Logic.

Inductively Defined Propositions

In the Logic chapter, we looked at several ways of writing propositions, including conjunction, disjunction, and quantifiers. In this chapter, we bring a new tool into the mix: inductive definitions.
Recall that we have seen two ways of stating that a number n is even: We can say (1) evenb n = true, or (2) ∃ k, n = double k. Yet another possibility is to say that n is even if we can establish its evenness from the following rules:
  • Rule ev_0: The number 0 is even.
  • Rule ev_SS: If n is even, then S (S n) is even.
To illustrate how this definition of evenness works, let's imagine using it to show that 4 is even. By rule ev_SS, it suffices to show that 2 is even. This, in turn, is again guaranteed by rule ev_SS, as long as we can show that 0 is even. But this last fact follows directly from the ev_0 rule.
We will see many definitions like this one during the rest of the course. For purposes of informal discussions, it is helpful to have a lightweight notation that makes them easy to read and write. Inference rules are one such notation:
                              --------  (ev_0)
                               ev 0

                               ev n
                           --------------  (ev_SS)
                           ev (S (S n))
Each of the textual rules above is reformatted here as an inference rule; the intended reading is that, if the premises above the line all hold, then the conclusion below the line follows. For example, the rule ev_SS says that, if n satisfies ev, then S (S n) also does. If a rule has no premises above the line, then its conclusion holds unconditionally.
We can represent a proof using these rules by combining rule applications into a proof tree. Here's how we might transcribe the above proof that 4 is even:
                             ------  (ev_0)
                              ev 0
                             ------ (ev_SS)
                              ev 2
                             ------ (ev_SS)
                              ev 4
Why call this a "tree" (rather than a "stack", for example)? Because, in general, inference rules can have multiple premises. We will see examples of this below.
Putting all of this together, we can translate the definition of evenness into a formal Coq definition using an Inductive declaration, where each constructor corresponds to an inference rule:
Inductive ev : nat → Prop :=
| ev_0 : ev 0
| ev_SS : ∀ n : nat, ev n → ev (S (S n)).
This definition is different in one crucial respect from previous uses of Inductive: its result is not a Type, but rather a function from nat to Prop — that is, a property of numbers. Note that we've already seen other inductive definitions that result in functions, such as list, whose type is Type → Type. What is new here is that, because the nat argument of ev appears unnamed, to the right of the colon, it is allowed to take different values in the types of different constructors: 0 in the type of ev_0 and S (S n) in the type of ev_SS.
In contrast, the definition of list names the X parameter globally, to the left of the colon, forcing the result of nil and cons to be the same (list X). Had we tried to bring nat to the left in defining ev, we would have seen an error:
Fail Inductive wrong_ev (n : nat) : Prop :=
| wrong_ev_0 : wrong_ev 0
| wrong_ev_SS : ∀ n, wrong_ev n → wrong_ev (S (S n)).
(* ===> Error: A parameter of an inductive type n is not
        allowed to be used as a bound variable in the type
        of its constructor. *)

("Parameter" here is Coq jargon for an argument on the left of the colon in an Inductive definition; "index" is used to refer to arguments on the right of the colon.)
We can think of the definition of ev as defining a Coq property ev : nat → Prop, together with primitive theorems ev_0 : ev 0 and ev_SS : ∀ n, ev n → ev (S (S n)).
Such "constructor theorems" have the same status as proven theorems. In particular, we can use Coq's apply tactic with the rule names to prove ev for particular numbers...
Theorem ev_4 : ev 4.
Proof. apply ev_SS. apply ev_SS. apply ev_0. Qed.
... or we can use function application syntax:
Theorem ev_4' : ev 4.
Proof. apply (ev_SS 2 (ev_SS 0 ev_0)). Qed.
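In fact, since the constructors build ordinary proof terms, we can even package the evidence as a standalone definition (the name ev_4'' is introduced here just as an extra illustration, not part of the main development):
Definition ev_4'' : ev 4 :=
  ev_SS 2 (ev_SS 0 ev_0).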
We can also prove theorems that have hypotheses involving ev.
Theorem ev_plus4 : ∀ n, ev n → ev (4 + n).
Proof.
  intros n. simpl. intros Hn.
  apply ev_SS. apply ev_SS. apply Hn.
Qed.
More generally, we can show that any number multiplied by 2 is even:

Exercise: 1 star (ev_double)

Theorem ev_double : ∀ n,
  ev (double n).
Proof.
  (* FILL IN HERE *) Admitted.

Using Evidence in Proofs

Besides constructing evidence that numbers are even, we can also reason about such evidence.
Introducing ev with an Inductive declaration tells Coq not only that the constructors ev_0 and ev_SS are valid ways to build evidence that some number is even, but also that these two constructors are the only ways to build evidence that numbers are even (in the sense of ev).
In other words, if someone gives us evidence E for the assertion ev n, then we know that E must have one of two shapes:
  • E is ev_0 (and n is O), or
  • E is ev_SS n' E' (and n is S (S n'), where E' is evidence for ev n').
This suggests that it should be possible to analyze a hypothesis of the form ev n much as we do inductively defined data structures; in particular, it should be possible to argue by induction and case analysis on such evidence. Let's look at a few examples to see what this means in practice.

Inversion on Evidence

Suppose we are proving some fact involving a number n, and we are given ev n as a hypothesis. We already know how to perform case analysis on n using the inversion tactic, generating separate subgoals for the case where n = O and the case where n = S n' for some n'. But for some proofs we may instead want to analyze the evidence that ev n directly.
By the definition of ev, there are two cases to consider:
  • If the evidence is of the form ev_0, we know that n = 0.
  • Otherwise, the evidence must have the form ev_SS n' E', where n = S (S n') and E' is evidence for ev n'.
We can perform this kind of reasoning in Coq, again using the inversion tactic. Besides allowing us to reason about equalities involving constructors, inversion provides a case-analysis principle for inductively defined propositions. When used in this way, its syntax is similar to destruct: We pass it a list of identifiers separated by | characters to name the arguments to each of the possible constructors.
Theorem ev_minus2 : ∀ n,
  ev n → ev (pred (pred n)).
Proof.
  intros n E.
  inversion E as [| n' E'].
  - (* E = ev_0 *) simpl. apply ev_0.
  - (* E = ev_SS n' E' *) simpl. apply E'. Qed.
In words, here is how the inversion reasoning works in this proof:
  • If the evidence is of the form ev_0, we know that n = 0. Therefore, it suffices to show that ev (pred (pred 0)) holds. By the definition of pred, this is equivalent to showing that ev 0 holds, which directly follows from ev_0.
  • Otherwise, the evidence must have the form ev_SS n' E', where n = S (S n') and E' is evidence for ev n'. We must then show that ev (pred (pred (S (S n')))) holds, which, after simplification, follows directly from E'.
This particular proof also works if we replace inversion by destruct:
Theorem ev_minus2' : ∀ n,
  ev n → ev (pred (pred n)).
Proof.
  intros n E.
  destruct E as [| n' E'].
  - (* E = ev_0 *) simpl. apply ev_0.
  - (* E = ev_SS n' E' *) simpl. apply E'. Qed.
The difference between the two forms is that inversion is more convenient when used on a hypothesis that consists of an inductive property applied to a complex expression (as opposed to a single variable). Here is a concrete example. Suppose that we wanted to prove the following variation of ev_minus2:
Theorem evSS_ev : ∀ n,
  ev (S (S n)) → ev n.
Intuitively, we know that evidence for the hypothesis cannot consist just of the ev_0 constructor, since O and S are different constructors of the type nat; hence, ev_SS is the only case that applies. Unfortunately, destruct is not smart enough to realize this, and it still generates two subgoals. Even worse, in doing so, it keeps the final goal unchanged, failing to provide any useful information for completing the proof.
Proof.
  intros n E.
  destruct E as [| n' E'].
  - (* E = ev_0. *)
    (* We must prove that n is even from no assumptions! *)
Abort.
What happened, exactly? Calling destruct has the effect of replacing all occurrences of the property argument by the values that correspond to each constructor. This is enough in the case of ev_minus2' because that argument, n, is mentioned directly in the final goal. However, it doesn't help in the case of evSS_ev since the term that gets replaced (S (S n)) is not mentioned anywhere.
The inversion tactic, on the other hand, can detect (1) that the first case does not apply, and (2) that the n' that appears in the ev_SS case must be the same as n. This allows us to complete the proof:
Theorem evSS_ev : ∀ n,
  ev (S (S n)) → ev n.
Proof.
  intros n E.
  inversion E as [| n' E'].
  (* We are in the E = ev_SS n' E' case now. *)
  apply E'.
Qed.
By using inversion, we can also apply the principle of explosion to "obviously contradictory" hypotheses involving inductive properties. For example:
Theorem one_not_even : ¬ ev 1.
Proof.
  intros H. inversion H. Qed.

Exercise: 1 star (SSSSev__even)

Prove the following result using inversion.
Theorem SSSSev__even : ∀ n,
  ev (S (S (S (S n)))) → ev n.
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 1 star (even5_nonsense)

Prove the following result using inversion.
Theorem even5_nonsense :
  ev 5 → 2 + 2 = 9.
Proof.
  (* FILL IN HERE *) Admitted.
The way we've used inversion here may seem a bit mysterious at first. Until now, we've only used inversion on equality propositions, to utilize injectivity of constructors or to discriminate between different constructors. But we see here that inversion can also be applied to analyzing evidence for inductively defined propositions.
Here's how inversion works in general. Suppose the name I refers to an assumption P in the current context, where P has been defined by an Inductive declaration. Then, for each of the constructors of P, inversion I generates a subgoal in which I has been replaced by the exact, specific conditions under which this constructor could have been used to prove P. Some of these subgoals will be self-contradictory; inversion throws these away. The ones that are left represent the cases that must be proved to establish the original goal. For those, inversion adds all equations into the proof context that must hold of the arguments given to P (e.g., S (S n') = n in the proof of evSS_ev).
The ev_double exercise above shows that our new notion of evenness is implied by the two earlier ones (since, by even_bool_prop in chapter Logic, we already know that those are equivalent to each other). To show that all three coincide, we just need the following lemma:
Lemma ev_even_firsttry : ∀ n,
  ev n → ∃ k, n = double k.
Proof.
(* WORKED IN CLASS *)
We could try to proceed by case analysis or induction on n. But since ev is mentioned in a premise, this strategy would probably lead to a dead end, as in the previous section. Thus, it seems better to first try inversion on the evidence for ev. Indeed, the first case can be solved trivially.
  intros n E. inversion E as [| n' E'].
  - (* E = ev_0 *)
     ∃ 0. reflexivity.
  - (* E = ev_SS n' E' *) simpl.
Unfortunately, the second case is harder. We need to show ∃ k, S (S n') = double k, but the only available assumption is E', which states that ev n' holds. Since this isn't directly useful, it seems that we are stuck and that performing case analysis on E was a waste of time.
If we look more closely at our second goal, however, we can see that something interesting happened: By performing case analysis on E, we were able to reduce the original result to a similar one that involves a different piece of evidence for ev: E'. More formally, we can finish our proof by showing that
         ∃ k', n' = double k',
which is the same as the original statement, but with n' instead of n. Indeed, it is not difficult to convince Coq that this intermediate result suffices.
    assert (I : (∃ k', n' = double k') →
                (∃ k, S (S n') = double k)).
    { intros [k' Hk']. rewrite Hk'. ∃ (S k'). reflexivity. }
    apply I. (* reduce the original goal to the new one *)

Admitted.

Induction on Evidence

If this looks familiar, it is no coincidence: We've encountered similar problems in the Induction chapter, when trying to use case analysis to prove results that required induction. And once again the solution is... induction!
The behavior of induction on evidence is the same as its behavior on data: It causes Coq to generate one subgoal for each constructor that could have been used to build that evidence, while providing an induction hypothesis for each recursive occurrence of the property in question.
Let's try our current lemma again:
Lemma ev_even : ∀ n,
  ev n → ∃ k, n = double k.
Proof.
  intros n E.
  induction E as [|n' E' IH].
  - (* E = ev_0 *)
     ∃ 0. reflexivity.
  - (* E = ev_SS n' E'
       with IH : exists k', n' = double k' *)

    destruct IH as [k' Hk'].
    rewrite Hk'. ∃ (S k'). reflexivity.
Qed.
Here, we can see that Coq produced an IH that corresponds to E', the single recursive occurrence of ev in its own definition. Since E' mentions n', the induction hypothesis talks about n', as opposed to n or some other number.
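For example, ev_even now lets us extract a witness for a particular number directly from the constructors (an extra usage check; the name ev_even_example is introduced here for illustration):
Example ev_even_example : ∃ k, 4 = double k.
Proof. apply ev_even. apply ev_SS. apply ev_SS. apply ev_0. Qed.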
The equivalence between the second and third definitions of evenness now follows.
Theorem ev_even_iff : ∀ n,
  ev n ↔ ∃ k, n = double k.
Proof.
  intros n. split.
  - (* -> *) apply ev_even.
  - (* <- *) intros [k Hk]. rewrite Hk. apply ev_double.
Qed.
As we will see in later chapters, induction on evidence is a recurring technique across many areas.
The following exercises provide simple examples of this technique, to help you familiarize yourself with it.

Exercise: 2 stars (ev_sum)

Theorem ev_sum : ∀ n m, ev n → ev m → ev (n + m).
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 4 stars, advanced, optional (ev'_ev)

In general, there may be multiple ways of defining a property inductively. For example, here's a (slightly contrived) alternative definition for ev:
Inductive ev' : nat → Prop :=
| ev'_0 : ev' 0
| ev'_2 : ev' 2
| ev'_sum : ∀ n m, ev' n → ev' m → ev' (n + m).
Prove that this definition is logically equivalent to the old one. (You may want to look at the previous theorem when you get to the induction step.)
Theorem ev'_ev : ∀ n, ev' n ↔ ev n.
Proof.
 (* FILL IN HERE *) Admitted.

Exercise: 3 stars, advanced, recommended (ev_ev__ev)

Finding the appropriate thing to do induction on is a bit tricky here:
Theorem ev_ev__ev : ∀ n m,
  ev (n+m) → ev n → ev m.
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 3 stars (ev_plus_plus)

This exercise just requires applying existing lemmas. No induction or even case analysis is needed, though some of the rewriting may be tedious.
Theorem ev_plus_plus : ∀ n m p,
  ev (n+m) → ev (n+p) → ev (m+p).
Proof.
  (* FILL IN HERE *) Admitted.

Inductive Relations

A proposition parameterized by a number (such as ev) can be thought of as a property — i.e., it defines a subset of nat, namely those numbers for which the proposition is provable. In the same way, a two-argument proposition can be thought of as a relation — i.e., it defines a set of pairs for which the proposition is provable.
Module Playground.
One useful example is the "less than or equal to" relation on numbers.
The following definition should be fairly intuitive. It says that there are two ways to give evidence that one number is less than or equal to another: either observe that they are the same number, or give evidence that the first is less than or equal to the predecessor of the second.
Inductive le : nat → nat → Prop :=
  | le_n : ∀ n, le n n
  | le_S : ∀ n m, (le n m) → (le n (S m)).

Notation "m ≤ n" := (le m n).
Proofs of facts about ≤ using the constructors le_n and le_S follow the same patterns as proofs about properties, like ev above. We can apply the constructors to prove ≤ goals (e.g., to show that 3≤3 or 3≤6), and we can use tactics like inversion to extract information from ≤ hypotheses in the context (e.g., to prove that (2 ≤ 1) → 2+2=5.)
Here are some sanity checks on the definition. (Notice that, although these are the same kind of simple "unit tests" as we gave for the testing functions we wrote in the first few lectures, we must construct their proofs explicitly — simpl and reflexivity don't do the job, because the proofs aren't just a matter of simplifying computations.)
Theorem test_le1 :
  3 ≤ 3.
Proof.
  (* WORKED IN CLASS *)
  apply le_n. Qed.

Theorem test_le2 :
  3 ≤ 6.
Proof.
  (* WORKED IN CLASS *)
  apply le_S. apply le_S. apply le_S. apply le_n. Qed.

Theorem test_le3 :
  (2 ≤ 1) → 2 + 2 = 5.
Proof.
  (* WORKED IN CLASS *)
  intros H. inversion H. inversion H2. Qed.
The "strictly less than" relation n < m can now be defined in terms of le.
End Playground.

Definition lt (n m:nat) := le (S n) m.

Notation "m < n" := (lt m n).
Here are a few more simple relations on numbers:
Inductive square_of : nat → nat → Prop :=
  | sq : ∀ n:nat, square_of n (n * n).

Inductive next_nat : nat → nat → Prop :=
  | nn : ∀ n:nat, next_nat n (S n).

Inductive next_even : nat → nat → Prop :=
  | ne_1 : ∀ n, ev (S n) → next_even n (S n)
  | ne_2 : ∀ n, ev (S (S n)) → next_even n (S (S n)).
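For example (another added sanity check; test_next_even is a new name), the second constructor lets us show that 4 is the next even number after 2:
Example test_next_even : next_even 2 4.
Proof.
  apply ne_2. apply ev_SS. apply ev_SS. apply ev_0. Qed.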

Exercise: 2 stars (total_relation)

Define (in Coq) an inductive binary relation total_relation that holds between every pair of natural numbers.
(* FILL IN HERE *)

Exercise: 2 stars (empty_relation)

Define (in Coq) an inductive binary relation empty_relation (on numbers) that never holds.
(* FILL IN HERE *)

Exercise: 3 stars, optional (le_exercises)

Here are a number of facts about the ≤ and < relations that we are going to need later in the course. The proofs make good practice exercises.
Lemma le_trans : ∀ m n o, m ≤ n → n ≤ o → m ≤ o.
Proof.
  (* FILL IN HERE *) Admitted.

Theorem O_le_n : ∀ n,
  0 ≤ n.
Proof.
  (* FILL IN HERE *) Admitted.

Theorem n_le_m__Sn_le_Sm : ∀ n m,
  n ≤ m → S n ≤ S m.
Proof.
  (* FILL IN HERE *) Admitted.

Theorem Sn_le_Sm__n_le_m : ∀ n m,
  S n ≤ S m → n ≤ m.
Proof.
  (* FILL IN HERE *) Admitted.

Theorem le_plus_l : ∀ a b,
  a ≤ a + b.
Proof.
  (* FILL IN HERE *) Admitted.

Theorem plus_lt : ∀ n1 n2 m,
  n1 + n2 < m →
  n1 < m ∧ n2 < m.
Proof.
 unfold lt.
 (* FILL IN HERE *) Admitted.

Lemma minus_Sn_m: ∀ n m : nat,
    m ≤ n → S (n - m) = S n - m.
Proof.
  (* FILL IN HERE *) Admitted.

Theorem lt_S : ∀ n m,
  n < m →
  n < S m.
Proof.
  (* FILL IN HERE *) Admitted.

Theorem leb_complete : ∀ n m,
  leb n m = true → n ≤ m.
Proof.
  (* FILL IN HERE *) Admitted.
Hint: The next one may be easiest to prove by induction on m.
Theorem leb_correct : ∀ n m,
  n ≤ m →
  leb n m = true.
Proof.
  (* FILL IN HERE *) Admitted.
Hint: This theorem can easily be proved without using induction.
Theorem leb_true_trans : ∀ n m o,
  leb n m = true → leb m o = true → leb n o = true.
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 2 stars, optional (leb_iff)

Theorem leb_iff : ∀ n m,
  leb n m = true ↔ n ≤ m.
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 2 stars, optional (leb_spec)

Lemma leb_spec : ∀ (n m : nat),
  leb n m = true ∨ (leb n m = false ∧ leb m n = true).
Proof.
  (* FILL IN HERE *) Admitted.
Module R.

Exercise: 3 stars, recommended (R_provability)

We can define three-place relations, four-place relations, etc., in just the same way as binary relations. For example, consider the following three-place relation on numbers:
Inductive R : nat → nat → nat → Prop :=
   | c1 : R 0 0 0
   | c2 : ∀ m n o, R m n o → R (S m) n (S o)
   | c3 : ∀ m n o, R m n o → R m (S n) (S o)
   | c4 : ∀ m n o, R (S m) (S n) (S (S o)) → R m n o
   | c5 : ∀ m n o, R m n o → R n m o.
  • Which of the following propositions are provable?
    • R 1 1 2
    • R 2 2 6
  • If we dropped constructor c5 from the definition of R, would the set of provable propositions change? Briefly (1 sentence) explain your answer.
  • If we dropped constructor c4 from the definition of R, would the set of provable propositions change? Briefly (1 sentence) explain your answer.
(* FILL IN HERE *)

Exercise: 3 stars (R_fact)

The relation R above actually encodes a familiar function. Figure out which function; then state and prove this equivalence in Coq.
Definition fR : nat → nat → nat
  (* REPLACE THIS LINE WITH ":= _your_definition_ ." *). Admitted.

Theorem R_equiv_fR : ∀ m n o, R m n o ↔ fR m n = o.
Proof.
(* FILL IN HERE *) Admitted.
End R.

Exercise: 4 stars, advanced (subsequence)

A list is a subsequence of another list if all of the elements in the first list occur in the same order in the second list, possibly with some extra elements in between. For example,
      [1;2;3]
is a subsequence of each of the lists
      [1;2;3]
      [1;1;1;2;2;3]
      [1;2;7;3]
      [5;6;1;9;9;2;7;3;8]
but it is not a subsequence of any of the lists
      [1;2]
      [1;3]
      [5;6;2;1;7;3;8].
  • Define an inductive proposition subseq on list nat that captures what it means to be a subsequence. (Hint: You'll need three cases.)
  • Prove subseq_refl that subsequence is reflexive, that is, any list is a subsequence of itself.
  • Prove subseq_app that for any lists l1, l2, and l3, if l1 is a subsequence of l2, then l1 is also a subsequence of l2 ++ l3.
  • (Optional, harder) Prove subseq_trans that subsequence is transitive — that is, if l1 is a subsequence of l2 and l2 is a subsequence of l3, then l1 is a subsequence of l3. Hint: choose your induction carefully!
(* FILL IN HERE *)

Exercise: 2 stars (R_provability2)

Suppose we give Coq the following definition:
    Inductive R : nat → list nat → Prop :=
      | c1 : R 0 []
      | c2 : ∀ n l, R n l → R (S n) (n :: l)
      | c3 : ∀ n l, R (S n) l → R n l.
Which of the following propositions are provable?
  • R 2 [1;0]
  • R 1 [1;2;1;0]
  • R 6 [3;2;1;0]

Aside: Strong Induction

So far, we've worked with a conventional induction principle on naturals:
Definition weak_induction_principle : Prop :=
   ∀ P : nat → Prop,
    P 0 → (∀ n : nat, P n → P (S n)) → ∀ n : nat, P n.
That is, to prove P n for any n, we need to show that:
  • P 0 holds (the base case), and
  • if P n' holds, then P (S n') holds (the inductive case).
But there are other ways of doing induction on the naturals! The most common alternative is what's called strong induction or course of values induction.
Definition strong_induction_principle : Prop :=
   ∀ P : nat → Prop,
    (∀ n : nat, (∀ m : nat, m < n → P m) → P n) →
    ∀ n : nat, P n.
That is, to prove P n for any n, we:
  • assume that P m holds for all m < n, and then
  • show that P n holds.
This principle of induction is called "strong" induction because we get a stronger inductive hypothesis. In the "weak" induction we've been doing, our IH is that P n', which we use to prove P (S n'). In "strong" induction, to prove P (S n'), our IH is that P m for every m < (S n').
Metaphorically speaking, in weak induction we build a tower of proof just using the floor beneath us: to build the S nth floor, we assume that the nth floor is on solid ground. In strong induction, we build a tower of proof using all of the floors beneath us: to build the nth floor, we can rely on the mth floor for m < n.
Suppose we define a function pow : nat → nat → nat as follows:
  pow(m,0) = 1
  pow(m,n) = if n is even
             then pow(m*m,n/2)
             else m * pow(m,n-1)
We use strong induction to prove that the informal function pow defined above behaves the same as exp from Basics.v.
    Fixpoint exp (base power : nat) : nat :=
      match power with
        | O ⇒ S O
        | S p ⇒ mult base (exp base p)
      end.
We'll assume that exp (m*m) ((S n')/2) is equal to exp m (S n').
  • Theorem: forall m and n, pow(m,n) = exp m n.
Proof: By strong induction on n, leaving m general. Our IH is that, for every base m' and every n' < n, we have pow(m', n') = exp m' n'; we must show pow(m, n) = exp m n. We go by cases on n.
  • If n=0, then we have exp m 0 = 1 and pow(m,0) = 1 by definition.
  • If n = S n', then we have exp m (S n') = m * exp m n'. We go by cases on the parity of n.
    + If n is even, then we have pow(m, n) = pow(m*m, n/2). But by the IH on n/2 < n (applied with base m*m), pow(m*m, n/2) is equal to exp (m*m) (n/2), which is itself equal to exp m n.
    + If n is odd, then we have pow(m, S n') = m * pow(m, n'); by the IH on n' < n, we know that pow(m, n') = exp m n', and we are done.
Qed.
A few things to note about this proof:
  • We get the IH immediately!
  • We manually do a case analysis as the first step of our proof.
  • When n=0, our IH is useless: there is no n' < 0!
  • Whenever we apply the IH, we have to show we're applying it to a smaller number.
Strong induction is at least as strong as weak induction: we can prove the principle of weak induction using the principle of strong induction.
Do NOT use the induction tactic: instead we apply the strong induction principle.

Exercise: 2 stars (strong_induction__weak_induction)

Lemma strong_induction__weak_induction :
  strong_induction_principle → weak_induction_principle.
Proof.
  unfold weak_induction_principle.
  intros Hstrong P Hbase Hind.
  apply Hstrong.
  (* FILL IN HERE *) Admitted.
What may come as a surprise is that the weak induction principle is as strong as the strong induction principle: we can use it to prove the strong induction principle!
Here we'll use the induction tactic in order to apply weak induction. Notice that we actually prove a more general property and then specialize it. Your job is to round out some of the detail.

Exercise: 3 stars, recommended (strong_induction)

Lemma strong_induction :
  strong_induction_principle.
Proof.
  unfold strong_induction_principle.
  intros P IHstrong n.
  assert (∀ k, k ≤ n → P k).
  { induction n as [|n' IHn'].
    (* FILL IN HERE *) Admitted.

Exercise: 3 stars (pow_informal_proof)

Prove the following theorem. You may assume that n/2 < n.
  • Theorem: pow(1,k) = 1 for all k.
Proof: (* FILL IN HERE *)

Exercise: 3 stars (down_informal_proof)

Suppose we define the function down as follows:
    down 0 = 0
    down n = if n is even
             then down(n / 2)
             else down(n - 1)
Prove the following theorem; you may assume that n/2 < n.
  • Theorem: down n = 0 for all n.
Proof: (* FILL IN HERE *)

Case Study: Regular Expressions

The ev property provides a simple example for illustrating inductive definitions and the basic techniques for reasoning about them, but it is not terribly exciting — after all, it is equivalent to the two non-inductive definitions of evenness that we had already seen, and does not seem to offer any concrete benefit over them. To give a better sense of the power of inductive definitions, we now show how to use them to model a classic concept in computer science: regular expressions.
Regular expressions are a simple language for describing strings, defined as follows:
Inductive reg_exp {T : Type} : Type :=
| EmptySet : reg_exp
| EmptyStr : reg_exp
| Char : T → reg_exp
| App : reg_exp → reg_exp → reg_exp
| Union : reg_exp → reg_exp → reg_exp
| Star : reg_exp → reg_exp.
Note that this definition is polymorphic: Regular expressions in reg_exp T describe strings with characters drawn from T — that is, lists of elements of T.
(We depart slightly from standard practice in that we do not require the type T to be finite. This results in a somewhat different theory of regular expressions, but the difference is not significant for our purposes.)
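For instance, here is a small regular expression over nat (the name re_example is introduced here purely for illustration):
Definition re_example : @reg_exp nat :=
  App (Char 1) (Star (Char 2)).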
We connect regular expressions and strings via the following rules, which define when a regular expression matches some string:
  • The expression EmptySet does not match any string.
  • The expression EmptyStr matches the empty string [].
  • The expression Char x matches the one-character string [x].
  • If re1 matches s1, and re2 matches s2, then App re1 re2 matches s1 ++ s2.
  • If at least one of re1 and re2 matches s, then Union re1 re2 matches s.
  • Finally, if we can write some string s as the concatenation of a sequence of strings s = s_1 ++ ... ++ s_k, and the expression re matches each one of the strings s_i, then Star re matches s.
    As a special case, the sequence of strings may be empty, so Star re always matches the empty string [] no matter what re is.
We can easily translate this informal definition into an Inductive one as follows:
Inductive exp_match {T} : list T → reg_exp → Prop :=
| MEmpty : exp_match [] EmptyStr
| MChar : ∀ x, exp_match [x] (Char x)
| MApp : ∀ s1 re1 s2 re2,
           exp_match s1 re1 →
           exp_match s2 re2 →
           exp_match (s1 ++ s2) (App re1 re2)
| MUnionL : ∀ s1 re1 re2,
              exp_match s1 re1 →
              exp_match s1 (Union re1 re2)
| MUnionR : ∀ re1 s2 re2,
              exp_match s2 re2 →
              exp_match s2 (Union re1 re2)
| MStar0 : ∀ re, exp_match [] (Star re)
| MStarApp : ∀ s1 s2 re,
               exp_match s1 re →
               exp_match s2 (Star re) →
               exp_match (s1 ++ s2) (Star re).
Again, for readability, we can also display this definition using inference-rule notation. At the same time, let's introduce a more readable infix notation.
Notation "s =~ re" := (exp_match s re) (at level 80).
                        ---------------  (MEmpty)
                         [] =~ EmptyStr

                        ---------------  (MChar)
                         [x] =~ Char x

                     s1 =~ re1    s2 =~ re2
                    -------------------------  (MApp)
                     s1 ++ s2 =~ App re1 re2

                           s1 =~ re1
                    ----------------------  (MUnionL)
                     s1 =~ Union re1 re2

                           s2 =~ re2
                    ----------------------  (MUnionR)
                     s2 =~ Union re1 re2

                        ---------------  (MStar0)
                         [] =~ Star re

                   s1 =~ re    s2 =~ Star re
                   --------------------------  (MStarApp)
                      s1 ++ s2 =~ Star re
Notice that these rules are not quite the same as the informal ones that we gave at the beginning of the section. First, we don't need to include a rule explicitly stating that no string matches EmptySet; we just don't happen to include any rule that would have the effect of some string matching EmptySet. (Indeed, the syntax of inductive definitions doesn't even allow us to give such a "negative rule.")
Second, the informal rules for Union and Star correspond to two constructors each: MUnionL / MUnionR, and MStar0 / MStarApp. The result is logically equivalent to the original rules but more convenient to use in Coq, since the recursive occurrences of exp_match are given as direct arguments to the constructors, making it easier to perform induction on evidence. (The exp_match_ex1 and exp_match_ex2 exercises below ask you to prove that the constructors given in the inductive declaration and the ones that would arise from a more literal transcription of the informal rules are indeed equivalent.)
Let's illustrate these rules with a few examples.
Example reg_exp_ex1 : [1] =~ Char 1.
Proof.
  apply MChar.
Qed.

Example reg_exp_ex2 : [1; 2] =~ App (Char 1) (Char 2).
Proof.
  apply (MApp [1] _ [2]).
  - apply MChar.
  - apply MChar.
Qed.
(Notice how the last example applies MApp to the strings [1] and [2] directly. Since the goal mentions [1; 2] instead of [1] ++ [2], Coq wouldn't be able to figure out how to split the string on its own.)
Using inversion, we can also show that certain strings do not match a regular expression:
Example reg_exp_ex3 : ¬ ([1; 2] =~ Char 1).
Proof.
  intros H. inversion H.
Qed.
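One more small example (added here as an extra illustration): to match against a Union, we pick the constructor corresponding to the branch that matches.
Example reg_exp_ex_union : [2] =~ Union (Char 1) (Char 2).
Proof.
  apply MUnionR. apply MChar.
Qed.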
We can define helper functions for writing down regular expressions. The reg_exp_of_list function constructs a regular expression that matches exactly the list that it receives as an argument:
Fixpoint reg_exp_of_list {T} (l : list T) :=
  match l with
  | [] ⇒ EmptyStr
  | x :: l'App (Char x) (reg_exp_of_list l')
  end.
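To see the shape of the expressions this function builds, we can ask Coq to compute a small instance (an added illustration):
Compute (reg_exp_of_list [1; 2]).
(* ===> App (Char 1) (App (Char 2) EmptyStr) *)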

Example reg_exp_ex4 : [1; 2; 3] =~ reg_exp_of_list [1; 2; 3].
Proof.
  simpl. apply (MApp [1]).
  { apply MChar. }
  apply (MApp [2]).
  { apply MChar. }
  apply (MApp [3]).
  { apply MChar. }
  apply MEmpty.
Qed.
We can also prove general facts about exp_match. For instance, the following lemma shows that every string s that matches re also matches Star re.
Lemma MStar1 :
   ∀ T s (re : @reg_exp T),
    s =~ re →
    s =~ Star re.
Proof.
  intros T s re H.
  rewrite <- (app_nil_r _ s).
  apply (MStarApp s [] re).
  - apply H.
  - apply MStar0.
Qed.
(Note the use of app_nil_r to change the goal of the theorem to exactly the same shape expected by MStarApp.)
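For instance (an extra example using MStar1; the name reg_exp_ex5 is new here), any string matching re also matches Star re:
Example reg_exp_ex5 : [1] =~ Star (Char 1).
Proof.
  apply MStar1. apply MChar.
Qed.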

Exercise: 3 stars (exp_match_ex1)

The following lemmas show that the informal matching rules given at the beginning of the chapter can be obtained from the formal inductive definition.
Lemma empty_is_empty : ∀ T (s : list T),
  ¬ (s =~ EmptySet).
Proof.
  (* FILL IN HERE *) Admitted.

Lemma MUnion' : ∀ T (s : list T) (re1 re2 : @reg_exp T),
  s =~ re1 ∨ s =~ re2 →
  s =~ Union re1 re2.
Proof.
  (* FILL IN HERE *) Admitted.
The next lemma is stated in terms of the fold function from the Poly chapter: If ss : list (list T) represents a sequence of strings s1, ..., sn, then fold app ss [] is the result of concatenating them all together.
Lemma MStar' : ∀ T (ss : list (list T)) (re : reg_exp),
  (∀ s, In s ss → s =~ re) →
  fold app ss [] =~ Star re.
Proof.
  (* FILL IN HERE *) Admitted.

Exercise: 4 stars, optional (reg_exp_of_list_spec)

Prove that reg_exp_of_list satisfies the following specification:
Lemma reg_exp_of_list_spec : ∀ T (s1 s2 : list T),
  s1 =~ reg_exp_of_list s2 ↔ s1 = s2.
Proof.
  (* FILL IN HERE *) Admitted.
Since the definition of exp_match has a recursive structure, we might expect that proofs involving regular expressions will often require induction on evidence.
For example, suppose that we wanted to prove the following intuitive result: If a regular expression re matches some string s, then all elements of s must occur as character literals somewhere in re.
To state this theorem, we first define a function re_chars that lists all characters that occur in a regular expression:
Fixpoint re_chars {T} (re : reg_exp) : list T :=
  match re with
  | EmptySet ⇒ []
  | EmptyStr ⇒ []
  | Char x ⇒ [x]
  | App re1 re2 ⇒ re_chars re1 ++ re_chars re2
  | Union re1 re2 ⇒ re_chars re1 ++ re_chars re2
  | Star re ⇒ re_chars re
  end.
We can then phrase our theorem as follows:
Theorem in_re_match : ∀ T (s : list T) (re : reg_exp) (x : T),
  s =~ re →
  In x s →
  In x (re_chars re).
Proof.
  intros T s re x Hmatch Hin.
  induction Hmatch
    as [| x'
        | s1 re1 s2 re2 Hmatch1 IH1 Hmatch2 IH2
        | s1 re1 re2 Hmatch IH | re1 s2 re2 Hmatch IH
        | re | s1 s2 re Hmatch1 IH1 Hmatch2 IH2].
  (* WORKED IN CLASS *)
  - (* MEmpty *)
    apply Hin.
  - (* MChar *)
    apply Hin.
  - simpl. rewrite In_app_iff in *.
    destruct Hin as [Hin | Hin].
    + (* In x s1 *)
      left. apply (IH1 Hin).
    + (* In x s2 *)
      right. apply (IH2 Hin).
  - (* MUnionL *)
    simpl. rewrite In_app_iff.
    left. apply (IH Hin).
  - (* MUnionR *)
    simpl. rewrite In_app_iff.
    right. apply (IH Hin).
  - (* MStar0 *)
    destruct Hin.
Something interesting happens in the MStarApp case. We obtain two induction hypotheses: One that applies when x occurs in s1 (which matches re), and a second one that applies when x occurs in s2 (which matches Star re). This is a good illustration of why we need induction on evidence for exp_match, as opposed to re: The latter would only provide an induction hypothesis for strings that match re, which would not allow us to reason about the case In x s2.
  - (* MStarApp *)
    simpl. rewrite In_app_iff in Hin.
    destruct Hin as [Hin | Hin].
    + (* In x s1 *)
      apply (IH1 Hin).
    + (* In x s2 *)
      apply (IH2 Hin).
Qed.

Exercise: 4 stars (re_not_empty)

Write a recursive function re_not_empty that tests whether, for a given regular expression re, there exists some string that matches re. Prove that your function is correct.
Fixpoint re_not_empty {T : Type} (re : @reg_exp T) : bool
  (* REPLACE THIS LINE WITH ":= _your_definition_ ." *). Admitted.

Lemma re_not_empty_correct : ∀ T (re : @reg_exp T),
  (∃ s, s =~ re) ↔ re_not_empty re = true.
Proof.
  (* FILL IN HERE *) Admitted.

Additional Exercises

Exercise: 3 stars, recommended (nostutter_defn)

Formulating inductive definitions of properties is an important skill you'll need in this course. Try to solve this exercise without any help at all.
We say that a list "stutters" if it repeats the same element consecutively. (This is different from the NoDup property in the exercise below: the sequence 1;4;1 repeats the element 1 but does not stutter.) The property "nostutter mylist" means that mylist does not stutter. Formulate an inductive definition for nostutter.
Inductive nostutter {X:Type} : list X → Prop :=
 (* FILL IN HERE *)
.
Make sure each of these tests succeeds, but feel free to change the suggested proof (in comments) if the given one doesn't work for you. Your definition might be different from ours and still be correct, in which case the examples might need a different proof. (You'll notice that the suggested proofs use a number of tactics we haven't talked about, to make them more robust to different possible ways of defining nostutter. You can probably just uncomment and use them as-is, but you can also prove each example with more basic tactics.)
Example test_nostutter_1: nostutter [3;1;4;1;5;6].
(* FILL IN HERE *) Admitted.
(* 
  Proof. repeat constructor; apply beq_nat_false_iff; auto.
  Qed.
*)


Example test_nostutter_2: nostutter (@nil nat).
(* FILL IN HERE *) Admitted.
(* 
  Proof. repeat constructor; apply beq_nat_false_iff; auto.
  Qed.
*)


Example test_nostutter_3: nostutter [5].
(* FILL IN HERE *) Admitted.
(* 
  Proof. repeat constructor; apply beq_nat_false; auto. Qed.
*)


Example test_nostutter_4: not (nostutter [3;1;1;4]).
(* FILL IN HERE *) Admitted.
(* 
  Proof. intro.
  repeat match goal with
    h: nostutter _ |- _ => inversion h; clear h; subst
  end.
  contradiction H1; auto. Qed.
*)

Exercise: 4 stars, advanced (filter_challenge)

Let's prove that our definition of filter from the Poly chapter matches an abstract specification. Here is the specification, written out informally in English:
A list l is an "in-order merge" of l1 and l2 if it contains only and exactly the same elements as l1 and l2, in the same order as l1 and l2, but possibly interleaved. For example,
    [1;4;6;2;3]
is an in-order merge of
    [1;6;2]
and
    [4;3].
Now, suppose we have a set X, a function test: X → bool, and a list l of type list X. Suppose further that l is an in-order merge of two lists, l1 and l2, such that every item in l1 satisfies test and no item in l2 satisfies test. Then filter test l = l1.
Translate this specification into a Coq theorem and prove it. (You'll need to begin by defining what it means for one list to be a merge of two others. Do this with an inductive relation, not a Fixpoint.)
(* FILL IN HERE *)

Exercise: 5 stars, advanced, optional (filter_challenge_2)

A different way to characterize the behavior of filter goes like this: Among all subsequences of l with the property that test evaluates to true on all their members, filter test l is the longest. Formalize this claim in Coq and prove it.
(* FILL IN HERE *)

Exercise: 4 stars, optional (palindromes)

A palindrome is a sequence that reads the same backwards as forwards.
  • Define an inductive proposition pal on list X that captures what it means to be a palindrome. (Hint: You'll need three cases. Your definition should be based on the structure of the list; just having a single constructor like
      c : ∀ l, l = rev l → pal l
    may seem obvious, but will not work very well.)
  • Prove (pal_app_rev) that
      ∀ l, pal (l ++ rev l).
  • Prove (pal_rev) that
      ∀ l, pal l → l = rev l.
(* FILL IN HERE *)

Exercise: 5 stars, optional (palindrome_converse)

Again, the converse direction is significantly more difficult, due to the lack of evidence. Using your definition of pal from the previous exercise, prove that
      ∀ l, l = rev l → pal l.
(* FILL IN HERE *)

Exercise: 4 stars, advanced, optional (NoDup)

Recall the definition of the In property from the Logic chapter, which asserts that a value x appears at least once in a list l:
(* Fixpoint In (A : Type) (x : A) (l : list A) : Prop :=
   match l with
   | [] => False
   | x' :: l' => x' = x \/ In A x l'
   end *)

Your first task is to use In to define a proposition disjoint X l1 l2, which should be provable exactly when l1 and l2 are lists (with elements of type X) that have no elements in common.
(* FILL IN HERE *)
Next, use In to define an inductive proposition NoDup X l, which should be provable exactly when l is a list (with elements of type X) where every member is different from every other. For example, NoDup nat [1;2;3;4] and NoDup bool [] should be provable, while NoDup nat [1;2;1] and NoDup bool [true;true] should not be.
(* FILL IN HERE *)
Finally, state and prove one or more interesting theorems relating disjoint, NoDup and ++ (list append).
(* FILL IN HERE *)

Exercise: 4 stars, advanced, optional (pigeonhole_principle)

The pigeonhole principle states a basic fact about counting: if we distribute more than n items into n pigeonholes, some pigeonhole must contain at least two items. As often happens, this apparently trivial fact about numbers requires non-trivial machinery to prove, but we now have enough...
First prove an easy useful lemma.
Lemma in_split : ∀ (X:Type) (x:X) (l:list X),
  In x l →
  ∃ l1 l2, l = l1 ++ x :: l2.
Proof.
  (* FILL IN HERE *) Admitted.
Now define a property repeats such that repeats X l asserts that l contains at least one repeated element (of type X).
Inductive repeats {X:Type} : list X → Prop :=
  (* FILL IN HERE *)
.
Now, here's a way to formalize the pigeonhole principle. Suppose list l2 represents a list of pigeonhole labels, and list l1 represents the labels assigned to a list of items. If there are more items than labels, at least two items must have the same label — i.e., list l1 must contain repeats.
This proof is much easier if you use the excluded_middle hypothesis to show that In is decidable, i.e., ∀ x l, (In x l) ∨ ¬ (In x l). However, it is also possible to make the proof go through without assuming that In is decidable; if you manage to do this, you will not need the excluded_middle hypothesis.
Theorem pigeonhole_principle: ∀ (X:Type) (l1 l2:list X),
   excluded_middle →
   (∀ x, In x l1 → In x l2) →
   length l2 < length l1 →
   repeats l1.
Proof.
   intros X l1. induction l1 as [|x l1' IHl1'].
  (* FILL IN HERE *) Admitted.