IndProp: Inductively Defined Propositions
Inductively Defined Propositions
We can say what it means for a number to be even by giving two rules:
- Rule ev_0: The number 0 is even.
- Rule ev_SS: If n is even, then S (S n) is even.
                             ------ (ev_0)
                              ev 0

                              ev n
                         -------------- (ev_SS)
                          ev (S (S n))
    For example, we can stack the rules to build a derivation showing that 4 is even:

                             ------  (ev_0)
                              ev 0
                             ------ (ev_SS)
                              ev 2
                             ------ (ev_SS)
                              ev 4
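    In Coq, these two rules become the constructors of an inductive definition; their types are exactly the "constructor theorems" discussed below:

Inductive ev : nat → Prop :=
| ev_0 : ev 0
| ev_SS : ∀ n, ev n → ev (S (S n)).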
This definition is different in one crucial respect from
    previous uses of Inductive: its result is not a Type, but
    rather a function from nat to Prop — that is, a property of
    numbers.  Note that we've already seen other inductive definitions
    that result in functions, such as list, whose type is Type →
    Type.  What is new here is that, because the nat argument of
    ev appears unnamed, to the right of the colon, it is allowed
    to take different values in the types of different constructors:
    0 in the type of ev_0 and S (S n) in the type of ev_SS.
 
    In contrast, the definition of list names the X parameter
    globally, to the left of the colon, forcing the result of
    nil and cons to be the same (list X).  Had we tried to bring
    nat to the left in defining ev, we would have seen an error: 
Fail Inductive wrong_ev (n : nat) : Prop :=
| wrong_ev_0 : wrong_ev 0
| wrong_ev_SS : ∀ n, wrong_ev n → wrong_ev (S (S n)).
(* ===> Error: A parameter of an inductive type n is not
allowed to be used as a bound variable in the type
of its constructor. *)
("Parameter" here is Coq jargon for an argument on the left of the
    colon in an Inductive definition; "index" is used to refer to
    arguments on the right of the colon.) 
 
 We can think of the definition of ev as defining a Coq property
    ev : nat → Prop, together with primitive theorems ev_0 : ev 0 and
    ev_SS : ∀n, ev n → ev (S (S n)). 
 
 Such "constructor theorems" have the same status as proven
    theorems.  In particular, we can use Coq's apply tactic with the
    rule names to prove ev for particular numbers... 
... or we can use function application syntax: 
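For example, here are both styles used to show that 4 is even (a small sketch):

Theorem ev_4 : ev 4.
Proof.
  apply ev_SS. apply ev_SS. apply ev_0.
Qed.

Theorem ev_4' : ev 4.
Proof.
  apply (ev_SS 2 (ev_SS 0 ev_0)).
Qed.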
We can also prove theorems that have hypotheses involving ev. 
Theorem ev_plus4 : ∀ n, ev n → ev (4 + n).
Proof.
intros n. simpl. intros Hn.
apply ev_SS. apply ev_SS. apply Hn.
Qed.
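A related fact, used later in the proof of ev_even_iff, is that doubling any number yields an even number. Here is a proof sketch, assuming the standard double function from the Induction chapter (double 0 = 0 and double (S n) = S (S (double n))):

Theorem ev_double : ∀ n, ev (double n).
Proof.
  intros n. induction n as [| n' IHn'].
  - (* n = 0: double 0 is 0, which is even by ev_0 *)
    simpl. apply ev_0.
  - (* n = S n': double (S n') is S (S (double n')), so ev_SS applies *)
    simpl. apply ev_SS. apply IHn'.
Qed.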
Using Evidence in Proofs
Besides constructing evidence that numbers are even, we can also reason about such evidence.  Defining ev with an Inductive declaration tells Coq not only that ev_0 and ev_SS are valid ways to build evidence of evenness, but also that they are the only ways.  In other words, if someone gives us evidence E for ev n, then E must be one of two things:
- E is ev_0 (and n is O), or
- E is ev_SS n' E' (and n is S (S n'), where E' is evidence for ev n').
Inversion on Evidence
Suppose we are proving some fact involving a number n and we are given ev n as a hypothesis.  Because ev is inductively defined, we can perform case analysis on this evidence; by the definition of ev, there are only two cases to consider:
-  If the evidence is of the form ev_0, we know that n = 0.
- Otherwise, the evidence must have the form ev_SS n' E', where n = S (S n') and E' is evidence for ev n'.
Theorem ev_minus2 : ∀ n,
ev n → ev (pred (pred n)).
Proof.
intros n E.
inversion E as [| n' E'].
- (* E = ev_0 *) simpl. apply ev_0.
- (* E = ev_SS n' E' *) simpl. apply E'. Qed.
In words, here is how the inversion reasoning works in this proof:
 
-  If the evidence is of the form ev_0, we know that n = 0.
      Therefore, it suffices to show that ev (pred (pred 0)) holds.
      By the definition of pred, this is equivalent to showing that
      ev 0 holds, which directly follows from ev_0.
- Otherwise, the evidence must have the form ev_SS n' E', where n = S (S n') and E' is evidence for ev n'. We must then show that ev (pred (pred (S (S n')))) holds, which, after simplification, follows directly from E'.
 
 This particular proof also works if we replace inversion by
    destruct: 
Theorem ev_minus2' : ∀ n,
ev n → ev (pred (pred n)).
Proof.
intros n E.
destruct E as [| n' E'].
- (* E = ev_0 *) simpl. apply ev_0.
- (* E = ev_SS n' E' *) simpl. apply E'. Qed.
The difference between the two forms is that inversion is more
    convenient when used on a hypothesis that consists of an inductive
    property applied to a complex expression (as opposed to a single
variable).  Here is a concrete example.  Suppose that we wanted
    to prove the following variation of ev_minus2: 
Theorem evSS_ev : ∀ n,
ev (S (S n)) → ev n.
Intuitively, we know that evidence for the hypothesis cannot
    consist just of the ev_0 constructor, since O and S are
    different constructors of the type nat; hence, ev_SS is the
    only case that applies.  Unfortunately, destruct is not smart
    enough to realize this, and it still generates two subgoals.  Even
    worse, in doing so, it keeps the final goal unchanged, failing to
    provide any useful information for completing the proof.  
Proof.
intros n E.
destruct E as [| n' E'].
- (* E = ev_0. *)
(* We must prove that n is even from no assumptions! *)
Abort.
What happened, exactly?  Calling destruct has the effect of
    replacing all occurrences of the property argument by the values
    that correspond to each constructor.  This is enough in the case
    of ev_minus2' because that argument, n, is mentioned directly
    in the final goal. However, it doesn't help in the case of
    evSS_ev since the term that gets replaced (S (S n)) is not
    mentioned anywhere. 
 
 The inversion tactic, on the other hand, can detect (1) that the
first case does not apply, and (2) that the n' that appears in
    the ev_SS case must be the same as n.  This allows us to
    complete the proof: 
Theorem evSS_ev : ∀ n,
ev (S (S n)) → ev n.
Proof.
intros n E.
inversion E as [| n' E'].
(* We are in the E = ev_SS n' E' case now. *)
apply E'.
Qed.
By using inversion, we can also apply the principle of explosion
    to "obviously contradictory" hypotheses involving inductive
    properties. For example: 
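For instance, a hypothesis claiming that 1 is even is refuted in a single step (a small example):

Theorem one_not_even : ¬ ev 1.
Proof.
  intros H. inversion H.
Qed.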
Lemma ev_even_firsttry : ∀ n,
ev n → ∃ k, n = double k.
Proof.
We could try to proceed by case analysis or induction on n.  But
    since ev is mentioned in a premise, this strategy would probably
    lead to a dead end, as in the previous section.  Thus, it seems
    better to first try inversion on the evidence for ev.  Indeed,
    the first case can be solved trivially. 
  intros n E. inversion E as [| n' E'].
- (* E = ev_0 *)
∃ 0. reflexivity.
- (* E = ev_SS n' E' *) simpl.
Unfortunately, the second case is harder.  We need to show ∃
    k, S (S n') = double k, but the only available assumption is
    E', which states that ev n' holds.  Since this isn't directly
    useful, it seems that we are stuck and that performing case
    analysis on E was a waste of time.
 
    If we look more closely at our second goal, however, we can see
    that something interesting happened: By performing case analysis
    on E, we were able to reduce the original result to a similar
    one that involves a different piece of evidence for ev: E'.
    More formally, we can finish our proof by showing that
 
        ∃ k', n' = double k',
 
    which is the same as the original statement, but with n' instead
    of n.  Indeed, it is not difficult to convince Coq that this
    intermediate result suffices. 
    assert (I : (∃ k', n' = double k') →
(∃ k, S (S n') = double k)).
{ intros [k' Hk']. rewrite Hk'. ∃ (S k'). reflexivity. }
apply I. (* reduce the original goal to the new one *)
Admitted.
Induction on Evidence
Lemma ev_even : ∀ n,
ev n → ∃ k, n = double k.
Proof.
intros n E.
induction E as [|n' E' IH].
- (* E = ev_0 *)
∃ 0. reflexivity.
- (* E = ev_SS n' E'
with IH : exists k', n' = double k' *)
destruct IH as [k' Hk'].
rewrite Hk'. ∃ (S k'). reflexivity.
Qed.
Here, we can see that Coq produced an IH that corresponds to
    E', the single recursive occurrence of ev in its own
    definition.  Since E' mentions n', the induction hypothesis
    talks about n', as opposed to n or some other number. 
 
 The equivalence between the second and third definitions of
    evenness now follows. 
Theorem ev_even_iff : ∀ n,
ev n ↔ ∃ k, n = double k.
Proof.
intros n. split.
- (* -> *) apply ev_even.
- (* <- *) intros [k Hk]. rewrite Hk. apply ev_double.
Qed.
As we will see in later chapters, induction on evidence is a
    recurring technique across many areas. 
 
 The following exercises provide simple examples of this
    technique, to help you familiarize yourself with it. 
 
Exercise: 2 stars (ev_sum)
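Show that the sum of two even numbers is even.  A skeleton of the statement (using the usual name ev_sum), with the proof left to fill in:

Theorem ev_sum : ∀ n m, ev n → ev m → ev (n + m).
Proof.
  (* FILL IN HERE *) Admitted.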
Exercise: 4 stars, advanced, optional (ev'_ev)
In general, there may be multiple ways of defining a property inductively. For example, here's a (slightly contrived) alternative definition for ev:
Inductive ev' : nat → Prop :=
| ev'_0 : ev' 0
| ev'_2 : ev' 2
| ev'_sum : ∀ n m, ev' n → ev' m → ev' (n + m).
Prove that this definition is logically equivalent to the old
    one.  (You may want to look at the previous theorem when you get
    to the induction step.) 
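A skeleton of the statement (using the usual name ev'_ev):

Theorem ev'_ev : ∀ n, ev' n ↔ ev n.
Proof.
  (* FILL IN HERE *) Admitted.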
☐ 
Exercise: 3 stars, advanced, recommended (ev_ev__ev)
Finding the appropriate thing to do induction on is a bit tricky here.
Exercise: 3 stars (ev_plus_plus)
This exercise just requires applying existing lemmas. No induction or even case analysis is needed, though some of the rewriting may be tedious.
Inductive Relations
One useful example is the "less than or equal to" relation on
    numbers. 
 
 The following definition should be fairly intuitive.  It
    says that there are two ways to give evidence that one number is
    less than or equal to another: either observe that they are the
    same number, or give evidence that the first is less than or equal
    to the predecessor of the second. 
Inductive le : nat → nat → Prop :=
| le_n : ∀ n, le n n
| le_S : ∀ n m, (le n m) → (le n (S m)).
Notation "m ≤ n" := (le m n).
Proofs of facts about ≤ using the constructors le_n and
    le_S follow the same patterns as proofs about properties, like
    ev above. We can apply the constructors to prove ≤
    goals (e.g., to show that 3≤3 or 3≤6), and we can use
    tactics like inversion to extract information from ≤
    hypotheses in the context (e.g., to prove that (2 ≤ 1) →
    2+2=5.) 
 
 Here are some sanity checks on the definition.  (Notice that,
    although these are the same kind of simple "unit tests" as we gave
    for the testing functions we wrote in the first few lectures, we
    must construct their proofs explicitly — simpl and
    reflexivity don't do the job, because the proofs aren't just a
    matter of simplifying computations.) 
Theorem test_le1 :
3 ≤ 3.
Proof.
(* WORKED IN CLASS *)
apply le_n. Qed.
Theorem test_le2 :
3 ≤ 6.
Proof.
(* WORKED IN CLASS *)
apply le_S. apply le_S. apply le_S. apply le_n. Qed.
Theorem test_le3 :
(2 ≤ 1) → 2 + 2 = 5.
Proof.
(* WORKED IN CLASS *)
intros H. inversion H. inversion H2. Qed.
The "strictly less than" relation n < m can now be defined
    in terms of le. 
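A minimal definition, following the convention that n < m means S n ≤ m (this is the form the later "unfold lt" steps rely on):

Definition lt (n m : nat) := le (S n) m.

Notation "m < n" := (lt m n).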
Here are a few more simple relations on numbers: 
Inductive square_of : nat → nat → Prop :=
| sq : ∀ n:nat, square_of n (n * n).
Inductive next_nat : nat → nat → Prop :=
| nn : ∀ n:nat, next_nat n (S n).
Inductive next_even : nat → nat → Prop :=
| ne_1 : ∀ n, ev (S n) → next_even n (S n)
| ne_2 : ∀ n, ev (S (S n)) → next_even n (S (S n)).
Exercise: 2 stars (total_relation)
Define (in Coq) an inductive binary relation total_relation that holds between every pair of natural numbers.
(* FILL IN HERE *)
☐ 
Exercise: 2 stars (empty_relation)
Define (in Coq) an inductive binary relation empty_relation (on numbers) that never holds.
(* FILL IN HERE *)
☐ 
Exercise: 3 stars, optional (le_exercises)
Here are a number of facts about the ≤ and < relations that we are going to need later in the course. The proofs make good practice exercises.
Lemma le_trans : ∀ m n o, m ≤ n → n ≤ o → m ≤ o.
Proof.
(* FILL IN HERE *) Admitted.
Theorem O_le_n : ∀ n,
0 ≤ n.
Proof.
(* FILL IN HERE *) Admitted.
Theorem n_le_m__Sn_le_Sm : ∀ n m,
n ≤ m → S n ≤ S m.
Proof.
(* FILL IN HERE *) Admitted.
Theorem Sn_le_Sm__n_le_m : ∀ n m,
S n ≤ S m → n ≤ m.
Proof.
(* FILL IN HERE *) Admitted.
Theorem le_plus_l : ∀ a b,
a ≤ a + b.
Proof.
(* FILL IN HERE *) Admitted.
Theorem plus_lt : ∀ n1 n2 m,
n1 + n2 < m →
n1 < m ∧ n2 < m.
Proof.
unfold lt.
(* FILL IN HERE *) Admitted.
Lemma minus_Sn_m: ∀ n m : nat,
m ≤ n → S (n - m) = S n - m.
Proof.
(* FILL IN HERE *) Admitted.
Theorem lt_S : ∀ n m,
n < m →
n < S m.
Proof.
(* FILL IN HERE *) Admitted.
Theorem leb_complete : ∀ n m,
leb n m = true → n ≤ m.
Proof.
(* FILL IN HERE *) Admitted.
Hint: The next one may be easiest to prove by induction on m. 
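That next theorem is presumably the converse of leb_complete:

Theorem leb_correct : ∀ n m,
  n ≤ m → leb n m = true.
Proof.
  (* FILL IN HERE *) Admitted.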
Hint: This theorem can easily be proved without using induction. 
Theorem leb_true_trans : ∀ n m o,
leb n m = true → leb m o = true → leb n o = true.
Proof.
(* FILL IN HERE *) Admitted.
☐ 
Lemma leb_spec : ∀ (n m : nat),
leb n m = true ∨ (leb n m = false ∧ leb m n = true).
Proof.
(* FILL IN HERE *) Admitted.
☐ 
Exercise: 3 stars, recommended (R_provability)
We can define three-place relations, four-place relations, etc., in just the same way as binary relations. For example, consider the following three-place relation on numbers:
Inductive R : nat → nat → nat → Prop :=
| c1 : R 0 0 0
| c2 : ∀ m n o, R m n o → R (S m) n (S o)
| c3 : ∀ m n o, R m n o → R m (S n) (S o)
| c4 : ∀ m n o, R (S m) (S n) (S (S o)) → R m n o
| c5 : ∀ m n o, R m n o → R n m o.
-  Which of the following propositions are provable?
- R 1 1 2
-  R 2 2 6
 
-  If we dropped constructor c5 from the definition of R,
      would the set of provable propositions change?  Briefly (1
      sentence) explain your answer.
- If we dropped constructor c4 from the definition of R, would the set of provable propositions change? Briefly (1 sentence) explain your answer.
☐
Exercise: 3 stars (R_fact)
The relation R above actually encodes a familiar function. Figure out which function; then state and prove this equivalence in Coq.
Definition fR : nat → nat → nat
(* REPLACE THIS LINE WITH ":= _your_definition_ ." *). Admitted.
Theorem R_equiv_fR : ∀ m n o, R m n o ↔ fR m n = o.
Proof.
(* FILL IN HERE *) Admitted.
☐ 
Exercise: 4 stars, advanced (subsequence)
A list is a subsequence of another list if all of the elements in the first list occur in the same order in the second list, possibly with some extra elements in between. For example,
      [1;2;3]
 
    is a subsequence of each of the lists
      [1;2;3]
[1;1;1;2;2;3]
[1;2;7;3]
[5;6;1;9;9;2;7;3;8] 
    but it is not a subsequence of any of the lists
      [1;2]
[1;3]
[5;6;2;1;7;3;8]. 
-  Define an inductive proposition subseq on list nat that
      captures what it means to be a subsequence. (Hint: You'll need
      three cases.)
-  Prove subseq_refl that subsequence is reflexive, that is,
      any list is a subsequence of itself.
-  Prove subseq_app that for any lists l1, l2, and l3,
      if l1 is a subsequence of l2, then l1 is also a subsequence
      of l2 ++ l3.
- (Optional, harder) Prove subseq_trans that subsequence is transitive — that is, if l1 is a subsequence of l2 and l2 is a subsequence of l3, then l1 is a subsequence of l3. Hint: choose your induction carefully!
(* FILL IN HERE *)
☐ 
Exercise: 2 stars (R_provability2)
Suppose we give Coq the following definition:
    Inductive R : nat → list nat → Prop :=
| c1 : R 0 []
| c2 : ∀ n l, R n l → R (S n) (n :: l)
| c3 : ∀ n l, R (S n) l → R n l. 
    Which of the following propositions are provable?
- R 2 [1;0]
- R 1 [1;2;1;0]
- R 6 [3;2;1;0]
Definition weak_induction_principle : Prop :=
∀ P : nat → Prop,
P 0 → (∀ n : nat, P n → P (S n)) → ∀ n : nat, P n.
That is, to prove P n for any n, we need to show that:
 
- P 0 holds (the base case), and
- if P n' holds, then P (S n') holds (the inductive case).
Definition strong_induction_principle : Prop :=
∀ P : nat → Prop,
(∀ n : nat, (∀ m : nat, m < n → P m) → P n) →
∀ n : nat, P n.
That is, to prove P n for any n, we:
 
- assume that P m holds for all m < n, and then
- show that P n holds.
 
    Metaphorically speaking, in weak induction we build a tower of
    proof just using the floor beneath us: to build the S nth floor,
    we assume that the nth floor is on solid ground. In strong
    induction, we build a tower of proof using all of the floors
    beneath us: to build the nth floor, we can rely on the mth
    floor for m < n.
 
 Suppose we define a function pow : nat → nat → nat as follows:
 
  pow(m,0) = 1
  pow(m,n) = if n is even
             then pow(m*m,n/2)
             else m * pow(m,n-1)
 
    We use strong induction to prove that the informal function pow
    defined above behaves the same as exp from Basics.v:
 
    Fixpoint exp (base power : nat) : nat :=
      match power with
      | O ⇒ S O
      | S p ⇒ mult base (exp base p)
      end.
 
- Theorem: forall m and n, pow(m,n) = exp m n.
-  If n=0, then we have exp m 0 = 1 and pow(m,0) = 1 by definition.
-  If n = S n', then we have exp m (S n') = m * exp m n'.
      We go by cases on the parity of n.
  + If n is even, then we have pow(m, n) = pow(m*m, n/2). But by the IH on n/2 < n, pow(m*m, n/2) is equal to exp (m*m) (n/2), which is itself equal to exp m n.
  + If n is odd, then we have pow(m, S n') = m * pow(m, n'); by the IH on n' < n, we know that pow(m, n') = exp(m, n'), and we are done.
 
    (In the even case, we'll assume that exp (m*m) ((S n')/2) is equal to exp m (S n').)
 
   A few things to note about this proof:
 
-  We get the IH immediately!
-  We manually do a case analysis as the first step of our proof.
-  When n=0, our IH is useless: there is no n' < 0!
-  Whenever we apply the IH, we have to show we're applying it to a
     smaller number.
 
 Strong induction is at least as strong as weak induction: we can
    prove the principle of weak induction using the principle of
    strong induction.
 
    Do NOT use the induction tactic: instead we apply the strong
    induction principle.
Exercise: 2 stars (strong_induction__weak_induction)
Lemma strong_induction__weak_induction :
strong_induction_principle → weak_induction_principle.
Proof.
unfold weak_induction_principle.
intros Hstrong P Hbase Hind.
apply Hstrong.
(* FILL IN HERE *) Admitted.
☐ 
Exercise: 3 stars, recommended (strong_induction)
Lemma strong_induction :
strong_induction_principle.
Proof.
unfold strong_induction_principle.
intros P IHstrong n.
assert (∀ k, k ≤ n → P k).
{ induction n as [|n' IHn'].
(* FILL IN HERE *) Admitted.
☐ 
Exercise: 3 stars (pow_informal_proof)
Prove the following theorem. You may assume that n/2 < n.
- Theorem: pow(1,k) = 1 for all k.
☐
Exercise: 3 stars (down_informal_proof)
Suppose we define the function down as follows:
    down 0 = 0
    down n = if n is even
             then down(n / 2)
             else down(n - 1)
- Theorem: down n = 0 for all n.
☐
Case Study: Regular Expressions
Inductive reg_exp {T : Type} : Type :=
| EmptySet : reg_exp
| EmptyStr : reg_exp
| Char : T → reg_exp
| App : reg_exp → reg_exp → reg_exp
| Union : reg_exp → reg_exp → reg_exp
| Star : reg_exp → reg_exp.
Note that this definition is polymorphic: Regular
    expressions in reg_exp T describe strings with characters drawn
    from T — that is, lists of elements of T.
 
    (We depart slightly from standard practice in that we do not
    require the type T to be finite.  This results in a somewhat
    different theory of regular expressions, but the difference is not
    significant for our purposes.) 
 
 We connect regular expressions and strings via the following
    rules, which define when a regular expression matches some
    string:
 
 
 We can easily translate this informal definition into an
    Inductive one as follows: 
-  The expression EmptySet does not match any string.
-  The expression EmptyStr matches the empty string [].
-  The expression Char x matches the one-character string [x].
-  If re1 matches s1, and re2 matches s2, then App re1
        re2 matches s1 ++ s2.
-  If at least one of re1 and re2 matches s, then Union re1
        re2 matches s.
-  Finally, if we can write some string s as the concatenation of
        a sequence of strings s = s_1 ++ ... ++ s_k, and the
        expression re matches each one of the strings s_i, then
        Star re matches s.
As a special case, the sequence of strings may be empty, so Star re always matches the empty string [] no matter what re is.
Inductive exp_match {T} : list T → reg_exp → Prop :=
| MEmpty : exp_match [] EmptyStr
| MChar : ∀ x, exp_match [x] (Char x)
| MApp : ∀ s1 re1 s2 re2,
exp_match s1 re1 →
exp_match s2 re2 →
exp_match (s1 ++ s2) (App re1 re2)
| MUnionL : ∀ s1 re1 re2,
exp_match s1 re1 →
exp_match s1 (Union re1 re2)
| MUnionR : ∀ re1 s2 re2,
exp_match s2 re2 →
exp_match s2 (Union re1 re2)
| MStar0 : ∀ re, exp_match [] (Star re)
| MStarApp : ∀ s1 s2 re,
exp_match s1 re →
exp_match s2 (Star re) →
exp_match (s1 ++ s2) (Star re).
Again, for readability, we can also display this definition using
    inference-rule notation.  At the same time, let's introduce a more
    readable infix notation. 
                        ----------------- (MEmpty)
                         [] =~ EmptyStr

                        ----------------- (MChar)
                          [x] =~ Char x

                      s1 =~ re1    s2 =~ re2
                    --------------------------- (MApp)
                     s1 ++ s2 =~ App re1 re2

                            s1 =~ re1
                     ----------------------- (MUnionL)
                      s1 =~ Union re1 re2

                            s2 =~ re2
                     ----------------------- (MUnionR)
                      s2 =~ Union re1 re2

                        ----------------- (MStar0)
                          [] =~ Star re

                    s1 =~ re    s2 =~ Star re
                   --------------------------- (MStarApp)
                      s1 ++ s2 =~ Star re
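The infix notation used above can be declared like this, followed by a couple of small sample proofs (the precedence level is one standard choice):

Notation "s =~ re" := (exp_match s re) (at level 80).

Example reg_exp_ex1 : [1] =~ Char 1.
Proof.
  apply MChar.
Qed.

Example reg_exp_ex2 : [1; 2] =~ App (Char 1) (Char 2).
Proof.
  apply (MApp [1] _ [2]).
  - apply MChar.
  - apply MChar.
Qed.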
(Notice how the last example applies MApp to the strings [1]
    and [2] directly.  Since the goal mentions [1; 2] instead of
    [1] ++ [2], Coq wouldn't be able to figure out how to split the
    string on its own.)
 
    Using inversion, we can also show that certain strings do not
    match a regular expression: 
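For instance, a mismatch can be refuted in one step:

Example reg_exp_ex3 : ¬ ([1; 2] =~ Char 1).
Proof.
  intros H. inversion H.
Qed.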
We can define helper functions for writing down regular
    expressions. The reg_exp_of_list function constructs a regular
    expression that matches exactly the list that it receives as an
    argument: 
Fixpoint reg_exp_of_list {T} (l : list T) :=
match l with
| [] ⇒ EmptyStr
| x :: l' ⇒ App (Char x) (reg_exp_of_list l')
end.
Example reg_exp_ex4 : [1; 2; 3] =~ reg_exp_of_list [1; 2; 3].
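(* A proof sketch: peel off one character at a time with MApp,
   then finish with MEmpty for the trailing EmptyStr. *)
Proof.
  simpl. apply (MApp [1]).
  { apply MChar. }
  apply (MApp [2]).
  { apply MChar. }
  apply (MApp [3]).
  { apply MChar. }
  apply MEmpty.
Qed.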
We can also prove general facts about exp_match.  For instance,
    the following lemma shows that every string s that matches re
    also matches Star re. 
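A proof can be sketched as follows, rewriting with app_nil_r from the Poly chapter as described in the note below:

Lemma MStar1 : ∀ T (s : list T) (re : @reg_exp T),
  s =~ re →
  s =~ Star re.
Proof.
  intros T s re H.
  rewrite <- (app_nil_r T s).
  apply (MStarApp s [] re).
  - apply H.
  - apply MStar0.
Qed.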
(Note the use of app_nil_r to change the goal of the theorem to
    exactly the same shape expected by MStarApp.) 
 
Exercise: 3 stars (exp_match_ex1)
The following lemmas show that the informal matching rules given at the beginning of the chapter can be obtained from the formal inductive definition.
Lemma empty_is_empty : ∀ T (s : list T),
¬ (s =~ EmptySet).
Proof.
(* FILL IN HERE *) Admitted.
Lemma MUnion' : ∀ T (s : list T) (re1 re2 : @reg_exp T),
s =~ re1 ∨ s =~ re2 →
s =~ Union re1 re2.
Proof.
(* FILL IN HERE *) Admitted.
The next lemma is stated in terms of the fold function from the
    Poly chapter: If ss : list (list T) represents a sequence of
    strings s1, ..., sn, then fold app ss [] is the result of
    concatenating them all together. 
Lemma MStar' : ∀ T (ss : list (list T)) (re : reg_exp),
(∀ s, In s ss → s =~ re) →
fold app ss [] =~ Star re.
Proof.
(* FILL IN HERE *) Admitted.
☐ 
Exercise: 4 stars, optional (reg_exp_of_list_spec)
Prove that reg_exp_of_list satisfies the following specification:
Lemma reg_exp_of_list_spec : ∀ T (s1 s2 : list T),
s1 =~ reg_exp_of_list s2 ↔ s1 = s2.
Proof.
(* FILL IN HERE *) Admitted.
☐ 
The following function, re_chars, lists all the characters that occur in a regular expression; we will use it to show that every character of a string matched by re must occur somewhere in re:
Fixpoint re_chars {T} (re : reg_exp) : list T :=
match re with
| EmptySet ⇒ []
| EmptyStr ⇒ []
| Char x ⇒ [x]
| App re1 re2 ⇒ re_chars re1 ++ re_chars re2
| Union re1 re2 ⇒ re_chars re1 ++ re_chars re2
| Star re ⇒ re_chars re
end.
We can then phrase our theorem as follows: 
Theorem in_re_match : ∀ T (s : list T) (re : reg_exp) (x : T),
s =~ re →
In x s →
In x (re_chars re).
Proof.
intros T s re x Hmatch Hin.
induction Hmatch
as [| x'
| s1 re1 s2 re2 Hmatch1 IH1 Hmatch2 IH2
| s1 re1 re2 Hmatch IH | re1 s2 re2 Hmatch IH
| re | s1 s2 re Hmatch1 IH1 Hmatch2 IH2].
(* WORKED IN CLASS *)
- (* MEmpty *)
apply Hin.
- (* MChar *)
apply Hin.
- (* MApp *) simpl. rewrite In_app_iff in *.
destruct Hin as [Hin | Hin].
+ (* In x s1 *)
left. apply (IH1 Hin).
+ (* In x s2 *)
right. apply (IH2 Hin).
- (* MUnionL *)
simpl. rewrite In_app_iff.
left. apply (IH Hin).
- (* MUnionR *)
simpl. rewrite In_app_iff.
right. apply (IH Hin).
- (* MStar0 *)
destruct Hin.
Something interesting happens in the MStarApp case.  We obtain
    two induction hypotheses: One that applies when x occurs in
    s1 (which matches re), and a second one that applies when x
    occurs in s2 (which matches Star re).  This is a good
    illustration of why we need induction on evidence for exp_match,
    as opposed to re: The latter would only provide an induction
    hypothesis for strings that match re, which would not allow us
    to reason about the case In x s2. 
  - (* MStarApp *)
simpl. rewrite In_app_iff in Hin.
destruct Hin as [Hin | Hin].
+ (* In x s1 *)
apply (IH1 Hin).
+ (* In x s2 *)
apply (IH2 Hin).
Qed.
Exercise: 4 stars (re_not_empty)
Write a recursive function re_not_empty that tests whether, for a given regular expression re, there exists some string that matches re. Prove that your function is correct.
Fixpoint re_not_empty {T : Type} (re : @reg_exp T) : bool
(* REPLACE THIS LINE WITH ":= _your_definition_ ." *). Admitted.
Lemma re_not_empty_correct : ∀ T (re : @reg_exp T),
(∃ s, s =~ re) ↔ re_not_empty re = true.
Proof.
(* FILL IN HERE *) Admitted.
☐ 
Additional Exercises
Exercise: 3 stars, recommended (nostutter_defn)
Formulating inductive definitions of properties is an important skill you'll need in this course. Try to solve this exercise without any help at all: define an inductive proposition nostutter on lists of numbers that holds exactly when no element of the list is repeated consecutively (the tests below illustrate the intended meaning).
Make sure each of these tests succeeds, but feel free to change
    the suggested proof (in comments) if the given one doesn't work
    for you.  Your definition might be different from ours and still
    be correct, in which case the examples might need a different
    proof.  (You'll notice that the suggested proofs use a number of
    tactics we haven't talked about, to make them more robust to
    different possible ways of defining nostutter.  You can probably
    just uncomment and use them as-is, but you can also prove each
    example with more basic tactics.)  
Example test_nostutter_1: nostutter [3;1;4;1;5;6].
(* FILL IN HERE *) Admitted.
(*
Proof. repeat constructor; apply beq_nat_false_iff; auto.
Qed.
*)
Example test_nostutter_2: nostutter (@nil nat).
(* FILL IN HERE *) Admitted.
(*
Proof. repeat constructor; apply beq_nat_false_iff; auto.
Qed.
*)
Example test_nostutter_3: nostutter [5].
(* FILL IN HERE *) Admitted.
(*
Proof. repeat constructor; apply beq_nat_false; auto. Qed.
*)
Example test_nostutter_4: not (nostutter [3;1;1;4]).
(* FILL IN HERE *) Admitted.
(*
Proof. intro.
repeat match goal with
h: nostutter _ |- _ => inversion h; clear h; subst
end.
contradiction H1; auto. Qed.
*)
☐ 
Exercise: 4 stars, advanced (filter_challenge)
Let's prove that our definition of filter from the Poly chapter matches an abstract specification. Here is the specification, written out informally in English: a list l is an "in-order merge" of l1 and l2 if it contains all the same elements as l1 and l2, in the same order as l1 and l2, but possibly interleaved. For example,
    [1;4;6;2;3]
 
    is an in-order merge of
    [1;6;2]
 
    and
    [4;3].
 
    Now, suppose we have a set X, a function test: X→bool, and a
    list l of type list X.  Suppose further that l is an
    in-order merge of two lists, l1 and l2, such that every item
    in l1 satisfies test and no item in l2 satisfies test.  Then
    filter test l = l1.
(* FILL IN HERE *)
☐ 
Exercise: 5 stars, advanced, optional (filter_challenge_2)
A different way to characterize the behavior of filter goes like this: Among all subsequences of l with the property that test evaluates to true on all their members, filter test l is the longest. Formalize this claim in Coq and prove it.
(* FILL IN HERE *)
☐ 
Exercise: 4 stars, optional (palindromes)
A palindrome is a sequence that reads the same backwards as forwards.
-  Define an inductive proposition pal on list X that
      captures what it means to be a palindrome. (Hint: You'll need
      three cases.  Your definition should be based on the structure
      of the list; just having a single constructor like
c : ∀ l, l = rev l → pal l may seem obvious, but will not work very well.)
-  Prove (pal_app_rev) that
∀ l, pal (l ++ rev l).
-  Prove (pal_rev) that
∀ l, pal l → l = rev l.
(* FILL IN HERE *)
☐ 
Exercise: 5 stars, optional (palindrome_converse)
Again, the converse direction is significantly more difficult, due to the lack of evidence. Using your definition of pal from the previous exercise, prove that
     ∀ l, l = rev l → pal l.
 
(* FILL IN HERE *)
☐ 
Exercise: 4 stars, advanced, optional (NoDup)
Recall the definition of the In property from the Logic chapter, which asserts that a value x appears at least once in a list l:
(* Fixpoint In (A : Type) (x : A) (l : list A) : Prop :=
match l with
| [] => False
| x' :: l' => x' = x \/ In A x l'
end *)
Your first task is to use In to define a proposition disjoint X
    l1 l2, which should be provable exactly when l1 and l2 are
    lists (with elements of type X) that have no elements in
    common. 
(* FILL IN HERE *)
Next, use In to define an inductive proposition NoDup X
    l, which should be provable exactly when l is a list (with
    elements of type X) where every member is different from every
    other.  For example, NoDup nat [1;2;3;4] and NoDup
    bool [] should be provable, while NoDup nat [1;2;1] and
    NoDup bool [true;true] should not be.  
(* FILL IN HERE *)
Finally, state and prove one or more interesting theorems relating
    disjoint, NoDup and ++ (list append).  
(* FILL IN HERE *)
☐ 
Exercise: 4 stars, advanced, optional (pigeonhole_principle)
The pigeonhole principle states a basic fact about counting: if we distribute more than n items into n pigeonholes, some pigeonhole must contain at least two items. As often happens, this apparently trivial fact about numbers requires non-trivial machinery to prove, but we now have enough...
Lemma in_split : ∀ (X:Type) (x:X) (l:list X),
In x l →
∃ l1 l2, l = l1 ++ x :: l2.
Proof.
(* FILL IN HERE *) Admitted.
Now define a property repeats such that repeats X l asserts
    that l contains at least one repeated element (of type X).  
Now, here's a way to formalize the pigeonhole principle.  Suppose
    list l2 represents a list of pigeonhole labels, and list l1
    represents the labels assigned to a list of items.  If there are
    more items than labels, at least two items must have the same
    label — i.e., list l1 must contain repeats.
 
    This proof is much easier if you use the excluded_middle
    hypothesis to show that In is decidable, i.e., ∀x l, (In x
    l) ∨ ¬ (In x l).  However, it is also possible to make the proof
    go through without assuming that In is decidable; if you
    manage to do this, you will not need the excluded_middle
    hypothesis. 
Theorem pigeonhole_principle: ∀ (X:Type) (l1  l2:list X),
excluded_middle →
(∀ x, In x l1 → In x l2) →
length l2 < length l1 →
repeats l1.
Proof.
intros X l1. induction l1 as [|x l1' IHl1'].
(* FILL IN HERE *) Admitted.
☐ 
