From 0037d0c44070e91ff58a5004ab2eb1143f6ffe05 Mon Sep 17 00:00:00 2001 From: Tomas Uribe Date: Fri, 3 May 2024 14:13:11 -0500 Subject: [PATCH 01/10] Separate out more substantial suggestions, remove the purely style-based ones. --- axioms_and_computation.md | 2 +- conv.md | 3 +- dependent_type_theory.md | 87 +++++++++++++++++++++----------------- induction_and_recursion.md | 10 ++--- interacting_with_lean.md | 4 +- propositions_and_proofs.md | 22 +++++----- tactics.md | 54 +++++++++++++++-------- 7 files changed, 107 insertions(+), 75 deletions(-) diff --git a/axioms_and_computation.md b/axioms_and_computation.md index 35a96097..1f1abffa 100644 --- a/axioms_and_computation.md +++ b/axioms_and_computation.md @@ -341,7 +341,7 @@ In a sense, however, a cast does not change the meaning of an expression. Rather, it is a mechanism to reason about the expression's type. Given an appropriate semantics, it then makes sense to reduce terms in ways that preserve their meaning, ignoring the intermediate -bookkeeping needed to make the reductions type correct. In that case, +bookkeeping needed to make the reductions type-correct. In that case, adding new axioms in ``Prop`` does not matter; by proof irrelevance, an expression in ``Prop`` carries no information, and can be safely ignored by the reduction procedures. diff --git a/conv.md b/conv.md index 3f0a4022..422b3865 100644 --- a/conv.md +++ b/conv.md @@ -35,7 +35,8 @@ example (a b c : Nat) : a * (b * c) = a * (c * b) := by The above snippet shows three navigation commands: -- `lhs` navigates to the left hand side of a relation (here equality), there is also a `rhs` navigating to the right hand side. +- `lhs` navigates to the left-hand side of a relation (here equality). +There is also a `rhs` to navigate to the right-hand side. - `congr` creates as many targets as there are (nondependent and explicit) arguments to the current head function (here the head function is multiplication). - `rfl` closes target using reflexivity. diff --git a/dependent_type_theory.md b/dependent_type_theory.md index 001f166d..ef308230 100644 --- a/dependent_type_theory.md +++ b/dependent_type_theory.md @@ -224,8 +224,8 @@ hierarchy of types: Think of ``Type 0`` as a universe of "small" or "ordinary" types. ``Type 1`` is then a larger universe of types, which contains ``Type 0`` as an element, and ``Type 2`` is an even larger universe of types, -which contains ``Type 1`` as an element. The list is indefinite, so -that there is a ``Type n`` for every natural number ``n``. ``Type`` is +which contains ``Type 1`` as an element. The list is infinite: +there is a ``Type n`` for every natural number ``n``. ``Type`` is an abbreviation for ``Type 0``: ```lean @@ -256,8 +256,8 @@ type signature of the function ``List``: ``` Here ``u`` is a variable ranging over type levels. The output of the -``#check`` command means that whenever ``α`` has type ``Type n``, -``List α`` also has type ``Type n``. The function ``Prod`` is +``#check List`` command means that whenever ``α`` has type ``Type u``, +``List α`` also has type ``Type u``. The function ``Prod`` is similarly polymorphic: ```lean @@ -275,7 +275,7 @@ def F (α : Type u) : Type u := Prod α α #check F -- Type u → Type u ``` -You can avoid the universe command by providing the universe parameters when defining F. +You can avoid the `universe` command by providing the universe parameters when defining `F`. 
```lean def F.{u} (α : Type u) : Type u := Prod α α @@ -359,7 +359,7 @@ The last expression, for example, denotes the function that takes three types, ``α``, ``β``, and ``γ``, and two functions, ``g : β → γ`` and ``f : α → β``, and returns the composition of ``g`` and ``f``. (Making sense of the type of this function requires an understanding -of dependent products, which will be explained below.) +of *dependent products*, which will be explained below.) The general form of a lambda expression is ``fun x : α => t``, where the variable ``x`` is a "bound variable": it is really a placeholder, @@ -482,30 +482,33 @@ So `def` can also be used to simply name a value like this: def pi := 3.141592654 ``` -`def` can take multiple input parameters. Let's create one +`def` can take multiple input parameters. Let's define a function that adds two natural numbers: ```lean -def add (x y : Nat) := +def add (x Nat) (y : Nat) := x + y #eval add 3 2 -- 5 ``` -The parameter list can be separated like this: +For convenience, the parameter list can be abbreviated as follows: ```lean # def double (x : Nat) : Nat := # x + x -def add (x : Nat) (y : Nat) := +def add (x y : Nat) := x + y #eval add (double 3) (7 + 9) -- 22 ``` +The resulting function will display a different *signature*, but the function itself will have the same type as before. -Notice here we called the `double` function to create the first +Notice that we used the `double` function to create the first parameter to `add`. + + You can use other more interesting expressions inside a `def`: ```lean @@ -517,7 +520,7 @@ def greater (x y : Nat) := You can probably guess what this one will do. You can also define a function that takes another function as input. -The following calls a given function twice passing the output of the +The following calls a given function twice, passing the output of the first invocation to the second: ```lean @@ -529,31 +532,30 @@ def doTwice (f : Nat → Nat) (x : Nat) : Nat := #eval doTwice double 2 -- 8 ``` -Now to get a bit more abstract, you can also specify arguments that -are like type parameters: +Now to get a bit more abstract, you can also specify arguments that work as type parameters, +such as `(α β γ : Type)` here: ```lean def compose (α β γ : Type) (g : β → γ) (f : α → β) (x : α) : γ := g (f x) ``` -This means `compose` is a function that takes any two functions as input -arguments, so long as those functions each take only one input. -The type algebra `β → γ` and `α → β` means it is a requirement +This defines `compose` as a function that takes three type arguments (`α β γ : Type`) and two other functions as arguments, +locally named `f` and `g`. Furthermore, +the type constraints `β → γ` and `α → β` require that the type of the output of the second function must match the type of the input to the first function - which makes sense, otherwise the two functions would not be composable. -`compose` also takes a 3rd argument of type `α` which -it uses to invoke the second function (locally named `f`) and it -passes the result of that function (which is type `β`) as input to the -first function (locally named `g`). The first function returns a type -`γ` so that is also the return type of the `compose` function. +`compose` also takes one last argument `x` of type `α`, which +it uses to invoke the second function (`f`). +The result of that function, of type `β`, is passed as the input to the +first function (`g`). 
The first function returns a result of type +`γ`, so that is also the return type of the `compose` function. -`compose` is also very general in that it works over any type -`α β γ`. This means `compose` can compose just about any 2 functions -so long as they each take one parameter, and so long as the type of -output of the second matches the input of the first. For example: +`compose` is also very general in that it works over any types +`α β γ`. This means that it can compose just about any two functions, +as long as the type of the output of the second matches the input of the first. For example: ```lean # def compose (α β γ : Type) (g : β → γ) (f : α → β) (x : α) : γ := @@ -566,6 +568,10 @@ def square (x : Nat) : Nat := #eval compose Nat Nat Nat double square 3 -- 18 ``` +Lean has syntax to make the three type parameters (`α β γ`) implicit, +so that they don't have to be explicitly included each time that `compose` is called --- +see the [Implicit Arguments](#Implicit-Arguments) section below. + Local Definitions ----------------- @@ -733,12 +739,13 @@ open Foo #check Foo.fa ``` -When you declare that you are working in the namespace ``Foo``, every -identifier you declare has a full name with prefix "``Foo.``". Within -the namespace, you can refer to identifiers by their shorter names, -but once you end the namespace, you have to use the longer names. -Unlike `section`, namespaces require a name. There is only one -anonymous namespace at the root level. +Inside the scope of the namespace ``Foo``, every +identifier you declare is given a full name with prefix "``Foo.``". +Within the namespace, you can use the shorter names, +but once you end the namespace, you must use the longer, *fully qualified* names. + +Unlike `section`, namespaces require a name. (There is only one +anonymous namespace: the one at the root level.) The ``open`` command brings the shorter names into the current context. Often, when you import a module, you will want to open one or @@ -935,7 +942,7 @@ Suppose we have an implementation of lists as: #check Lst.append -- Lst.append.{u} (α : Type u) (as bs : Lst α) : Lst α ``` -Then, you can construct lists of `Nat` as follows. +Then, you can construct lists of `Nat` as follows: ```lean # universe u @@ -1076,10 +1083,10 @@ used to specify the desired types of the expressions ``id`` and ``` Numerals are overloaded in Lean, but when the type of a numeral cannot -be inferred, Lean assumes, by default, that it is a natural number. So +be inferred, Lean assumes, by default, that it is a natural number. Thus, the expressions in the first two ``#check`` commands below are -elaborated in the same way, whereas the third ``#check`` command -interprets ``2`` as an integer. +both elaborated as `Nat`, whereas the third ``#check`` command +interprets ``2`` as an `Int`. ```lean #check 2 -- Nat @@ -1102,6 +1109,8 @@ made explicit. #check @id Bool true -- Bool ``` -Notice that now the first ``#check`` command gives the type of the -identifier, ``id``, without inserting any placeholders. Moreover, the -output indicates that the first argument is implicit. +Notice that the ``#check @id`` command now gives the full type of the +identifier, ``id``, without any placeholders. Moreover, the +curly brackets in the +output indicate that the first argument, `α`, is implicit. + diff --git a/induction_and_recursion.md b/induction_and_recursion.md index 22b3ec50..264c55be 100644 --- a/induction_and_recursion.md +++ b/induction_and_recursion.md @@ -328,7 +328,7 @@ any inhabited type. 
We can also use the type ``Option α`` to simulate incomplete patterns. The idea is to return ``some a`` for the provided patterns, and use ``none`` for the incomplete cases. The following example demonstrates -both approaches. +both approaches: ```lean def f1 : Nat → Nat → Nat @@ -688,7 +688,7 @@ if ``x`` has no predecessors, it is accessible. Given any type ``α``, we should be able to assign a value to each accessible element of ``α``, recursively, by assigning values to all its predecessors first. -The statement that ``r`` is well founded, denoted ``WellFounded r``, +The statement that ``r`` is well-founded, denoted ``WellFounded r``, is exactly the statement that every element of the type is accessible. By the above considerations, if ``r`` is a well-founded relation on a type ``α``, we should have a principle of well-founded @@ -707,7 +707,7 @@ noncomputable def f {α : Sort u} There is a long cast of characters here, but the first block we have already seen: the type, ``α``, the relation, ``r``, and the -assumption, ``h``, that ``r`` is well founded. The variable ``C`` +assumption, ``h``, that ``r`` is well-founded. The variable ``C`` represents the motive of the recursive definition: for each element ``x : α``, we would like to construct an element of ``C x``. The function ``F`` provides the inductive recipe for doing that: it tells @@ -715,7 +715,7 @@ us how to construct an element ``C x``, given elements of ``C y`` for each predecessor ``y`` of ``x``. Note that ``WellFounded.fix`` works equally well as an induction -principle. It says that if ``≺`` is well founded and you want to prove +principle. It says that if ``≺`` is well-founded and you want to prove ``∀ x, C x``, it suffices to show that for an arbitrary ``x``, if we have ``∀ y ≺ x, C y``, then we have ``C x``. @@ -810,7 +810,7 @@ def natToBin : Nat → List Nat ``` As a final example, we observe that Ackermann's function can be -defined directly, because it is justified by the well foundedness of +defined directly, because it is justified by the well-foundedness of the lexicographic order on the natural numbers. The `termination_by` clause instructs Lean to use a lexicographic order. This clause is actually mapping the function arguments to elements of type `Nat × Nat`. Then, Lean uses typeclass diff --git a/interacting_with_lean.md b/interacting_with_lean.md index 19fcd00a..30fbfacf 100644 --- a/interacting_with_lean.md +++ b/interacting_with_lean.md @@ -17,8 +17,8 @@ Importing Files --------------- The goal of Lean's front end is to interpret user input, construct -formal expressions, and check that they are well formed and type -correct. Lean also supports the use of various editors, which provide +formal expressions, and check that they are well-formed and type-correct. +Lean also supports the use of various editors, which provide continuous checking and feedback. More information can be found on the Lean [documentation pages](https://lean-lang.org/documentation/). diff --git a/propositions_and_proofs.md b/propositions_and_proofs.md index 18a22de7..c782167a 100644 --- a/propositions_and_proofs.md +++ b/propositions_and_proofs.md @@ -13,9 +13,11 @@ One strategy for proving assertions about objects defined in the language of dependent type theory is to layer an assertion language and a proof language on top of the definition language. 
But there is no reason to multiply languages in this way: dependent type theory is -flexible and expressive, and there is no reason we cannot represent +flexible and expressive, and it turns out that there is no reason we cannot represent assertions and proofs in the same general framework. + + For example, we could introduce a new type, ``Prop``, to represent propositions, and introduce constructors to build new propositions from others. @@ -51,7 +53,7 @@ variable (p q : Prop) In addition to axioms, however, we would also need rules to build new proofs from old ones. For example, in many proof systems for -propositional logic, we have the rule of modus ponens: +propositional logic, we have the rule of *modus ponens*: > From a proof of ``Implies p q`` and a proof of ``p``, we obtain a proof of ``q``. @@ -136,7 +138,7 @@ that even though we can treat proofs ``t : p`` as ordinary objects in the language of dependent type theory, they carry no information beyond the fact that ``p`` is true. -The two ways we have suggested thinking about the +The two ways we have suggested for thinking about the propositions-as-types paradigm differ in a fundamental way. From the constructive point of view, proofs are abstract mathematical objects that are *denoted* by suitable expressions in dependent type @@ -156,7 +158,7 @@ syntax and semantics by saying, at times, that a program "computes" a certain function, and at other times speaking as though the program "is" the function in question. -In any case, all that really matters is the bottom line. To formally +In any case, all that really matters is the bottom line: To formally express a mathematical assertion in the language of dependent type theory, we need to exhibit a term ``p : Prop``. To *prove* that assertion, we need to exhibit a term ``t : p``. Lean's task, as a @@ -191,7 +193,7 @@ true. Note that the ``theorem`` command is really a version of the ``def`` command: under the propositions and types correspondence, proving the theorem ``p → q → p`` is really the same -as defining an element of the associated type. To the kernel type +as defining an element of the associated type. To the Lean kernel type checker, there is no difference between the two. There are a few pragmatic differences between definitions and @@ -202,7 +204,7 @@ theorem is complete, typically we only need to know that the proof exists; it doesn't matter what the proof is. In light of that fact, Lean tags proofs as *irreducible*, which serves as a hint to the parser (more precisely, the *elaborator*) that there is generally no -need to unfold it when processing a file. In fact, Lean is generally +need to unfold them when processing a file. In fact, Lean is generally able to process and check proofs in parallel, since assessing the correctness of one proof does not require knowing the details of another. @@ -260,10 +262,10 @@ axiom hp : p theorem t2 : q → p := t1 hp ``` -Here, the ``axiom`` declaration postulates the existence of an -element of the given type and may compromise logical consistency. For -example, we can use it to postulate the empty type `False` has an -element. +The ``axiom`` declaration postulates the existence of an +element of the given type, and may compromise logical consistency. 
For +example, we can use it to postulate that the empty type `False` has an +element: ```lean axiom unsound : False diff --git a/tactics.md b/tactics.md index 13b8acaa..edc00051 100644 --- a/tactics.md +++ b/tactics.md @@ -52,6 +52,7 @@ wherever a term is expected, Lean allows us to insert instead a ``by separated by semicolons or line breaks. You can prove the theorem above in that way: + ```lean theorem test (p q : Prop) (hp : p) (hq : q) : p ∧ q ∧ p := by @@ -72,12 +74,13 @@ theorem test (p q : Prop) (hp : p) (hq : q) : p ∧ q ∧ p := by exact hq exact hp ``` +(We often put the ``by`` keyword on the preceding line.) -The ``apply`` tactic applies an expression, viewed as denoting a -function with zero or more arguments. It unifies the conclusion with -the expression in the current goal, and creates new goals for the -remaining arguments, provided that no later arguments depend on -them. In the example above, the command ``apply And.intro`` yields two +The ``apply `` tactic applies an expression ``, viewed as a +function of zero or more arguments. It unifies the conclusion (that is, the result of applying `e`) +with the current goal expression, and creates new goals for the +arguments of `e`, provided that no later arguments depend on them. +In the example above, the command ``apply And.intro`` yields two subgoals: ``` @@ -94,9 +97,13 @@ subgoals: ⊢ q ∧ p ``` -The first goal is met with the command ``exact hp``. The ``exact`` -command is just a variant of ``apply`` which signals that the -expression given should fill the goal exactly. It is good form to use +Spelling out this `apply` step: `And.intro` is the rule `a → b → a ∧ b`; unifying its conclusion, `a ∧ b`, +with the current goal `p ∧ q ∧ p`, +gives us `p` for `a`, and `q ∧ p` for `b`; thus, we get `p` and `q ∧ p` as the new subgoals. + +The first of these two new subgoals is discharged with the command ``exact hp``. The ``exact`` +command is just a variant of ``apply`` that requires that the +expression given should match the goal exactly. It is good form to use it in a tactic proof, since its failure signals that something has gone wrong. It is also more robust than ``apply``, since the elaborator takes the expected type, given by the target of the goal, @@ -115,6 +122,15 @@ You can see the resulting proof term with the ``#print`` command: #print test ``` +Lean will print: +``` +∀ (p q : Prop), p → q → p ∧ q ∧ p := + fun p q hp hq => { left := hp, right := { left := hq, right := hp } } +``` + +This is the actual proof term that the tactics constructed. It can now be verified independently +of the tactics and the rest of the Lean machinery if we so choose, by a trusted third-party tool, for example. + You can write a tactic script incrementally. In VS Code, you can open a window to display messages by pressing ``Ctrl-Shift-Enter``, and that window will then show you the current goal whenever the cursor is @@ -135,7 +151,7 @@ theorem test (p q : Prop) (hp : p) (hq : q) : p ∧ q ∧ p := by exact And.intro hq hp ``` -Unsurprisingly, it produces exactly the same proof term. +Unsurprisingly, it produces exactly the same proof term: ```lean # theorem test (p q : Prop) (hp : p) (hq : q) : p ∧ q ∧ p := by @@ -144,20 +160,22 @@ Unsurprisingly, it produces exactly the same proof term. #print test ``` -Multiple tactic applications can be written in a single line by concatenating with a semicolon. 
+Multiple tactic applications can be written in a single line, concatenated with semicolons: ```lean theorem test (p q : Prop) (hp : p) (hq : q) : p ∧ q ∧ p := by apply And.intro hp; exact And.intro hq hp ``` -Tactics that may produce multiple subgoals often tag them. For +Tactics that may produce multiple subgoals often *tag* them. For example, the tactic ``apply And.intro`` tagged the first subgoal as -``left``, and the second as ``right``. In the case of the ``apply`` -tactic, the tags are inferred from the parameters' names used in the -``And.intro`` declaration. You can structure your tactics using the +``left``, and the second as ``right``. In this ``apply`` tactic invocation, +the tags are inferred from the parameters' names used in the +``And.intro`` declaration. + +You can structure your tactics using the notation ``case => ``. The following is a structured -version of our first tactic proof in this chapter. +version of our first tactic proof in this chapter: ```lean theorem test (p q : Prop) (hp : p) (hq : q) : p ∧ q ∧ p := by @@ -169,6 +187,8 @@ theorem test (p q : Prop) (hp : p) (hq : q) : p ∧ q ∧ p := by case right => exact hp ``` + + You can solve the subgoal ``right`` before ``left`` using the ``case`` notation: @@ -189,8 +209,8 @@ block. For simple subgoals, it may not be worth selecting a subgoal using its tag, but you may still want to structure the proof. Lean also provides -the "bullet" notation ``. `` (or ``· ``) for -structuring proof. +the indentation-sensitive "bullet" notation ``. `` (or ``· ``) for +structuring proofs: ```lean theorem test (p q : Prop) (hp : p) (hq : q) : p ∧ q ∧ p := by From 29c52c429bbb4aa1f27bf099d1de6f51d0298cb5 Mon Sep 17 00:00:00 2001 From: Tomas Uribe Date: Fri, 3 May 2024 18:14:50 -0500 Subject: [PATCH 02/10] Clarification/questions --- axioms_and_computation.md | 5 ++++- conv.md | 7 ++++++- tactics.md | 2 +- 3 files changed, 11 insertions(+), 3 deletions(-) diff --git a/axioms_and_computation.md b/axioms_and_computation.md index 1f1abffa..b4ac5d4b 100644 --- a/axioms_and_computation.md +++ b/axioms_and_computation.md @@ -24,7 +24,10 @@ reduce to a numeral. Lean's standard library defines an additional axiom, propositional extensionality, and a quotient construction which in turn implies the -principle of function extensionality. These extensions are used, for +principle of function extensionality. + +These extensions are used, for example, to develop theories of sets and finite sets. We will see below that using these theorems can block evaluation in Lean's kernel, so that closed terms of type ``Nat`` no longer evaluate to numerals. But diff --git a/conv.md b/conv.md index 422b3865..934a0989 100644 --- a/conv.md +++ b/conv.md @@ -14,7 +14,7 @@ As a first example, let us prove example (examples in this file are somewhat artificial since other tactics could finish them immediately). The naive first attempt is to enter tactic mode and try `rw [Nat.mul_comm]`. But this -transforms the goal into `b * c * a = a * (c * b)`, after commuting the +transforms the goal into `b * c * a = a * (c * b)`, because it commutes the very first multiplication appearing in the term. There are several ways to fix this issue, and one way is to use a more precise tool: the conversion mode. The following code block shows the current target @@ -155,8 +155,13 @@ example (g : Nat → Nat) (h₁ : g x = x + 1) (h₂ : x > 0) : g x = f x := by exact h₁ ``` + - `enter [1, x, 2, y]` iterate `arg` and `intro` with the given arguments. 
It is just the macro: + + ``` syntax enterArg := ident <|> group("@"? num) syntax "enter " "[" (colGt enterArg),+ "]": conv diff --git a/tactics.md b/tactics.md index edc00051..0142c887 100644 --- a/tactics.md +++ b/tactics.md @@ -108,7 +108,7 @@ it in a tactic proof, since its failure signals that something has gone wrong. It is also more robust than ``apply``, since the elaborator takes the expected type, given by the target of the goal, into account when processing the expression that is being applied. In -this case, however, ``apply`` would work just as well. +this example, however, ``apply`` would work just as well. You can see the resulting proof term with the ``#print`` command: From 707591f89f851777fab1f96c5080ee4c78ee3060 Mon Sep 17 00:00:00 2001 From: Tomas Uribe Date: Fri, 3 May 2024 19:30:13 -0500 Subject: [PATCH 03/10] Comment about example --- conv.md | 4 ++-- interacting_with_lean.md | 2 ++ 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/conv.md b/conv.md index 934a0989..0087143f 100644 --- a/conv.md +++ b/conv.md @@ -158,9 +158,9 @@ example (g : Nat → Nat) (h₁ : g x = x + 1) (h₂ : x > 0) : g x = f x := by - `enter [1, x, 2, y]` iterate `arg` and `intro` with the given arguments. It is just the macro: - +Adding, say, [1,2,x] as another example would also be useful. --> ``` syntax enterArg := ident <|> group("@"? num) diff --git a/interacting_with_lean.md b/interacting_with_lean.md index 30fbfacf..6ed7a06a 100644 --- a/interacting_with_lean.md +++ b/interacting_with_lean.md @@ -290,6 +290,8 @@ theorem List.isPrefix_self (as : List α) : as ≤ as := ⟨[], by simp⟩ ``` + + That assignment can also be made local: ```lean From 43c6ebef7ea70a47b859a26339d8c978dc0fa6f9 Mon Sep 17 00:00:00 2001 From: Tomas Uribe Date: Sun, 5 May 2024 09:06:18 -0500 Subject: [PATCH 04/10] Suggestion about variable declarations --- axioms_and_computation.md | 5 +---- dependent_type_theory.md | 9 +++++---- propositions_and_proofs.md | 3 +++ 3 files changed, 9 insertions(+), 8 deletions(-) diff --git a/axioms_and_computation.md b/axioms_and_computation.md index b4ac5d4b..1f1abffa 100644 --- a/axioms_and_computation.md +++ b/axioms_and_computation.md @@ -24,10 +24,7 @@ reduce to a numeral. Lean's standard library defines an additional axiom, propositional extensionality, and a quotient construction which in turn implies the -principle of function extensionality. - -These extensions are used, for +principle of function extensionality. These extensions are used, for example, to develop theories of sets and finite sets. We will see below that using these theorems can block evaluation in Lean's kernel, so that closed terms of type ``Nat`` no longer evaluate to numerals. But diff --git a/dependent_type_theory.md b/dependent_type_theory.md index ef308230..a3657a79 100644 --- a/dependent_type_theory.md +++ b/dependent_type_theory.md @@ -355,7 +355,7 @@ You can also pass types as parameters: ```lean #check fun (α β γ : Type) (g : β → γ) (f : α → β) (x : α) => g (f x) ``` -The last expression, for example, denotes the function that takes +This expression, for example, denotes the function that takes three types, ``α``, ``β``, and ``γ``, and two functions, ``g : β → γ`` and ``f : α → β``, and returns the composition of ``g`` and ``f``. 
(Making sense of the type of this function requires an understanding @@ -532,7 +532,8 @@ def doTwice (f : Nat → Nat) (x : Nat) : Nat := #eval doTwice double 2 -- 8 ``` -Now to get a bit more abstract, you can also specify arguments that work as type parameters, +Now to get a bit more abstract, you can also specify arguments that +are like type parameters, such as `(α β γ : Type)` here: ```lean @@ -554,8 +555,8 @@ first function (`g`). The first function returns a result of type `γ`, so that is also the return type of the `compose` function. `compose` is also very general in that it works over any types -`α β γ`. This means that it can compose just about any two functions, -as long as the type of the output of the second matches the input of the first. For example: +`α β γ`. This means `compose` can compose just about any two functions +as long as the output type of the second matches the input type of the first. For example: ```lean # def compose (α β γ : Type) (g : β → γ) (f : α → β) (x : α) : γ := diff --git a/propositions_and_proofs.md b/propositions_and_proofs.md index c782167a..d0cfa1de 100644 --- a/propositions_and_proofs.md +++ b/propositions_and_proofs.md @@ -323,6 +323,9 @@ be written ``∀ (p q : Prop) (hp : p) (hq : q), p``, since the arrow denotes nothing more than an arrow type in which the target does not depend on the bound variable. + + When we generalize ``t1`` in such a way, we can then apply it to different pairs of propositions, to obtain different instances of the general theorem. From fe6a4fa508ea3cecfb6d879ff77fa6d150b7ee25 Mon Sep 17 00:00:00 2001 From: Tomas Uribe Date: Sun, 5 May 2024 21:39:02 -0500 Subject: [PATCH 05/10] Spell out the second subgoal discharge --- tactics.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/tactics.md b/tactics.md index 0142c887..1f5b7d04 100644 --- a/tactics.md +++ b/tactics.md @@ -110,6 +110,8 @@ elaborator takes the expected type, given by the target of the goal, into account when processing the expression that is being applied. In this example, however, ``apply`` would work just as well. +The second subgoal, which requires showing `q ∧ p`, is discharged by `apply And.intro; exact hq; exact hp`. + You can see the resulting proof term with the ``#print`` command: ```lean From 2014dcf8b017ee1ee4e7514842f551a3f21f3994 Mon Sep 17 00:00:00 2001 From: Tomas Uribe Date: Sun, 5 May 2024 22:07:31 -0500 Subject: [PATCH 06/10] Revert some typo fixes to better separate PRs --- axioms_and_computation.md | 2 +- dependent_type_theory.md | 4 ++-- induction_and_recursion.md | 10 +++++----- propositions_and_proofs.md | 2 +- 4 files changed, 9 insertions(+), 9 deletions(-) diff --git a/axioms_and_computation.md b/axioms_and_computation.md index 1f1abffa..35a96097 100644 --- a/axioms_and_computation.md +++ b/axioms_and_computation.md @@ -341,7 +341,7 @@ In a sense, however, a cast does not change the meaning of an expression. Rather, it is a mechanism to reason about the expression's type. Given an appropriate semantics, it then makes sense to reduce terms in ways that preserve their meaning, ignoring the intermediate -bookkeeping needed to make the reductions type-correct. In that case, +bookkeeping needed to make the reductions type correct. In that case, adding new axioms in ``Prop`` does not matter; by proof irrelevance, an expression in ``Prop`` carries no information, and can be safely ignored by the reduction procedures. 
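
A minimal sketch of the proof-irrelevance point above, assuming only Lean 4's usual treatment of ``Prop`` (any two proofs of the same proposition are definitionally equal):

```lean
-- Any two proofs of the same proposition are definitionally equal,
-- so `rfl` closes the goal: proof terms in `Prop` carry no extra information,
-- which is why reduction can safely ignore which proof appears.
example (p : Prop) (h₁ h₂ : p) : h₁ = h₂ := rfl
```
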
diff --git a/dependent_type_theory.md b/dependent_type_theory.md index a3657a79..ff812f21 100644 --- a/dependent_type_theory.md +++ b/dependent_type_theory.md @@ -275,7 +275,7 @@ def F (α : Type u) : Type u := Prod α α #check F -- Type u → Type u ``` -You can avoid the `universe` command by providing the universe parameters when defining `F`. +You can avoid the universe command by providing the universe parameters when defining F. ```lean def F.{u} (α : Type u) : Type u := Prod α α @@ -943,7 +943,7 @@ Suppose we have an implementation of lists as: #check Lst.append -- Lst.append.{u} (α : Type u) (as bs : Lst α) : Lst α ``` -Then, you can construct lists of `Nat` as follows: +Then, you can construct lists of `Nat` as follows. ```lean # universe u diff --git a/induction_and_recursion.md b/induction_and_recursion.md index 264c55be..22b3ec50 100644 --- a/induction_and_recursion.md +++ b/induction_and_recursion.md @@ -328,7 +328,7 @@ any inhabited type. We can also use the type ``Option α`` to simulate incomplete patterns. The idea is to return ``some a`` for the provided patterns, and use ``none`` for the incomplete cases. The following example demonstrates -both approaches: +both approaches. ```lean def f1 : Nat → Nat → Nat @@ -688,7 +688,7 @@ if ``x`` has no predecessors, it is accessible. Given any type ``α``, we should be able to assign a value to each accessible element of ``α``, recursively, by assigning values to all its predecessors first. -The statement that ``r`` is well-founded, denoted ``WellFounded r``, +The statement that ``r`` is well founded, denoted ``WellFounded r``, is exactly the statement that every element of the type is accessible. By the above considerations, if ``r`` is a well-founded relation on a type ``α``, we should have a principle of well-founded @@ -707,7 +707,7 @@ noncomputable def f {α : Sort u} There is a long cast of characters here, but the first block we have already seen: the type, ``α``, the relation, ``r``, and the -assumption, ``h``, that ``r`` is well-founded. The variable ``C`` +assumption, ``h``, that ``r`` is well founded. The variable ``C`` represents the motive of the recursive definition: for each element ``x : α``, we would like to construct an element of ``C x``. The function ``F`` provides the inductive recipe for doing that: it tells @@ -715,7 +715,7 @@ us how to construct an element ``C x``, given elements of ``C y`` for each predecessor ``y`` of ``x``. Note that ``WellFounded.fix`` works equally well as an induction -principle. It says that if ``≺`` is well-founded and you want to prove +principle. It says that if ``≺`` is well founded and you want to prove ``∀ x, C x``, it suffices to show that for an arbitrary ``x``, if we have ``∀ y ≺ x, C y``, then we have ``C x``. @@ -810,7 +810,7 @@ def natToBin : Nat → List Nat ``` As a final example, we observe that Ackermann's function can be -defined directly, because it is justified by the well-foundedness of +defined directly, because it is justified by the well foundedness of the lexicographic order on the natural numbers. The `termination_by` clause instructs Lean to use a lexicographic order. This clause is actually mapping the function arguments to elements of type `Nat × Nat`. 
Then, Lean uses typeclass diff --git a/propositions_and_proofs.md b/propositions_and_proofs.md index d0cfa1de..46ded3b1 100644 --- a/propositions_and_proofs.md +++ b/propositions_and_proofs.md @@ -53,7 +53,7 @@ variable (p q : Prop) In addition to axioms, however, we would also need rules to build new proofs from old ones. For example, in many proof systems for -propositional logic, we have the rule of *modus ponens*: +propositional logic, we have the rule of modus ponens: > From a proof of ``Implies p q`` and a proof of ``p``, we obtain a proof of ``q``. From e4de0a257212c610a1f09714b003bd901be0829f Mon Sep 17 00:00:00 2001 From: Tomas Uribe Date: Sun, 5 May 2024 22:19:01 -0500 Subject: [PATCH 07/10] Rewording --- propositions_and_proofs.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/propositions_and_proofs.md b/propositions_and_proofs.md index 46ded3b1..49e1c678 100644 --- a/propositions_and_proofs.md +++ b/propositions_and_proofs.md @@ -13,14 +13,14 @@ One strategy for proving assertions about objects defined in the language of dependent type theory is to layer an assertion language and a proof language on top of the definition language. But there is no reason to multiply languages in this way: dependent type theory is -flexible and expressive, and it turns out that there is no reason we cannot represent +flexible and expressive, and it turns out that we can represent assertions and proofs in the same general framework. -For example, we could introduce a new type, ``Prop``, to represent +For example, we can introduce a new type, ``Prop``, to represent propositions, and introduce constructors to build new propositions -from others. +from others: ```lean # def Implies (p q : Prop) : Prop := p → q From 25c04a7bd234fffdd5576b1bd96d790f21e19fbd Mon Sep 17 00:00:00 2001 From: Tomas Uribe Date: Wed, 8 May 2024 09:11:34 -0500 Subject: [PATCH 08/10] Try to clarify a bit more. --- propositions_and_proofs.md | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/propositions_and_proofs.md b/propositions_and_proofs.md index 49e1c678..47b732f9 100644 --- a/propositions_and_proofs.md +++ b/propositions_and_proofs.md @@ -16,8 +16,6 @@ no reason to multiply languages in this way: dependent type theory is flexible and expressive, and it turns out that we can represent assertions and proofs in the same general framework. - - For example, we can introduce a new type, ``Prop``, to represent propositions, and introduce constructors to build new propositions from others: @@ -51,9 +49,9 @@ variable (p q : Prop) #check and_comm p q -- Proof (Implies (And p q) (And q p)) ``` -In addition to axioms, however, we would also need rules to build new +In addition to axioms, however, we also need rules to build new proofs from old ones. For example, in many proof systems for -propositional logic, we have the rule of modus ponens: +propositional logic, we have the rule of *modus ponens*: > From a proof of ``Implies p q`` and a proof of ``p``, we obtain a proof of ``q``. @@ -79,7 +77,8 @@ We could render this as follows: axiom implies_intro : (p q : Prop) → (Proof p → Proof q) → Proof (Implies p q) ``` -This approach would provide us with a reasonable way of building assertions and proofs. +This approach would provide us with a reasonable way of building assertions and proofs, +staying inside the framework of dependent type theory. 
Determining that an expression ``t`` is a correct proof of assertion ``p`` would then simply be a matter of checking that ``t`` has type ``Proof p``. From dc5cf38bb4e9dd0a99ddbd21107d8b8e612ecc7a Mon Sep 17 00:00:00 2001 From: Tomas Uribe Date: Mon, 27 May 2024 15:22:49 -0500 Subject: [PATCH 09/10] Merge fix --- conv.md | 5 ----- propositions_and_proofs.md | 4 ---- tactics.md | 4 ---- 3 files changed, 13 deletions(-) diff --git a/conv.md b/conv.md index e99c3d32..959a7dc6 100644 --- a/conv.md +++ b/conv.md @@ -35,13 +35,8 @@ example (a b c : Nat) : a * (b * c) = a * (c * b) := by The above snippet shows three navigation commands: -<<<<<<< HEAD -- `lhs` navigates to the left-hand side of a relation (here equality). -There is also a `rhs` to navigate to the right-hand side. -======= - `lhs` navigates to the left-hand side of a relation (equality, in this case). There is also a `rhs` to navigate to the right-hand side. ->>>>>>> master - `congr` creates as many targets as there are (nondependent and explicit) arguments to the current head function (here the head function is multiplication). - `rfl` closes target using reflexivity. diff --git a/propositions_and_proofs.md b/propositions_and_proofs.md index 24e978e5..2d885423 100644 --- a/propositions_and_proofs.md +++ b/propositions_and_proofs.md @@ -262,11 +262,7 @@ theorem t2 : q → p := t1 hp ``` The ``axiom`` declaration postulates the existence of an -<<<<<<< HEAD element of the given type, and may compromise logical consistency. For -======= -element of the given type and may compromise logical consistency. For ->>>>>>> master example, we can use it to postulate that the empty type `False` has an element: diff --git a/tactics.md b/tactics.md index 4b12d630..94bc91d6 100644 --- a/tactics.md +++ b/tactics.md @@ -211,11 +211,7 @@ block. For simple subgoals, it may not be worth selecting a subgoal using its tag, but you may still want to structure the proof. Lean also provides -<<<<<<< HEAD the indentation-sensitive "bullet" notation ``. `` (or ``· ``) for -======= -the "bullet" notation ``. `` (or ``· ``) for ->>>>>>> master structuring proofs: ```lean From 8fe6c7fbb7b7157ae976aca8bee43ecb17de1abc Mon Sep 17 00:00:00 2001 From: Tomas Uribe Date: Mon, 3 Jun 2024 18:38:30 -0500 Subject: [PATCH 10/10] Finesse the "by" issue, remove commented text. --- tactics.md | 16 +--------------- 1 file changed, 1 insertion(+), 15 deletions(-) diff --git a/tactics.md b/tactics.md index 94bc91d6..743acc79 100644 --- a/tactics.md +++ b/tactics.md @@ -52,20 +52,6 @@ wherever a term is expected, Lean allows us to insert instead a ``by separated by semicolons or line breaks. You can prove the theorem above in that way: - - ```lean theorem test (p q : Prop) (hp : p) (hq : q) : p ∧ q ∧ p := by apply And.intro @@ -74,7 +60,7 @@ theorem test (p q : Prop) (hp : p) (hq : q) : p ∧ q ∧ p := by exact hq exact hp ``` -(We often put the ``by`` keyword on the preceding line.) +(We often put the ``by`` keyword on the same line as the `:=`.) The ``apply `` tactic applies an expression ``, viewed as a function of zero or more arguments. It unifies the conclusion (that is, the result of applying `e`)