Deduction
(Content adapted from Critchlow & Eck)
Arguments
Logic can be applied to draw conclusions from a set of premises. A premise is just a proposition that is known to be true or that has been accepted to be true for the sake of argument, and a conclusion is a proposition that can be deduced logically from the premises. The idea is that if you believe that the premises are true, then logic forces you to accept that the conclusion is true. An "argument" is a claim that a certain conclusion follows from a given set of premises. Here is an argument laid out in a traditional format:

$$\begin{array}{l} \text{If today is Tuesday, then this is Belgium} \\ \text{Today is Tuesday} \\ \hline \therefore\ \text{This is Belgium} \end{array}$$
The premises of the argument are shown above the line, and the conclusion below. The symbol $\therefore$ is read "therefore." The claim is that the conclusion, "This is Belgium," can be deduced logically from the two premises, "If today is Tuesday, then this is Belgium" and "Today is Tuesday." In fact, this claim is true. Logic forces you to accept this argument. Why is that?
Let $p$ stand for the proposition "Today is Tuesday," and let $q$ stand for the proposition "This is Belgium." Then the above argument has the form

$$\begin{array}{l} p \rightarrow q \\ p \\ \hline \therefore\ q \end{array}$$
Now, for any propositions $p$ and $q$—not just the ones in this particular argument—if $p \rightarrow q$ is true and $p$ is true, then $q$ must also be true. This is easy to check in a truth table:
| $p$ | $q$ | $p \rightarrow q$ |
| --- | --- | --- |
| false | false | true |
| false | true | true |
| true | false | false |
| true | true | true |
The only case where both $p \rightarrow q$ and $p$ are true is on the last line of the table, and in this case, $q$ is also true. If you believe that $p \rightarrow q$ and $p$ are true, you have no logical choice but to believe $q$. This applies no matter what $p$ and $q$ represent. For example, if you believe "If Jill is breathing, then Jill pays taxes," and you believe that "Jill is breathing," logic forces you to believe that "Jill pays taxes." Note that we can't say for sure that the conclusion is true, only that if the premises are true, then the conclusion must be true.
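Because the check is just an enumeration of finitely many cases, it is easy to automate. Here is a small Scala sketch (our own illustration, with made-up names, not part of the formal development) that confirms that $((p \rightarrow q) \land p) \rightarrow q$ comes out true under every assignment of truth values:

```scala
// Exhaustively check that ((p -> q) /\ p) -> q is a tautology.
object CheckModusPonens {
  // Truth-table meaning of the implication a -> b.
  def implies(a: Boolean, b: Boolean): Boolean = !a || b

  def main(args: Array[String]): Unit = {
    val cases = for {
      p <- List(false, true)
      q <- List(false, true)
    } yield implies(implies(p, q) && p, q)
    // Prints "tautology", since the formula is true in all four cases.
    println(if (cases.forall(identity)) "tautology" else "not a tautology")
  }
}
```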
This fact can be rephrased by saying that $((p \rightarrow q) \land p) \rightarrow q$ is a tautology. More generally, for any compound propositions $P$ and $Q$, saying "$P \rightarrow Q$ is a tautology" is the same as saying that "in all cases where $P$ is true, $Q$ is also true".1 We will use the notation $P \vdash Q$ to mean that $P \rightarrow Q$ is a tautology. Think of $P$ as being the premise of an argument. To say $P \vdash Q$ is to say that $Q$ follows logically from $P$. We will use the same notation in both propositional logic and predicate logic. (Note that the relation of $\vdash$ to $\rightarrow$ is the same as the relation of $\equiv$ to $\leftrightarrow$.)
Let $P$ and $Q$ be any formulas in either propositional logic or predicate logic. The notation $P \vdash Q$ is used to mean that $P \rightarrow Q$ is a tautology. That is, in all cases where $P$ is true, $Q$ is also true. We then say that $Q$ can be logically deduced from $P$ or that $P$ logically implies $Q$.
More generally, if $P_1, P_2, \ldots, P_n$ and $Q$ are formulas, then we say that $Q$ can be logically deduced from the premises $P_1$ through $P_n$, and write $P_1, P_2, \ldots, P_n \vdash Q$, if $(P_1 \land P_2 \land \cdots \land P_n) \rightarrow Q$ is a tautology.
An argument in which the conclusion follows logically from the premises is said to be a valid argument. To test whether an argument is valid, you have to replace the particular propositions or predicates that it contains with variables, and then test whether the conjunction of the premises logically implies the conclusion. We have seen that any argument of the form

$$\begin{array}{l} p \rightarrow q \\ p \\ \hline \therefore\ q \end{array}$$
is valid, since $((p \rightarrow q) \land p) \rightarrow q$ is a tautology. This rule of deduction is called modus ponens. It plays a central role in logic. Another, closely related rule is modus tollens, which applies to arguments of the form

$$\begin{array}{l} p \rightarrow q \\ \lnot q \\ \hline \therefore\ \lnot p \end{array}$$
To verify that this is a valid argument, just check that $p \rightarrow q,\ \lnot q \vdash \lnot p$, that is, that $((p \rightarrow q) \land \lnot q) \rightarrow \lnot p$ is a tautology. As an example, the following argument has the form of modus tollens and is therefore a valid argument:

$$\begin{array}{l} \text{If Keanu Reeves is a good actor, then I'm the king of France} \\ \text{I am not the king of France} \\ \hline \therefore\ \text{Keanu Reeves is not a good actor} \end{array}$$
You should note carefully that the validity of this argument has nothing to do with whether or not Keanu Reeves can act well. The argument forces you to accept the conclusion only if you accept the premises. You can logically believe that the conclusion is false, as long as you believe that at least one of the premises is false.
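The same kind of exhaustive check works for modus tollens. Here is a sketch (again just an illustration in Scala, not the text's notation):

```scala
// Exhaustively check that ((p -> q) /\ !q) -> !p is a tautology.
object CheckModusTollens {
  def implies(a: Boolean, b: Boolean): Boolean = !a || b

  def main(args: Array[String]): Unit = {
    val bools = List(false, true)
    val isTautology =
      bools.forall(p => bools.forall(q => implies(implies(p, q) && !q, !p)))
    println(isTautology) // expect true
  }
}
```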
Another named rule of deduction is the Law of Syllogism, which has the form

$$\begin{array}{l} p \rightarrow q \\ q \rightarrow r \\ \hline \therefore\ p \rightarrow r \end{array}$$
For example:
There are many other possible rules that have been identified by logicians over the centuries. Below we will look at one particular set of rules known as natural deduction that may be used to derive all of the others in a fairly systematic way.
Logical deduction is related to logical equivalence. We defined $P$ and $Q$ to be logically equivalent if $P \leftrightarrow Q$ is a tautology. Since $P \leftrightarrow Q$ is equivalent to $(P \rightarrow Q) \land (Q \rightarrow P)$, we see that $P \equiv Q$ if and only if both $P \vdash Q$ and $Q \vdash P$. Thus, we can show that two statements are logically equivalent if we can show that each of them can be logically deduced from the other.
Natural Deduction
In general, arguments are more complicated than those we've considered so far. Here, for example, is an argument that has five premises:
Is this argument valid? Of course, you could use a truth table to check whether the conjunction of the premises logically implies the conclusion. But with five propositional variables, the table would have 32 lines, and the size of the table grows quickly when more propositional variables are used. So, in general, truth tables are not practical.
Fortunately, there is another way to proceed, based on the fact that it is possible to chain several logical deductions together. That is, if $P \vdash Q$ and $Q \vdash R$, it follows that $P \vdash R$. This means we can demonstrate the validity of an argument by deducing the conclusion from the premises in a sequence of steps. These steps can be presented in the form of a proof:
A formal proof that an argument is valid consists of a sequence of propositions such that the last proposition in the sequence is the conclusion of the argument, and every proposition in the sequence is either a premise of the argument or follows by logical deduction from propositions that precede it in the list.
The deduction rules, the primitive argument forms that we will chain together into complete proofs, are described in more detail below. One of the characteristics of natural deduction is that there are rules associated with each logical operator that determine how to either introduce or eliminate the operator. This can provide a great deal of guidance when writing a proof, because as you fill in the steps you can look at the main operators of the current goal(s) and the available premises—either you will work backwards from a goal by use of an introduction rule or you will work forwards from a premise by use of an elimination rule.
The existence of such a proof shows that the conclusion follows logically from the premises, and therefore that the argument is valid. Here is a formal proof that the argument given above is valid. The propositions in the proof are labeled, and each proposition has a justification.
Once a formal proof has been constructed, it is convincing. Unfortunately, it's not necessarily easy to come up with the proof. Usually, the best method is a combination of working forward ("Here's what I know, what can I deduce from that?") and working backwards ("Here's what I need to prove, what other things would imply that?"). For this proof, I might have thought: I want to prove . I know that implies , so if I can prove , I'm OK. But to prove , it'll be enough to prove and separately….
Natural Deduction Rules
As mentioned above, the particular set of primitive argument forms that we will use are grouped into introduction and elimination rules for each of the logical operators. Here we will explain each deduction rule, justify it, and show examples of how they may be used in proofs. At the end of this section is a table summarizing all of the rules.
Conjunction
The conjunction $p \land q$ is true when both $p$ and $q$ are true. Therefore, to introduce the conjunction $p \land q$ in a proof we need to first establish the premises $p$ and $q$ individually:
The labels on the premises are shown here to allow us to refer to them in the "justification" attached to the conclusion, which records that the conclusion follows by AND introduction from the two labeled premises. Here is an example of how this rule might be used in a proof:
The elimination rules for conjunction allow us to move from the premise $p \land q$ to either the conclusion $p$ or the conclusion $q$:
Here is a proof that combines the introduction and elimination rules for conjunction to prove $p \land q \vdash q \land p$, i.e., that AND is commutative:
Note that we used the second elimination rule before the first, to extract respectively the second and then the first term of the conjunction. Here is an equivalent proof using the rules in the other order:
In the justification for the introduction step, we specify which term of the conjunction is used first in the conclusion. Although we have just proved that conjunction is commutative, it is important for a careful proof to keep track of this sort of distinction.
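Looking ahead to the programming analogy developed in the next subsection, one way to picture these rules is to model each proposition as a Scala type and a proof as a value of that type; a proof of a conjunction is then simply a pair. This is only an illustrative sketch, and the names are ours:

```scala
// Conjunction as a pair type: introduction builds a pair, elimination projects from it.
object AndRules {
  // AND introduction: from proofs of P and Q, build a proof of "P and Q".
  def andIntro[P, Q](p: P, q: Q): (P, Q) = (p, q)

  // AND elimination: from a proof of "P and Q", extract either side.
  def andElimLeft[P, Q](pq: (P, Q)): P = pq._1
  def andElimRight[P, Q](pq: (P, Q)): Q = pq._2

  // The commutativity argument above, written as a little program:
  // extract the second component, extract the first, then pair them in the new order.
  def andCommutes[P, Q](pq: (P, Q)): (Q, P) =
    andIntro(andElimRight(pq), andElimLeft(pq))
}
```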
Implication
The implication $p \rightarrow q$ says that whenever $p$ is true, $q$ must also be true. To introduce an implication in a proof, we will temporarily assume $p$ and show that, based on that assumption, we are then able to conclude $q$. As a deduction rule, this argument will take the form of a nested proof, analogous to a nested block of code that may introduce temporary local variables, or a function definition that may use parameters. The notation we will use for this is inspired by the notation for an anonymous function block in languages such as JavaScript, Scala, and ReasonML:
The curly braces around the nested proof (which is also indented for clarity) emphasize that the temporary assumption that $p$ is true is only available within that section of the proof. No conclusion within the braces should be referred to from outside that section (just as you cannot access local variables from a block or function definition when programming); at the end of the subproof, we extract the last conclusion ($q$), but only in the form of the implication $p \rightarrow q$—if $p$ is true, then $q$ is also true.
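Pursuing that analogy in Scala (a sketch with our own names, under the reading where a proposition is a type and a proof is a value), an implication $p \rightarrow q$ becomes a function type, and the nested subproof is exactly the body of an anonymous function that takes the temporary assumption as its parameter:

```scala
// Implication introduction as an anonymous function: assume P (the parameter),
// then produce Q (the body).
object ImplicationIntro {
  // Proof of p -> p: assume p, conclude p.
  def selfImplication[P]: P => P = p => p

  // Proof of p -> (q -> p): assume p, then assume q, then conclude the earlier p.
  def weakening[P, Q]: P => (Q => P) = p => q => p
}
```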
Here is an example of a proof of a tautology (since a tautology may be viewed as an argument with no premises):
Note that when a justification outside the braces refers to the subproof, it means the entire subproof. By contrast, references inside the braces may be to the temporary premise itself. That premise is only available within the braces, as are each of the conclusions that rely on that temporary assumption.
The analogy with defining a function taking the hypothesis as a parameter and returning the conclusion is not accidental. The elimination rule, which you may recognize as our old friend modus ponens, is very much like function application: if we have a proof of $p \rightarrow q$, and we can also supply an argument that $p$ is true, then we may conclude that $q$ is true. Just as a function body describes how to take an argument passed in through its parameter and compute a result, the subproof that establishes $p \rightarrow q$ tells us how to take an argument (!) for $p$ and produce the extra steps needed to conclude $q$.
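In the same sketch, implication elimination really is function application, and the Law of Syllogism from earlier is just function composition:

```scala
// Implication elimination (modus ponens) as function application.
object ImplicationElim {
  // Given a proof of p -> q and a proof of p, apply one to the other to get q.
  def modusPonens[P, Q](pq: P => Q, p: P): Q = pq(p)

  // The Law of Syllogism: compose the two implications.
  def syllogism[P, Q, R](pq: P => Q, qr: Q => R): P => R = p => qr(pq(p))
}
```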
Here is a proof of the argument :
Disjunction
To prove the disjunction $p \lor q$, it is enough to prove either $p$ or $q$ alone. This leads to two obvious introduction rules:
Here are two distinct proofs of the argument $p \land q \vdash p \lor q$:
Proof 1:
Proof 2:
Although they have the same premises and conclusions, these two proofs are giving fundamentally different reasons why the conclusion follows from the premise. Note that in the introduction rules for disjunction, one of the terms in the disjunction appears "from nowhere". The argument in proof 1 could equally well conclude $p \lor r$ from the premise $p \land q$, where $r$ could be anything; however, the argument in proof 2 would allow us to conclude $r \lor q$ instead for any proposition $r$.
This peculiar behavior of disjunction extends to the elimination rule. Whereas the introduction rules appear to be duals of the elimination rules for conjunction, the elimination rule for disjunction is significantly more complicated than just a dual of the introduction rule for conjunction.2 What we have is essentially a case analysis—to eliminate an OR, we need to conduct two subproofs (just as in the implication introduction rule), one for each possible case. Here is the rule:
In words, this says that we have our disjunction, $p \lor q$, plus two nested subproofs (as always, in an actual proof, these parts may be laid out in any order, with other parts of the proof in between; however, for readability it is suggested that the proof be structured just as shown). One subproof concludes some proposition $r$ from the additional premise $p$, while the other subproof concludes the same $r$ from the alternate premise $q$. Since we know that either $p$ or $q$ is true at this point in the proof, we are able to conclude $r$ regardless of which it is.
Here is a proof that OR is commutative ($p \lor q \vdash q \lor p$):
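In the same programming reading (still just an illustrative sketch with our own names), a proof of $p \lor q$ can be modeled as a Scala `Either[P, Q]`: the two introduction rules are the `Left` and `Right` constructors, and OR elimination is a case analysis by pattern matching. The commutativity argument then looks like this:

```scala
// Disjunction as Either: introduction wraps a proof on one side,
// elimination does a case analysis.
object OrRules {
  def orIntroLeft[P, Q](p: P): Either[P, Q] = Left(p)
  def orIntroRight[P, Q](q: Q): Either[P, Q] = Right(q)

  // OR elimination: given p \/ q plus a way to get R from p and a way to get R from q,
  // conclude R no matter which case we are in.
  def orElim[P, Q, R](pq: Either[P, Q], fromP: P => R, fromQ: Q => R): R =
    pq match {
      case Left(p)  => fromP(p)
      case Right(q) => fromQ(q)
    }

  // Commutativity of OR: in each case, re-introduce the disjunction the other way around.
  def orCommutes[P, Q](pq: Either[P, Q]): Either[Q, P] =
    orElim(pq, (p: P) => orIntroRight[Q, P](p), (q: Q) => orIntroLeft[Q, P](q))
}
```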
True and False
We may think of $T$ as a conjunction of zero things: it is true whenever all of those (zero) things are true, i.e., it is always true. Compare this with taking the sum of an empty set of numbers: the result is 0, which is the identity for $+$, just as $T$ is the identity for $\land$. Using this analogy, we get one introduction rule for $T$ (with zero premises) and zero elimination rules:
In words, we may conclude $T$ at any time with no premises. This is not generally useful, but we include it for completeness.
Similarly, we may think of $F$ as a disjunction of zero things, noting as above that $F$ is the identity for $\lor$. It is false unless at least one of those zero things is true…. This suggests that we get zero introduction rules and one elimination rule, which just has the premise $F$ and zero nested subproofs:
That is, if we have a proof of $F$, then we can produce a proof of any arbitrary proposition $p$! Logicians like to refer to this as ex falso quodlibet, "from falsehood, anything." If your premises are consistent, you should never be able to prove $F$ at the top level of a proof; if you could do that, then you could use this rule to prove anything whatsoever. This rule is useful in nested proofs (for example in disjunction elimination, doing a case analysis), where if a temporary assumption leads to a contradiction then we can conclude anything in that subproof, secure in the belief that that assumption will never actually be true.
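Continuing the sketch, $T$ corresponds to Scala's `Unit` type (there is always a trivial proof, the value `()`), and $F$ corresponds to `Nothing`, a type with no values at all; ex falso quodlibet is then a function that can never actually be called:

```scala
// True as Unit, False as Nothing (an assumption of this illustrative encoding).
object TrueAndFalse {
  // T introduction: the trivial proof is always available.
  def trueIntro: Unit = ()

  // F elimination ("ex falso quodlibet"): from a proof of false, conclude anything.
  // This type-checks because Nothing is a subtype of every type, and it can never
  // be invoked, because no value of type Nothing exists.
  def exFalso[A](contradiction: Nothing): A = contradiction
}
```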
Here is an example, where we validate the common argument that if we know that either $p$ or $q$ is true, and we know that $p$ implies false, then $q$ must actually be true:
Note that one of the steps is justified by simply copying a proposition established earlier; this will be discussed further in the Miscellaneous section below.
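Under the same reading, the argument just validated becomes a short program: treat the premise "$p$ implies false" as a function from $P$ to `Nothing`, and do a case analysis on the disjunction (a sketch, with illustrative names):

```scala
// From p \/ q and p -> false, conclude q.
object EitherWithContradiction {
  def concludeQ[P, Q](pq: Either[P, Q], pLeadsToFalse: P => Nothing): Q =
    pq match {
      case Left(p)  => pLeadsToFalse(p) // contradictory case: ex falso gives us Q
      case Right(q) => q                // here we just copy the proposition we already have
    }
}
```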
Negation
Since the laws of Boolean algebra tell us that $\lnot p \equiv (p \rightarrow F)$, we could simply derive the rules for negation from those for implication, specialized to the conclusion $F$. However, it is convenient to have rules explicitly to deal with negation. There is also one additional rule for negation that does not fit the pattern of the rest of the rules (see below).
Accordingly, here is the introduction rule for negation:
In words, if we give a nested subproof that arrives at a contradiction (i.e., a proof of $F$) from the assumption that $p$ is true, then $p$ must actually be false.
Similarly, here is the elimination rule for negation:
That is, one way to demonstrate a contradiction is to have proofs of both $p$ and $\lnot p$. Compare these rules to the introduction and elimination rules for implication, and confirm that they are just the special case of the rules for $\rightarrow$ where the conclusion is $F$.
Using these, here is a proof of one direction of the equivalence between $p \rightarrow q$ and its contrapositive $\lnot q \rightarrow \lnot p$:
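In the programming sketch, where $\lnot r$ is read as "$r$ leads to a contradiction" (a function from $R$ to `Nothing`), this direction of the equivalence corresponds to a small function:

```scala
// From a proof of p -> q, produce a proof of !q -> !p.
object ContrapositiveSketch {
  type Not[R] = R => Nothing // negation as "leads to a contradiction"

  def contrapositive[P, Q](pq: P => Q): Not[Q] => Not[P] =
    notQ => (p => notQ(pq(p))) // assume !q and p, derive q, and reach the contradiction
}
```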
If you try to prove the other direction of this equivalence, you will have a surprisingly difficult time. In fact, it is possible to show that there is no proof of the argument $\lnot q \rightarrow \lnot p \vdash p \rightarrow q$ using the rules seen so far.3 The closest you will be able to get starting from the premise $\lnot q \rightarrow \lnot p$ is to conclude $p \rightarrow \lnot\lnot q$:
Although you may be tempted to just erase the double negation, in a formal proof you need to justify every step, and it turns out that we do not have any way yet to prove $q$ from $\lnot\lnot q$! Therefore, the very last rule we will add (apart from wrapping up some loose ends in the next section) is the rule of double negation elimination:
With this additional rule, we may finish the proof of the equivalence of the contrapositive:
Miscellaneous
As mentioned above, it is sometimes useful in a proof to repeat a proposition from earlier (always remembering that we do not have access to propositions from nested proofs from the outside). This leads to the trivial rule
The justification for the copied proposition is simply the label of the previous line where it was established.
Here is an example of using this to prove the tautology :
As another convenience, once we have proven the validity of some argument, we may reuse that proof in future proofs as if it were another deduction rule:
Instead of citing the argument like that in the justification, it is common to give names to useful results (such as the modus tollens and syllogism arguments discussed earlier). Also note that we may perform any consistent substitution for the propositional variables in the argument.
As an example, here is a use of the modus tollens law, $p \rightarrow q,\ \lnot q \vdash \lnot p$, to prove an extended version:
Given a proof of the law of syllogism, $p \rightarrow q,\ q \rightarrow r \vdash p \rightarrow r$, we could also prove the above as follows:
In programming terms, using an already proven theorem like this is analogous to calling an already written function out of a library.
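In the same sketch, reusing a proved theorem is literally calling a function defined earlier. For instance, assuming a `contrapositive` function like the one sketched in the negation section is in our "library", modus tollens and a chained, extended version of it are one-liners (the names here are illustrative):

```scala
// Reusing an already proven result as a library call.
object TheoremReuse {
  type Not[R] = R => Nothing

  // Previously proven "library" theorem: from p -> q we get !q -> !p.
  def contrapositive[P, Q](pq: P => Q): Not[Q] => Not[P] =
    notQ => (p => notQ(pq(p)))

  // Modus tollens, by reusing the library theorem.
  def modusTollens[P, Q](pq: P => Q, notQ: Not[Q]): Not[P] =
    contrapositive(pq)(notQ)

  // An extended version: chain p -> q and q -> r first, then reuse modus tollens.
  def extendedModusTollens[P, Q, R](pq: P => Q, qr: Q => R, notR: Not[R]): Not[P] =
    modusTollens[P, R](p => qr(pq(p)), notR)
}
```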
Summary of Natural Deduction Rules
Conjunction:
Implication, Reference:
Disjunction:
True, False, Theorem:
Negation:
Invalid Arguments
Of course, not every argument is valid, so the question also arises, how can we show that an argument is invalid? Let's assume that the argument has been put into general form, with all the specific propositions replaced by propositional variables. The argument is valid if in all cases where all the premises are true, the conclusion is also true. The argument is invalid if there is even one case where all the premises are true and the conclusion is false. We can prove that an argument is invalid by finding an assignment of truth values to the propositional variables which makes all the premises true but makes the conclusion false. For example, consider an argument of the form:
There is an assignment of truth values to the variables (two of them false and one true) under which the three premises of this argument are all true but the conclusion is false. This shows that the argument is invalid.
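Searching for such a falsifying assignment is also easy to mechanize. As an illustration, the Scala sketch below (our own example) tests the invalid form "$p \rightarrow q$ and $q$, therefore $p$," which reappears in the exercises:

```scala
// Brute-force search for assignments that make all premises true and the conclusion false.
object FindCounterexample {
  def implies(a: Boolean, b: Boolean): Boolean = !a || b

  def main(args: Array[String]): Unit = {
    val bools = List(false, true)
    val counterexamples = for {
      p <- bools
      q <- bools
      if implies(p, q) && q && !p // premises true, conclusion false
    } yield (p, q)
    println(counterexamples) // expect List((false,true)), so the argument is invalid
  }
}
```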
Example
To apply all this to arguments stated in English, we have to introduce propositional variables to represent all the propositions in the argument. For example, consider:
John will be at the party if Mary is there and Bill is not there. Mary will be at the party if it's on Friday or Saturday. If Bill is at the party, Tom will be there. Tom won't be at the party if it's on Friday. The party is on Friday. Therefore, John will be at the party.
Let $j$ stand for "John will be at the party," $m$ for "Mary will be there," $b$ for "Bill will be there," $t$ for "Tom will be there," $f$ for "The party is on Friday," and $s$ for "The party is on Saturday." Then this argument has the form

$$\begin{array}{l} (m \land \lnot b) \rightarrow j \\ (f \lor s) \rightarrow m \\ b \rightarrow t \\ f \rightarrow \lnot t \\ f \\ \hline \therefore\ j \end{array}$$
This is a valid argument, as the following proof shows:
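As an independent cross-check (not a replacement for the formal proof), the validity of this argument can also be confirmed by brute force: with six variables there are only $2^6 = 64$ cases to try. A Scala sketch, using the translation above:

```scala
// Check that the conclusion j holds in every case where all five premises hold.
object PartyArgumentCheck {
  def implies(a: Boolean, b: Boolean): Boolean = !a || b

  def main(args: Array[String]): Unit = {
    val bools = List(false, true)
    val valid = (for {
      j <- bools; m <- bools; b <- bools
      t <- bools; f <- bools; s <- bools
    } yield {
      val premises = implies(m && !b, j) && implies(f || s, m) &&
        implies(b, t) && implies(f, !t) && f
      implies(premises, j)
    }).forall(identity)
    println(valid) // expect true
  }
}
```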
Predicate Logic
So far in this section, we have been working mostly with propositional logic. But the definitions of valid argument and logical deduction apply to predicate logic as well. One of the most basic rules of deduction in predicate logic says that $(\forall x\, P(x)) \vdash P(a)$ for any entity $a$ in the domain of discourse of the predicate $P$. That is, if a predicate is true of all entities, then it is true of any given particular entity. This rule can be combined with rules of deduction for propositional logic to give the following valid arguments:

$$\begin{array}{l} \forall x\, (P(x) \rightarrow Q(x)) \\ P(a) \\ \hline \therefore\ Q(a) \end{array} \qquad\qquad \begin{array}{l} \forall x\, (P(x) \rightarrow Q(x)) \\ \lnot Q(a) \\ \hline \therefore\ \lnot P(a) \end{array}$$
These valid arguments go by the names of modus ponens and modus tollens for predicate logic. Note that from the premise $\forall x\, (P(x) \rightarrow Q(x))$ we can deduce $P(a) \rightarrow Q(a)$. From this and from the premise that $P(a)$ is true, we can deduce $Q(a)$ by modus ponens. So the first argument above is valid. The second argument is similar, using modus tollens.
The most famous logical deduction of them all is an application of modus ponens for predicate logic:

$$\begin{array}{l} \text{All humans are mortal} \\ \text{Socrates is human} \\ \hline \therefore\ \text{Socrates is mortal} \end{array}$$
This has the form of modus ponens with $P(x)$ standing for "$x$ is human," $Q(x)$ standing for "$x$ is mortal," and $a$ standing for the noted entity, Socrates.
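To round out the programming sketch, a predicate can be modeled as a type constructor and a universally quantified statement as a polymorphic method; modus ponens for predicate logic is then "instantiate the type parameter, then apply the function." All of the names below are illustrative, not the text's notation:

```scala
// Predicates as type constructors; the universal premise as a polymorphic method.
object SocratesSketch {
  trait Human[X]
  trait Mortal[X]

  // Premise: all humans are mortal (for every x, Human(x) implies Mortal(x)).
  def allHumansAreMortal[X](evidence: Human[X]): Mortal[X] = new Mortal[X] {}

  // Premise: Socrates is human.
  sealed trait Socrates
  val socratesIsHuman: Human[Socrates] = new Human[Socrates] {}

  // Conclusion: instantiate the universal at Socrates and apply it.
  val socratesIsMortal: Mortal[Socrates] = allHumansAreMortal(socratesIsHuman)
}
```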
There is a lot more to say about logical deduction and proof in predicate logic, and we'll spend the rest of this chapter on the subject.
Exercises
Verify the validity of modus tollens and the Law of Syllogism.
Answer
For modus tollens, we need to check that $((p \rightarrow q) \land \lnot q) \rightarrow \lnot p$ is a tautology. One way to do this is with a truth table:
| $p$ | $q$ | $p \rightarrow q$ | $\lnot q$ | $(p \rightarrow q) \land \lnot q$ | $\lnot p$ | $((p \rightarrow q) \land \lnot q) \rightarrow \lnot p$ |
| --- | --- | --- | --- | --- | --- | --- |
| false | false | true | true | true | true | true |
| false | true | true | false | false | true | true |
| true | false | false | true | false | false | true |
| true | true | true | false | false | false | true |

Since the final column contains only true, the proposition is a tautology, so modus tollens is valid.

For the Law of Syllogism, we need to check that $((p \rightarrow q) \land (q \rightarrow r)) \rightarrow (p \rightarrow r)$ is a tautology. Rather than building an eight-row truth table, let's do this with the laws of Boolean algebra, using the equivalence $p \rightarrow q \equiv \lnot p \lor q$:
$$\begin{array}{lll}
& ((p \rightarrow q) \land (q \rightarrow r)) \rightarrow (p \rightarrow r) & \\
\equiv & \lnot((\lnot p \lor q) \land (\lnot q \lor r)) \lor (\lnot p \lor r) & \text{Definition of } \rightarrow \\
\equiv & (\lnot(\lnot p \lor q) \lor \lnot(\lnot q \lor r)) \lor (\lnot p \lor r) & \text{De Morgan} \\
\equiv & ((p \land \lnot q) \lor (q \land \lnot r)) \lor (\lnot p \lor r) & \text{De Morgan, Double Negation} \\
\equiv & ((p \land \lnot q) \lor \lnot p) \lor ((q \land \lnot r) \lor r) & \text{Commutative} \\
\equiv & ((p \lor \lnot p) \land (\lnot q \lor \lnot p)) \lor ((q \lor r) \land (\lnot r \lor r)) & \text{Distributive} \\
\equiv & (T \land (\lnot q \lor \lnot p)) \lor ((q \lor r) \land T) & \text{Excluded Middle} \\
\equiv & (\lnot q \lor \lnot p) \lor (q \lor r) & \text{Identity} \\
\equiv & T \lor \lnot p \lor r & \text{Excluded Middle} \\
\equiv & T & \text{Annihilation}
\end{array}$$

Each of the following is a valid rule of deduction. For each one, give an example of a valid argument in English that uses that rule.
Answer
I have soup or a salad with my meal. I do not have soup with my meal. Therefore, I have a salad with my meal.
Answer
I have an apple. I have a banana. Therefore, I have an apple and a banana.
Answer
I have an apple and a banana. Therefore, I have an apple.
Answer
I walked ten miles. Therefore, I walked ten miles or ran ten miles.
There are two notorious invalid arguments that look deceptively like modus ponens and modus tollens:

$$\begin{array}{l} p \rightarrow q \\ q \\ \hline \therefore\ p \end{array} \qquad\qquad \begin{array}{l} p \rightarrow q \\ \lnot p \\ \hline \therefore\ \lnot q \end{array}$$
Show that each of these arguments is invalid. Give an English example that uses each of these arguments.
Answer
The first argument fails when $p$ is false while $q$ is true. For example, "If I were a millionaire, then I would own a house. I own a house. Therefore, I am a millionaire."
The second argument also fails when $p$ is false while $q$ is true. For example, take $p$ to be "The moon is made of blue cheese" and $q$ to be any true statement; then both premises are true, but the conclusion $\lnot q$ is false.
Decide whether each of the following arguments is valid. If it is valid, give a formal proof. If it is invalid, show that it is invalid by finding an appropriate assignment of truth values to propositional variables.
Answer
Invalid. Take both propositional variables to be false, so that both premises are vacuously true but the conclusion is false.
Answer
Valid:
Answer
Valid:
Answer
Valid:
Answer
Invalid. Take three of the variables to be true and the remaining one false; then all of the premises are true, but the conclusion is false.
Answer
Valid:
For each of the following English arguments, express the argument in terms of propositional logic and determine whether the argument is valid or invalid.
If it is Sunday, it rains or snows. Today, it is Sunday and it's not raining. Therefore, it must be snowing.
Answer
Let $s$ be "it is Sunday", $r$ be "it is raining", and $n$ be "it is snowing." The argument is then

$$\begin{array}{l} s \rightarrow (r \lor n) \\ s \land \lnot r \\ \hline \therefore\ n \end{array}$$
This argument is valid:
The Law of Disjunctive Syllogism cited on line 6 was seen as the first argument in Exercise 2. It is a common enough argument (used for example in Exercise 4(ii)) that I decided to introduce it as a shortcut here.
If there are anchovies on the pizza, Jack won't eat it. If Jack doesn't eat pizza, he gets angry. Jack is angry. Therefore, there were anchovies on the pizza.
Answer
Let $a$ be "There are anchovies on the pizza", $n$ be "Jack doesn't eat the pizza", and $g$ be "Jack is angry." The argument is then

$$\begin{array}{l} a \rightarrow n \\ n \rightarrow g \\ g \\ \hline \therefore\ a \end{array}$$
This argument is invalid. Maybe Jack never eats pizza and is angry all the time (that is, $n$ and $g$ are true, making all the premises true with no obligation for $a$ to be true).
At 8:00, Jane studies in the library or works at home. It's 8:00 and Jane is not studying in the library. So she must be working at home.
Answer
Let $e$ be "it is 8:00", $l$ be "Jane studies in the library", and $h$ be "Jane works at home." The argument is then

$$\begin{array}{l} e \rightarrow (l \lor h) \\ e \land \lnot l \\ \hline \therefore\ h \end{array}$$
This argument is valid:
- Here, "in all cases" means for all combinations of truth values of the propositional variables in $P$ and $Q$. Saying $P \rightarrow Q$ is a tautology means it is true in all cases. But by definition of $\rightarrow$, it is automatically true in cases where $P$ is false. In cases where $P$ is true, $P \rightarrow Q$ will be true if and only if $Q$ is true.↩
- Part of this complication is an inherent asymmetry in deduction: while our arguments may have multiple premises, they may only have one conclusion. A rule that was somehow "dual" to the introduction rule for conjunction would need to have two conclusions. There is another formulation of logic, known as the "sequent calculus" (see https://en.wikipedia.org/wiki/Sequent_calculus), where arguments may have multiple conclusions, and this asymmetry disappears. However, natural deduction has a cleaner connection to functional programming, as we will see later on.↩
- Logicians have taken this observation and built an entire system of logic known as intuitionistic logic, on the grounds that there is something unusual with the rule of double negation elimination. As computer scientists, this should actually make sense: if we think of a proof as showing how to compute something, then all of the rest of the deduction rules are reasonable. However, the double negation rule says that if we don't have a way of showing that something is false, then somehow we get a proof that it is true—just because we run a program and it doesn't print out the wrong answer doesn't mean that it will print out the correct answer, because maybe the program will never print out an answer at all!↩