PROOFS WITHOUT TEARS

A Student's Handy Guide to Proving Sequents in the system of Allen/Hand, Logic Primer

The rules of any logical system define what a proof is. They do not, however, tell you how to build one. Here are some procedures that will help you with the latter.

What are you trying to prove?

A proof is a trip to a given destination from a given starting point. The destination is the conclusion; the starting point is the set of premisses. Your task is to discover a route--not the route, since there are many different ones.

A general strategy for getting to where you want to go from where you already are is to work backwards from your destination. If you need to get to Galveston, and you know how to get to Galveston from Houston, then all you have to figure out is how to get from where you are to Houston. If you know how to get from Waller to Houston (and Houston to Galveston), then all you have to do is figure out how to get from where you are to Waller. At each step of this process, you reduce your problem of how to get from where you are to Galveston to the problem of how to get from where you are to somewhere else--hopefully, somewhere closer to where you are. If your strategy works, eventually you reduce your problem to one you already know how to solve: I can get from College Station to Galveston if I can get to Houston, and I can get from College Station to Houston if I can get to Waller, and I can get from College Station to Waller if I can get to Hempstead, and I can get from College Station to Hempstead if I can get to Navasota--but, behold, I know how to get from College Station to Navasota. So, my problem is solved.

This kind of strategy works well in trying to discover proofs. Start with where you want to end up, that is, with the conclusion, and try to figure out how you could get there from something; keep working backwards from the intermediate steps until you find that what you need is something you know how to get.

On What Everything Is

A very, very important point to remember is that, in our formal system, there are just exactly six kinds of thing, and everything is exactly one of those and not two of those. Every wff is:

  1. Atomic (a letter all by itself), or
  2. a Negation (main connective ~), or
  3. a Conjunction (main connective &), or
  4. a Disjunction (main connective v), or
  5. a Conditional (main connective ->), or
  6. a Biconditional (main connective <->)

This is the first thing to notice about any wff you're working with, since it determines both the strategies for proving it and the strategies for using it in proving other things.
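
(An aside, not part of the Allen/Hand system: if you like to see such things the way a programmer would, here is a minimal sketch in the Lean language of the same classification. The names 'Wff' and 'kind' are made up for illustration; the point is only that the six kinds are exhaustive and mutually exclusive, which is exactly what the exhaustive pattern match expresses.)

-- A sketch of the six kinds of wff as a datatype (names are illustrative only).
inductive Wff where
  | atom   : String → Wff           -- 1. Atomic: a letter all by itself
  | neg    : Wff → Wff              -- 2. Negation (~)
  | conj   : Wff → Wff → Wff        -- 3. Conjunction (&)
  | disj   : Wff → Wff → Wff        -- 4. Disjunction (v)
  | cond   : Wff → Wff → Wff        -- 5. Conditional (->)
  | bicond : Wff → Wff → Wff        -- 6. Biconditional (<->)

-- Every wff falls under exactly one case, so this match covers everything.
def kind : Wff → String
  | .atom _     => "atomic"
  | .neg _      => "negation"
  | .conj _ _   => "conjunction"
  | .disj _ _   => "disjunction"
  | .cond _ _   => "conditional"
  | .bicond _ _ => "biconditional"

#eval kind (.cond (.atom "P") (.atom "Q"))   -- "conditional"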

Some important questions: the Direct Approach

There are several fundamental questions that will help in constructing a strategy. We'll call the strategy they rest on 'the direct approach'.

1. Do I already have it as part of something else?
The wff you need may already be present as a constituent part of something you have.
2. If I do, how do I take that thing apart and get out what I need?
This depends on what kind of wff it is. In general, the Elimination rules are useful here (see below).
3. If I don't, then how do I build what I need?
This depends on what kind of wff it is. In general, the Introduction rules are useful here.

More about Question 2: how to take things apart

If what you need is ...   then the rule to use is ...              and what else you need is ...
The P in PvQ              vE                                       ~Q
The Q in PvQ              vE                                       ~P
The P in P&Q              &E                                       nothing
The Q in P&Q              &E                                       nothing
The Q in P->Q             ->E                                      P
The P in P->Q             (You just can't get there from here)     Another strategy

More about Question 3: How to Build New Things

If what you need to build is ...   then the rule to use is ...   and what you need is ...   ... and ...
P&Q                                &I                             P                          Q
PvQ                                vI                             either P or Q              (nothing)
P->Q                               ->I                            ...to assume P             ...and deduce Q
~P                                 RAA                            ...to assume P             ...and deduce both X and ~X (for any X you can)
P<->Q                              <->I                           P->Q                       Q->P

The Indirect Approach

Sometimes, the direct approach doesn't work. In that case (or at other times), you can always resort to 'the indirect approach': assume a denial of what you want to get, then try to get a contradictory pair so that you can use RAA.

Some exercises from 1.5.1 and 1.5.2

S12: P->Q, ~Q |- ~P

Since the Middle Ages, this rule has been called 'Modus Tollendo Tollens', or just 'Modus Tollens', or just 'MT'.

Here, what we want is not a constituent of any premise. So, we need to find a way to build it. ~P is a negation, so we need to introduce a '~'. The rule to do that with is RAA:

1        (1)   P->Q        A
2        (2)   ~Q          A
         ..
1,2      ( )   ~P          ?, ? RAA (?)

Next, we need to figure out what assumption to make (and discharge), and also how to deduce a statement and its denial. A good strategy is just to assume the denial of what we want:

1        (1)   P->Q        A
2        (2)   ~Q          A
3        (3)   P           A
         ..
1,2      ( )   ~P          ?, ? RAA (3)

Now, all we need is to get a pair of contradictories. Since we already have ~Q, one way to do this would be to get Q:

1        (1)   P->Q        A
2        (2)   ~Q          A
3        (3)   P           A
         ..
         ( )   Q           ?
1,2      ( )   ~P          ?, ? RAA (3)

In fact, we do already have Q as part of premise 1. So, the next question is: how do we get it? The rule ->E lets us get the consequent of a conditional if we have its antecedent, which in this case is P. So, we could get Q if we could get P. Here, we're in luck: we do have P:

1        (1)   P->Q        A
2        (2)   ~Q          A
3        (3)   P           A
1,3      (4)   Q           1,3 ->E
1,2      (5)   ~P          2,4 RAA (3)
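
(Another aside for the curious: each of these sequents can also be checked mechanically in a proof assistant such as Lean. The renderings scattered below are sketches in Lean 4, not proofs in the Allen/Hand system; Lean writes ¬, ∧, ∨ and → where this handout writes ~, &, v and ->, and the hypothesis names such as h1 and hp are made up. Here is S12:)

-- S12 (Modus Tollens): assume P, derive Q, and let it collide with ~Q.
example (P Q : Prop) (h1 : P → Q) (h2 : ¬Q) : ¬P :=
  fun hp : P => h2 (h1 hp)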

S13-S15: These can be done with exactly the same strategy as S12.

S13-S15 are variant forms of Modus Tollens.

S16: P->Q, Q->R |- P->R

This rule has the name 'Hypothetical Syllogism', or 'HS'.

1        (1)   P->Q        A
2        (2)   Q->R        A
         ..
1,2      ( )   P->R        ?

What we need to get is a conditional, and it doesn't already appear in the premises. So, since we need to introduce an arrow, let's try ->I:

1        (1)   P->Q        A
2        (2)   Q->R        A
3        (3)   P           A
         ..
         ( )   R           ?
1,2      ( )   P->R        ? ->I (3)

That is: if we can get R after assuming P (line 3), then we can get what we want with ->I and discharge the assumption. Our problem, then, reduces to this: how do we get R? R is found in premise (2) as the consequent of a conditional, so we could get it if we had the antecedent, Q. Once again, then, we've reduced our problem to another:

1        (1)   P->Q        A
2        (2)   Q->R        A
3        (3)   P           A
         ..
         (n-1) Q           ?
         (n)   R           2, n-1 ->E
1,2      (n+1) P->R        n ->I (3)

So now, we need to get Q. We have it on the right side of (1), so we could get it by ->E if we had the left side. But we do: it's (3). So, we're done:

1        (1)   P->Q        A
2        (2)   Q->R        A
3        (3)   P           A
1,3      (4)   Q           1,3 ->E
1,2,3    (5)   R           2,4 ->E
1,2      (6)   P->R        5 ->I (3)
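
(In the Lean sketch, the same idea appears as two uses of ->E chained under the assumption P:)

-- S16 (Hypothetical Syllogism): assume P, then apply the two conditionals in turn.
example (P Q R : Prop) (h1 : P → Q) (h2 : Q → R) : P → R :=
  fun hp : P => h2 (h1 hp)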

S17: P |- Q->P

What we need to get is a conditional, and all that we have is an atomic sentence. So, we'll have to build the conclusion, and to do that we will need ->I:

1        (1)   P           A
2        (2)   Q           A
         ..
         (n)   P           ?
1        (n+1) Q->P        n ->I (2)

All that remains, then, is to figure out how to get P, having assumed Q. But we already have P: it's the first premise. So, we're done:

1        (1)   P           A
2        (2)   Q           A
1        (3)   Q->P        1 ->I (2)
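
(A Lean rendering, for comparison: the assumption Q is simply never used.)

-- S17: the consequent P is already given, so the assumed Q is ignored.
example (P Q : Prop) (h : P) : Q → P :=
  fun _ : Q => h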

S18: ~P |- P->Q

This rule is known as 'False Antecedent' (FA). A nice medieval Latin name is 'ex falso quodlibet'.

The conclusion is a conditional, and it's not already present as part of any premise, so we should probably try to use ->I:

1        (1)   ~P          A
         ..
         ..
1        ( )   P->Q        ? ->I

Of course, to use ->I, we'll have to assume the antecedent of the conditional we want (that is, P), then try to deduce its consequent (that is, Q). So, our proof will need to have this overall structure:

1        (1)   ~P          A
2        (2)   P           A
         ..
         ( )   Q           ?
1        ( )   P->Q        ? ->I (2)

So, how can we get Q? There's an important point of strategy we can notice here. Q does not occur at all in the premises. Thus, we have to find a way to introduce a new atomic sentence in the proof. There are only two rules that allow us to get a conclusion containing a statement letter not found among the premises: vI and RAA. We're not trying to get a disjunction, so vI is not likely to be of much use here. Let's try RAA. The strategy would be to assume a denial of Q, then try to get a contradictory pair:

1        (1)   ~P          A
2        (2)   P           A
3        (3)   ~Q          A
         ..
         ( )   Q           ?, ? RAA (3)
1        ( )   P->Q        ? ->I (2)

But wait! We've already got a sentence and its denial (lines (1) and (2)). So, we can fill in the details for RAA and for the rest of the proof:

1        (1)   ~P          A
2        (2)   P           A
3        (3)   ~Q          A
1,2      (4)   Q           1,2 RAA (3)
1        (5)   P->Q        4 ->I (2)
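
(In the Lean sketch the same idea is packaged in the built-in function 'absurd', which takes a sentence and its denial and yields anything at all:)

-- S18 (False Antecedent): from P together with ~P, anything follows.
example (P Q : Prop) (h : ¬P) : P → Q :=
  fun hp : P => absurd hp h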

S22: PvQ -||- ~P->Q

Among the names for this are 'Wedge-Arrow' and 'v->'.

This is a double turnstile, so we need to prove two sequents: PvQ |- ~P->Q and ~P->Q |- PvQ. The first is easy: since we need to build a conditional, we use ->I.

1        (1)   PvQ         A
2        (2)   ~P          A
1,2      (3)   Q           1,2 vE
1        (4)   ~P->Q       3 ->I (2)
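
(A Lean sketch of this direction: argue by cases on PvQ; the P case contradicts ~P, and the Q case is already what we want.)

-- S22, left to right: case analysis on the disjunction.
example (P Q : Prop) (h : P ∨ Q) : ¬P → Q :=
  fun hnp : ¬P => h.elim (fun hp => absurd hp hnp) (fun hq => hq)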

The second direction is a little harder; it is worked out in the continuation of these examples.

S25: PvQ, P->R, Q->R |- R

The venerable name for this is 'Simple Dilemma'. Please notice how many times 'm' and 'n' respectively do and do not occur in the word 'dilemma'.

1        (1)   PvQ         A
2        (2)   P->R        A
3        (3)   Q->R        A
         ..
         ( )   R           ?

We need to get R. Looking over the premises, we see that we could get R if we could get P, and we could also get R if we could get Q (each time using ->E). Which shall we try? Let's start by trying P:

1        (1)   PvQ         A
2        (2)   P->R        A
3        (3)   Q->R        A
         ..
         (n-1) P           ?
1,2,3    (n)   R           2, n-1 ->E

So, the problem is how to get P. There's no rule that lets us get the left side of a conditional, but we could get it from line (1) if we had ~Q, using vE:

1        (1)   PvQ         A
2        (2)   P->R        A
3        (3)   Q->R        A
         ..
         (n-2) ~Q          ?
         (n-1) P           1, n-2 vE
1,2,3    (n)   R           2, n-1 ->E

So, how could we get ~Q? It's not part of any premise, so the best strategy would be to try RAA. We would then assume Q and try to deduce a pair of contradictory formulas. Before we start down that path, however, let's look where it's leading us. If we look over the premises, the only thing we can get from them in one step with the additional assumption Q is R. There are no negations anywhere in the premises, so we might suspect that it's going to be hard to come up with our contradictory pair. We will probably have to make one or more assumptions in addition to Q.

At this point, we're approaching the condition of aimless floundering. Aimless floundering is, in fact, a strategy of sorts, rather like guessing on a multiple-choice quiz or buying lottery tickets: sometimes, those strategies win. However, when you find that you don't really know what to do next, that's usually a sign that it's time to give up on the direct approach and try the indirect approach. That's particularly apt when the conclusion you want is atomic, as it is in this case. So, let's start over:

1        (1)   PvQ         A
2        (2)   P->R        A
3        (3)   Q->R        A
4        (4)   ~R          A
         ..
1,2,3    ( )   R           ?, ? RAA (4)

So, we need to get a contradictory pair. Notice right away that if we assume either P or Q, we'll get R. This is promising, so let's try it:

1        (1)   PvQ         A
2        (2)   P->R        A
3        (3)   Q->R        A
4        (4)   ~R          A
5        (5)   P           A
2,5      (6)   R           2,5 ->E
         ..
1,2,3    ( )   R           ?, ? RAA (4)

Now we have a contradiction (lines (4) and (6)). But which assumption do we want to discharge? If we discharge (4), we won't be done, because we'll still have (5) in the assumption set. However, we can discharge (5):

1        (1)   PvQ         A
2        (2)   P->R        A
3        (3)   Q->R        A
4        (4)   ~R          A
5        (5)   P           A
2,5      (6)   R           2,5 ->E
2,4      (7)   ~P          6,4 RAA (5)
         ..
1,2,3    ( )   R           ?, ? RAA (4)

This takes us a little farther. With ~P, we can get Q (by vE, from (1)), and with Q we can get R (by ->E, from (3)). But R is a denial of ~R, so we have what we need.

1        (1)   PvQ         A
2        (2)   P->R        A
3        (3)   Q->R        A
4        (4)   ~R          A
5        (5)   P           A
2,5      (6)   R           2,5 ->E
2,4      (7)   ~P          6,4 RAA (5)
1,2,4    (8)   Q           1,7 vE
1,2,3,4  (9)   R           3,8 ->E
1,2,3    (10)  R           4,9 RAA (4)

Notice that we couldn't stop at (9), since there's something besides the premises in the assumption set. However, using RAA, we can get rid of that assumption. Line (10), remember, is the denial of line (4): it just happens to be the same thing as line (9).
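
(In the Lean sketch, by contrast, the disjunction can be eliminated directly by cases, so the whole dilemma is a one-liner; the indirect detour above is needed because our vE rule wants a negated disjunct rather than a proof by cases.)

-- S25 (Simple Dilemma): eliminate the disjunction by cases.
example (P Q R : Prop) (h1 : P ∨ Q) (h2 : P → R) (h3 : Q → R) : R :=
  h1.elim h2 h3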

S28: ~(PvQ) -||- ~P & ~Q

This is one of the rules known as 'De Morgan's Laws' (after the 19th-century logician Augustus De Morgan).

Left to right: we need to build a conjunction, so we'll use &I. To do that, we need to get both conjuncts, that is, ~P and ~Q. Those are both negations, so we will use RAA for each.

1        (1)   ~(PvQ)      A
2        (2)   P           A
2        (3)   PvQ         2 vI
1        (4)   ~P          1,3 RAA (2)
5        (5)   Q           A
5        (6)   PvQ         5 vI
1        (7)   ~Q          1,6 RAA (5)
1        (8)   ~P&~Q       4,7 &I

Right to left: we need to build a negation, so it's best to try RAA at once. This turns out to be easy:

1        (1)   ~P&~Q       A
2        (2)   PvQ         A
1        (3)   ~P          1 &E
1,2      (4)   Q           2,3 vE
1        (5)   ~Q          1 &E
1        (6)   ~(PvQ)      4,5 RAA (2)
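
(Both directions of S28, sketched in Lean: left to right builds each negation from the disjunction it would create; right to left argues by cases.)

-- S28 (De Morgan), left to right.
example (P Q : Prop) (h : ¬(P ∨ Q)) : ¬P ∧ ¬Q :=
  ⟨fun hp => h (Or.inl hp), fun hq => h (Or.inr hq)⟩

-- S28 (De Morgan), right to left.
example (P Q : Prop) (h : ¬P ∧ ¬Q) : ¬(P ∨ Q) :=
  fun hpq => hpq.elim h.1 h.2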

S44: P->(Q->R) -||- (P&Q) -> R

This is called 'exportation' going left to right and 'importation' going right to left. I have added a redundant pair of parentheses for clarity.

Left to right: we need to build a conditional, so we assume the antecedent. To get R from the premise, we need to get Q->R out first. Again, this is straightforward:

1        (1)   P->(Q->R)   A
2        (2)   P&Q         A
2        (3)   P           2 &E
1,2      (4)   Q->R        1,3 ->E
2        (5)   Q           2 &E
1,2      (6)   R           4,5 ->E
1        (7)   (P&Q)->R    6 ->I (2)

Right to left: a little more subtle. We need to build a conditional, but one side of it is a conditional. What we need to do, then, is assume P and try to get Q->R. Since that's also a conditional, we then assume Q and try to get R. Then, we discharge the two assumptions in reverse order:

1        (1)   (P&Q)->R    A
2        (2)   P           A
3        (3)   Q           A
2,3      (4)   P&Q         2,3 &I
1,2,3    (5)   R           1,4 ->E
1,2      (6)   Q->R        5 ->I (3)
1        (7)   P->(Q->R)   6 ->I (2)
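
(The two directions of S44 in the Lean sketch are pleasantly symmetric: exportation splits one conjunctive argument into two, and importation packs two arguments into one conjunction.)

-- S44: exportation (left to right).
example (P Q R : Prop) (h : P → (Q → R)) : (P ∧ Q) → R :=
  fun hpq => h hpq.1 hpq.2

-- S44: importation (right to left).
example (P Q R : Prop) (h : (P ∧ Q) → R) : P → (Q → R) :=
  fun hp hq => h ⟨hp, hq⟩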

S39: P->Q -||- ~Q->~P

For fairly obvious reasons, this is known as 'Transposition'.

For both directions: the usual strategy for building a conditional, that is, assume the antecedent, try to get the consequent, then use ->I. In each case, the way to get the consequent turns out to be RAA:

1        (1)   P->Q        A
2        (2)   ~Q          A
3        (3)   P           A
1,3      (4)   Q           1,3 ->E
1,2      (5)   ~P          2,4 RAA (3)
1        (6)   ~Q->~P      5 ->I (2)

1        (1)   ~Q->~P      A
2        (2)   P           A
3        (3)   ~Q          A
1,3      (4)   ~P          1,3 ->E
1,2      (5)   Q           2,4 RAA (3)
1        (6)   P->Q        5 ->I (2)

You might notice that these two proofs are very similar.
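
(The similarity shows up in the Lean sketch as well, except that the right-to-left direction needs an explicit appeal to classical reasoning, Lean's counterpart of the RAA step above.)

-- S39 (Transposition), left to right.
example (P Q : Prop) (h : P → Q) : ¬Q → ¬P :=
  fun hnq hp => hnq (h hp)

-- S39 (Transposition), right to left: a classical proof by contradiction.
example (P Q : Prop) (h : ¬Q → ¬P) : P → Q :=
  fun hp => Classical.byContradiction (fun hnq => h hnq hp)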

To keep this file from getting any longer, the examples are continued elsewhere. If there's nothing there yet, try again later.