
You have seen enough limits to be ready for a definition. It is true that we have survived this far without one, and we could continue. But this seems a reasonable time to define limits more carefully. The goal is to achieve rigor without rigor mortis.

First you should know that limits of Δy/Δx are by no means the only limits in mathematics. Here are five completely different examples. They involve n → ∞, not Δx → 0:

- a_{n} = (n − 3) ⁄ (n + 3) (for large n, ignore the 3's and find a_{n} → 1)
- a_{n} = ½a_{n−1} + 4 (start with any a₁ and always a_{n} → 8)
- a_{n} = probability of living to year n (unfortunately a_{n} → 0)
- a_{n} = fraction of zeros among the first n digits of π (a_{n} → 1/10?)
- a₁ = .4, a₂ = .49, a₃ = .493, …. No matter what the remaining decimals are, the a's converge to a limit. Possibly a_{n} → .493000 …, but not likely.
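The second example above can be watched numerically. This is a small sketch (the function name `iterate` is mine, not the text's): iterate a_{n} = a_{n−1}/2 + 4 from two different starting values and see both runs approach 8.

```python
# Sketch: iterate a_n = a_{n-1}/2 + 4 from two different starting values a_1.
# The distance |a - 8| is halved at every step, so both runs approach 8.
def iterate(a1, steps):
    a = a1
    for _ in range(steps):
        a = a / 2 + 4
    return a

for start in (0.0, 100.0):
    print(start, "->", iterate(start, 50))
```

Starting from 0 or from 100 makes no difference to the limit; only the speed of approach changes.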

**The problem is to say what the limit symbol → really means.**

A good starting point is to ask about convergence to zero. When does a sequence of positive numbers approach zero? What does it mean to write a_{n} → 0? The numbers a₁, a₂, a₃, … must become "small," but that is too vague. We will propose four definitions of **convergence to zero**, and I hope the right one will be clear.

- All the numbers a_{n} are below 10^{-10}. That may be enough for practical purposes, but it certainly doesn't make the a_{n} approach zero.
- The sequence is getting closer to zero—each a_{n+1} is smaller than the preceding a_{n}. This test is met by 1.1, 1.01, 1.001, …, which converges to 1 instead of 0.
- For any small number you think of, at least one of the a_{n}'s is smaller. That pushes something toward zero, but not necessarily the whole sequence. The condition would be satisfied by 1, 1/2, 1, 1/3, 1, 1/4, …, which does not approach zero.
- For any small number you think of, the a_{n}'s eventually go below that number and **stay below**. This is the correct definition.

I want to repeat that. To test for convergence to zero, start with a small number—say 10^{-10}. The a_{n}'s must go below that number. They may come back up and go below again—the first million terms make absolutely no difference. Neither do the next billion, but eventually all terms must go below 10^{-10}. After waiting longer (possibly a lot longer), all terms drop below 10^{-20}. The tail end of the sequence decides everything.
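The tail test can be sketched in a few lines of code. This is a numerical search, not a proof (the name `find_N` and the finite `horizon` are my own devices): given a tolerance, find the last index within the horizon where a term is still at or above it. The sequence n²/2ⁿ, which appears in Example 1 below, serves as a trial case.

```python
# Sketch of the tail test: find the last index (within a finite horizon)
# where a_n is still at or above eps.  Every later term we can check is
# below eps -- numerical evidence for convergence, not a proof.
def find_N(a, eps, horizon=200):
    N = 0
    for n in range(1, horizon + 1):
        if a(n) >= eps:
            N = n              # the tail beyond this N stays below eps
    return N

# a_n = n^2 / 2^n eventually goes below 10^-10 and stays there
print(find_N(lambda n: n * n / 2 ** n, 1e-10))   # -> 44
```

Shrinking the tolerance makes the test tougher: the returned N grows, exactly as the text says we must "wait longer."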

*Question 1* Does the sequence 10^{-3}, 10^{-2}, 10^{-6}, 10^{-5}, 10^{-9}, 10^{-8}, … approach 0?

Answer Yes. These up and down numbers eventually stay below any ε.

[Fig. 2.17]

*Question 2* Does the sequence 10^{-4}, 10^{-6}, 10^{-4}, 10^{-8}, 10^{-4}, 10^{-10}, … approach zero?

Answer No. This sequence goes below 10^{-4} but does not stay below.

There is a recognized symbol for "an arbitrarily small positive number." By worldwide agreement, it is the Greek letter ε (**epsilon**). Convergence to zero means that **the sequence eventually goes below ε and stays there**. The smaller the ε, the tougher the test and the longer we wait. Think of ε as the tolerance, and keep reducing it.

To emphasize that ε comes from outside, Socrates can choose it. Whatever ε he proposes, the a's must eventually be smaller. After some a_{N}, all the a's are below the tolerance ε. Here is the exact statement:

**for any ε there is an N such that a_{n} < ε if n > N.**

Once you see that idea, the rest is easy. Figure 2.17 has N = 3 and then N = 6.

*EXAMPLE 1* The sequence 1/2, 4/4, 9/8, … starts upward but goes to zero. Notice that 1, 4, 9, …, 100, … are squares, and 2, 4, 8, …, 1024, … are powers of 2. Eventually 2^{n} grows faster than n², as in a_{10} = 100/1024. The ratio goes below any ε.

*EXAMPLE 2* 1, 0, 1/2, 0, 1/3, 0, … approaches zero. These a's do not decrease steadily (the mathematical word for steadily is "monotonically") but still their limit is zero. The choice ε = 1/1000 produces the right response: Beyond a_{2001} all terms are below 1/1000. So N = 2001 for that ε.
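A short numerical check of Example 2 (the function `a` below is my encoding of the sequence, with 1/k in the odd positions and 0 in the even ones):

```python
# Check of Example 2: the sequence 1, 0, 1/2, 0, 1/3, 0, ...
# For eps = 1/1000, every term beyond n = 2001 is below eps.
def a(n):
    # odd positions carry 1/k, even positions are 0
    return 1 / ((n + 1) // 2) if n % 2 == 1 else 0.0

eps = 1 / 1000
assert all(a(n) < eps for n in range(2002, 10000))   # the tail stays below eps
assert a(1999) >= eps                                # a_1999 = 1/1000 is not below
print("N = 2001 works for eps = 1/1000")
```

The second assertion shows why a smaller N would fail: a_{1999} = 1/1000 equals ε, so the terms are not yet strictly below the tolerance.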

The sequence 1, 1/2, 1/2, 1/3, 1/3, 1/3, … is much slower but it also converges to zero.

Next we allow the numbers a_{n} to be negative as well as positive. They can converge upward toward zero, or they can come in from both sides. The test still requires the a_{n} to go inside any strip near zero (and stay there). But now the strip starts at −ε.

The distance from zero is the absolute value |a_{n}|. Therefore a_{n} → 0 means |a_{n}| → 0. The previous test can be applied to |a_{n}|:

**for any ε there is an N such that |a_{n}| < ε if n > N.**

*EXAMPLE 3* 1, −1/2, 1/3, −1/4, … converges to zero because 1, 1/2, 1/3, 1/4, … converges to zero.

It is a short step to limits other than zero. **The limit is L if the numbers a_{n} − L converge to zero.** Our final test applies to the absolute value |a_{n} − L|:

**for any ε there is an N such that |a_{n} − L| < ε if n > N.**

This is the definition of convergence! Only a finite number of a's are outside any strip around L (Figure 2.18). We write a_{n} → L or lim a_{n} = L or lim_{n→∞} a_{n} = L.

[Fig. 2.18]

*EXAMPLE 4* The numbers 3/2, 5/4, 7/6, … converge to L = 1. After subtracting 1 the differences 1/2, 1/4, 1/6, … converge to zero. Those differences are |a_{n} − L|.

*EXAMPLE 5* **The sequence 1, 1 + 1/2, 1 + 1/2 + 1/3, 1 + 1/2 + 1/3 + 1/4, … fails to converge.**

The distance between terms is getting smaller. But those numbers a₁, a₂, a₃, a₄, … go past any proposed limit L. The second term is 1½. The fourth term adds on 1/3 + 1/4, so a₄ goes past 2. The eighth term has four new fractions 1/5 + 1/6 + 1/7 + 1/8, totaling more than 1/8 + 1/8 + 1/8 + 1/8 = 1/2. Therefore a₈ exceeds 2½. Eight more terms will add more than 8 times 1/16, so a_{16} is beyond 3. The lines in Figure 2.18c are infinitely long, not stopping at any L.

In the language of Chapter 10, the harmonic series 1 + 1/2 + 1/3 + … does not converge. The sum is infinite, because the "partial sums" a_{n} go beyond every limit L (a_{5000} is past L = 9). We will come back to infinite series, but this example makes a subtle point: The steps between the a_{n} can go to zero while still a_{n} → ∞.
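The runaway growth of the partial sums is easy to watch. This sketch (the function name `first_index_past` is mine) finds the first partial sum that passes a proposed limit L, even though the steps 1/n between sums shrink to zero:

```python
# Sketch: partial sums of 1 + 1/2 + 1/3 + ... pass any proposed limit L,
# even though the steps 1/n between successive sums go to zero.
def first_index_past(L):
    s, n = 0.0, 0
    while s <= L:
        n += 1
        s += 1 / n
    return n

print(first_index_past(3))   # -> 11: a_11 is already past 3
print(first_index_past(9))   # consistent with the text: a_5000 is past 9
```

No bound survives: raising L just makes the search run longer, roughly like e^L terms.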

Thus the condition a_{n+1} − a_{n} → 0 is **not sufficient** for convergence. However this condition is **necessary**. If we do have convergence, then a_{n+1} − a_{n} → 0. That is a good exercise in the logic of convergence, emphasizing the difference between "sufficient" and "necessary." We discuss this logic below, after proving that [statement A] implies [statement B]:

**If [a _{n} converges to L] then [a_{n+1} − a_{n} converges to zero].** (1)

Proof Because the a_{n} converge, there is a number N beyond which |a_{n} − L| < ε and also |a_{n+1} − L| < ε. Since a_{n+1} − a_{n} is the sum of a_{n+1} − L and L − a_{n}, its absolute value cannot exceed ε + ε = 2ε. Therefore a_{n+1} − a_{n} approaches zero.

Objection by Socrates: We only got below 2ε and he asked for ε. Our reply: If he particularly wants |a_{n+1} − a_{n}| < 1/10, we start with ε = 1/20. Then 2ε = 1/10. But this juggling is not necessary. To stay below 2ε is just as convincing as to stay below ε.

The following page is inserted to help with the language of mathematics. In ordinary language we might say "I will come if you call." Or we might say "I will come only if you call." That is different! A mathematician might even say "I will come if and only if you call." Our goal is to think through the logic, because it is important and not so familiar.

Statement A above implies statement B. Statement A is a_{n} → L; statement B is a_{n+1} − a_{n} → 0. Mathematics has at least five ways of writing down A ⇒ B, and I thought you might like to see them together. It seems excessive to have so many expressions for the same idea, but authors get desperate for a little variety. Here are the five ways that come to mind:

A ⇒ B

A implies B

**if** A **then** B

A is a **sufficient** condition for B

B is true **if** A is true

*EXAMPLES* **If** [positive numbers are decreasing] **then** [they converge to a limit].

**If** [sequences a_{n} and b_{n} converge] **then** [the sequence a_{n} + b_{n} converges].

**If** [ƒ(x) is the integral of v(x)] **then** [v(x) is the derivative of ƒ(x)].

Those are all true, but not proved. A is the hypothesis, B is the conclusion.

Now we go in the other direction. (It is called the "converse," not the inverse.) We exchange A and B. Of course stating the converse does not make it true! B might imply A, or it might not. In the first two examples the converse was false—the a_{n} can converge without decreasing, and a_{n} + b_{n} can converge when the separate sequences do not. The converse of the third statement is true—and there are five more ways to state it:

A ⇐ B

A is implied by B

**if** B **then** A

A is a **necessary** condition for B

B is true **only if** A is true

Those words "necessary" and "sufficient" are not always easy to master. The same is true of the deceptively short phrase "if and only if." The two statements A ⇒ B and A ⇐ B are completely different and they both require proof. That means two separate proofs. But they can be stated together for convenience (when both are true):

A ⇔ B

A implies B and B implies A

A is **equivalent** to B

A is a **necessary and sufficient** condition for B

A is true **if and only if** B is true

*EXAMPLES* [a_{n} → L] ⇔ [2a_{n} → 2L] ⇔ [a_{n} + 1 → L + 1] ⇔ [a_{n} − L → 0].

Calculus needs a definition of limits, to define dy/dx. That derivative contains two limits: Δx → 0 and Δy/Δx → dy/dx. Calculus also needs rules for limits, to prove the sum rule and product rule for derivatives. We started on the definition, and now we start on the rules.

Given two convergent sequences, a_{n} → L and b_{n} → M, other sequences also converge:

Addition: a_{n} + b_{n} → L + M and a_{n} − b_{n} → L − M

Multiplication: a_{n}b_{n} → LM

Division: a_{n} ⁄ b_{n} → L ⁄ M (provided M ≠ 0)

We check the multiplication rule, which uses a convenient identity:

a_{n}b_{n} − LM = (a_{n} − L)(b_{n} − M) + M(a_{n} − L) + L(b_{n} − M). (2)

Suppose |a_{n} − L| < ε beyond some point N, and |b_{n} − M| < ε beyond some other point N'. Then beyond the larger of N and N', the right side of (2) is small. It is less than ε·ε + |M|ε + |L|ε. This proves that (2) gives a_{n}b_{n} → LM.
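Identity (2) is pure algebra, so a numerical spot-check is easy (the random sampling is my device, not the text's):

```python
# Spot-check of identity (2): for any numbers a, b, L, M,
#   a*b - L*M = (a - L)*(b - M) + M*(a - L) + L*(b - M).
import random

random.seed(0)
for _ in range(1000):
    a, b, L, M = (random.uniform(-10, 10) for _ in range(4))
    lhs = a * b - L * M
    rhs = (a - L) * (b - M) + M * (a - L) + L * (b - M)
    assert abs(lhs - rhs) < 1e-9
print("identity (2) checks out numerically")
```

Multiplying out the right side, the cross terms cancel and only ab − LM survives, which is why each of the three pieces is small when a_{n} is near L and b_{n} is near M.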

**An important special case is ca_{n} → cL.** (The sequence of b's is c, c, c, c, ….) Thus a constant can be brought "outside" the limit, to give lim ca_{n} = c lim a_{n}.

The final step is to replace sequences by functions. Instead of a₁, a₂, … there is a continuum of values ƒ(x). The limit is taken as x approaches a specified point a (instead of n → ∞). Example: As x approaches a = 0, the function ƒ(x) = 4 − x² approaches L = 4. As x approaches a = 2, the function 5x approaches L = 10. Those statements are fairly obvious, but we have to say what they mean. Somehow it must be this:

**if x is close to a then ƒ(x) is close to L.**

If x − a is small, then ƒ(x) − L should be small. As before, the word small does not say everything. We really mean "arbitrarily small," or "below any ε." The difference ƒ(x) − L must become as small as anyone wants, when x gets near a. In that case lim_{x→a} ƒ(x) = L. Or we write ƒ(x) → L as x → a.

The statement is awkward because it involves two limits. The limit x → a is forcing ƒ(x) → L. (Previously n → ∞ forced a_{n} → L.) But it is wrong to expect the same ε in both limits. We do not and cannot require that |x − a| < ε produces |ƒ(x) − L| < ε. **It may be necessary to push x extremely close to a** (closer than ε). We must guarantee that if x is close enough to a, then |ƒ(x) − L| < ε.

We have come to the "**epsilon-delta definition**" of limits. First, Socrates chooses ε. He has to be shown that ƒ(x) is within ε of L, for every x near a. Then somebody else (maybe Plato) replies with a number δ. That gives the meaning of "near a." Plato's goal is to get ƒ(x) within ε of L, by keeping x within δ of a:

**if 0 < lx − a| < δ then |ƒ(x) − L| < ε**. (3)

The input tolerance is δ (delta), the output tolerance is ε. When Plato can find a δ for every ε, Socrates concedes that the limit is L.

*EXAMPLE* Prove that lim_{x→2} 5x = 10. In this case a = 2 and L = 10.

Socrates asks for |5x − 10| < ε. Plato responds by requiring |x − 2| < δ. What δ should he choose? In this case |5x − 10| is exactly 5 times |x − 2|. So Plato picks δ below ε ⁄ 5 (a smaller δ is always OK). Whenever |x − 2| < ε ⁄ 5, multiplication by 5 shows that |5x − 10| < ε.
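The ε-δ game for this example can be played by a few lines of code (the function name `plato_delta` and the sampled test points are my own sketch):

```python
# Sketch of the epsilon-delta game for f(x) = 5x near a = 2, L = 10.
# Plato's choice delta = eps/5 keeps |5x - 10| below eps whenever |x - 2| < delta.
def plato_delta(eps):
    return eps / 5

for eps in (0.1, 0.01, 1e-6):
    delta = plato_delta(eps)
    # sample points strictly inside the delta-strip around x = 2
    for x in (2 - 0.999 * delta, 2 + 0.999 * delta):
        assert abs(5 * x - 10) < eps
print("delta = eps/5 wins the game for f(x) = 5x")
```

Whatever ε Socrates proposes, the same rule δ = ε/5 answers it, which is what "a δ for every ε" demands.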

*Remark 1* In Figure 2.19, Socrates chooses the height of the box. It extends above and below L, by the small number ε. Second, Plato chooses the width. He must make the box narrow enough for the graph to go **out the sides**. Then |ƒ(x) − L| < ε.

[Fig. 2.19]

When ƒ(x) has a jump, the box can't hold it. A step function has no limit as x approaches the jump, because the graph goes through the top or bottom—no matter how thin the box.

*Remark 2* The second figure has ƒ(x) → L, because in taking limits **we ignore the final point x = a**. The value ƒ(a) can be anything, with no effect on L. The first figure has more: ƒ(a) equals L. Then a special name applies—ƒ is **continuous**. The left figure shows a continuous function, the other figures do not.

We soon come back to continuous functions.

*Remark 3* In the example with ƒ = 5x and δ = ε ⁄ 5, the number 5 was the slope. That choice barely kept the graph in the box—it goes out the corners. A little narrower, say δ = ε ⁄ 10, and the graph goes safely out the sides. A reasonable choice is to divide ε by 2|ƒ'(a)|. (We double the slope for safety.) I want to say why this δ works—even if the ε-δ test is seldom used in practice.

The ratio of ƒ(x) − L to x − a is distance up over distance across. This is Δƒ ⁄ Δx, close to the slope ƒ'(a). When the distance across is δ, the distance up or down is near δ|ƒ'(a)|. That equals ε/2 for our "reasonable choice" of δ—so we are safely below ε. This choice solves most exercises. But Example 7 shows that a limit might exist even when the slope is infinite.

*EXAMPLE 7* lim_{x→1+} √(x − 1) = 0 (**a one-sided limit**).

Notice the plus sign in the symbol x → 1^{+} . The number x approaches a = 1 only from above. An ordinary limit x → 1 requires us to accept x on both sides of 1 (the exact value x = 1 is not considered). Since negative numbers are not allowed by the square root, we have a one-sided limit. It is L = 0.

Suppose ε is 1/10. Then the response could be δ = 1/100. A number below 1/100 has a square root below 1/10. In this case the box must be made extremely narrow, δ much smaller than ε, because the square root starts with infinite slope.
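The pattern δ = ε² generalizes the response ε = 1/10, δ = 1/100. A quick sketch (again with my hypothetical `plato_delta`):

```python
# Sketch for Example 7: the one-sided limit of sqrt(x - 1) as x -> 1+.
# The response delta = eps**2 works: if 1 < x < 1 + delta then sqrt(x - 1) < eps.
import math

def plato_delta(eps):
    return eps ** 2

for eps in (0.1, 0.01, 0.001):
    delta = plato_delta(eps)
    x = 1 + 0.999 * delta          # a point just inside the one-sided strip
    assert math.sqrt(x - 1) < eps
print("delta = eps**2 handles the infinite slope at x = 1")
```

Since sqrt(x − 1) < sqrt(δ) = ε on the whole strip, the square root's infinite slope at x = 1 costs Plato a much smaller δ but never defeats him.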

Those examples show the point of the ε-δ definition. (Given ε, look for δ. This came from Cauchy in France, not Socrates in Greece.) We also see its bad feature: The test is not convenient. Mathematicians do not go around proposing ε's and replying with δ's. We may live a strange life, but not that strange.

It is easier to establish once and for all that 5x approaches its obvious limit 5a. The same is true for other familiar functions: x^{n} → a^{n} and sin x → sin a and (1 − x)^{-1} → (1 − a)^{-1}—except at a = 1. **The correct limit L comes by substituting x = a into the function.** This is exactly the property of a "**continuous function**." Before the section on continuous functions, we prove the Squeeze Theorem using ε and δ.

*2H* **Squeeze Theorem** Suppose ƒ(x) ≤ g(x) ≤ h(x) for x near a. If ƒ(x) → L and h(x) → L as x → a, then the limit of g(x) is also L.

Proof g(x) is squeezed between ƒ(x) and h(x). After subtracting L, g(x) − L is between ƒ(x) − L and h(x) − L. Therefore |g(x) − L| < ε if |ƒ(x) − L| < ε and |h(x) − L| < ε.

For any ε, the last two inequalities hold in some region 0 < |x − a| < δ. So the first one also holds. This proves that g(x) → L. Values at x = a are not involved—until we get to continuous functions.
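A classic squeeze, sketched numerically with a function not in the text: g(x) = x·sin(1/x) oscillates wildly near 0, but it is trapped between −|x| and |x|, so its limit as x → 0 is L = 0.

```python
# Squeeze Theorem sketch: g(x) = x*sin(1/x) is trapped between f(x) = -|x|
# and h(x) = |x|.  Both bounds approach L = 0 as x -> 0, so g(x) -> 0 too.
import math

for x in (0.1, 0.01, 1e-5, -1e-5):
    g = x * math.sin(1 / x)
    assert -abs(x) <= g <= abs(x)   # the squeeze holds for x near (not at) 0
print("x*sin(1/x) is squeezed to 0")
```

Note that g is not even defined at x = 0, which is exactly why the theorem excludes the point x = a.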