Durrett Probability Theory Examples 2nd Edition

Probability: Theory and Examples
Solutions Manual

The creation of this solutions manual was one of the most important improvements in the second edition of Probability: Theory and Examples. The solutions are not intended to be as polished as the proofs in the book, but are supposed to give enough of the details so that little is left to the reader's imagination. It is inevitable that some of the many solutions will contain errors. If you find mistakes or better solutions, send them via e-mail to rtd1@cornell.edu or via post to Rick Durrett, Dept. of Math., 523 Malott Hall, Cornell U., Ithaca NY 14853.

Rick Durrett

Contents

1. Laws of Large Numbers: Basic Definitions; Random Variables; Expected Value; Independence; Weak Laws of Large Numbers; Borel-Cantelli Lemmas; Strong Law of Large Numbers; Convergence of Random Series; Large Deviations
2. Central Limit Theorems: The De Moivre-Laplace Theorem; Weak Convergence; Characteristic Functions; Central Limit Theorems; Poisson Convergence; Stable Laws; Infinitely Divisible Distributions; Limit Theorems in R^d
3. Random Walks: Stopping Times; Renewal Theory
4. Martingales: Conditional Expectation; Martingales, Almost Sure Convergence; Examples; Doob's Inequality, L^p Convergence; Uniform Integrability, Convergence in L^1; Backwards Martingales; Optional Stopping Theorems
5. Markov Chains: Definitions and Examples; Extensions of the Markov Property; Recurrence and Transience; Stationary Measures; Asymptotic Behavior; General State Space
6. Ergodic Theorems: Definitions and Examples; Birkhoff's Ergodic Theorem; Recurrence; A Subadditive Ergodic Theorem; Applications
7. Brownian Motion: Definition and Construction; Markov Property, Blumenthal's 0-1 Law; Stopping Times, Strong Markov Property; Maxima and Zeros; Martingales; Donsker's Theorem; CLT's for Dependent Variables; Empirical Distributions, Brownian Bridge; Laws of the Iterated Logarithm
Appendix (Measure Theory): Lebesgue-Stieltjes Measures; Carathéodory's Extension Theorem; Completion, etc.; Integration; Properties of the Integral; Product Measures, Fubini's Theorem; Radon-Nikodym Theorem

1 Laws of Large Numbers

1.1 Basic Definitions

1.1 (i) A and B − A are disjoint with B = A ∪ (B − A), so P(A) + P(B − A) = P(B), and rearranging gives the desired result.
(ii) Let A'_n = A_n ∩ A, B_1 = A'_1, and for n > 1 let B_n = A'_n − ∪_{m=1}^{n−1} A'_m. Since the B_n are disjoint, have union A, and satisfy B_m ⊂ A_m, using (i) we have
  P(A) = Σ_{m=1}^∞ P(B_m) ≤ Σ_{m=1}^∞ P(A_m)
(iii) Let B_n = A_n − A_{n−1}. Then the B_n are disjoint and have ∪_{m=1}^∞ B_m = A and ∪_{m=1}^n B_m = A_n, so
  P(A) = Σ_{m=1}^∞ P(B_m) = lim_{n→∞} Σ_{m=1}^n P(B_m) = lim_{n→∞} P(A_n)
(iv) A_n^c ↑ A^c, so (iii) implies P(A_n^c) ↑ P(A^c). Since P(B^c) = 1 − P(B), it follows that P(A_n) ↓ P(A).

1.2 (i) Suppose A ∈ F_i for all i. Then since each F_i is a σ-field, A^c ∈ F_i for each i. Suppose A_1, A_2, ... is a countable sequence of sets that are in F_i for all i. Then since each F_i is a σ-field, A = ∪_m A_m ∈ F_i for each i.
(ii) We take the intersection of all the σ-fields containing A. This collection is not empty, since the collection of all subsets of Ω is a σ-field containing A.

1.3 It suffices to show that if F is the σ-field generated by the sets (a_1, b_1) × ··· × (a_n, b_n), then F contains (i) the open sets and (ii) all sets of the form A_1 × ··· × A_n where A_i ∈ R. For (i) note that if G is open and x ∈ G, then there is a set of the form (a_1, b_1) × ··· × (a_n, b_n) with a_i, b_i ∈ Q that contains x and lies in G, so any open set is a countable union of our basic sets. For (ii) fix A_2, ..., A_n and let G = {A : A × A_2 × ··· × A_n ∈ F}. Since F is a σ-field it is easy to see that if Ω ∈ G then G is a σ-field, so if G ⊃ A then G ⊃ σ(A). From the last result it follows that if A_1 ∈ R then A_1 × (a_2, b_2) × ··· × (a_n, b_n) ∈ F. Repeating the last argument n − 1 more times proves (ii).

1.4 It is clear that if A ∈ F then A^c ∈ F. Now let A_i be a countable collection of sets in F. If A_i^c is countable for some i then (∪_i A_i)^c is countable. On the other hand, if A_i is countable for each i then ∪_i A_i is countable. To check additivity of P, suppose the A_i are disjoint. If A_i^c is countable for some i then A_j is countable for all j ≠ i, so Σ_k P(A_k) = 1 = P(∪_k A_k). On the other hand, if A_i is countable for each i then ∪_i A_i is as well, and Σ_k P(A_k) = 0 = P(∪_k A_k).

1.5 The sets of the form (a_1, b_1) × ··· × (a_d, b_d) with a_i, b_i ∈ Q are a countable collection that generates R^d.

1.6 If B ∈ R then {Z ∈ B} = ({X ∈ B} ∩ A) ∪ ({Y ∈ B} ∩ A^c) ∈ F.

1.7 P(χ ≥ 4) ≤ (2π)^{−1/2} 4^{−1} e^{−8} ≈ 3.3458 × 10^{−5}. The lower bound is 15/16 of the upper bound, i.e., ≈ 3.1366 × 10^{−5}.
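The two numbers in 1.7 are easy to check mechanically. Below is a minimal sketch in Python; the specific tail bounds (x^{−1} − x^{−3}) φ(x) ≤ P(χ ≥ x) ≤ x^{−1} φ(x), with φ the standard normal density, are my assumption about which inequalities the exercise invokes, since the solution only quotes the resulting values.

    import math

    # Gaussian tail bounds at x = 4 (assumed form of the bounds used in 1.7):
    # (x^-1 - x^-3) * phi(x) <= P(chi >= x) <= x^-1 * phi(x)
    x = 4.0
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)  # standard normal density
    upper = phi / x                    # = 3.3458e-05
    lower = (1 / x - 1 / x**3) * phi   # = (15/16) * upper = 3.1366e-05
    print(f"upper = {upper:.4e}, lower = {lower:.4e}, ratio = {lower / upper}")

The printed ratio is exactly 15/16 = 0.9375, matching the remark in the solution.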
1.8 The intervals (F(x−), F(x)), x ∈ R, are disjoint, and each one that is nonempty contains a rational number.

1.9 Let F̂^{−1}(x) = sup{y : F(y) ≤ x} and note that F(F̂^{−1}(x)) = x when F is continuous. This inverse wears a hat since it is different from the one defined in the proof of (1.2). To prove the result now, note that
  P(F(X) ≤ x) = P(X ≤ F̂^{−1}(x)) = F(F̂^{−1}(x)) = x

1.10 If y ∈ (g(α), g(β)) then P(g(X) ≤ y) = P(X ≤ g^{−1}(y)) = F(g^{−1}(y)). Differentiating with respect to y gives the desired result.

1.11 If g(x) = e^x then g^{−1}(x) = log x and g′(g^{−1}(x)) = x, so using the formula in the previous exercise gives the density (2π)^{−1/2} e^{−(log x)²/2}/x.

1.12 (i) Let F(x) = P(X ≤ x). Then P(X² ≤ y) = F(√y) − F(−√y) for y > 0. Differentiating, we see that X² has density function
  (f(√y) + f(−√y))/2√y
(ii) In the case of the normal this reduces to (2πy)^{−1/2} e^{−y/2}.

1.2 Random Variables

2.1 Let G be the smallest σ-field containing X^{−1}(A). Since σ(X) is a σ-field containing X^{−1}(A), we must have G ⊂ σ(X), and hence G = {{X ∈ B} : B ∈ F} for some S ⊃ F ⊃ A. However, if G is a σ-field then we can assume F is. Since A generates S, it follows that F = S.

2.2 If X_1 + X_2 < x then there are rational numbers r_i with r_1 + r_2 < x and X_i < r_i, so
  {X_1 + X_2 < x} = ∪_{r_1, r_2 ∈ Q : r_1 + r_2 < x} {X_1 < r_1} ∩ {X_2 < r_2}

2.4 For any open set G, f(x) = inf{|x − y| : y ∈ G^c} is continuous and {f > 0} = G, so we need all the open sets to make all the continuous functions measurable.

2.5 If f is l.s.c. and x_n is a sequence of points that converge to x and have f(x_n) ≤ a, then f(x) ≤ a; i.e., {x : f(x) ≤ a} is closed. To argue the converse, note that if {y : f(y) > a} is open for each a ∈ R and f(x) > a, then it is impossible to have a sequence of points x_n → x with f(x_n) ≤ a, so lim inf_{y→x} f(y) ≥ a, and since a < f(x) is arbitrary, f is l.s.c. The measurability of l.s.c. functions now follows from Example 2.1. For the other type, note that if f is u.s.c. then −f is l.s.c., so −f is measurable and hence f = −(−f) is as well.

2.6 In view of the previous exercise we can show f^δ is l.s.c. by showing {x : f^δ(x) > a} is open for each a ∈ R. To do this we note that if f^δ(x) > a then there is an ε > 0 and a z with |z − x| < δ − ε so that f(z) > a, but then if |y − x| < ε we have f^δ(y) > a. A similar argument shows that {x : f_δ(x) < a} is open for each a ∈ R, so f_δ is u.s.c. The measurability of f^0 and f_0 now follows from (2.5). The measurability of {f^0 = f_0} follows from the fact that f^0 − f_0 is measurable.

2.7 Clearly the class of F-measurable functions contains the simple functions and by (2.5) is closed under pointwise limits. To complete the proof it suffices to observe that any f ∈ F is the pointwise limit of the simple functions f_n = (−n) ∨ ([2^n f]/2^n) ∧ n.

2.8 Clearly the collection of functions of the form f(X) contains the simple functions measurable with respect to σ(X). To show that it is closed under pointwise limits, suppose f_n(X) → Z and let f(x) = lim sup_n f_n(x). Since f(X) = lim sup_n f_n(X), it follows that Z = f(X). Since any f(X) is the pointwise limit of simple functions, the desired result follows from the previous exercise.

2.9 Note that for fixed n the B_{m,n} form a partition of R and B_{m,n} = B_{2m,n+1} ∪ B_{2m+1,n+1}. If we write f_n(x) out in binary, then as n → ∞ we get more digits in the expansion but do not change any of the old ones, so lim_n f_n(x) = f(x) exists. Since |f_n(X(ω)) − Y(ω)| ≤ 2^{−n} and f_n(X(ω)) → f(X(ω)) for all ω, Y = f(X).
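The truncation used in solution 2.7 above is concrete enough to run. A minimal sketch (the test function is an arbitrary choice of mine, not from the manual): f_n = (−n) ∨ ([2^n f]/2^n) ∧ n takes finitely many values and converges pointwise, with error at most 2^{−n} wherever |f| ≤ n.

    import math

    def simple_approx(f, n):
        # f_n = (-n) v (floor(2^n f) / 2^n) ^ n, as in solution 2.7
        return lambda x: max(-n, min(n, math.floor(2**n * f(x)) / 2**n))

    f = lambda x: math.sin(5 * x) + x / 3   # arbitrary test function, |f| <= 2 on [-3, 3]
    for n in (1, 2, 4, 8):
        fn = simple_approx(f, n)
        err = max(abs(fn(k / 100) - f(k / 100)) for k in range(-300, 301))
        print(n, err)   # once n >= sup|f|, the error drops below 2^-n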
1.3 Expected Value

3.1 X − Y ≥ 0 and E(X − Y) = EX − EY = 0, so using (3.4) it follows that P(X − Y ≥ ε) = 0 for all ε > 0.

3.2 (3.1c) is trivial if EX = ∞ or EY = −∞. When EX^+ < ∞ and EY^− < ∞, we have E|X|, E|Y| < ∞, since EX^− ≤ EY^− and EY^+ ≤ EX^+. To prove (3.1a) we can without loss of generality suppose EX^−, EY^− < ∞ and also that EX^+ = ∞ (for if E|X|, E|Y| < ∞ the result follows from the theorem). In this case E(X + Y)^− ≤ EX^− + EY^− < ∞ and E(X + Y)^+ ≥ EX^+ − EY^− = ∞, so E(X + Y) = ∞ = EX + EY. To prove (3.1b) we note that it is easy to see that if a ≠ 0 then E(aX) = aEX. To complete the proof it suffices to show that if EY = ∞ then E(Y + b) = ∞, which is obvious if b ≥ 0 and easy to prove by contradiction if b < 0.

3.3 Recall the proof of (5.2) in the Appendix: we let ℓ(x) ≤ φ(x) be a linear function with ℓ(EX) = φ(EX) and note that Eφ(X) ≥ Eℓ(X) = ℓ(EX). If equality holds, then Exercise 3.1 implies that φ(X) = ℓ(X) a.s. When φ is strictly convex we have φ(x) > ℓ(x) for x ≠ EX, so we must have X = EX a.s.

3.4 There is a linear function
  ψ(x) = φ(EX_1, ..., EX_n) + Σ_{i=1}^n c_i (x_i − EX_i)
so that φ(x) ≥ ψ(x) for all x. Taking expected values and using (3.1c) gives the desired result.

3.5 (i) Let P(X = a) = P(X = −a) = b²/2a² and P(X = 0) = 1 − (b²/a²).
(ii) As a → ∞ we have a² 1_{(|X| ≥ a)} → 0 a.s. Since all these random variables are smaller than X², the desired result follows from the dominated convergence theorem.

3.6 (i) First note that EY = EX and var(Y) = var(X) imply EY² = EX², and since φ(x) = (x + b)² is a quadratic, Eφ(Y) = Eφ(X). Applying (3.4) we have
  P(Y ≥ a) ≤ Eφ(Y)/(a + b)² = Eφ(X)/(a + b)² = p
(ii) By (i) we want to find p, b > 0 so that ap − b(1 − p) = 0 and a²p + b²(1 − p) = σ². Looking at the answer we can guess p = σ²/(σ² + a²), pick b = σ²/a so that EX = 0, and then check that EX² = σ².

3.7 (i) Let P(X = n) = P(X = −n) = 1/2n² and P(X = 0) = 1 − 1/n² for n ≥ 1.
(ii) Let P(X = −ε) = 1 − 1/n and P(X = b) = 1/n. To have EX = 0 and var(X) = σ² we need −ε(1 − 1/n) + b(1/n) = 0 and ε²(1 − 1/n) + b²(1/n) = σ². The first equation implies ε = b/(n − 1). Using this in the second we get
  σ² = b²/(n(n − 1)) + b²/n = b²/(n − 1)

3.8 Cauchy-Schwarz implies (E[Y 1_{(Y > a)}])² ≤ EY² · P(Y > a). The left-hand side is larger than (EY − a)², so rearranging gives the desired result.

3.9 EX_n^{2/α} = n²(1/n − 1/(n + 1)) = n/(n + 1) ≤ 1. If Y ≥ X_n for all n, then Y ≥ n^α on (1/(n + 1), 1/n), but then EY ≥ Σ_{n=1}^∞ n^{α−1}/(n + 1) = ∞ since α ≥ 1.

3.10 If g = 1_A this follows from the definition. Linearity of integration extends the result to simple functions, and then monotone convergence gives the result for nonnegative functions. Finally, by taking positive and negative parts, we get the result for integrable functions.

3.11 To see that 1_{∪_i A_i} = 1 − Π_{i=1}^n (1 − 1_{A_i}), note that the product is zero if and only if ω ∈ A_i for some i. Expanding out the product gives
  1 − Π_{i=1}^n (1 − 1_{A_i}) = Σ_i 1_{A_i} − Σ_{i<j} 1_{A_i} 1_{A_j} + ··· + (−1)^{n+1} 1_{A_1} ··· 1_{A_n}
and taking expected values gives the inclusion-exclusion formula.
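The indicator identity in 3.11 can be verified exhaustively on a small space. A minimal sketch (the 4-point space and triples of subsets are my choice of test case):

    from itertools import combinations, product

    pts = range(4)
    subsets = [set(s) for r in range(5) for s in combinations(pts, r)]

    def identity_holds(A):
        # 1_{union A_i} = sum over nonempty S of (-1)^{|S|+1} prod_{i in S} 1_{A_i}
        n = len(A)
        for w in pts:
            lhs = 1 if any(w in Ai for Ai in A) else 0
            rhs = sum((-1) ** (r + 1) * all(w in A[i] for i in idx)
                      for r in range(1, n + 1) for idx in combinations(range(n), r))
            if lhs != rhs:
                return False
        return True

    assert all(identity_holds(A) for A in product(subsets, repeat=3))
    print("inclusion-exclusion identity verified for all 16^3 triples of subsets")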
7.7 CLT's for Dependent Variables

7.1 On {ζ_n = i} we have E(X_{n+1} | G_n) = ∫ x dH_i(x) = 0 and E(X_{n+1}² | G_n) = ∫ x² dH_i(x) = σ_i². The ergodic theorem for Markov chains (Example 2.2, or Exercise 5.2 in Chapter 5) implies that
  n^{−1} Σ_{m=1}^n σ²(ζ_m) → Σ_x σ²(x) π(x) a.s.

7.2 Let η_n = 1_{(ξ_n = H, ξ_{n+1} = T)}, so that P(η_n = 1) = 1/4, and let X_n = η_n − 1/4. Since (X_n) is 1-dependent, the formula in Example 7.1 implies σ² = EX_0² + 2E(X_0X_1). Here EX_0² = var(η_0) = (1/4)(3/4), since η_0 is Bernoulli(1/4). For the other term we note E(X_0X_1) = E[(η_0 − 1/4)(η_1 − 1/4)] = −1/16, since E(η_0η_1) = 0 and Eη_i = 1/4. Combining things we have σ² = 3/16 − 2/16 = 1/16. To identify Y_0 we use the formula from the proof and the fact that X_1 is independent of F_{−1} to conclude
  Y_0 = X_0 − E(X_0 | F_{−1}) + E(X_1 | F_0) − EX_1 = 1_{(ξ_0 = H, ξ_1 = T)} − (1/2) 1_{(ξ_0 = H)} + (1/2) 1_{(ξ_1 = H)} − 1/4

7.3 The Markov property implies
  E(X_0 | F_{−n}) = Σ_j p^{n−1}(ζ_{−n}, j) µ_j
Since the Markov chain is irreducible with a finite state space, combining Exercise 5.10 with the fact that Σ_i π(i) µ_i = 0 shows there are constants 0 < γ, C < ∞ so that
  sup_i |Σ_j p^{n−1}(i, j) µ_j| ≤ C e^{−γn}

7.8 Empirical Distributions, Brownian Bridge

8.1 Exercise 4.1 implies that
  P(max_{0≤t≤1} B_t > b, −ε < B_1 < ε) = P(2b − ε < B_1 < 2b + ε)
Since P(|B_1| < ε) ∼ 2ε · (2π)^{−1/2}, it follows that
  P(max_{0≤t≤1} B_t > b | −ε < B_1 < ε) → e^{−(2b)²/2}

7.9 Laws of the Iterated Logarithm

9.1 Letting f(t) = 2(1 + ε) log log log t and using a familiar formula from the proof of (9.1),
  P_0(B_{t_k} > (t_k f(t_k))^{1/2}) ∼ κ f(t_k)^{−1/2} exp(−(1 + ε) log k)
The right-hand side is summable, so
  lim sup_{k→∞} B_{t_k}/(2 t_k log log log t_k)^{1/2} ≤ 1
For a bound in the other direction, take g(t) = 2(1 − ε) log log log t and note that
  P_0(B_{t_k} − B_{t_{k−1}} > ((t_k − t_{k−1}) g(t_k))^{1/2}) ∼ κ g(t_k)^{−1/2} exp(−(1 − ε) log k)
The sum of the right-hand side is ∞ and the events on the left are independent, so
  P_0(B_{t_k} − B_{t_{k−1}} > ((t_k − t_{k−1}) g(t_k))^{1/2} i.o.) = 1
Combining this with the result for the lim sup and noting t_{k−1}/t_k → 0, the desired result follows easily.

9.2 E|X_i|^α = ∞ implies Σ_{n=1}^∞ P(|X_i| > Cn^{1/α}) = ∞ for any C. Using the second Borel-Cantelli lemma we see that lim sup_{n→∞} |X_n|/n^{1/α} ≥ C, i.e., the lim sup is ∞. Since max{|S_n|, |S_{n−1}|} ≥ |X_n|/2, it follows that lim sup_{n→∞} |S_n|/n^{1/α} = ∞.

9.3 (9.1) implies that
  lim sup_{n→∞} S_n/(2n log log n)^{1/2} = 1 and lim inf_{n→∞} S_n/(2n log log n)^{1/2} = −1
so the limit set is contained in [−1, 1]. On the other hand, Σ P(|X_n| > ε√n) < ∞ for any ε, so X_n/√n → 0. This shows that the differences (S_{n+1} − S_n)/√n → 0, so as S_n/(2n log log n)^{1/2} wanders back and forth between 1 and −1, it fills up the entire interval.
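The variance constant in solution 7.2 above admits an exact finite check: with η_i the indicator that tosses (i, i+1) read HT, var(η_0 + ··· + η_{n−1}) = (n + 2)/16, so var/n → 1/16 = σ². A minimal sketch that enumerates all equally likely toss sequences (the closed form (n + 2)/16 is my own arithmetic from the covariances quoted in the solution):

    from itertools import product

    def ht_count_variance(n):
        # exact variance of the number of HT patterns in n+1 fair tosses
        counts = [sum(s[i] == "H" and s[i + 1] == "T" for i in range(n))
                  for s in product("HT", repeat=n + 1)]
        mean = sum(counts) / len(counts)
        return sum((c - mean) ** 2 for c in counts) / len(counts)

    for n in (1, 2, 5, 10):
        print(n, ht_count_variance(n), (n + 2) / 16)   # columns agree; /n -> 1/16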
Appendix: Measure Theory

A.1 Lebesgue-Stieltjes Measures

1.1 (i) If A, B ∈ ∪_i F_i then A, B ∈ F_n for some n, so A^c, A ∪ B ∈ F_n.
(ii) Let Ω = [0, 1) and F_n = σ({[m/2^n, (m + 1)/2^n) : 0 ≤ m < 2^n}). Then σ(∪_i F_i) is the Borel subsets of [0, 1), but [0, 1/3) ∉ ∪_i F_i.

1.2 If A has asymptotic density θ then A^c has asymptotic density 1 − θ. However, the collection of sets with an asymptotic density is not closed under unions. To prove this, note that if A has the property that |{2k − 1, 2k} ∩ A| = 1 for all integers k, then A has asymptotic density 1/2. Let A consist of the odd integers between (2k − 1)! and (2k)! and the even integers between (2k)! and (2k + 1)!, and let B = 2Z. Then
  lim sup_{n→∞} |(A ∪ B) ∩ {1, 2, ..., n}|/n = 1 and lim inf_{n→∞} |(A ∪ B) ∩ {1, 2, ..., n}|/n = 1/2

1.3 (i) B = A ∪ (B − A) with the union disjoint, so µ(B) = µ(A) + µ(B − A) ≥ µ(A).
(ii) Let A'_n = A_n ∩ A, B_1 = A'_1, and for n > 1 let B_n = A'_n − ∪_{m=1}^{n−1} A'_m. Since the B_n are disjoint, have union A, and satisfy B_m ⊂ A_m, using (i) we have
  µ(A) = Σ_{m=1}^∞ µ(B_m) ≤ Σ_{m=1}^∞ µ(A_m)
(iii) Let B_n = A_n − A_{n−1}. Then the B_n are disjoint and have ∪_{m=1}^∞ B_m = A and ∪_{m=1}^n B_m = A_n, so
  µ(A) = Σ_{m=1}^∞ µ(B_m) = lim_{n→∞} Σ_{m=1}^n µ(B_m) = lim_{n→∞} µ(A_n)
(iv) A_1 − A_n ↑ A_1 − A, so (iii) implies µ(A_1 − A_n) ↑ µ(A_1 − A). Since µ(A_1 − B) = µ(A_1) − µ(B), it follows that µ(A_n) ↓ µ(A).

1.4 µ(Z) = 1 but µ({n}) = 0 for all n, and Z = ∪_n {n}, so µ is not countably additive on σ(A).

1.5 By fixing the sets in coordinates 2, ..., d it is easy to see that σ(R_o^d) ⊃ R × R_o × ··· × R_o, and iterating gives the desired result.

A.2 Carathéodory's Extension Theorem

2.1 Let C = {{1, 2}, {2, 3}}. Let µ be counting measure and let ν(A) = 2 if 2 ∈ A and ν(A) = 0 otherwise. Then µ and ν agree on C but not on {2}.

A.3 Completion, etc.

3.1 By (3.1) there are A_i ∈ A so that ∪_i A_i ⊃ B and Σ_i µ(A_i) ≤ µ(B) + ε/2. Pick I so that Σ_{i>I} µ(A_i) < ε/2 and let A = ∪_{i≤I} A_i. Since B ⊂ ∪_i A_i, we have B − A ⊂ ∪_{i>I} A_i and hence µ(B − A) ≤ µ(∪_{i>I} A_i) ≤ ε/2. To bound the other difference we note that A − B ⊂ (∪_i A_i) − B and ∪_i A_i ⊃ B, so µ(A − B) ≤ µ(∪_i A_i) − µ(B) ≤ ε/2.

3.2 (i) For each rational r, let E_r = r + D_q. The E_r are disjoint subsets of (0, 1], so Σ_r µ(E_r) ≤ 1; but µ(E_r) = µ(D_q) for every r, so µ(D_q) = 0.
(ii) By translating A we can suppose without loss of generality that µ(A ∩ (0, 1]) > 0. For each rational q let A_q = A ∩ B_q. If every A_q were measurable, then µ(A_q) = 0 by (i) and µ(A ∩ (0, 1]) = Σ_q µ(A_q) = 0, a contradiction.

3.3 (i) Write the rotated rectangle B as {(x, y) : a ≤ x ≤ b, f(x) ≤ y ≤ g(x)} where f and g are piecewise linear. Subdividing [a, b] into n equal pieces, using the upper Riemann sum for g and the lower Riemann sum for f, and then letting n → ∞, we conclude that λ*(B) = λ(A).
(ii) By covering D with the appropriate rotations and translations of the sets used to cover C, we conclude λ*(D) ≤ λ*(C). Interchanging the roles of C and D proves that equality holds.

A.4 Integration

4.1 Let A_δ = {x : f(x) > δ} and note that A_δ ↑ A_0 as δ ↓ 0. If µ(A_0) > 0 then µ(A_δ) > 0 for some δ > 0. If µ(A_δ) > 0 then µ(A_δ ∩ [−m, m]) > 0 for some m. Letting h(x) = δ on A_δ ∩ [−m, m] and h(x) = 0 otherwise, we have
  ∫ f dµ ≥ ∫ h dµ = δ µ(A_δ ∩ [−m, m]) > 0
a contradiction.

4.2 Let g = Σ_{m=1}^∞ (m/2^n) 1_{E_{n,m}}. Since g ≤ f, (iv) in (4.5) implies
  Σ_{m=1}^∞ (m/2^n) µ(E_{n,m}) = ∫ g dµ ≤ ∫ f dµ
so
  lim sup_{n→∞} Σ_{m=1}^∞ (m/2^n) µ(E_{n,m}) ≤ ∫ f dµ
For the other inequality, let h be a member of the class used to define the integral; that is, 0 ≤ h ≤ f, h is bounded, and H = {x : h(x) > 0} has µ(H) < ∞. Since
  g + (1/2^n) 1_H ≥ f 1_H ≥ h
using (iv) in (4.5) again we have
  Σ_{m=1}^∞ (m/2^n) µ(E_{n,m}) + (1/2^n) µ(H) ≥ ∫ h dµ
Letting n → ∞ now gives
  lim inf_{n→∞} Σ_{m=1}^∞ (m/2^n) µ(E_{n,m}) ≥ ∫ h dµ
Since h is an arbitrary member of the defining class, the desired result follows.
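The oscillation constructed in solution 1.2 is visible numerically at small factorials. A minimal sketch (the checkpoints and range are my choices; the limits 1 and 1/2 are only approached as the blocks grow):

    from math import factorial
    from bisect import bisect_left

    # A = odd integers in blocks ((2k-1)!, (2k)!], even integers in ((2k)!, (2k+1)!]
    facts = [factorial(j) for j in range(1, 11)]

    def in_A(m):
        j = bisect_left(facts, m)   # m lies in the block (j!, (j+1)!]
        return m % 2 == j % 2       # odd-index blocks take odds, even-index take evens

    count, checkpoints = 0, {factorial(j) for j in range(4, 10)}
    for m in range(1, factorial(9) + 1):
        if in_A(m) or m % 2 == 0:   # membership in A u B, with B = 2Z
            count += 1
        if m in checkpoints:
            print(f"n = {m:>6d}   running density of A u B = {count / m:.3f}")

The printed densities alternate (roughly 0.87, 0.57, 0.93, 0.56, 0.95, 0.55 at n = 4!, ..., 9!), heading toward the lim sup 1 along n = (2k)! and the lim inf 1/2 along n = (2k + 1)!.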
4.3 (i) Since ∫ |g − (φ − ψ)| dµ ≤ ∫ |g^+ − φ| dµ + ∫ |g^− − ψ| dµ, it suffices to prove the result when g ≥ 0. Using Exercise 4.2 we can pick n large enough so that if E_{n,m} = {x : m/2^n ≤ g(x) < (m + 1)/2^n} and h = Σ_{m=1}^∞ (m/2^n) 1_{E_{n,m}}, then ∫ (g − h) dµ < ε/2. Since Σ_m (m/2^n) µ(E_{n,m}) = ∫ h dµ < ∞, we can pick M so that Σ_{m>M} (m/2^n) µ(E_{n,m}) < ε/2. If we let φ = Σ_{m=1}^M (m/2^n) 1_{E_{n,m}}, then
  ∫ (g − φ) dµ = ∫ (g − h) dµ + ∫ (h − φ) dµ < ε
(ii) Pick sets A_m that are finite unions of open intervals with µ(A_m ∆ E_{n,m}) small enough that Σ_{m=1}^M (m/2^n) µ(A_m ∆ E_{n,m}) ≤ ε, and let q = Σ_{m=1}^M (m/2^n) 1_{A_m}. This sum is equal to Σ_{j=1}^k c_j 1_{(a_{j−1}, a_j)} almost everywhere (i.e., except at the endpoints of the intervals) for some a_0 < a_1 < ··· < a_k and c_j ∈ R, and
  ∫ |φ − q| dµ ≤ Σ_{m=1}^M (m/2^n) µ(A_m ∆ E_{n,m}) ≤ ε
(iii) To make the function continuous, replace each c_j 1_{(a_{j−1}, a_j)} by a function r_j that is 0 on (a_{j−1}, a_j)^c, equal to c_j on [a_{j−1} + δ_j, a_j − δ_j], and linear otherwise. If we let r(x) = Σ_{j=1}^k r_j(x), then
  ∫ |q − r| dx = Σ_{j=1}^k δ_j |c_j| < ε
if we take δ_j |c_j| < ε/k.

4.4 Suppose g(x) = c 1_{(a,b)}(x). In this case
  ∫ g(x) cos nx dx = c ∫_a^b cos nx dx = (c/n)(sin nb − sin na)
so the absolute value of the integral is at most 2|c|/n and hence → 0. Linearity extends the last result to step functions. Using Exercise 4.3 we can approximate g by a step function q so that ∫ |g − q| dx < ε. Since |cos nx| ≤ 1, the triangle inequality implies
  |∫ g(x) cos nx dx| ≤ |∫ q(x) cos nx dx| + ε
so the lim sup of the left-hand side is at most ε, and since ε is arbitrary the proof is complete.

4.5 (a) does not imply (b): let f = 1_{[0,1]}. This function is continuous except at 0 and 1, but if g = f a.e. then g will be discontinuous at 0 and 1. (b) does not imply (a): f = 1_Q, where Q is the rationals, is equal a.e. to the continuous function ≡ 0. However, 1_Q is not continuous anywhere.

4.6 Let E_m^n = {ω : x_{m−1}^n ≤ f(ω) < x_m^n}, ψ_n = x_{m−1}^n on E_m^n, and φ_n = x_m^n on E_m^n. Then ψ_n ≤ f ≤ φ_n ≤ ψ_n + mesh(σ_n), so (iv) in (4.7) implies
  ∫ ψ_n dµ ≤ ∫ f dµ ≤ ∫ φ_n dµ ≤ ∫ ψ_n dµ + mesh(σ_n) µ(Ω)
It follows from the last inequality that if we have a sequence of partitions with mesh(σ_n) → 0, then
  L̄(σ_n) = ∫ ψ_n dµ and Ū(σ_n) = ∫ φ_n dµ both converge to ∫ f dµ

A.5 Properties of the Integral

5.1 If |g| ≤ M a.e. then |fg| ≤ M|f| a.e., and (iv) in (4.7) implies ∫ |fg| dµ ≤ M ∫ |f| dµ. Taking the inf over admissible M gives the desired result.

5.2 If µ({x : |f(x)| > M}) = 0 then ∫ |f|^p dµ ≤ M^p, so lim sup_{p→∞} ||f||_p ≤ M. On the other hand, if µ({x : |f(x)| > N}) = δ > 0 then ∫ |f|^p dµ ≥ δN^p, so lim inf_{p→∞} ||f||_p ≥ N. Taking the inf over M and the sup over N gives the desired result.

5.3 (i) Since |f + g| ≤ |f| + |g|, we have
  ∫ |f + g|^p dx ≤ ∫ |f| |f + g|^{p−1} dx + ∫ |g| |f + g|^{p−1} dx ≤ ||f||_p · || |f + g|^{p−1} ||_q + ||g||_p · || |f + g|^{p−1} ||_q
Now q = p/(p − 1), so || |f + g|^{p−1} ||_q = (∫ |f + g|^p dx)^{1/q}, and dividing each side of the first display by (∫ |f + g|^p dx)^{1/q} gives the desired result.
(ii) Since |f + g| ≤ |f| + |g|, (iv) and (iii) of (4.7) imply that
  ∫ |f + g| dx ≤ ∫ |f| dx + ∫ |g| dx
It is easy to see that if µ{x : |f(x)| ≥ M} = 0 and µ{x : |g(x)| ≥ N} = 0, then µ{x : |f(x) + g(x)| ≥ M + N} = 0. Taking the inf over M and N we have ||f + g||_∞ ≤ ||f||_∞ + ||g||_∞.

5.4 If σ_n is a sequence of partitions with mesh(σ_n) → 0, then f^{σ_n}(x) → f(x) at all points of continuity of f, so the bounded convergence theorem implies
  U(σ_n) = ∫_{[a,b]} f^{σ_n}(x) dx → ∫_{[a,b]} f(x) dx
A similar argument applies to the lower Riemann sum and completes the proof.

5.5 Since 0 ≤ (g_n + g_1^−) ↑ (g + g_1^−), the monotone convergence theorem implies ∫ (g_n + g_1^−) dµ ↑ ∫ (g + g_1^−) dµ. Since ∫ g_1^− dµ < ∞, we can subtract it from both sides and use (ii) of (4.5) to get the desired result.

5.6 Σ_{m=0}^n g_m ↑ Σ_{m=0}^∞ g_m, so the monotone convergence theorem implies
  ∫ Σ_{m=0}^∞ g_m dµ = lim_{n→∞} ∫ Σ_{m=0}^n g_m dµ = lim_{n→∞} Σ_{m=0}^n ∫ g_m dµ = Σ_{m=0}^∞ ∫ g_m dµ

5.7 (i) follows from the monotone convergence theorem.
(ii) Let f = |g| and pick n so that ∫ |g| dµ − ∫ |g| ∧ n dµ < ε/2. Then let δ < ε/(2n). Now if µ(A) < δ,
  ∫_A |g| dµ ≤ ∫ |g| − (|g| ∧ n) dµ + ∫_A |g| ∧ n dµ < ε/2 + µ(A) n < ε

5.8 Σ_{m=0}^n f 1_{E_m} → f 1_E and is dominated by the integrable function |f|, so the dominated convergence theorem implies
  ∫_E f dµ = lim_{n→∞} Σ_{m=0}^n ∫_{E_m} f dµ

5.9 If x_n → c ∈ (a, b) then f 1_{[a,x_n]} → f 1_{[a,c]} a.e. and is dominated by |f|, so the dominated convergence theorem implies g(x_n) → g(c).
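The decay in solution 4.4 is easy to observe numerically, even for a g that is not a step function. A minimal sketch (the test function, grid size, and use of composite Simpson's rule are all my choices):

    import math

    def simpson(h, a, b, m=20000):
        # composite Simpson's rule with m (even) subintervals
        dx = (b - a) / m
        odd = sum(h(a + (2 * i - 1) * dx) for i in range(1, m // 2 + 1))
        even = sum(h(a + 2 * i * dx) for i in range(1, m // 2))
        return (h(a) + h(b) + 4 * odd + 2 * even) * dx / 3

    g = lambda x: x * x
    for n in (1, 10, 100, 1000):
        print(n, simpson(lambda x: g(x) * math.cos(n * x), 0.0, 1.0))
        # the integral of g(x) cos(nx) shrinks like 1/n, consistent with 4.4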
5.10 First suppose f ≥ 0. Let φ_n(x) = m/2^n on {x : m/2^n ≤ f(x) < (m + 1)/2^n} for 0 ≤ m < n2^n, and φ_n(x) = 0 otherwise. As n ↑ ∞, φ_n(x) ↑ f(x), so the dominated convergence theorem implies ∫ |f − φ_n|^p dµ → 0. To extend to the general case, let φ_n^+ approximate f^+, let φ_n^− approximate f^−, and let φ_n = φ_n^+ − φ_n^−; then
  ∫ |f − φ_n|^p dµ = ∫ |f^+ − φ_n^+|^p dµ + ∫ |f^− − φ_n^−|^p dµ → 0

5.11 Exercise 5.6 implies
  ∫ Σ_n |f_n| dµ = Σ_n ∫ |f_n| dµ < ∞
so Σ_n |f_n| < ∞ a.e. and
  g_n = Σ_{m=1}^n f_m → g = Σ_{m=1}^∞ f_m a.e.
The dominated convergence theorem implies ∫ g_n dµ → ∫ g dµ. To finish the proof we notice that (iv) of (4.7) implies
  ∫ g_n dµ = Σ_{m=1}^n ∫ f_m dµ
and we have Σ_{m=1}^∞ |∫ f_m dµ| ≤ Σ_{m=1}^∞ ∫ |f_m| dµ < ∞, so
  Σ_{m=1}^n ∫ f_m dµ → Σ_{m=1}^∞ ∫ f_m dµ

A.6 Product Measures, Fubini's Theorem

6.1 The first step is to observe that A × B_o ⊂ σ(A_o × B_o), so σ(A_o × B_o) = A × B. Since A_o × B_o is closed under intersection, uniqueness follows from (2.2).

6.2 f ≥ 0, so
  ∫ f d(µ_1 × µ_2) = ∫_X ( ∫_Y f(x, y) µ_2(dy) ) µ_1(dx) < ∞
This shows f is integrable, and the result follows from (6.2).

6.3 Let Y = [0, ∞), B = the Borel subsets, and λ = Lebesgue measure. Let f(x, y) = 1{(x,y): 0 [...]
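Solution 6.3 is cut off above, but it is evidently constructing a function whose iterated integrals disagree. A minimal discrete analogue of that phenomenon (my example, not necessarily the manual's f): a(n, n) = 1 and a(n + 1, n) = −1, with Σ |a| = ∞, so Fubini's theorem does not apply.

    def a(m, n):
        # illustrative example: a(n, n) = 1, a(n+1, n) = -1, and 0 otherwise
        return 1 if m == n else (-1 if m == n + 1 else 0)

    # Sum the inner index far enough (M) that each inner sum is exact,
    # then truncate the outer index at N.
    N, M = 100, 102
    rows_first = sum(sum(a(m, n) for n in range(M)) for m in range(N))
    cols_first = sum(sum(a(m, n) for m in range(M)) for n in range(N))
    print(rows_first, cols_first)   # 1 and 0: the two orders of summation differ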
8.9 Let A_k = {|S_{m,k}| > 2a, |S_{m,j}| ≤ 2a for m ≤ j < k} and let G_k = {|S_{k,n}| ≤ a}. Since the A_k are disjoint, A_k ∩ G_k ⊂ {|S_{m,n}| > a}, and A_k and G_k are independent,
  P(|S_{m,n}| > a) ≥ Σ_{k=m+1}^n P(A_k ∩ G_k) = Σ_{k=m+1}^n P(A_k) P(G_k) ≥ min_{m<k≤n} P(G_k) · Σ_{k=m+1}^n P(A_k)
and since ∪_{k=m+1}^n A_k = {max_{m<k≤n} |S_{m,k}| > 2a}, rearranging gives the desired inequality (∗).

7.1 Enumerate the indicators 1_{[m/2^n, (m+1)/2^n)}, 0 ≤ m < 2^n, n ≥ 1, as a single sequence X_k. Then X_k → 0 in probability but X_{N(n)} ≡ 1.

7.2 Let S_n = X_1 + ··· + X_n, T_n = Y_1 + ··· + Y_n, and N(t) = sup{n : S_n + T_n ≤ t}. Then
  S_{N(t)}/(S_{N(t)+1} + T_{N(t)+1}) ≤ R_t/t ≤ S_{N(t)+1}/(S_{N(t)} + T_{N(t)})
To handle the left-hand side we note that
  S_{N(t)}/(S_{N(t)+1} + T_{N(t)+1}) = (S_{N(t)}/N(t)) · (N(t)/(N(t) + 1)) · ((N(t) + 1)/(S_{N(t)+1} + T_{N(t)+1})) → EX_1 · 1 · 1/(EX_1 + EY_1)
A similar argument handles the right-hand side and completes the proof.

4.13 Let X_1, X_2, X_3, X_4 be independent and take values 1 and −1 with probability 1/2 each ...

4.14 Let A_1 consist of the set {1, 2} and A_2 consist of the sets {1, 3} and {1, 4}. Clearly A_1 and A_2 are independent, but σ(A_2) is the set of all subsets, so σ(A_1) and σ(A_2) are not independent.

4.15 {X + Y = n} = ∪_m {X = m, Y = n − m}. The events on the right-hand side are disjoint, so using independence,
  P(X + Y = n) = Σ_m P(X = m) P(Y = n − m)

4.16 Using 4.15, some arithmetic, and then the binomial theorem gives the desired result.

8.11 Let S_{k,n} = S_n − S_k. Convergence of S_n/n to 0 in probability and |S_{k,n}| ≤ |S_k| + |S_n| imply that if ε > 0 then min_{0≤k≤n} P(|S_{k,n}| ≤ nε) → 1 as n → ∞. Since P(|S_n| > nε) → 0, (∗) with m = 0 implies
  P(max_{0<j≤n} |S_j| > 2nε) → 0

2.13 Let Y be a random variable with distribution µ. Exercise 2.5 implies that if α < β then EY^α = 1. If we let γ ∈ (α, β), we have EY^γ = 1 = (EY^α)^{γ/α}, so for the random variable Y^α and the convex function φ(x) = (x^+)^{γ/α} we have equality in Jensen's inequality, and Exercise 3.3 in Chapter 1 implies Y^α = 1 a.s.

2.14 Suppose there is a sequence of random variables with P(|X_n| > y) → 0, EX_n² = 1, and ...

5.1 Using this estimate and the fact that Σ_{m=1}^n (2m − 1) = n²,
  E(S_n/n − ν_n)² = n^{−2} Σ_{m=1}^n var(X_m) ≤ A/n + ε
Since ε is arbitrary, this shows the L² convergence of S_n/n − ν_n to 0, and convergence in probability follows from (5.3).

5.2 Let ε > 0 and pick K so that if k ≥ K then r(k) ≤ ε. Noting that Cauchy-Schwarz implies EX_iX_j ≤ (EX_i² EX_j²)^{1/2} = r(0), and breaking the sum into |i − j| ≤ K and |i − j| > K ...

If m < n and ℓ = n − m, our fact implies that if 0 ≤ m < n < K, then for each 0 ≤ i, j < K there is exactly one pair 0 ≤ x, y < K so that x + my = i and x + ny = j. This shows
  P(X + mY = i, X + nY = j) = 1/K² = P(X + mY = i) P(X + nY = j)
so the variables are pairwise independent.

4.6 (i) ... if ε > 0 is sufficiently small, we have P(X_m ∈ [0, ε], X_n ∈ [a, b]) = 0 < P(X_m ∈ [0, ε]) P(X_n ∈ [a, b]).

4.7 (i) Using (4.9) with z = 0 and then with z < 0, letting z ↑ 0, and using ...
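The pairing of 4.15 and 4.16 suggests the standard convolution computation for binomials, which can be checked directly. A minimal sketch (that 4.16 concerns sums of independent binomials is my reading of the truncated fragment above):

    from math import comb

    def binom_pmf(n, p, k):
        return comb(n, k) * p**k * (1 - p) ** (n - k) if 0 <= k <= n else 0.0

    # Convolution identity from 4.15 applied to X ~ Bin(7, .3), Y ~ Bin(5, .3):
    # P(X + Y = k) should match the Bin(12, .3) pmf term by term.
    n, m, p = 7, 5, 0.3
    for k in range(n + m + 1):
        conv = sum(binom_pmf(n, p, j) * binom_pmf(m, p, k - j) for j in range(k + 1))
        assert abs(conv - binom_pmf(n + m, p, k)) < 1e-12
    print("Bin(7,.3) convolved with Bin(5,.3) equals Bin(12,.3)")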
This is a very good textbook for probability theory. I have used previous editions as a textbook for my probability theory course, and recently bought the 5th edition and read through it. The 5th edition corrects mistakes in previous editions and has some more beautiful examples. Probability: Theory and Examples is a classic introduction to probability theory for beginning graduate students, covering laws of large numbers, central limit theorems, random walks, martingales, Markov chains, ergodic theorems, and Brownian motion. It is a comprehensive treatment concentrating on the results that are the most useful for applications.

This book is an introduction to probability theory covering laws of large numbers, central limit theorems, random walks, martingales, Markov chains, ergodic theorems, and Brownian motion. It is a comprehensive treatment concentrating on the results that are the most useful for applications. Its philosophy is that the best way to learn probability is to see it in action.
Pdf, published August 30th 2010 by Cambridge University Press
Jun 19, 2014 - Jerzy rated it really liked it
Good book full of helpful examples and exercises. I used this for a class along with Probability & Measure Theory and they complement each other well. We used this one more later in the course, since it covers less of the underlying measure theory but has more interesting examples in probability theory as such.
Some of the notation was a bit nonstandard (compared to the other book and our course notes) but still fairly easy to follow.
Something that took me a while to understand, mainly because I had given up on reading this book, is that to read it you should already have had a measure theory course. In fact it's mandatory. Simple theorems (I don't remember well, but I think an example of those was int h(x) dF_X = int h(x) f_X dx, when the distribution has a density) are not in the 1st chapter, and are constantly used beyond it. Here lies the major problem with the book. The author was lazy and put forth the appendix as a chapter...
Pros:
1. Classical and widely used textbook for graduate students
2. Clear structure. Divides probability theory into laws of large numbers, central limit theorems, random walks, martingales, Markov chains, ergodic theorems and Brownian motion, covering the important basic topics in graduate probability theory.
Cons:
1. Some typos
2. Students use the 4th edition while some professors may use the 3rd edition. It is a bit annoying.
3. Some proofs and explanations are not easy to understand