Watson's lemma

In mathematics, Watson's lemma, proved by G. N. Watson (1918, p. 133), has significant applications in the theory of the asymptotic behavior of integrals.

Statement of the lemma

Let $0 < T \leq \infty$ be fixed. Assume $\varphi(t) = t^{\lambda}\,g(t)$, where $g(t)$ has an infinite number of derivatives in the neighborhood of $t = 0$, with $g(0) \neq 0$, and $\lambda > -1$.

Suppose, in addition, either that

$$|\varphi(t)| < Ke^{bt} \quad \forall\, t > 0,$$

where $K, b$ are independent of $t$, or that

$$\int_0^T |\varphi(t)|\,\mathrm{d}t < \infty.$$

Then it is true, for all positive $x$, that

$$\left|\int_0^T e^{-xt}\varphi(t)\,\mathrm{d}t\right| < \infty$$

and that the following asymptotic equivalence holds:

$$\int_0^T e^{-xt}\varphi(t)\,\mathrm{d}t \sim \sum_{n=0}^{\infty} \frac{g^{(n)}(0)\,\Gamma(\lambda+n+1)}{n!\,x^{\lambda+n+1}}, \qquad (x > 0,\ x \to \infty).$$

See, for instance, Watson (1918) for the original proof or Miller (2006) for a more recent development.
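
As an illustrative numerical check of the expansion (not part of the original treatments), one can take $g(t) = 1/(1+t)$, $\lambda = 0$ and $T = \infty$, so that $g^{(n)}(0) = (-1)^n\,n!$ and the series reduces to $\sum_{n\geq 0} (-1)^n n!\,x^{-n-1}$. The sketch below compares the integral, evaluated with SciPy's quad routine, against a few partial sums; the particular choice of $g$ and of the evaluation points is only for illustration.

```python
# Numerical sanity check of Watson's lemma for phi(t) = g(t) = 1/(1+t),
# i.e. lambda = 0 and T = infinity.  Here g^(n)(0) = (-1)^n n!, so the
# asymptotic series is sum_n (-1)^n n! / x^(n+1).
import math
from scipy.integrate import quad

def lhs(x):
    # Left-hand side: integral_0^infinity e^{-xt} / (1 + t) dt
    val, _ = quad(lambda t: math.exp(-x * t) / (1.0 + t), 0, math.inf)
    return val

def partial_sum(x, N):
    # First N + 1 terms of the asymptotic series from Watson's lemma
    return sum((-1) ** n * math.factorial(n) / x ** (n + 1) for n in range(N + 1))

for x in [5.0, 10.0, 50.0]:
    exact = lhs(x)
    approx = partial_sum(x, N=3)
    print(f"x = {x:5.1f}  integral = {exact:.10f}  4-term series = {approx:.10f}")
# Agreement improves as x grows, as the lemma predicts.
```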

Proof

We will prove the version of Watson's lemma which assumes that $|\varphi(t)|$ has at most exponential growth as $t \to \infty$. The basic idea behind the proof is that we will approximate $g(t)$ by finitely many terms of its Taylor series. Since the derivatives of $g$ are only assumed to exist in a neighborhood of the origin, we will essentially proceed by removing the tail of the integral, applying Taylor's theorem with remainder in the remaining small interval, then adding the tail back on in the end. At each step we will carefully estimate how much we are throwing away or adding on. This proof is a modification of the one found in Miller (2006).

Let $0 < T \leq \infty$ and suppose that $\varphi$ is a measurable function of the form $\varphi(t) = t^{\lambda}g(t)$, where $\lambda > -1$ and $g$ has an infinite number of continuous derivatives in the interval $[0,\delta]$ for some $0 < \delta < T$, and that $|\varphi(t)| \leq Ke^{bt}$ for all $\delta \leq t \leq T$, where the constants $K$ and $b$ are independent of $t$.

We can show that the integral is finite for $x$ large enough by writing

$$(1)\quad \int_0^T e^{-xt}\varphi(t)\,\mathrm{d}t = \int_0^{\delta} e^{-xt}\varphi(t)\,\mathrm{d}t + \int_{\delta}^T e^{-xt}\varphi(t)\,\mathrm{d}t$$

and estimating each term.

For the first term we have

$$\left|\int_0^{\delta} e^{-xt}\varphi(t)\,\mathrm{d}t\right| \leq \int_0^{\delta} e^{-xt}|\varphi(t)|\,\mathrm{d}t \leq \int_0^{\delta} |\varphi(t)|\,\mathrm{d}t$$

for $x \geq 0$, where the last integral is finite by the assumptions that $g$ is continuous on the interval $[0,\delta]$ and that $\lambda > -1$. For the second term we use the assumption that $\varphi$ is exponentially bounded to see that, for $x > b$,

$$\begin{aligned}\left|\int_{\delta}^T e^{-xt}\varphi(t)\,\mathrm{d}t\right| &\leq \int_{\delta}^T e^{-xt}|\varphi(t)|\,\mathrm{d}t \\ &\leq K\int_{\delta}^T e^{(b-x)t}\,\mathrm{d}t \\ &\leq K\int_{\delta}^{\infty} e^{(b-x)t}\,\mathrm{d}t \\ &= K\,\frac{e^{(b-x)\delta}}{x-b}.\end{aligned}$$

The finiteness of the original integral then follows from applying the triangle inequality to $(1)$.

We can deduce from the above calculation that

$$(2)\quad \int_0^T e^{-xt}\varphi(t)\,\mathrm{d}t = \int_0^{\delta} e^{-xt}\varphi(t)\,\mathrm{d}t + O\left(x^{-1}e^{-\delta x}\right)$$

as $x \to \infty$.

By appealing to Taylor's theorem with remainder we know that, for each integer $N \geq 0$,

$$g(t) = \sum_{n=0}^{N} \frac{g^{(n)}(0)}{n!}\,t^n + \frac{g^{(N+1)}(t^*)}{(N+1)!}\,t^{N+1}$$

for $0 \leq t \leq \delta$, where $0 \leq t^* \leq t$. Plugging this into the first term in $(2)$ we get

$$\begin{aligned}(3)\quad \int_0^{\delta} e^{-xt}\varphi(t)\,\mathrm{d}t &= \int_0^{\delta} e^{-xt}t^{\lambda}g(t)\,\mathrm{d}t \\ &= \sum_{n=0}^{N} \frac{g^{(n)}(0)}{n!}\int_0^{\delta} t^{\lambda+n}e^{-xt}\,\mathrm{d}t + \frac{1}{(N+1)!}\int_0^{\delta} g^{(N+1)}(t^*)\,t^{\lambda+N+1}e^{-xt}\,\mathrm{d}t.\end{aligned}$$

To bound the term involving the remainder we use the assumption that $g^{(N+1)}$ is continuous on the interval $[0,\delta]$, and in particular it is bounded there. As such we see that

$$\begin{aligned}\left|\int_0^{\delta} g^{(N+1)}(t^*)\,t^{\lambda+N+1}e^{-xt}\,\mathrm{d}t\right| &\leq \sup_{t\in[0,\delta]}\left|g^{(N+1)}(t)\right| \int_0^{\delta} t^{\lambda+N+1}e^{-xt}\,\mathrm{d}t \\ &< \sup_{t\in[0,\delta]}\left|g^{(N+1)}(t)\right| \int_0^{\infty} t^{\lambda+N+1}e^{-xt}\,\mathrm{d}t \\ &= \sup_{t\in[0,\delta]}\left|g^{(N+1)}(t)\right|\,\frac{\Gamma(\lambda+N+2)}{x^{\lambda+N+2}}.\end{aligned}$$

Here we have used the fact that

$$\int_0^{\infty} t^a e^{-xt}\,\mathrm{d}t = \frac{\Gamma(a+1)}{x^{a+1}}$$

if $x > 0$ and $a > -1$, where $\Gamma$ is the gamma function.
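
This identity follows from the substitution $u = xt$; the short computation is included here for completeness:

$$\int_0^{\infty} t^a e^{-xt}\,\mathrm{d}t = \int_0^{\infty} \left(\frac{u}{x}\right)^a e^{-u}\,\frac{\mathrm{d}u}{x} = \frac{1}{x^{a+1}}\int_0^{\infty} u^a e^{-u}\,\mathrm{d}u = \frac{\Gamma(a+1)}{x^{a+1}}.$$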

From the above calculation we see from $(3)$ that

$$(4)\quad \int_0^{\delta} e^{-xt}\varphi(t)\,\mathrm{d}t = \sum_{n=0}^{N} \frac{g^{(n)}(0)}{n!}\int_0^{\delta} t^{\lambda+n}e^{-xt}\,\mathrm{d}t + O\left(x^{-\lambda-N-2}\right)$$

as $x \to \infty$.

We will now add the tails on to each integral in $(4)$. For each $n$ we have

$$\begin{aligned}\int_0^{\delta} t^{\lambda+n}e^{-xt}\,\mathrm{d}t &= \int_0^{\infty} t^{\lambda+n}e^{-xt}\,\mathrm{d}t - \int_{\delta}^{\infty} t^{\lambda+n}e^{-xt}\,\mathrm{d}t \\[5pt] &= \frac{\Gamma(\lambda+n+1)}{x^{\lambda+n+1}} - \int_{\delta}^{\infty} t^{\lambda+n}e^{-xt}\,\mathrm{d}t,\end{aligned}$$

and we will show that the remaining integrals are exponentially small. Indeed, if we make the change of variables $t = s + \delta$ we get

$$\begin{aligned}\int_{\delta}^{\infty} t^{\lambda+n}e^{-xt}\,\mathrm{d}t &= \int_0^{\infty} (s+\delta)^{\lambda+n}e^{-x(s+\delta)}\,\mathrm{d}s \\[5pt] &= e^{-\delta x}\int_0^{\infty} (s+\delta)^{\lambda+n}e^{-xs}\,\mathrm{d}s \\[5pt] &\leq e^{-\delta x}\int_0^{\infty} (s+\delta)^{\lambda+n}e^{-s}\,\mathrm{d}s\end{aligned}$$

for $x \geq 1$, so that

$$\int_0^{\delta} t^{\lambda+n}e^{-xt}\,\mathrm{d}t = \frac{\Gamma(\lambda+n+1)}{x^{\lambda+n+1}} + O\left(e^{-\delta x}\right) \text{ as } x \to \infty.$$

If we substitute this last result into $(4)$ we find that

$$\begin{aligned}\int_0^{\delta} e^{-xt}\varphi(t)\,\mathrm{d}t &= \sum_{n=0}^{N} \frac{g^{(n)}(0)\,\Gamma(\lambda+n+1)}{n!\,x^{\lambda+n+1}} + O\left(e^{-\delta x}\right) + O\left(x^{-\lambda-N-2}\right) \\ &= \sum_{n=0}^{N} \frac{g^{(n)}(0)\,\Gamma(\lambda+n+1)}{n!\,x^{\lambda+n+1}} + O\left(x^{-\lambda-N-2}\right)\end{aligned}$$

as $x \to \infty$. Finally, substituting this into $(2)$ we conclude that

$$\begin{aligned}\int_0^T e^{-xt}\varphi(t)\,\mathrm{d}t &= \sum_{n=0}^{N} \frac{g^{(n)}(0)\,\Gamma(\lambda+n+1)}{n!\,x^{\lambda+n+1}} + O\left(x^{-\lambda-N-2}\right) + O\left(x^{-1}e^{-\delta x}\right) \\ &= \sum_{n=0}^{N} \frac{g^{(n)}(0)\,\Gamma(\lambda+n+1)}{n!\,x^{\lambda+n+1}} + O\left(x^{-\lambda-N-2}\right)\end{aligned}$$

as $x \to \infty$.

Since this last expression is true for each integer $N \geq 0$ we have thus shown that

$$\int_0^T e^{-xt}\varphi(t)\,\mathrm{d}t \sim \sum_{n=0}^{\infty} \frac{g^{(n)}(0)\,\Gamma(\lambda+n+1)}{n!\,x^{\lambda+n+1}}$$

as $x \to \infty$, where the infinite series is interpreted as an asymptotic expansion of the integral in question.
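
To illustrate what this asymptotic statement means in practice, the following sketch (using the same illustrative choice $g(t) = 1/(1+t)$, $\lambda = 0$ as in the earlier check, which is not from the original text) fixes $N$ and verifies numerically that the truncation error behaves like $O(x^{-\lambda-N-2})$.

```python
# Observe the truncation error of the (N+1)-term expansion for
# phi(t) = 1/(1+t) (lambda = 0): the proof predicts an error of O(x^{-N-2}).
import math
from scipy.integrate import quad

def integral(x):
    # integral_0^infinity e^{-xt} / (1 + t) dt
    val, _ = quad(lambda t: math.exp(-x * t) / (1.0 + t), 0, math.inf)
    return val

def series(x, N):
    # First N + 1 terms of the asymptotic series
    return sum((-1) ** n * math.factorial(n) / x ** (n + 1) for n in range(N + 1))

N = 2
for x in [20.0, 40.0]:
    err = abs(integral(x) - series(x, N))
    print(f"x = {x:5.1f}  error = {err:.3e}  x^(N+2) * error = {err * x ** (N + 2):.4f}")
# Doubling x leaves x^(N+2) * error roughly bounded, consistent with
# error = O(x^{-N-2}) for fixed N.
```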

Example

When $0 < a < b$, the confluent hypergeometric function of the first kind has the integral representation

$${}_1F_1(a,b,x) = \frac{\Gamma(b)}{\Gamma(a)\Gamma(b-a)}\int_0^1 e^{xt}t^{a-1}(1-t)^{b-a-1}\,\mathrm{d}t,$$

where $\Gamma$ is the gamma function. The change of variables $t = 1 - s$ puts this into the form

$${}_1F_1(a,b,x) = \frac{\Gamma(b)}{\Gamma(a)\Gamma(b-a)}\,e^x\int_0^1 e^{-xs}(1-s)^{a-1}s^{b-a-1}\,\mathrm{d}s,$$

which is now amenable to the use of Watson's lemma. Taking $\lambda = b - a - 1$ and $g(s) = (1-s)^{a-1}$, Watson's lemma tells us that

$$\int_0^1 e^{-xs}(1-s)^{a-1}s^{b-a-1}\,\mathrm{d}s \sim \Gamma(b-a)\,x^{a-b} \quad \text{as } x \to \infty \text{ with } x > 0,$$

which allows us to conclude that

$${}_1F_1(a,b,x) \sim \frac{\Gamma(b)}{\Gamma(a)}\,x^{a-b}e^x \quad \text{as } x \to \infty \text{ with } x > 0.$$
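
As a quick numerical sanity check of this asymptotic formula, the sketch below compares SciPy's implementation of ${}_1F_1$ against $\Gamma(b)\,x^{a-b}e^x/\Gamma(a)$; the parameter values $a = 1.5$, $b = 4$ are arbitrary illustrative choices, not part of the original text.

```python
# Compare 1F1(a, b, x) with its leading asymptotic Gamma(b)/Gamma(a) * x^(a-b) * e^x
# for large positive x, as predicted by Watson's lemma (here a = 1.5, b = 4.0).
import numpy as np
from scipy.special import hyp1f1, gamma

a, b = 1.5, 4.0
for x in [10.0, 20.0, 40.0]:
    exact = hyp1f1(a, b, x)
    asymptotic = gamma(b) / gamma(a) * x ** (a - b) * np.exp(x)
    print(f"x = {x:5.1f}  ratio exact/asymptotic = {exact / asymptotic:.6f}")
# The ratio should approach 1 as x grows.
```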

References

  • Miller, P. D. (2006), Applied Asymptotic Analysis, Providence, RI: American Mathematical Society, p. 467, ISBN 978-0-8218-4078-8.
  • Watson, G. N. (1918), "The harmonic functions associated with the parabolic cylinder", Proceedings of the London Mathematical Society, Series 2, vol. 17, pp. 116–148, doi:10.1112/plms/s2-17.1.116.
  • Ablowitz, M. J.; Fokas, A. S. (2003), Complex Variables: Introduction and Applications, Cambridge: Cambridge University Press.