Uniform convergence

A sequence of functions (fₙ) converges uniformly to f when, for every arbitrarily small ε > 0, there is an index N such that the graph of fₙ lies in the ε-tube around f whenever n ≥ N.

In the mathematical field of analysis, uniform convergence is a mode of convergence of functions stronger than pointwise convergence, in the sense that the convergence is uniform over the domain. A sequence of functions $(f_n)$ converges uniformly to a limiting function $f$ on a set $E$ as the function domain if, given any arbitrarily small positive number $\epsilon$, a number $N$ can be found such that each of the functions $f_N, f_{N+1}, f_{N+2}, \ldots$ differs from $f$ by no more than $\epsilon$ at every point $x$ in $E$. Described informally, if $f_n$ converges to $f$ uniformly, then how quickly the functions $f_n$ approach $f$ is "uniform" throughout $E$ in the following sense: in order to guarantee that $f_n(x)$ differs from $f(x)$ by less than a chosen distance $\epsilon$, we only need to make sure that $n$ is larger than or equal to a certain $N$, which we can find without knowing the value of $x \in E$ in advance. In other words, there exists a number $N = N(\epsilon)$ that could depend on $\epsilon$ but is independent of $x$, such that choosing $n \geq N$ will ensure that $|f_n(x) - f(x)| < \epsilon$ for all $x \in E$. In contrast, pointwise convergence of $f_n$ to $f$ merely guarantees that for any $x \in E$ given in advance, we can find $N = N(\epsilon, x)$ (i.e., $N$ could depend on the values of both $\epsilon$ and $x$) such that, for that particular $x$, $f_n(x)$ falls within $\epsilon$ of $f(x)$ whenever $n \geq N$ (and a different $x$ may require a different, larger $N$ for $n \geq N$ to guarantee that $|f_n(x) - f(x)| < \epsilon$).

The difference between uniform convergence and pointwise convergence was not fully appreciated early in the history of calculus, leading to instances of faulty reasoning. The concept, which was first formalized by Karl Weierstrass, is important because several properties of the functions $f_n$, such as continuity, Riemann integrability, and, with additional hypotheses, differentiability, are transferred to the limit $f$ if the convergence is uniform, but not necessarily if the convergence is not uniform.

History

In 1821 Augustin-Louis Cauchy published a proof that a convergent sum of continuous functions is always continuous, to which Niels Henrik Abel in 1826 found purported counterexamples in the context of Fourier series, arguing that Cauchy's proof had to be incorrect. Completely standard notions of convergence did not exist at the time, and Cauchy handled convergence using infinitesimal methods. When put into the modern language, what Cauchy proved is that a uniformly convergent sequence of continuous functions has a continuous limit. The fact that a merely pointwise-convergent sequence of continuous functions may fail to have a continuous limit illustrates the importance of distinguishing between different types of convergence when handling sequences of functions.[1]

The term uniform convergence was probably first used by Christoph Gudermann, in an 1838 paper on elliptic functions, where he employed the phrase "convergence in a uniform way" when the "mode of convergence" of a series $\sum_{n=1}^{\infty} f_n(x, \phi, \psi)$ is independent of the variables $\phi$ and $\psi$. While he thought it a "remarkable fact" when a series converged in this way, he did not give a formal definition, nor use the property in any of his proofs.[2]

Later Gudermann's pupil Karl Weierstrass, who attended his course on elliptic functions in 1839–1840, coined the term gleichmäßig konvergent (German: uniformly convergent) which he used in his 1841 paper Zur Theorie der Potenzreihen, published in 1894. Independently, similar concepts were articulated by Philipp Ludwig von Seidel[3] and George Gabriel Stokes. G. H. Hardy compares the three definitions in his paper "Sir George Stokes and the concept of uniform convergence" and remarks: "Weierstrass's discovery was the earliest, and he alone fully realized its far-reaching importance as one of the fundamental ideas of analysis."

Under the influence of Weierstrass and Bernhard Riemann this concept and related questions were intensely studied at the end of the 19th century by Hermann Hankel, Paul du Bois-Reymond, Ulisse Dini, Cesare Arzelà and others.

Definition

We first define uniform convergence for real-valued functions, although the concept is readily generalized to functions mapping to metric spaces and, more generally, uniform spaces (see below).

Suppose $E$ is a set and $(f_n)_{n \in \mathbb{N}}$ is a sequence of real-valued functions on it. We say the sequence $(f_n)_{n \in \mathbb{N}}$ is uniformly convergent on $E$ with limit $f : E \to \mathbb{R}$ if for every $\epsilon > 0$, there exists a natural number $N$ such that for all $n \geq N$ and for all $x \in E$

$$|f_n(x) - f(x)| < \epsilon.$$

The notation for uniform convergence of $f_n$ to $f$ is not quite standardized and different authors have used a variety of symbols, including (in roughly decreasing order of popularity):

$$f_n \rightrightarrows f, \quad \underset{n\to\infty}{\mathrm{unif\ lim}}\, f_n = f, \quad f_n \overset{\mathrm{unif.}}{\longrightarrow} f, \quad f = \mathrm{u}\text{-}\lim_{n\to\infty} f_n.$$

Frequently, no special symbol is used, and authors simply write

$$f_n \to f \quad \text{uniformly}$$

to indicate that convergence is uniform. (In contrast, the expression $f_n \to f$ on $E$ without an adverb is taken to mean pointwise convergence on $E$: for all $x \in E$, $f_n(x) \to f(x)$ as $n \to \infty$.)

Since $\mathbb{R}$ is a complete metric space, the Cauchy criterion can be used to give an equivalent alternative formulation for uniform convergence: $(f_n)_{n \in \mathbb{N}}$ converges uniformly on $E$ (in the previous sense) if and only if for every $\epsilon > 0$, there exists a natural number $N$ such that

$$x \in E,\ m, n \geq N \implies |f_m(x) - f_n(x)| < \epsilon.$$

In yet another equivalent formulation, if we define

$$d_n = \sup_{x \in E} |f_n(x) - f(x)|,$$

then $f_n$ converges to $f$ uniformly if and only if $d_n \to 0$ as $n \to \infty$. Thus, we can characterize uniform convergence of $(f_n)_{n \in \mathbb{N}}$ on $E$ as (simple) convergence of $(f_n)_{n \in \mathbb{N}}$ in the function space $\mathbb{R}^E$ with respect to the uniform metric (also called the supremum metric), defined by

$$d(f, g) = \sup_{x \in E} |f(x) - g(x)|.$$

Symbolically,

$$f_n \rightrightarrows f \iff d(f_n, f) \to 0.$$
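The supremum-metric characterization is easy to probe numerically. The following Python sketch (an informal illustration, not part of the article; the helper `sup_metric`, the finite sample grid, and the comparison sequence $x/n$ are assumptions made here for demonstration) approximates $d(f_n, f)$ on a grid of sample points for two sequences on $[0,1]$: $x/n$, which converges uniformly to $0$, and $x^n$, which converges only pointwise to its limit, as worked out in the Examples section below.

```python
# A rough numerical illustration (not from the article): approximate the
# uniform (supremum) metric d(f, g) = sup_x |f(x) - g(x)| on a finite grid.
# Grid maxima only approximate the true supremum and can underestimate it
# when the largest deviation occurs between the sample points.
import numpy as np

def sup_metric(f, g, xs):
    """Approximate d(f, g) = sup |f(x) - g(x)| over the sample points xs."""
    return np.max(np.abs(f(xs) - g(xs)))

xs = np.linspace(0.0, 1.0, 100_001)                 # dense grid on [0, 1]
f_limit = lambda x: np.where(x < 1.0, 0.0, 1.0)     # pointwise limit of x**n
zero = lambda x: np.zeros_like(x)

for n in [1, 5, 20, 100]:
    d_pointwise_only = sup_metric(lambda x: x**n, f_limit, xs)   # x**n on [0, 1]
    d_uniform = sup_metric(lambda x: x / n, zero, xs)            # x/n on [0, 1]
    print(f"n={n:4d}  sup|x^n - f| ~ {d_pointwise_only:.3f}   sup|x/n - 0| = {d_uniform:.3f}")
# The first column stays near 1 (the true supremum is 1 for every n), so d_n
# does not tend to 0 and x**n does not converge uniformly on [0, 1]; the
# second column is 1/n -> 0, so x/n converges to 0 uniformly on [0, 1].
```

Because a finite grid can only approximate the true supremum, the printed values are indicative rather than exact; for $x^n$ the exact value of $d_n$ is $1$ for every $n$.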

The sequence $(f_n)_{n \in \mathbb{N}}$ is said to be locally uniformly convergent with limit $f$ if $E$ is a metric space and for every $x \in E$, there exists an $r > 0$ such that $(f_n)$ converges uniformly on $B(x, r) \cap E$. It is clear that uniform convergence implies local uniform convergence, which implies pointwise convergence.

Notes

Intuitively, a sequence of functions $f_n$ converges uniformly to $f$ if, given an arbitrarily small $\epsilon > 0$, we can find an $N \in \mathbb{N}$ so that the functions $f_n$ with $n > N$ all fall within a "tube" of width $2\epsilon$ centered around $f$ (i.e., between $f(x) - \epsilon$ and $f(x) + \epsilon$) for the entire domain of the function.

Note that interchanging the order of quantifiers in the definition of uniform convergence by moving "for all $x \in E$" in front of "there exists a natural number $N$" results in the definition of pointwise convergence of the sequence. To make this difference explicit: in the case of uniform convergence, $N = N(\epsilon)$ can only depend on $\epsilon$, and the choice of $N$ has to work for all $x \in E$, for a specific value of $\epsilon$ that is given. In contrast, in the case of pointwise convergence, $N = N(\epsilon, x)$ may depend on both $\epsilon$ and $x$, and the choice of $N$ only has to work for the specific values of $\epsilon$ and $x$ that are given. Thus uniform convergence implies pointwise convergence; however, the converse is not true, as the example in the section below illustrates.

Generalizations

One may straightforwardly extend the concept to functions $E \to M$, where $(M, d)$ is a metric space, by replacing $|f_n(x) - f(x)|$ with $d(f_n(x), f(x))$.

The most general setting is the uniform convergence of nets of functions $E \to X$, where $X$ is a uniform space. We say that the net $(f_\alpha)$ converges uniformly with limit $f : E \to X$ if and only if for every entourage $V$ in $X$, there exists an $\alpha_0$ such that for every $x$ in $E$ and every $\alpha \geq \alpha_0$, the pair $(f_\alpha(x), f(x))$ is in $V$. In this situation, the uniform limit of continuous functions remains continuous.

Definition in a hyperreal setting

Uniform convergence admits a simplified definition in a hyperreal setting. Thus, a sequence $f_n$ converges to $f$ uniformly if for all hyperreal $x$ in the domain of $f^*$ and all infinite $n$, $f_n^*(x)$ is infinitely close to $f^*(x)$ (see microcontinuity for a similar definition of uniform continuity). In contrast, pointwise convergence requires this only for real $x$.

Examples

For $x \in [0, 1)$, a basic example of uniform convergence can be illustrated as follows: the sequence $(1/2)^{x+n}$ converges uniformly, while $x^n$ does not. Specifically, assume $\epsilon = 1/4$. Each function $(1/2)^{x+n}$ is less than or equal to $1/4$ when $n \geq 2$, regardless of the value of $x$. On the other hand, $x^n$ is only less than or equal to $1/4$ at ever-increasing values of $n$ when values of $x$ are selected closer and closer to $1$ (explained in more depth further below).
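As a quick numeric check of this comparison (an informal sketch, not part of the article; the brute-force search helper `first_n` and the particular sample points are illustrative assumptions), one can tabulate, for several values of $x$, the first index $n$ at which each sequence dips to $1/4$ or below:

```python
# Informal check of the example above with epsilon = 1/4:
# (1/2)**(x+n) <= 1/4 already for n >= 2, whatever x in [0, 1) is,
# while the first n with x**n <= 1/4 grows without bound as x -> 1.
eps = 0.25

def first_n(f, eps, n_max=10_000):
    """Smallest n <= n_max with f(n) <= eps, or None if there is none."""
    for n in range(1, n_max + 1):
        if f(n) <= eps:
            return n
    return None

for x in [0.0, 0.5, 0.9, 0.99, 0.999]:
    n_half = first_n(lambda n: 0.5 ** (x + n), eps)   # uniform: bound independent of x
    n_pow = first_n(lambda n: x ** n, eps)            # pointwise only: bound depends on x
    print(f"x={x:6.3f}   (1/2)^(x+n) <= 1/4 from n={n_half}   x^n <= 1/4 from n={n_pow}")
# Expected pattern: the first column is at most 2 for every x, while the
# second column grows roughly like log(1/4)/log(x) as x approaches 1.
```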

Given a topological space X, we can equip the space of bounded real or complex-valued functions over X with the uniform norm topology, with the uniform metric defined by

$$d(f, g) = \|f - g\|_\infty = \sup_{x \in X} |f(x) - g(x)|.$$

Then uniform convergence simply means convergence in the uniform norm topology:

$$\lim_{n\to\infty} \|f_n - f\|_\infty = 0.$$

The sequence of functions $(f_n)$

$$\begin{cases} f_n : [0, 1] \to [0, 1] \\ f_n(x) = x^n \end{cases}$$

is a classic example of a sequence of functions that converges to a function $f$ pointwise but not uniformly. To show this, we first observe that the pointwise limit of $(f_n)$ as $n \to \infty$ is the function $f$, given by

$$f(x) = \lim_{n\to\infty} f_n(x) = \begin{cases} 0, & x \in [0, 1); \\ 1, & x = 1. \end{cases}$$

Pointwise convergence: Convergence is trivial for $x = 0$ and $x = 1$, since $f_n(0) = f(0) = 0$ and $f_n(1) = f(1) = 1$ for all $n$. For $x \in (0, 1)$ and given $\epsilon > 0$, we can ensure that $|f_n(x) - f(x)| < \epsilon$ whenever $n \geq N$ by choosing $N = \lceil \log\epsilon / \log x \rceil$, which is the smallest integer exponent for which $x^n$ reaches or dips below $\epsilon$ (here $\lceil \cdot \rceil$ denotes the ceiling function, i.e., rounding up). Hence, $f_n \to f$ pointwise for all $x \in [0, 1]$. Note that the choice of $N$ depends on the value of $\epsilon$ and $x$. Moreover, for a fixed choice of $\epsilon$, $N$ (which cannot be chosen smaller) grows without bound as $x$ approaches $1$. These observations preclude the possibility of uniform convergence.
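To make the growth of $N(\epsilon, x)$ concrete, the following short sketch (an illustrative aside, not part of the original text; the particular values of $\epsilon$ and of the sample points $x$ are arbitrary choices) evaluates the formula above for a fixed $\epsilon$ and points approaching $1$:

```python
# Evaluate N(eps, x) = ceil(log(eps) / log(x)) from the pointwise argument
# above, for the sequence x**n on (0, 1).  For a fixed eps the required index
# grows without bound as x approaches 1, which is what rules out a single
# x-independent N and hence uniform convergence.
import math

eps = 0.01
for x in [0.5, 0.9, 0.99, 0.999, 0.9999]:
    N = math.ceil(math.log(eps) / math.log(x))
    print(f"x={x:7.4f}   N(eps, x) = {N:6d}   x**N = {x**N:.4g} <= eps")
```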

Non-uniformity of convergence: The convergence is not uniform, because we can find an $\epsilon > 0$ so that no matter how large we choose $N$, there will be values of $x \in [0, 1]$ and $n \geq N$ such that $|f_n(x) - f(x)| \geq \epsilon$. To see this, first observe that regardless of how large $n$ becomes, there is always an $x_0 \in [0, 1)$ such that $f_n(x_0) = 1/2$. Thus, if we choose $\epsilon = 1/4$, we can never find an $N$ such that $|f_n(x) - f(x)| < \epsilon$ for all $x \in [0, 1]$ and $n \geq N$. Explicitly, whatever candidate we choose for $N$, consider the value of $f_N$ at $x_0 = (1/2)^{1/N}$. Since

$$\left|f_N(x_0) - f(x_0)\right| = \left|\left[\left(\tfrac{1}{2}\right)^{\frac{1}{N}}\right]^N - 0\right| = \frac{1}{2} > \frac{1}{4} = \epsilon,$$

the candidate fails because we have found an example of an $x \in [0, 1]$ that "escaped" our attempt to "confine" each $f_n\ (n \geq N)$ to within $\epsilon$ of $f$ for all $x \in [0, 1]$. In fact, it is easy to see that

$$\lim_{n\to\infty} \|f_n - f\|_\infty = 1,$$

contrary to the requirement that $\|f_n - f\|_\infty \to 0$ if $f_n \rightrightarrows f$.
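The escape point $x_0 = (1/2)^{1/N}$ and the claim $\|f_n - f\|_\infty = 1$ can also be checked numerically; the following sketch (an informal illustration with arbitrarily chosen values of $N$) only approximates the supremum on a finite grid:

```python
# Informal check of the non-uniformity argument: at x0 = (1/2)**(1/N) the
# deviation |f_N(x0) - f(x0)| equals 1/2 for every N, and the grid maximum of
# |x**N - f(x)| on [0, 1) creeps toward 1 (the true supremum is exactly 1).
import numpy as np

xs = np.linspace(0.0, 1.0, 1_000_001)[:-1]   # grid on [0, 1), where f = 0

for N in [1, 10, 100, 1000]:
    x0 = 0.5 ** (1.0 / N)
    escape = abs(x0 ** N - 0.0)              # always exactly 1/2
    grid_sup = np.max(xs ** N)               # finite-grid estimate of ||f_N - f||_inf
    print(f"N={N:5d}   |f_N(x0) - f(x0)| = {escape:.3f}   grid sup ~ {grid_sup:.4f}")
```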

In this example one can easily see that pointwise convergence does not preserve differentiability or continuity. While each function of the sequence is smooth, that is to say that for all $n$, $f_n \in C^\infty([0, 1])$, the limit $\lim_{n\to\infty} f_n$ is not even continuous.

Exponential function

The series expansion of the exponential function can be shown to be uniformly convergent on any bounded subset $S \subset \mathbb{C}$ using the Weierstrass M-test.

Theorem (Weierstrass M-test). Let $(f_n)$ be a sequence of functions $f_n : E \to \mathbb{C}$ and let $M_n$ be a sequence of positive real numbers such that $|f_n(x)| \leq M_n$ for all $x \in E$ and $n = 1, 2, 3, \ldots$ If $\sum_n M_n$ converges, then $\sum_n f_n$ converges absolutely and uniformly on $E$.

The complex exponential function can be expressed as the series:

$$\sum_{n=0}^{\infty} \frac{z^n}{n!}.$$

Any bounded subset is a subset of some disc $D_R$ of radius $R$, centered on the origin in the complex plane. The Weierstrass M-test requires us to find an upper bound $M_n$ on the terms of the series, with $M_n$ independent of the position in the disc:

$$\left|\frac{z^n}{n!}\right| \leq M_n, \quad \forall z \in D_R.$$

To do this, we notice

$$\left|\frac{z^n}{n!}\right| \leq \frac{|z|^n}{n!} \leq \frac{R^n}{n!}$$

and take $M_n = \tfrac{R^n}{n!}$.

If $\sum_{n=0}^{\infty} M_n$ is convergent, then the M-test asserts that the original series is uniformly convergent.

The ratio test can be used here:

$$\lim_{n\to\infty} \frac{M_{n+1}}{M_n} = \lim_{n\to\infty} \frac{R^{n+1}}{R^n}\,\frac{n!}{(n+1)!} = \lim_{n\to\infty} \frac{R}{n+1} = 0$$

which means the series over $M_n$ is convergent. Thus the original series converges uniformly for all $z \in D_R$, and since $S \subset D_R$, the series is also uniformly convergent on $S$.
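The M-test bound can also be probed numerically. The sketch below (an informal check, not part of the article; the radius, the sample points, and the truncation indices are arbitrary choices) compares the worst-case error of a truncated exponential series over sample points of the disc $|z| \leq R$ with the tail $\sum_{n > N} R^n / n!$ of the majorant series:

```python
# Informal numerical check of the Weierstrass M-test bound for exp(z) on |z| <= R:
# the uniform error of the partial sum  sum_{n<=N} z^n/n!  is at most the tail
# sum_{n>N} R^n/n!  of the majorant series M_n = R^n/n!.
import math
import numpy as np

R = 3.0
# sample points on the disc |z| <= R (two circles, one of them the boundary)
thetas = np.linspace(0.0, 2 * np.pi, 721)
zs = np.concatenate([r * np.exp(1j * thetas) for r in (0.5 * R, R)])

def partial_sum(z, N):
    return sum(z ** n / math.factorial(n) for n in range(N + 1))

for N in [5, 10, 15, 20]:
    worst_error = max(abs(np.exp(z) - partial_sum(z, N)) for z in zs)
    tail = sum(R ** n / math.factorial(n) for n in range(N + 1, N + 60))
    print(f"N={N:3d}   max |exp(z) - S_N(z)| = {worst_error:.3e}   M-test tail bound = {tail:.3e}")
# In each row the observed worst-case error never exceeds the tail bound
# (they essentially coincide at the real point z = R, where every term of the
# remainder has the same phase), and both tend to 0, consistent with uniform
# convergence on the disc.
```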

Properties

  • Every uniformly convergent sequence is locally uniformly convergent.
  • Every locally uniformly convergent sequence is compactly convergent.
  • For locally compact spaces local uniform convergence and compact convergence coincide.
  • A sequence of continuous functions on a metric space, taking values in a complete metric space, is uniformly convergent if and only if it is uniformly Cauchy.
  • If $S$ is a compact interval (or in general a compact topological space), and $(f_n)$ is a monotone increasing sequence (meaning $f_n(x) \leq f_{n+1}(x)$ for all $n$ and $x$) of continuous functions with a pointwise limit $f$ which is also continuous, then the convergence is necessarily uniform (Dini's theorem; a small numerical sketch follows this list). Uniform convergence is also guaranteed if $S$ is a compact interval and $(f_n)$ is an equicontinuous sequence that converges pointwise.
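As a small numerical illustration of Dini's theorem (an informal sketch; the sequence $f_n(x) = (1 + x/n)^n$, which increases to $e^x$ on $[0, 1]$, is an assumed example not taken from the article):

```python
# Informal illustration of Dini's theorem with the assumed example
# f_n(x) = (1 + x/n)**n on the compact interval [0, 1]: the sequence is
# monotone increasing in n, each f_n is continuous, and the pointwise limit
# exp(x) is continuous, so by Dini's theorem the convergence is uniform.
import numpy as np

xs = np.linspace(0.0, 1.0, 10_001)
limit = np.exp(xs)

for n in [1, 10, 100, 1000, 10000]:
    fn = (1.0 + xs / n) ** n
    print(f"n={n:6d}   sup |f_n - exp| ~ {np.max(limit - fn):.3e}")
# The approximated uniform error decreases toward 0 (roughly like e/(2n)),
# as uniform convergence requires.
```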

Applications

To continuity

Counterexample to a strengthening of the uniform convergence theorem, in which pointwise convergence, rather than uniform convergence, is assumed. The continuous green functions $\sin^n(x)$ converge to the non-continuous red function. This can happen only if convergence is not uniform.

If $E$ and $M$ are topological spaces, then it makes sense to talk about the continuity of the functions $f_n, f : E \to M$. If we further assume that $M$ is a metric space, then (uniform) convergence of the $f_n$ to $f$ is also well defined. The following result states that continuity is preserved by uniform convergence:

Uniform limit theorem — Suppose $E$ is a topological space, $M$ is a metric space, and $(f_n)$ is a sequence of continuous functions $f_n : E \to M$. If $f_n \rightrightarrows f$ on $E$, then $f$ is also continuous.

This theorem is proved by the "ε/3 trick", and is the archetypal example of this trick: to prove a given inequality (with bound ε), one uses the definitions of continuity and uniform convergence to produce three inequalities (each with bound ε/3), and then combines them via the triangle inequality to produce the desired inequality.
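Spelled out (a routine sketch of the standard argument, with notation as in the theorem above: $x_0 \in E$ is the point at which continuity of $f$ is checked, $N$ is chosen by uniform convergence so that $d(f_N(t), f(t)) < \epsilon/3$ for every $t \in E$, and $U$ is a neighbourhood of $x_0$ on which $d(f_N(x), f_N(x_0)) < \epsilon/3$ by continuity of $f_N$), the estimate reads, for every $x \in U$,

$$d(f(x), f(x_0)) \leq d(f(x), f_N(x)) + d(f_N(x), f_N(x_0)) + d(f_N(x_0), f(x_0)) < \frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} = \epsilon,$$

so $f$ is continuous at $x_0$; since $x_0 \in E$ was arbitrary, $f$ is continuous on $E$.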

This theorem is an important one in the history of real and Fourier analysis, since many 18th century mathematicians had the intuitive understanding that a sequence of continuous functions always converges to a continuous function. The image above shows a counterexample, and many discontinuous functions could, in fact, be written as a Fourier series of continuous functions. The erroneous claim that the pointwise limit of a sequence of continuous functions is continuous (originally stated in terms of convergent series of continuous functions) is infamously known as "Cauchy's wrong theorem". The uniform limit theorem shows that a stronger form of convergence, uniform convergence, is needed to ensure the preservation of continuity in the limit function.

More precisely, this theorem states that the uniform limit of uniformly continuous functions is uniformly continuous; for a locally compact space, continuity is equivalent to local uniform continuity, and thus the uniform limit of continuous functions is continuous.

To differentiability

If $S$ is an interval and all the functions $f_n$ are differentiable and converge to a limit $f$, it is often desirable to determine the derivative function $f'$ by taking the limit of the sequence $f'_n$. This is however in general not possible: even if the convergence is uniform, the limit function need not be differentiable (not even if the sequence consists of everywhere-analytic functions, see Weierstrass function), and even if it is differentiable, the derivative of the limit function need not be equal to the limit of the derivatives. Consider for instance $f_n(x) = n^{-1/2}\sin(nx)$ with uniform limit $f_n \rightrightarrows f \equiv 0$. Clearly, $f'$ is also identically zero. However, the derivatives of the sequence of functions are given by $f'_n(x) = n^{1/2}\cos(nx)$, and the sequence $f'_n$ does not converge to $f'$, or even to any function at all. In order to ensure a connection between the limit of a sequence of differentiable functions and the limit of the sequence of derivatives, the uniform convergence of the sequence of derivatives plus the convergence of the sequence of functions at (at least) one point is required:[4]

If $(f_n)$ is a sequence of differentiable functions on $[a, b]$ such that $\lim_{n\to\infty} f_n(x_0)$ exists (and is finite) for some $x_0 \in [a, b]$ and the sequence $(f'_n)$ converges uniformly on $[a, b]$, then $f_n$ converges uniformly to a function $f$ on $[a, b]$, and $f'(x) = \lim_{n\to\infty} f'_n(x)$ for $x \in [a, b]$.
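Returning to the counterexample $f_n(x) = n^{-1/2}\sin(nx)$ above, a short numerical sketch (informal, with arbitrarily chosen sample points and indices) shows the functions shrinking uniformly while their derivatives oscillate with growing amplitude:

```python
# Informal look at f_n(x) = sin(n x) / sqrt(n): the functions converge
# uniformly to 0 (sup |f_n| = 1/sqrt(n)), while the derivatives
# f_n'(x) = sqrt(n) cos(n x) oscillate with amplitude sqrt(n) and converge
# to nothing at all.
import numpy as np

xs = np.linspace(0.0, 2 * np.pi, 100_001)

for n in [1, 10, 100, 1000]:
    fn = np.sin(n * xs) / np.sqrt(n)
    dfn = np.sqrt(n) * np.cos(n * xs)
    print(f"n={n:5d}   sup |f_n| ~ {np.max(np.abs(fn)):.4f}"
          f"   sup |f_n'| ~ {np.max(np.abs(dfn)):.1f}"
          f"   f_n'(0) = {dfn[0]:.1f}")
# sup |f_n| -> 0, but f_n'(0) = sqrt(n) -> infinity, so the sequence of
# derivatives does not converge even pointwise at x = 0.
```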

To integrability

Similarly, one often wants to exchange integrals and limit processes. For the Riemann integral, this can be done if uniform convergence is assumed:

If $(f_n)_{n=1}^{\infty}$ is a sequence of Riemann-integrable functions defined on a compact interval $I$ which converges uniformly to a limit $f$, then $f$ is Riemann integrable and its integral can be computed as the limit of the integrals of the $f_n$:
$$\int_I f = \lim_{n\to\infty} \int_I f_n.$$

In fact, for a uniformly convergent family of bounded functions on an interval, the upper and lower Riemann integrals converge to the upper and lower Riemann integrals of the limit function. This follows because, for $n$ sufficiently large, the graph of $f_n$ is within $\varepsilon$ of the graph of $f$, and so the upper sum and lower sum of $f_n$ are each within $\varepsilon |I|$ of the value of the upper and lower sums of $f$, respectively.
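To see the exchange of limit and integral numerically (an informal sketch; the particular sequence $f_n(x) = x^2 + \cos(nx)/n$ on $[0, 2]$ is an assumed example of a uniformly convergent sequence, not taken from the article):

```python
# Informal check that uniform convergence lets limits and integrals be swapped.
# Assumed example (not from the article): f_n(x) = x**2 + cos(n*x)/n converges
# uniformly to f(x) = x**2 on [0, 2], since sup |f_n - f| = 1/n -> 0.
# Exactly, the integral of f over [0, 2] is 8/3, and the integral of f_n is
# 8/3 + sin(2n)/n**2, so the integrals converge as well.
import numpy as np

a, b = 0.0, 2.0
xs = np.linspace(a, b, 200_001)
dx = xs[1] - xs[0]
exact_limit_integral = 8.0 / 3.0

for n in [1, 10, 100, 1000]:
    fn = xs ** 2 + np.cos(n * xs) / n
    integral_fn = np.sum((fn[:-1] + fn[1:]) * 0.5) * dx      # trapezoid rule
    sup_err = 1.0 / n                                        # exact sup |f_n - f|
    print(f"n={n:5d}   sup|f_n - f| = {sup_err:.4f}"
          f"   integral of f_n ~ {integral_fn:.6f}"
          f"   gap from 8/3 = {abs(integral_fn - exact_limit_integral):.2e}")
# The gap between the integrals is bounded by sup|f_n - f| * (b - a), in line
# with the upper/lower-sum argument above, and tends to 0.
```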

Much stronger theorems in this respect, which require not much more than pointwise convergence, can be obtained if one abandons the Riemann integral and uses the Lebesgue integral instead.

To analyticity

Using Morera's theorem, one can show that if a sequence of analytic functions converges uniformly in a region S of the complex plane, then the limit is analytic in S. This example demonstrates that complex functions are better behaved than real functions, since the uniform limit of analytic functions on a real interval need not even be differentiable (see Weierstrass function).

To series

We say that $\sum_{n=1}^{\infty} f_n$ converges:

  1. pointwise on $E$ if and only if the sequence of partial sums $s_n(x) = \sum_{j=1}^{n} f_j(x)$ converges for every $x \in E$.
  2. uniformly on $E$ if and only if $s_n$ converges uniformly as $n \to \infty$.
  3. absolutely on $E$ if and only if $\sum_{n=1}^{\infty} |f_n|$ converges for every $x \in E$.

With this definition comes the following result:

Let $x_0$ be contained in the set $E$ and each $f_n$ be continuous at $x_0$. If $f = \sum_{n=1}^{\infty} f_n$ converges uniformly on $E$, then $f$ is continuous at $x_0$ in $E$. Suppose that $E = [a, b]$ and each $f_n$ is integrable on $E$. If $\sum_{n=1}^{\infty} f_n$ converges uniformly on $E$, then $f$ is integrable on $E$ and the series of the integrals of the $f_n$ is equal to the integral of the series of the $f_n$.
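As a numerical illustration of the second statement (an informal sketch; the series $\sum_{n\geq 1} \cos(nx)/n^2$, which converges uniformly by the Weierstrass M-test with $M_n = 1/n^2$, and the interval $[0, \pi/2]$ are assumed examples, not from the article):

```python
# Informal check of term-by-term integration for a uniformly convergent series.
# Assumed example (not from the article): sum_{n>=1} cos(n*x)/n**2 converges
# uniformly on [0, pi/2] by the Weierstrass M-test with M_n = 1/n**2, and each
# term integrates to sin(n*pi/2)/n**3, so the integral of the (truncated) sum
# should equal the sum of the term integrals.
import numpy as np

N_terms = 500
xs = np.linspace(0.0, np.pi / 2, 50_001)
dx = xs[1] - xs[0]

partial = np.zeros_like(xs)
for n in range(1, N_terms + 1):
    partial += np.cos(n * xs) / n**2

integral_of_sum = np.sum((partial[:-1] + partial[1:]) * 0.5) * dx    # trapezoid rule
sum_of_integrals = sum(np.sin(n * np.pi / 2) / n**3 for n in range(1, N_terms + 1))

print(f"integral of the partial sum over [0, pi/2] ~ {integral_of_sum:.8f}")
print(f"sum of the term integrals                  ~ {sum_of_integrals:.8f}")
# The two agree up to discretization error: with uniform convergence the
# series may be integrated term by term.
```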

Almost uniform convergence

If the domain of the functions is a measure space E then the related notion of almost uniform convergence can be defined. We say a sequence of functions ( f n ) {\displaystyle (f_{n})} converges almost uniformly on E if for every δ > 0 {\displaystyle \delta >0} there exists a measurable set E δ {\displaystyle E_{\delta }} with measure less than δ {\displaystyle \delta } such that the sequence of functions ( f n ) {\displaystyle (f_{n})} converges uniformly on E E δ {\displaystyle E\setminus E_{\delta }} . In other words, almost uniform convergence means there are sets of arbitrarily small measure for which the sequence of functions converges uniformly on their complement.
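For instance (an informal sketch, reusing the sequence $f_n(x) = x^n$ from the examples above; the specific value of $\delta$ is an arbitrary choice): on $[0, 1]$ with Lebesgue measure, $x^n$ does not converge uniformly, but after removing the interval $(1 - \delta, 1]$ of measure $\delta$ it converges uniformly on the remainder $[0, 1 - \delta]$, since the supremum there is $(1 - \delta)^n \to 0$.

```python
# Informal check that x**n converges almost uniformly on [0, 1]: after
# deleting the set (1 - delta, 1] of measure delta, the supremum of |x**n - 0|
# on the remaining interval [0, 1 - delta] is (1 - delta)**n, which tends to 0.
delta = 0.01

for n in [10, 100, 1000, 10000]:
    sup_on_rest = (1.0 - delta) ** n      # exact supremum on [0, 1 - delta]
    print(f"n={n:6d}   sup on [0, 1 - delta] = {sup_on_rest:.3e}")
# The supremum tends to 0 for every fixed delta > 0, so the convergence is
# uniform off a set of arbitrarily small measure, i.e. almost uniform,
# even though it is not uniform on all of [0, 1].
```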

Note that almost uniform convergence of a sequence does not mean that the sequence converges uniformly almost everywhere as might be inferred from the name. However, Egorov's theorem does guarantee that on a finite measure space, a sequence of functions that converges almost everywhere also converges almost uniformly on the same set.

Almost uniform convergence implies almost everywhere convergence and convergence in measure.

Notes

  1. ^ Sørensen, Henrik Kragh (2005). "Exceptions and counterexamples: Understanding Abel's comment on Cauchy's Theorem". Historia Mathematica. 32 (4): 453–480. doi:10.1016/j.hm.2004.11.010.
  2. ^ Jahnke, Hans Niels (2003). "6.7 The Foundation of Analysis in the 19th Century: Weierstrass". A history of analysis. AMS Bookstore. p. 184. ISBN 978-0-8218-2623-2.
  3. ^ Lakatos, Imre (1976). Proofs and Refutations. Cambridge University Press. p. 141. ISBN 978-0-521-21078-2.
  4. ^ Rudin, Walter (1976). Principles of Mathematical Analysis, 3rd ed., Theorem 7.17. New York: McGraw-Hill.

References

  • Konrad Knopp, Theory and Application of Infinite Series; Blackie and Son, London, 1954, reprinted by Dover Publications, ISBN 0-486-66165-2.
  • G. H. Hardy, Sir George Stokes and the concept of uniform convergence; Proceedings of the Cambridge Philosophical Society, 19, pp. 148–156 (1918)
  • Bourbaki; Elements of Mathematics: General Topology. Chapters 5–10 (paperback); ISBN 0-387-19374-X
  • Walter Rudin, Principles of Mathematical Analysis, 3rd ed., McGraw–Hill, 1976.
  • Gerald Folland, Real Analysis: Modern Techniques and Their Applications, Second Edition, John Wiley & Sons, Inc., 1999, ISBN 0-471-31716-0.
  • William Wade, An Introduction to Analysis, 3rd ed., Pearson, 2005
