Linear form

Linear map from a vector space to its field of scalars

In mathematics, a linear form (also known as a linear functional,[1] a one-form, or a covector) is a linear map[nb 1] from a vector space to its field of scalars (often, the real numbers or the complex numbers).

If V is a vector space over a field k, the set of all linear functionals from V to k is itself a vector space over k with addition and scalar multiplication defined pointwise. This space is called the dual space of V, or sometimes the algebraic dual space, when a topological dual space is also considered. It is often denoted Hom(V, k)[2] or, when the field k is understood, V∗;[3] other notations are also used, such as V′,[4][5] V# or V∨.[2] When vectors are represented by column vectors (as is common when a basis is fixed), linear functionals are represented as row vectors, and their values on specific vectors are given by matrix products (with the row vector on the left).

Examples

The constant zero function, mapping every vector to zero, is trivially a linear functional. Every other linear functional (such as the ones below) is surjective (that is, its range is all of k).

  • Indexing into a vector: The second element of a three-vector is given by the one-form [0, 1, 0]. That is, the second element of [x, y, z] is
    [0, 1, 0] · [x, y, z] = y.
  • Mean: The mean element of an n-vector is given by the one-form [1/n, 1/n, …, 1/n]. That is,
    mean(v) = [1/n, 1/n, …, 1/n] · v.
  • Sampling: Sampling with a kernel can be considered a one-form, where the one-form is the kernel shifted to the appropriate location.
  • Net present value of a net cash flow R(t) is given by the one-form w(t) = (1 + i)⁻ᵗ, where i is the discount rate. That is,
    NPV(R(t)) = ⟨w, R⟩ = ∫₀^∞ R(t)/(1 + i)ᵗ dt.
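Several of the examples above are simply dot products with a fixed weight vector. A minimal sketch in plain Python (the function names are illustrative, not from any library):

```python
# A one-form on R^n, represented as a list of weights, acts on a
# vector by the dot product.
def apply_form(w, v):
    return sum(wi * vi for wi, vi in zip(w, v))

second = [0, 1, 0]            # indexing one-form: extracts the 2nd entry
v = [4.0, 7.0, 9.0]
assert apply_form(second, v) == 7.0

n = len(v)
mean_form = [1.0 / n] * n     # mean one-form [1/n, ..., 1/n]
assert abs(apply_form(mean_form, v) - sum(v) / n) < 1e-12

# Linearity: w(a*v + b*u) == a*w(v) + b*w(u)
u = [1.0, 2.0, 3.0]
a, b = 2.0, -3.0
lhs = apply_form(mean_form, [a * x + b * y for x, y in zip(v, u)])
rhs = a * apply_form(mean_form, v) + b * apply_form(mean_form, u)
assert abs(lhs - rhs) < 1e-12
```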

Linear functionals in Rn

Suppose that vectors in the real coordinate space ℝⁿ are represented as column vectors

x = (x₁, …, xₙ)ᵀ.

For each row vector a = (a₁ ⋯ aₙ) there is a linear functional f_a defined by

f_a(x) = a₁x₁ + ⋯ + aₙxₙ,

and each linear functional on ℝⁿ can be expressed in this form.

This can be interpreted as either the matrix product or the dot product of the row vector a and the column vector x:

f_a(x) = a · x = (a₁ ⋯ aₙ)(x₁, …, xₙ)ᵀ.
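In code, the correspondence between row vectors and linear functionals is just a matrix product; a brief sketch assuming NumPy is available:

```python
import numpy as np

# The linear functional f_a determined by a row vector a:
# f_a(x) = a_1 x_1 + ... + a_n x_n
a = np.array([2.0, -1.0, 3.0])   # coefficients of f_a (row vector)
x = np.array([1.0, 4.0, 2.0])    # column vector

via_product = a @ x              # matrix product / dot product
via_sum = sum(a[i] * x[i] for i in range(len(a)))
assert np.isclose(via_product, via_sum)
assert np.isclose(via_product, 4.0)   # 2*1 - 1*4 + 3*2 = 4
```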

Trace of a square matrix

The trace tr(A) of a square matrix A is the sum of the elements on its main diagonal. Matrices can be multiplied by scalars, and two matrices of the same dimension can be added together; these operations make the set of all n × n matrices into a vector space. The trace is a linear functional on this space because tr(sA) = s tr(A) and tr(A + B) = tr(A) + tr(B) for all scalars s and all n × n matrices A and B.
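The two identities can be checked numerically; a short sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
s = 3.5

# The trace is a linear functional on the space of 4x4 matrices:
assert np.isclose(np.trace(s * A), s * np.trace(A))
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))
```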

(Definite) Integration

Linear functionals first appeared in functional analysis, the study of vector spaces of functions. A typical example of a linear functional is integration: the linear transformation defined by the Riemann integral

I(f) = ∫ₐᵇ f(x) dx

is a linear functional from the vector space C[a, b] of continuous functions on the interval [a, b] to the real numbers. The linearity of I follows from the standard facts about the integral:

I(f + g) = ∫ₐᵇ [f(x) + g(x)] dx = ∫ₐᵇ f(x) dx + ∫ₐᵇ g(x) dx = I(f) + I(g)
I(αf) = ∫ₐᵇ αf(x) dx = α ∫ₐᵇ f(x) dx = αI(f).
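The same two identities survive discretization: a Riemann-sum approximation of I is itself a linear functional on sampled functions. A small sketch in plain Python (`riemann` is a hypothetical helper written here, not a standard function):

```python
import math

def riemann(f, a, b, n=10000):
    # Left Riemann sum approximating the integral of f over [a, b].
    h = (b - a) / n
    return sum(f(a + k * h) for k in range(n)) * h

f, g, alpha = math.sin, math.cos, 2.5
a, b = 0.0, 1.0

# I(f + g) == I(f) + I(g), up to floating-point rounding:
assert abs(riemann(lambda t: f(t) + g(t), a, b)
           - (riemann(f, a, b) + riemann(g, a, b))) < 1e-9
# I(alpha * f) == alpha * I(f):
assert abs(riemann(lambda t: alpha * f(t), a, b)
           - alpha * riemann(f, a, b)) < 1e-9
```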

Evaluation

Let Pₙ denote the vector space of real-valued polynomial functions of degree ≤ n defined on an interval [a, b]. If c ∈ [a, b], then let ev_c : Pₙ → ℝ be the evaluation functional

ev_c f = f(c).

The mapping f ↦ f(c) is linear since

(f + g)(c) = f(c) + g(c)
(αf)(c) = αf(c).

If x₀, …, xₙ are n + 1 distinct points in [a, b], then the evaluation functionals ev_{xᵢ}, i = 0, …, n, form a basis of the dual space of Pₙ (Lax (1996) proves this last fact using Lagrange interpolation).

Non-example

A function f having the equation of a line, f(x) = a + rx with a ≠ 0 (for example, f(x) = 1 + 2x), is not a linear functional on ℝ, since it is not linear.[nb 2] It is, however, affine-linear.
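The failure of additivity noted in footnote [nb 2] is easy to verify directly; a tiny Python check:

```python
# f(x) = 1 + 2x is affine but not linear: additivity fails.
def f(x):
    return 1 + 2 * x

assert f(1 + 1) == 5
assert f(1) + f(1) == 6
assert f(1 + 1) != f(1) + f(1)   # so f is not a linear functional
```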

Visualization

Geometric interpretation of a 1-form α as a stack of hyperplanes of constant value, each corresponding to those vectors that α maps to a given scalar value shown next to it along with the "sense" of increase. The zero plane is through the origin.

In finite dimensions, a linear functional can be visualized in terms of its level sets, the sets of vectors which map to a given value. In three dimensions, the level sets of a linear functional are a family of mutually parallel planes; in higher dimensions, they are parallel hyperplanes. This method of visualizing linear functionals is sometimes introduced in general relativity texts, such as Gravitation by Misner, Thorne & Wheeler (1973).

Applications

Application to quadrature

If x₀, …, xₙ are n + 1 distinct points in [a, b], then the linear functionals ev_{xᵢ} : f ↦ f(xᵢ) defined above form a basis of the dual space of Pₙ, the space of polynomials of degree ≤ n. The integration functional I is also a linear functional on Pₙ, and so can be expressed as a linear combination of these basis elements. In symbols, there are coefficients a₀, …, aₙ for which

I(f) = a₀f(x₀) + a₁f(x₁) + ⋯ + aₙf(xₙ)

for all f ∈ Pₙ. This forms the foundation of the theory of numerical quadrature.[6]
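The coefficients a₀, …, aₙ can be computed by requiring exactness on the monomial basis 1, x, …, xⁿ, which yields a Vandermonde system. A sketch assuming NumPy, using the nodes 0, 1/2, 1 on [0, 1] (an illustrative choice that recovers Simpson's rule):

```python
import numpy as np

# Quadrature weights on [0, 1] for nodes x_0..x_n: solve
#   sum_i a_i * x_i^k = integral of x^k over [0, 1] = 1/(k+1)
nodes = np.array([0.0, 0.5, 1.0])
n = len(nodes)
V = np.vander(nodes, n, increasing=True).T          # V[k, i] = x_i^k
moments = np.array([1.0 / (k + 1) for k in range(n)])
weights = np.linalg.solve(V, moments)

assert np.allclose(weights, [1/6, 4/6, 1/6])        # Simpson's rule

# Exact on every polynomial of degree <= 2, e.g. p(x) = 3x^2 - x + 2:
p = lambda t: 3 * t**2 - t + 2
assert np.isclose(weights @ p(nodes), 2.5)          # integral of p is 2.5
```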

In quantum mechanics

Linear functionals are particularly important in quantum mechanics. Quantum mechanical systems are represented by Hilbert spaces, which are anti-isomorphic to their own dual spaces. A state of a quantum mechanical system can be identified with a linear functional. For more information see bra–ket notation.

Distributions

In the theory of generalized functions, certain kinds of generalized functions called distributions can be realized as linear functionals on spaces of test functions.

Dual vectors and bilinear forms

Linear functionals (1-forms) α, β and their sum σ and vectors u, v, w, in 3d Euclidean space. The number of (1-form) hyperplanes intersected by a vector equals the inner product.[7]

Every non-degenerate bilinear form on a finite-dimensional vector space V induces an isomorphism V → V∗ : v ↦ v∗ such that

v∗(w) := ⟨v, w⟩ for all w ∈ V,

where the bilinear form on V is denoted ⟨·, ·⟩ (for instance, in Euclidean space, ⟨v, w⟩ = v · w is the dot product of v and w).

The inverse isomorphism is V∗ → V : v∗ ↦ v, where v is the unique element of V such that

⟨v, w⟩ = v∗(w)

for all w ∈ V.

The above defined vector v∗ ∈ V∗ is said to be the dual vector of v ∈ V.

In an infinite-dimensional Hilbert space, analogous results hold by the Riesz representation theorem. There is a mapping V → V′ from V into its continuous dual space V′.

Relationship to bases

Basis of the dual space

Let the vector space V have a basis e₁, e₂, …, eₙ, not necessarily orthogonal. Then the dual space V∗ has a basis ω̃¹, ω̃², …, ω̃ⁿ, called the dual basis, defined by the special property that

ω̃ⁱ(e_j) = 1 if i = j, and 0 if i ≠ j.

Or, more succinctly,

ω̃ⁱ(e_j) = δᵢⱼ,

where δ is the Kronecker delta. Here the superscripts of the basis functionals are not exponents but contravariant indices.

A linear functional ũ belonging to the dual space V∗ can be expressed as a linear combination of basis functionals, with coefficients ("components") uᵢ,

ũ = Σᵢ₌₁ⁿ uᵢ ω̃ⁱ.

Then, applying the functional ũ to a basis vector e_j yields

ũ(e_j) = Σᵢ₌₁ⁿ (uᵢ ω̃ⁱ)(e_j) = Σᵢ uᵢ [ω̃ⁱ(e_j)]

due to linearity of scalar multiples of functionals and pointwise linearity of sums of functionals. Then

ũ(e_j) = Σᵢ uᵢ [ω̃ⁱ(e_j)] = Σᵢ uᵢ δᵢⱼ = u_j.

So each component of a linear functional can be extracted by applying the functional to the corresponding basis vector.
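Concretely, if the basis vectors e_j are taken as the columns of an invertible matrix E, then the dual basis functionals are the rows of E⁻¹, and the component-extraction identity above becomes a matrix identity. A sketch assuming NumPy:

```python
import numpy as np

# Columns of E form a (non-orthogonal) basis e_1, e_2, e_3 of R^3.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
W = np.linalg.inv(E)            # row i of W is the dual functional w^i

# Duality property: w^i(e_j) = delta_ij
assert np.allclose(W @ E, np.eye(3))

# A functional with components u_i in the dual basis, applied to the
# basis vector e_j, returns exactly u_j:
u = np.array([2.0, -1.0, 5.0])  # components u_i
u_tilde = u @ W                 # the functional, as a row vector
assert np.allclose(u_tilde @ E, u)
```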

The dual basis and inner product

When the space V carries an inner product, it is possible to write explicitly a formula for the dual basis of a given basis. Let V have a (not necessarily orthogonal) basis e₁, …, eₙ. In three dimensions (n = 3), the dual basis can be written explicitly

ω̃ⁱ(v) = ⟨ (1/2) Σⱼ₌₁³ Σₖ₌₁³ εⁱʲᵏ (e_j × e_k) / (e₁ · e₂ × e₃), v ⟩,

for i = 1, 2, 3, where ε is the Levi-Civita symbol and ⟨·, ·⟩ the inner product (or dot product) on V.

In higher dimensions, this generalizes as follows:

ω̃ⁱ(v) = ⟨ Σ_{1≤i₂<i₃<⋯<iₙ≤n} ε^{i i₂…iₙ} ⋆(e_{i₂} ∧ ⋯ ∧ e_{iₙ}) / ⋆(e₁ ∧ ⋯ ∧ eₙ), v ⟩,

where ⋆ is the Hodge star operator.
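For n = 3 the formula reduces to the familiar cross-product construction ω̃¹ = (e₂ × e₃)/(e₁ · e₂ × e₃) and its cyclic permutations, which can be checked numerically. A sketch assuming NumPy:

```python
import numpy as np

e1 = np.array([1.0, 1.0, 0.0])
e2 = np.array([0.0, 1.0, 1.0])
e3 = np.array([1.0, 0.0, 1.0])
vol = e1 @ np.cross(e2, e3)       # scalar triple product e1 . (e2 x e3)

# Dual basis via cross products (cyclic in 1, 2, 3):
w1 = np.cross(e2, e3) / vol
w2 = np.cross(e3, e1) / vol
w3 = np.cross(e1, e2) / vol

E = np.column_stack([e1, e2, e3])
W = np.vstack([w1, w2, w3])
assert np.allclose(W @ E, np.eye(3))   # w^i(e_j) = delta_ij
```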

Over a ring

Modules over a ring are generalizations of vector spaces; they remove the restriction that coefficients belong to a field. Given a module M over a ring R, a linear form on M is a linear map from M to R, where the latter is considered as a module over itself. The space of linear forms on M is denoted Hom_R(M, R), whether R is a field or not. It is a right R-module if M is a left R-module.

The existence of "enough" linear forms on a module is equivalent to projectivity.[8]

Dual Basis Lemma — An R-module M is projective if and only if there exist a subset A ⊂ M and linear forms {f_a | a ∈ A} such that, for every x ∈ M, only finitely many f_a(x) are nonzero, and

x = Σ_{a∈A} f_a(x) a.

Change of field

Suppose that X is a vector space over ℂ. Restricting scalar multiplication to ℝ gives rise to a real vector space[9] X_ℝ called the realification of X. Any vector space X over ℂ is also a vector space over ℝ, endowed with a complex structure; that is, there exists a real vector subspace X_ℝ such that we can (formally) write X = X_ℝ ⊕ X_ℝi as ℝ-vector spaces.

Real versus complex linear functionals

Every linear functional on X is complex-valued, while every linear functional on X_ℝ is real-valued. If dim X ≠ 0, then a linear functional on either X or X_ℝ is non-trivial (meaning not identically 0) if and only if it is surjective (because if φ(x) ≠ 0, then for any scalar s, φ((s/φ(x))x) = s); the image of a non-trivial linear functional on X is ℂ, while the image of a non-trivial linear functional on X_ℝ is ℝ. Consequently, the only function on X that is both a linear functional on X and a linear functional on X_ℝ is the trivial functional; in other words, X^# ∩ X_ℝ^# = {0}, where ·^# denotes the space's algebraic dual space. However, every ℂ-linear functional on X is an ℝ-linear operator (meaning that it is additive and homogeneous over ℝ), but unless it is identically 0, it is not an ℝ-linear functional on X because its range (which is ℂ) is 2-dimensional over ℝ. Conversely, a non-zero ℝ-linear functional has range too small to be a ℂ-linear functional as well.

Real and imaginary parts

If φ ∈ X^#, then denote its real part by φ_ℝ := Re φ and its imaginary part by φ_i := Im φ. Then φ_ℝ : X → ℝ and φ_i : X → ℝ are linear functionals on X_ℝ and φ = φ_ℝ + iφ_i. The fact that z = Re z − i Re(iz) = Im(iz) + i Im z for all z ∈ ℂ implies that for all x ∈ X,[9]

φ(x) = φ_ℝ(x) − iφ_ℝ(ix) = φ_i(ix) + iφ_i(x)

and consequently, that φ_i(x) = −φ_ℝ(ix) and φ_ℝ(x) = φ_i(ix).[10]

The assignment φ ↦ φ_ℝ defines a bijective[10] ℝ-linear operator X^# → X_ℝ^# whose inverse is the map L_• : X_ℝ^# → X^# defined by the assignment g ↦ L_g that sends g : X_ℝ → ℝ to the linear functional L_g : X → ℂ defined by

L_g(x) := g(x) − ig(ix) for all x ∈ X.

The real part of L_g is g, and the bijection L_• : X_ℝ^# → X^# is an ℝ-linear operator, meaning that L_{g+h} = L_g + L_h and L_{rg} = rL_g for all r ∈ ℝ and g, h ∈ X_ℝ^#.[10] Similarly for the imaginary part, the assignment φ ↦ φ_i induces an ℝ-linear bijection X^# → X_ℝ^# whose inverse is the map X_ℝ^# → X^# defined by sending I ∈ X_ℝ^# to the linear functional on X defined by x ↦ I(ix) + iI(x).
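The correspondence φ ↦ φ_ℝ and its inverse g ↦ L_g can be demonstrated numerically on X = ℂ², viewed as a real vector space. A sketch in plain Python (the functional φ below is an arbitrary illustrative choice):

```python
# A C-linear functional phi on C^2, its real part g, and the
# reconstruction L_g(x) = g(x) - i*g(ix).
c = (2 + 1j, -1 + 3j)                    # coefficients of phi

def phi(x):
    return c[0] * x[0] + c[1] * x[1]     # C-linear functional

def g(x):
    return phi(x).real                   # real part: an R-linear functional

def L_g(x):
    ix = (1j * x[0], 1j * x[1])          # scalar multiplication by i
    return g(x) - 1j * g(ix)

for x in [(1 + 2j, 3 - 1j), (0.5j, -2 + 0j)]:
    assert abs(L_g(x) - phi(x)) < 1e-12  # L_g recovers phi exactly
```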

This relationship was discovered by Henry Löwig in 1934 (although it is usually credited to F. Murray),[11] and can be generalized to arbitrary finite extensions of a field in the natural way. It has many important consequences, some of which will now be described.

Properties and relationships

Suppose φ : X → ℂ is a linear functional on X with real part φ_ℝ := Re φ and imaginary part φ_i := Im φ.

Then φ = 0 if and only if φ_ℝ = 0, if and only if φ_i = 0.

Assume that X is a topological vector space. Then φ is continuous if and only if its real part φ_ℝ is continuous, if and only if its imaginary part φ_i is continuous. That is, either all three of φ, φ_ℝ, and φ_i are continuous or none are. This remains true if the word "continuous" is replaced with the word "bounded". In particular, φ ∈ X′ if and only if φ_ℝ ∈ X_ℝ′, where the prime denotes the space's continuous dual space.[9]

Let B ⊆ X. If uB ⊆ B for all scalars u ∈ ℂ of unit length (meaning |u| = 1), then[proof 1][12]

sup_{b∈B} |φ(b)| = sup_{b∈B} |φ_ℝ(b)|.

Similarly, if φ_i := Im φ : X → ℝ denotes the imaginary part of φ, then iB ⊆ B implies

sup_{b∈B} |φ_ℝ(b)| = sup_{b∈B} |φ_i(b)|.

If X is a normed space with norm ‖·‖ and if B = {x ∈ X : ‖x‖ ≤ 1} is the closed unit ball, then the suprema above are the operator norms (defined in the usual way) of φ, φ_ℝ, and φ_i, so that[12]

‖φ‖ = ‖φ_ℝ‖ = ‖φ_i‖.

This conclusion extends to the analogous statement for polars of balanced sets in general topological vector spaces.

  • If X is a complex Hilbert space with a (complex) inner product ⟨·|·⟩ that is antilinear in its first coordinate (and linear in the second), then X_ℝ becomes a real Hilbert space when endowed with the real part of ⟨·|·⟩. Explicitly, this real inner product on X_ℝ is defined by ⟨x|y⟩_ℝ := Re⟨x|y⟩ for all x, y ∈ X, and it induces the same norm on X as ⟨·|·⟩ because √⟨x|x⟩_ℝ = √⟨x|x⟩ for all vectors x. Applying the Riesz representation theorem to φ ∈ X′ (resp. to φ_ℝ ∈ X_ℝ′) guarantees the existence of a unique vector f_φ ∈ X (resp. f_{φ_ℝ} ∈ X_ℝ) such that φ(x) = ⟨f_φ | x⟩ (resp. φ_ℝ(x) = ⟨f_{φ_ℝ} | x⟩_ℝ) for all vectors x. The theorem also guarantees that ‖f_φ‖ = ‖φ‖_{X′} and ‖f_{φ_ℝ}‖ = ‖φ_ℝ‖_{X_ℝ′}. It is readily verified that f_φ = f_{φ_ℝ}. Now ‖f_φ‖ = ‖f_{φ_ℝ}‖ and the previous equalities imply that ‖φ‖_{X′} = ‖φ_ℝ‖_{X_ℝ′}, which is the same conclusion reached above.

In infinite dimensions

Below, all vector spaces are over either the real numbers R {\displaystyle \mathbb {R} } or the complex numbers C . {\displaystyle \mathbb {C} .}

If V {\displaystyle V} is a topological vector space, the space of continuous linear functionals — the continuous dual — is often simply called the dual space. If V {\displaystyle V} is a Banach space, then so is its (continuous) dual. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called the algebraic dual space. In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual, but in infinite dimensions the continuous dual is a proper subspace of the algebraic dual.

A linear functional f on a (not necessarily locally convex) topological vector space X is continuous if and only if there exists a continuous seminorm p on X such that |f| ≤ p.[13]

Characterizing closed subspaces

Continuous linear functionals have nice properties for analysis: a linear functional is continuous if and only if its kernel is closed,[14] and a non-trivial continuous linear functional is an open map, even if the (topological) vector space is not complete.[15]

Hyperplanes and maximal subspaces

A vector subspace M of X is called maximal if M ⊊ X (meaning M ⊆ X and M ≠ X) and there does not exist a vector subspace N of X such that M ⊊ N ⊊ X. A vector subspace M of X is maximal if and only if it is the kernel of some non-trivial linear functional on X (that is, M = ker f for some linear functional f on X that is not identically 0). An affine hyperplane in X is a translate of a maximal vector subspace. By linearity, a subset H of X is an affine hyperplane if and only if there exists some non-trivial linear functional f on X such that H = f⁻¹(1) = {x ∈ X : f(x) = 1}.[11] If f is a linear functional and s ≠ 0 is a scalar, then f⁻¹(s) = s(f⁻¹(1)) = ((1/s)f)⁻¹(1). This equality can be used to relate different level sets of f. Moreover, if f ≠ 0, then the kernel of f can be reconstructed from the affine hyperplane H := f⁻¹(1) by ker f = H − H.

Relationships between multiple linear functionals

Any two linear functionals with the same kernel are proportional (i.e. scalar multiples of each other). This fact can be generalized to the following theorem.

Theorem[16][17] — If f, g₁, …, gₙ are linear functionals on X, then the following are equivalent:

  1. f can be written as a linear combination of g₁, …, gₙ; that is, there exist scalars s₁, …, sₙ such that f = s₁g₁ + ⋯ + sₙgₙ;
  2. ⋂ᵢ₌₁ⁿ ker gᵢ ⊆ ker f;
  3. there exists a real number r such that |f(x)| ≤ r maxᵢ |gᵢ(x)| for all x ∈ X.

If f is a non-trivial linear functional on X with kernel N, x ∈ X satisfies f(x) = 1, and U is a balanced subset of X, then N ∩ (x + U) = ∅ if and only if |f(u)| < 1 for all u ∈ U.[15]

Hahn–Banach theorem

Any (algebraic) linear functional on a vector subspace can be extended to the whole space; for example, the evaluation functionals described above can be extended to the vector space of polynomials on all of R . {\displaystyle \mathbb {R} .} However, this extension cannot always be done while keeping the linear functional continuous. The Hahn–Banach family of theorems gives conditions under which this extension can be done. For example,

Hahn–Banach dominated extension theorem[18] (Rudin 1991, Th. 3.2) — If p : X → ℝ is a sublinear function and f : M → ℝ is a linear functional on a linear subspace M ⊆ X that is dominated by p on M, then there exists a linear extension F : X → ℝ of f to the whole space X that is dominated by p; i.e., there exists a linear functional F such that

F(m) = f(m)

for all m ∈ M, and

F(x) ≤ p(x)

for all x ∈ X.

Equicontinuity of families of linear functionals

Let X be a topological vector space (TVS) with continuous dual space X . {\displaystyle X'.}

For any subset H of X , {\displaystyle X',} the following are equivalent:[19]

  1. H is equicontinuous;
  2. H is contained in the polar of some neighborhood of 0 {\displaystyle 0} in X;
  3. the (pre)polar of H is a neighborhood of 0 {\displaystyle 0} in X;

If H is an equicontinuous subset of X′, then the following sets are also equicontinuous: the weak-* closure, the balanced hull, the convex hull, and the convex balanced hull.[19] Moreover, Alaoglu's theorem implies that the weak-* closure of an equicontinuous subset of X′ is weak-* compact (and thus that every equicontinuous subset is weak-* relatively compact).[20][19]

See also

Notes

Footnotes

  1. ^ In some texts the roles are reversed and vectors are defined as linear maps from covectors to scalars
  2. ^ For instance, f(1 + 1) = a + 2r ≠ 2a + 2r = f(1) + f(1).

Proofs

  1. ^ It is true if B = ∅, so assume otherwise. Since |Re z| ≤ |z| for all scalars z ∈ ℂ, it follows that sup_{x∈B} |φ_ℝ(x)| ≤ sup_{x∈B} |φ(x)|. If b ∈ B, then let r_b ≥ 0 and u_b ∈ ℂ be such that |u_b| = 1 and φ(b) = r_b u_b, where if r_b = 0 then take u_b := 1. Then |φ(b)| = r_b and, because φ((1/u_b)b) = r_b is a real number, φ_ℝ((1/u_b)b) = φ((1/u_b)b) = r_b. By assumption (1/u_b)b ∈ B, so |φ(b)| = r_b ≤ sup_{x∈B} |φ_ℝ(x)|. Since b ∈ B was arbitrary, it follows that sup_{x∈B} |φ(x)| ≤ sup_{x∈B} |φ_ℝ(x)|. ∎

References

  1. ^ Axler (2015) p. 101, §3.92
  2. ^ a b Tu (2011) p. 19, §3.1
  3. ^ Katznelson & Katznelson (2008) p. 37, §2.1.3
  4. ^ Axler (2015) p. 101, §3.94
  5. ^ Halmos (1974) p. 20, §13
  6. ^ Lax 1996
  7. ^ Misner, Thorne & Wheeler (1973) p. 57
  8. ^ Clark, Pete L. Commutative Algebra (PDF). Unpublished. Lemma 3.12.
  9. ^ a b c Rudin 1991, pp. 57.
  10. ^ a b c Narici & Beckenstein 2011, pp. 9–11.
  11. ^ a b Narici & Beckenstein 2011, pp. 10–11.
  12. ^ a b Narici & Beckenstein 2011, pp. 126–128.
  13. ^ Narici & Beckenstein 2011, p. 126.
  14. ^ Rudin 1991, Theorem 1.18
  15. ^ a b Narici & Beckenstein 2011, p. 128.
  16. ^ Rudin 1991, pp. 63–64.
  17. ^ Narici & Beckenstein 2011, pp. 1–18.
  18. ^ Narici & Beckenstein 2011, pp. 177–220.
  19. ^ a b c Narici & Beckenstein 2011, pp. 225–273.
  20. ^ Schaefer & Wolff 1999, Corollary 4.3.

Bibliography

  • Axler, Sheldon (2015), Linear Algebra Done Right, Undergraduate Texts in Mathematics (3rd ed.), Springer, ISBN 978-3-319-11079-0
  • Bishop, Richard; Goldberg, Samuel (1980), "Chapter 4", Tensor Analysis on Manifolds, Dover Publications, ISBN 0-486-64039-6
  • Conway, John (1990). A course in functional analysis. Graduate Texts in Mathematics. Vol. 96 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-97245-9. OCLC 21195908.
  • Dunford, Nelson (1988). Linear Operators. New York: Interscience Publishers. ISBN 0-471-60848-3. OCLC 18412261.
  • Halmos, Paul Richard (1974), Finite-Dimensional Vector Spaces, Undergraduate Texts in Mathematics (1958 2nd ed.), Springer, ISBN 0-387-90093-4
  • Katznelson, Yitzhak; Katznelson, Yonatan R. (2008), A (Terse) Introduction to Linear Algebra, American Mathematical Society, ISBN 978-0-8218-4419-9
  • Lax, Peter (1996), Linear algebra, Wiley-Interscience, ISBN 978-0-471-11111-5
  • Misner, Charles W.; Thorne, Kip S.; Wheeler, John A. (1973), Gravitation, W. H. Freeman, ISBN 0-7167-0344-0
  • Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
  • Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
  • Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
  • Schutz, Bernard (1985), "Chapter 3", A first course in general relativity, Cambridge, UK: Cambridge University Press, ISBN 0-521-27703-5
  • Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
  • Tu, Loring W. (2011), An Introduction to Manifolds, Universitext (2nd ed.), Springer, ISBN 978-0-8218-4419-9
  • Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.