Riesz representation theorem

Theorem about the dual of a Hilbert space

The Riesz representation theorem, sometimes called the Riesz–Fréchet representation theorem after Frigyes Riesz and Maurice René Fréchet, establishes an important connection between a Hilbert space and its continuous dual space. If the underlying field is the real numbers, the two are isometrically isomorphic; if the underlying field is the complex numbers, the two are isometrically anti-isomorphic. The (anti-) isomorphism is a particular natural isomorphism.

Preliminaries and notation

Let H {\displaystyle H} be a Hilbert space over a field F , {\displaystyle \mathbb {F} ,} where F {\displaystyle \mathbb {F} } is either the real numbers R {\displaystyle \mathbb {R} } or the complex numbers C . {\displaystyle \mathbb {C} .} If F = C {\displaystyle \mathbb {F} =\mathbb {C} } (resp. if F = R {\displaystyle \mathbb {F} =\mathbb {R} } ) then H {\displaystyle H} is called a complex Hilbert space (resp. a real Hilbert space). Every real Hilbert space can be extended to be a dense subset of a unique (up to bijective isometry) complex Hilbert space, called its complexification, which is why Hilbert spaces are often automatically assumed to be complex. Real and complex Hilbert spaces have in common many, but by no means all, properties and results/theorems.

This article is intended for both mathematicians and physicists and will describe the theorem for both. In both mathematics and physics, if a Hilbert space is assumed to be real (that is, if F = R {\displaystyle \mathbb {F} =\mathbb {R} } ) then this will usually be made clear. In mathematics, and especially in physics, unless indicated otherwise, "Hilbert space" is usually automatically assumed to mean "complex Hilbert space." Depending on the author, in mathematics, "Hilbert space" can mean either (1) a complex Hilbert space, or (2) a real or complex Hilbert space.

Linear and antilinear maps

By definition, an antilinear map (also called a conjugate-linear map) f : H Y {\displaystyle f:H\to Y} is a map between vector spaces that is additive:

f ( x + y ) = f ( x ) + f ( y )  for all  x , y H , {\displaystyle f(x+y)=f(x)+f(y)\quad {\text{ for all }}x,y\in H,}
and antilinear (also called conjugate-linear or conjugate-homogeneous):
f ( c x ) = c ¯ f ( x )  for all  x H  and all scalar  c F , {\displaystyle f(cx)={\overline {c}}f(x)\quad {\text{ for all }}x\in H{\text{ and all scalar }}c\in \mathbb {F} ,}
where c ¯ {\displaystyle {\overline {c}}} is the conjugate of the complex number c = a + b i {\displaystyle c=a+bi} , given by c ¯ = a − b i {\displaystyle {\overline {c}}=a-bi} .

In contrast, a map f : H Y {\displaystyle f:H\to Y} is linear if it is additive and homogeneous:

f ( c x ) = c f ( x )  for all  x H  and all scalars  c F . {\displaystyle f(cx)=cf(x)\quad {\text{ for all }}x\in H\quad {\text{ and all scalars }}c\in \mathbb {F} .}

Every constant 0 {\displaystyle 0} map is always both linear and antilinear. If F = R {\displaystyle \mathbb {F} =\mathbb {R} } then the definitions of linear maps and antilinear maps are completely identical. A linear map from a Hilbert space into a Banach space (or more generally, from any Banach space into any topological vector space) is continuous if and only if it is bounded; the same is true of antilinear maps. The inverse of any antilinear (resp. linear) bijection is again an antilinear (resp. linear) bijection. The composition of two antilinear maps is a linear map.
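
As a concrete illustration (a minimal sketch only, assuming Python with NumPy; the names A_mat, linear and antilinear are ad hoc), the following checks these definitions on C^2: multiplication by a fixed complex matrix is additive and homogeneous, while entrywise complex conjugation is additive but only conjugate-homogeneous.

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear map on C^2: multiplication by a fixed complex matrix A_mat.
A_mat = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
linear = lambda x: A_mat @ x

# An antilinear map on C^2: entrywise complex conjugation.
antilinear = np.conj

x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
y = rng.standard_normal(2) + 1j * rng.standard_normal(2)
c = 2.0 - 3.0j

# Both maps are additive.
assert np.allclose(linear(x + y), linear(x) + linear(y))
assert np.allclose(antilinear(x + y), antilinear(x) + antilinear(y))

# Homogeneity versus conjugate-homogeneity.
assert np.allclose(linear(c * x), c * linear(x))                    # f(cx) = c f(x)
assert np.allclose(antilinear(c * x), np.conj(c) * antilinear(x))   # f(cx) = conj(c) f(x)
```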

Continuous dual and anti-dual spaces

A functional on H {\displaystyle H} is a function H F {\displaystyle H\to \mathbb {F} } whose codomain is the underlying scalar field F . {\displaystyle \mathbb {F} .} Denote by H {\displaystyle H^{*}} (resp. by H ¯ ) {\displaystyle {\overline {H}}^{*})} the set of all continuous linear (resp. continuous antilinear) functionals on H , {\displaystyle H,} which is called the (continuous) dual space (resp. the (continuous) anti-dual space) of H . {\displaystyle H.} [1] If F = R {\displaystyle \mathbb {F} =\mathbb {R} } then linear functionals on H {\displaystyle H} are the same as antilinear functionals and consequently, the same is true for such continuous maps: that is, H = H ¯ . {\displaystyle H^{*}={\overline {H}}^{*}.}

One-to-one correspondence between linear and antilinear functionals

Given any functional f   :   H F , {\displaystyle f~:~H\to \mathbb {F} ,} the conjugate of f {\displaystyle f} is the functional

f ¯ : H F h f ( h ) ¯ . {\displaystyle {\begin{alignedat}{4}{\overline {f}}:\,&H&&\to \,&&\mathbb {F} \\&h&&\mapsto \,&&{\overline {f(h)}}.\\\end{alignedat}}}

This assignment is most useful when F = C {\displaystyle \mathbb {F} =\mathbb {C} } because if F = R {\displaystyle \mathbb {F} =\mathbb {R} } then f = f ¯ {\displaystyle f={\overline {f}}} and the assignment f f ¯ {\displaystyle f\mapsto {\overline {f}}} reduces down to the identity map.

The assignment f f ¯ {\displaystyle f\mapsto {\overline {f}}} defines an antilinear bijective correspondence from the set of

all functionals (resp. all linear functionals, all continuous linear functionals H {\displaystyle H^{*}} ) on H , {\displaystyle H,}

onto the set of

all functionals (resp. all antilinear functionals, all continuous antilinear functionals H ¯ {\displaystyle {\overline {H}}^{*}} ) on H . {\displaystyle H.}

Mathematics vs. physics notations and definitions of inner product

The Hilbert space H {\displaystyle H} has an associated inner product H × H F {\displaystyle H\times H\to \mathbb {F} } valued in H {\displaystyle H} 's underlying scalar field F {\displaystyle \mathbb {F} } that is linear in one coordinate and antilinear in the other (as described in detail below). If H {\displaystyle H} is a complex Hilbert space (meaning, if F = C {\displaystyle \mathbb {F} =\mathbb {C} } ), which is very often the case, then which coordinate is antilinear and which is linear becomes a very important technicality. However, if F = R {\displaystyle \mathbb {F} =\mathbb {R} } then the inner product is a symmetric map that is simultaneously linear in each coordinate (that is, bilinear) and antilinear in each coordinate. Consequently, the question of which coordinate is linear and which is antilinear is irrelevant for real Hilbert spaces.

Notation for the inner product

In mathematics, the inner product on a Hilbert space H {\displaystyle H} is often denoted by , {\displaystyle \left\langle \cdot ,\cdot \right\rangle } or , H {\displaystyle \left\langle \cdot ,\cdot \right\rangle _{H}} while in physics, the bra–ket notation {\displaystyle \left\langle \cdot \mid \cdot \right\rangle } or H {\displaystyle \left\langle \cdot \mid \cdot \right\rangle _{H}} is typically used instead. In this article, these two notations will be related by the equality:

x , y := y x  for all  x , y H . {\displaystyle \left\langle x,y\right\rangle :=\left\langle y\mid x\right\rangle \quad {\text{ for all }}x,y\in H.}

Competing definitions of the inner product

The maps , {\displaystyle \left\langle \cdot ,\cdot \right\rangle } and {\displaystyle \left\langle \cdot \mid \cdot \right\rangle } are assumed to have the following two properties:

  1. The map , {\displaystyle \left\langle \cdot ,\cdot \right\rangle } is linear in its first coordinate; equivalently, the map {\displaystyle \left\langle \cdot \mid \cdot \right\rangle } is linear in its second coordinate. Explicitly, this means that for every fixed y H , {\displaystyle y\in H,} the map that is denoted by y = , y : H F {\displaystyle \left\langle \,y\mid \cdot \,\right\rangle =\left\langle \,\cdot ,y\,\right\rangle :H\to \mathbb {F} } and defined by
    h y h = h , y  for all  h H {\displaystyle h\mapsto \left\langle \,y\mid h\,\right\rangle =\left\langle \,h,y\,\right\rangle \quad {\text{ for all }}h\in H}
    is a linear functional on H . {\displaystyle H.}
    • In fact, this linear functional is continuous, so y = , y H . {\displaystyle \left\langle \,y\mid \cdot \,\right\rangle =\left\langle \,\cdot ,y\,\right\rangle \in H^{*}.}
  2. The map , {\displaystyle \left\langle \cdot ,\cdot \right\rangle } is antilinear in its second coordinate; equivalently, the map {\displaystyle \left\langle \cdot \mid \cdot \right\rangle } is antilinear in its first coordinate. Explicitly, this means that for every fixed y H , {\displaystyle y\in H,} the map that is denoted by y = y , : H F {\displaystyle \left\langle \,\cdot \mid y\,\right\rangle =\left\langle \,y,\cdot \,\right\rangle :H\to \mathbb {F} } and defined by
    h h y = y , h  for all  h H {\displaystyle h\mapsto \left\langle \,h\mid y\,\right\rangle =\left\langle \,y,h\,\right\rangle \quad {\text{ for all }}h\in H}
    is an antilinear functional on H . {\displaystyle H.}
    • In fact, this antilinear functional is continuous, so y = y , H ¯ . {\displaystyle \left\langle \,\cdot \mid y\,\right\rangle =\left\langle \,y,\cdot \,\right\rangle \in {\overline {H}}^{*}.}

In mathematics, the prevailing convention (i.e. the definition of an inner product) is that the inner product is linear in the first coordinate and antilinear in the other coordinate. In physics, the convention/definition is unfortunately the opposite, meaning that the inner product is linear in the second coordinate and antilinear in the other coordinate. This article will not choose one definition over the other. Instead, the assumptions made above make it so that the mathematics notation , {\displaystyle \left\langle \cdot ,\cdot \right\rangle } satisfies the mathematical convention/definition for the inner product (that is, linear in the first coordinate and antilinear in the other), while the physics bra–ket notation | {\displaystyle \left\langle \cdot |\cdot \right\rangle } satisfies the physics convention/definition for the inner product (that is, linear in the second coordinate and antilinear in the other). Consequently, the above two assumptions makes the notation used in each field consistent with that field's convention/definition for which coordinate is linear and which is antilinear.

Canonical norm and inner product on the dual space and anti-dual space

If x = y {\displaystyle x=y} then x x = x , x {\displaystyle \langle \,x\mid x\,\rangle =\langle \,x,x\,\rangle } is a non-negative real number and the map

x := x , x = x x {\displaystyle \|x\|:={\sqrt {\langle x,x\rangle }}={\sqrt {\langle x\mid x\rangle }}}

defines a canonical norm on H {\displaystyle H} that makes H {\displaystyle H} into a normed space.[1] As with all normed spaces, the (continuous) dual space H {\displaystyle H^{*}} carries a canonical norm, called the dual norm, that is defined by[1]

f H   :=   sup x 1 , x H | f ( x ) |  for every  f H . {\displaystyle \|f\|_{H^{*}}~:=~\sup _{\|x\|\leq 1,x\in H}|f(x)|\quad {\text{ for every }}f\in H^{*}.}

The canonical norm on the (continuous) anti-dual space H ¯ , {\displaystyle {\overline {H}}^{*},} denoted by f H ¯ , {\displaystyle \|f\|_{{\overline {H}}^{*}},} is defined by using this same equation:[1]

f H ¯   :=   sup x 1 , x H | f ( x ) |  for every  f H ¯ . {\displaystyle \|f\|_{{\overline {H}}^{*}}~:=~\sup _{\|x\|\leq 1,x\in H}|f(x)|\quad {\text{ for every }}f\in {\overline {H}}^{*}.}

This canonical norm on H {\displaystyle H^{*}} satisfies the parallelogram law, which means that the polarization identity can be used to define a canonical inner product on H , {\displaystyle H^{*},} which this article will denote by the notations

f , g H := g f H , {\displaystyle \left\langle f,g\right\rangle _{H^{*}}:=\left\langle g\mid f\right\rangle _{H^{*}},}
where this inner product turns H {\displaystyle H^{*}} into a Hilbert space. There are now two ways of defining a norm on H : {\displaystyle H^{*}:} the norm induced by this inner product (that is, the norm defined by f f , f H {\displaystyle f\mapsto {\sqrt {\left\langle f,f\right\rangle _{H^{*}}}}} ) and the usual dual norm (defined as the supremum over the closed unit ball). These norms are the same; explicitly, this means that the following holds for every f H : {\displaystyle f\in H^{*}:}
sup x 1 , x H | f ( x ) | = f H   =   f , f H   =   f f H . {\displaystyle \sup _{\|x\|\leq 1,x\in H}|f(x)|=\|f\|_{H^{*}}~=~{\sqrt {\langle f,f\rangle _{H^{*}}}}~=~{\sqrt {\langle f\mid f\rangle _{H^{*}}}}.}

As will be described later, the Riesz representation theorem can be used to give an equivalent definition of the canonical norm and the canonical inner product on H . {\displaystyle H^{*}.}

The same equations that were used above can also be used to define a norm and inner product on H {\displaystyle H} 's anti-dual space H ¯ . {\displaystyle {\overline {H}}^{*}.} [1]

Canonical isometry between the dual and antidual

The complex conjugate f ¯ {\displaystyle {\overline {f}}} of a functional f , {\displaystyle f,} which was defined above, satisfies

f H   =   f ¯ H ¯  and  g ¯ H   =   g H ¯ {\displaystyle \|f\|_{H^{*}}~=~\left\|{\overline {f}}\right\|_{{\overline {H}}^{*}}\quad {\text{ and }}\quad \left\|{\overline {g}}\right\|_{H^{*}}~=~\|g\|_{{\overline {H}}^{*}}}
for every f H {\displaystyle f\in H^{*}} and every g H ¯ . {\displaystyle g\in {\overline {H}}^{*}.} This says exactly that the canonical antilinear bijection defined by
Cong : H H ¯ f f ¯ {\displaystyle {\begin{alignedat}{4}\operatorname {Cong} :\;&&H^{*}&&\;\to \;&{\overline {H}}^{*}\\[0.3ex]&&f&&\;\mapsto \;&{\overline {f}}\\\end{alignedat}}}
as well as its inverse Cong 1   :   H ¯ H {\displaystyle \operatorname {Cong} ^{-1}~:~{\overline {H}}^{*}\to H^{*}} are antilinear isometries and consequently also homeomorphisms. The inner products on the dual space H {\displaystyle H^{*}} and the anti-dual space H ¯ , {\displaystyle {\overline {H}}^{*},} denoted respectively by , H {\displaystyle \langle \,\cdot \,,\,\cdot \,\rangle _{H^{*}}} and , H ¯ , {\displaystyle \langle \,\cdot \,,\,\cdot \,\rangle _{{\overline {H}}^{*}},} are related by
f ¯ | g ¯ H ¯ = f | g H ¯ = g | f H  for all  f , g H {\displaystyle \langle \,{\overline {f}}\,|\,{\overline {g}}\,\rangle _{{\overline {H}}^{*}}={\overline {\langle \,f\,|\,g\,\rangle _{H^{*}}}}=\langle \,g\,|\,f\,\rangle _{H^{*}}\qquad {\text{ for all }}f,g\in H^{*}}
and
f ¯ | g ¯ H = f | g H ¯ ¯ = g | f H ¯  for all  f , g H ¯ . {\displaystyle \langle \,{\overline {f}}\,|\,{\overline {g}}\,\rangle _{H^{*}}={\overline {\langle \,f\,|\,g\,\rangle _{{\overline {H}}^{*}}}}=\langle \,g\,|\,f\,\rangle _{{\overline {H}}^{*}}\qquad {\text{ for all }}f,g\in {\overline {H}}^{*}.}

If F = R {\displaystyle \mathbb {F} =\mathbb {R} } then H = H ¯ {\displaystyle H^{*}={\overline {H}}^{*}} and this canonical map Cong : H H ¯ {\displaystyle \operatorname {Cong} :H^{*}\to {\overline {H}}^{*}} reduces down to the identity map.

Riesz representation theorem

Two vectors x {\displaystyle x} and y {\displaystyle y} are orthogonal if x , y = 0 , {\displaystyle \langle x,y\rangle =0,} which happens if and only if y y + s x {\displaystyle \|y\|\leq \|y+sx\|} for all scalars s . {\displaystyle s.} [2] The orthogonal complement of a subset X H {\displaystyle X\subseteq H} is

X := { y H : y , x = 0  for all  x X } , {\displaystyle X^{\bot }:=\{\,y\in H:\langle y,x\rangle =0{\text{ for all }}x\in X\,\},}
which is always a closed vector subspace of H . {\displaystyle H.} The Hilbert projection theorem guarantees that for any nonempty closed convex subset C {\displaystyle C} of a Hilbert space there exists a unique vector m C {\displaystyle m\in C} such that m = inf c C c ; {\displaystyle \|m\|=\inf _{c\in C}\|c\|;} that is, m C {\displaystyle m\in C} is the (unique) global minimum point of the function C [ 0 , ) {\displaystyle C\to [0,\infty )} defined by c c . {\displaystyle c\mapsto \|c\|.}

Statement

Riesz representation theorem — Let H {\displaystyle H} be a Hilbert space whose inner product x , y {\displaystyle \left\langle x,y\right\rangle } is linear in its first argument and antilinear in its second argument and let y x := x , y {\displaystyle \langle y\mid x\rangle :=\langle x,y\rangle } be the corresponding physics notation. For every continuous linear functional φ H , {\displaystyle \varphi \in H^{*},} there exists a unique vector f φ H , {\displaystyle f_{\varphi }\in H,} called the Riesz representation of φ , {\displaystyle \varphi ,} such that[3]

φ ( x ) = x , f φ = f φ x  for all  x H . {\displaystyle \varphi (x)=\left\langle x,f_{\varphi }\right\rangle =\left\langle f_{\varphi }\mid x\right\rangle \quad {\text{ for all }}x\in H.}

Importantly for complex Hilbert spaces, f φ {\displaystyle f_{\varphi }} is always located in the antilinear coordinate of the inner product.[note 1]

Furthermore, the length of the representation vector is equal to the norm of the functional:

f φ H = φ H , {\displaystyle \left\|f_{\varphi }\right\|_{H}=\|\varphi \|_{H^{*}},}
and f φ {\displaystyle f_{\varphi }} is the unique vector f φ ( ker φ ) {\displaystyle f_{\varphi }\in \left(\ker \varphi \right)^{\bot }} with φ ( f φ ) = φ 2 . {\displaystyle \varphi \left(f_{\varphi }\right)=\|\varphi \|^{2}.} It is also the unique element of minimum norm in C := φ 1 ( φ 2 ) {\displaystyle C:=\varphi ^{-1}\left(\|\varphi \|^{2}\right)} ; that is to say, f φ {\displaystyle f_{\varphi }} is the unique element of C {\displaystyle C} satisfying f φ = inf c C c . {\displaystyle \left\|f_{\varphi }\right\|=\inf _{c\in C}\|c\|.} Moreover, any non-zero q ( ker φ ) {\displaystyle q\in (\ker \varphi )^{\bot }} can be written as q = ( q 2 / φ ( q ) ¯ )   f φ . {\displaystyle q=\left(\|q\|^{2}/\,{\overline {\varphi (q)}}\right)\ f_{\varphi }.}
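
The statement can be illustrated numerically in the special case H = C^n with the standard inner product (a minimal sketch only, assuming Python with NumPy, whose np.vdot is antilinear in its first argument and so computes ⟨·∣·⟩; the names c, phi and f_phi are ad hoc): a functional φ(x) = c_1 x_1 + ⋯ + c_n x_n is represented by the conjugated coefficient vector, and the identities ‖f_φ‖ = ‖φ‖ and φ(f_φ) = ‖φ‖² can be checked directly.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# A continuous linear functional on C^n: phi(x) = c_1 x_1 + ... + c_n x_n.
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = lambda x: c @ x

# Its Riesz representation is the conjugated coefficient vector, so that
# phi(x) = <f_phi | x>; np.vdot is antilinear in its first argument.
f_phi = np.conj(c)

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.isclose(phi(x), np.vdot(f_phi, x))

# ||f_phi|| equals the dual norm ||phi|| (here, the Euclidean norm of c).
assert np.isclose(np.linalg.norm(f_phi), np.linalg.norm(c))

# phi(f_phi) = ||phi||^2 is real and non-negative.
assert np.isclose(phi(f_phi), np.linalg.norm(c) ** 2)
```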

Corollary — The canonical map from H {\displaystyle H} into its dual H {\displaystyle H^{*}} [1] is the injective antilinear operator isometry[note 2][1]

Φ : H H y , y = y | {\displaystyle {\begin{alignedat}{4}\Phi :\;&&H&&\;\to \;&H^{*}\\[0.3ex]&&y&&\;\mapsto \;&\langle \,\cdot \,,y\rangle =\langle y|\,\cdot \,\rangle \\\end{alignedat}}}
The Riesz representation theorem states that this map is surjective (and thus bijective) when H {\displaystyle H} is complete and that its inverse is the bijective isometric antilinear isomorphism
Φ 1 : H H φ f φ . {\displaystyle {\begin{alignedat}{4}\Phi ^{-1}:\;&&H^{*}&&\;\to \;&H\\[0.3ex]&&\varphi &&\;\mapsto \;&f_{\varphi }\\\end{alignedat}}.}
Consequently, every continuous linear functional on the Hilbert space H {\displaystyle H} can be written uniquely in the form y | {\displaystyle \langle y\,|\,\cdot \,\rangle } [1] where y | H = y H {\displaystyle \|\langle y\,|\cdot \rangle \|_{H^{*}}=\|y\|_{H}} for every y H . {\displaystyle y\in H.} The assignment y y , = | y {\displaystyle y\mapsto \langle y,\cdot \rangle =\langle \cdot \,|\,y\rangle } can also be viewed as a bijective linear isometry H H ¯ {\displaystyle H\to {\overline {H}}^{*}} into the anti-dual space of H , {\displaystyle H,} [1] which is the complex conjugate vector space of the continuous dual space H . {\displaystyle H^{*}.}

The inner products on H {\displaystyle H} and H {\displaystyle H^{*}} are related by

Φ h , Φ k H = h , k ¯ H = k , h H  for all  h , k H {\displaystyle \left\langle \Phi h,\Phi k\right\rangle _{H^{*}}={\overline {\langle h,k\rangle }}_{H}=\langle k,h\rangle _{H}\quad {\text{ for all }}h,k\in H}
and similarly,
Φ 1 φ , Φ 1 ψ H = φ , ψ ¯ H = ψ , φ H  for all  φ , ψ H . {\displaystyle \left\langle \Phi ^{-1}\varphi ,\Phi ^{-1}\psi \right\rangle _{H}={\overline {\langle \varphi ,\psi \rangle }}_{H^{*}}=\left\langle \psi ,\varphi \right\rangle _{H^{*}}\quad {\text{ for all }}\varphi ,\psi \in H^{*}.}

The set C := φ 1 ( φ 2 ) {\displaystyle C:=\varphi ^{-1}\left(\|\varphi \|^{2}\right)} satisfies C = f φ + ker φ {\displaystyle C=f_{\varphi }+\ker \varphi } and C f φ = ker φ {\displaystyle C-f_{\varphi }=\ker \varphi } so when f φ 0 {\displaystyle f_{\varphi }\neq 0} then C {\displaystyle C} can be interpreted as being the affine hyperplane[note 3] that is parallel to the vector subspace ker φ {\displaystyle \ker \varphi } and contains f φ . {\displaystyle f_{\varphi }.}

For y H , {\displaystyle y\in H,} the physics notation for the functional Φ ( y ) H {\displaystyle \Phi (y)\in H^{*}} is the bra y | , {\displaystyle \langle y|,} where explicitly this means that y | := Φ ( y ) , {\displaystyle \langle y|:=\Phi (y),} which complements the ket notation | y {\displaystyle |y\rangle } defined by | y := y . {\displaystyle |y\rangle :=y.} In the mathematical treatment of quantum mechanics, the theorem can be seen as a justification for the popular bra–ket notation. The theorem says that every bra ψ | {\displaystyle \langle \psi \,|} has a corresponding ket | ψ , {\displaystyle |\,\psi \rangle ,} and that the latter is unique.

Historically, the theorem is often attributed simultaneously to Riesz and Fréchet in 1907 (see references).

Proof[4]

Let F {\displaystyle \mathbb {F} } denote the underlying scalar field of H . {\displaystyle H.}

Proof of norm formula:

Fix y H . {\displaystyle y\in H.} Define Λ : H F {\displaystyle \Lambda :H\to \mathbb {F} } by Λ ( z ) := y | z , {\displaystyle \Lambda (z):=\langle \,y\,|\,z\,\rangle ,} which is a linear functional on H {\displaystyle H} since z {\displaystyle z} is in the linear argument. By the Cauchy–Schwarz inequality,

| Λ ( z ) | = | y | z | y z {\displaystyle |\Lambda (z)|=|\langle \,y\,|\,z\,\rangle |\leq \|y\|\|z\|}
which shows that Λ {\displaystyle \Lambda } is bounded (equivalently, continuous) and that Λ y . {\displaystyle \|\Lambda \|\leq \|y\|.} It remains to show that y Λ . {\displaystyle \|y\|\leq \|\Lambda \|.} By using y {\displaystyle y} in place of z , {\displaystyle z,} it follows that
y 2 = y | y = Λ y = | Λ ( y ) | Λ y {\displaystyle \|y\|^{2}=\langle \,y\,|\,y\,\rangle =\Lambda y=|\Lambda (y)|\leq \|\Lambda \|\|y\|}
(the equality Λ y = | Λ ( y ) | {\displaystyle \Lambda y=|\Lambda (y)|} holds because Λ y = y 2 0 {\displaystyle \Lambda y=\|y\|^{2}\geq 0} is real and non-negative). Thus Λ = y . {\displaystyle \|\Lambda \|=\|y\|.} {\displaystyle \blacksquare }

The proof above did not use the fact that H {\displaystyle H} is complete, which shows that the formula for the norm y | H = y H {\displaystyle \|\langle \,y\,|\,\cdot \,\rangle \|_{H^{*}}=\|y\|_{H}} holds more generally for all inner product spaces.
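
A quick numerical check of this norm formula (a sketch only, assuming Python with NumPy; the names y, Lam and z are ad hoc): the Cauchy–Schwarz bound holds for sampled vectors and is attained at z = y.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)
Lam = lambda z: np.vdot(y, z)          # Lam(z) = <y | z>

# Cauchy–Schwarz: |Lam(z)| <= ||y|| ||z|| for every z ...
for _ in range(100):
    z = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    assert abs(Lam(z)) <= np.linalg.norm(y) * np.linalg.norm(z) + 1e-12

# ... with equality at z = y, so the operator norm of Lam is exactly ||y||.
assert np.isclose(abs(Lam(y)), np.linalg.norm(y) ** 2)
```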


Proof that a Riesz representation of φ {\displaystyle \varphi } is unique:

Suppose f , g H {\displaystyle f,g\in H} are such that φ ( z ) = f | z {\displaystyle \varphi (z)=\langle \,f\,|\,z\,\rangle } and φ ( z ) = g | z {\displaystyle \varphi (z)=\langle \,g\,|\,z\,\rangle } for all z H . {\displaystyle z\in H.} Then

f g | z = f | z g | z = φ ( z ) φ ( z ) = 0  for all  z H {\displaystyle \langle \,f-g\,|\,z\,\rangle =\langle \,f\,|\,z\,\rangle -\langle \,g\,|\,z\,\rangle =\varphi (z)-\varphi (z)=0\quad {\text{ for all }}z\in H}
which shows that Λ := f g | {\displaystyle \Lambda :=\langle \,f-g\,|\,\cdot \,\rangle } is the constant 0 {\displaystyle 0} linear functional. Consequently 0 = f g | = f g , {\displaystyle 0=\|\langle \,f-g\,|\,\cdot \,\rangle \|=\|f-g\|,} which implies that f g = 0. {\displaystyle f-g=0.} {\displaystyle \blacksquare }


Proof that a vector f φ {\displaystyle f_{\varphi }} representing φ {\displaystyle \varphi } exists:

Let K := ker φ := { m H : φ ( m ) = 0 } . {\displaystyle K:=\ker \varphi :=\{m\in H:\varphi (m)=0\}.} If K = H {\displaystyle K=H} (or equivalently, if φ = 0 {\displaystyle \varphi =0} ) then taking f φ := 0 {\displaystyle f_{\varphi }:=0} completes the proof so assume that K H {\displaystyle K\neq H} and φ 0. {\displaystyle \varphi \neq 0.} The continuity of φ {\displaystyle \varphi } implies that K {\displaystyle K} is a closed subspace of H {\displaystyle H} (because K = φ 1 ( { 0 } ) {\displaystyle K=\varphi ^{-1}(\{0\})} and { 0 } {\displaystyle \{0\}} is a closed subset of F {\displaystyle \mathbb {F} } ). Let

K := { v H   :   v | k = 0    for all  k K } {\displaystyle K^{\bot }:=\{v\in H~:~\langle \,v\,|\,k\,\rangle =0~{\text{ for all }}k\in K\}}
denote the orthogonal complement of K {\displaystyle K} in H . {\displaystyle H.} Because K {\displaystyle K} is closed and H {\displaystyle H} is a Hilbert space,[note 4] H {\displaystyle H} can be written as the direct sum H = K K {\displaystyle H=K\oplus K^{\bot }} [note 5] (a proof of this is given in the article on the Hilbert projection theorem). Because K H , {\displaystyle K\neq H,} there exists some non-zero p K . {\displaystyle p\in K^{\bot }.} For any h H , {\displaystyle h\in H,}
φ [ ( φ h ) p ( φ p ) h ]   =   φ [ ( φ h ) p ] φ [ ( φ p ) h ]   =   ( φ h ) φ p ( φ p ) φ h = 0 , {\displaystyle \varphi [(\varphi h)p-(\varphi p)h]~=~\varphi [(\varphi h)p]-\varphi [(\varphi p)h]~=~(\varphi h)\varphi p-(\varphi p)\varphi h=0,}
which shows that ( φ h ) p ( φ p ) h     ker φ = K , {\displaystyle (\varphi h)p-(\varphi p)h~\in ~\ker \varphi =K,} where now p K {\displaystyle p\in K^{\bot }} implies
0 = p | ( φ h ) p ( φ p ) h   =   p | ( φ h ) p p | ( φ p ) h   =   ( φ h ) p | p ( φ p ) p | h . {\displaystyle 0=\langle \,p\,|\,(\varphi h)p-(\varphi p)h\,\rangle ~=~\langle \,p\,|\,(\varphi h)p\,\rangle -\langle \,p\,|\,(\varphi p)h\,\rangle ~=~(\varphi h)\langle \,p\,|\,p\,\rangle -(\varphi p)\langle \,p\,|\,h\,\rangle .}
Solving for φ h {\displaystyle \varphi h} shows that
φ h = ( φ p ) p | h p 2 = φ p ¯ p 2 p | h  for every  h H , {\displaystyle \varphi h={\frac {(\varphi p)\langle \,p\,|\,h\,\rangle }{\|p\|^{2}}}=\left\langle \,{\frac {\overline {\varphi p}}{\|p\|^{2}}}p\,{\Bigg |}\,h\,\right\rangle \quad {\text{ for every }}h\in H,}
which proves that the vector f φ := φ p ¯ p 2 p {\displaystyle f_{\varphi }:={\frac {\overline {\varphi p}}{\|p\|^{2}}}p} satisfies φ h = f φ | h  for every  h H . {\displaystyle \varphi h=\langle \,f_{\varphi }\,|\,h\,\rangle {\text{ for every }}h\in H.}

Applying the norm formula that was proved above with y := f φ {\displaystyle y:=f_{\varphi }} shows that φ H = f φ | H = f φ H . {\displaystyle \|\varphi \|_{H^{*}}=\left\|\left\langle \,f_{\varphi }\,|\,\cdot \,\right\rangle \right\|_{H^{*}}=\left\|f_{\varphi }\right\|_{H}.} Also, the vector u := p p {\displaystyle u:={\frac {p}{\|p\|}}} has norm u = 1 {\displaystyle \|u\|=1} and satisfies f φ := φ ( u ) ¯ u . {\displaystyle f_{\varphi }:={\overline {\varphi (u)}}u.} {\displaystyle \blacksquare }


It can now be deduced that K {\displaystyle K^{\bot }} is 1 {\displaystyle 1} -dimensional when φ 0. {\displaystyle \varphi \neq 0.} Let q K {\displaystyle q\in K^{\bot }} be any non-zero vector. Replacing p {\displaystyle p} with q {\displaystyle q} in the proof above shows that the vector g := φ q ¯ q 2 q {\displaystyle g:={\frac {\overline {\varphi q}}{\|q\|^{2}}}q} satisfies φ ( h ) = g | h {\displaystyle \varphi (h)=\langle \,g\,|\,h\,\rangle } for every h H . {\displaystyle h\in H.} The uniqueness of the (non-zero) vector f φ {\displaystyle f_{\varphi }} representing φ {\displaystyle \varphi } implies that f φ = g , {\displaystyle f_{\varphi }=g,} which in turn implies that φ q ¯ 0 {\displaystyle {\overline {\varphi q}}\neq 0} and q = q 2 φ q ¯ f φ . {\displaystyle q={\frac {\|q\|^{2}}{\overline {\varphi q}}}f_{\varphi }.} Thus every vector in K {\displaystyle K^{\bot }} is a scalar multiple of f φ . {\displaystyle f_{\varphi }.} {\displaystyle \blacksquare }
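
This construction can be tested in the special case H = C^n (a sketch only, assuming Python with NumPy; the names c, phi and p are ad hoc): there (ker φ)^⊥ is the line spanned by the conjugated coefficient vector of φ, and the formula from the proof returns the same representing vector for every nonzero p on that line.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = lambda x: c @ x                       # nonzero functional; ker(phi) = {x : c @ x = 0}

# (ker phi)^perp is the line spanned by conj(c); take any nonzero p on it.
p = (2.0 + 5.0j) * np.conj(c)

# Construction from the proof: f_phi = conj(phi(p)) / ||p||^2 * p.
f_phi = np.conj(phi(p)) / np.linalg.norm(p) ** 2 * p

# f_phi represents phi and does not depend on the choice of p.
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.isclose(phi(x), np.vdot(f_phi, x))
assert np.allclose(f_phi, np.conj(c))
```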

The formulas for the inner products follow from the polarization identity.

Observations

If φ H {\displaystyle \varphi \in H^{*}} then

φ ( f φ ) = f φ , f φ = f φ 2 = φ 2 . {\displaystyle \varphi \left(f_{\varphi }\right)=\left\langle f_{\varphi },f_{\varphi }\right\rangle =\left\|f_{\varphi }\right\|^{2}=\|\varphi \|^{2}.}
So in particular, φ ( f φ ) 0 {\displaystyle \varphi \left(f_{\varphi }\right)\geq 0} is always real and furthermore, φ ( f φ ) = 0 {\displaystyle \varphi \left(f_{\varphi }\right)=0} if and only if f φ = 0 {\displaystyle f_{\varphi }=0} if and only if φ = 0. {\displaystyle \varphi =0.}

Linear functionals as affine hyperplanes

A non-trivial continuous linear functional φ {\displaystyle \varphi } is often interpreted geometrically by identifying it with the affine hyperplane A := φ 1 ( 1 ) {\displaystyle A:=\varphi ^{-1}(1)} (the kernel ker φ = φ 1 ( 0 ) {\displaystyle \ker \varphi =\varphi ^{-1}(0)} is also often visualized alongside A := φ 1 ( 1 ) {\displaystyle A:=\varphi ^{-1}(1)} although knowing A {\displaystyle A} is enough to reconstruct ker φ {\displaystyle \ker \varphi } because if A = {\displaystyle A=\varnothing } then ker φ = H {\displaystyle \ker \varphi =H} and otherwise ker φ = A A {\displaystyle \ker \varphi =A-A} ). In particular, the norm of φ {\displaystyle \varphi } should somehow be interpretable as the "norm of the hyperplane A {\displaystyle A} ". When φ 0 {\displaystyle \varphi \neq 0} then the Riesz representation theorem provides such an interpretation of φ {\displaystyle \|\varphi \|} in terms of the affine hyperplane[note 3] A := φ 1 ( 1 ) {\displaystyle A:=\varphi ^{-1}(1)} as follows: using the notation from the theorem's statement, from φ 2 0 {\displaystyle \|\varphi \|^{2}\neq 0} it follows that C := φ 1 ( φ 2 ) = φ 2 φ 1 ( 1 ) = φ 2 A {\displaystyle C:=\varphi ^{-1}\left(\|\varphi \|^{2}\right)=\|\varphi \|^{2}\varphi ^{-1}(1)=\|\varphi \|^{2}A} and so φ = f φ = inf c C c {\displaystyle \|\varphi \|=\left\|f_{\varphi }\right\|=\inf _{c\in C}\|c\|} implies φ = inf a A φ 2 a {\displaystyle \|\varphi \|=\inf _{a\in A}\|\varphi \|^{2}\|a\|} and thus φ = 1 inf a A a . {\displaystyle \|\varphi \|={\frac {1}{\inf _{a\in A}\|a\|}}.} This can also be seen by applying the Hilbert projection theorem to A {\displaystyle A} and concluding that the global minimum point of the map A [ 0 , ) {\displaystyle A\to [0,\infty )} defined by a a {\displaystyle a\mapsto \|a\|} is f φ φ 2 A . {\displaystyle {\frac {f_{\varphi }}{\|\varphi \|^{2}}}\in A.} The formulas

1 inf a A a = sup a A 1 a {\displaystyle {\frac {1}{\inf _{a\in A}\|a\|}}=\sup _{a\in A}{\frac {1}{\|a\|}}}
provide the promised interpretation of the linear functional's norm φ {\displaystyle \|\varphi \|} entirely in terms of its associated affine hyperplane A = φ 1 ( 1 ) {\displaystyle A=\varphi ^{-1}(1)} (because with this formula, knowing only the set A {\displaystyle A} is enough to describe the norm of its associated linear functional). Defining 1 := 0 , {\displaystyle {\frac {1}{\infty }}:=0,} the infimum formula
φ = 1 inf a φ 1 ( 1 ) a {\displaystyle \|\varphi \|={\frac {1}{\inf _{a\in \varphi ^{-1}(1)}\|a\|}}}
will also hold when φ = 0. {\displaystyle \varphi =0.} When the supremum is taken in R {\displaystyle \mathbb {R} } (as is typically assumed), then the supremum of the empty set is sup = {\displaystyle \sup \varnothing =-\infty } but if the supremum is taken in the non-negative reals [ 0 , ) {\displaystyle [0,\infty )} (which is the image/range of the norm {\displaystyle \|\,\cdot \,\|} when dim H > 0 {\displaystyle \dim H>0} ) then this supremum is instead sup = 0 , {\displaystyle \sup \varnothing =0,} in which case the supremum formula φ = sup a φ 1 ( 1 ) 1 a {\displaystyle \|\varphi \|=\sup _{a\in \varphi ^{-1}(1)}{\frac {1}{\|a\|}}} will also hold when φ = 0 {\displaystyle \varphi =0} (although the atypical equality sup = 0 {\displaystyle \sup \varnothing =0} is usually unexpected and so risks causing confusion).
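
This interpretation can be checked numerically for H = C^n (a sketch only, assuming Python with NumPy; the names c, phi, a0 and k are ad hoc): the minimum-norm point of A = φ^{-1}(1) is f_φ / ‖φ‖², so 1 / inf_{a ∈ A} ‖a‖ recovers ‖φ‖.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = lambda x: c @ x
f_phi = np.conj(c)
norm_phi = np.linalg.norm(c)                 # = ||f_phi|| = ||phi||

# The minimum-norm point of the affine hyperplane A = phi^{-1}(1) is f_phi / ||phi||^2 ...
a0 = f_phi / norm_phi ** 2
assert np.isclose(phi(a0), 1.0)
# ... so ||phi|| = 1 / inf_{a in A} ||a|| = 1 / ||a0||.
assert np.isclose(norm_phi, 1.0 / np.linalg.norm(a0))

# Every other point of A is at least as far from the origin.
for _ in range(100):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    k = x - (phi(x) / norm_phi ** 2) * np.conj(c)   # orthogonal projection of x onto ker(phi)
    a = a0 + k                                      # lies in A because phi(k) = 0
    assert np.linalg.norm(a) >= np.linalg.norm(a0) - 1e-12
```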

Constructions of the representing vector

Using the notation from the theorem above, several ways of constructing f φ {\displaystyle f_{\varphi }} from φ H {\displaystyle \varphi \in H^{*}} are now described. If φ = 0 {\displaystyle \varphi =0} then f φ := 0 {\displaystyle f_{\varphi }:=0} ; in other words,

f 0 = 0. {\displaystyle f_{0}=0.}

This special case of φ = 0 {\displaystyle \varphi =0} is henceforth assumed to be known, which is why some of the constructions given below start by assuming φ 0. {\displaystyle \varphi \neq 0.}

Orthogonal complement of kernel

If φ 0 {\displaystyle \varphi \neq 0} then for any 0 u ( ker φ ) , {\displaystyle 0\neq u\in (\ker \varphi )^{\bot },}

f φ := φ ( u ) ¯ u u 2 . {\displaystyle f_{\varphi }:={\frac {{\overline {\varphi (u)}}u}{\|u\|^{2}}}.}

If u ( ker φ ) {\displaystyle u\in (\ker \varphi )^{\bot }} is a unit vector (meaning u = 1 {\displaystyle \|u\|=1} ) then

f φ := φ ( u ) ¯ u {\displaystyle f_{\varphi }:={\overline {\varphi (u)}}u}
(this is true even if φ = 0 {\displaystyle \varphi =0} because in this case f φ = φ ( u ) ¯ u = 0 ¯ u = 0 {\displaystyle f_{\varphi }={\overline {\varphi (u)}}u={\overline {0}}u=0} ). If u {\displaystyle u} is a unit vector satisfying the above condition then the same is true of u , {\displaystyle -u,} which is also a unit vector in ( ker φ ) . {\displaystyle (\ker \varphi )^{\bot }.} However, φ ( u ) ¯ ( u ) = φ ( u ) ¯ u = f φ {\displaystyle {\overline {\varphi (-u)}}(-u)={\overline {\varphi (u)}}u=f_{\varphi }} so both these vectors result in the same f φ . {\displaystyle f_{\varphi }.}

Orthogonal projection onto kernel

If x H {\displaystyle x\in H} is such that φ ( x ) 0 {\displaystyle \varphi (x)\neq 0} and if x K {\displaystyle x_{K}} is the orthogonal projection of x {\displaystyle x} onto ker φ {\displaystyle \ker \varphi } then[proof 1]

f φ = φ 2 φ ( x ) ( x x K ) . {\displaystyle f_{\varphi }={\frac {\|\varphi \|^{2}}{\varphi (x)}}\left(x-x_{K}\right).}
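
For H = C^n this construction can be verified directly (a sketch only, assuming Python with NumPy; the names c, phi, x and x_K are ad hoc), using the explicit orthogonal projection onto ker φ:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = lambda v: c @ v
norm_phi_sq = np.linalg.norm(c) ** 2         # ||phi||^2

# Pick any x with phi(x) != 0 and project it orthogonally onto ker(phi).
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert abs(phi(x)) > 1e-9
x_K = x - (phi(x) / norm_phi_sq) * np.conj(c)    # orthogonal projection onto ker(phi)
assert np.isclose(phi(x_K), 0.0)

# Formula from this subsection: f_phi = ||phi||^2 / phi(x) * (x - x_K).
f_phi = norm_phi_sq / phi(x) * (x - x_K)
assert np.allclose(f_phi, np.conj(c))            # agrees with the known representation
```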

Orthonormal basis

Given an orthonormal basis { e i } i I {\displaystyle \left\{e_{i}\right\}_{i\in I}} of H {\displaystyle H} and a continuous linear functional φ H , {\displaystyle \varphi \in H^{*},} the vector f φ H {\displaystyle f_{\varphi }\in H} can be constructed uniquely by

f φ = i I φ ( e i ) ¯ e i {\displaystyle f_{\varphi }=\sum _{i\in I}{\overline {\varphi \left(e_{i}\right)}}e_{i}}
where all but at most countably many φ ( e i ) {\displaystyle \varphi \left(e_{i}\right)} will be equal to 0 {\displaystyle 0} and where the value of f φ {\displaystyle f_{\varphi }} does not actually depend on choice of orthonormal basis (that is, using any other orthonormal basis for H {\displaystyle H} will result in the same vector). If y H {\displaystyle y\in H} is written as y = i I a i e i {\displaystyle y=\sum _{i\in I}a_{i}e_{i}} then
φ ( y ) = i I φ ( e i ) a i = f φ | y {\displaystyle \varphi (y)=\sum _{i\in I}\varphi \left(e_{i}\right)a_{i}=\langle f_{\varphi }|y\rangle }
and
f φ 2 = φ ( f φ ) = i I φ ( e i ) φ ( e i ) ¯ = i I | φ ( e i ) | 2 = φ 2 . {\displaystyle \left\|f_{\varphi }\right\|^{2}=\varphi \left(f_{\varphi }\right)=\sum _{i\in I}\varphi \left(e_{i}\right){\overline {\varphi \left(e_{i}\right)}}=\sum _{i\in I}\left|\varphi \left(e_{i}\right)\right|^{2}=\|\varphi \|^{2}.}

If the orthonormal basis { e i } i I = { e i } i = 1 {\displaystyle \left\{e_{i}\right\}_{i\in I}=\left\{e_{i}\right\}_{i=1}^{\infty }} is a sequence then this becomes

f φ = φ ( e 1 ) ¯ e 1 + φ ( e 2 ) ¯ e 2 + {\displaystyle f_{\varphi }={\overline {\varphi \left(e_{1}\right)}}e_{1}+{\overline {\varphi \left(e_{2}\right)}}e_{2}+\cdots }
and if y H {\displaystyle y\in H} is written as y = i I a i e i = a 1 e 1 + a 2 e 2 + {\displaystyle y=\sum _{i\in I}a_{i}e_{i}=a_{1}e_{1}+a_{2}e_{2}+\cdots } then
φ ( y ) = φ ( e 1 ) a 1 + φ ( e 2 ) a 2 + = f φ | y . {\displaystyle \varphi (y)=\varphi \left(e_{1}\right)a_{1}+\varphi \left(e_{2}\right)a_{2}+\cdots =\langle f_{\varphi }|y\rangle .}
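
The basis-independence of this construction can be checked numerically for H = C^n (a sketch only, assuming Python with NumPy; the names c, phi, E and Q are ad hoc), comparing the standard basis with the columns of a random unitary matrix:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = lambda v: c @ v

# Standard orthonormal basis e_1, ..., e_n of C^n.
E = np.eye(n, dtype=complex)
f_std = sum(np.conj(phi(E[:, i])) * E[:, i] for i in range(n))

# Another orthonormal basis: the columns of a random unitary matrix Q.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
f_other = sum(np.conj(phi(Q[:, i])) * Q[:, i] for i in range(n))

# The sum does not depend on the chosen orthonormal basis ...
assert np.allclose(f_std, f_other)
# ... and ||f_phi||^2 = sum_i |phi(e_i)|^2 = ||phi||^2.
assert np.isclose(np.linalg.norm(f_std) ** 2,
                  sum(abs(phi(E[:, i])) ** 2 for i in range(n)))
```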

Example in finite dimensions using matrix transformations

Consider the special case of H = C n {\displaystyle H=\mathbb {C} ^{n}} (where n > 0 {\displaystyle n>0} is an integer) with the standard inner product

z w := z ¯ T w  for all  w , z H {\displaystyle \langle z\mid w\rangle :={\overline {\,{\vec {z}}\,\,}}^{\operatorname {T} }{\vec {w}}\qquad {\text{ for all }}\;w,z\in H}
where w  and  z {\displaystyle w{\text{ and }}z} are represented as column matrices w := [ w 1 w n ] {\displaystyle {\vec {w}}:={\begin{bmatrix}w_{1}\\\vdots \\w_{n}\end{bmatrix}}} and z := [ z 1 z n ] {\displaystyle {\vec {z}}:={\begin{bmatrix}z_{1}\\\vdots \\z_{n}\end{bmatrix}}} with respect to the standard orthonormal basis e 1 , , e n {\displaystyle e_{1},\ldots ,e_{n}} on H {\displaystyle H} (here, e i {\displaystyle e_{i}} is 1 {\displaystyle 1} at its i {\displaystyle i} th coordinate and 0 {\displaystyle 0} everywhere else; as usual, H {\displaystyle H^{*}} will now be associated with the dual basis) and where z ¯ T := [ z 1 ¯ , , z n ¯ ] {\displaystyle {\overline {\,{\vec {z}}\,}}^{\operatorname {T} }:=\left[{\overline {z_{1}}},\ldots ,{\overline {z_{n}}}\right]} denotes the conjugate transpose of z . {\displaystyle {\vec {z}}.} Let φ H {\displaystyle \varphi \in H^{*}} be any linear functional and let φ 1 , , φ n C {\displaystyle \varphi _{1},\ldots ,\varphi _{n}\in \mathbb {C} } be the unique scalars such that
φ ( w 1 , , w n ) = φ 1 w 1 + + φ n w n  for all  w := ( w 1 , , w n ) H , {\displaystyle \varphi \left(w_{1},\ldots ,w_{n}\right)=\varphi _{1}w_{1}+\cdots +\varphi _{n}w_{n}\qquad {\text{ for all }}\;w:=\left(w_{1},\ldots ,w_{n}\right)\in H,}
where it can be shown that φ i = φ ( e i ) {\displaystyle \varphi _{i}=\varphi \left(e_{i}\right)} for all i = 1 , , n . {\displaystyle i=1,\ldots ,n.} Then the Riesz representation of φ {\displaystyle \varphi } is the vector
f φ   :=   φ 1 ¯ e 1 + + φ n ¯ e n   =   ( φ 1 ¯ , , φ n ¯ ) H . {\displaystyle f_{\varphi }~:=~{\overline {\varphi _{1}}}e_{1}+\cdots +{\overline {\varphi _{n}}}e_{n}~=~\left({\overline {\varphi _{1}}},\ldots ,{\overline {\varphi _{n}}}\right)\in H.}
To see why, identify every vector w = ( w 1 , , w n ) {\displaystyle w=\left(w_{1},\ldots ,w_{n}\right)} in H {\displaystyle H} with the column matrix w := [ w 1 w n ] {\displaystyle {\vec {w}}:={\begin{bmatrix}w_{1}\\\vdots \\w_{n}\end{bmatrix}}} so that f φ {\displaystyle f_{\varphi }} is identified with f φ := [ φ 1 ¯ φ n ¯ ] = [ φ ( e 1 ) ¯ φ ( e n ) ¯ ] . {\displaystyle {\vec {f_{\varphi }}}:={\begin{bmatrix}{\overline {\varphi _{1}}}\\\vdots \\{\overline {\varphi _{n}}}\end{bmatrix}}={\begin{bmatrix}{\overline {\varphi \left(e_{1}\right)}}\\\vdots \\{\overline {\varphi \left(e_{n}\right)}}\end{bmatrix}}.} As usual, also identify the linear functional φ {\displaystyle \varphi } with its transformation matrix, which is the row matrix φ := [ φ 1 , , φ n ] {\displaystyle {\vec {\varphi }}:=\left[\varphi _{1},\ldots ,\varphi _{n}\right]} so that f φ := φ ¯ T {\displaystyle {\vec {f_{\varphi }}}:={\overline {\,{\vec {\varphi }}\,\,}}^{\operatorname {T} }} and the function φ {\displaystyle \varphi } is the assignment w φ w , {\displaystyle {\vec {w}}\mapsto {\vec {\varphi }}\,{\vec {w}},} where the right hand side is matrix multiplication. Then for all w = ( w 1 , , w n ) H , {\displaystyle w=\left(w_{1},\ldots ,w_{n}\right)\in H,}
φ ( w ) = φ 1 w 1 + + φ n w n = [ φ 1 , , φ n ] [ w 1 w n ] = [ φ 1 ¯ φ n ¯ ] ¯ T w = f φ ¯ T w = f φ w , {\displaystyle \varphi (w)=\varphi _{1}w_{1}+\cdots +\varphi _{n}w_{n}=\left[\varphi _{1},\ldots ,\varphi _{n}\right]{\begin{bmatrix}w_{1}\\\vdots \\w_{n}\end{bmatrix}}={\overline {\begin{bmatrix}{\overline {\varphi _{1}}}\\\vdots \\{\overline {\varphi _{n}}}\end{bmatrix}}}^{\operatorname {T} }{\vec {w}}={\overline {\,{\vec {f_{\varphi }}}\,\,}}^{\operatorname {T} }{\vec {w}}=\left\langle \,\,f_{\varphi }\,\mid \,w\,\right\rangle ,}
which shows that f φ {\displaystyle f_{\varphi }} satisfies the defining condition of the Riesz representation of φ . {\displaystyle \varphi .} The bijective antilinear isometry Φ : H H {\displaystyle \Phi :H\to H^{*}} defined in the corollary to the Riesz representation theorem is the assignment that sends z = ( z 1 , , z n ) H {\displaystyle z=\left(z_{1},\ldots ,z_{n}\right)\in H} to the linear functional Φ ( z ) H {\displaystyle \Phi (z)\in H^{*}} on H {\displaystyle H} defined by
w = ( w 1 , , w n )     z w = z 1 ¯ w 1 + + z n ¯ w n , {\displaystyle w=\left(w_{1},\ldots ,w_{n}\right)~\mapsto ~\langle \,z\,\mid \,w\,\rangle ={\overline {z_{1}}}w_{1}+\cdots +{\overline {z_{n}}}w_{n},}
where under the identification of vectors in H {\displaystyle H} with column matrices and vectors in H {\displaystyle H^{*}} with row matrices, Φ {\displaystyle \Phi } is just the assignment
z = [ z 1 z n ]     z ¯ T = [ z 1 ¯ , , z n ¯ ] . {\displaystyle {\vec {z}}={\begin{bmatrix}z_{1}\\\vdots \\z_{n}\end{bmatrix}}~\mapsto ~{\overline {\,{\vec {z}}\,}}^{\operatorname {T} }=\left[{\overline {z_{1}}},\ldots ,{\overline {z_{n}}}\right].}
As described in the corollary, Φ {\displaystyle \Phi } 's inverse Φ 1 : H H {\displaystyle \Phi ^{-1}:H^{*}\to H} is the antilinear isometry φ f φ , {\displaystyle \varphi \mapsto f_{\varphi },} which was just shown above to be:
φ     f φ   :=   ( φ ( e 1 ) ¯ , , φ ( e n ) ¯ ) ; {\displaystyle \varphi ~\mapsto ~f_{\varphi }~:=~\left({\overline {\varphi \left(e_{1}\right)}},\ldots ,{\overline {\varphi \left(e_{n}\right)}}\right);}
where in terms of matrices, Φ 1 {\displaystyle \Phi ^{-1}} is the assignment
φ = [ φ 1 , , φ n ]     φ ¯ T = [ φ 1 ¯ φ n ¯ ] . {\displaystyle {\vec {\varphi }}=\left[\varphi _{1},\ldots ,\varphi _{n}\right]~\mapsto ~{\overline {\,{\vec {\varphi }}\,\,}}^{\operatorname {T} }={\begin{bmatrix}{\overline {\varphi _{1}}}\\\vdots \\{\overline {\varphi _{n}}}\end{bmatrix}}.}
Thus in terms of matrices, each of Φ : H H {\displaystyle \Phi :H\to H^{*}} and Φ 1 : H H {\displaystyle \Phi ^{-1}:H^{*}\to H} is just the operation of conjugate transposition v v ¯ T {\displaystyle {\vec {v}}\mapsto {\overline {\,{\vec {v}}\,}}^{\operatorname {T} }} (although between different spaces of matrices: if H {\displaystyle H} is identified with the space of all column (respectively, row) matrices then H {\displaystyle H^{*}} is identified with the space of all row (respectively, column) matrices).

This example used the standard inner product, which is the map z w := z ¯ T w , {\displaystyle \langle z\mid w\rangle :={\overline {\,{\vec {z}}\,\,}}^{\operatorname {T} }{\vec {w}},} but if a different inner product is used, such as z w M := z ¯ T M w {\displaystyle \langle z\mid w\rangle _{M}:={\overline {\,{\vec {z}}\,\,}}^{\operatorname {T} }\,M\,{\vec {w}}\,} where M {\displaystyle M} is any Hermitian positive-definite matrix, or if a different orthonormal basis is used then the transformation matrices, and thus also the above formulas, will be different.
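
The following sketch illustrates this example and its closing remark (assuming Python with NumPy; the names z, phi_row, M and f_M are ad hoc, and the representation for the M-weighted inner product shown below is a derived consequence rather than a statement from the example above): with the standard inner product both Φ and Φ^{-1} act as conjugate transposition, while for ⟨z∣w⟩_M the representing vector changes.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3

# Standard inner product <z | w> = conj(z)^T w: Phi and Phi^{-1} are conjugate transposition.
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)     # a vector in H = C^n
phi_row = np.conj(z)                                         # Phi(z), viewed as a row matrix
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.isclose(phi_row @ w, np.vdot(z, w))                # Phi(z)(w) = <z | w>
assert np.allclose(np.conj(phi_row), z)                      # Phi^{-1} conjugate-transposes back

# Weighted inner product <z | w>_M = conj(z)^T M w with M Hermitian positive-definite:
# the vector representing w -> phi_row @ w is now M^{-1} conj(phi_row) instead.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = B.conj().T @ B + n * np.eye(n)                           # Hermitian positive-definite
f_M = np.linalg.solve(M, np.conj(phi_row))                   # M^{-1} conj(phi_row)
assert np.isclose(phi_row @ w, np.vdot(f_M, M @ w))          # phi(w) = <f_M | w>_M
```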

Relationship with the associated real Hilbert space

Assume that H {\displaystyle H} is a complex Hilbert space with inner product . {\displaystyle \langle \,\cdot \mid \cdot \,\rangle .} When the Hilbert space H {\displaystyle H} is reinterpreted as a real Hilbert space then it will be denoted by H R , {\displaystyle H_{\mathbb {R} },} where the (real) inner-product on H R {\displaystyle H_{\mathbb {R} }} is the real part of H {\displaystyle H} 's inner product; that is:

x , y R := re x , y . {\displaystyle \langle x,y\rangle _{\mathbb {R} }:=\operatorname {re} \langle x,y\rangle .}

The norm on H R {\displaystyle H_{\mathbb {R} }} induced by , R {\displaystyle \langle \,\cdot \,,\,\cdot \,\rangle _{\mathbb {R} }} is equal to the original norm on H {\displaystyle H} and the continuous dual space of H R {\displaystyle H_{\mathbb {R} }} is the set of all real-valued bounded R {\displaystyle \mathbb {R} } -linear functionals on H R {\displaystyle H_{\mathbb {R} }} (see the article about the polarization identity for additional details about this relationship). Let ψ R := re ψ {\displaystyle \psi _{\mathbb {R} }:=\operatorname {re} \psi } and ψ i := im ψ {\displaystyle \psi _{i}:=\operatorname {im} \psi } denote the real and imaginary parts of a linear functional ψ , {\displaystyle \psi ,} so that ψ = re ψ + i im ψ = ψ R + i ψ i . {\displaystyle \psi =\operatorname {re} \psi +i\operatorname {im} \psi =\psi _{\mathbb {R} }+i\psi _{i}.} The formula expressing a linear functional in terms of its real part is

ψ ( h ) = ψ R ( h ) i ψ R ( i h )  for  h H , {\displaystyle \psi (h)=\psi _{\mathbb {R} }(h)-i\psi _{\mathbb {R} }(ih)\quad {\text{ for }}h\in H,}
where ψ i ( h ) = i ψ R ( i h ) {\displaystyle \psi _{i}(h)=-i\psi _{\mathbb {R} }(ih)} for all h H . {\displaystyle h\in H.} It follows that ker ψ R = ψ 1 ( i R ) , {\displaystyle \ker \psi _{\mathbb {R} }=\psi ^{-1}(i\mathbb {R} ),} and that ψ = 0 {\displaystyle \psi =0} if and only if ψ R = 0. {\displaystyle \psi _{\mathbb {R} }=0.} It can also be shown that ψ = ψ R = ψ i {\displaystyle \|\psi \|=\left\|\psi _{\mathbb {R} }\right\|=\left\|\psi _{i}\right\|} where ψ R := sup h 1 | ψ R ( h ) | {\displaystyle \left\|\psi _{\mathbb {R} }\right\|:=\sup _{\|h\|\leq 1}\left|\psi _{\mathbb {R} }(h)\right|} and ψ i := sup h 1 | ψ i ( h ) | {\displaystyle \left\|\psi _{i}\right\|:=\sup _{\|h\|\leq 1}\left|\psi _{i}(h)\right|} are the usual operator norms. In particular, a linear functional ψ {\displaystyle \psi } is bounded if and only if its real part ψ R {\displaystyle \psi _{\mathbb {R} }} is bounded.

Representing a functional and its real part

The Riesz representation of a continuous linear functional φ {\displaystyle \varphi } on a complex Hilbert space is equal to the Riesz representation of its real part re φ {\displaystyle \operatorname {re} \varphi } on its associated real Hilbert space.

Explicitly, let φ H {\displaystyle \varphi \in H^{*}} and as above, let f φ H {\displaystyle f_{\varphi }\in H} be the Riesz representation of φ {\displaystyle \varphi } obtained in ( H , , ) , {\displaystyle (H,\langle \cdot ,\cdot \rangle ),} so it is the unique vector that satisfies φ ( x ) = f φ x {\displaystyle \varphi (x)=\left\langle f_{\varphi }\mid x\right\rangle } for all x H . {\displaystyle x\in H.} The real part of φ {\displaystyle \varphi } is a continuous real linear functional on H R {\displaystyle H_{\mathbb {R} }} and so the Riesz representation theorem may be applied to φ R := re φ {\displaystyle \varphi _{\mathbb {R} }:=\operatorname {re} \varphi } and the associated real Hilbert space ( H R , , R ) {\displaystyle \left(H_{\mathbb {R} },\langle \cdot ,\cdot \rangle _{\mathbb {R} }\right)} to produce its Riesz representation, which will be denoted by f φ R . {\displaystyle f_{\varphi _{\mathbb {R} }}.} That is, f φ R {\displaystyle f_{\varphi _{\mathbb {R} }}} is the unique vector in H R {\displaystyle H_{\mathbb {R} }} that satisfies φ R ( x ) = f φ R x R {\displaystyle \varphi _{\mathbb {R} }(x)=\left\langle f_{\varphi _{\mathbb {R} }}\mid x\right\rangle _{\mathbb {R} }} for all x H . {\displaystyle x\in H.} The conclusion is f φ R = f φ . {\displaystyle f_{\varphi _{\mathbb {R} }}=f_{\varphi }.} This follows from the main theorem because ker φ R = φ 1 ( i R ) {\displaystyle \ker \varphi _{\mathbb {R} }=\varphi ^{-1}(i\mathbb {R} )} and if x H {\displaystyle x\in H} then

f φ x R = re f φ x = re φ ( x ) = φ R ( x ) {\displaystyle \left\langle f_{\varphi }\mid x\right\rangle _{\mathbb {R} }=\operatorname {re} \left\langle f_{\varphi }\mid x\right\rangle =\operatorname {re} \varphi (x)=\varphi _{\mathbb {R} }(x)}
and consequently, if m ker φ R {\displaystyle m\in \ker \varphi _{\mathbb {R} }} then f φ m R = 0 , {\displaystyle \left\langle f_{\varphi }\mid m\right\rangle _{\mathbb {R} }=0,} which shows that f φ ( ker φ R ) R . {\displaystyle f_{\varphi }\in (\ker \varphi _{\mathbb {R} })^{\perp _{\mathbb {R} }}.} Moreover, φ ( f φ ) = φ 2 {\displaystyle \varphi (f_{\varphi })=\|\varphi \|^{2}} being a real number implies that φ R ( f φ ) = re φ ( f φ ) = φ 2 . {\displaystyle \varphi _{\mathbb {R} }(f_{\varphi })=\operatorname {re} \varphi (f_{\varphi })=\|\varphi \|^{2}.} In other words, in the theorem and constructions above, if H {\displaystyle H} is replaced with its real Hilbert space counterpart H R {\displaystyle H_{\mathbb {R} }} and if φ {\displaystyle \varphi } is replaced with re φ {\displaystyle \operatorname {re} \varphi } then f φ = f re φ . {\displaystyle f_{\varphi }=f_{\operatorname {re} \varphi }.} This means that the vector f φ {\displaystyle f_{\varphi }} obtained by using ( H R , , R ) {\displaystyle \left(H_{\mathbb {R} },\langle \cdot ,\cdot \rangle _{\mathbb {R} }\right)} and the real linear functional re φ {\displaystyle \operatorname {re} \varphi } is equal to the vector obtained by using the original complex Hilbert space ( H , , ) {\displaystyle \left(H,\left\langle \cdot ,\cdot \right\rangle \right)} and the original complex linear functional φ {\displaystyle \varphi } (with identical norm values as well).

Furthermore, if φ 0 {\displaystyle \varphi \neq 0} then f φ {\displaystyle f_{\varphi }} is perpendicular to ker φ R {\displaystyle \ker \varphi _{\mathbb {R} }} with respect to , R , {\displaystyle \langle \cdot ,\cdot \rangle _{\mathbb {R} },} where the kernel of φ {\displaystyle \varphi } is a proper subspace of the kernel of its real part φ R . {\displaystyle \varphi _{\mathbb {R} }.} Indeed, assume that φ 0. {\displaystyle \varphi \neq 0.} Then f φ ker φ R {\displaystyle f_{\varphi }\not \in \ker \varphi _{\mathbb {R} }} because φ R ( f φ ) = φ ( f φ ) = φ 2 0 {\displaystyle \varphi _{\mathbb {R} }\left(f_{\varphi }\right)=\varphi \left(f_{\varphi }\right)=\|\varphi \|^{2}\neq 0} and ker φ {\displaystyle \ker \varphi } is a proper subset of ker φ R . {\displaystyle \ker \varphi _{\mathbb {R} }.} The vector subspace ker φ {\displaystyle \ker \varphi } has real codimension 1 {\displaystyle 1} in ker φ R , {\displaystyle \ker \varphi _{\mathbb {R} },} while ker φ R {\displaystyle \ker \varphi _{\mathbb {R} }} has real codimension 1 {\displaystyle 1} in H R , {\displaystyle H_{\mathbb {R} },} and f φ , ker φ R R = 0. {\displaystyle \left\langle f_{\varphi },\ker \varphi _{\mathbb {R} }\right\rangle _{\mathbb {R} }=0.} That is, f φ {\displaystyle f_{\varphi }} is perpendicular to ker φ R {\displaystyle \ker \varphi _{\mathbb {R} }} with respect to , R . {\displaystyle \langle \cdot ,\cdot \rangle _{\mathbb {R} }.}
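
Viewing C^n as R^{2n} via v ↦ (re v, im v), the equality f_{re φ} = f_φ can be checked numerically (a sketch only, assuming Python with NumPy; the names c, phi, realify and f_re_phi are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 3
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi = lambda x: c @ x
f_phi = np.conj(c)                          # Riesz representation of phi in the complex space

# View C^n as the real Hilbert space R^{2n}: v <-> (re v, im v), so that
# <x, y>_R = re <x, y> becomes the ordinary real dot product.
realify = lambda v: np.concatenate([v.real, v.imag])

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.isclose(np.vdot(y, x).real, realify(x) @ realify(y))

# re(phi) is a real-linear functional on R^{2n}; its Riesz representation there
# is realify(f_phi), i.e. f_{re phi} = f_phi under the identification.
f_re_phi = np.concatenate([c.real, -c.imag])    # coefficient vector of x -> re(phi(x))
assert np.isclose(phi(x).real, f_re_phi @ realify(x))
assert np.allclose(f_re_phi, realify(f_phi))
```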

Canonical injections into the dual and anti-dual

Induced linear map into anti-dual

The map defined by placing y {\displaystyle y} into the linear coordinate of the inner product and letting the variable h H {\displaystyle h\in H} vary over the antilinear coordinate results in an antilinear functional:

y = y , : H F  defined by  h h y = y , h . {\displaystyle \langle \,\cdot \mid y\,\rangle =\langle \,y,\cdot \,\rangle :H\to \mathbb {F} \quad {\text{ defined by }}\quad h\mapsto \langle \,h\mid y\,\rangle =\langle \,y,h\,\rangle .}

This map is an element of H ¯ , {\displaystyle {\overline {H}}^{*},} which is the continuous anti-dual space of H . {\displaystyle H.} The canonical map from H {\displaystyle H} into its anti-dual H ¯ {\displaystyle {\overline {H}}^{*}} [1] is the linear operator

In H H ¯ : H H ¯ y y = y , {\displaystyle {\begin{alignedat}{4}\operatorname {In} _{H}^{{\overline {H}}^{*}}:\;&&H&&\;\to \;&{\overline {H}}^{*}\\[0.3ex]&&y&&\;\mapsto \;&\langle \,\cdot \mid y\,\rangle =\langle \,y,\cdot \,\rangle \\[0.3ex]\end{alignedat}}}
which is also an injective isometry.[1] The Fundamental theorem of Hilbert spaces, which is related to the Riesz representation theorem, states that this map is surjective (and thus bijective). Consequently, every antilinear functional on H {\displaystyle H} can be written (uniquely) in this form.[1]

If Cong : H H ¯ {\displaystyle \operatorname {Cong} :H^{*}\to {\overline {H}}^{*}} is the canonical antilinear bijective isometry f f ¯ {\displaystyle f\mapsto {\overline {f}}} that was defined above, then the following equality holds:

Cong     In H H   =   In H H ¯ . {\displaystyle \operatorname {Cong} ~\circ ~\operatorname {In} _{H}^{H^{*}}~=~\operatorname {In} _{H}^{{\overline {H}}^{*}}.}

Extending the bra–ket notation to bras and kets

Let ( H , , H ) {\displaystyle \left(H,\langle \cdot ,\cdot \rangle _{H}\right)} be a Hilbert space and as before, let y | x H := x , y H . {\displaystyle \langle y\,|\,x\rangle _{H}:=\langle x,y\rangle _{H}.} Let

Φ : H H g g H = , g H {\displaystyle {\begin{alignedat}{4}\Phi :\;&&H&&\;\to \;&H^{*}\\[0.3ex]&&g&&\;\mapsto \;&\left\langle \,g\mid \cdot \,\right\rangle _{H}=\left\langle \,\cdot ,g\,\right\rangle _{H}\\\end{alignedat}}}
which is a bijective antilinear isometry that satisfies
( Φ h ) g = h g H = g , h H  for all  g , h H . {\displaystyle (\Phi h)g=\langle h\mid g\rangle _{H}=\langle g,h\rangle _{H}\quad {\text{ for all }}g,h\in H.}

Bras

Given a vector h H , {\displaystyle h\in H,} let h | {\displaystyle \langle h\,|} denote the continuous linear functional Φ h {\displaystyle \Phi h} ; that is,

h |   :=   Φ h {\displaystyle \langle h\,|~:=~\Phi h}
so that this functional h | {\displaystyle \langle h\,|} is defined by g h g H . {\displaystyle g\mapsto \left\langle \,h\mid g\,\right\rangle _{H}.} This map was denoted by h {\displaystyle \left\langle h\mid \cdot \,\right\rangle } earlier in this article.

The assignment h h | {\displaystyle h\mapsto \langle h|} is just the isometric antilinear isomorphism Φ   :   H H , {\displaystyle \Phi ~:~H\to H^{*},} which is why   c g + h |   =   c ¯ g   +   h |   {\displaystyle ~\langle cg+h\,|~=~{\overline {c}}\langle g\mid ~+~\langle h\,|~} holds for all g , h H {\displaystyle g,h\in H} and all scalars c . {\displaystyle c.} The result of plugging some given g H {\displaystyle g\in H} into the functional h | {\displaystyle \langle h\,|} is the scalar h | g H = g , h H , {\displaystyle \langle h\,|\,g\rangle _{H}=\langle g,h\rangle _{H},} which may be denoted by h g . {\displaystyle \langle h\mid g\rangle .} [note 6]

Bra of a linear functional

Given a continuous linear functional ψ H , {\displaystyle \psi \in H^{*},} let ψ {\displaystyle \langle \psi \mid } denote the vector Φ 1 ψ H {\displaystyle \Phi ^{-1}\psi \in H} ; that is,

ψ   :=   Φ 1 ψ . {\displaystyle \langle \psi \mid ~:=~\Phi ^{-1}\psi .}

The assignment ψ ψ {\displaystyle \psi \mapsto \langle \psi \mid } is just the isometric antilinear isomorphism Φ 1   :   H H , {\displaystyle \Phi ^{-1}~:~H^{*}\to H,} which is why   c ψ + ϕ   =   c ¯ ψ   +   ϕ   {\displaystyle ~\langle c\psi +\phi \mid ~=~{\overline {c}}\langle \psi \mid ~+~\langle \phi \mid ~} holds for all ϕ , ψ H {\displaystyle \phi ,\psi \in H^{*}} and all scalars c . {\displaystyle c.}

The defining condition of the vector ψ | H {\displaystyle \langle \psi |\in H} is the technically correct but unsightly equality

ψ g H   =   ψ g  for all  g H , {\displaystyle \left\langle \,\langle \psi \mid \,\mid g\right\rangle _{H}~=~\psi g\quad {\text{ for all }}g\in H,}
which is why the notation ψ g {\displaystyle \left\langle \psi \mid g\right\rangle } is used in place of ψ g H = g , ψ H . {\displaystyle \left\langle \,\langle \psi \mid \,\mid g\right\rangle _{H}=\left\langle g,\,\langle \psi \mid \right\rangle _{H}.} With this notation, the defining condition becomes
ψ g   =   ψ g  for all  g H . {\displaystyle \left\langle \psi \mid g\right\rangle ~=~\psi g\quad {\text{ for all }}g\in H.}

Kets

For any given vector g H , {\displaystyle g\in H,} the notation | g {\displaystyle |\,g\rangle } is used to denote g {\displaystyle g} ; that is,

g := g . {\displaystyle \mid g\rangle :=g.}

The assignment g | g {\displaystyle g\mapsto |\,g\rangle } is just the identity map Id H : H H , {\displaystyle \operatorname {Id} _{H}:H\to H,} which is why   c g + h   =   c g   +   h   {\displaystyle ~\mid cg+h\rangle ~=~c\mid g\rangle ~+~\mid h\rangle ~} holds for all g , h H {\displaystyle g,h\in H} and all scalars c . {\displaystyle c.}

The notation h g {\displaystyle \langle h\mid g\rangle } and ψ g {\displaystyle \langle \psi \mid g\rangle } is used in place of h g H   =   g , h H {\displaystyle \left\langle h\mid \,\mid g\rangle \,\right\rangle _{H}~=~\left\langle \mid g\rangle ,h\right\rangle _{H}} and ψ g H   =   g , ψ H , {\displaystyle \left\langle \psi \mid \,\mid g\rangle \,\right\rangle _{H}~=~\left\langle g,\,\langle \psi \mid \right\rangle _{H},} respectively. As expected,   ψ g = ψ g   {\displaystyle ~\langle \psi \mid g\rangle =\psi g~} and   h g   {\displaystyle ~\langle h\mid g\rangle ~} really is just the scalar   h g H   =   g , h H . {\displaystyle ~\langle h\mid g\rangle _{H}~=~\langle g,h\rangle _{H}.}

Adjoints and transposes

Let A : H Z {\displaystyle A:H\to Z} be a continuous linear operator between Hilbert spaces ( H , , H ) {\displaystyle \left(H,\langle \cdot ,\cdot \rangle _{H}\right)} and ( Z , , Z ) . {\displaystyle \left(Z,\langle \cdot ,\cdot \rangle _{Z}\right).} As before, let y x H := x , y H {\displaystyle \langle y\mid x\rangle _{H}:=\langle x,y\rangle _{H}} and y x Z := x , y Z . {\displaystyle \langle y\mid x\rangle _{Z}:=\langle x,y\rangle _{Z}.}

Denote by

Φ H : H H g g H  and  Φ Z : Z Z y y Z {\displaystyle {\begin{alignedat}{4}\Phi _{H}:\;&&H&&\;\to \;&H^{*}\\[0.3ex]&&g&&\;\mapsto \;&\langle \,g\mid \cdot \,\rangle _{H}\\\end{alignedat}}\quad {\text{ and }}\quad {\begin{alignedat}{4}\Phi _{Z}:\;&&Z&&\;\to \;&Z^{*}\\[0.3ex]&&y&&\;\mapsto \;&\langle \,y\mid \cdot \,\rangle _{Z}\\\end{alignedat}}}
the usual bijective antilinear isometries that satisfy:
( Φ H g ) h = g h H  for all  g , h H  and  ( Φ Z y ) z = y z Z  for all  y , z Z . {\displaystyle \left(\Phi _{H}g\right)h=\langle g\mid h\rangle _{H}\quad {\text{ for all }}g,h\in H\qquad {\text{ and }}\qquad \left(\Phi _{Z}y\right)z=\langle y\mid z\rangle _{Z}\quad {\text{ for all }}y,z\in Z.}

Definition of the adjoint

For every z Z , {\displaystyle z\in Z,} the scalar-valued map z A ( ) Z {\displaystyle \langle z\mid A(\cdot )\rangle _{Z}} [note 7] on H {\displaystyle H} defined by

h z A h Z = A h , z Z {\displaystyle h\mapsto \langle z\mid Ah\rangle _{Z}=\langle Ah,z\rangle _{Z}}

is a continuous linear functional on H {\displaystyle H} and so by the Riesz representation theorem, there exists a unique vector in H , {\displaystyle H,} denoted by A z , {\displaystyle A^{*}z,} such that z A ( ) Z = A z H , {\displaystyle \langle z\mid A(\cdot )\rangle _{Z}=\left\langle A^{*}z\mid \cdot \,\right\rangle _{H},} or equivalently, such that

z A h Z = A z h H  for all  h H . {\displaystyle \langle z\mid Ah\rangle _{Z}=\left\langle A^{*}z\mid h\right\rangle _{H}\quad {\text{ for all }}h\in H.}

The assignment z A z {\displaystyle z\mapsto A^{*}z} thus induces a function A : Z H {\displaystyle A^{*}:Z\to H} called the adjoint of A : H Z {\displaystyle A:H\to Z} whose defining condition is

z A h Z = A z h H  for all  h H  and all  z Z . {\displaystyle \langle z\mid Ah\rangle _{Z}=\left\langle A^{*}z\mid h\right\rangle _{H}\quad {\text{ for all }}h\in H{\text{ and all }}z\in Z.}
The adjoint A : Z H {\displaystyle A^{*}:Z\to H} is necessarily a continuous (equivalently, a bounded) linear operator.

If H {\displaystyle H} is finite dimensional with the standard inner product and if M {\displaystyle M} is the transformation matrix of A {\displaystyle A} with respect to the standard orthonormal basis then M {\displaystyle M} 's conjugate transpose M T ¯ {\displaystyle {\overline {M^{\operatorname {T} }}}} is the transformation matrix of the adjoint A . {\displaystyle A^{*}.}
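A minimal numerical check of this matrix description, under the assumption H = C³ and Z = C² with the standard inner products and randomly chosen data:

```python
import numpy as np

rng = np.random.default_rng(0)
# M is the matrix of a bounded operator A : C^3 -> C^2 in the standard bases
# (the dimensions and random entries are chosen only for illustration).
M = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
M_adj = M.conj().T                      # conjugate transpose = matrix of A*

h = rng.standard_normal(3) + 1j * rng.standard_normal(3)    # h in H = C^3
z = rng.standard_normal(2) + 1j * rng.standard_normal(2)    # z in Z = C^2

# Defining condition of the adjoint:  <z | A h>_Z  =  <A* z | h>_H.
assert np.isclose(np.vdot(z, M @ h), np.vdot(M_adj @ z, h))
```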

Adjoints are transposes

It is also possible to define the transpose or algebraic adjoint of A : H Z , {\displaystyle A:H\to Z,} which is the map t A : Z H {\displaystyle {}^{t}A:Z^{*}\to H^{*}} defined by sending a continuous linear functional ψ Z {\displaystyle \psi \in Z^{*}} to

t A ( ψ ) := ψ A , {\displaystyle {}^{t}A(\psi ):=\psi \circ A,}
where the composition ψ A {\displaystyle \psi \circ A} is always a continuous linear functional on H {\displaystyle H} and it satisfies A = t A {\displaystyle \|A\|=\left\|{}^{t}A\right\|} (this is true more generally, when H {\displaystyle H} and Z {\displaystyle Z} are merely normed spaces).[5] So for example, if z Z {\displaystyle z\in Z} then t A {\displaystyle {}^{t}A} sends the continuous linear functional z Z Z {\displaystyle \langle z\mid \cdot \rangle _{Z}\in Z^{*}} (defined on Z {\displaystyle Z} by g z g Z {\displaystyle g\mapsto \langle z\mid g\rangle _{Z}} ) to the continuous linear functional z A ( ) Z H {\displaystyle \langle z\mid A(\cdot )\rangle _{Z}\in H^{*}} (defined on H {\displaystyle H} by h z A ( h ) Z {\displaystyle h\mapsto \langle z\mid A(h)\rangle _{Z}} );[note 7] using bra-ket notation, this can be written as t A z   =   z A {\displaystyle {}^{t}A\langle z\mid ~=~\langle z\mid A} where the juxtaposition of z {\displaystyle \langle z\mid } with A {\displaystyle A} on the right hand side denotes function composition: H A Z z C . {\displaystyle H\xrightarrow {A} Z\xrightarrow {\langle z\mid } \mathbb {C} .}
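In coordinates this composition is easy to see: if ψ is given by a coefficient row c and A by a matrix M, then ψ ∘ A is given by the row c · M. The sketch below is an illustration under the assumption of standard inner products on C² and C³ (the matrices and vectors are arbitrary choices); it also checks the bra-ket identity ᵗA⟨z| = ⟨z|A.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))  # A : C^3 -> C^2
A = lambda h: M @ h

# A continuous linear functional psi on Z = C^2, given by a coefficient row c.
c = np.array([1 - 2j, 3 + 1j])
psi = lambda y: np.dot(c, y)

# The transpose tA sends psi to the composition psi o A, a functional on H = C^3;
# in coordinates that composition is given by the row c @ M.
tA_psi = lambda h: psi(A(h))
h = np.array([2j, 1.0, -1 + 1j])
assert np.isclose(tA_psi(h), np.dot(c @ M, h))

# In particular tA sends the bra <z| (row conj(z)) to <z|A (row conj(z) @ M).
z = np.array([1 + 1j, -2j])
assert np.isclose(np.vdot(z, A(h)), np.dot(np.conj(z) @ M, h))
```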

The adjoint A : Z H {\displaystyle A^{*}:Z\to H} is actually just the transpose t A : Z H {\displaystyle {}^{t}A:Z^{*}\to H^{*}} [2] when the Riesz representation theorem is used to identify Z {\displaystyle Z} with Z {\displaystyle Z^{*}} and H {\displaystyle H} with H . {\displaystyle H^{*}.}

Explicitly, the relationship between the adjoint and transpose is:

t A     Φ Z   =   Φ H     A {\displaystyle {}^{t}A~\circ ~\Phi _{Z}~=~\Phi _{H}~\circ ~A^{*}} (Adjoint-transpose)

which can be rewritten as:

A   =   Φ H 1     t A     Φ Z  and  t A   =   Φ H     A     Φ Z 1 . {\displaystyle A^{*}~=~\Phi _{H}^{-1}~\circ ~{}^{t}A~\circ ~\Phi _{Z}\quad {\text{ and }}\quad {}^{t}A~=~\Phi _{H}~\circ ~A^{*}~\circ ~\Phi _{Z}^{-1}.}
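Assuming finite dimensions with the standard inner products (so that Φ sends a vector g to the functional with coefficient row conj(g), and Φ⁻¹ conjugates a coefficient row back into a vector), the formula A* = Φ_H⁻¹ ∘ ᵗA ∘ Φ_Z can be checked numerically against the conjugate-transpose description of the adjoint; the random matrix below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))  # matrix of A : C^3 -> C^2
z = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# Phi_Z(z) is the functional <z| . >_Z, given in coordinates by the row conj(z);
# tA composes it with A, giving the row conj(z) @ M, a functional on H = C^3;
# Phi_H^{-1} turns that row back into the vector conj(conj(z) @ M) of H.
adjoint_z_via_transpose = np.conj(np.conj(z) @ M)

# This agrees with the usual description of A* by the conjugate transpose of M.
assert np.allclose(adjoint_z_via_transpose, M.conj().T @ z)
```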

Proof

To show that t A     Φ Z   =   Φ H     A , {\displaystyle {}^{t}A~\circ ~\Phi _{Z}~=~\Phi _{H}~\circ ~A^{*},} fix z Z . {\displaystyle z\in Z.} The definition of t A {\displaystyle {}^{t}A} implies

( t A Φ Z ) z = t A ( Φ Z z ) = ( Φ Z z ) A {\displaystyle \left({}^{t}A\circ \Phi _{Z}\right)z={}^{t}A\left(\Phi _{Z}z\right)=\left(\Phi _{Z}z\right)\circ A}
so it remains to show that ( Φ Z z ) A = Φ H ( A z ) . {\displaystyle \left(\Phi _{Z}z\right)\circ A=\Phi _{H}\left(A^{*}z\right).} If h H {\displaystyle h\in H} then
( ( Φ Z z ) A ) h = ( Φ Z z ) ( A h ) = z A h Z = A z h H = ( Φ H ( A z ) ) h , {\displaystyle \left(\left(\Phi _{Z}z\right)\circ A\right)h=\left(\Phi _{Z}z\right)(Ah)=\langle z\mid Ah\rangle _{Z}=\langle A^{*}z\mid h\rangle _{H}=\left(\Phi _{H}(A^{*}z)\right)h,}
as desired. {\displaystyle \blacksquare }

Alternatively, the value of the left and right hand sides of (Adjoint-transpose) at any given z Z {\displaystyle z\in Z} can be rewritten in terms of the inner products as:

( t A     Φ Z ) z = z A ( ) Z  and  ( Φ H     A ) z = A z H {\displaystyle \left({}^{t}A~\circ ~\Phi _{Z}\right)z=\langle z\mid A(\cdot )\rangle _{Z}\quad {\text{ and }}\quad \left(\Phi _{H}~\circ ~A^{*}\right)z=\langle A^{*}z\mid \cdot \,\rangle _{H}}
so that t A     Φ Z   =   Φ H     A {\displaystyle {}^{t}A~\circ ~\Phi _{Z}~=~\Phi _{H}~\circ ~A^{*}} holds if and only if z A ( ) Z = A z H {\displaystyle \langle z\mid A(\cdot )\rangle _{Z}=\langle A^{*}z\mid \cdot \,\rangle _{H}} holds; but the equality on the right holds by definition of A z . {\displaystyle A^{*}z.} The defining condition of A z {\displaystyle A^{*}z} can also be written
z A   =   A z {\displaystyle \langle z\mid A~=~\langle A^{*}z\mid }
if bra-ket notation is used.

Descriptions of self-adjoint, normal, and unitary operators

Assume Z = H {\displaystyle Z=H} and let Φ := Φ H = Φ Z . {\displaystyle \Phi :=\Phi _{H}=\Phi _{Z}.} Let A : H H {\displaystyle A:H\to H} be a continuous (that is, bounded) linear operator.

Whether or not A : H H {\displaystyle A:H\to H} is self-adjoint, normal, or unitary depends entirely on whether or not A {\displaystyle A} satisfies certain defining conditions related to its adjoint, which was shown by (Adjoint-transpose) to essentially be just the transpose t A : H H . {\displaystyle {}^{t}A:H^{*}\to H^{*}.} Because the transpose of A {\displaystyle A} is a map between continuous linear functionals, these defining conditions can consequently be re-expressed entirely in terms of linear functionals, as the remainder of this subsection will now describe in detail. The linear functionals that are involved are the simplest possible continuous linear functionals on H {\displaystyle H} that can be defined entirely in terms of A , {\displaystyle A,} the inner product {\displaystyle \langle \,\cdot \mid \cdot \,\rangle } on H , {\displaystyle H,} and some given vector h H . {\displaystyle h\in H.} Specifically, these are A h {\displaystyle \left\langle Ah\mid \cdot \,\right\rangle } and h A ( ) {\displaystyle \langle h\mid A(\cdot )\rangle } [note 7] where

A h = Φ ( A h ) = ( Φ A ) h  and  h A ( ) = ( t A Φ ) h . {\displaystyle \left\langle Ah\mid \cdot \,\right\rangle =\Phi (Ah)=(\Phi \circ A)h\quad {\text{ and }}\quad \langle h\mid A(\cdot )\rangle =\left({}^{t}A\circ \Phi \right)h.}

Self-adjoint operators

A continuous linear operator A : H H {\displaystyle A:H\to H} is called self-adjoint if it is equal to its own adjoint; that is, if A = A . {\displaystyle A=A^{*}.} Using (Adjoint-transpose), this happens if and only if:

Φ A = t A Φ {\displaystyle \Phi \circ A={}^{t}A\circ \Phi }
where this equality can be rewritten in the following two equivalent forms:
A = Φ 1 t A Φ  or  t A = Φ A Φ 1 . {\displaystyle A=\Phi ^{-1}\circ {}^{t}A\circ \Phi \quad {\text{ or }}\quad {}^{t}A=\Phi \circ A\circ \Phi ^{-1}.}

Unraveling notation and definitions produces the following characterization of self-adjoint operators in terms of the aforementioned continuous linear functionals: A {\displaystyle A} is self-adjoint if and only if for all z H , {\displaystyle z\in H,} the linear functional z A ( ) {\displaystyle \langle z\mid A(\cdot )\rangle } [note 7] is equal to the linear functional A z {\displaystyle \langle Az\mid \cdot \,\rangle } ; that is, if and only if

z A ( ) = A z  for all  z H {\displaystyle \langle z\mid A(\cdot )\rangle =\langle Az\mid \cdot \,\rangle \quad {\text{ for all }}z\in H} (Self-adjointness functionals)

where if bra-ket notation is used, this is

z A   =   A z  for all  z H . {\displaystyle \langle z\mid A~=~\langle Az\mid \quad {\text{ for all }}z\in H.}
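A quick numerical illustration of (Self-adjointness functionals), assuming H = C³ with the standard inner product and a Hermitian matrix built for the purpose:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = B + B.conj().T                      # a Hermitian matrix, so A = A*
assert np.allclose(A, A.conj().T)

# (Self-adjointness functionals): for every z the functional h -> <z | A h>
# coincides with the functional h -> <A z | h>.
z = rng.standard_normal(3) + 1j * rng.standard_normal(3)
h = rng.standard_normal(3) + 1j * rng.standard_normal(3)
assert np.isclose(np.vdot(z, A @ h), np.vdot(A @ z, h))
```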

Normal operators

A continuous linear operator A : H H {\displaystyle A:H\to H} is called normal if A A = A A , {\displaystyle AA^{*}=A^{*}A,} which happens if and only if for all z , h H , {\displaystyle z,h\in H,}

A A z h = A A z h . {\displaystyle \left\langle AA^{*}z\mid h\right\rangle =\left\langle A^{*}Az\mid h\right\rangle .}

Using (Adjoint-transpose) and unraveling notation and definitions produces[proof 2] the following characterization of normal operators in terms of inner products of continuous linear functionals: A {\displaystyle A} is a normal operator if and only if

A h A z H   =   h | A ( ) z A ( ) H  for all  z , h H {\displaystyle \left\langle \,\langle Ah\mid \cdot \,\rangle \mid \langle Az\mid \cdot \,\rangle \,\right\rangle _{H^{*}}~=~\left\langle \,\langle h|A(\cdot )\rangle \mid \langle z\mid A(\cdot )\rangle \,\right\rangle _{H^{*}}\quad {\text{ for all }}z,h\in H} (Normality functionals)

where the left hand side is also equal to A h A z ¯ H = A z A h H . {\displaystyle {\overline {\langle Ah\mid Az\rangle }}_{H}=\langle Az\mid Ah\rangle _{H}.} The left hand side of this characterization involves only linear functionals of the form A h {\displaystyle \langle Ah\mid \cdot \,\rangle } while the right hand side involves only linear functionals of the form h A ( ) {\displaystyle \langle h\mid A(\cdot )\rangle } (defined as above[note 7]). So in plain English, characterization (Normality functionals) says that an operator is normal when the inner product of any two linear functionals of the first form is equal to the inner product of the corresponding functionals of the second form (using the same vectors z , h H {\displaystyle z,h\in H} for both forms). In other words, if it happens to be the case (and when A {\displaystyle A} is injective or self-adjoint, it is) that the assignment of linear functionals A h     h | A ( ) {\displaystyle \langle Ah\mid \cdot \,\rangle ~\mapsto ~\langle h|A(\cdot )\rangle } is well-defined (or alternatively, if h | A ( )     A h {\displaystyle \langle h|A(\cdot )\rangle ~\mapsto ~\langle Ah\mid \cdot \,\rangle } is well-defined) where h {\displaystyle h} ranges over H , {\displaystyle H,} then A {\displaystyle A} is a normal operator if and only if this assignment preserves the inner product on H . {\displaystyle H^{*}.}

The fact that every self-adjoint bounded linear operator is normal follows readily by direct substitution of A = A {\displaystyle A^{*}=A} into either side of A A = A A . {\displaystyle A^{*}A=AA^{*}.} This same fact also follows immediately from the direct substitution of the equalities (Self-adjointness functionals) into either side of (Normality functionals).

Alternatively, for a complex Hilbert space, the continuous linear operator A {\displaystyle A} is a normal operator if and only if A z = A z {\displaystyle \|Az\|=\left\|A^{*}z\right\|} for every z H , {\displaystyle z\in H,} [2] which happens if and only if

A z H = z | A ( ) H  for every  z H . {\displaystyle \|Az\|_{H}=\|\langle z\,|\,A(\cdot )\rangle \|_{H^{*}}\quad {\text{ for every }}z\in H.}
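The following sketch illustrates this norm characterization on C² with a concrete normal but non-self-adjoint operator; the choice of matrix is an assumption of the example, not dictated by the theory.

```python
import numpy as np

# A normal but not self-adjoint operator on C^2: the 90-degree rotation matrix
# is skew-adjoint (A* = -A), so it commutes with its adjoint without equalling it.
A = np.array([[0.0, -1.0], [1.0, 0.0]], dtype=complex)
assert np.allclose(A @ A.conj().T, A.conj().T @ A)      # A A* = A* A (normality)
assert not np.allclose(A, A.conj().T)                   # but A is not self-adjoint

rng = np.random.default_rng(4)
z = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# Equivalent characterization on a complex Hilbert space: ||A z|| = ||A* z||,
# i.e. the norm of the vector A z equals the norm of the functional <z | A(.)>.
assert np.isclose(np.linalg.norm(A @ z), np.linalg.norm(A.conj().T @ z))
```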

Unitary operators

An invertible bounded linear operator A : H H {\displaystyle A:H\to H} is said to be unitary if its inverse is its adjoint: A 1 = A . {\displaystyle A^{-1}=A^{*}.} By using (Adjoint-transpose), this is seen to be equivalent to Φ A 1 = t A Φ . {\displaystyle \Phi \circ A^{-1}={}^{t}A\circ \Phi .} Unraveling notation and definitions, it follows that A {\displaystyle A} is unitary if and only if

A 1 z = z A ( )  for all  z H . {\displaystyle \langle A^{-1}z\mid \cdot \,\rangle =\langle z\mid A(\cdot )\rangle \quad {\text{ for all }}z\in H.}

The fact that a bounded invertible linear operator A : H H {\displaystyle A:H\to H} is unitary if and only if A A = Id H {\displaystyle A^{*}A=\operatorname {Id} _{H}} (or equivalently, t A Φ A = Φ {\displaystyle {}^{t}A\circ \Phi \circ A=\Phi } ) produces another (well-known) characterization: an invertible bounded linear map A {\displaystyle A} is unitary if and only if

A z A ( ) = z  for all  z H . {\displaystyle \langle Az\mid A(\cdot )\,\rangle =\langle z\mid \cdot \,\rangle \quad {\text{ for all }}z\in H.}

Because A : H H {\displaystyle A:H\to H} is invertible (and so in particular a bijection), this is also true of the transpose t A : H H . {\displaystyle {}^{t}A:H^{*}\to H^{*}.} This fact also allows the vector z H {\displaystyle z\in H} in the above characterizations to be replaced with A z {\displaystyle Az} or A 1 z , {\displaystyle A^{-1}z,} thereby producing many more equalities. Similarly, {\displaystyle \,\cdot \,} can be replaced with A ( ) {\displaystyle A(\cdot )} or A 1 ( ) . {\displaystyle A^{-1}(\cdot ).}
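Both characterizations of unitarity above can be checked numerically on a concrete example; the sketch below uses an assumed 2×2 rotation matrix on C² with the standard inner product as its unitary operator.

```python
import numpy as np

# A concrete unitary operator on C^2: a rotation matrix (illustrative choice).
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
assert np.allclose(A.conj().T @ A, np.eye(2))        # A* A = Id, so A^{-1} = A*

rng = np.random.default_rng(5)
z = rng.standard_normal(2) + 1j * rng.standard_normal(2)
h = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# First characterization:  <A^{-1} z | h>  =  <z | A h>  for all z, h.
assert np.isclose(np.vdot(np.linalg.inv(A) @ z, h), np.vdot(z, A @ h))

# Second characterization: A preserves the inner product, <A z | A h> = <z | h>.
assert np.isclose(np.vdot(A @ z, A @ h), np.vdot(z, h))
```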

Citations

  1. ^ a b c d e f g h i j k l Trèves 2006, pp. 112–123.
  2. ^ a b c Rudin 1991, pp. 306–312.
  3. ^ Roman 2008, p. 351 Theorem 13.32
  4. ^ Rudin 1991, pp. 307−309.
  5. ^ Rudin 1991, pp. 92–115.

Notes

  1. ^ If F = R {\displaystyle \mathbb {F} =\mathbb {R} } then the inner product will be symmetric so it does not matter which coordinate of the inner product the element y {\displaystyle y} is placed into because the same map will result. But if F = C {\displaystyle \mathbb {F} =\mathbb {C} } then except for the constant 0 {\displaystyle 0} map, antilinear functionals on H {\displaystyle H} are completely distinct from linear functionals on H , {\displaystyle H,} which makes the coordinate that y {\displaystyle y} is placed into very important. For a non-zero y H {\displaystyle y\in H} to induce a linear functional (rather than an antilinear functional), y {\displaystyle y} must be placed into the antilinear coordinate of the inner product. If it is incorrectly placed into the linear coordinate instead of the antilinear coordinate then the resulting map will be the antilinear map h y , h = h y , {\displaystyle h\mapsto \langle y,h\rangle =\langle h\mid y\rangle ,} which is not a linear functional on H {\displaystyle H} and so it will not be an element of the continuous dual space H . {\displaystyle H^{*}.}
  2. ^ This means that for all vectors y H : {\displaystyle y\in H:} (1) Φ : H H {\displaystyle \Phi :H\to H^{*}} is injective. (2) The norms of y {\displaystyle y} and Φ ( y ) {\displaystyle \Phi (y)} are the same: Φ ( y ) = y . {\displaystyle \|\Phi (y)\|=\|y\|.} (3) Φ {\displaystyle \Phi } is an additive map, meaning that Φ ( x + y ) = Φ ( x ) + Φ ( y ) {\displaystyle \Phi (x+y)=\Phi (x)+\Phi (y)} for all x , y H . {\displaystyle x,y\in H.} (4) Φ {\displaystyle \Phi } is conjugate homogeneous: Φ ( s y ) = s ¯ Φ ( y ) {\displaystyle \Phi (sy)={\overline {s}}\Phi (y)} for all scalars s . {\displaystyle s.} (5) Φ {\displaystyle \Phi } is real homogeneous: Φ ( r y ) = r Φ ( y ) {\displaystyle \Phi (ry)=r\Phi (y)} for all real numbers r R . {\displaystyle r\in \mathbb {R} .}
  3. ^ a b This footnote explains how to define - using only H {\displaystyle H} 's operations - addition and scalar multiplication of affine hyperplanes so that these operations correspond to addition and scalar multiplication of linear functionals. Let H {\displaystyle H} be any vector space and let H # {\displaystyle H^{\#}} denote its algebraic dual space. Let A := { φ 1 ( 1 ) : φ H # } {\displaystyle {\mathcal {A}}:=\left\{\varphi ^{-1}(1):\varphi \in H^{\#}\right\}} and let ^ {\displaystyle \,{\hat {\cdot }}\,} and + ^ {\displaystyle \,{\hat {+}}\,} denote the (unique) vector space operations on A {\displaystyle {\mathcal {A}}} that make the bijection I : H # A {\displaystyle I:H^{\#}\to {\mathcal {A}}} defined by φ φ 1 ( 1 ) {\displaystyle \varphi \mapsto \varphi ^{-1}(1)} into a vector space isomorphism. Note that φ 1 ( 1 ) = {\displaystyle \varphi ^{-1}(1)=\varnothing } if and only if φ = 0 , {\displaystyle \varphi =0,} so {\displaystyle \varnothing } is the additive identity of ( A , + ^ , ^ ) {\displaystyle \left({\mathcal {A}},{\hat {+}},{\hat {\cdot }}\right)} (because this is true of I 1 ( ) = 0 {\displaystyle I^{-1}(\varnothing )=0} in H # {\displaystyle H^{\#}} and I {\displaystyle I} is a vector space isomorphism). For every A A , {\displaystyle A\in {\mathcal {A}},} let ker A = H {\displaystyle \ker A=H} if A = {\displaystyle A=\varnothing } and let ker A = A A {\displaystyle \ker A=A-A} otherwise; if A = I ( φ ) = φ 1 ( 1 ) {\displaystyle A=I(\varphi )=\varphi ^{-1}(1)} then ker A = ker φ {\displaystyle \ker A=\ker \varphi } so this definition is consistent with the usual definition of the kernel of a linear functional. Say that A , B A {\displaystyle A,B\in {\mathcal {A}}} are parallel if ker A = ker B , {\displaystyle \ker A=\ker B,} where if A {\displaystyle A} and B {\displaystyle B} are not empty then this happens if and only if the linear functionals I 1 ( A ) {\displaystyle I^{-1}(A)} and I 1 ( B ) {\displaystyle I^{-1}(B)} are non-zero scalar multiples of each other. The vector space operations on the vector space of affine hyperplanes A {\displaystyle {\mathcal {A}}} are now described in a way that involves only the vector space operations on H {\displaystyle H} ; this results in an interpretation of the vector space operations on the algebraic dual space H # {\displaystyle H^{\#}} that is entirely in terms of affine hyperplanes. Fix hyperplanes A , B A . {\displaystyle A,B\in {\mathcal {A}}.} If s {\displaystyle s} is a scalar then s ^ A = { h H : s h A } . {\displaystyle s{\hat {\cdot }}A=\left\{h\in H:sh\in A\right\}.} Describing the operation A + ^ B {\displaystyle A{\hat {+}}B} in terms of only the sets A = φ 1 ( 1 ) {\displaystyle A=\varphi ^{-1}(1)} and B = ψ 1 ( 1 ) {\displaystyle B=\psi ^{-1}(1)} is more complicated because by definition, A + ^ B = I ( φ ) + ^ I ( ψ ) := I ( φ + ψ ) = ( φ + ψ ) 1 ( 1 ) . {\displaystyle A{\hat {+}}B=I(\varphi ){\hat {+}}I(\psi ):=I(\varphi +\psi )=(\varphi +\psi )^{-1}(1).} If A = {\displaystyle A=\varnothing } (respectively, if B = {\displaystyle B=\varnothing } ) then A + ^ B {\displaystyle A{\hat {+}}B} is equal to B {\displaystyle B} (resp. is equal to A {\displaystyle A} ) so assume A {\displaystyle A\neq \varnothing } and B . 
{\displaystyle B\neq \varnothing .} The hyperplanes A {\displaystyle A} and B {\displaystyle B} are parallel if and only if there exists some scalar r {\displaystyle r} (necessarily non-0) such that A = r B , {\displaystyle A=rB,} in which case A + ^ B = { h H : ( 1 + r ) h B } ; {\displaystyle A{\hat {+}}B=\left\{h\in H:(1+r)h\in B\right\};} this can optionally be subdivided into two cases: if r = 1 {\displaystyle r=-1} (which happens if and only if the linear functionals I 1 ( A ) {\displaystyle I^{-1}(A)} and I 1 ( B ) {\displaystyle I^{-1}(B)} are negatives of each) then A + ^ B = {\displaystyle A{\hat {+}}B=\varnothing } while if r 1 {\displaystyle r\neq -1} then A + ^ B = 1 1 + r B = r 1 + r A . {\displaystyle A{\hat {+}}B={\frac {1}{1+r}}B={\frac {r}{1+r}}A.} Finally, assume now that ker A ker B . {\displaystyle \ker A\neq \ker B.} Then A + ^ B {\displaystyle A{\hat {+}}B} is the unique affine hyperplane containing both A ker B {\displaystyle A\cap \ker B} and B ker A {\displaystyle B\cap \ker A} as subsets; explicitly, ker ( A + ^ B ) = span ( A ker B B ker A ) {\displaystyle \ker \left(A{\hat {+}}B\right)=\operatorname {span} \left(A\cap \ker B-B\cap \ker A\right)} and A + ^ B = A ker B + ker ( A + ^ B ) = B ker A + ker ( A + ^ B ) . {\displaystyle A{\hat {+}}B=A\cap \ker B+\ker \left(A{\hat {+}}B\right)=B\cap \ker A+\ker \left(A{\hat {+}}B\right).} To see why this formula for A + ^ B {\displaystyle A{\hat {+}}B} should hold, consider H := R 3 , {\displaystyle H:=\mathbb {R} ^{3},} A := φ 1 ( 1 ) , {\displaystyle A:=\varphi ^{-1}(1),} and B := ψ 1 ( 1 ) , {\displaystyle B:=\psi ^{-1}(1),} where φ ( x , y , z ) := x {\displaystyle \varphi (x,y,z):=x} and ψ ( x , y , z ) := x + y {\displaystyle \psi (x,y,z):=x+y} (or alternatively, ψ ( x , y , z ) := y {\displaystyle \psi (x,y,z):=y} ). Then by definition, A + ^ B := ( φ + ψ ) 1 ( 1 ) {\displaystyle A{\hat {+}}B:=(\varphi +\psi )^{-1}(1)} and ker ( A + ^ B ) := ( φ + ψ ) 1 ( 0 ) . {\displaystyle \ker \left(A{\hat {+}}B\right):=(\varphi +\psi )^{-1}(0).} Now A ker B   =   φ 1 ( 1 ) ψ 1 ( 0 )     ( φ + ψ ) 1 ( 1 ) {\displaystyle A\cap \ker B~=~\varphi ^{-1}(1)\cap \psi ^{-1}(0)~\subseteq ~(\varphi +\psi )^{-1}(1)} is an affine subspace of codimension 2 {\displaystyle 2} in H {\displaystyle H} (it is equal to a translation of the z {\displaystyle z} -axis { ( 0 , 0 ) } × R {\displaystyle \{(0,0)\}\times \mathbb {R} } ). The same is true of B ker A . {\displaystyle B\cap \ker A.} Plotting an x {\displaystyle x} - y {\displaystyle y} -plane cross section (that is, setting z = {\displaystyle z=} constant) of the sets ker A , ker B , A {\displaystyle \ker A,\ker B,A} and B {\displaystyle B} (each of which will be plotted as a line), the set ( φ + ψ ) 1 ( 1 ) {\displaystyle (\varphi +\psi )^{-1}(1)} will then be plotted as the (unique) line passing through the A ker B {\displaystyle A\cap \ker B} and B ker A {\displaystyle B\cap \ker A} (which will be plotted as two distinct points) while ( φ + ψ ) 1 ( 0 ) = ker ( A + ^ B ) {\displaystyle (\varphi +\psi )^{-1}(0)=\ker \left(A{\hat {+}}B\right)} will be plotted the line through the origin that is parallel to A + ^ B = ( φ + ψ ) 1 ( 1 ) . {\displaystyle A{\hat {+}}B=(\varphi +\psi )^{-1}(1).} The above formulas for ker ( A + ^ B ) := ( φ + ψ ) 1 ( 0 ) {\displaystyle \ker \left(A{\hat {+}}B\right):=(\varphi +\psi )^{-1}(0)} and A + ^ B := ( φ + ψ ) 1 ( 1 ) {\displaystyle A{\hat {+}}B:=(\varphi +\psi )^{-1}(1)} follow naturally from the plot and they also hold in general.
  4. ^ Showing that there is a non-zero vector v {\displaystyle v} in K {\displaystyle K^{\bot }} relies on the continuity of ϕ {\displaystyle \phi } and the Cauchy completeness of H . {\displaystyle H.} This is the only place in the proof in which these properties are used.
  5. ^ Technically, H = K K {\displaystyle H=K\oplus K^{\bot }} means that the addition map K × K H {\displaystyle K\times K^{\bot }\to H} defined by ( k , p ) k + p {\displaystyle (k,p)\mapsto k+p} is a surjective linear isomorphism and homeomorphism. See the article on complemented subspaces for more details.
  6. ^ The usual notation for plugging an element g {\displaystyle g} into a linear map F {\displaystyle F} is F ( g ) {\displaystyle F(g)} and sometimes F g . {\displaystyle Fg.} Replacing F {\displaystyle F} with h ∣:=   Φ h {\displaystyle \langle h\mid :=~\Phi h} produces h ( g ) {\displaystyle \langle h\mid (g)} or h g , {\displaystyle \langle h\mid g,} which is unsightly (despite being consistent with the usual notation used with functions). Consequently, the symbol {\displaystyle \,\rangle \,} is appended to the end, so that the notation h g {\displaystyle \langle h\mid g\rangle } is used instead to denote this value ( Φ h ) g . {\displaystyle (\Phi h)g.}
  7. ^ a b c d e The notation z A ( ) Z {\displaystyle \left\langle z\mid A(\cdot )\right\rangle _{Z}} denotes the continuous linear functional defined by g z A g Z . {\displaystyle g\mapsto \left\langle z\mid Ag\right\rangle _{Z}.}

Proofs

  1. ^ This is because x K = x x , f φ f φ 2 f φ . {\displaystyle x_{K}=x-{\frac {\left\langle x,f_{\varphi }\right\rangle }{\left\|f_{\varphi }\right\|^{2}}}f_{\varphi }.} Now use f φ 2 = φ 2 {\displaystyle \left\|f_{\varphi }\right\|^{2}=\|\varphi \|^{2}} and x , f φ = φ ( x ) {\displaystyle \left\langle x,f_{\varphi }\right\rangle =\varphi (x)} and solve for f φ . {\displaystyle f_{\varphi }.} {\displaystyle \blacksquare }
  2. ^ A A z h = A z A h H = Φ A h Φ A z H {\displaystyle \left\langle A^{*}Az\mid h\right\rangle =\left\langle \,Az\mid Ah\,\right\rangle _{H}=\left\langle \,\Phi Ah\mid \Phi Az\,\right\rangle _{H^{*}}} where Φ A h := A h {\displaystyle \Phi Ah:=\left\langle Ah\mid \cdot \,\right\rangle } and Φ A z := A z . {\displaystyle \Phi Az:=\left\langle Az\mid \cdot \,\right\rangle .} By definition of the adjoint, A h A z = h A A z {\displaystyle \left\langle A^{*}h\mid A^{*}z\,\right\rangle =\left\langle h\mid AA^{*}z\,\right\rangle } so taking the complex conjugate of both sides proves that A A z h = A z A h . {\displaystyle \left\langle AA^{*}z\mid h\right\rangle =\left\langle A^{*}z\mid A^{*}h\right\rangle .} From A = Φ 1 t A Φ , {\displaystyle A^{*}=\Phi ^{-1}\circ {}^{t}A\circ \Phi ,} it follows that A A z | h H = A z A h H = Φ 1 t A Φ z Φ 1 t A Φ h H = t A Φ h t A Φ z H {\displaystyle \left\langle AA^{*}z\,|\,h\right\rangle _{H}=\left\langle A^{*}z\mid A^{*}h\right\rangle _{H}=\left\langle \Phi ^{-1}\circ {}^{t}A\circ \Phi z\mid \Phi ^{-1}\circ {}^{t}A\circ \Phi h\right\rangle _{H}=\left\langle \,{}^{t}A\circ \Phi h\mid {}^{t}A\circ \Phi z\right\rangle _{H^{*}}} where ( t A Φ ) h = h | A ( ) {\displaystyle \left({}^{t}A\circ \Phi \right)h=\langle h\,|\,A(\cdot )\rangle } and ( t A Φ ) z = z | A ( ) . {\displaystyle \left({}^{t}A\circ \Phi \right)z=\langle z\,|\,A(\cdot )\rangle .} {\displaystyle \blacksquare }

Bibliography

  • Bachman, George; Narici, Lawrence (2000). Functional Analysis (Second ed.). Mineola, New York: Dover Publications. ISBN 978-0486402512. OCLC 829157984.
  • Fréchet, M. (1907). "Sur les ensembles de fonctions et les opérations linéaires". Les Comptes rendus de l'Académie des sciences (in French). 144: 1414–1416.
  • Halmos, Paul R. (1950). Measure Theory. New York: D. Van Nostrand and Co.
  • Halmos, Paul R. (1982). A Hilbert Space Problem Book. New York: Springer. (Problem 3 contains a version for vector spaces with coordinate systems.)
  • Riesz, F. (1907). "Sur une espèce de géométrie analytique des systèmes de fonctions sommables". Comptes rendus de l'Académie des Sciences (in French). 144: 1409–1411.
  • Riesz, F. (1909). "Sur les opérations fonctionnelles linéaires". Comptes rendus de l'Académie des Sciences (in French). 149: 974–977.
  • Roman, Stephen (2008), Advanced Linear Algebra, Graduate Texts in Mathematics (Third ed.), Springer, ISBN 978-0-387-72828-5
  • Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
  • Rudin, Walter (1966). Real and Complex Analysis. McGraw-Hill. ISBN 0-07-100276-6.
  • Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.