Hamilton–Jacobi equation

Formulation of classical mechanics based on the calculus of variations

In physics, the Hamilton–Jacobi equation, named after William Rowan Hamilton and Carl Gustav Jacob Jacobi, is an alternative formulation of classical mechanics, equivalent to other formulations such as Newton's laws of motion, Lagrangian mechanics and Hamiltonian mechanics.

The Hamilton–Jacobi equation is a formulation of mechanics in which the motion of a particle can be represented as a wave. In this sense, it fulfilled a long-held goal of theoretical physics (dating at least to Johann Bernoulli in the eighteenth century) of finding an analogy between the propagation of light and the motion of a particle. The wave equation followed by mechanical systems is similar to, but not identical with, Schrödinger's equation, as described below; for this reason, the Hamilton–Jacobi equation is considered the "closest approach" of classical mechanics to quantum mechanics.[1][2] The qualitative form of this connection is called Hamilton's optico-mechanical analogy.

In mathematics, the Hamilton–Jacobi equation is a necessary condition describing extremal geometry in generalizations of problems from the calculus of variations. It can be understood as a special case of the Hamilton–Jacobi–Bellman equation from dynamic programming.[3]

Overview

The Hamilton–Jacobi equation is a first-order, non-linear partial differential equation

-\frac{\partial S}{\partial t} = H\!\left(\mathbf{q}, \frac{\partial S}{\partial \mathbf{q}}, t\right)

for a system of particles at coordinates q. The function H is the system's Hamiltonian, giving the system's energy. The solution of the equation is the action functional S,[4] called Hamilton's principal function in older textbooks. The solution can be related to the system Lagrangian L by an indefinite integral of the form used in the principle of least action:[5]: 431

S = \int \mathcal{L}\, dt + \text{constant}

Geometrical surfaces of constant action are perpendicular to system trajectories, creating a wavefront-like view of the system dynamics. This property of the Hamilton–Jacobi equation connects classical mechanics to quantum mechanics.[6]: 175
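
As a concrete illustration (a sketch added for orientation, not drawn from the cited sources), consider a free particle of mass m moving in one dimension from q_0 at time t_0 to q at time t. Its principal function is S = m(q − q_0)²/(2(t − t_0)), and a short Python/SymPy computation confirms that this S satisfies the Hamilton–Jacobi equation with H = p²/(2m):

    import sympy as sp

    q, q0, t, t0, m = sp.symbols('q q_0 t t_0 m', positive=True)

    # Hamilton's principal function of a 1-D free particle (illustrative example)
    S = m*(q - q0)**2 / (2*(t - t0))

    p = sp.diff(S, q)              # momentum p = dS/dq
    H = p**2 / (2*m)               # free-particle Hamiltonian evaluated at p = dS/dq
    residual = -sp.diff(S, t) - H  # left side minus right side of the HJE
    print(sp.simplify(residual))   # prints 0, so S solves -dS/dt = H(q, dS/dq, t)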

Mathematical formulation

Notation

Boldface variables such as q represent a list of N generalized coordinates,

\mathbf{q} = (q_1, q_2, \ldots, q_{N-1}, q_N)

A dot over a variable or list signifies the time derivative (see Newton's notation). For example,

\dot{\mathbf{q}} = \frac{d\mathbf{q}}{dt}.

The dot product notation between two lists of the same number of coordinates is a shorthand for the sum of the products of corresponding components, such as

\mathbf{p} \cdot \mathbf{q} = \sum_{k=1}^{N} p_k q_k.

The action functional (a.k.a. Hamilton's principal function)

Definition

Let the Hessian matrix H_L(q, q̇, t) = {∂²L/∂q̇^i ∂q̇^j}_{ij} be invertible. The relation

\frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot q^i} = \sum_{j=1}^{n}\left(\frac{\partial^2 \mathcal{L}}{\partial \dot q^i \, \partial \dot q^j}\,\ddot q^j + \frac{\partial^2 \mathcal{L}}{\partial \dot q^i \, \partial q^j}\,\dot q^j\right) + \frac{\partial^2 \mathcal{L}}{\partial \dot q^i \, \partial t}, \qquad i = 1, \ldots, n,

shows that the Euler–Lagrange equations form an n × n system of second-order ordinary differential equations. Inverting the matrix H_L transforms this system into

\ddot q^i = F_i(\mathbf{q}, \dot{\mathbf{q}}, t), \qquad i = 1, \ldots, n.

Let a time instant t_0 and a point q_0 ∈ M in the configuration space be fixed. The existence and uniqueness theorems guarantee that, for every v_0, the initial value problem with the conditions γ|_{τ = t_0} = q_0 and γ̇|_{τ = t_0} = v_0 has a locally unique solution γ = γ(τ; t_0, q_0, v_0). Additionally, let there be a sufficiently small time interval (t_0, t_1) such that extremals with different initial velocities v_0 do not intersect in M × (t_0, t_1). This means that, for any q ∈ M and any t ∈ (t_0, t_1), there can be at most one extremal γ = γ(τ; t, t_0, q, q_0) for which γ|_{τ = t_0} = q_0 and γ|_{τ = t} = q. Substituting γ = γ(τ; t, t_0, q, q_0) into the action functional results in Hamilton's principal function (HPF)

S(\mathbf{q}, t; \mathbf{q}_0, t_0) \;\overset{\text{def}}{=}\; \int_{t_0}^{t} \mathcal{L}\bigl(\gamma(\tau; \cdot), \dot\gamma(\tau; \cdot), \tau\bigr)\, d\tau,

where

  • γ = γ(τ; t, t_0, q, q_0),
  • γ|_{τ = t_0} = q_0,
  • γ|_{τ = t} = q.

Formula for the momenta

The momenta are defined as the quantities p_i(q, q̇, t) = ∂L/∂q̇^i. This section shows that the dependence of p_i on q̇ disappears once the HPF is known.

Indeed, let a time instant t_0 and a point q_0 in the configuration space be fixed. For every time instant t and point q, let γ = γ(τ; t, t_0, q, q_0) be the (unique) extremal from the definition of Hamilton's principal function S. Call v = γ̇(τ; t, t_0, q, q_0)|_{τ = t} the velocity at τ = t. Then

\frac{\partial S}{\partial q^i} = \left.\frac{\partial \mathcal{L}}{\partial \dot q^i}\right|_{\dot{\mathbf{q}} = \mathbf{v}}, \qquad i = 1, \ldots, n.
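
For instance (an illustrative example, not taken from the cited sources), for a one-dimensional free particle with L = m q̇²/2, the unique extremal joining (q_0, t_0) to (q, t) is the straight line with velocity v = (q − q_0)/(t − t_0), the HPF is S = m(q − q_0)²/(2(t − t_0)), and one checks directly that

    \frac{\partial S}{\partial q} = \frac{m\,(q - q_0)}{t - t_0} = m v = \left.\frac{\partial \mathcal{L}}{\partial \dot q}\right|_{\dot q = v}.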

Proof

While the proof below assumes the configuration space to be an open subset of ℝ^n, the underlying technique applies equally to arbitrary spaces. In the context of this proof, the calligraphic letter 𝒮 denotes the action functional, and the italic S denotes Hamilton's principal function.

Step 1. Let ξ = ξ(t) be a path in the configuration space, and δξ = δξ(t) a vector field along ξ. (For each t, the vector δξ(t) is called a perturbation, infinitesimal variation or virtual displacement of the mechanical system at the point ξ(t).) Recall that the variation δ𝒮_{δξ}[ξ, t_1, t_0] of the action 𝒮 at the point ξ in the direction δξ is given by the formula

\delta \mathcal{S}_{\delta\xi}[\xi, t_1, t_0] = \int_{t_0}^{t_1} \left( \frac{\partial \mathcal{L}}{\partial \mathbf{q}} - \frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{\mathbf{q}}} \right) \delta\xi \, dt + \left. \frac{\partial \mathcal{L}}{\partial \dot{\mathbf{q}}} \, \delta\xi \right|_{t_0}^{t_1},

where one should substitute q^i = ξ^i(t) and q̇^i = ξ̇^i(t) after calculating the partial derivatives on the right-hand side. (This formula follows from the definition of the Gateaux derivative via integration by parts.)

Assume that ξ is an extremal. Since ξ now satisfies the Euler–Lagrange equations, the integral term vanishes. If ξ's starting point q_0 is fixed, then, by the same logic that was used to derive the Euler–Lagrange equations, δξ(t_0) = 0. Thus,

\delta \mathcal{S}_{\delta\xi}[\xi, t; t_0] = \left. \frac{\partial \mathcal{L}}{\partial \dot{\mathbf{q}}} \right|_{\mathbf{q} = \xi(t),\; \dot{\mathbf{q}} = \dot\xi(t)} \delta\xi(t).

Step 2. Let γ = γ(τ; q, q_0, t, t_0) be the (unique) extremal from the definition of the HPF, δγ = δγ(τ) a vector field along γ, and γ_ε = γ_ε(τ; q_ε, q_0, t, t_0) a variation of γ "compatible" with δγ. In precise terms, γ_ε|_{ε = 0} = γ, dγ_ε/dε|_{ε = 0} = δγ, and γ_ε|_{τ = t_0} = γ|_{τ = t_0} = q_0.

By definition of HPF and Gateaux derivative,

\delta \mathcal{S}_{\delta\gamma}[\gamma, t] \overset{\text{def}}{=} \left. \frac{d \mathcal{S}[\gamma_\varepsilon, t]}{d\varepsilon} \right|_{\varepsilon = 0} = \left. \frac{d S(\gamma_\varepsilon(t), t)}{d\varepsilon} \right|_{\varepsilon = 0} = \frac{\partial S}{\partial \mathbf{q}} \, \delta\gamma(t).

Here, we took into account that q = γ(t; q, q_0, t, t_0) and dropped t_0 for compactness.

Step 3. We now substitute ξ = γ and δξ = δγ into the expression for δ𝒮_{δξ}[ξ, t; t_0] from Step 1 and compare the result with the formula derived in Step 2. The fact that, for t > t_0, the vector field δγ was chosen arbitrarily completes the proof.

Formula

Given the Hamiltonian H(q, p, t) of a mechanical system, the Hamilton–Jacobi equation is a first-order, non-linear partial differential equation for Hamilton's principal function S,[7]

-\frac{\partial S}{\partial t} = H\!\left(\mathbf{q}, \frac{\partial S}{\partial \mathbf{q}}, t\right).

Derivation

For an extremal ξ = ξ(t; t_0, q_0, v_0), where v_0 = ξ̇|_{t = t_0} is the initial velocity (see the discussion preceding the definition of the HPF),

\mathcal{L}(\xi(t), \dot\xi(t), t) = \frac{d S(\xi(t), t)}{dt} = \left[ \frac{\partial S}{\partial \mathbf{q}} \dot{\mathbf{q}} + \frac{\partial S}{\partial t} \right]_{\mathbf{q} = \xi(t),\; \dot{\mathbf{q}} = \dot\xi(t)}.

From the formula for p_i = p_i(q, t) and the coordinate-based definition of the Hamiltonian

H(\mathbf{q}, \mathbf{p}, t) = \mathbf{p} \cdot \dot{\mathbf{q}} - \mathcal{L}(\mathbf{q}, \dot{\mathbf{q}}, t),

with q̇(p, q, t) satisfying the equation p = ∂L(q, q̇, t)/∂q̇ (which is uniquely solvable for q̇), one obtains

\frac{\partial S}{\partial t} = \mathcal{L}(\mathbf{q}, \dot{\mathbf{q}}, t) - \frac{\partial S}{\partial \mathbf{q}} \cdot \dot{\mathbf{q}} = -H\!\left(\mathbf{q}, \frac{\partial S}{\partial \mathbf{q}}, t\right),

where q = ξ(t) and q̇ = ξ̇(t).

Alternatively, as described below, the Hamilton–Jacobi equation may be derived from Hamiltonian mechanics by treating S as the generating function for a canonical transformation of the classical Hamiltonian

H = H(q_1, q_2, \ldots, q_N; p_1, p_2, \ldots, p_N; t).

The conjugate momenta correspond to the first derivatives of S with respect to the generalized coordinates:

p_k = \frac{\partial S}{\partial q_k}.

As a solution to the Hamilton–Jacobi equation, the principal function contains N + 1 undetermined constants, the first N of them denoted α_1, α_2, …, α_N, and the last one coming from the integration of ∂S/∂t.

The relationship between p and q then describes the orbit in phase space in terms of these constants of motion. Furthermore, the quantities

\beta_k = \frac{\partial S}{\partial \alpha_k}, \qquad k = 1, 2, \ldots, N

are also constants of motion, and these equations can be inverted to find q as a function of all the α and β constants and time.[8]
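
As a simple illustration (added here as a sketch, not taken from the cited sources), for a one-dimensional free particle with H = p²/(2m), a complete solution depending on a single constant α is

    S(q, \alpha, t) = \alpha q - \frac{\alpha^2}{2m}\,t, \qquad \beta = \frac{\partial S}{\partial \alpha} = q - \frac{\alpha}{m}\,t \quad\Longrightarrow\quad q(t) = \beta + \frac{\alpha}{m}\,t,

so that α is the conserved momentum and β the initial position.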

Comparison with other formulations of mechanics

The Hamilton–Jacobi equation is a single, first-order partial differential equation for the function S of the N generalized coordinates q_1, q_2, …, q_N and the time t. The generalized momenta do not appear, except as derivatives of S, the classical action.

For comparison, in the equivalent Euler–Lagrange equations of motion of Lagrangian mechanics, the conjugate momenta also do not appear; however, those equations are a system of N generally second-order equations for the time evolution of the generalized coordinates. Similarly, Hamilton's equations of motion are another system of 2N first-order equations for the time evolution of the generalized coordinates and their conjugate momenta p_1, p_2, …, p_N.

Since the HJE is an equivalent expression of an integral minimization problem such as Hamilton's principle, the HJE can be useful in other problems of the calculus of variations and, more generally, in other branches of mathematics and physics, such as dynamical systems, symplectic geometry and quantum chaos. For example, the Hamilton–Jacobi equations can be used to determine the geodesics on a Riemannian manifold, an important variational problem in Riemannian geometry. However, as a computational tool, the partial differential equations are notoriously complicated to solve except when it is possible to separate the independent variables; in that case the HJE becomes computationally useful.[5]: 444

Derivation using a canonical transformation

Any canonical transformation involving a type-2 generating function G_2(q, P, t) leads to the relations

\mathbf{p} = \frac{\partial G_2}{\partial \mathbf{q}}, \qquad \mathbf{Q} = \frac{\partial G_2}{\partial \mathbf{P}}, \qquad K(\mathbf{Q}, \mathbf{P}, t) = H(\mathbf{q}, \mathbf{p}, t) + \frac{\partial G_2}{\partial t}

and Hamilton's equations in terms of the new variables P, Q and the new Hamiltonian K have the same form:

\dot{\mathbf{P}} = -\frac{\partial K}{\partial \mathbf{Q}}, \qquad \dot{\mathbf{Q}} = +\frac{\partial K}{\partial \mathbf{P}}.

To derive the HJE, a generating function G_2(q, P, t) is chosen in such a way that the new Hamiltonian K = 0. Hence, all its derivatives are also zero, and the transformed Hamilton's equations become trivial:

\dot{\mathbf{P}} = \dot{\mathbf{Q}} = 0,

so the new generalized coordinates and momenta are constants of motion. As they are constants, in this context the new generalized momenta P are usually denoted α_1, α_2, …, α_N, i.e. P_m = α_m, and the new generalized coordinates Q are typically denoted β_1, β_2, …, β_N, so Q_m = β_m.

Setting the generating function equal to Hamilton's principal function, plus an arbitrary constant A:

G_2(\mathbf{q}, \boldsymbol{\alpha}, t) = S(\mathbf{q}, t) + A,

the HJE automatically arises:

\mathbf{p} = \frac{\partial G_2}{\partial \mathbf{q}} = \frac{\partial S}{\partial \mathbf{q}} \quad\rightarrow\quad H(\mathbf{q}, \mathbf{p}, t) + \frac{\partial G_2}{\partial t} = 0 \quad\rightarrow\quad H\!\left(\mathbf{q}, \frac{\partial S}{\partial \mathbf{q}}, t\right) + \frac{\partial S}{\partial t} = 0.

When solved for S(q, α, t), these also give us the useful equations

\mathbf{Q} = \boldsymbol{\beta} = \frac{\partial S}{\partial \boldsymbol{\alpha}},

or, written in components for clarity,

Q_m = \beta_m = \frac{\partial S(\mathbf{q}, \boldsymbol{\alpha}, t)}{\partial \alpha_m}.

Ideally, these N equations can be inverted to find the original generalized coordinates q as a function of the constants α, β, and t, thus solving the original problem.

Separation of variables

When the problem allows additive separation of variables, the HJE leads directly to constants of motion. For example, the time t can be separated if the Hamiltonian does not depend on time explicitly. In that case, the time derivative ∂S/∂t in the HJE must be a constant, usually denoted −E, giving the separated solution

S = W(q_1, q_2, \ldots, q_N) - E t,

where the time-independent function W(q) is sometimes called the abbreviated action or Hamilton's characteristic function[5]: 434  and is sometimes[9]: 607  written S_0 (see action principle names). The reduced Hamilton–Jacobi equation can then be written

H\!\left(\mathbf{q}, \frac{\partial S}{\partial \mathbf{q}}\right) = E.
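
As an illustrative sketch of the method (an example added here, not worked in the cited sources), the one-dimensional harmonic oscillator with H = p²/(2m) + ½mω²q² can be treated with the separated ansatz S = W(q) − Et; the constant β = ∂S/∂E then gives the familiar sinusoidal motion. The following Python/SymPy fragment carries out the computation symbolically:

    import sympy as sp

    q, t, m, w, E = sp.symbols('q t m omega E', positive=True)

    # dW/dq obtained from H(q, dW/dq) = E with H = p**2/(2m) + m*w**2*q**2/2
    dW_dq = sp.sqrt(2*m*E - m**2*w**2*q**2)

    # beta = dS/dE with S = W(q) - E*t; differentiate under the integral sign
    beta = sp.integrate(sp.diff(dW_dq, E), q) - t
    print(sp.simplify(beta))
    # -> (1/w)*asin(q*w*sqrt(m/(2*E))) - t, up to SymPy's preferred form.
    # Setting beta = const and inverting gives q(t) = sqrt(2E/(m*w**2)) * sin(w*(t + beta)).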

To illustrate separability for other variables, a certain generalized coordinate q_k and its derivative ∂S/∂q_k are assumed to appear together as a single function

\psi\!\left(q_k, \frac{\partial S}{\partial q_k}\right)

in the Hamiltonian

H = H(q_1, q_2, \ldots, q_{k-1}, q_{k+1}, \ldots, q_N; p_1, p_2, \ldots, p_{k-1}, p_{k+1}, \ldots, p_N; \psi; t).

In that case, the function S can be partitioned into two functions, one that depends only on q_k and another that depends only on the remaining generalized coordinates:

S = S_k(q_k) + S_{\text{rem}}(q_1, \ldots, q_{k-1}, q_{k+1}, \ldots, q_N, t).

Substitution of these formulae into the Hamilton–Jacobi equation shows that the function ψ must be a constant (denoted here as Γ_k), yielding a first-order ordinary differential equation for S_k(q_k),

\psi\!\left(q_k, \frac{d S_k}{d q_k}\right) = \Gamma_k.

In fortunate cases, the function S can be separated completely into N functions S_m(q_m),

S = S_1(q_1) + S_2(q_2) + \cdots + S_N(q_N) - E t.

In such a case, the problem devolves to N ordinary differential equations.

The separability of S depends both on the Hamiltonian and on the choice of generalized coordinates. For orthogonal coordinates and Hamiltonians that have no time dependence and are quadratic in the generalized momenta, S will be completely separable if the potential energy is additively separable in each coordinate, where the potential energy term for each coordinate is multiplied by the coordinate-dependent factor in the corresponding momentum term of the Hamiltonian (the Staeckel conditions). For illustration, several examples in orthogonal coordinates are worked in the next sections.

Examples in various coordinate systems

Spherical coordinates

In spherical coordinates the Hamiltonian of a particle moving in a conservative potential U can be written

H = \frac{1}{2m}\left[ p_r^2 + \frac{p_\theta^2}{r^2} + \frac{p_\phi^2}{r^2 \sin^2\theta} \right] + U(r, \theta, \phi).

The Hamilton–Jacobi equation is completely separable in these coordinates provided that there exist functions U_r(r), U_θ(θ), U_φ(φ) such that U can be written in the analogous form

U(r, \theta, \phi) = U_r(r) + \frac{U_\theta(\theta)}{r^2} + \frac{U_\phi(\phi)}{r^2 \sin^2\theta}.

Substitution of the completely separated solution

S = S_r(r) + S_\theta(\theta) + S_\phi(\phi) - E t

into the HJE yields

\frac{1}{2m}\left(\frac{d S_r}{dr}\right)^2 + U_r(r) + \frac{1}{2 m r^2}\left[\left(\frac{d S_\theta}{d\theta}\right)^2 + 2 m U_\theta(\theta)\right] + \frac{1}{2 m r^2 \sin^2\theta}\left[\left(\frac{d S_\phi}{d\phi}\right)^2 + 2 m U_\phi(\phi)\right] = E.

This equation may be solved by successive integrations of ordinary differential equations, beginning with the equation for φ,

\left(\frac{d S_\phi}{d\phi}\right)^2 + 2 m U_\phi(\phi) = \Gamma_\phi,

where Γ_φ is a constant of the motion that eliminates the φ dependence from the Hamilton–Jacobi equation:

\frac{1}{2m}\left(\frac{d S_r}{dr}\right)^2 + U_r(r) + \frac{1}{2 m r^2}\left[\left(\frac{d S_\theta}{d\theta}\right)^2 + 2 m U_\theta(\theta) + \frac{\Gamma_\phi}{\sin^2\theta}\right] = E.

The next ordinary differential equation involves the θ generalized coordinate,

\left(\frac{d S_\theta}{d\theta}\right)^2 + 2 m U_\theta(\theta) + \frac{\Gamma_\phi}{\sin^2\theta} = \Gamma_\theta,

where Γ_θ is again a constant of the motion that eliminates the θ dependence and reduces the HJE to the final ordinary differential equation

\frac{1}{2m}\left(\frac{d S_r}{dr}\right)^2 + U_r(r) + \frac{\Gamma_\theta}{2 m r^2} = E,
whose integration completes the solution for S.
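
For instance (an illustrative special case added here, not worked in the cited sources), for the Kepler potential U(r) = −k/r with U_θ = U_φ = 0, the constant Γ_φ is the square of the conserved momentum p_φ, Γ_θ is the square of the total angular momentum, and the radial equation integrates to

    S_r(r) = \int \sqrt{2m\left(E + \frac{k}{r}\right) - \frac{\Gamma_\theta}{r^2}}\; dr,

which is the standard starting point for the classical treatment of planetary orbits.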

Elliptic cylindrical coordinates

The Hamiltonian in elliptic cylindrical coordinates can be written

H = \frac{p_\mu^2 + p_\nu^2}{2 m a^2 \left(\sinh^2\mu + \sin^2\nu\right)} + \frac{p_z^2}{2m} + U(\mu, \nu, z),

where the foci of the ellipses are located at ±a on the x-axis. The Hamilton–Jacobi equation is completely separable in these coordinates provided that U has an analogous form

U(\mu, \nu, z) = \frac{U_\mu(\mu) + U_\nu(\nu)}{\sinh^2\mu + \sin^2\nu} + U_z(z),

where U_μ(μ), U_ν(ν) and U_z(z) are arbitrary functions. Substitution of the completely separated solution

S = S_\mu(\mu) + S_\nu(\nu) + S_z(z) - E t

into the HJE yields

\frac{1}{2m}\left(\frac{d S_z}{dz}\right)^2 + U_z(z) + \frac{1}{2 m a^2 \left(\sinh^2\mu + \sin^2\nu\right)}\left[\left(\frac{d S_\mu}{d\mu}\right)^2 + \left(\frac{d S_\nu}{d\nu}\right)^2 + 2 m a^2 U_\mu(\mu) + 2 m a^2 U_\nu(\nu)\right] = E.

Separating the first ordinary differential equation

\frac{1}{2m}\left(\frac{d S_z}{dz}\right)^2 + U_z(z) = \Gamma_z

yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator)

\left(\frac{d S_\mu}{d\mu}\right)^2 + \left(\frac{d S_\nu}{d\nu}\right)^2 + 2 m a^2 U_\mu(\mu) + 2 m a^2 U_\nu(\nu) = 2 m a^2 \left(\sinh^2\mu + \sin^2\nu\right)\left(E - \Gamma_z\right),

which itself may be separated into two independent ordinary differential equations,

\left(\frac{d S_\mu}{d\mu}\right)^2 + 2 m a^2 U_\mu(\mu) + 2 m a^2 \left(\Gamma_z - E\right) \sinh^2\mu = \Gamma_\mu

\left(\frac{d S_\nu}{d\nu}\right)^2 + 2 m a^2 U_\nu(\nu) + 2 m a^2 \left(\Gamma_z - E\right) \sin^2\nu = \Gamma_\nu,

that, when solved, provide a complete solution for S.

Parabolic cylindrical coordinates

The Hamiltonian in parabolic cylindrical coordinates can be written

H = \frac{p_\sigma^2 + p_\tau^2}{2 m \left(\sigma^2 + \tau^2\right)} + \frac{p_z^2}{2m} + U(\sigma, \tau, z).

The Hamilton–Jacobi equation is completely separable in these coordinates provided that U has an analogous form

U(\sigma, \tau, z) = \frac{U_\sigma(\sigma) + U_\tau(\tau)}{\sigma^2 + \tau^2} + U_z(z),

where U_σ(σ), U_τ(τ), and U_z(z) are arbitrary functions. Substitution of the completely separated solution

S = S_\sigma(\sigma) + S_\tau(\tau) + S_z(z) - E t + \text{constant}

into the HJE yields

\frac{1}{2m}\left(\frac{d S_z}{dz}\right)^2 + U_z(z) + \frac{1}{2 m \left(\sigma^2 + \tau^2\right)}\left[\left(\frac{d S_\sigma}{d\sigma}\right)^2 + \left(\frac{d S_\tau}{d\tau}\right)^2 + 2 m U_\sigma(\sigma) + 2 m U_\tau(\tau)\right] = E.

Separating the first ordinary differential equation

\frac{1}{2m}\left(\frac{d S_z}{dz}\right)^2 + U_z(z) = \Gamma_z

yields the reduced Hamilton–Jacobi equation (after re-arrangement and multiplication of both sides by the denominator)

\left(\frac{d S_\sigma}{d\sigma}\right)^2 + \left(\frac{d S_\tau}{d\tau}\right)^2 + 2 m U_\sigma(\sigma) + 2 m U_\tau(\tau) = 2 m \left(\sigma^2 + \tau^2\right)\left(E - \Gamma_z\right),

which itself may be separated into two independent ordinary differential equations,

\left(\frac{d S_\sigma}{d\sigma}\right)^2 + 2 m U_\sigma(\sigma) + 2 m \sigma^2 \left(\Gamma_z - E\right) = \Gamma_\sigma

\left(\frac{d S_\tau}{d\tau}\right)^2 + 2 m U_\tau(\tau) + 2 m \tau^2 \left(\Gamma_z - E\right) = \Gamma_\tau,

that, when solved, provide a complete solution for S.

Waves and particles

Optical wave fronts and trajectories

The HJE establishes a duality between trajectories and wavefronts.[10] For example, in geometrical optics, light can be considered either as "rays" or as waves. The wave front can be defined as the surface C_t that the light emitted at time t = 0 has reached at time t. Light rays and wave fronts are dual: if one is known, the other can be deduced.

More precisely, geometrical optics is a variational problem in which the "action" is the travel time T along a path,

T = \frac{1}{c} \int_A^B n \, ds,

where n is the medium's index of refraction and ds is an infinitesimal arc length. From the above formulation, one can compute the ray paths using the Euler–Lagrange formulation; alternatively, one can compute the wave fronts by solving the Hamilton–Jacobi equation. Knowing one leads to knowing the other.
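
In this optical setting the Hamilton–Jacobi equation takes the form of the eikonal equation (stated here as a standard result for orientation rather than as a claim of the cited sources): writing the optical path length in an isotropic medium as S(r) = cT(r), the wave fronts S = const satisfy

    \left|\nabla S\right|^2 = n^2(\mathbf{r}),

and the rays are the curves everywhere normal to these fronts.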

The above duality is very general and applies to all systems that derive from a variational principle: either compute the trajectories using the Euler–Lagrange equations or the wave fronts by solving the Hamilton–Jacobi equation.

The wave front at time t, for a system initially at q_0 at time t_0, is defined as the collection of points q such that S(q, t) = const. If S(q, t) is known, the momentum is immediately deduced:

\mathbf{p} = \frac{\partial S}{\partial \mathbf{q}}.

Once p is known, the tangents to the trajectories q̇ are computed by solving the equation

\frac{\partial \mathcal{L}}{\partial \dot{\mathbf{q}}} = \mathbf{p}

for q̇, where L is the Lagrangian. The trajectories are then recovered from the knowledge of q̇.
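
As a simple illustration of this duality (an example added for orientation, not drawn from the cited sources), take a free particle released from q_0 at time t_0, whose principal function is

    S(\mathbf{q}, t) = \frac{m\,\lVert \mathbf{q} - \mathbf{q}_0 \rVert^2}{2\,(t - t_0)}.

The wave fronts S = const are concentric spheres centred at q_0, while p = ∂S/∂q = m(q − q_0)/(t − t_0) points radially outward, so the trajectories are the straight lines orthogonal to the fronts, exactly as described in the Overview.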

Relationship to the Schrödinger equation

The isosurfaces of the function S(q, t) can be determined at any time t. The motion of an S-isosurface as a function of time is defined by the motions of the particles beginning at the points q on the isosurface. The motion of such an isosurface can be thought of as a wave moving through q-space, although it does not obey the wave equation exactly. To show this, let S represent the phase of a wave

\psi = \psi_0 e^{i S / \hbar},

where ħ is a constant (the Planck constant) introduced to make the exponential argument dimensionless; changes in the amplitude of the wave can be represented by having S be a complex number. The Hamilton–Jacobi equation is then rewritten as

\frac{\hbar^2}{2m} \nabla^2 \psi - U \psi = \frac{\hbar}{i} \frac{\partial \psi}{\partial t},
which is the Schrödinger equation.

Conversely, starting with the Schrödinger equation and our ansatz for ψ, it can be deduced that[11]

\frac{1}{2m} \left(\nabla S\right)^2 + U + \frac{\partial S}{\partial t} = \frac{i\hbar}{2m} \nabla^2 S.

The classical limit (ħ → 0) of the Schrödinger equation above becomes identical to the following variant of the Hamilton–Jacobi equation,

\frac{1}{2m} \left(\nabla S\right)^2 + U + \frac{\partial S}{\partial t} = 0.
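
The algebra behind the last two displays can be checked symbolically. The following Python/SymPy sketch (an illustration added here, restricted to one dimension so that ∇² becomes ∂²/∂x²) substitutes ψ = exp(iS/ħ) into the Schrödinger equation and recovers the equation above together with the O(ħ) term:

    import sympy as sp

    x, t, m, hbar = sp.symbols('x t m hbar', positive=True)
    U = sp.Function('U')(x)
    S = sp.Function('S')(x, t)

    psi = sp.exp(sp.I*S/hbar)
    # One-dimensional Schrödinger operator applied to psi = exp(i*S/hbar)
    schroedinger = -hbar**2/(2*m)*sp.diff(psi, x, 2) + U*psi - sp.I*hbar*sp.diff(psi, t)

    # Divide out the common factor psi and expand
    qhje = sp.expand(sp.simplify(schroedinger/psi))
    print(qhje)
    # -> S_t + S_x**2/(2*m) + U(x) - I*hbar*S_xx/(2*m), written with SymPy Derivative objects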

Applications

HJE in a gravitational field

Using the energy–momentum relation in the form[12]

g^{\alpha\beta} P_\alpha P_\beta - (m c)^2 = 0

for a particle of rest mass m travelling in curved space, where g^{αβ} are the contravariant components of the metric tensor (i.e., the inverse metric) solved from the Einstein field equations, and c is the speed of light, and setting the four-momentum P_α equal to the four-gradient of the action S,

P_\alpha = -\frac{\partial S}{\partial x^\alpha},

gives the Hamilton–Jacobi equation in the geometry determined by the metric g:

g^{\alpha\beta} \frac{\partial S}{\partial x^\alpha} \frac{\partial S}{\partial x^\beta} - (m c)^2 = 0,

in other words, in a gravitational field.
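
For orientation (a standard special case sketched here, not taken from the cited references), in the Schwarzschild geometry with line element ds² = (1 − r_s/r)c²dt² − (1 − r_s/r)⁻¹dr² − r²dθ² − r²sin²θ dφ², the equation above becomes

    \frac{1}{c^2\left(1 - \frac{r_s}{r}\right)}\left(\frac{\partial S}{\partial t}\right)^2 - \left(1 - \frac{r_s}{r}\right)\left(\frac{\partial S}{\partial r}\right)^2 - \frac{1}{r^2}\left(\frac{\partial S}{\partial \theta}\right)^2 - \frac{1}{r^2\sin^2\theta}\left(\frac{\partial S}{\partial \varphi}\right)^2 = m^2 c^2,

which is the starting point for deriving the relativistic orbits of test particles around a spherically symmetric mass.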

HJE in electromagnetic fields

For a particle of rest mass m and electric charge e moving in an electromagnetic field with four-potential A_i = (φ, A) in vacuum, the Hamilton–Jacobi equation in the geometry determined by the metric tensor g^{ik} = g_{ik} has the form

g^{ik} \left(\frac{\partial S}{\partial x^i} + \frac{e}{c} A_i\right) \left(\frac{\partial S}{\partial x^k} + \frac{e}{c} A_k\right) = m^2 c^2

and can be solved for Hamilton's principal function S, yielding the particle trajectory and momentum:[13]

x = -\frac{e}{c\gamma} \int A_x \, d\xi,

y = -\frac{e}{c\gamma} \int A_y \, d\xi,

z = -\frac{e^2}{2 c^2 \gamma^2} \int \left(\mathbf{A}^2 - \overline{\mathbf{A}^2}\right) d\xi,

\xi = c t - \frac{e^2}{2 \gamma^2 c^2} \int \left(\mathbf{A}^2 - \overline{\mathbf{A}^2}\right) d\xi,

p_x = -\frac{e}{c} A_x, \qquad p_y = -\frac{e}{c} A_y,

p_z = \frac{e^2}{2 \gamma c} \left(\mathbf{A}^2 - \overline{\mathbf{A}^2}\right),

\mathcal{E} = c\gamma + \frac{e^2}{2 \gamma c} \left(\mathbf{A}^2 - \overline{\mathbf{A}^2}\right),

where ξ = ct − z and γ² = m²c² + (e²/c²) \overline{\mathbf{A}^2}, with the bar denoting the average over one cycle of the wave's vector potential.

A circularly polarized wave

In the case of circular polarization,

E_x = E_0 \sin \omega\xi_1, \qquad E_y = E_0 \cos \omega\xi_1,

A_x = \frac{c E_0}{\omega} \cos \omega\xi_1, \qquad A_y = -\frac{c E_0}{\omega} \sin \omega\xi_1.

Hence

x = -\frac{e c E_0}{\gamma \omega^2} \sin \omega\xi_1,

y = -\frac{e c E_0}{\gamma \omega^2} \cos \omega\xi_1,

p_x = -\frac{e E_0}{\omega} \cos \omega\xi_1,

p_y = \frac{e E_0}{\omega} \sin \omega\xi_1,

where ξ_1 = ξ/c, implying that the particle moves along a circular trajectory of constant radius e c E_0/(γω²), with a momentum of constant magnitude e E_0/ω directed along the magnetic field vector.

A monochromatic linearly polarized plane wave

For a plane, monochromatic, linearly polarized wave with the field E directed along the y axis,

E_y = E_0 \cos \omega\xi_1,

A_y = -\frac{c E_0}{\omega} \sin \omega\xi_1,

hence

x = \text{const},

y_0 = -\frac{e c E_0}{\gamma \omega^2},

y = y_0 \cos \omega\xi_1, \qquad z = C_z y_0 \sin 2\omega\xi_1,

C_z = \frac{e E_0}{8 \gamma \omega}, \qquad \gamma^2 = m^2 c^2 + \frac{e^2 E_0^2}{2 \omega^2},

p_x = 0,

p_{y,0} = \frac{e E_0}{\omega},

p_y = p_{y,0} \sin \omega\xi_1,

p_z = -2 C_z p_{y,0} \cos 2\omega\xi_1,

implying that the particle moves along a figure-8 trajectory whose long axis is oriented along the electric field vector E.

An electromagnetic wave with a solenoidal magnetic field

For an electromagnetic wave with an axial (solenoidal) magnetic field:[14]

E = E_\phi = \frac{\omega \rho_0}{c} B_0 \cos \omega\xi_1,

A_\phi = -\rho_0 B_0 \sin \omega\xi_1 = -\frac{L_s}{\pi \rho_0 N_s} I_0 \sin \omega\xi_1,

hence

x = \text{const},

y_0 = -\frac{e \rho_0 B_0}{\gamma \omega},

y = y_0 \cos \omega\xi_1,

z = C_z y_0 \sin 2\omega\xi_1,

C_z = \frac{e \rho_0 B_0}{8 c \gamma},

\gamma^2 = m^2 c^2 + \frac{e^2 \rho_0^2 B_0^2}{2 c^2},

p_x = 0,

p_{y,0} = \frac{e \rho_0 B_0}{c},

p_y = p_{y,0} \sin \omega\xi_1,

p_z = -2 C_z p_{y,0} \cos 2\omega\xi_1,

where B_0 is the magnetic field magnitude in a solenoid with effective radius ρ_0, inductance L_s, number of windings N_s, and electric current magnitude I_0 through the solenoid windings. The particle moves along a figure-8 trajectory in the yz plane, which lies perpendicular to the solenoid axis at an arbitrary azimuthal angle φ owing to the axial symmetry of the solenoidal magnetic field.


References

  1. ^ Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). Reading, MA: Addison-Wesley. pp. 484–492. ISBN 978-0-201-02918-5. (particularly the discussion beginning in the last paragraph of page 491)
  2. ^ Sakurai, pp. 103–107.
  3. ^ Kálmán, Rudolf E. (1963). "The Theory of Optimal Control and the Calculus of Variations". In Bellman, Richard (ed.). Mathematical Optimization Techniques. Berkeley: University of California Press. pp. 309–331. OCLC 1033974.
  4. ^ Hand, L.N.; Finch, J.D. (2008). Analytical Mechanics. Cambridge University Press. ISBN 978-0-521-57572-0.
  5. ^ a b c Goldstein, Herbert; Poole, Charles P.; Safko, John L. (2008). Classical Mechanics (3rd ed.). San Francisco: Addison Wesley. ISBN 978-0-201-65702-9.
  6. ^ Coopersmith, Jennifer (2017). The lazy universe : an introduction to the principle of least action. Oxford, UK / New York, NY: Oxford University Press. ISBN 978-0-19-874304-0.
  7. ^ Hand, L. N.; Finch, J. D. (2008). Analytical Mechanics. Cambridge University Press. ISBN 978-0-521-57572-0.
  8. ^ Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). Reading, MA: Addison-Wesley. p. 440. ISBN 978-0-201-02918-5.
  9. ^ Hanc, Jozef; Taylor, Edwin F.; Tuleja, Slavomir (2005-07-01). "Variational mechanics in one and two dimensions". American Journal of Physics. 73 (7): 603–610. Bibcode:2005AmJPh..73..603H. doi:10.1119/1.1848516. ISSN 0002-9505.
  10. ^ Houchmandzadeh, Bahram (2020). "The Hamilton–Jacobi equation: an alternative approach". American Journal of Physics. 88 (5): 353. arXiv:1910.09414. Bibcode:2020AmJPh..88..353H. doi:10.1119/10.0000781. S2CID 204800598.
  11. ^ Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). Reading, MA: Addison-Wesley. pp. 490–491. ISBN 978-0-201-02918-5.
  12. ^ Wheeler, John; Misner, Charles; Thorne, Kip (1973). Gravitation. W.H. Freeman & Co. pp. 649, 1188. ISBN 978-0-7167-0344-0.
  13. ^ Landau, L.; Lifshitz, E. (1959). The Classical Theory of Fields. Reading, Massachusetts: Addison-Wesley. OCLC 17966515.
  14. ^ E. V. Shun'ko; D. E. Stevenson; V. S. Belkin (2014). "Inductively Coupling Plasma Reactor With Plasma Electron Energy Controllable in the Range from ~6 to ~100 eV". IEEE Transactions on Plasma Science. 42, part II (3): 774–785. Bibcode:2014ITPS...42..774S. doi:10.1109/TPS.2014.2299954. S2CID 34765246.

Further reading

  • Arnold, V.I. (1989). Mathematical Methods of Classical Mechanics (2 ed.). New York: Springer. ISBN 0-387-96890-3.
  • Hamilton, W. (1833). "On a General Method of Expressing the Paths of Light, and of the Planets, by the Coefficients of a Characteristic Function" (PDF). Dublin University Review: 795–826.
  • Hamilton, W. (1834). "On the Application to Dynamics of a General Mathematical Method previously Applied to Optics" (PDF). British Association Report: 513–518.
  • Fetter, A. & Walecka, J. (2003). Theoretical Mechanics of Particles and Continua. Dover Books. ISBN 978-0-486-43261-8.
  • Landau, L. D.; Lifshitz, E. M. (1975). Mechanics. Amsterdam: Elsevier.
  • Sakurai, J. J. (1985). Modern Quantum Mechanics. Benjamin/Cummings Publishing. ISBN 978-0-8053-7501-5.
  • Jacobi, C. G. J. (1884), Vorlesungen über Dynamik, C. G. J. Jacobi's Gesammelte Werke (in German), Berlin: G. Reimer, OL 14009561M
  • Nakane, Michiyo; Fraser, Craig G. (2002). "The Early History of Hamilton-Jacobi Dynamics". Centaurus. 44 (3–4): 161–227. doi:10.1111/j.1600-0498.2002.tb00613.x. PMID 17357243.