Abstract
Many evolution problems in physics are described by partial differential equations on an infinite domain; therefore, one is interested in the solutions to such problems for a given initial dataset. A prominent example is the binary black-hole problem within Einstein’s theory of gravitation, in which one computes the gravitational radiation emitted from the inspiral, merger and ringdown of the two black holes. Powerful mathematical tools can be used to establish qualitative statements about the solutions, such as their existence, uniqueness, continuous dependence on the initial data, or their asymptotic behavior over large time scales. However, one is often interested in computing the solution itself, and unless the partial differential equation is very simple, or the initial data possesses a high degree of symmetry, this computation requires approximation by numerical discretization. When solving such discrete problems on a machine, one is faced with a finite limit to computational resources, which leads to the replacement of the infinite continuum domain with a finite computer grid. This, in turn, leads to a discrete initial-boundary value problem. The hope is to recover, with high accuracy, the exact solution in the limit where the grid spacing converges to zero with the boundary being pushed to infinity.
The goal of this article is to review some of the theory necessary to understand the continuum and discrete initial-boundary value problems arising from hyperbolic partial differential equations, and to discuss its applications to numerical relativity; in particular, we present well-posed initial and initial-boundary value formulations of Einstein’s equations, and we discuss multi-domain, high-order finite-difference and spectral methods to solve them.
Introduction
This review discusses fundamental tools from the analytical and numerical theory underlying the Einstein field equations as an evolution problem on a finite computational domain. The process of reaching the current status of numerical relativity after decades of effort has driven the community not only to use state-of-the-art techniques, but also to extend them and to work out new approaches and methodologies of its own. This review discusses some of the theory involved in setting up the problem and numerical approaches for solving it. Its scope is rather broad: it ranges from analytical aspects related to the well-posedness of the Cauchy problem to numerical discretization schemes guaranteeing stability and convergence to the exact solution.
At the continuum level, emphasis is placed on setting up the initial-boundary value problem (IBVP) for Einstein’s equations properly, by which we mean obtaining a well-posed formulation that is flexible enough to incorporate coordinate conditions allowing for long-term, accurate and stable numerical evolutions. Here, the well-posedness property is essential, in that it guarantees the existence of a unique solution, which depends continuously on the initial and boundary data. In particular, this ensures that small perturbations in the data do not get arbitrarily amplified. Since such small perturbations do appear in numerical simulations because of discretization errors or finite machine precision, if such unbounded growth were allowed, the numerical solution would not converge to the exact one as resolution is increased. This picture is at the core of Lax’s celebrated theorem, which implies that consistency of a numerical scheme is not sufficient for its solution to converge to the exact one. Instead, the scheme also needs to be numerically stable, a property that is the discrete counterpart of well-posedness of the continuum problem.
While the well-posedness of the Cauchy problem in general relativity in the absence of boundaries was established a long time ago, the IBVP has been addressed, and well-posed problems formulated, only relatively recently. This is mainly due to the fact that the IBVP presents several new challenges, related to constraint preservation, the minimization of spurious reflections, and well-posedness. In fact, it is only very recently that such a well-posed problem has been found for a metric-based formulation used in numerical relativity, and there are still open issues that need to be sorted out. It is interesting to point out that the IBVP in general relativity has driven research leading to well-posedness results for second-order systems with a large class of boundary conditions, which, in addition to Einstein’s equations, are also applicable to Maxwell’s equations in their potential formulation.
At the discrete level, the focus of this review is mainly on designing numerical schemes for which fast convergence to the exact solution is guaranteed. Unfortunately, few, if any, general results are known for nonlinear equations and, therefore, we concentrate on schemes for which stability and convergence can be shown at least at the linear level. If the exact solution is smooth, as expected for vacuum solutions of Einstein’s field equations with smooth initial data and appropriate gauge conditions, at least as long as no curvature singularities form, it is not unreasonable to expect that schemes guaranteeing stability at the linearized level, perhaps with some additional filtering, are also stable for the nonlinear problem. Furthermore, since the solutions are expected to be smooth, emphasis is placed here on fast-converging space discretizations, such as high-order finite-difference or spectral methods, especially those which lend themselves to multi-domain implementations.
The organization of this review is as follows. Section 3 starts with a discussion of well-posedness of initial-value problems for evolution equations in general, with special emphasis on hyperbolic ones, including their algebraic characterization. Next, in Section 4 we review some formulations of Einstein’s equations, which yield a well-posed initial-value problem. Here, we mainly focus on the harmonic and BSSN formulations, which are the two most widely used ones in numerical relativity, as well as the ADM formulation with different gauge conditions. Actual numerical simulations always involve the presence of computational boundaries, which raises the need to analyze the well-posedness of the IBVP. For this reason, the theory of IBVPs for hyperbolic problems is reviewed in Section 5, followed by a presentation of the state of the art of boundary conditions for the harmonic and BSSN formulations of Einstein’s equations in Section 6, where open problems related to gauge uniqueness are also described.
Section 7 reviews some of the numerical stability theory, including necessary eigenvalue conditions. These are quite useful in practice for analyzing complicated systems or discretizations. We also discuss necessary and sufficient conditions for stability within the method of lines, and Runge-Kutta methods. Sections 8 and 9 are devoted to two classes of spatial approximations: finite differences and spectral methods. Finite differences are rather standard and widespread, so in Section 8 we mostly focus on the construction of optimized operators of arbitrarily high order satisfying the summation-by-parts property, which is useful in stability analyses. We also briefly mention classical polynomial interpolation and how to systematically construct finite-difference operators from it. In Section 9 we present the main elements and theory of spectral methods, including spectral convergence from solutions to Sturm-Liouville problems, expansions in orthogonal polynomials, Gauss quadratures, spectral differentiation, and spectral viscosity. We present several explicit formulae for the families of polynomials most widely used: Legendre and Chebyshev. Section 10 describes boundary closures, which in the present context refer to procedures for imposing boundary conditions leading to stability results. We emphasize the penalty technique, which applies both to finite-difference methods of arbitrarily high order and to spectral methods, as well as to outer and interface boundaries, such as those appearing when there are multiple grids, as in domain decompositions for complex geometries. We also discuss absorbing boundary conditions for Einstein’s equations. Finally, Section 11 presents a sample of approaches in numerical relativity using multiple, semi-structured grids and/or curvilinear coordinates. In particular, some of these examples illustrate many of the methods discussed in this review in realistic simulations.
There are many topics related to numerical relativity that are not covered by this review. It does not include discussions of physical results in general relativity obtained through numerical simulations, such as critical phenomena or gravitational waveforms computed from binary black-hole mergers. For reviews on these topics we refer the reader to [223] and [337, 122], respectively. See also [9, 45] for recent books on numerical relativity. Next, we do not discuss setting up initial data and solving the Einstein constraints, and refer to [133] instead. For reviews on the characteristic and conformal approaches, which are only briefly mentioned in Section 6.4, we refer the reader to [432] and [172], respectively. Most of the results specific to Einstein’s field equations in Sections 4 and 6 apply to four-dimensional gravity only, though it should be possible to generalize some of them to higher-dimensional theories. Also, as we have already mentioned, the results described here mostly apply to the vacuum field equations, in which case the solutions are expected to be smooth. For aspects involving the presence of shocks, such as those present in relativistic hydrodynamics, we refer the reader to [165, 295]. See [352] for a more detailed review on hyperbolic formulations of Einstein’s equations, and [351] for one on global existence theorems in general relativity. Spectral methods in numerical relativity are discussed in detail in [215]. The 3+1 approach to general relativity is thoroughly reviewed in [214]. Finally, we refer the reader to [126] for a recent book on general relativity and the Einstein equations, which, among many other topics, discusses local and global aspects of the Cauchy problem, the constraint equations, and self-gravitating matter fields such as relativistic fluids and the relativistic kinetic theory of gases.
Except for a few historical remarks, this review does not discuss much of the historical path to the techniques and tools presented, but rather describes the state of the art of a subset of those which appear to be useful. Our choice of topics is mostly influenced by those for which some analysis is available or possible.
We have tried to make each section as self-contained as possible within the scope of a manageable review, so that they can be read separately, though each of them builds on the previous ones. Numerous examples are included.
Notation and Conventions
Throughout this article, we use the following notation and conventions. For a complex vector u ∈ ℂ^{m}, we denote by u* its transposed, complex conjugate, such that u · v := u*v is the standard scalar product for two vectors u, v ∈ ℂ^{m}. The corresponding norm is defined by \(\vert u\vert := \sqrt {{u^\ast}u}\). The norm of a complex, m × k matrix A is
$$\vert A\vert := \max\limits_{u \in {\mathbb C}^k,\, u \neq 0} \frac{\vert Au\vert}{\vert u\vert}.$$
The transposed, complex conjugate of A is denoted by A*, such that v · (Au) = (A*v) · u for all u ∈ ℂ^{k} and v ∈ ℂ^{m}. For two Hermitian m × m matrices A = A* and B = B*, the inequality A ≤ B means u · Au ≤ u · Bu for all u ∈ ℂ^{m}. The identity matrix is denoted by I.
The spectrum of a complex, m × m matrix A is the set of all eigenvalues of A,
$$\sigma(A) := \left\{ \lambda \in {\mathbb C} : \det(A - \lambda I) = 0 \right\},$$
which is real for Hermitian matrices. The spectral radius of A is defined as
$$\rho(A) := \max\left\{ \vert\lambda\vert : \lambda \in \sigma(A) \right\};$$
then, the matrix norm \(\vert B\vert\) of a complex m × k matrix B can also be computed as \(\vert B\vert = \sqrt {\rho ({B^\ast}B)}\).
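As a quick sanity check, the identity between this spectral-radius formula and the operator norm can be verified numerically; the following sketch (with an arbitrarily chosen matrix, assuming NumPy is available) is for illustration only.

```python
import numpy as np

def matrix_norm_via_spectral_radius(B):
    """Compute |B| = sqrt(rho(B* B)), rho being the spectral radius."""
    eigenvalues = np.linalg.eigvals(B.conj().T @ B)
    return np.sqrt(np.max(np.abs(eigenvalues)))

# Hypothetical test matrix, chosen only for illustration
B = np.array([[1.0, 2.0], [0.0, 1.0]])
norm_spectral = matrix_norm_via_spectral_radius(B)
norm_direct = np.linalg.norm(B, 2)  # largest singular value of B
assert abs(norm_spectral - norm_direct) < 1e-12
```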
Next, we denote by L^{2}(U) the class of measurable functions ƒ: U ⊂ ℝ^{n} → ℂ^{m} on the open subset U of ℝ^{n}, which are square-integrable. Two functions ƒ, g ∈ L^{2}(U), which differ from each other only on a set of measure zero, are identified. The scalar product on L^{2}(U) is defined as
$$\langle f, g\rangle := \int\nolimits_U f(x)^\ast g(x)\, d^n x, \qquad f, g \in L^2(U),$$
and the corresponding norm is \(\Vert f\Vert := \sqrt {\langle f,f\rangle}\). According to the Cauchy-Schwarz inequality we have
$$\vert \langle f, g\rangle\vert \leq \Vert f\Vert\, \Vert g\Vert, \qquad f, g \in L^2(U).$$
The Fourier transform of a function ƒ, belonging to the class \(C_0^\infty ({{\rm{\mathbb R}}^n})\) of infinitely-differentiable functions with compact support, is defined as
$$\hat f(k) := \frac{1}{(2\pi)^{n/2}} \int e^{-ik\cdot x}\, f(x)\, d^n x, \qquad k \in {\mathbb R}^n.$$
According to Parseval’s identities, \(\langle \hat f,\hat g\rangle = \langle f,g\rangle\) for all \(f,g \in C_0^\infty ({{\rm{\mathbb R}}^n})\), and the map \(C_0^\infty ({{\rm{{\mathbb R}}}^n}) \rightarrow {L^2}({{\rm{{\mathbb R}}}^n}),f \mapsto \hat f\) can be extended to a linear, unitary map Ƒ : L^{2}(ℝ^{n}) → L^{2}(ℝ^{n}) called the Fourier-Plancherel operator; see, for example, [346]. Its inverse is given by \({{\mathcal F}^{-1}}(f)(x) = \hat f(-x)\) for ƒ ∈ L^{2}(ℝ^{n}) and x ∈ ℝ^{n}.
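The discrete analogue of Parseval’s identity can be illustrated with NumPy’s FFT; this is only a finite-dimensional sketch of the unitarity of the Fourier-Plancherel operator, with the `norm="ortho"` option mimicking the symmetric normalization used above and random data chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64) + 1j * rng.standard_normal(64)

# Unitary (symmetric) normalization of the discrete Fourier transform
f_hat = np.fft.fft(f, norm="ortho")

# Discrete Parseval identity: <f_hat, f_hat> = <f, f>
assert np.isclose(np.vdot(f_hat, f_hat).real, np.vdot(f, f).real)
```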
For a differentiable function u, we denote by u_{ t }, u_{ x }, u_{ y }, u_{ z } its partial derivatives with respect to t, x, y, z.
Indices labeling gridpoints and numbers of basis functions range from 0 to N. Superscripts and subscripts are used to denote the numerical solution at some discrete timestep and gridpoint, as in \(u_j^n \approx u(t^n, x_j)\).
We use boldface fonts for gridfunctions, as in \({\bf u} = (u_0, u_1, \ldots, u_N)\).
The Initial-Value Problem
We start here with a discussion of hyperbolic evolution problems on the infinite domain ℝ^{n}. This is usually the situation one encounters in the mathematical description of isolated systems, where some strong-field phenomena take place “near the origin” and generate waves, which are emitted toward “infinity”. Therefore, the goal of this section is to analyze the well-posedness of the Cauchy problem for quasilinear hyperbolic evolution equations without boundaries. The case with boundaries is the subject of Section 5. As mentioned in the introduction (Section 1), the well-posedness results are fundamental in the sense that they give existence (at least local in time if the problem is nonlinear) and uniqueness of solutions, and show that these depend continuously on the initial data. Of course, how the solution actually behaves in detail needs to be established by more sophisticated mathematical tools or by numerical experiments, but it is clear that it does not make sense to speak about “the solution” if the problem is not well posed.
Our presentation starts with the simplest case of linear, constant-coefficient problems in Section 3.1, where solutions can be constructed explicitly using the Fourier transform. Then, we consider in Section 3.2 linear problems with variable coefficients, which we reduce to the constant-coefficient case using the localization principle. Next, in Section 3.3, we treat first-order quasilinear equations, which we reduce to the previous case by the principle of linearization. Finally, in Section 3.4 we summarize some basic results about abstract evolution operators, which give the general framework for treating evolution problems including not only those described by local partial differential operators, but also more general ones.
Much of the material from the first three subsections is taken from the book by Kreiss and Lorenz [259]. However, our summary also includes recent results concerning second-order equations, examples of wave systems on curved spacetimes, and a very brief review of semigroup theory.
Linear, constant-coefficient problems
We consider an evolution equation on n-dimensional space of the following form:
$$u_t = P(\partial/\partial x)\, u = \sum\limits_{\vert\nu\vert \leq p} A_\nu D_\nu u.$$(3.1)
Here, u = u(t, x) ∈ ℂ^{m} is the state vector, and u_{t} its partial derivative with respect to t. Next, the A_{ν}’s denote complex, m × m matrices, where ν = (ν_{1}, ν_{2}, …, ν_{n}) denotes a multi-index with components ν_{j} ∈ {0, 1, 2, 3, …} and \(\vert\nu\vert := \nu_1 + \ldots + \nu_n\), and p is the order of the differential operator P(∂/∂x). Finally, D_{ν} denotes the partial derivative operator
$$D_\nu := \frac{\partial^{\vert\nu\vert}}{\partial x_1^{\nu_1} \cdots \partial x_n^{\nu_n}}$$(3.2)
of order |ν|, where D_{0} := I. Here are a few representative examples:
Example 1. The advection equation u_{ t }(t, x) = λu_{ x }(t, x) with speed λ ∈ ℝ in the negative x direction.
Example 2. The heat equation u_{t}(t, x) = Δu(t, x), where
$$\Delta := \sum\limits_{j = 1}^n \frac{\partial^2}{\partial x_j^2}$$
denotes the Laplace operator.
Example 3. The Schrödinger equation u_{ t }(t, x) = iΔu(t, x).
Example 4. The wave equation U_{tt} = ΔU, which can be cast into the form of Eq. (3.1) by introducing the variables u = (U, U_{t}):
$$u_t = \left({\begin{array}{*{20}c} 0 & 1 \\ \Delta & 0 \\ \end{array}} \right) u.$$
We can find solutions of Eq. (3.1) by Fourier transformation in space,
$$\hat u(t, k) = \frac{1}{(2\pi)^{n/2}} \int e^{-ik\cdot x}\, u(t, x)\, d^n x, \qquad k \in {\mathbb R}^n.$$(3.3)
Applied to Eq. (3.1) this yields the system of linear ordinary differential equations
$$\hat u_t(t, k) = P(ik)\, \hat u(t, k)$$(3.4)
for each wave vector k ∈ ℝ^{n}, where P(ik), called the symbol of the differential operator P(∂/∂x), is defined as
$$P(ik) := \sum\limits_{\vert\nu\vert \leq p} A_\nu\, (ik_1)^{\nu_1} \cdots (ik_n)^{\nu_n}.$$(3.5)
The solution of Eq. (3.4) is given by
$$\hat u(t, k) = e^{P(ik)t}\, \hat u(0, k),$$(3.6)
where û(0, k) is determined by the initial data for u at t = 0. Therefore, the formal solution of the Cauchy problem
$$u_t = P(\partial/\partial x)u, \qquad x \in {\mathbb R}^n, \quad t \geq 0,$$(3.7)
$$u(0, x) = f(x), \qquad x \in {\mathbb R}^n,$$(3.8)
with given initial data ƒ for u at t = 0 is
$$u(t, x) = \frac{1}{(2\pi)^{n/2}} \int e^{ik\cdot x}\, e^{P(ik)t}\, \hat f(k)\, d^n k,$$(3.9)
where \(\hat f(k) = {1 \over {{{(2\pi)}^{n/2}}}}\int {{e^{-ik \cdot x}}f(x){d^n}x}\) denotes the Fourier transform of the initial data.
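The solution formula (3.9) can be turned into a numerical scheme on a periodic grid, where the integral becomes a discrete Fourier transform. The following sketch evolves the advection equation u_{t} = λu_{x} of Example 1, whose symbol is P(ik) = iλk, and compares with the exactly translated profile; the initial data and grid size are hypothetical choices made only for illustration.

```python
import numpy as np

# Periodic grid on [0, 2*pi) and integer wave numbers
N = 128
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)

lam, t = 1.0, 0.5
f = np.exp(np.sin(x))  # smooth periodic initial data (hypothetical choice)

# u_hat(t, k) = e^{P(ik) t} f_hat(k), with symbol P(ik) = i*lam*k
u_hat = np.exp(1j * lam * k * t) * np.fft.fft(f)
u = np.fft.ifft(u_hat).real

# Exact solution: advection with speed lam in the negative x direction
exact = np.exp(np.sin(x + lam * t))
assert np.max(np.abs(u - exact)) < 1e-10
```

For smooth periodic data the agreement is at the level of rounding error, reflecting the spectral accuracy of the Fourier representation.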
Well-posedness
At this point we have to ask ourselves if expression (3.9) makes sense. In fact, we do not expect the integral to converge in general. Even if \({\hat f}\) is smooth and decays rapidly to zero as |k| → ∞, we could still have problems if \(\vert e^{P(ik)t}\vert\) diverges as |k| → ∞. One simple, but very restrictive, possibility to control this problem is to limit ourselves to initial data ƒ in the class \({{\mathcal S}^\omega}\) of functions, which are the Fourier transform of a C^{∞}-function with compact support, i.e., \(f \in {{\mathcal S}^\omega}\), where
$${{\mathcal S}^\omega} := \left\{ \hat g : g \in C_0^\infty({{\mathbb R}^n}) \right\}.$$
A function in this space is real analytic and decays faster than any polynomial as |x| → ∞.^{Footnote 1} If \(f \in {{\mathcal S}^\omega}\), the integral in Eq. (3.9) is well-defined and we obtain a solution of the Cauchy problem (3.7, 3.8), which, for each t ≥ 0, lies in this space. However, this possibility suffers from several unwanted features:

- The space of admissible initial data is very restrictive. Indeed, since \(f \in {{\mathcal S}^\omega}\) is necessarily analytic, it is not possible to consider nontrivial data with, say, compact support, and study the propagation of the support for such data.

- For fixed t > 0, the solution may grow without bound when perturbations with arbitrarily small amplitude but higher and higher frequency components are considered. Such an effect is illustrated in Example 6 below.

- The function space \({{\mathcal S}^\omega}\) does not seem to be useful as a solution space when considering linear variable-coefficient or quasilinear problems since, for such problems, the different k modes do not decouple from each other. Hence, mode coupling can lead to components with arbitrarily high frequencies.^{Footnote 2}
For these reasons, it is desirable to consider initial data of a more general class than \({{\mathcal S}^\omega}\). For this, we need to control the growth of \(\vert e^{P(ik)t}\vert\). This is captured in the following
Definition 1. The Cauchy problem (3.7, 3.8) is called well posed if there are constants K ≥ 1 and α ∈ ℝ such that
$$\vert e^{P(ik)t}\vert \leq K e^{\alpha t}$$
for all k ∈ ℝ^{n} and t ≥ 0.
The importance of this definition relies on the property that, for each fixed time t > 0, the norm \(\vert e^{P(ik)t}\vert\) of the propagator is bounded by the constant C(t) := Ke^{αt}, which is independent of the wave vector k. The definition does not state anything about the growth of the solution with time, other than that this growth is bounded by an exponential. In this sense, unless one can choose α ≤ 0 or α > 0 arbitrarily small, well-posedness is not a statement about stability in time, but rather about stability with respect to mode fluctuations.
Let us illustrate the meaning of Definition 1 with a few examples:
Example 5. The heat equation u_{ t }(t, x) = Δu(t, x).
Fourier transformation converts this equation into \(\hat u_t(t, k) = -\vert k\vert^2 \hat u(t, k)\). Hence, the symbol is \(P(ik) = -\vert k\vert^2\) and \(\vert {e^{P(ik)t}}\vert = {e^{-\vert k{\vert ^2}t}} \leq 1\). The problem is well posed.
Example 6. The backwards heat equation u_{ t }(t, x) = −Δu(t, x).
In this case the symbol is \(P(ik) = + \vert k{\vert ^2}\) and \(\vert {e^{P(ik)t}}\vert = {e^{\vert k{\vert ^2}t}}\). In contrast to the previous case, \(\vert e^{P(ik)t}\vert\) exhibits exponential frequency-dependent growth for each fixed t > 0, and the problem is not well posed. Notice that small initial perturbations with large |k| are amplified by a factor that becomes larger and larger as |k| increases. Therefore, after an arbitrarily small time, the solution is contaminated by high-frequency modes.
Example 7. The Schrödinger equation u_{ t }(t, x) = iΔu(t, x).
In this case we have \(P(ik) = -i\vert k\vert^2\) and \(\vert e^{P(ik)t}\vert = 1\). The problem is well posed. Furthermore, the evolution is unitary, and we can evolve forward and backwards in time. When compared to the previous example, it is the factor i in front of the Laplace operator that saves the situation and allows the evolution backwards in time.
Example 8. The one-dimensional wave equation written in first-order form,
$$u_t = \left({\begin{array}{*{20}c} 0 & 1 \\ 1 & 0 \\ \end{array}} \right) u_x =: A u_x.$$
The symbol is P(ik) = ikA. Since the matrix A is symmetric and has eigenvalues ±1, there exists an orthogonal transformation U such that
$$U^{-1} e^{ikAt} U = \left({\begin{array}{*{20}c} e^{ikt} & 0 \\ 0 & e^{-ikt} \\ \end{array}} \right).$$
Therefore, \(\vert e^{P(ik)t}\vert = 1\), and the problem is well posed.
Example 9. Perturb the previous problem by a lower-order term,
$$u_t = \left({\begin{array}{*{20}c} 0 & 1 \\ 1 & 0 \\ \end{array}} \right) u_x + \lambda u =: A u_x + \lambda u, \qquad \lambda \in {\mathbb R}.$$
The symbol is P(ik) = ikA + λI, and \(\vert e^{P(ik)t}\vert = e^{\lambda t}\). The problem is well posed, even though the solution grows exponentially in time if λ > 0.
More generally one can show (see Theorem 2.1.2 in [259]):
Lemma 1. The Cauchy problem for the first-order equation u_{t} = Au_{x} + Bu with complex m × m matrices A and B is well posed if and only if A is diagonalizable and has only real eigenvalues.
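The condition of Lemma 1 can be tested numerically; the sketch below uses the conditioning of the eigenvector matrix as a numerical (not rigorous) proxy for diagonalizability, with two hypothetical coefficient matrices chosen for illustration.

```python
import numpy as np

def first_order_well_posed(A, tol=1e-12):
    """Test the condition of Lemma 1 for u_t = A u_x + B u:
    A must be diagonalizable with only real eigenvalues."""
    eigenvalues, S = np.linalg.eig(A)
    if np.max(np.abs(eigenvalues.imag)) > tol:
        return False
    # Numerical proxy for diagonalizability: well-conditioned eigenvector matrix
    return np.linalg.cond(S) < 1.0 / tol

A_wave = np.array([[0.0, 1.0], [1.0, 0.0]])    # 1D wave system: well posed
A_jordan = np.array([[1.0, 1.0], [0.0, 1.0]])  # Jordan block: not diagonalizable
assert first_order_well_posed(A_wave)
assert not first_order_well_posed(A_jordan)
```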
By considering the eigenvalues of the symbol P(ik), we obtain the following simple necessary condition for well-posedness:
Lemma 2 (Petrovskii condition). Suppose the Cauchy problem (3.7, 3.8) is well posed. Then, there is a constant α ∈ ℝ such that
$${\rm Re}(\lambda) \leq \alpha$$
for all eigenvalues λ of P(ik), k ∈ ℝ^{n}.
Proof. Suppose λ is an eigenvalue of P(ik) with corresponding eigenvector v, P(ik)v = λv. Then, if the problem is well posed,
$$e^{{\rm Re}(\lambda)t}\vert v\vert = \vert e^{\lambda t} v\vert = \vert e^{P(ik)t} v\vert \leq K e^{\alpha t}\vert v\vert$$
for all t ≥ 0, which implies that \(e^{{\rm Re}(\lambda)t} \leq Ke^{\alpha t}\) for all t ≥ 0, and hence Re(λ) ≤ α. □
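The Petrovskii condition lends itself to a simple numerical check: sample the wave vector and monitor the largest real part of the eigenvalues of P(ik). The sketch below applies this to the heat and backwards heat equations of Examples 5 and 6; the sampling range is an arbitrary choice.

```python
import numpy as np

def max_real_eigenvalue(symbol, k_values):
    """sup over the sampled k of Re(lambda), lambda an eigenvalue of P(ik)."""
    return max(np.max(np.linalg.eigvals(symbol(k)).real) for k in k_values)

k_values = np.linspace(-50.0, 50.0, 201)

heat = lambda k: np.array([[-k**2]])           # symbol of u_t = Laplacian(u)
backwards_heat = lambda k: np.array([[k**2]])  # symbol of u_t = -Laplacian(u)

# Heat equation: Re(lambda) <= 0, consistent with the Petrovskii condition
assert max_real_eigenvalue(heat, k_values) <= 0.0
# Backwards heat equation: Re(lambda) is unbounded, so the condition fails
assert max_real_eigenvalue(backwards_heat, k_values) >= 2500.0
```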
Although the Petrovskii condition is a very simple necessary condition, we stress that it is not sufficient in general. Counterexamples are first-order systems, which are weakly, but not strongly, hyperbolic; see Example 10 below.
Extension of solutions
Now that we have defined and illustrated the notion of well-posedness, let us see how it can be used to solve the Cauchy problem (3.7, 3.8) for initial data more general than in \({{\mathcal S}^\omega}\). Suppose first that \(f \in {{\mathcal S}^\omega}\), as before. Then, if the problem is well posed, Parseval’s identities imply that the solution (3.9) must satisfy
$$\Vert u(t,\cdot)\Vert = \Vert \hat u(t,\cdot)\Vert = \Vert e^{P(ik)t}\hat f\Vert \leq Ke^{\alpha t}\Vert \hat f\Vert = Ke^{\alpha t}\Vert f\Vert.$$
Therefore, the \({{\mathcal S}^\omega}\)-solution satisfies the following estimate
$$\Vert u(t,\cdot)\Vert \leq Ke^{\alpha t}\Vert f\Vert, \qquad t \geq 0,$$(3.18)
for all \(f \in {{\mathcal S}^\omega}\). This estimate is important because it allows us to extend the solution to the much larger space L^{2}(ℝ^{n}). This extension is defined in the following way: let ƒ ∈ L^{2}(ℝ^{n}). Since \({{\mathcal S}^\omega}\) is dense in L^{2}(ℝ^{n}), there exists a sequence {ƒ_{j}} in \({{\mathcal S}^\omega}\) such that ‖ƒ_{j} − ƒ‖ → 0. Therefore, if the problem is well posed, it follows from the estimate (3.18) that the corresponding solutions u_{j} defined by Eq. (3.9) form a Cauchy sequence in L^{2}(ℝ^{n}), and we can define
$$U(t)f := \lim\limits_{j\to\infty} u_j(t,\cdot),$$(3.19)
where the limit exists in the L^{2}(ℝ^{n}) sense. The linear map U(t) : L^{2}(ℝ^{n}) → L^{2}(ℝ^{n}) satisfies the following properties:

(i) U(0) = I is the identity map.

(ii) U(t + s) = U(t)U(s) for all t, s ≥ 0.

(iii) For \(f \in {{\mathcal S}^\omega}\), u(t, ·) = U(t)ƒ is the unique solution to the Cauchy problem (3.7, 3.8).

(iv) ‖U(t)ƒ‖ ≤ Ke^{αt}‖ƒ‖ for all ƒ ∈ L^{2}(ℝ^{n}) and all t ≥ 0.
The family {U(t) : t ≥ 0} is called a semigroup on L^{2}(ℝ^{n}). In general, U(t) cannot be extended to negative t as the example of the backwards heat equation, Example 6, shows.
For ƒ ∈ L^{2}(ℝ^{n}) the function u(t, x) := U(t)ƒ(x) is called a weak solution of the Cauchy problem (3.7, 3.8). It can also be constructed in an abstract way by using the Fourier-Plancherel operator Ƒ : L^{2}(ℝ^{n}) → L^{2}(ℝ^{n}). If the problem is well posed, then for each ƒ ∈ L^{2}(ℝ^{n}) and t ≥ 0 the map k ↦ e^{P(ik)t}Ƒ(ƒ)(k) defines an L^{2}(ℝ^{n})-function, and, hence, we can define
$$U(t)f := {\mathcal F}^{-1}\left( e^{P(i\,\cdot)t}\, {\mathcal F}(f) \right).$$
According to Duhamel’s principle, the semigroup U(t) can also be used to construct weak solutions of the inhomogeneous problem
$$u_t = P(\partial/\partial x)u + F(t,x), \qquad u(0,x) = f(x),$$
where F : [0, ∞) → L^{2}(ℝ^{n}), t ↦ F(t, ·), is continuous:
$$u(t,\cdot) = U(t)f + \int\nolimits_0^t U(t-s)F(s,\cdot)\, ds.$$(3.23)
For a discussion of semigroups in a more general context, see Section 3.4.
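Duhamel’s formula can be checked for a single Fourier mode, where U(t) reduces to multiplication by e^{P(ik)t}. The sketch below takes the mode |k| = 1 of the inhomogeneous heat equation with a hypothetical forcing F̂(t) = sin t, chosen only for illustration, and compares the formula against a direct ODE integration.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# One Fourier mode of the inhomogeneous heat equation:
# u_hat_t = -|k|^2 u_hat + F_hat(t), with |k| = 1, initial datum f0,
# and a hypothetical forcing F_hat(t) = sin(t).
P, f0, F = -1.0, 1.0, np.sin

t = 2.0
integral, _ = quad(lambda s: np.exp(P * (t - s)) * F(s), 0.0, t)
u_duhamel = np.exp(P * t) * f0 + integral  # Duhamel's formula, mode-wise

# Reference: direct numerical integration of the same ODE
sol = solve_ivp(lambda s, u: P * u + F(s), (0.0, t), [f0],
                rtol=1e-10, atol=1e-12)
assert abs(u_duhamel - sol.y[0, -1]) < 1e-6
```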
Algebraic characterization
In order to extend the solution concept to initial data more general than analytic, we have introduced the concept of well-posedness in Definition 1. However, given a symbol P(ik), it is not always a simple task to determine whether or not constants K ≥ 1 and α ∈ ℝ exist such that \(\vert e^{P(ik)t}\vert \leq Ke^{\alpha t}\) for all t ≥ 0 and k ∈ ℝ^{n}. Fortunately, the matrix theorem by Kreiss [257] provides necessary and sufficient conditions on the symbol P(ik) for well-posedness.
Theorem 1. Let P(ik), k ∈ ℝ^{n}, be the symbol of a constant-coefficient linear problem, see Eq. (3.5), and let α ∈ ℝ. Then, the following conditions are equivalent:

(i) There exists a constant K ≥ 0 such that
$$\vert{e^{P(ik)t}}\vert \leq K{e^{\alpha t}}$$(3.24)
for all t ≥ 0 and k ∈ ℝ^{n}.

(ii) There exist a constant M > 0 and a family H(k) of m × m Hermitian matrices such that
$${M^{-1}}I \leq H(k) \leq MI,\quad H(k)P(ik) + P{(ik)^{\ast}}H(k) \leq 2\alpha H(k)$$(3.25)
for all k ∈ ℝ^{n}.
A generalization and complete proof of this theorem can be found in [259]. However, let us show here the implication (ii) ⇒ (i), since it illustrates the concept of energy estimates, which will be used quite often throughout this review (see Section 3.2.3 below for a more general discussion of these estimates). Hence, let H(k) be a family of m × m Hermitian matrices satisfying the condition (3.25). Let k ∈ ℝ^{n} and v_{0} ∈ ℂ^{m} be fixed, and define v(t) := e^{P(ik)t}v_{0} for t ≥ 0. Then we have the following estimate for the “energy” density v(t)*H(k)v(t),
$$\frac{d}{dt}\left( v^\ast H(k) v \right) = v^\ast\left( H(k)P(ik) + P(ik)^\ast H(k)\right) v,$$
which, together with the condition (3.25), implies the differential inequality
$$\frac{d}{dt}\left( v^\ast H(k) v \right) \leq 2\alpha\, v^\ast H(k) v.$$
Integrating, we find
$$v(t)^\ast H(k) v(t) \leq e^{2\alpha t}\, v_0^\ast H(k) v_0,$$
and hence, using \(M^{-1}I \leq H(k) \leq MI\), \(\vert v(t)\vert^2 \leq M^2 e^{2\alpha t}\vert v_0\vert^2\),
which implies the inequality (3.24) with K = M.
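The implication (ii) ⇒ (i) just proven can be illustrated numerically: whenever condition (3.25) holds, the propagator norm stays below Me^{αt}. The sketch below does this for the symbol of Example 9, where H = I and α = λ, sampling a few arbitrarily chosen wave numbers.

```python
import numpy as np
from scipy.linalg import expm

def satisfies_condition_325(P, H, alpha, tol=1e-12):
    """Check H P + P* H <= 2 alpha H in the sense of Hermitian forms."""
    Q = H @ P + P.conj().T @ H - 2.0 * alpha * H
    return np.max(np.linalg.eigvalsh(Q)) <= tol

# Example 9: P(ik) = ik A + lambda I, with A = [[0,1],[1,0]], H = I, alpha = lambda
A = np.array([[0.0, 1.0], [1.0, 0.0]])
lam, t = 0.3, 2.0
bounds_hold = all(
    satisfies_condition_325(1j * k * A + lam * np.eye(2), np.eye(2), lam)
    and np.linalg.norm(expm((1j * k * A + lam * np.eye(2)) * t), 2)
        <= np.exp(lam * t) + 1e-10  # |e^{P(ik)t}| <= M e^{alpha t}, here M = 1
    for k in (-5.0, 0.7, 12.0)
)
assert bounds_hold
```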
First-order systems
Many systems in physics, like Maxwell’s equations, the Dirac equation, and certain formulations of Einstein’s equations, are described by first-order partial differential equations (PDEs). In fact, even systems given by a higher-order PDE can be reduced to first order at the cost of introducing new variables, and possibly also new constraints. Therefore, let us specialize the above results to a first-order linear problem of the form
$$u_t = \sum\limits_{j = 1}^n A^j \frac{\partial u}{\partial x_j} + Bu,$$(3.28)
where A^{1}, …, A^{n}, B are complex m × m matrices. We split P(ik) = P_{0}(ik) + B into its principal symbol, \({P_0}(ik) = i\sum\limits_{j = 1}^n {{k_j}{A^j}}\), and the lower-order term B. The principal part is the one that dominates for large |k| and, hence, the one that turns out to be important for well-posedness. Notice that P_{0}(ik) depends linearly on k. With these observations in mind we note:

- A necessary condition for the problem to be well posed is that for each k ∈ ℝ^{n} with |k| = 1 the symbol P_{0}(ik) is diagonalizable and has only purely imaginary eigenvalues. To see this, we require the inequality
$$\vert {e^{\vert k\vert {P_0}(ik\prime)t + Bt}}\vert \leq K{e^{\alpha t}},\quad k\prime: = {k \over {\vert k\vert}},$$(3.29)
for all t ≥ 0 and k ∈ ℝ^{n}, k ≠ 0, replace t by t/|k|, and take the limit |k| → ∞, which yields \(\vert{e^{{P_0}(i{k\prime})t}}\vert\, \leq K\) for all k′ ∈ ℝ^{n} with |k′| = 1. Therefore, there must exist for each such k′ a complex m × m matrix S(k′) such that S(k′)^{−1}P_{0}(ik′)S(k′) = iΛ(k′), where Λ(k′) is a diagonal real matrix (cf. Lemma 1).

- In this case the family of Hermitian m × m matrices H(k′) := (S(k′)^{−1})*S(k′)^{−1} satisfies
$$H(k\prime){P_0}(ik\prime) + {P_0}{(ik\prime)^{\ast}}H(k\prime) = 0$$(3.30)
for all k′ ∈ ℝ^{n} with |k′| = 1.

- However, in order to obtain the energy estimate, one also needs the condition M^{−1}I ≤ H(k′) ≤ MI; that is, H(k′) must be uniformly bounded and positive. This follows automatically if H(k′) depends continuously on k′, since k′ varies over the (n − 1)-dimensional unit sphere, which is compact.^{Footnote 3} In turn, it follows that H(k′) depends continuously on k′ if S(k′) does. However, although this may hold in many situations, continuous dependence of S(k′) on k′ cannot always be established; see Example 12 for a counterexample.
These observations motivate the following three notions of hyperbolicity, each of them being a stronger condition than the previous one:
Definition 2. The first-order system (3.28) is called

(i) weakly hyperbolic, if all the eigenvalues of its principal symbol P_{0}(ik) are purely imaginary;

(ii) strongly hyperbolic, if there exist a constant M > 0 and a family of Hermitian m × m matrices H(k), k ∈ S^{n−1}, satisfying
$${M^{-1}}I \leq H(k) \leq MI,\quad H(k){P_0}(ik) + {P_0}{(ik)^{\ast}}H(k) = 0,$$(3.31)
for all k ∈ S^{n−1}, where S^{n−1} := {k ∈ ℝ^{n} : |k| = 1} denotes the unit sphere;

(iii) symmetric hyperbolic, if there exists a Hermitian, positive definite m × m matrix H (which is independent of k) such that
$$H{P_0}(ik) + {P_0}{(ik)^{\ast}}H = 0,$$(3.32)
for all k ∈ S^{n−1}.
The matrix theorem implies the following statements:
- Strongly and symmetric hyperbolic systems give rise to a well-posed Cauchy problem. According to Theorem 1, their principal symbol satisfies
$$\vert {e^{{P_0}(ik)t}}\vert \leq K,\quad k \in {{\mathbb R}^n},\quad t \in {\mathbb R},$$(3.33)
and this property is stable with respect to lower-order perturbations,
$$\vert {e^{P(ik)t}}\vert = \vert {e^{{P_0}(ik)t + Bt}}\vert \leq K{e^{K\vert B\vert t}},\quad k \in {{\mathbb R}^n},\quad t \in {\mathbb R}.$$(3.34)
The last inequality can be proven by applying Duhamel’s formula (3.23) to the function \(\hat u(t): = {e^{P(ik)t}}\hat f\), which satisfies û_{t}(t) = P_{0}(ik)û(t) + F(t) with F(t) = Bû(t). The solution formula (3.23) then gives \(\vert \hat u(t)\vert \, \leq K(\vert \hat f\vert + \vert B\vert \int\nolimits_0^t {\vert \hat u(s)\vert ds)}\), which yields \(\vert \hat u(t)\vert \, \leq K{e^{K\vert B\vert t}}\vert \hat f\vert\) upon applying Gronwall’s lemma.

- As we have anticipated above, a necessary condition for well-posedness is the existence of a complex m × m matrix S(k) for each k ∈ S^{n−1} on the unit sphere, which brings the principal symbol P_{0}(ik) into diagonal, purely imaginary form. If, furthermore, S(k) can be chosen such that S(k) and S(k)^{−1} are uniformly bounded for all k ∈ S^{n−1}, then H(k) := (S(k)^{−1})*S(k)^{−1} satisfies the conditions (3.31) for strong hyperbolicity. If the system is well posed, Theorem 2.4.1 in [259] shows that it is always possible to construct a symmetrizer H(k) satisfying the conditions (3.31) in this manner, and hence, strong hyperbolicity is also a necessary condition for well-posedness. The symmetrizer construction H(k) := (S(k)^{−1})*S(k)^{−1} is useful for applications, since S(k) is easily constructed from the eigenvectors and S(k)^{−1} from the eigenfields of the principal symbol; see Example 15.

- Weakly hyperbolic systems are not well posed in general, because \(\vert {e^{{P_0}(ik)t}}\vert\) might exhibit polynomial growth in |k|t. Although one might consider such polynomial growth as acceptable, such systems are unstable with respect to lower-order perturbations. As the next example shows, it is possible that \(\vert e^{P(ik)t}\vert\) grows exponentially in |k| if the system is weakly hyperbolic.
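The symmetrizer construction H(k) := (S(k)^{−1})*S(k)^{−1} from the eigenvectors of the principal symbol can be sketched numerically; the matrix A below defines a hypothetical strongly (though not symmetric) hyperbolic 1D system, chosen only for illustration.

```python
import numpy as np

# Hypothetical strongly hyperbolic 1D system u_t = A u_x (illustration only)
A = np.array([[0.0, 2.0], [1.0, 0.0]])

eigenvalues, S = np.linalg.eig(A)  # columns of S: eigenvectors of the symbol
assert np.max(np.abs(eigenvalues.imag)) < 1e-12  # real characteristic speeds

S_inv = np.linalg.inv(S)
H = S_inv.conj().T @ S_inv  # symmetrizer H = (S^{-1})* S^{-1}

# H is Hermitian, positive definite, and H P_0 + P_0* H = 0 for P_0(ik) = ikA,
# which for real A amounts to H A being symmetric:
assert np.allclose(H, H.conj().T)
assert np.min(np.linalg.eigvalsh(H)) > 0.0
assert np.allclose(H @ A, (H @ A).conj().T)
```

Note that the symmetrizer obtained this way depends on the normalization of the eigenvectors, but the conditions (3.31) hold for any such choice, since the characteristic speeds are real.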
Example 10. Consider the weakly hyperbolic system [259]
$$u_t = \left({\begin{array}{*{20}c} 1 & 1 \\ 0 & 1 \\ \end{array}} \right) u_x + a\left({\begin{array}{*{20}c} 0 & 0 \\ 1 & 0 \\ \end{array}} \right) u,$$(3.35)
with a ∈ ℝ a parameter. The principal symbol is \({P_0}(ik) = ik\left({\begin{array}{*{20}c} 1 & 1 \\ 0 & 1 \\ \end{array}} \right)\) and
$$e^{P_0(ik)t} = e^{ikt}\left({\begin{array}{*{20}c} 1 & ikt \\ 0 & 1 \\ \end{array}} \right).$$
Using the tools described in Section 2 we find for the norm
$$\vert e^{P_0(ik)t}\vert = \sqrt{1 + \frac{(kt)^2}{2} + \vert kt\vert\sqrt{1 + \frac{(kt)^2}{4}}},$$
which is approximately equal to |k|t for large |k|t. Hence, the solutions to Eq. (3.35) contain modes which grow linearly in |k|t when a = 0, i.e., when there are no lower-order terms.
However, when a ≠ 0, the eigenvalues of P(ik) are
$$\lambda_\pm = ik \pm \sqrt{ika},$$
which, for large |k|, have real part \({\rm{Re}}({\lambda _ \pm}) \approx \pm \sqrt {\vert a\vert \vert k\vert/2}\). The eigenvalue with positive real part gives rise to solutions which, for fixed t > 0, grow exponentially in |k|.
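The contrast between the polynomial growth at a = 0 and the exponential frequency-dependent growth at a ≠ 0 is easy to reproduce numerically; the sketch below assumes, following [259], that the full symbol is P(ik) = ik [[1,1],[0,1]] + a [[0,0],[1,0]], and uses SciPy's matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

def propagator_norm(k, a, t=1.0):
    """|e^{P(ik)t}| for P(ik) = ik [[1,1],[0,1]] + a [[0,0],[1,0]]."""
    upper = np.array([[1.0, 1.0], [0.0, 1.0]])
    lower = np.array([[0.0, 0.0], [1.0, 0.0]])
    P = 1j * k * upper + a * lower
    return np.linalg.norm(expm(P * t), 2)

# a = 0: only linear growth in |k| t
assert propagator_norm(1000.0, a=0.0) < 1100.0
# a = 1: growth ~ e^{sqrt(|a||k|/2) t}, already enormous at |k| = 1000
assert propagator_norm(1000.0, a=1.0) > np.exp(0.9 * np.sqrt(500.0))
```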
Example 11. For the system [353],

$$u_t = \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix} u_x + \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} u_y,$$

the principal symbol, \({P_0}(ik) = i\left({\begin{array}{*{20}c} {{k_1} + {k_2}} & {{k_1}} \\ {0} & {2({k_1} + {k_2})} \\ \end{array}} \right)\), is diagonalizable for all vectors k = (k_{1}, k_{2}) ∈ S^{1} except for those with k_{1} + k_{2} = 0. In particular, P_{0}(ik) is diagonalizable for k = (1, 0) and k = (0, 1). This shows that, in general, it is not sufficient to check that the n matrices A^{1}, A^{2}, …, A^{n} alone are diagonalizable and have real eigenvalues; one has to consider all possible linear combinations \(\sum\limits_{j = 1}^n {{A^j}{k_j}}\) with k ∈ S^{n−1}.
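The point of this example can be verified directly, with A^1 and A^2 read off from the principal symbol P_0(ik) = i(A^1k_1 + A^2k_2) given above:

```python
import numpy as np

# A1 and A2 read off from the principal symbol quoted above:
A1 = np.array([[1.0, 1.0], [0.0, 2.0]])
A2 = np.array([[1.0, 0.0], [0.0, 2.0]])

# Each matrix alone has real, distinct eigenvalues, hence is diagonalizable:
for A in (A1, A2):
    ev = np.linalg.eigvals(A)
    assert np.allclose(ev.imag, 0.0) and abs(ev[0] - ev[1]) > 0.5

# But the combination with k1 + k2 = 0 is nonzero and nilpotent,
# hence not diagonalizable:
k1 = 1.0 / np.sqrt(2.0)
M = A1 * k1 + A2 * (-k1)
print(M)        # strictly upper triangular, nonzero
print(M @ M)    # the zero matrix
```

A nonzero nilpotent matrix has only the eigenvalue 0 with a deficient eigenspace, which is precisely the failure of diagonalizability at k_1 + k_2 = 0.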
Example 12. Next, we present a system for which the eigenvectors of the principal symbol cannot be chosen to be continuous functions of k:

$$u_t = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} u_x + \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} u_y.$$

The principal symbol \({P_0}(ik) = i\left({\begin{array}{*{20}c} {{k_1}} & {{k_2}} \\ {{k_2}} & {-{k_1}} \\ \end{array}} \right) =: iA(k)\) has eigenvalues \({\lambda _ \pm}(k) = \pm i\sqrt {k_1^2 + k_2^2}\) and for (k_{1}, k_{2}) ≠ (0, 0) the corresponding eigenprojectors are

$$P_\pm(k_1,k_2) = \frac{1}{2}\left(I \pm \frac{1}{\sqrt{k_1^2 + k_2^2}}\begin{pmatrix} k_1 & k_2 \\ k_2 & -k_1 \end{pmatrix}\right).$$

When (k_{1}, k_{2}) → (0, 0) the two eigenvalues fall together, and A(k) converges to the zero matrix. However, it is not possible to continuously extend P_{ ± }(k_{1}, k_{2}) to (k_{1}, k_{2}) = (0, 0). For example,

$$P_+(h,0) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},\qquad P_+(-h,0) = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$$

for h > 0. Therefore, any choice for the matrix S(k), which diagonalizes A(k), must be discontinuous at k = (0, 0, ±1), since the columns of S(k) are the eigenvectors of A(k).

Of course, A(k) is symmetric, so S(k) can be chosen to be unitary, which yields the trivial symmetrizer H(k) = I. Therefore, the system is symmetric hyperbolic and yields a well-posed Cauchy problem; however, this example shows that it is not always possible to choose S(k) as a continuous function of k.
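The direction dependence of the limit can be seen numerically. The sketch below uses the standard projector formula P_± = (I ± A(k)/|k|)/2, valid here because A(k)² = (k_1² + k_2²)I (the formula is an assumption insofar as the text's display is reconstructed):

```python
import numpy as np

def A(k1, k2):
    return np.array([[k1, k2], [k2, -k1]])

def P_plus(k1, k2):
    # Eigenprojector onto the eigenvalue +sqrt(k1^2+k2^2) of A(k); since
    # A(k)^2 = (k1^2+k2^2) I, the projector is (I + A(k)/|k|)/2.
    nk = np.hypot(k1, k2)
    return 0.5 * (np.eye(2) + A(k1, k2) / nk)

# The limit of P_+ as k -> 0 depends on the direction of approach:
h = 1e-8
print(P_plus(h, 0.0))    # -> diag(1, 0)
print(P_plus(-h, 0.0))   # -> diag(0, 1): no continuous extension exists
```

Approaching the origin along +k_1 and −k_1 gives two different constant projectors, so no value at k = 0 can make P_+ continuous.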
Example 13. Consider the Klein-Gordon equation

$$\Phi_{tt} = \Phi_{xx} + \Phi_{yy} - m^2\Phi$$(3.43)

in two spatial dimensions, where m ∈ ℝ is a parameter, which is proportional to the mass of the field Φ. Introducing the variables u = (Φ, Φ_{ t }, Φ_{ x }, Φ_{ y }) we obtain the first-order system

$$\frac{\partial}{\partial t}\begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{pmatrix} = \begin{pmatrix} u_2 \\ (u_3)_x + (u_4)_y - m^2u_1 \\ (u_2)_x \\ (u_2)_y \end{pmatrix}.$$

The matrix coefficients in front of u_{ x } and u_{ y } are symmetric; hence, the system is symmetric hyperbolic with trivial symmetrizer H = diag(m^{2}, 1, 1, 1).^{Footnote 4} The corresponding Cauchy problem is well posed. However, a problem with this first-order system is that it is only equivalent to the original, second-order equation (3.43) if the constraints (u_{1})_{ x } = u_{3} and (u_{1})_{ y } = u_{4} are satisfied.
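The symmetrizer property can be checked in a few lines. The matrices below spell out the reduction just described (they are a sketch of it, not a quotation), with a sample value for m:

```python
import numpy as np

m = 2.0  # illustrative mass parameter

# First-order reduction u = (Phi, Phi_t, Phi_x, Phi_y):  u_t = Ax u_x + Ay u_y + B u
Ax = np.zeros((4, 4)); Ax[1, 2] = Ax[2, 1] = 1.0   # (Phi_t)_t <- (Phi_x)_x and (Phi_x)_t <- (Phi_t)_x
Ay = np.zeros((4, 4)); Ay[1, 3] = Ay[3, 1] = 1.0
B = np.zeros((4, 4)); B[0, 1] = 1.0; B[1, 0] = -m**2

H = np.diag([m**2, 1.0, 1.0, 1.0])                 # the symmetrizer from the text

# H Ax and H Ay are symmetric, so the system is symmetric hyperbolic:
assert np.allclose(H @ Ax, (H @ Ax).T)
assert np.allclose(H @ Ay, (H @ Ay).T)
# The lower-order term is anti-self-adjoint with respect to H,
# so it does not contribute to energy growth:
assert np.allclose(H @ B + B.T @ H, 0.0)
print("H = diag(m^2, 1, 1, 1) symmetrizes the reduction")
```

Note that the m² entry in H is what makes the mass term harmless in the energy balance; with H = I the lower-order term would not cancel.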
An alternative symmetric hyperbolic first-order reduction of the Klein-Gordon equation, which does not require the introduction of constraints, is the Dirac equation in two spatial dimensions,

$$\frac{\partial}{\partial t}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\frac{\partial}{\partial x}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} + \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\frac{\partial}{\partial y}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} + m\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}.$$
This system implies the Klein-Gordon equation (3.43) for either of the two components of v.
Yet another way of reducing second-order equations to first-order ones without introducing constraints will be discussed in Section 3.1.5.
Example 14. In terms of the electric and magnetic fields u = (E, B), Maxwell's evolution equations,

$$E_t = \nabla \Lambda B - J,$$(3.46)
$$B_t = -\nabla \Lambda E,$$(3.47)

constitute a symmetric hyperbolic system. Here, J is the current density and ∇ and Λ denote the nabla operator and the vector product, respectively. The principal symbol is

$$P_0(ik)\begin{pmatrix} E \\ B \end{pmatrix} = i\begin{pmatrix} k\,\Lambda\,B \\ -k\,\Lambda\,E \end{pmatrix},$$

and a symmetrizer is given by the physical energy density,

$$u^*Hu = \frac{1}{2}\left(\vert E\vert^2 + \vert B\vert^2\right);$$

in other words, H = 2^{−1}I is trivial. The constraints ∇ · E = ρ and ∇ · B = 0 propagate as a consequence of Eqs. (3.46, 3.47), provided that the continuity equation holds: (∇ · E − ρ)_{ t } = −∇ · J − ρ_{ t } = 0, (∇ · B)_{ t } = 0.
Example 15. There are many alternative ways to write Maxwell's equations. The following system [353, 287] was originally motivated by an analogy with certain parametrized first-order hyperbolic formulations of the Einstein equations, and provides an example of a system that can be symmetric, strongly, weakly or not hyperbolic at all, depending on the parameter values. Using the Einstein summation convention, the evolution system in vacuum has the form
where E_{ i } and W_{ ij } = ∂_{ i }A_{ j }, i = 1, 2, 3, represent the Cartesian components of the electric field and the gradient of the magnetic potential A_{ j }, respectively, and where the real parameters α and β determine the dynamics of the constraint hypersurface defined by ∂^{k}E_{ k } = 0 and ∂_{ k }W_{ ij } − ∂_{ i }W_{ kj } = 0.
In order to analyze under which conditions on α and β the system (3.50, 3.51) is strongly hyperbolic we consider the corresponding symbol,
Decomposing E_{ i } and W_{ ij } into components parallel and orthogonal to k_{ i },
where in terms of the projector \({\gamma _i}^j: = {\delta _i}^j - {k_i}{k^j}\) orthogonal to k we have defined \(\bar E: = {k^l}{E_l},{\bar E_i}: = {\gamma _i}^j{E_j}\) and \(\bar W: = {k^i}{k^j}{W_{ij}},{\bar W_i}: = {\gamma _i}^k{W_{kj}}{k^j},{\bar V_j}: = {k^i}{W_{ik}}{\gamma ^k}_j,\bar U: = {\gamma ^{ij}}{W_{ij}}\) and \({\bar W_{ij}}: = ({\gamma _i}^k{\gamma _j}^l - {2^{-1}}{\gamma _{ij}}{\gamma ^{kl}}){W_{kl}}\),^{Footnote 5} we can write the eigenvalue problem P_{0}(ik)u = iλu as
It follows that P_{0}(ik) is diagonalizable with purely imaginary eigenvalues if and only if αβ > 0. However, in order to show that in this case the system is strongly hyperbolic, one still needs to construct a bounded symmetrizer H(k). For this, we set \(\mu := \sqrt {\alpha \beta}\) and diagonalize P_{0}(ik) = iS(k)Λ(k)S(k)^{−1} with Λ(k) = diag(µ, −µ, 0, 1, −1, 0, 0) and
Then, the quadratic form associated with the symmetrizer is
and H(k) is smooth in k ∈ S^{2}. Therefore, the system is indeed strongly hyperbolic for αβ > 0.
In order to analyze under which conditions the system is symmetric hyperbolic, we notice that, because of rotational and parity invariance, the most general k-independent symmetrizer must have the form
with strictly positive constants a, b, c and d, where Ŵ_{ ij } := W_{(ij)} − δ_{ ij }W/3 denotes the symmetric, trace-free part of W_{ ij } and \(W: = {W^j}_j\) its trace. Then,
for H to be a symmetrizer, the expression on the right-hand side must be purely imaginary. This is the case if and only if a(α + 2) = b, −aα = c and 2aα/3 = d(1 + 3β/2). Since a, b, c and d are positive, these equalities can be satisfied if and only if −2 < α < 0 and β < −2/3. Therefore, if α and β are both positive, or if they are both negative with α ≤ −2 or β ≥ −2/3, then the system (3.50, 3.51) is strongly but not symmetric hyperbolic.
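The little solvability argument behind the last statement can be sketched in code: the function below solves the three equalities quoted above for b, c, d (with a = 1, which loses no generality since the equations are homogeneous) and checks positivity:

```python
# Checks solvability of:  a(alpha+2) = b,  -a*alpha = c,  2*a*alpha/3 = d*(1 + 3*beta/2),
# with a, b, c, d all required to be strictly positive.
def is_symmetric_hyperbolic(alpha, beta, a=1.0):
    b = a * (alpha + 2.0)
    c = -a * alpha
    denom = 1.0 + 1.5 * beta
    if denom == 0.0:
        return False                      # third equation unsolvable for d (alpha = 0 fails anyway)
    d = (2.0 * a * alpha / 3.0) / denom
    return b > 0 and c > 0 and d > 0

# The window found in the text: -2 < alpha < 0 and beta < -2/3.
print(is_symmetric_hyperbolic(-1.0, -1.0))   # True
print(is_symmetric_hyperbolic(1.0, 1.0))     # False: strongly but not symmetric hyperbolic
```

Scanning over the (α, β) plane with this function reproduces exactly the window −2 < α < 0, β < −2/3 stated in the text.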
Second-order systems
An important class of systems in physics are wave problems. In the linear, constant coefficient case, they are described by an equation of the form

$$v_{tt} = \sum\limits_{i,j = 1}^n A^{ij}\frac{\partial^2 v}{\partial x^i\partial x^j} + 2\sum\limits_{j = 1}^n B^j\frac{\partial v_t}{\partial x^j} + \sum\limits_{j = 1}^n C^j\frac{\partial v}{\partial x^j} + Dv_t + Ev,$$(3.56)

where v = v(t, x) ∈ ℂ^{m} is the state vector, and A^{ij} = A^{ji}, B^{j}, C^{j}, D, E denote complex m × m matrices. In order to apply the theory described so far, we reduce this equation to a system that is first order in time. This is achieved by introducing the new variable \(w: = {\upsilon _t} - \sum\limits_{j = 1}^n {{B^j}} {\partial \over {\partial {x^j}}}\upsilon\).^{Footnote 6} With this redefinition one obtains a system of the form (3.1) with u = (v, w)^{T} and
Now we could apply the matrix theorem, Theorem 1, to the corresponding symbol P(ik) and analyze under which conditions on the matrix coefficients A^{ij}, B^{j}, C^{j}, D, E the Cauchy problem is well posed. However, since our problem originates from a second-order equation, it is convenient to rewrite the symbol in a slightly different way: instead of taking the Fourier transform of v and w directly, we multiply v̂ by |k| and write the symbol in terms of the variable \(\hat U: = {(\vert k\vert \hat \upsilon, \hat w)^T}\). Then, the L^{2}-norm of Û controls, through Parseval's identity, the L^{2}-norms of the first partial derivatives of v, as is the case for the usual energies for second-order systems. In terms of Û the system reads
in Fourier space, where
with \({\hat k_j}: = {k_j}/\vert k\vert\). As for first-order systems, we can split Q(ik) into its principal part,
which dominates for k → ∞, and the remaining, lower-order terms. Because of the homogeneity of Q_{0}(ik) in k we can restrict ourselves to values of k ∈ S^{n−1} on the unit sphere, as for first-order systems. Then, it follows as a consequence of the matrix theorem that the problem is well posed if and only if there exists a symmetrizer H(k) and a constant M > 0 satisfying

$$M^{-1}I \leq H(k) \leq MI,\qquad H(k)Q_0(ik) + Q_0(ik)^*H(k) = 0,$$

for all such k. Necessary and sufficient conditions under which such a symmetrizer exists have been given in [261] for the particular case in which the mixed second-order derivative term in Eq. (3.56) vanishes; that is, when B^{j} = 0. This result can be generalized in a straightforward manner to the case where the matrices B^{j} = β^{j}I are proportional to the identity:
Theorem 2. Suppose B^{j} = β^{j}I, j = 1, 2, …, n. (Note that this condition is trivially satisfied if m = 1.) Then, the Cauchy problem for Eq. (3.56) is well posed if and only if the symbol

$$R(k) := \sum\limits_{i,j = 1}^n \left(A^{ij} + B^iB^j\right)k_ik_j$$

has the following properties: there exist constants M > 0 and δ > 0 and a family h(k) of Hermitian m × m matrices such that

$$M^{-1}I \leq h(k) \leq MI,\qquad h(k)R(k) = R(k)^*h(k) \geq \delta I,$$(3.63)

for all k ∈ S^{n−1}.
Proof. Since for B^{j} = β^{j}I the advection term \(ik\sum\limits_{j = 1}^n {{B^j}{{\hat k}_j}}\) commutes with any Hermitian matrix H(k), it is sufficient to prove the theorem for B^{j} = 0, in which case the principal symbol reduces to

$$Q_0(ik) = \vert k\vert\begin{pmatrix} 0 & I \\ -R(k) & 0 \end{pmatrix}.$$
We write the symmetrizer H(k) in the following block form,

$$H(k) = \begin{pmatrix} H_{11}(k) & H_{12}(k) \\ H_{12}(k)^* & H_{22}(k) \end{pmatrix},$$

where H_{11}(k), H_{22}(k) and H_{12}(k) are complex m × m matrices, the first two being Hermitian. Then,

$$H(k)Q_0(ik) + Q_0(ik)^*H(k) = \vert k\vert\begin{pmatrix} -H_{12}(k)R(k) - R(k)^*H_{12}(k)^* & H_{11}(k) - R(k)^*H_{22}(k) \\ H_{11}(k) - H_{22}(k)R(k) & H_{12}(k) + H_{12}(k)^* \end{pmatrix}.$$
Now, suppose h(k) satisfies the conditions (3.63). Then, choosing H_{12}(k) := 0, H_{22}(k) := h(k) and H_{11}(k) := h(k)R(k) we find that H(k)Q_{0}(ik) + Q_{0}(ik)*H(k) = 0. Furthermore, M^{−1}I ≤ H_{22}(k) ≤ MI and δI ≤ H_{11}(k) = h(k)R(k) ≤ MCI, where

$$C := \max\limits_{k \in S^{n-1},\,\vert u\vert = 1} \vert R(k)u\vert$$
is finite because R(k)u is continuous in k and u. Therefore, H(k) is a symmetrizer for Q_{0}(ik), and the problem is well posed.
Conversely, suppose that the problem is well posed with symmetrizer H(k). Then, the vanishing of H(k)Q_{0}(ik) + Q_{0}(ik)*H(k) yields the conditions H_{11}(k) = H_{22}(k)R(k) = R(k)*H_{22}(k) and the conditions (3.63) are satisfied for h(k) := H_{22}(k). □
Remark: The conditions (3.63) imply that R(k) is symmetric and positive with respect to the scalar product defined by h(k). Hence it is diagonalizable, and all its eigenvalues are positive. A practical way of finding h(k) is to construct T(k), which diagonalizes R(k), T(k)^{−1} R(k)T(k) = P(k) with P(k) diagonal and positive. Then, h(k) := (T(k)^{−1})*T(k)^{−1} is the candidate for satisfying the conditions (3.63).
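The recipe in this remark is easy to test on a concrete example. The matrix R below is a hypothetical illustration: it has positive eigenvalues but is not symmetric, so the trivial choice h = I would fail, while the construction h = (T^{−1})*T^{−1} succeeds:

```python
import numpy as np

# A sample R with positive eigenvalues (2 and 1) that is not symmetric:
R = np.array([[2.0, 1.0],
              [0.0, 1.0]])

# Diagonalize: R T = T P with P diagonal and positive.
P, T = np.linalg.eig(R)
assert np.all(P.real > 0)

Tinv = np.linalg.inv(T)
h = Tinv.conj().T @ Tinv        # candidate symmetrizer h = (T^{-1})^* T^{-1}

# h is Hermitian positive definite ...
assert np.allclose(h, h.conj().T)
assert np.all(np.linalg.eigvalsh(h) > 0)
# ... and h R = R^* h is Hermitian positive definite, as required by (3.63):
hR = h @ R
assert np.allclose(hR, hR.conj().T)
assert np.all(np.linalg.eigvalsh(hR) > 0)
print("h = (T^{-1})^* T^{-1} satisfies the symmetrizer conditions")
```

The identity behind the check is hR = (T^{−1})*PT^{−1}, which is manifestly Hermitian and positive whenever the diagonal P is positive.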
Let us give some examples and applications:
Example 16. The Klein-Gordon equation v_{ tt } = Δv − m^{2}v on flat spacetime. In this case, A^{ij} = δ^{ij} and B^{j} = 0, so that R(k) = |k|^{2} = 1 for k ∈ S^{n−1}, which trivially satisfies the conditions of Theorem 2 with h(k) = 1.
Example 17. In anticipation of the following Section 3.2, where linear problems with variable coefficients are treated, let us generalize the previous example to a curved spacetime (M, g). We assume that (M, g) is globally hyperbolic, such that it can be foliated by spacelike hypersurfaces Σ_{ t }. In the ADM decomposition, the metric in adapted coordinates assumes the form

$$g = -\alpha^2 dt \otimes dt + \gamma_{ij}\left(dx^i + \beta^i dt\right) \otimes \left(dx^j + \beta^j dt\right),$$

with α > 0 the lapse, β^{i} the shift vector, which is tangent to Σ_{ t }, and γ_{ ij }dx^{i} ⊗ dx^{j} the induced three-metric on the spacelike hypersurfaces Σ_{ t }. The inverse of the metric is given by

$$g^{-1} = -\frac{1}{\alpha^2}\left(\partial_t - \beta^i\partial_i\right) \otimes \left(\partial_t - \beta^j\partial_j\right) + \gamma^{ij}\partial_i \otimes \partial_j,$$

where γ^{ij} are the components of the inverse three-metric.
where γ^{ij} are the components of the inverse threemetric. The KleinGordon equation on (M, g) is
which, in the constant coefficient case, has the form of Eq. (3.56) with
Hence, R(k) = α^{2}γ^{ij}k_{ i }k_{ j }, and the conditions of Theorem 2 are satisfied with h(k) = 1 since α > 0 and γ^{ij} is symmetric positive definite.
Linear problems with variable coefficients
Next, we generalize the theory to linear evolution problems with variable coefficients. That is, we consider equations of the following form:

$$u_t = P(t,x,\partial/\partial x)u = \sum\limits_{\vert\nu\vert \leq p} A_\nu(t,x)\frac{\partial^{\vert\nu\vert}u}{\partial x^\nu},$$

where ν denotes a multi-index and where now the complex m × m matrices A_{ν}(t, x) may depend on t and x. For simplicity, we assume that each matrix coefficient of A_{ν} belongs to the class \(C_b^\infty ([0,\infty) \times {{\rm{\mathbb R}}^n})\) of bounded, C^{∞}-functions with bounded derivatives. Unlike the constant coefficient case, the different k-modes couple when performing a Fourier transformation, and there is no simple explicit representation of the solutions through the exponential of the symbol. Therefore, Definition 1 of well-posedness needs to be altered. Instead of giving an operator-based definition, let us define well-posedness by the basic requirements a Cauchy problem should satisfy:
Definition 3. The Cauchy problem

$$u_t = P(t,x,\partial/\partial x)u,\qquad x \in \mathbb{R}^n,\; t \geq 0,$$(3.73)
$$u(0,x) = f(x),\qquad x \in \mathbb{R}^n,$$(3.74)

is well posed if any \(f \in C_0^\infty ({{\rm{\mathbb R}}^n})\) gives rise to a unique C^{∞}-solution u(t, x), and if there are constants K ≥ 1 and α ∈ ℝ such that

$$\Vert u(t,\cdot)\Vert \leq Ke^{\alpha t}\Vert f\Vert$$(3.75)

for all \(f \in C_0^\infty ({{\rm{\mathbb R}}^n})\) and all t ≥ 0.
Before we proceed and analyze under which conditions on the operator P(t, x, ∂/∂x) the Cauchy problem (3.73, 3.74) is well posed, let us make the following observations:

In the constant coefficient case, inequality (3.75) is equivalent to inequality (3.11), and in this sense Definition 3 is a generalization of Definition 1.

If u_{1} and u_{2} are the solutions corresponding to the initial data \({f_1},{f_2} \in C_0^\infty ({{\rm{\mathbb R}}^n})\), then the difference u = u_{2} − u_{1} satisfies the Cauchy problem (3.73, 3.74) with ƒ = ƒ_{2} − ƒ_{1} and the estimate (3.75) implies that
$$\vert \vert {u_2}(t, \cdot) - {u_1}(t, \cdot)\vert \vert \leq K{e^{\alpha t}}\vert \vert {f_2} - {f_1}\vert \vert ,\quad \quad t \geq 0.$$(3.76)

In particular, this implies that u_{2}(t, ·) converges to u_{1}(t, ·) if ƒ_{2} converges to ƒ_{1} in the L^{2}-sense. In this sense, the solution depends continuously on the initial data. This property is important for the convergence of a numerical approximation, as discussed in Section 7.

Estimate (3.75) also implies uniqueness of the solution, because for two solutions u_{1} and u_{2} with the same initial data \({f_1} = {f_2} \in C_0^\infty ({{\rm{\mathbb R}}^n})\) the inequality (3.76) implies u_{1} = u_{2}.

As in the constant coefficient case, it is possible to extend the solution concept to weak ones by taking sequences of C^{∞}-elements. This defines a propagator U(t, s) : L^{2}(ℝ^{n}) → L^{2}(ℝ^{n}), which maps the solution at time s ≥ 0 to the solution at time t ≥ s and satisfies properties similar to the ones described in Section 3.1.2: (i) U(t, t) = I for all t ≥ 0, (ii) U(t, s)U(s, r) = U(t, r) for all t ≥ s ≥ r ≥ 0, (iii) for \(f \in C_0^\infty ({{\mathbb R}^n})\), U(t, 0)ƒ is the unique solution of the Cauchy problem (3.73, 3.74), (iv) ‖U(t, s)ƒ‖ ≤ Ke^{α(t−s)}‖ƒ‖ for all ƒ ∈ L^{2}(ℝ^{n}) and all t ≥ s ≥ 0. Furthermore, the Duhamel formula (3.23) holds with the replacement U(t − s) ↦ U(t, s).
The localization principle
As in the constant coefficient case, we would like to have a criterion for well-posedness that is based on the coefficients A_{ν}(t, x) of the differential operator alone. As we have seen in the constant coefficient case, well-posedness is essentially a statement about high frequencies. Therefore, we are led to consider solutions with very high frequency or, equivalently, with very short wavelength. In this regime we can consider small neighborhoods, and since the coefficients A_{ν}(t, x) are smooth, they are approximately constant in such neighborhoods. Therefore, intuitively, the question of well-posedness for the variable coefficient problem can be reduced to a frozen coefficient problem, where the values of the matrix coefficients A_{ν}(t, x) are frozen to their values at a given point.
In order to analyze this more carefully, and for the sake of illustration, let us consider a first-order linear system with variable coefficients

$$u_t = P(t,x,\partial/\partial x)u := \sum\limits_{j = 1}^n A^j(t,x)\frac{\partial u}{\partial x^j} + B(t,x)u,$$(3.77)
where A^{1}, …, A^{n}, B are complex m × m matrices, whose coefficients belong to the class \(C_b^\infty ([0,\infty) \times {{\rm{\mathbb R}}^n})\) of bounded, C^{∞}-functions with bounded derivatives. As mentioned above, the Fourier transform of this operator does not yield a simple, algebraic symbol like in the constant coefficient case.^{Footnote 7} However, given a specific point p_{0} = (t_{0}, x_{0}) ∈ [0, ∞) × ℝ^{n}, we may zoom into a very small neighborhood of p_{0}. Since the coefficients A^{j}(t, x) and B(t, x) are smooth, they will be approximately constant in this neighborhood and we may freeze the coefficients of A^{j}(t, x) and B(t, x) to their values at the point p_{0}. More precisely, let u(t, x) be a smooth solution of Eq. (3.77). Then, we consider the formal expansion

$$u(t_0 + \varepsilon t,\, x_0 + \varepsilon x) = u(t_0,x_0) + \varepsilon u^{(1)}(t,x) + \varepsilon^2 u^{(2)}(t,x) + \ldots$$(3.78)
As a consequence of Eq. (3.77) one obtains

$$\frac{\partial u^{(1)}}{\partial t}(t,x) + O(\varepsilon) = \sum\limits_{j = 1}^n A^j(t_0 + \varepsilon t,\, x_0 + \varepsilon x)\left[\frac{\partial u^{(1)}}{\partial x^j}(t,x) + O(\varepsilon)\right] + B(t_0 + \varepsilon t,\, x_0 + \varepsilon x)\left[u(t_0,x_0) + O(\varepsilon)\right].$$
Taking the pointwise limit ε → 0 on both sides of this equation we obtain

$$u_t^{(1)}(t,x) = \sum\limits_{j = 1}^n A^j(t_0,x_0)\frac{\partial u^{(1)}}{\partial x^j}(t,x) + F_0,$$

where F_{0} := B(t_{0}, x_{0})u(t_{0}, x_{0}).
where F_{0} := B(t_{0}, x_{0})u(t_{0}, x_{0}). Therefore, if u is a solution of the variable coefficient equation u_{ t } = P(t, x, ∂/∂x)u, then, u^{(1)} satisfies the linear constant coefficient problem \(u_t^{(1)}(t,x) = {P_0}({t_0},{x_0},\partial/\partial x){u^{(1)}} + {F_0}\) obtained by freezing the coefficients in the principal part of P(t, x, ∂/∂x)u to their values at the point p_{0} and by replacing the lowerorder term B(t, x) by the forcing term F_{0}. By adjusting the scaling of t, a similar conclusion can be obtained when P(t, x, ∂/∂x) is a higherderivative operator.
This leads us to the following statement: a necessary condition for the linear, variable coefficient Cauchy problem for the equation u_{ t } = P(t, x, ∂/∂x)u to be well posed is that all the corresponding problems for the frozen coefficient equations v_{ t } = P_{0}(t_{0}, x_{0}, ∂/∂x)v are well posed. For a rigorous proof of this statement in the case in which P(t, x, ∂/∂x) is time-independent, see [397]. We stress that it is important to replace P(t, x, ∂/∂x) by its principal part P_{0}(t, x, ∂/∂x) when freezing the coefficients. The statement is false if lower-order terms are retained; see [259, 397] for counterexamples.
Now it is natural to ask whether or not the converse statement is true: suppose that the Cauchy problems for all frozen coefficient equations v_{ t } = P_{0}(t_{0}, x_{0}, ∂/∂x)v are well posed; is the original, variable coefficient problem also well posed? It turns out this localization principle is valid in many cases under additional smoothness requirements. In order to formulate the latter, let us go back to the first-order equation (3.77). We define its principal symbol as

$$P_0(t,x,ik) := i\sum\limits_{j = 1}^n A^j(t,x)k_j.$$
In analogy to the constant coefficient case we define:
Definition 4. The first-order system (3.77) is called

(i)
weakly hyperbolic if all the eigenvalues of its principal symbol P_{0}(t, x, ik) are purely imaginary.

(ii)
strongly hyperbolic if there exist M > 0 and a family of positive definite, Hermitian m × m matrices H(t, x, k), (t, x, k) ∈ Ω × S^{n−1}, whose coefficients belong to the class \(C_b^\infty (\Omega \times {S^{n - 1}})\), such that
$${M^{-1}}I \leq H(t,x,k) \leq MI,\quad \quad H(t,x,k){P_0}(t,x,ik) + {P_0}{(t,x,ik)^{\ast}}H(t,x,k) = 0,$$(3.81)

for all (t, x, k) ∈ Ω × S^{n−1}, where Ω := [0, ∞) × ℝ^{n}.

(iii)
symmetric hyperbolic if it is strongly hyperbolic and the symmetrizer H(t, x, k) can be chosen independent of k.
We see that these definitions are straightforward extrapolations of the corresponding definitions (see Definition 2) in the constant coefficient case, except for the smoothness requirements on the symmetrizer H(t, x, k).^{Footnote 8} There are examples of ill-posed Cauchy problems for which a Hermitian, positive-definite symmetrizer H(t, x, k) exists but is not smooth [397], showing that these requirements are necessary in general.
The smooth symmetrizer is used in order to construct a pseudo-differential operator

$$[H(t)v](x) := \frac{1}{(2\pi)^{n/2}}\int\limits_{\mathbb{R}^n} e^{ik \cdot x}\,H(t,x,k/\vert k\vert)\,\hat v(k)\,d^nk,$$
from which one defines a scalar product (·, ·)_{H(t)}, which, for each t, is equivalent to the L^{2} product. This scalar product has the property that a solution u to Eq. (3.77) satisfies an inequality of the form

$$\frac{d}{dt}\left(u(t,\cdot),u(t,\cdot)\right)_{H(t)} \leq b\left(u(t,\cdot),u(t,\cdot)\right)_{H(t)},\qquad b = const;$$
see, for instance, [411]. Upon integration this yields an estimate of the form of Eq. (3.75). In the symmetric hyperbolic case, we have simply [H(t)v](x) = H(t, x)v(x) and the scalar product is given by

$$(u,v)_{H(t)} = \int\limits_{\mathbb{R}^n} u(x)^*H(t,x)v(x)\,d^nx.$$
We will return to the application of this scalar product for deriving energy estimates below. Let us state the important result:
Theorem 3. If the first-order system (3.77) is strongly or symmetric hyperbolic in the sense of Definition 4, then the Cauchy problem (3.73, 3.74) is well posed in the sense of Definition 3.
For a proof of this theorem, see, for instance, Proposition 7.1 and the comments following its formulation in Chapter 7 of [411]. Let us look at some examples:
Example 18. For a given, stationary fluid field, the nonrelativistic, ideal magnetohydrodynamic equations reduce to the simple system [120]

$$B_t = \nabla \Lambda (v \Lambda B)$$(3.85)

for the magnetic field B, where v is the fluid velocity. The principal symbol for this equation is given by

$$P_0(x,ik)B = i\left[(k \cdot B)\,v(x) - (k \cdot v(x))\,B\right].$$

In order to analyze it, it is convenient to introduce an orthonormal frame e_{1}, e_{2}, e_{3} such that e_{1} is parallel to k. With respect to this frame, the matrix corresponding to P_{0}(x, ik) is

$$ik\begin{pmatrix} 0 & 0 & 0 \\ v_2 & -v_1 & 0 \\ v_3 & 0 & -v_1 \end{pmatrix},$$

with purely imaginary eigenvalues 0 and −ikv_{1}(x). However, the symbol is not diagonalizable when k is orthogonal to the fluid velocity, i.e., when v_{1}(x) = 0, and so the system is only weakly hyperbolic.
One can still show that the system is well posed if one takes into account the constraint ∇ · B = 0, which is preserved by the evolution equation (3.85). In Fourier space, this constraint forces B_{1} = 0, which eliminates the first row and column in the principal symbol and yields a strongly hyperbolic symbol. However, at the numerical level, this means that special care needs to be taken when discretizing the system (3.85), since any discretization that does not preserve ∇ · B = 0 will push the solution away from the constraint manifold, in which case the system is weakly hyperbolic. For numerical schemes that explicitly preserve (divergence-transport) or enforce (divergence-cleaning) the constraints, see [159] and [136], respectively. For alternative formulations that are strongly hyperbolic without imposing the constraint, see [120].
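The dichotomy between the generic and the degenerate direction can be checked numerically. The frame matrix below is written out as a sketch consistent with the eigenvalues 0 and −ikv₁ quoted above:

```python
import numpy as np

def mhd_symbol(k, v1, v2, v3):
    # Matrix of P0(x, ik) in an orthonormal frame with e1 parallel to k.
    return 1j * k * np.array([[0.0, 0.0, 0.0],
                              [v2, -v1, 0.0],
                              [v3, 0.0, -v1]])

# Generic case v1 != 0: eigenvalues 0 and -i k v1, complete set of eigenvectors.
M = mhd_symbol(1.0, 0.5, 1.0, 1.0)
w, S = np.linalg.eig(M)
assert np.linalg.matrix_rank(S) == 3        # diagonalizable

# Degenerate case v1 = 0 (k orthogonal to v): the symbol is nonzero but
# nilpotent, hence not diagonalizable -- only weak hyperbolicity.
M0 = mhd_symbol(1.0, 0.0, 1.0, 1.0)
assert not np.allclose(M0, 0.0)
assert np.allclose(M0 @ M0, 0.0)
print("diagonalizable iff v1 != 0")
```

This mirrors the discussion above: restricting to the constraint surface B₁ = 0 removes the first row and column, and the remaining 2 × 2 block is diagonal for every direction k.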
Example 19. The localization principle can be generalized to a certain class of second-order systems [261, 308]: for example, we may consider a second-order linear equation of the form

$$v_{tt} = \sum\limits_{j,k = 1}^n A^{jk}(t,x)\frac{\partial^2 v}{\partial x^j\partial x^k} + 2\sum\limits_{j = 1}^n B^j(t,x)\frac{\partial v_t}{\partial x^j} + \sum\limits_{j = 1}^n C^j(t,x)\frac{\partial v}{\partial x^j} + D(t,x)v_t + E(t,x)v,$$

x ∈ ℝ^{n}, t ≥ 0, where now the m × m matrices A^{jk}, B^{j}, C^{j}, D and E belong to the class \(C_b^\infty ([0,\infty) \times {{\rm{\mathbb R}}^n})\) of bounded, C^{∞}-functions with bounded derivatives. Zooming into a very small neighborhood of a given point p_{0} = (t_{0}, x_{0}) by applying the expansion in Eq. (3.78) to v, one obtains, in the limit ε → 0, the constant coefficient equation
with
where we have used the fact that \({\upsilon ^{(1)}}(t,x) = t{\upsilon _t}({t_0},{x_0}) + \sum\limits_{j = 1}^n {{x^j}{{\partial \upsilon} \over {\partial {x^j}}}({t_0},{x_0})}\). Eq. (3.89) can be rewritten as a first-order system in Fourier space for the variable
see Section 3.1.5. Now Theorem 2 implies that the problem is well posed if there exist constants M > 0 and δ > 0 and a family of positive definite m × m Hermitian matrices h(t, x, k), (t, x, k) ∈ Ω × S^{n−1}, which is C^{∞}-smooth in all its arguments, such that M^{−1}I ≤ h(t, x, k) ≤ MI and h(t, x, k)R(t, x, k) = R(t, x, k)*h(t, x, k) ≥ δI for all (t, x, k) ∈ Ω × S^{n−1}, where \(R(t,x,k): = \sum\limits_{i,j = 1}^n {({A^{ij}}(t,x) + {B^i}(t,x){B^j}(t,x)){k_i}{k_j}}\).
In particular, it follows that the Cauchy problem for the Klein-Gordon equation on a globally-hyperbolic spacetime M = [0, ∞) × ℝ^{n} with \(\alpha, {\beta ^i},{\gamma _{ij}} \in C_b^\infty ([0,\infty) \times {{\rm{\mathbb R}}^n})\) is well posed, provided that α^{2}γ^{ij} is uniformly positive definite; see Example 17.
Characteristic speeds and fields
Consider a first-order linear system of the form (3.77), which is strongly hyperbolic. Then, for each t ≥ 0, x ∈ ℝ^{n} and k ∈ S^{n−1} the principal symbol P_{0}(t, x, ik) is diagonalizable and has purely imaginary eigenvalues. In the constant coefficient case with no lower-order terms (B = 0) an eigenvalue iµ(k) of P_{0}(ik) with corresponding eigenvector a(k) gives rise to the plane-wave solution

$$u(t,x) = a(k)\,e^{i(\mu(k)t + k \cdot x)}.$$
If lower-order terms are present and the matrix coefficients A^{j}(t, x) are not constant, one can look for approximate plane-wave solutions, which have the form

$$u_\varepsilon(t,x) = a_\varepsilon(t,x)\,e^{i\psi(t,x)/\varepsilon},$$(3.93)
where ε > 0 is a small parameter, ψ(t, x) a smooth phase function and a_{ ε }(t, x) = a_{0}(t, x) + εa_{1}(t, x) + ε^{2}a_{2}(t, x) + … a slowly varying amplitude. Introducing the ansatz (3.93) into Eq. (3.77) and taking the limit ε → 0 yields the problem

$$i\psi_t(t,x)\,a_0(t,x) = \sum\limits_{j = 1}^n A^j(t,x)\,i\frac{\partial\psi}{\partial x^j}(t,x)\,a_0(t,x).$$
Setting ω(t, x) := ψ_{ t }(t, x) and k(t, x) := ∇ψ(t, x), a nontrivial solution exists if and only if the eikonal equation

$$\det\left[i\omega(t,x)I - P_0(t,x,ik(t,x))\right] = 0$$
is satisfied. Its solutions provide the phase function ψ(t, x), whose level sets have conormal ωdt + k · dx. The phase function and a_{0} determine approximate plane-wave solutions of the form (3.93). For this reason we call ω(k) the characteristic speed in the direction k ∈ S^{n−1}, and a_{0} a corresponding characteristic mode. For a strongly hyperbolic system, the solution at each point (t, x) can be expanded in terms of the characteristic modes e_{ j }(t, x, k) with respect to a given direction k ∈ S^{n−1},

$$u(t,x) = \sum\limits_{j = 1}^m u^{(j)}(t,x,k)\,e_j(t,x,k).$$
The corresponding coefficients u^{(j)}(t, x, k) are called the characteristic fields.
Example 20. Consider the Klein-Gordon equation propagating on a globally-hyperbolic spacetime, as in Example 17. In this case the eikonal equation is

$$\left(\omega - \beta^jk_j\right)^2 = \alpha^2\gamma^{ij}k_ik_j,$$
which yields \({\omega _ \pm}(k) = {\beta ^j}{k_j} \pm \alpha \sqrt {{\gamma ^{ij}}{k_i}{k_j}}\). The corresponding conormal ω_{±}(k)dt + k_{ j }dx^{j} is null; hence, the surfaces of constant phase are null surfaces. The characteristic modes and fields are
where U = (U_{1}, U_{2}) = (|k|v, v_{ t } − iβ^{j}k_{ j }v) and v is the Klein-Gordon field.
Example 21. In the formulation of Maxwell's equations discussed in Example 15, the characteristic speeds are 0, \(\pm \sqrt {\alpha \beta}\) and ±1, and the corresponding characteristic fields are the components of the vector on the right-hand side of Eq. (3.54).
Energy estimates and finite speed of propagation
Here we focus our attention on first-order linear systems that are symmetric hyperbolic. In this case it is not difficult to derive a priori energy estimates based on integration by parts. Such estimates assume the existence of a sufficiently smooth solution and bound an appropriate norm of the solution at some time t > 0 in terms of the same norm of the solution at the initial time t = 0. As we will illustrate here, such estimates already yield quite a lot of information on the qualitative behavior of the solutions. In particular, they give uniqueness, continuous dependence on the initial data and finite speed of propagation.
The word “energy” stems from the fact that for many problems the squared norm satisfying the estimate is directly or indirectly related to the physical energy of the system, although for many other problems the squared norm does not have a physical interpretation of any kind.
For first-order symmetric hyperbolic linear systems, an a priori energy estimate can be constructed from the symmetrizer H(t, x) in the following way. For a given smooth solution u(t, x) of Eq. (3.77), define the vector field J on Ω = [0, ∞) × ℝ^{n} by its components

$$J^\mu(t,x) := -u(t,x)^*H(t,x)A^\mu(t,x)u(t,x),\qquad \mu = 0,1,\ldots,n,$$

where A^{0}(t, x) := −I. By virtue of the evolution equation, J satisfies

$$\frac{\partial J^0}{\partial t} + \sum\limits_{j = 1}^n \frac{\partial J^j}{\partial x^j} = u^*Ku,$$(3.100)

where the Hermitian m × m matrix K(t, x) is defined as

$$K := \frac{\partial H}{\partial t} + HB + B^*H - \sum\limits_{j = 1}^n \frac{\partial}{\partial x^j}\left(HA^j\right).$$

If K = 0, Eq. (3.100) formally looks like a conservation law for the current density J. If K ≠ 0, we obtain, instead of a conserved quantity, an energy-like expression whose growth can be controlled by its initial value. For this, we first notice that our assumptions on the matrices H(t, x), B(t, x) and A^{k}(t, x) imply that K(t, x) is bounded on Ω. In particular, since H(t, x) is uniformly positive, there is a constant α > 0 such that

$$u^*K(t,x)u \leq 2\alpha\,u^*H(t,x)u$$(3.102)

for all u ∈ ℂ^{m} and (t, x) ∈ Ω.
Let Ω_{ T } = ∪_{0≤t≤T} Σ_{ t } be a tubular region obtained by piling up open subsets Σ_{ t } of t = const hypersurfaces. This region is enclosed by the initial surface Σ_{0}, the final surface Σ_{ T } and the boundary surface \({\mathcal T}: = {\cup _{0 \leq t \leq T}}\partial {\Sigma _t}\), which is assumed to be smooth. Integrating Eq. (3.100) over Ω_{ T } and using Gauss' theorem, one obtains

$$\int\limits_{\Sigma_T} J^0\,d^nx = \int\limits_{\Sigma_0} J^0\,d^nx - \int\limits_{\mathcal T} e_\mu J^\mu\,dS + \int\limits_{\Omega_T} u^*Ku\,dt\,d^nx,$$

where e_{ µ } is the unit outward normal covector to \({\mathcal T}\) and dS the volume element on that surface. Defining the "energy" contained in the surface Σ_{ t } by

$$E({\Sigma _t}) := \int\limits_{\Sigma_t} J^0(t,x)\,d^nx = \int\limits_{\Sigma_t} u^*H(t,x)u\,d^nx$$(3.104)

and assuming for the moment that the "flux" integral over \({\mathcal T}\) is positive or zero, one obtains the estimate

$$E({\Sigma _T}) \leq E({\Sigma _0}) + 2\alpha\int\limits_0^T E({\Sigma _t})\,dt,$$(3.105)

where we have used the inequality (3.102) and the definition of E(Σ_{ t }) in the last step. Defining the function \(h(T): = \int\nolimits_0^T {E({\Sigma _t})\,dt}\), this inequality can be rewritten as

$$h'(T) \leq E({\Sigma _0}) + 2\alpha h(T),$$

which yields 2αh(T) ≤ E(Σ_{0})(e^{2αT} − 1) upon integration. This, together with (3.105), gives

$$E({\Sigma _T}) \leq e^{2\alpha T}E({\Sigma _0}),$$(3.107)

which bounds the energy at any time t ∈ [0, T] in terms of the initial energy.
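A minimal numerical illustration of such an energy estimate, in the simplest possible setting, is scalar advection u_t = u_x with H = I and B = 0, for which K = 0 and the energy is exactly conserved (α = 0). The grid transport below is a sketch, not a production scheme: with CFL = 1 the upwind scheme reduces to an exact grid shift:

```python
import numpy as np

# Periodic advection u_t = u_x: symmetric hyperbolic with H = I, B = 0,
# hence K = 0 and E = int u^2 dx is conserved exactly (alpha = 0).
N = 128
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u0 = np.exp(np.sin(x))
dx = x[1] - x[0]

u = u0.copy()
for _ in range(N):            # one full period: the profile returns exactly
    u = np.roll(u, 1)         # CFL = 1 upwind step = exact shift by one cell

E0 = np.sum(u0 ** 2) * dx
E = np.sum(u ** 2) * dx
print(E0, E)                  # identical: the estimate E(t) <= e^{2 alpha t} E(0) is saturated at alpha = 0
```

For a genuinely dissipative or growing system (K ≠ 0) the same computation would show E(t) ≤ e^{2αt}E(0) with a nontrivial α, which is exactly the bound derived above.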
In order to analyze the conditions under which the flux integral is positive or zero, we examine the sign of the integrand e_{ µ }J^{µ}(t, x). Decomposing e_{ µ }dx^{µ} = N[a dt + s_{1}dx^{1} + … + s_{ n }dx^{n}], where s = (s_{1}, …, s_{ n }) is a unit vector and N > 0 a positive normalization constant, we have

$$e_\mu J^\mu = N\,u^*H(t,x)\left[a(t,x)I - P_0(t,x,s)\right]u,$$

where \({P_0}(t,x,s) = \sum\limits_{j = 1}^n {{A^j}(t,x){s_j}}\) is the principal symbol in the direction of the unit vector s. This is guaranteed to be nonnegative if the boundary surface \({\mathcal T}\) is such that a(t, x) is greater than or equal to all the eigenvalues of the boundary matrix P_{0}(t, x, s), for each \((t,x) \in {\mathcal T}\). This is equivalent to the condition

$$a(t,x) \geq \sup\limits_{u \in \mathbb{C}^m,\,u \neq 0} \frac{u^*H(t,x)P_0(t,x,s)u}{u^*H(t,x)u}.$$(3.109)

Since H(t, x)P_{0}(t, x, s) is symmetric, the supremum is equal to the maximum eigenvalue of P_{0}(t, x, s). Therefore, condition (3.109) is equivalent to the requirement that a(t, x) be greater than or equal to the maximum characteristic speed in the direction of the unit outward normal s.
With these arguments, we arrive at the following conclusions and remarks:

Finite speed of propagation. Let p_{0} = (t_{0}, x_{0}) ∈ Ω be a given event, and set
$$v({t_0}): = \sup \left\{{{{{u^\ast}H(t,\,x){P_0}(t,\,x,\,s)u} \over {{u^\ast}H(t,\,x)u}}:0 \leq t \leq {t_0},\,x \in {{\mathbb R}^n},\,s \in {S^{n - 1}},\,u \in {{\mathbb C}^m},\,u \neq 0} \right\}.$$(3.110)

Define the past cone at p_{0} as^{Footnote 9}

$${C^ - }({p_0}): = \{(t,\,x) \in \Omega :\vert x - {x_0}\vert\,\leq v({t_0})({t_0} - t)\} .$$(3.111)

The unit outward normal to its boundary is e_{ µ }dx^{µ} = N[v(t_{0})dt + (x − x_{0}) · dx/|x − x_{0}|], which satisfies the condition (3.109). It follows from the estimate (3.107) applied to the domain Ω_{ T } = C^{−}(p_{0}) that the solution is zero on C^{−}(p_{0}) if the initial data is zero on the intersection of the cone C^{−}(p_{0}) with the initial surface t = 0. In other words, a perturbation in the initial data outside the ball |x − x_{0}| ≤ v(t_{0})t_{0} does not alter the solution inside the cone C^{−}(p_{0}). Using this argument, it also follows that if ƒ has compact support, the corresponding solution u(t, ·) also has compact support for all t > 0.

Continuous dependence on the initial data. Let \(f \in C_0^\infty ({{\mathbb R}^n})\) be smooth initial data with compact support. As we have seen above, the corresponding smooth solution u(t, ·) also has compact support for each t ≥ 0. Therefore, applying the estimate (3.107) to the case Σ_{ t } := {t} × ℝ^{n}, the boundary integral vanishes and we obtain

$$E({\Sigma _t}) \leq {e^{2\alpha t}}E({\Sigma _0}),\quad \quad t \geq 0.$$(3.112)

In view of the definition of E(Σ_{ t }), see Eq. (3.104), and the properties (3.81) of the symmetrizer, it follows that

$$\Vert u(t, \cdot)\Vert \, \leq M{e^{\alpha t}}\Vert f \Vert ,\quad \quad t \geq 0,$$(3.113)

which is of the required form; see Definition 3. In particular, we have uniqueness and continuous dependence on the initial data.

The statements about finite speed of propagation and continuous dependence on the data can easily be generalized to the case of a first-order symmetric hyperbolic inhomogeneous equation u_{ t } = P(t, x, ∂/∂x)u + F(t, x), with F : Ω → ℂ^{m} a bounded, C^{∞}-function with bounded derivatives. In this case, the inequality (3.113) is replaced by
$$\Vert u(t, \cdot)\Vert \, \leq \,M{e^{\alpha t}}\left[ {\Vert f \Vert + \int\limits_0^t {{e^{- \alpha s}}}\Vert F(s, \cdot)\Vert ds} \right],\quad \quad t \geq 0.$$(3.114) 
If the boundary surface \({\mathcal T}\) does not satisfy the condition (3.109) for the boundary integral to be positive, then suitable boundary conditions need to be specified in order to control the sign of this term. This will be discussed in Section 5.2.

Although different techniques have to be used to prove them, very similar results hold for strongly hyperbolic systems [353].

For definitions of hyperbolicity of a geometric PDE on a manifold, which do not require a 3+1 decomposition of spacetime, see, for instance, [205, 353] for first-order systems and [47] for second-order ones.
Example 22. We have seen that for the Klein-Gordon equation propagating on a globally-hyperbolic spacetime, the characteristic speeds are the speed of light. Therefore, in the case of a constant metric (i.e., Minkowski space), the past cone C^{−}(p_{0}) defined in Eq. (3.111) coincides with the past light cone at the event p_{0}. A slight refinement of the above argument shows that the statement remains true for a Klein-Gordon field propagating on any globally-hyperbolic spacetime.
Example 23. In Example 21 we have seen that the characteristic speeds of the system given in Example 15 are 0, \(\pm \sqrt {\alpha \beta}\) and ±1, where αβ > 0 is assumed for strong hyperbolicity. Therefore, the past cone C^{−}(p_{0}) corresponds to the past light cone provided that 0 < αβ ≤ 1. For αβ > 1, the formulation has superluminal constraint-violating modes, and an initial perturbation emanating from a region outside the past light cone at p_{0} could affect the solution at p_{0}. In this case, the past light cone at p_{0} is a proper subset of C^{−}(p_{0}).
Quasilinear equations
Next, we generalize the theory one more step and consider evolution systems described by quasilinear partial differential equations, that is, by nonlinear partial differential equations, which are linear in their highest-order derivatives. This already covers most of the interesting physical systems, including the Yang-Mills and the Einstein equations. Restricting ourselves to the first-order case, such equations have the form
$${u_t} = \sum\limits_{j = 1}^n {{A^j}(t,x,u){{\partial u} \over {\partial {x^j}}}} + F(t,x,u),\qquad x \in {{\mathbb R}^n},\quad 0 \leq t \leq T,$$(3.115)
where all the coefficients of the complex m×m matrices A^{1}(t, x, u), …, A^{n}(t, x, u) and the nonlinear source term F(t, x, u) ∈ ℂ^{m} belong to the class \(C_b^\infty ([0,T] \times {{\rm{\mathbb R}}^n} \times {{\rm{\mathbb C}}^m})\) of bounded, C^{∞}-functions with bounded derivatives. Compared to the linear case, there are two new features the solutions may exhibit:

The nonlinear term F(t, x, u) may induce blowup of the solutions in finite time. This is already the case for the simple example where m = 1, all the matrices A^{j} vanish identically and F(t, x, u) = u^{2}, in which case Eq. (3.115) reduces to u_{ t } = u^{2}. In the context of Einstein’s equations such a blowup is expected when a curvature singularity forms, or it could also occur in the presence of a coordinate singularity due to a “bad” gauge condition.

In contrast to the linear case, the matrix functions A^{j} in front of the derivative operator now depend pointwise on the state vector itself, which implies, in particular, that the characteristic speeds and fields depend on u. This can lead to the formation of shocks where characteristics cross each other, as in the simple example of Burgers’ equation u_{ t } = uu_{ x }, corresponding to the case m = n = 1, A^{1}(t, x, u) = u and F(t, x, u) = 0. In general, shocks may form when the system is genuinely nonlinear, as opposed to linearly degenerate [250]. The Einstein vacuum equations, on the other hand, can be written in linearly degenerate form (see, for example, [6, 7, 348, 8]) and are therefore expected to be free of physical shocks.
For these reasons, one cannot expect global existence of smooth solutions from smooth initial data with compact support in general, and the best one can hope for is existence of a smooth solution on some finite time interval [0, T], where T might depend on the initial data.
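Both phenomena can be made concrete with minimal sketches (added here for illustration; the initial data below are arbitrary choices): the solution of u_{ t } = u² blows up at t* = 1/u_0, and for Burgers’ equation the crossing time of two characteristics can be computed directly.

```python
import math

# (1) Blow-up: u_t = u^2, u(0) = u0 > 0 has the exact solution
#     u(t) = u0 / (1 - u0*t), which diverges at t* = 1/u0.
def blowup_exact(u0, t):
    return u0 / (1.0 - u0 * t)

# (2) Shock formation: for Burgers' equation u_t = u*u_x the solution is
#     constant along the characteristics x(t) = x0 - f(x0)*t, so the
#     characteristics starting at x0 < x1 with f(x1) > f(x0) cross at
#     t = (x1 - x0) / (f(x1) - f(x0)); the first crossing is at 1/max f'.
def crossing_time(f, x0, x1):
    return (x1 - x0) / (f(x1) - f(x0))

print(blowup_exact(1.0, 0.5))                  # 2.0, still finite before t* = 1
print(crossing_time(math.tanh, -0.1, 0.1))     # close to the shock time t* = 1
print(crossing_time(math.tanh, -2.0, 2.0))     # distant characteristics cross later
```

For f = tanh the maximal slope is f′(0) = 1, so nearby characteristics around x = 0 cross at t ≈ 1, while characteristics launched far from the origin cross later.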
Under such restrictions, it is possible to prove well-posedness of the Cauchy problem. The idea is to linearize the problem and to apply Banach’s fixed-point theorem. This is discussed next.
The principle of linearization
Suppose u^{(0)}(t, x) is a C^{∞} (reference) solution of Eq. (3.115), corresponding to initial data ƒ(x) = u^{(0)}(0, x). Assuming this solution to be uniquely determined by the initial data ƒ, we may ask if a unique solution u also exists for the perturbed problem
$${u_t} = \sum\limits_{j = 1}^n {{A^j}(t,x,u){{\partial u} \over {\partial {x^j}}}} + F(t,x,u) + \delta F(t,x),$$(3.116)
$$u(0,x) = f(x) + \delta f(x),$$(3.117)
where the perturbations δF(t, x) and δƒ(x) belong to the class of bounded, C^{∞}-functions with bounded derivatives. This leads to the following definition:
Definition 5. Consider the nonlinear Cauchy problem given by Eq. (3.115) and prescribed initial data for u at t = 0. Let u^{(0)} be a C^{∞}-solution to this problem, which is uniquely determined by its initial data ƒ. Then, the problem is called well posed at u^{(0)}, if there are normed vector spaces X, Y, and Z and constants K > 0, ε > 0 such that for all sufficiently-smooth perturbations δƒ and δF lying in Y and Z, respectively, with
$$\Vert \delta f{\Vert _Y} \leq \varepsilon, \qquad \Vert \delta F{\Vert _Z} \leq \varepsilon,$$(3.118)
the perturbed problem (3.116, 3.117) is also uniquely solvable and the corresponding solution u satisfies u − u^{(0)} ∈ X and the estimate
$$\Vert u - {u^{(0)}}{\Vert _X} \leq K\left({\Vert \delta f{\Vert _Y} + \Vert \delta F{\Vert _Z}} \right).$$(3.119)
Here, the norms ‖·‖_{ X } and ‖·‖_{ Y } appearing on both sides of Eq. (3.119) are different from each other because ‖u − u^{(0)}‖_{ X } controls the function u − u^{(0)} over the spacetime region [0, T] × ℝ^{n}, while ‖δƒ‖_{ Y } is a norm controlling the function δƒ on ℝ^{n}.
If the problem is well posed at u^{(0)}, we may consider a one-parameter curve ƒ_{ ε } of initial data lying in \(C_0^\infty ({{\rm{\mathbb R}}^n})\) that goes through ƒ and assume that there is a corresponding solution u_{ ε }(t, x) for each small enough ε, which lies close to u^{(0)} in the sense of inequality (3.119). Expanding
$${u_\varepsilon}(t,x) = {u^{(0)}}(t,x) + \varepsilon {v^{(1)}}(t,x) + {\mathcal O}({\varepsilon ^2}),$$(3.120)
and plugging this into Eq. (3.115), we find, to first order in ε,
$$v_t^{(1)} = \sum\limits_{j = 1}^n {{A^j}(t,x,{u^{(0)}}){{\partial {v^{(1)}}} \over {\partial {x^j}}}} + B(t,x){v^{(1)}},$$(3.121)
with
$$B(t,x){v^{(1)}}: = \sum\limits_{j = 1}^n {\left[ {{\partial \over {\partial u}}{A^j}(t,x,{u^{(0)}})\,{v^{(1)}}} \right]{{\partial {u^{(0)}}} \over {\partial {x^j}}}} + {\partial \over {\partial u}}F(t,x,{u^{(0)}})\,{v^{(1)}}.$$(3.122)
Eq. (3.121) is a first-order linear equation with variable coefficients for the first variation, v^{(1)}, to which we can apply the theory described in Section 3.2. Therefore, it is reasonable to assume that the linearized problem is strongly hyperbolic for any smooth function u^{(0)}(t, x). In particular, if we generalize the definitions of strong and symmetric hyperbolicity given in Definition 4 to the quasilinear case by requiring that the symmetrizer H(t, x, k, u) has coefficients in \(C_b^\infty (\Omega \times {S^{n - 1}} \times {{\mathbb C}^m})\), it follows that the linearized problem is well posed provided that the quasilinear problem is strongly or symmetric hyperbolic.
The linearization principle states that the converse is also true: the nonlinear problem is well posed at u^{(0)} if all the linear problems, which are obtained by linearizing Eq. (3.115) at functions in a suitable neighborhood of u^{(0)}, are well posed. To prove that this principle holds, one sets up the following iteration. We define the sequence u^{(k)} of functions by iteratively solving the linear problems
$$u_t^{(k + 1)} = \sum\limits_{j = 1}^n {{A^j}(t,x,{u^{(k)}}){{\partial {u^{(k + 1)}}} \over {\partial {x^j}}}} + F(t,x,{u^{(k)}}) + \delta F(t,x),$$(3.123)
$${u^{(k + 1)}}(0,x) = f(x) + \delta f(x),$$(3.124)
for k = 0, 1, 2, … starting with the reference solution u^{(0)}. If the linearized problems are well posed in the sense of Definition 3 for functions lying in a neighborhood of u^{(0)}, one can solve each Cauchy problem (3.123, 3.124), at least for small enough time T_{ k }. The key point, then, is to prove that T_{ k } does not shrink to zero when k → ∞ and to show that the sequence u^{(k)} of functions converges to a solution of the perturbed problem (3.116, 3.117). This is, of course, a nontrivial task, which requires controlling u^{(k)} and its derivatives in an appropriate way. For particular examples where this program is carried through, see [259]. For general results on quasilinear symmetric hyperbolic systems, see [251, 164, 412, 51].
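The structure of this iteration can be illustrated in the spatially homogeneous limit (all A^{j} = 0, m = 1, δF = δƒ = 0), where each stage of (3.123, 3.124) reduces to integrating the previous iterate, i.e., a Picard iteration. A sketch for F(u) = u² (grid size and iteration count are arbitrary choices of this example):

```python
# Picard-type iteration for the ODE limit of (3.123, 3.124): for u' = u^2,
# u(0) = 1, each stage solves the linear problem
#     d/dt u^{(k+1)} = (u^{(k)})^2,   u^{(k+1)}(0) = 1,
# here by trapezoidal quadrature on a grid over [0, T] with T = 1/2.

N, T = 2000, 0.5
h = T / N

u = [1.0] * (N + 1)                       # u^{(0)}: the constant initial guess
for k in range(40):                       # iterate the linear solves
    w = [1.0] * (N + 1)
    for i in range(N):
        w[i + 1] = w[i] + 0.5 * h * (u[i] ** 2 + u[i + 1] ** 2)
    u = w

print(abs(u[N] - 1.0 / (1.0 - T)) < 1e-4)   # True: converges to u(t) = 1/(1 - t)
```

The interval [0, 1/2] lies safely inside the existence interval [0, 1) of the exact solution, so the iterates converge; on longer intervals the blow-up discussed above prevents this.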
Abstract evolution operators
A general framework for treating evolution problems is based on methods from functional analysis. Here, one considers a linear operator A : D(A) ⊂ X → X with dense domain, \(\overline {D(A)} = X\), in a Banach space X and asks under which conditions the Cauchy problem
$${u_t}(t) = Au(t),\qquad t \geq 0,$$(3.125)
$$u(0) = f,$$(3.126)
possesses a unique solution curve, i.e., a continuously differentiable map u : [0, ∞) → D(A) ⊂ X satisfying Eqs. (3.125, 3.126) for each ƒ ∈ D(A). Under a mild assumption on A, this turns out to be the case if and only if^{Footnote 10} the operator A is the infinitesimal generator of a strongly continuous semigroup P(t), that is, a map \(P:[0,\infty) \rightarrow {\mathcal L}(X)\), with \({\mathcal L}(X)\) denoting the space of bounded, linear operators on X, with the properties that

(i)
P(0) = I,

(ii)
P(t + s) = P(t)P(s) for all t, s ≥ 0,

(iii)
\(\underset {t \rightarrow 0} {\lim} P(t)u = u\) for all \(u \in X\),

(iv)
\(D(A) = \left\{{u \in X:\underset {t \rightarrow 0} {\lim} {1 \over t}[P(t)u - u]\;{\rm{exists\;in}}\;X} \right\}\) and \(Au = \underset {t \rightarrow 0} {\lim} {1 \over t}[P(t)u - u],\quad u \in D(A)\).
In this case, the solution curve of the Cauchy problem (3.125, 3.126) is given by u(t) = P(t)ƒ, t ≥ 0, ƒ ∈ D(A). One can show [327, 51] that there always exist constants K ≥ 1 and α ∈ ℝ such that
$$\Vert P(t)\Vert \leq K{e^{\alpha t}},\qquad t \geq 0,$$(3.127)
which implies that ‖u(t)‖ ≤ Ke^{αt}‖ƒ‖ for all ƒ ∈ D(A) and all t ≥ 0. Therefore, the semigroup P(t) gives existence, uniqueness and continuous dependence on the initial data.
There are several results giving necessary and sufficient conditions for the linear operator A to generate a strongly continuous semigroup; see, for instance, [327, 51]. One useful result, which we formulate for Hilbert spaces, is the following:
Theorem 4 (Lumer-Phillips). Let X be a complex Hilbert space with scalar product (·, ·), and let A : D(A) ⊂ X → X be a linear operator. Let α ∈ ℝ. Then, the following statements are equivalent:

(i)
A is the infinitesimal generator of a strongly continuous semigroup P(t) such that ‖P(t)‖ ≤ e^{αt} for all t ≥ 0.

(ii)
A − αI is dissipative, that is, Re(u, Au − αu) ≤ 0 for all u ∈ D(A), and the range of A − λI is equal to X for some λ > α.
Example 24. As a simple example consider the Hilbert space X = L^{2}(ℝ^{n}) with the linear operator A : D(A) ⊂ X → X defined by
$$D(A): = \left\{{u \in X:\vert k{\vert ^2}{\mathcal F}u \in {L^2}({{\mathbb R}^n})} \right\},\qquad Au: = - {{\mathcal F}^{- 1}}\left[ {\vert k{\vert ^2}{\mathcal F}u} \right],$$
where Ƒ denotes the Fourier-Plancherel operator; see Section 2. Using Parseval’s identity, we find
$${\mathop{\rm Re}\nolimits} (u,Au) = - \int\limits_{{{\mathbb R}^n}} {\vert k{\vert ^2}\vert ({\mathcal F}u)(k){\vert ^2}{d^n}k} \leq 0,\qquad u \in D(A);$$
hence A is dissipative. Furthermore, let v ∈ L^{2}(ℝ^{n}); then
$$u: = {{\mathcal F}^{- 1}}\left[ {{{{\mathcal F}v} \over {1 + \vert k{\vert ^2}}}} \right]$$
defines an element in D(A) satisfying (I − A)u = u − Δu = v. Therefore, the range of A − I is equal to X, and Theorem 4 implies that A = Δ generates a strongly continuous semigroup P(t) on X such that ‖P(t)‖ ≤ 1 for all t ≥ 0. The curves u(t) := P(t)ƒ, t ≥ 0, ƒ ∈ L^{2}(ℝ^{n}) are the weak solutions to the heat equation on ℝ^{n}; see Section 3.1.2.
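Example 24 can be explored numerically by replacing ℝ with a large periodic grid (an approximation made only for this sketch): in Fourier space, P(t) multiplies each mode by e^{−|k|²t}, so both the contraction bound ‖P(t)ƒ‖ ≤ ‖ƒ‖ and the semigroup law P(t + s) = P(t)P(s) are immediate to test.

```python
import numpy as np

# Heat semigroup on a periodic grid (stand-in for R): in Fourier space
# P(t) multiplies each mode by exp(-k^2 t), so no mode grows.
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)

def P(t, f):
    return np.real(np.fft.ifft(np.exp(-k**2 * t) * np.fft.fft(f)))

f = np.exp(-x**2)                                # Gaussian initial data
norm = lambda g: np.sqrt(np.sum(g**2) * L / n)   # discrete L^2 norm

print(norm(P(1.0, f)) <= norm(f))                  # contraction: ||P(t)f|| <= ||f||
print(np.allclose(P(0.3, P(0.7, f)), P(1.0, f)))   # semigroup law: P(t+s) = P(t)P(s)
```

The grid length L is chosen large enough that the Gaussian data is effectively compactly supported, so the periodic wrap-around is negligible over the times shown.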
In general, the requirement for A − αI to be dissipative is equivalent to finding an energy estimate for the squared norm E := ‖u‖^{2} of u. Indeed, setting u(t) := P(t)ƒ and using u_{ t } = AP(t)ƒ, we find
$${d \over {dt}}E(t) = 2{\mathop{\rm Re}\nolimits} \left({u(t),Au(t)} \right) \leq 2\alpha \Vert u(t){\Vert ^2} = 2\alpha E(t)$$
for all t ≥ 0 and ƒ ∈ D(A), which yields the estimate
$$\Vert u(t)\Vert \leq {e^{\alpha t}}\Vert f\Vert, \qquad t \geq 0,$$
for all ƒ ∈ D(A). Given the dissipativity of A − αI, the second requirement, that the range of A − λI is X for some λ > α, is equivalent to demanding that the linear operator A − λI : D(A) → X be invertible. Therefore, proving this condition requires solving the linear equation
$$(A - \lambda I)u = v$$
for given v ∈ X. This condition is important for the existence of solutions, and shows that for general evolution problems, requiring an energy estimate is not sufficient. This statement is rather obvious, because given that A − αI is dissipative on D(A), one could just make D(A) smaller, and still have an energy estimate. However, if D(A) is too small, the Cauchy problem is overdetermined and a solution might not exist. We will encounter explicit examples of this phenomenon in Section 5, when discussing boundary conditions.
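Both conditions of Theorem 4 can be verified for a finite-dimensional model (a sketch added here; the standard second-difference matrix stands in for A = Δ on an interval with Dirichlet boundary conditions):

```python
import numpy as np

# Finite-difference model of A = d^2/dx^2 on (0, 1) with Dirichlet
# boundary conditions: the standard second-difference matrix.
n = 50
h = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

rng = np.random.default_rng(0)
u = rng.standard_normal(n)
# dissipativity with alpha = 0: (u, Au) <= 0 for all u
print(u @ A @ u <= 0.0)                              # True

# range condition: (I - A) u = v is solvable for every v (lambda = 1 > 0)
v = rng.standard_normal(n)
u_sol = np.linalg.solve(np.eye(n) - A, v)
print(np.allclose((np.eye(n) - A) @ u_sol, v))       # True
```

In the finite-dimensional setting the range condition is automatic once A is negative semi-definite; the point of the theorem is that in infinite dimensions it must be checked separately, as discussed above.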
Finding the correct domain D(A) for the infinitesimal generator A is not always a trivial task, especially for equations involving singular coefficients. Fortunately, there are weaker versions of the Lumer-Phillips theorem, which only require checking conditions on a subspace D ⊂ D(A), which is dense in X. It is also possible to formulate the Lumer-Phillips theorem on Banach spaces. See [327, 152, 51] for more details.
The semigroup theory can be generalized to time-dependent operators A(t), and to quasilinear equations where A(u) depends on the solution u itself. We refer the reader to [51] for these generalizations and for applications to examples from mathematical physics, including general relativity. The theory of strongly continuous semigroups has also been used for formulating well-posed initial-boundary value formulations for the Maxwell equations [354] and the linearized Einstein equations [309] with elliptic gauge conditions.
Initial-Value Formulations for Einstein’s Equations
In this section, we apply the theory discussed in Section 3 to well-posed Cauchy formulations of Einstein’s vacuum equations. The first such formulation dates back to the 1950s [169] and will be discussed in Section 4.1. Since then, there has been a plethora of new formulations, which distinguish themselves by the choice of variables (metric vs. tetrad, Christoffel symbols vs. connection coefficients, inclusion or not of curvature components as independent variables, etc.), the choice of gauges, and the use of the constraint equations in order to modify the evolution equations off the constraint surface. Many of these new formulations have been motivated by numerical calculations, which try to solve a given physical problem in a stable way.
By far the most successful formulations for numerically evolving compact-object binaries have been the harmonic system, which is based on the original work of [169], and that of Baumgarte-Shapiro-Shibata-Nakamura (BSSN) [390, 44]. For this reason, we review these two formulations in detail in Sections 4.1 and 4.3, respectively. In Section 4.2 we also review the Arnowitt-Deser-Misner (ADM) formulation [30], which is based on a Hamiltonian approach to general relativity and serves as a starting point for many hyperbolic systems, including the BSSN one. A list of references for hyperbolic reductions of Einstein’s equations not discussed here is given in Section 4.4.
The harmonic formulation
We start by discussing the harmonic formulation of Einstein’s field equations. Like in the potential formulation of electromagnetism, where the Lorentz gauge ∇_{ µ }A^{µ} = 0 allows one to cast Maxwell’s equations into a system of wave equations, it was observed early in [134, 269] that Einstein’s equations reduce to a system of wave equations when harmonic coordinates,
$${\nabla ^\mu}{\nabla _\mu}{x^\nu} = 0,\qquad \nu = 0,1,2,3,$$(4.1)
are used. There are many straightforward generalizations of these gauge conditions; one of them is to replace the right-hand side of Eq. (4.1) by given source functions H^{ν} [178, 182, 202].
In order to keep general covariance, we follow [232] and choose a fixed smooth background metric \({\overset \circ g _{\alpha \beta}}\) with corresponding Levi-Civita connection \(\overset \circ \nabla\), Christoffel symbols \(\overset \circ \Gamma {\,^\mu}_{\alpha \beta}\), and curvature tensor \(\overset \circ R {\,^\alpha}_{\beta \mu \nu}\). Then, the generalized harmonic gauge condition can be rewritten as^{Footnote 11}
In the particular case where H^{µ} = 0 and where the background metric is Minkowski in standard Cartesian coordinates, \(\overset \circ \Gamma {\,^\mu}_{\alpha \beta}\) vanishes, and the condition C^{µ} = 0 reduces to the harmonic coordinate expression (4.1). However, unlike condition (4.1), Eq. (4.3) yields a coordinateindependent condition for any given vector field H^{µ} on spacetime since the difference \({C^\mu}_{\alpha \beta}: = {\Gamma ^\mu}_{\alpha \beta}  \overset \circ \Gamma {\,^\mu}_{\alpha \beta}\) between two connections is a tensor field. In terms of the difference, \(\,{h_{\alpha \beta}}: = {g_{\alpha \beta}}  {\overset \circ g _{\alpha \beta}}\), between the dynamical and background metric, this tensor field can be expressed as
Of course, the coordinateindependence is now traded for the introduction of a background metric \({\overset \circ g _{\alpha \beta}}\), and the question remains of how to choose \({\overset \circ g _{\alpha \beta}}\) and the vector field H^{µ}. A simple possibility is to choose H^{µ} = 0 and \({\overset \circ g _{\alpha \beta}}\) equal to the initial data for the metric, such that h_{ µv } = 0 initially.
Einstein’s field equations in the gauge C^{µ} = 0 are equivalent to the wave system
where T_{ αβ } is the stress-energy tensor and G_{ N } Newton’s constant. This system is subject to the harmonic constraint
$${C^\mu} = 0.$$(4.6)
Hyperbolicity
For any given smooth stress-energy tensor T_{ αβ }, the equations (4.5) constitute a quasilinear system of ten coupled wave equations for the ten coefficients of the difference metric h_{ αβ } (or equivalently, for the ten components of the dynamical metric g_{ αβ }) and, therefore, we can apply the results of Section 3 to formulate a (local in time) well-posed Cauchy problem for the wave system (4.5) with initial conditions
$${h_{\alpha \beta}}(0,x) = h_{\alpha \beta}^{(0)}(x),\qquad {\partial _t}{h_{\alpha \beta}}(0,x) = k_{\alpha \beta}^{(0)}(x),$$(4.7, 4.8)
where \(h_{\alpha \beta}^{(0)}\) and \(k_{\alpha \beta}^{(0)}\) are two sufficiently-smooth symmetric tensor fields defined on the initial slice t = 0 satisfying the requirement that \({g_{\alpha \beta}}(0,x) = {\overset \circ g _{\alpha \beta}}(0,x) + h_{\alpha \beta}^{(0)}\) has Lorentz signature such that g^{00}(0, x) < 0 and the induced metric g_{ ij }(0, x), i, j = 1, 2, 3, on t = 0 is positive definite^{Footnote 12}. For detailed well-posed Cauchy formulations we refer the reader to the original work in [169]; see also [85], [164], and [246], which presents an improvement on the results in the previous references due to weaker smoothness assumptions on the initial data.
An alternative way of establishing the hyperbolicity of the system (4.5) is to cast it into first-order symmetric hyperbolic form [164, 18, 286]. There are several ways of constructing such a system; the simplest one is obtained [164] by introducing the first partial derivatives of g_{ αβ } as new variables,
$${k_{\alpha \beta}}: = {\partial _t}{g_{\alpha \beta}},\qquad {D_{j\alpha \beta}}: = {\partial _j}{g_{\alpha \beta}};$$
then, the second-order wave system (4.5) can be rewritten in the form
where l.o. denotes lower-order terms not depending on any derivatives of the state vector u = (g_{ αβ }, k_{ αβ }, D_{ jαβ }). The system of equations (4.9, 4.10, 4.11) constitutes a quasilinear first-order symmetric hyperbolic system for u with symmetrizer given by the quadratic form
However, it should be noted that the symmetrizer is only positive definite if g^{ij} is; that is, only if the time evolution vector field ∂_{ t } is timelike. In many situations, this requirement might be too restrictive. Inside a Schwarzschild black hole, for example, the asymptotically timelike Killing field ∂_{ t } is spacelike.
However, as indicated above, the first-order symmetric hyperbolic reduction (4.9, 4.10, 4.11) is not unique. A different reduction is based on the variables ũ = (h_{ αβ }, Π_{ αβ }, Φ_{ jαβ }), where \({\Pi _{\alpha \beta}}: = {n^\mu}{\overset \circ \nabla _\mu}{h_{\alpha \beta}}\) is the derivative of h_{ αβ } in the direction of the future-directed unit normal n^{µ} to the time slices t = const, and \({\Phi _{j\alpha \beta}}: = {\overset \circ \nabla _j}{h_{\alpha \beta}}\). This yields a first-order system, which is symmetric hyperbolic as long as the t = const slices are spacelike, independent of whether or not ∂_{ t } is timelike [18, 286].
Constraint propagation and damping
The hyperbolicity results described above guarantee that unique solutions of the nonlinear wave system (4.5) exist, at least for short times, and that they depend continuously on the initial data \(h_{\alpha \beta}^{(0)},k_{\alpha \beta}^{(0)}\). However, in order to obtain a solution of Einstein’s field equations one has to ensure that the harmonic constraint (4.3) is identically satisfied.
The system (4.5) is equivalent to the modified Einstein equations
where R^{αβ} denotes the Ricci tensor, and where C^{µ} = 0 if the harmonic constraint holds. From the twice contracted Bianchi identities 2∇_{ β }R^{αβ} − ∇^{α}(g_{ µν }R^{µν}) = 0 one obtains the following equation for the constraint variable C^{α},
This system describes the propagation of constraint violations, which are present if C^{α} is nonzero. For this reason, we call it the constraint propagation system, or subsidiary system. Provided the stress-energy tensor is divergence free, ∇_{ β }T^{αβ} = 0, this is a linear, second-order hyperbolic equation for C^{α}.^{Footnote 13} Therefore, it follows from the uniqueness properties of such hyperbolic problems that C^{α} = 0, provided the initial data \(h_{\alpha \beta}^{(0)},k_{\alpha \beta}^{(0)}\) satisfies the initial constraints
$${C^\mu}{\vert _{t = 0}} = 0,\qquad {\partial _t}{C^\mu}{\vert _{t = 0}} = 0.$$(4.15)
This turns out to be equivalent to solving C^{α}(0, x) = 0 plus the usual Hamiltonian and momentum constraints; see [169, 286]. Summarizing, specifying initial data \(h_{\alpha \beta}^{(0)},k_{\alpha \beta}^{(0)}\) satisfying the constraints (4.15), the corresponding unique solution to the nonlinear wave system (4.5) yields a solution to the Einstein equations.
However, in numerical calculations, one cannot assume that the initial constraints (4.15) are satisfied exactly, due to truncation and roundoff errors. The propagation of these errors is described by the constraint propagation system (4.14), and hyperbolicity guarantees that for each fixed time t > 0 of existence, these errors converge to zero if the initial constraint violation converges to zero, which is usually the case when resolution is increased. On the other hand, due to limited computer resources, one cannot reach the limit of infinite resolution, and from a practical point of view one does not want the constraint errors to grow rapidly in time for fixed resolution. Therefore, one would like to design an evolution scheme in which the constraint violations are damped in time, such that the constraint hypersurface is an attractor set in phase space. A general method for damping constraint violations in the context of first-order symmetric hyperbolic formulations of Einstein’s field equations was given in [74]. This method was then adapted to the harmonic formulation in [224]. The procedure proposed in [224] consists in adding lower-order friction terms to Eq. (4.13), which damp the constraint violations. Explicitly, the modified system reads
with n^{µ} the future-directed unit normal to the t = const surfaces, and κ and ρ real constants, where κ > 0 determines the timescale on which the constraint violations C^{µ} are damped.
With this modification the constraint propagation system reads
A mode analysis for linear vacuum perturbations of the Minkowski metric reveals [224] that for κ > 0 and ρ > −1 all modes, except those which are constant in space, are damped. Numerical codes based on the modified system (4.16) or similar systems have been used in the context of binary black-hole evolutions [335, 336, 286, 384, 36, 403, 320], the head-on collision of boson stars [323] and the evolution of black strings in five-dimensional gravity [279], among other references.
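The mechanism behind such friction terms can be illustrated with a scalar toy model (an illustration added here, not the actual system of [224]): for a damped wave equation C_{tt} + 2κC_{t} = C_{xx}, Fourier modes e^{st+ikx} obey s² + 2κs + k² = 0, and for κ > 0 every mode with k ≠ 0 decays, while the spatially constant mode is only marginally stable, mirroring the statement above.

```python
import cmath

# Toy model: damped wave equation C_tt + 2*kappa*C_t = C_xx.
# Modes C ~ e^{s t + i k x} satisfy s^2 + 2*kappa*s + k^2 = 0.
def mode_roots(kappa, k):
    disc = cmath.sqrt(kappa**2 - k**2)
    return (-kappa + disc, -kappa - disc)

kappa = 0.5
for k in (0.1, 1.0, 10.0):
    s1, s2 = mode_roots(kappa, k)
    print(k, s1.real < 0 and s2.real < 0)   # True: every k != 0 mode decays

s1, s2 = mode_roots(kappa, 0.0)
print(max(s1.real, s2.real) == 0.0)          # the constant mode is only marginal
```

For 0 < k < κ both roots are real and negative (overdamped); for k > κ they form a complex-conjugate pair with real part −κ (damped oscillations).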
For a discussion of possible effects due to nonlinearities in the constraint propagation system, see [185].
Geometric issues
The results described so far guarantee the local-in-time unique existence of solutions to Einstein’s equations in harmonic coordinates, given a sufficiently-smooth initial data set (h^{(0)}, k^{(0)}). However, since general relativity is a diffeomorphism invariant theory, some questions remain. The first issue is whether or not the harmonic gauge is sufficiently general such that any solution of the field equations can be obtained by this method, at least for short enough time. The answer is affirmative [169, 164]. Namely, let (M, g), M = (−ε, ε) × ℝ^{3}, be a smooth spacetime satisfying Einstein’s field equations such that the initial surface t = 0 is spacelike with respect to g. Then, we can find a diffeomorphism ϕ : M → M in a neighborhood of the initial surface, which leaves it invariant and casts the metric into the harmonic gauge. For this, one solves the harmonic wave map equation (4.2) with initial data
Since equation (4.2) is a second-order hyperbolic one, a unique solution exists, at least on some sufficiently-small time interval (−ε′, ε′). Furthermore, choosing ε′ > 0 small enough, ϕ : (−ε′, ε′) × ℝ^{3} → M describes a diffeomorphism when restricted to its image. By construction, ḡ := (ϕ^{−1})*g satisfies the harmonic gauge condition (4.3).
The next issue is the question of geometric uniqueness. Let g^{(1)} and g^{(2)} be two solutions of Einstein’s equations with the same initial data on t = 0, i.e., \(g_{\alpha \beta}^{(1)}(0,x) = g_{\alpha \beta}^{(2)}(0,x),{\partial _t}g_{\alpha \beta}^{(1)}(0,x) = {\partial _t}g_{\alpha \beta}^{(2)}(0,x)\). Are these solutions related, at least for small time, by a diffeomorphism? Again, the answer is affirmative [169, 164], because one can transform both solutions to harmonic coordinates using the above diffeomorphism ϕ without changing their initial data. It then follows by the uniqueness property of the nonlinear wave system (4.5) that the transformed solutions must be identical, at least on some sufficiently-small time interval. Note that this geometric uniqueness property also implies that the solutions are, at least locally, independent of the background metric. For further results on geometric uniqueness involving only the first and second fundamental forms of the initial surface, see [127], where it is shown that every such initial-data set satisfying the Hamiltonian and momentum constraints possesses a unique maximal Cauchy development.
Finally, we mention that results about the nonlinear stability of Minkowski spacetime with respect to vacuum and vacuum-scalar perturbations have been established based on the harmonic system [283, 284], offering an alternative proof to the one of [129].
The ADM formulation
In the usual 3+1 decomposition of Einstein’s field equations (see, for example, [214], for a thorough discussion of it) one evolves the three-metric and the extrinsic curvature (the first and second fundamental forms) relative to a foliation Σ_{ t } of spacetime by spacelike hypersurfaces. The motivation for this formulation stems from the Hamiltonian description of general relativity (see, for instance, Appendix E in [429]), where the “q” variables are the three-metric γ_{ ij } and the associated canonical momenta π^{ij} (the “p” variables) are related to the extrinsic curvature K_{ ij } according to
where γ = det(γ_{ ij }) denotes the determinant of the three-metric and K = γ^{ij}K_{ ij } the trace of the extrinsic curvature.
In York’s formulation [444] of the 3+1 decomposed Einstein equations, the evolution equations are
Here, the operator ∂_{0} is defined as ∂_{0} := α^{−1}(∂_{ t } − £_{ β }) with α and β^{i} denoting lapse and shift, respectively. It is equal to the Lie derivative along the future-directed unit normal n to the time slices when acting on covariant tensor fields orthogonal to n. Next, \(R_{ij}^{(3)}\) and D_{ j } are the Ricci tensor and covariant derivative operator belonging to the three-metric γ_{ ij }, and ρ := n^{α}n^{β}T_{ αβ } and σ_{ ij } := T_{ ij } are the energy density and the stress tensor as measured by observers moving along the future-directed unit normal n to the time slices. Finally, σ := γ^{ij}T_{ ij } denotes the trace of the stress tensor. The evolution system (4.20, 4.21) is subject to the Hamiltonian and momentum constraints,
where j_{ i } := −n^{β}T_{ iβ } is the flux density.
Algebraic gauge conditions
One issue with the evolution equations (4.20, 4.21) is the principal part of the Ricci tensor belonging to the three-metric,
which does not define a positive-definite operator. This is due to the fact that the linearized Ricci tensor is invariant with respect to infinitesimal coordinate transformations γ_{ ij } ↦ γ_{ ij } + 2∂_{(i}ξ_{j)} generated by a vector field ξ = ξ^{i}∂_{ i }. This has the following implications for the evolution equations (4.20, 4.21), assuming for the moment that lapse and shift are fixed, a priori specified functions, in which case the system is equivalent to the second-order system \(\partial _0^2{\gamma _{ij}} = - 2R_{ij}^{(3)} + {\rm{l}}{\rm{.o}}.\) for the three-metric. Linearizing and localizing as described in Section 3, one obtains a linear, constant coefficient problem of the form (3.56), which can be brought into first-order form via the reduction in Fourier space described in Section 3.1.5. The resulting first-order system has the form of Eq. (3.58) with the symbol
where R(k) is, up to a factor 2, the principal symbol of the Ricci operator,
Here, \(\overset \circ \alpha, \overset \circ \beta {\,^i}\) and \({\overset \circ \gamma _{ij}}\) refer to the frozen lapse, shift and three-metric, respectively. According to Theorem 2, the problem is well posed if and only if there is a uniformly positive and bounded symmetrizer \(h(\hat k)\) such that \(h(\hat k)R(\hat k)\) is symmetric and uniformly positive for \(\hat k \in {S^2}\). Although \(R(\hat k)\) is diagonalizable and its eigenvalues are not negative, some of them are zero, since \(R(\hat k){\gamma _{ij}} = 0\) for γ_{ ij } of the form \({\gamma _{ij}} = 2{{\hat k}_{(i}}{\xi _{j)}}\) with an arbitrary one-form ξ_{ j }; hence \(h(\hat k)R(\hat k)\) cannot be positive.
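The zero eigenvalues can be checked concretely. The formula below is the standard principal symbol of the flat-space linearized Ricci operator with |k̂| = 1 (written out explicitly as an assumption of this sketch): pure-gauge data 2k̂_{(i}ξ_{j)} is annihilated, while transverse-traceless data is not.

```python
import numpy as np

# Principal symbol of the linearized Ricci operator on flat space (unit |k|):
#   [R(k)h]_ij = (h_ij + k_i k_j tr(h) - k_i (k.h)_j - k_j (k.h)_i) / 2.
# Pure-gauge data h_ij = 2 k_(i xi_j) lies in its kernel, so R(k) has zero
# eigenvalues and h(k) R(k) cannot be positive definite.

def ricci_symbol(k, h):
    tr = np.trace(h)
    kh = k @ h
    return 0.5 * (h + np.outer(k, k) * tr
                  - np.outer(k, kh) - np.outer(kh, k))

k = np.array([0.0, 0.0, 1.0])                 # unit covector
xi = np.array([1.0, 2.0, 3.0])
h_gauge = np.outer(k, xi) + np.outer(xi, k)   # h_ij = 2 k_(i xi_j)
print(np.allclose(ricci_symbol(k, h_gauge), 0.0))       # True: zero eigenvalue

h_tt = np.diag([1.0, -1.0, 0.0])              # transverse-traceless data
print(np.allclose(ricci_symbol(k, h_tt), 0.5 * h_tt))   # True: nonzero action
```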
These arguments were used in [308] to show that the evolution system (4.20, 4.21) with fixed lapse and shift is weakly but not strongly hyperbolic. The results in [308] also analyze modifications of the equations for which the lapse is densitized and the Hamiltonian constraint is used to modify the trace of Eq. (4.21). The conclusion is that such changes cannot make the evolution equations (4.20, 4.21) strongly hyperbolic. Therefore, these equations, with given shift and densitized lapse, are not suited for numerical evolutions.^{Footnote 14}
Dynamical gauge conditions leading to a wellposed formulation
The results obtained so far often lead to the popular statement “The ADM equations are not strongly hyperbolic.” However, consider the possibility of determining the lapse and shift through evolution equations. A natural choice, motivated by the discussion in Section 4.1, is to impose the harmonic gauge constraint (4.3). Assuming that the background metric \({\overset \circ g _{\alpha \beta}}\) is Minkowski in Cartesian coordinates for simplicity, this yields the following equations for the 3+1 decomposed variables,
with ƒ a constant, which is equal to one for the harmonic time coordinate t. Let us analyze the hyperbolicity of the evolution system (4.27, 4.28, 4.20, 4.21) for the fields u = (α, β^{i}, γ_{ ij }, K_{ ij }), where for generality and later use, we do not necessarily assume ƒ = 1 in Eq. (4.27). Since this is a mixed first/second-order system, we base our analysis on the first-order pseudodifferential reduction discussed in Section 3.1.5. After linearizing and localizing, we obtain the constant coefficient linear problem
where \(\overset \circ \alpha, \overset \circ \beta {\,^k}\) and \({\overset \circ \gamma _{ij}}\) refer to the quantities corresponding to α, β^{k}, γ_{ ij } of the background metric when frozen at a given point. In order to rewrite this in first-order form, we perform a Fourier transformation in space and introduce the variables Û = (a, b_{ i }, l_{ ij }, p_{ ij }) with
where \(\vert k\vert := \sqrt {\overset \circ \gamma {\,^{ij}}{k_i}{k_j}}\) and the hatted quantities refer to their Fourier transform. With this, we obtain the first-order system Û_{ t } = P(ik)Û, where the symbol has the form \(P(ik) = i\overset \circ \beta {\,^s}{k_s}I + \overset \circ \alpha Q(ik)\) with
where \({{\hat k}_i}: = {k_i}/\vert k\vert\), \({{\hat k}^i}: = \overset \circ \gamma {\,^{ij}}{{\hat k}_j}\), \(l: = \overset \circ \gamma {\,^{ij}}{l_{ij}}\), and \(p: = \overset \circ \gamma {\,^{ij}}{p_{ij}}\). In order to determine the eigenfields S(k)^{−1}Û such that S(k)^{−1}P(ik)S(k) is diagonal, we decompose
into pieces parallel and orthogonal to \({{\hat k}_i}\), similar to Example 15. Then, the problem decouples into a tensor sector, involving \(({{\hat l}_{ij}},{{\hat p}_{ij}})\), into a vector sector, involving \(({{\bar b}_i},{{\bar l}_i},{{\bar p}_i})\) and a scalar sector involving \((a,\bar b,\bar l,\bar p,{{\bar l}\prime},{{\bar p}\prime})\). In the tensor sector, we have
which has the eigenvalues \(\pm i\vert k\vert\) with corresponding eigenfields \({{\hat l}_{ij}} \pm {{\hat p}_{ij}}\). In the vector sector, we have
which is also diagonalizable with eigenvalues \(0, \pm i\vert k\vert\) and corresponding eigenfields \({{\bar p}_j}\) and \({{\bar l}_j} \pm ({{\bar b}_j} + {{\bar p}_j})\). Finally, in the scalar sector we have
It turns out Q^{(scalar)}(ik) is diagonalizable with purely imaginary eigenvalues if and only if ƒ > 0 and ƒ ≠ 1. In this case, the eigenvalues and corresponding eigenfields are \(\pm i\vert k\vert, \, \pm i\vert k\vert, \, \pm i\sqrt f \vert k\vert\) and \({{\bar l}{\prime}} \pm {{\bar p}{\prime}},\bar l \pm (2\bar b + \bar p),\,a + f{{\bar l}{\prime}}/(f - 1) \pm \sqrt f [\bar p + (f + 1)/(f - 1)\,{{\bar p}{\prime}}]/2\), respectively. A symmetrizer for P(ik), which is smooth in \(k \in {S^2},\overset \circ \alpha, \overset \circ \beta {\,^k}\) and \({\overset \circ \gamma _{ij}}\), can be constructed from the eigenfields as in Example 15.
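The algebra behind this decomposition is straightforward to check numerically. The following sketch (numpy; for simplicity it assumes a flat background metric δ_{ij}, so indices are raised and lowered trivially) splits a symmetric tensor into its pieces parallel and orthogonal to \({{\hat k}_i}\), verifies that the transverse-traceless piece corresponding to the tensor sector is indeed traceless and transverse, and checks that the pieces reassemble the original tensor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Flat background metric assumed: indices raised/lowered with delta_ij.
k = rng.normal(size=3)
khat = k / np.linalg.norm(k)          # unit wave covector \hat k_i
P = np.eye(3) - np.outer(khat, khat)  # projector orthogonal to \hat k

l = rng.normal(size=(3, 3))
l = 0.5 * (l + l.T)                   # symmetric tensor l_ij

# Scalar pieces: longitudinal part l' and transverse trace \bar l
lp = khat @ l @ khat                  # l' = \hat k^i \hat k^j l_ij
lbar = np.tensordot(P, l)             # \bar l = P^ij l_ij

# Vector piece: mixed transverse-longitudinal part \bar l_i
lvec = P @ l @ khat

# Tensor piece: transverse-traceless part \hat l_ij
ltt = P @ l @ P - 0.5 * P * lbar

# The pieces reassemble the original tensor ...
recon = (ltt + 0.5 * P * lbar
         + np.outer(khat, lvec) + np.outer(lvec, khat)
         + np.outer(khat, khat) * lp)
assert np.allclose(recon, l)
# ... and the tensor piece is traceless and transverse
assert abs(np.trace(ltt)) < 1e-12
assert np.allclose(ltt @ khat, 0.0)
```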
Remarks:

If instead of imposing the dynamical shift condition (4.28), β^{i} is a priori specified, then the resulting evolution system, consisting of Eqs. (4.27, 4.20, 4.21), is weakly hyperbolic for any choice of ƒ. Indeed, in that case the symbol (4.36) in the vector sector reduces to the Jordan block
$${Q^{(vector)}}(ik)\left({\begin{array}{*{20}c} {{{\bar l}_j}} \\ {{{\bar p}_j}} \\ \end{array}} \right) = i\vert k\vert \left({\begin{array}{*{20}c} 0 & 1 \\ 0 & 0 \\ \end{array}} \right)\left({\begin{array}{*{20}c} {{{\bar l}_j}} \\ {{{\bar p}_j}} \\ \end{array}} \right),\qquad(4.38)$$ which cannot be diagonalized.

When linearized about Minkowski spacetime, it is possible to classify the characteristic fields into physical, constraint-violating and gauge fields; see [106]. For the system (4.29–4.32) the physical fields are the ones in the tensor sector, \({{\hat l}_{ij}} \pm {{\hat p}_{ij}}\), the constraint-violating ones are \({{\bar p}_j}\) and \({{\bar l}{\prime}} \pm {{\bar p}{\prime}}\), and the gauge fields are the remaining characteristic variables. Observe that the constraint-violating fields are governed by a strongly-hyperbolic system (see also Section 4.2.4 below), and that in this particular formulation of the ADM equations the gauge fields are coupled to the constraint-violating ones. This coupling is one of the properties that make it possible to cast the system as a strongly hyperbolic one.
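The obstruction in the first remark is easy to confirm symbolically; the following minimal sympy check shows that the Jordan block in Eq. (4.38) has only a single eigenvector and hence cannot be diagonalized:

```python
import sympy as sp

k = sp.symbols('k', positive=True)

# Vector-sector symbol for an a priori specified shift: a 2x2 Jordan block
Q = sp.I * k * sp.Matrix([[0, 1],
                          [0, 0]])

# Both eigenvalues vanish, but there is only a single eigenvector, so the
# symbol cannot be diagonalized: the system is only weakly hyperbolic.
eigenvalue, multiplicity, eigenvectors = Q.eigenvects()[0]
assert eigenvalue == 0 and multiplicity == 2
assert len(eigenvectors) == 1
assert not Q.is_diagonalizable()
```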
We conclude that the evolution system (4.27, 4.28, 4.20, 4.21) is strongly hyperbolic if and only if ƒ > 0 and ƒ ≠ 1. Although the full harmonic gauge condition (4.3) is excluded from these restrictions,^{Footnote 15} there is still a large family of evolution equations for the lapse and shift that give rise to a strongly hyperbolic problem together with the standard evolution equations (4.20, 4.21) from the 3+1 decomposition.
Elliptic gauge conditions leading to a well-posed formulation
Rather than fixing the lapse and shift algebraically or dynamically, an alternative, which has been considered in the literature, is to fix them according to elliptic equations. A natural restriction on the extrinsic geometry of the time slices Σ_{ t } is to require that their mean curvature, c = −K/3, vanishes or is constant [391]. Taking the trace of Eq. (4.21) and using the Hamiltonian constraint to eliminate the trace of \(R_{ij}^{(3)}\) yields the following equation for the lapse,
which is a second-order linear elliptic equation. The operator inside the square brackets is formally positive if the strong energy condition, ρ + σ ≥ 0, holds, and so it is invertible when defined on appropriate function spaces. See also [203] for generalizations of this condition. Concerning the shift, one choice, which is motivated by eliminating the “bad” terms in the expression for the Ricci tensor, Eq. (4.24), is the spatial harmonic gauge [25]. In terms of a fixed (possibly time-dependent) background metric \({\overset \circ \gamma _{ij}}\) on Σ_{ t }, this gauge is defined as (cf. Eq. (4.3))
where \(\overset \circ D\) is the LeviCivita connection with respect to \(\overset \circ \gamma\) and \(\overset \circ \Gamma {\,^k}_{ij}\) denote the corresponding Christoffel symbols. The main importance of this gauge is that it permits one to rewrite the Ricci tensor belonging to the three metric in the form
where \({\overset \circ D _k}\) denotes the covariant derivative with respect to the background metric \(\overset \circ \gamma\) and where the lower-order terms “l.o.” depend only on γ_{ ij } and its first derivatives \({\overset \circ D _k}{\gamma _{ij}}\). When V^{k} = 0 the operator on the right-hand side is second-order quasilinear elliptic, and with this, the evolution system (4.20, 4.21) has the form of a nonlinear wave equation for the three-metric γ_{ ij }. However, the coefficients and source terms in this equation still depend on the lapse and shift. For constant mean curvature slices the lapse satisfies the elliptic scalar equation (4.39), and with the spatial harmonic gauge the shift is determined by the requirement that Eq. (4.40) be preserved throughout evolution, which yields an elliptic vector equation for it. In [25] it was shown that the coupled hyperbolic-elliptic system consisting of the evolution equations (4.20, 4.21) with the Ricci tensor rewritten in elliptic form using the condition V^{k} = 0, the constant mean curvature condition (4.39), and this elliptic equation for β^{i}, gives rise to a well-posed Cauchy problem in vacuum. Besides eliminating the “bad” terms in the Ricci tensor, the spatial harmonic gauge also has other nice properties, which were exploited in the well-posed formulation of [25]. For example, the covariant Laplacian of a function ƒ is
which does not contain any derivatives of the three-metric γ^{ij} if V^{k} = 0. For applications of the hyperbolic-elliptic formulation in [25] to the global existence of expanding vacuum cosmologies, see [26, 27].
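As a simple illustration of why the formal positivity of the lapse operator matters, consider a 1D model of the CMC lapse equation (4.39) — a sketch only, not the actual 3+1 equation — in which a non-negative potential q plays the role of K_{ ij }K^{ij} + 4πG_{ N }(ρ + σ); the discretized operator is then positive definite and hence invertible:

```python
import numpy as np

# 1D model of the CMC lapse equation: (-d^2/dx^2 + q) alpha = s on [0, 1]
# with Dirichlet data alpha = 1 at both ends. The non-negative potential q
# stands in for K_ij K^ij + 4 pi G_N (rho + sigma) of Eq. (4.39).
N = 200
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
q = 1.0 + np.sin(np.pi * x) ** 2   # any positive potential
s = q.copy()                       # source chosen so that alpha = 1 is exact

# Standard second-order finite differences on the interior points
main = 2.0 / h**2 + q[1:-1]
A = (np.diag(main)
     - np.diag(np.ones(N - 2) / h**2, 1)
     - np.diag(np.ones(N - 2) / h**2, -1))

rhs = s[1:-1].copy()
rhs[0] += 1.0 / h**2               # boundary value alpha(0) = 1
rhs[-1] += 1.0 / h**2              # boundary value alpha(1) = 1

# A is symmetric positive definite (hence invertible) because q > 0
assert np.all(np.linalg.eigvalsh(A) > 0.0)
alpha = np.linalg.solve(A, rhs)
assert np.allclose(alpha, 1.0, atol=1e-8)
```

The source is chosen so that α ≡ 1 is the exact solution, which the solve recovers to rounding error.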
Other methods for specifying the shift have been proposed in [391], with the idea of minimizing a functional of the type
where Θ_{ ij } := ∂_{ t }γ_{ ij }/2 = −αK_{ ij } + D_{(i}β_{j)} is the strain tensor. Minimizing the functional I[β] therefore minimizes, in an averaged sense, time changes in the three-metric. In particular, I[β] attains its absolute minimum (zero) if ∂_{ t } is a Killing vector field, so one expects the resulting gauge condition to minimize the time dependence of the coordinate components of the three-metric. An alternative is to replace the strain by its trace-free part on the right-hand side of Eq. (4.43), giving rise to the minimal distortion gauge. Both conditions yield a second-order elliptic equation for the shift vector, which has unique solutions provided suitable boundary conditions are specified. For generalizations and further results on these types of gauge conditions, see [73, 203, 204]. However, it seems to be currently unknown whether or not these elliptic shift conditions, together with the evolution system (4.20, 4.21) and an appropriate condition on the lapse, lead to a well-posed Cauchy problem.
Constraint propagation
The evolution equations (4.20, 4.21) are equivalent to the components of the Einstein equations corresponding to the spatial part of the Ricci tensor,
and in order to obtain a solution of the full Einstein equations one also needs to solve the constraints H = 8πG_{ N }ρ and M_{ i } = 8πG_{ N }j_{ i }. As in Section 4.2.3, the constraint propagation system can be obtained from the twice contracted Bianchi identities, which, in the 3+1 decomposition, read
The condition of the stress-energy tensor being divergence-free leads to similar evolution equations for ρ and j_{ i }. Therefore, the equations (4.44) lead to the following symmetric hyperbolic system [190, 445] for the constraint variables \({\mathcal H}: = H - 8\pi {G_N}\rho\) and \({{\mathcal M}_i}: = {M_i} - 8\pi {G_N}{j_i}\),
As has also been observed in [190], the constraint propagation system associated with the standard ADM equations, where Eq. (4.44) is replaced by its trace-reversed version R_{ ij } − γ_{ ij }g^{μν}R_{ μν }/2 = 8πG_{ N }T_{ ij }, is
which is only weakly hyperbolic. Therefore, it is much more difficult to control the constraint fields in the standard ADM case than in York’s formulation of the 3+1 equations.
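The practical consequence of this difference can be illustrated by exponentiating model symbols: for a Jordan-block (weakly hyperbolic) symbol the solution operator grows linearly in |k|, so high-frequency constraint violations admit no frequency-independent bound, whereas a diagonalizable symbol with purely imaginary eigenvalues remains uniformly bounded. The 2×2 matrices below are illustrative toy symbols, not the actual constraint propagation symbols:

```python
import numpy as np

# Toy symbols: a Jordan block (weakly hyperbolic, as in the ADM constraint
# system) versus a diagonalizable symbol with imaginary eigenvalues.
t = 1.0
for k in (10.0, 100.0, 1000.0):
    N = np.array([[0.0, 1.0], [0.0, 0.0]])    # nilpotent: N @ N = 0
    s1 = np.array([[0.0, 1.0], [1.0, 0.0]])   # symmetric, eigenvalues +-1

    E_weak = np.eye(2) + 1j * k * t * N       # exp(ik t N), exactly
    E_strong = (np.cos(k * t) * np.eye(2)
                + 1j * np.sin(k * t) * s1)    # exp(ik t s1), exactly

    assert np.linalg.norm(E_weak, 2) > k * t             # grows with |k|
    assert np.isclose(np.linalg.norm(E_strong, 2), 1.0)  # bounded in |k|
```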
The BSSN formulation
The BSSN formulation is based on the 3+1 decomposition of Einstein’s field equations. Unlike the harmonic formulation, which has been motivated by the mathematical structure of the equations and the understanding of the Cauchy formulation in general relativity, this system has been mainly developed and improved based on its capability of numerically evolving spacetimes containing compact objects in a stable way. Interestingly, it turns out that in spite of the fact that the BSSN formulation is based on an entirely different motivation, mathematical questions like the well-posedness of its Cauchy problem can be answered, at least for most gauge conditions.
In the BSSN formulation, the three-metric γ_{ ij } and the extrinsic curvature K_{ ij } are decomposed according to
Here, K = γ^{ij} K_{ ij } and Ã_{ ij } are the trace and the traceless part, respectively, of the conformally-rescaled extrinsic curvature. The conformal factor e^{2ϕ} is determined by the requirement that the conformal metric have unit determinant. Aside from these variables one also evolves the lapse (α), the shift (β^{i}) and its time derivative (B^{i}), and the variable
In terms of the operator \({\partial _0} = {\partial _t}  {\beta ^j}{\partial _j}\) the BSSN evolution equations are
Here, quantities with a tilde refer to the conformal three-metric \({{\tilde \gamma}_{ij}}\), which is also used in order to raise and lower indices. In particular, \({{\tilde D}_i}\) and \({{\tilde \Gamma}^k}_{ij}\) denote the covariant derivative and the Christoffel symbols, respectively, with respect to \({{\tilde \gamma}_{ij}}\). Expressions with a superscript TF refer to their traceless part with respect to the conformal metric. Next, the sum \({{\tilde R}_{ij}} + R_{ij}^\phi\) represents the Ricci tensor associated with the physical three-metric γ_{ ij }, where
The term \({{\hat \partial}_0}{{\tilde \Gamma}^i}\) in Eq. (4.55) is set equal to the right-hand side of Eq. (4.59). The parameter m in the latter equation modifies the evolution flow off the constraint surface by adding the momentum constraint to the evolution equation for the variable \({{\tilde \Gamma}^i}\). This parameter was first introduced in [10] in order to compare the stability properties of the BSSN evolution equations with those of the ADM formulation.
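The algebraic content of the conformal decomposition can be verified numerically; the sketch below assumes the common convention \({{\tilde \gamma}_{ij}} = {e^{- 4\phi}}{\gamma _{ij}}\) with ϕ = ln(det γ)/12 (conventions differ between BSSN implementations) and checks that the conformal metric has unit determinant and that Ã_{ ij } is trace-free:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random SPD three-metric gamma_ij and a symmetric K_ij
M = rng.normal(size=(3, 3))
gamma = M @ M.T + 3.0 * np.eye(3)
K = rng.normal(size=(3, 3))
K = 0.5 * (K + K.T)

# Assumed convention: conformal metric with unit determinant
phi = np.log(np.linalg.det(gamma)) / 12.0
gamma_t = np.exp(-4.0 * phi) * gamma                # \tilde gamma_ij
trK = np.tensordot(np.linalg.inv(gamma), K)         # K = gamma^ij K_ij
A_t = np.exp(-4.0 * phi) * (K - gamma * trK / 3.0)  # \tilde A_ij

assert np.isclose(np.linalg.det(gamma_t), 1.0)      # unit determinant
assert abs(np.tensordot(np.linalg.inv(gamma_t), A_t)) < 1e-10  # trace-free
```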
The gauge conditions, which are imposed on the lapse and shift in Eqs. (4.52, 4.54, 4.55), were introduced in [52] and generalize the BonaMassó condition [62] and the hyperbolic Gamma driver condition [11]. It is assumed that the functions ƒ (α, ϕ, x^{µ}), G(α, ϕ, x^{µ}) and H(α, ϕ, x^{µ}) are strictly positive and smooth in their arguments, and that K_{0}(x^{µ}) and η^{i}(B^{j}, α, x^{µ}) are smooth functions of their arguments. The choice
with η a positive constant, corresponds to the evolution system used in many black-hole simulations based on 1 + log slicing and the moving puncture technique (see, for instance, [423] and references therein). Finally, the source terms S, Ŝ_{ ij } and S^{i} are defined in the following way: denoting by \(R_{ij}^{(3)}\) and \(R_{ij}^{(4)}\) the Ricci tensors belonging to the three-metric γ_{ ij } and the spacetime metric, respectively, and introducing the constraint variables
the source terms are defined as
For vacuum evolutions one sets S = 0, Ŝ_{ ij } = 0 and S^{i} = 0. When matter fields are present, the Einstein field equations are equivalent to the evolution equations (4.52–4.59) setting \(S = - 4\pi {G_N}(\rho + \sigma),{{\hat S}_{ij}} = 8\pi {G_N}\sigma _{ij}^{TF},{S^i} = 16\pi {G_N}m\alpha {{\tilde \gamma}^{ik}}{j_k}\) and the constraints H = 8πG_{ N }ρ, M_{ i } = 8πG_{ N }j_{ i } and C^{i} = 0.
When comparing Cauchy evolutions in different spatial coordinates, it is very convenient to reformulate the BSSN system such that it is covariant with respect to spatial coordinate transformations. This is indeed possible; see [77, 82]. One way of achieving this is to fix a smooth background three-metric \({\overset \circ \gamma _{ij}}\), as in Section 4.1, and to replace the fields ϕ and \({{\tilde \Gamma}^i}\) by the scalar and vector fields
where γ and \(\overset \circ \gamma\) denote the determinants of γ_{ ij } and \({\overset \circ \gamma_{ij}}\), and \({\overset \circ D_j}\) is the covariant derivative associated to the latter. If \({\overset \circ \gamma _{ij}}\) is flat^{Footnote 16} and time-independent, the corresponding BSSN equations are obtained by replacing \({\partial _k} \mapsto {\overset \circ D _k}\) and \({{\tilde \Gamma}^k}_{ij} \mapsto {{\tilde \Gamma}^k}_{ij} - \overset \circ \Gamma {\,^k}_{ij}\) in Eqs. (4.52–4.59, 4.60, 4.61, 4.64–4.66).
The hyperbolicity of the BSSN evolution equations
In fact, the ADM formulation in the spatial harmonic gauge described in Section 4.2.3 and the BSSN formulation are based on some common ideas. In the covariant reformulation of BSSN just mentioned, the variable \({{\tilde \Gamma}^i}\) is just the quantity V^{i} defined in Eq. (4.40), where γ_{ ij } is replaced by the conformal metric \({{\tilde \gamma}_{ij}}\). Instead of requiring \({{\tilde \Gamma}^i}\) to vanish, which would convert the operator on the right-hand side of Eq. (4.60) into a quasilinear elliptic operator, one promotes this quantity to an independent field satisfying the evolution equation (4.59) (see also the discussion below Equation (2.18) in [390]). In this way, the \({{\tilde \gamma}_{ij}}\)–\({{\tilde A}_{ij}}\) block of the evolution equations forms a wave system. However, this system is coupled through its principal terms to the evolution equations of the remaining variables, and so one needs to analyze the complete system. As follows from the discussion below, it is crucial to add the momentum constraint to Eq. (4.59) with an appropriate factor m in order to obtain a hyperbolic system.
The hyperbolicity of the BSSN evolution equations was first analyzed in a systematic way in [373], where it was established that for fixed shift and densitized lapse,
the evolution system (4.53, 4.56–4.59) is strongly hyperbolic for σ > 0 and m > 1/4 and symmetric hyperbolic for m > 1 and 6σ = 4m − 1. This was shown by introducing new variables and enlarging the system to a strongly or symmetric hyperbolic first-order one. In fact, similar first-order reductions were already obtained in [196, 188]. However, in [373] it was shown that the first-order enlargements are equivalent to the original system if the extra constraints associated with the definition of the new variables are satisfied, and that these extra constraints propagate independently of the BSSN constraints H = 0, M_{ i } = 0 and C^{i} = 0. This establishes the well-posedness of the Cauchy problem for the system (4.69, 4.53, 4.56–4.59) under the aforementioned conditions on σ and m. Based on the same method, a symmetric hyperbolic first-order enlargement of the evolution equations (4.52, 4.53, 4.56–4.59) and fixed shift was obtained in [52] under the conditions ƒ > 0 and 4m = 3ƒ + 1 and used to construct boundary conditions for BSSN. First-order strongly-hyperbolic reductions for the full system (4.52–4.59) have also been recently analyzed in [82].
An alternative and efficient method for analyzing the system consists in reducing it to a first-order pseudodifferential system, as described in Section 3.1.5. This method has been applied in [308] to derive a strongly hyperbolic system very similar to BSSN with fixed, densitized lapse and fixed shift. This system is then shown to yield a well-posed Cauchy problem. In [52] the same method was applied to the evolution system (4.52–4.59). Linearizing and localizing, one obtains a first-order system of the form \({{\hat U}_t} = P(ik)\hat U = i\overset \circ \beta {\,^s}{k_s}\hat U + \overset \circ \alpha Q(ik)\hat U\). The eigenvalues of Q(ik) are 0, \(\pm i, \pm i\sqrt m, \pm i\sqrt \mu, \pm i\sqrt f, \pm i\sqrt {GH}, \pm i\sqrt \kappa\), where we have defined µ := (4m − 1)/3 and κ := 4GH/3. The system is weakly hyperbolic provided that
and it is strongly hyperbolic if, in addition, the parameter and the functions ƒ, G, and H can be chosen such that the functions
are bounded and smooth. In particular, this requires that the numerators converge to zero at least as fast as the denominators when ƒ → κ, μ → κ or 3κ → 4m, respectively. Since κ > 0, the boundedness of κ/(ƒ − κ) requires that ƒ ≠ κ. For the standard choice m = 1, the conditions on the gauge parameters leading to strong hyperbolicity are, therefore, ƒ > 0, κ > 0 and ƒ ≠ κ. Unfortunately, for the choice (4.62, 4.63) used in binary black-hole simulations these conditions reduce to
which is typically violated at some two-surface, since asymptotically, α → 1 and ϕ → 0 while near black holes α is small and positive. It is currently not known whether or not the Cauchy problem is well posed if the system is strongly hyperbolic everywhere except at points belonging to a set of zero measure, such as a two-surface. Although numerical simulations based on finite-difference discretizations with the standard choice (4.62, 4.63) show no apparent sign of instabilities near such surfaces, the well-posedness of the Cauchy problem for the BSSN system (4.52–4.59) with the choice (4.62, 4.63) for the gauge source functions remains an open problem when the condition (4.72) is violated. However, a well-posed problem could be formulated by modifying the choice for the functions G and H such that ƒ ≠ κ and ƒ, κ > 0 are guaranteed to hold everywhere.
Yet a different approach to analyzing the hyperbolicity of BSSN has been given in [219, 220], based on a new definition of strong and symmetric hyperbolicity for evolution systems, which are first order in time and second order in space. Based on this definition, it has been verified that the BSSN system (4.69, 4.53, 4.56–4.59) is strongly hyperbolic for σ > 0 and m > 1/4 and symmetric hyperbolic for 6σ = 4m − 1 > 0. (Note that this generalizes the original result in [373] where, in addition, m > 1 was required.) The results in [220] also discuss more general 3+1 formulations, including the one in [308], and construct constraint-preserving boundary conditions. The relation between the different approaches to analyzing hyperbolicity of evolution systems, which are first order in time and second order in space, has been analyzed in [221].
Strong hyperbolicity for different versions of the gauge evolution equations (4.52, 4.54, 4.55), where the normal operator \({{\hat \partial}_0}\) is sometimes replaced by ∂_{ t }, has been analyzed in [222]. See Table I in that reference for a comparison between the different versions and the conditions they are subject to in order to satisfy strong hyperbolicity. It should be noted that when m = 1 and \({{\hat \partial}_0}\) is replaced by ∂_{ t }, conditions restricting the magnitude of the shift appear in addition to ƒ > 0 and ƒ ≠ κ.
Constraint propagation
As mentioned above, the BSSN evolution equations (4.52–4.59) are only equivalent to Einstein’s field equations if the constraints
are satisfied. Using the twice contracted Bianchi identities in their 3+1 decomposed form, Eqs. (4.45, 4.46), and assuming that the stressenergy tensor is divergence free, it is not difficult to show that the equations (4.52–4.59) imply the following evolution system for the constraint fields [52, 220]:
This is the constraint propagation system for BSSN, which describes the propagation of the constraint violations that are usually present in numerical simulations due to truncation and roundoff errors. There are at least three reasons for establishing the well-posedness of its Cauchy problem. The first reason is to show that the unique solution of the system (4.74–4.76) with zero initial data is the trivial solution. This implies that it is sufficient to solve the constraints at the initial time t = 0. Then, any smooth enough solution of the BSSN evolution equations with such data satisfies the constraint propagation system with \({\mathcal H} = 0,{{\mathcal M}_j} = 0\) and C^{i} = 0, and it follows from the uniqueness property of this system that the constraints must hold everywhere and at each time. In this way, one obtains a solution to Einstein’s field equations. However, in numerical calculations, the initial constraints are not exactly satisfied due to numerical errors. This brings us to the second reason for having a well-posed problem at the level of the constraint propagation system; namely, the continuous dependence on the initial data. Indeed, the initial constraint violations give rise to constraint-violating solutions; but, if these violations are governed by a well-posed evolution system, the norm of the constraint violations is controlled by those of the initial violations for each fixed time t > 0. In particular, the constraint violations must converge to zero if the initial constraint violations do. Since the initial constraint errors go to zero when resolution is increased (provided a stable numerical scheme is used to solve the constraints), this guarantees convergence to a constraint-satisfying solution.^{Footnote 17} Finally, the third reason for establishing well-posedness for the constraint propagation system is the construction of constraint-preserving boundary conditions, which will be explained in detail in Section 6.
The hyperbolicity of the constraint propagation system (4.74–4.76) has been analyzed in [220, 52, 81, 80], and [315] and shown to be reducible to a symmetric hyperbolic first-order system for m > 1/4. Furthermore, there are no superluminal characteristic fields if 1/4 < m ≤ 1. Because of finite speed of propagation, this means that BSSN with 1/4 < m ≤ 1 (which includes the standard choice m = 1) does not possess superluminal constraint-violating modes. This is an important property, for it shows that constraint violations that originate inside black-hole regions (which usually dominate the constraint errors due to high gradients at the punctures or stuffing of the black-hole singularities in the turducken approach [156, 81, 80]) cannot propagate to the exterior region.
In [353] a general result is derived, showing that under a mild assumption on the form of the constraints, strong hyperbolicity of the main evolution system implies strong hyperbolicity of the constraint propagation system, with the characteristic speeds of the latter being a subset of those of the former. The result does not hold in general if “strong” is replaced by “symmetric”, since there are known examples for which the main evolution system is symmetric hyperbolic, while the constraint propagation system is only strongly hyperbolic [108].
Other hyperbolic formulations
There exist many other hyperbolic reductions of Einstein’s field equations. In particular, there has been a large amount of work on casting the evolution equations into first-order symmetric [2, 182, 195, 3, 21, 155, 248, 443, 22, 74, 234, 254, 383, 377, 18, 285, 86] and strongly hyperbolic [62, 63, 12, 59, 60, 13, 64, 367, 222, 78, 58, 82] form; see [182, 352, 188, 353] for reviews. For systems involving wave equations for the extrinsic curvature, see [128, 2]; see also [424] and [20, 75, 374, 379, 436] for applications to perturbation theory and the linear stability of solitons and hairy black holes.
Recently, there has also been work deriving strongly or symmetric hyperbolic formulations from an action principle [79, 58, 243].
Boundary Conditions: The Initial-Boundary Value Problem
In Section 3 we discussed the general Cauchy problem for quasilinear hyperbolic evolution equations on the unbounded domain ℝ^{n}. However, in the numerical modeling of such problems one is faced with the finiteness of computer resources. A common approach for dealing with this problem is to truncate the domain via an artificial boundary, thus forming a finite computational domain with outer boundary. Absorbing boundary conditions must then be specified at the boundary such that the resulting IBVP is well posed and such that the amount of spurious reflection is minimized.
Therefore, we examine in this section quasilinear hyperbolic evolution equations on a finite, open domain Σ ⊂ ℝ^{n} with C^{∞}-smooth boundary ∂Σ. Let T > 0. We are considering an IBVP of the following form,
where u(t, x) ∈ ℂ^{m} is the state vector, A^{1}(t, x, u), …, A^{n}(t, x, u) are complex m×m matrices, F(t,x,u) ∈ ℂ^{m}, and b(t,x,u) is a complex r × m matrix. As before, we assume for simplicity that all coefficients belong to the class \(C_b^\infty ([0,T] \times \Sigma \times {{\rm{\mathbb C}}^m})\) of bounded, smooth functions with bounded derivatives. The data consists of the initial data \(f \in C_b^\infty (\Sigma, {{\rm{\mathbb C}}^m})\) and the boundary data \(g \in C_b^\infty ([0,T] \times \partial \Sigma, {{\rm{\mathbb C}}^r})\).
Compared to the initialvalue problem discussed in Section 3 the following new issues and difficulties appear when boundaries are present:

For a smooth solution to exist, the data f and g must satisfy appropriate compatibility conditions at the intersection S := {0} × ∂Σ between the initial and boundary surface [344]. Assuming that u is continuous, for instance, Eqs. (5.2, 5.3) imply that g(0, x) = b(0, x, f(x))f(x) for all x ∈ ∂Σ. If u is continuously differentiable, then taking a time derivative of Eq. (5.3) and using Eqs. (5.1, 5.2) leads to
$${g_t}(0,x) = c(x)\;\left[ {\sum\limits_{j = 1}^n {{A^j}} (0,x,f(x)){{\partial f} \over {\partial {x^j}}}(x) + F(0,x,f(x))} \right] + {b_t}(0,x,f(x))f(x),\qquad x \in \partial \Sigma ,\qquad(5.4)$$ where c(x) is the complex r × m matrix with coefficients
$$c{(x)^A}_{\;B} = b{(0,x,f(x))^A}_{\;B} + \sum\limits_{C = 1}^m {{{\partial {b^A}_C} \over {\partial {u^B}}}} (0,x,f(x))f{(x)^C},\qquad A = 1, \ldots ,r,\quad B = 1, \ldots ,m.\qquad(5.5)$$ Assuming higher regularity of u, one obtains additional compatibility conditions by taking further time derivatives of Eq. (5.3). In particular, for an infinitely-differentiable solution u, one has an infinite family of such compatibility conditions at S, and one must make sure that the data f, g satisfies each of them if the solution u is to be reproduced by the IBVP. If an exact solution u^{(0)} of the partial-differential equation (5.1) is known, a convenient way of satisfying these conditions is to choose the data such that in a neighborhood of S, f and g agree with the corresponding values for u^{(0)}, i.e., such that f(x) = u^{(0)}(0, x) and g(t, x) = b(t, x, u^{(0)}(t, x))u^{(0)}(t, x) for (t, x) in a neighborhood of S. However, depending on the problem at hand, this might be too restrictive.

The next issue is the question of what class of boundary conditions (5.3) leads to a well-posed problem. In particular, one would like to know which restrictions on the matrix b(t, x, u) imply the existence of a unique solution, provided the compatibility conditions hold. In order to illustrate this issue on a very simple example, consider the advection equation u_{ t } = u_{ x } on the interval [−1, 1]. The most general solution has the form u(t, x) = h(t + x) for some differentiable function h: (−1, ∞) → ℂ. The function h is determined on the interval [−1, 1] by the initial data alone, and so the initial data alone fixes the solution on the strip −1 − t ≤ x ≤ 1 − t. Therefore, one is not allowed to specify any boundary conditions at x = −1, whereas data must be specified for u at x = 1 in order to uniquely determine the function h on the interval (1, ∞).

Additional difficulties appear when the system has constraints, as in the case of electromagnetism and general relativity. In the previous Section 4, we saw in the case of Einstein’s equations that it is usually sufficient to solve these constraints on an initial Cauchy surface, since the Bianchi identities and the evolution equations imply that the constraints propagate. However, in the presence of boundaries one can only guarantee that the constraints remain satisfied inside the future domain of dependence of the initial surface Σ_{0} := {0} × Σ, unless the boundary conditions are chosen with care. Methods for constructing constraint-preserving boundary conditions, which make sure that the constraints propagate correctly on the whole spacetime domain [0, T] × Σ, will be discussed in Section 6.
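The counting argument for the advection model problem u_{ t } = u_{ x } in the second item above can be reproduced with a simple upwind discretization: the scheme requires boundary data only at the inflow boundary x = 1, while the point x = −1 is updated with the interior stencil. A sketch (the grid size, CFL factor and profile h are arbitrary choices):

```python
import numpy as np

# Upwind scheme for u_t = u_x on [-1, 1]. Characteristics travel to the
# left, so boundary data is imposed only at x = 1, chosen consistently
# with the exact solution u(t, x) = h(t + x).
h = lambda xi: np.sin(np.pi * xi)   # arbitrary smooth profile

N = 400
x = np.linspace(-1.0, 1.0, N + 1)
dx = 2.0 / N
dt = 0.5 * dx                       # CFL factor 1/2
u = h(x)                            # initial data f(x) = h(x)

t = 0.0
while t < 0.5 - 1e-12:
    u[:-1] += (dt / dx) * (u[1:] - u[:-1])  # one-sided (upwind) update
    t += dt
    u[-1] = h(t + 1.0)              # boundary data at x = 1 only

# First-order accurate: the numerical solution tracks h(t + x)
assert np.max(np.abs(u - h(t + x))) < 0.02
```

Imposing data at x = −1 instead would over-determine the problem, in line with the counting argument.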
There are two common techniques for analyzing an IBVP. The first, discussed in Section 5.1, is based on the linearization and localization principles, and reduces the problem to linear, constant-coefficient IBVPs, which can be explicitly solved using Fourier transformations, similar to the case without boundaries. This approach, called the Laplace method, is very useful for finding necessary conditions for the well-posedness of linear, constant-coefficient IBVPs. These conditions are likely also necessary for the quasilinear IBVP, since small-amplitude, high-frequency perturbations are essentially governed by the corresponding linearized, frozen-coefficient problem. Based on the Kreiss symmetrizer construction [258] and the theory of pseudodifferential operators, the Laplace method also gives sufficient conditions for the linear, variable-coefficient problem to be well posed; however, the general theory is rather technical. For a discussion and interpretation of this approach in terms of wave propagation, we refer to [241].
The second method, which is discussed in Section 5.2, is based on energy inequalities obtained from integration by parts and does not require the use of pseudodifferential operators. It provides a class of boundary conditions, called maximal dissipative, which leads to a well-posed IBVP. Essentially, these boundary conditions specify data for the incoming normal characteristic fields, or for an appropriate linear combination of the in- and outgoing normal characteristic fields. Although technically less involved than the Laplace method, this approach requires the evolution equations (5.1) to be symmetric hyperbolic in order to be applicable, and it gives sufficient, but not necessary, conditions for well-posedness.
In Section 5.3 we also discuss absorbing boundary conditions, which are designed to minimize spurious reflections from the boundary surface.
The Laplace method
Upon linearization and localization, the IBVP (5.1, 5.2, 5.3) reduces to a linear, constant-coefficient problem of the following form,
where A^{j} = A^{j}(t_{0}, x_{0}, u^{(0)} (t_{0}, x_{0})), b = b(t_{0}, x_{0},u^{(0)} (t_{0}, x_{0})) denote the matrix coefficients corresponding to A^{j}(t, x, u) and b(t, x, u) linearized about a solution u^{(0)} and frozen at the point p_{0} = (t_{0}, x_{0}), and where, for generality, we include the forcing term F_{0}(t, x) with components in the class \(C_b^\infty ([0,\infty) \times \Sigma)\). Since the freezing process involves a zoom into a very small neighborhood of p_{0}, we may replace Σ by ℝ^{n} for all points p_{0} lying inside the domain Σ. We are then back into the case of Section 3, and we conclude that a necessary condition for the IBVP (5.1, 5.2, 5.3) to be well posed at u^{(0)} is that all linearized, frozen coefficient Cauchy problems corresponding to p_{0} ∈ Σ are well posed. In particular, the equation (5.6) must be strongly hyperbolic.
Now let us consider a point p_{0} ∈ ∂Σ at the boundary. Since ∂Σ is assumed to be smooth, it will be mapped to a plane during the freezing process. Therefore, taking points p_{0} ∈ ∂Σ, it is sufficient to consider the linear, constant coefficient IBVP (5.6, 5.7, 5.8) on the half space
say. This is the subject of this subsection. Because we are dealing with a constant coefficient problem on the halfspace, we can reduce the problem to an ordinary differential boundary problem on the interval [0, ∞) by employing Fourier transformation in the directions t and y:= (x_{2}, …, x_{ n }) tangential to the boundary. More precisely, we first exponentially damp the function u(t, x) in time by defining for η > 0 the function
We denote by ũ_{ η }(ξ, x_{1}, k) the Fourier transformation of u_{ η }(t, x_{1}, y) with respect to t and the directions y tangential to the boundary, and define the Laplace-Fourier transformation of u by
then, ũ satisfies the following boundary value problem,
where, for notational simplicity, we set A := A^{1} and B^{j} := A^{j}, j = 2, …, n, and where B(s, k) := sI − iB^{2}k_{2} − … − iB^{n}k_{ n }. Here, \(\tilde F(s,{x_1},k) = {{\tilde F}_0}(s,{x_1},k) + \hat f({x_1},k)\) with \({{\tilde F}_0}\) and \({\hat f}\) denoting the Laplace-Fourier and Fourier transform, respectively, of F_{0} and f, and \(\tilde g(s,k)\) is the Laplace-Fourier transform of the boundary data g.
In the following, we assume for simplicity that the boundary matrix A is invertible and that Eq. (5.6) is strongly hyperbolic. An interesting example with a singular boundary matrix is mentioned in Example 26 below. Since A is invertible, we may rewrite Eq. (5.12) as the linear ordinary differential equation
where M(s,k):= A^{−1}B(s,k). We solve this equation subject to the boundary conditions (5.13) and the requirement that ũ vanishes as x_{1} → ∞. For this, it is useful to have information about the eigenvalues of M(s, k).
Lemma 3 ([258, 259, 228]). Suppose the equation (5.6) is strongly hyperbolic and the boundary matrix A has q negative and m − q positive eigenvalues. Then, M(s, k) has precisely q eigenvalues with negative real part and m − q eigenvalues with positive real part. (The eigenvalues are counted according to their algebraic multiplicity.) Furthermore, there is a constant δ > 0 such that the eigenvalues κ of M(s,k) satisfy the estimate
for all Re(s) > 0 and k ∈ ℝ^{n−1}.
Proof. Let Re(s) > 0, β ∈ ℝ and k ∈ ℝ^{n−1}. Then
Since the equation (5.6) is strongly hyperbolic, there is a constant K and matrices S(β, k) such that (see the comments below Definition 2)
for all (β, k) ∈ ℝ^{n}, where Λ(β, k) is a real, diagonal matrix. Hence,
and since sI − iΛ(β, k) is diagonal and its diagonal entries have real part greater than or equal to Re(s), it follows that
with δ:= (K^{2} ∣A∣)^{−1}. Therefore, the eigenvalues κ of M(s, k) must satisfy
for all β ∈ ℝ. Choosing β:= Im(κ) proves the inequality (5.15). Furthermore, since the eigenvalues κ = κ(s, k) can be chosen to be continuous functions of (s, k) [252], and since for k = 0, M(s, 0) = sA^{−1}, the number of eigenvalues κ with positive real part is equal to the number of positive eigenvalues of A. □
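The splitting asserted by Lemma 3 is easy to probe numerically. The following minimal sketch (an illustration added here, not part of the original analysis) uses the 2 × 2 model system with A¹ = diag(1, −1) and A² = [[0, 1], [1, 0]], which has the same eigenstructure as the Dirac example discussed below; it builds M(s, k) = A⁻¹(sI − ikA²) and checks that for Re(s) > 0 exactly q = 1 eigenvalue of M has negative real part, matching the single negative eigenvalue of the boundary matrix A = A¹:

```python
import numpy as np

def M(s, k):
    """M(s, k) = (A^1)^{-1} (s I - i k A^2) for the 2x2 model system
    with A^1 = diag(1, -1) and A^2 = [[0, 1], [1, 0]]."""
    A1 = np.diag([1.0, -1.0])
    A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
    return np.linalg.inv(A1) @ (s * np.eye(2) - 1j * k * A2)

rng = np.random.default_rng(0)
for _ in range(1000):
    s = rng.uniform(0.01, 5.0) + 1j * rng.uniform(-5.0, 5.0)  # Re(s) > 0
    k = rng.uniform(-5.0, 5.0)
    kappa = np.linalg.eigvals(M(s, k))
    # Lemma 3: q = 1 eigenvalue with Re < 0 and m - q = 1 with Re > 0,
    # both bounded away from the imaginary axis for Re(s) > 0.
    assert np.sum(kappa.real < 0) == 1 and np.sum(kappa.real > 0) == 1
    assert np.min(np.abs(kappa.real)) > 0.0
```

The same check applies verbatim to any strongly hyperbolic system with invertible boundary matrix; only the matrices A¹, A² need to be replaced.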
According to this lemma, the Jordan normal form of the matrix M(s, k) has the following form:
where T(s, k) is a regular matrix, N(s, k) is nilpotent (N(s, k)^{m} = 0) and
is the diagonal matrix with the eigenvalues of M(s,k), where κ_{1}, …,κ_{ q } have negative real part. Furthermore, N(s, k) commutes with D(s, k). Transforming to the variable ṽ(s, x, k):= T(s,k)^{−1}ũ(s, x, k) the boundary value problem (5.12, 5.13) simplifies to
Necessary conditions for well-posedness and the Lopatinsky condition
Having cast the IBVP into the ordinary differential system (5.23, 5.24), we are ready to obtain a simple necessary condition for well-posedness. For this, we consider the problem for \(\tilde F = 0\) and split ṽ = (ṽ_{−}, ṽ_{+}), where ṽ_{−} := (ṽ_{1}, …, ṽ_{ q }) and ṽ_{+} := (ṽ_{q+1}, …, ṽ_{ m }) are the variables corresponding to the eigenvalues of M(s, k) with negative and positive real parts, respectively. Accordingly, we split
and bT(s, k) = (b_{−}(s, k), b_{+}(s, k)). When \(\tilde F = 0\) the most general solution of Eq. (5.23) is
with constant vectors σ_{−}(s, k) ∈ ℂ^{q} and σ_{+}(s, k) ∈ ℂ^{m−q}. The expression for ṽ_{+} describes modes that grow exponentially in x_{1} and do not satisfy the required boundary condition at x_{1} → ∞ unless σ_{+}(s, k) = 0; hence, we set σ_{+}(s, k) = 0. In view of the boundary conditions (5.24), we then obtain the algebraic equation
Therefore, a necessary condition for existence and uniqueness is that the r × q matrix b_{−}(s, k) be square, i.e., r = q, and that
for all Re(s) > 0 and k ∈ ℝ^{n−1}. Let us make the following observations:

The condition (5.27) implies that we must specify exactly as many linearly independent boundary conditions as there are incoming characteristic fields, since q is the number of negative eigenvalues of the boundary matrix A = A^{1}.

The violation of condition (5.27) at some (s_{0}, k_{0}) with Re(s_{0}) > 0 and k_{0} ∈ ℝ^{n−1} gives rise to the simple wave solutions
$$u(t,{x_1},y) = {e^{{s_0}t + i{k_0}\cdot y}}\tilde u({s_0},{x_1},{k_0}),\qquad t \geq 0,\quad ({x_1},y) \in \Sigma ,$$(5.28)where ũ(s_{0}, ·, k_{0}) = T(s_{0}, k_{0})ṽ(s_{0}, ·, k_{0}) ∈ L^{2}(0, ∞) is a nontrivial solution of the problem (5.23, 5.24) with homogeneous data \(\tilde F = 0\) and \(\tilde g = 0\). Therefore, an equivalent necessary condition for well-posedness is that no such simple wave solutions exist. This is known as the Lopatinsky condition.

If such a simple wave solution exists for some (s_{0}, k_{0}), then the homogeneity of the problem implies the existence of a whole family,
$${u_\alpha}(t,{x_1},y) = {e^{\alpha ({s_0}t + i{k_0}\cdot y)}}\tilde u(\alpha {s_0},\alpha {x_1},\alpha {k_0}),\qquad t \geq 0,\quad ({x_1},y) \in \Sigma ,$$(5.29)of such solutions parametrized by α > 0. In particular, it follows that
$$\vert {u_\alpha}(t,{x_1},y)\vert \; = {e^{\alpha {\rm{Re}}({s_0})t}}\vert \tilde u(\alpha {s_0},\alpha {x_1},\alpha {k_0})\vert \; = {e^{\alpha {\rm{Re}}({s_0})t}}\vert {u_\alpha}(0,{x_1},y)\vert ,$$(5.30)such that
$${{\vert {u_\alpha}(t,{x_1},y)\vert} \over {\vert {u_\alpha}(0,{x_1},y)\vert}} = {e^{\alpha {\rm{Re}}({s_0})t}} \rightarrow \infty$$(5.31)for all t > 0, as α → ∞. Therefore, one has solutions growing exponentially in time at an arbitrarily large rate.^{Footnote 18}
Example 25. Consider the IBVP for the massless Dirac equation in two spatial dimensions (cf. Section 8.4.1 in [259]),
where a and b are two complex constants to be determined. Assuming f = 0, Laplace-Fourier transformation leads to the boundary-value problem
The eigenvalues and corresponding eigenvectors of the matrix M(s, k) are κ_{±} = ±λ and e_{±} = (ik, s ∓ λ)^{T}, with \(\lambda := \sqrt {{s^2} + {k^2}}\), where the root is chosen such that Re(λ) > 0 for Re(s) > 0. The solution, which is square integrable on [0, ∞), is the one associated with κ_{−}; that is,
with σ a constant. Inserting this into the boundary condition (5.36) leads to the condition
and the Lopatinsky condition is satisfied if and only if the expression inside the square brackets on the left-hand side is different from zero for all Re(s) > 0 and k ∈ ℝ. Clearly, this implies b ≠ 0, since otherwise this expression is zero for k = 0. Assuming b ≠ 0 and k ≠ 0, we then obtain the condition
for all z:= s/∣k∣ with Re(z) > 0, which is the case if and only if ∣a/b∣ ≤ 1 or a/b ∈ ℝ; see Figure 1. The particular case a = 0, b = 1 corresponds to fixing the incoming normal characteristic field u_{2} to g at the boundary.
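Assuming the boundary condition (5.34) couples the components as aũ₁ + bũ₂ = g̃ at x₁ = 0 (a form consistent with the solution formula above), the Lopatinsky determinant is iak + b(s + λ), and the criterion can be probed numerically through ψ(z) := z + √(z² + 1), whose image for Re(z) > 0 is contained in {∣w∣ > 1, Re(w) > 0}. A sketch, added here as an illustration:

```python
import numpy as np

def psi(z):
    # psi maps {Re z > 0} into {|w| > 1, Re w > 0}; the principal square
    # root has positive real part there, matching the branch Re(lambda) > 0.
    return z + np.sqrt(z * z + 1.0)

# Violating choice a/b = 2i (|a/b| > 1, not real): an explicit root exists.
# Inverting psi(z) = w gives z = (w - 1/w)/2; for w = -i*(2i) = 2, z = 0.75.
z0 = 0.75
assert abs(psi(z0) - 2.0) < 1e-12      # simple wave solution with Re(z) > 0

# Admissible choice a/b = 0.5i (|a/b| <= 1): the targets have modulus 1/2,
# while |psi| > 1 on the right half plane, so psi can never reach them.
re, im = np.meshgrid(np.linspace(0.01, 5, 200), np.linspace(-5, 5, 200))
z = re + 1j * im
for eps in (+1.0, -1.0):
    target = -1j * (0.5j) * eps        # = 0.5 * eps
    assert np.min(np.abs(psi(z) - target)) > 0.4
```

The two signs of `target` account for k = ±∣k∣ in z := s/∣k∣.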
Example 26. We consider the Maxwell evolution equations of Example 15 on the halfspace x_{1} > 0, and freeze the incoming normal characteristic fields to zero at the boundary. These fields are the ones defined in Eq. (3.54), which correspond to negative eigenvalues and k = ∂_{ x };^{Footnote 19} hence
where A = 2, 3 label the coordinates tangential to the boundary, and where we recall that \(\mu = \sqrt {\alpha \beta}\), assuming that α and β have the same sign such that the evolution system (3.50, 3.51) is strongly hyperbolic. In this example, we apply the Lopatinsky condition in order to find necessary conditions for the resulting IBVP to be well posed. For simplicity, we assume that \(\mu = \sqrt {\alpha \beta} = 1\), which implies that the system is strongly hyperbolic for all values of α ≠ 0, but symmetric hyperbolic only if −3/2 < α < 0; see Example 15.
In order to analyze the system, it is convenient to introduce the variables U_{1} := W_{22} + W_{33}, U_{ A } := W_{1A} − (1 + α)W_{A1}, Z := βW_{11} − (1 + β/2)U_{1}, and \({\bar W_{AB}} := W_{AB} - {\delta_{AB}}{U_1}/2\), which are motivated by the form of the characteristic fields with respect to the direction k = −∂_{1} normal to the boundary x_{1} = 0; see Example 15. With these assumptions and definitions, Laplace-Fourier transformation of the system (3.50, 3.51) yields
where we have used β = 1/α, since μ = 1. The last three equations are purely algebraic and can be used to eliminate the zero-speed fields \(\tilde Z\), \(\tilde W_{A1}\) and \(\tilde{\bar W}_{AB}\) from the remaining equations. The result is the ordinary differential system
In order to diagonalize this system, we decompose Ẽ_{ A } and Ũ_{ A } into their components parallel and orthogonal to k; if \(\hat k: = k/\vert k\vert\) and \({\hat l}\) form an orthonormal basis of the boundary x_{1} = 0,^{Footnote 20} then these are defined as
then, the system decouples into two blocks, one comprising the transverse quantities (Ẽ_{⊥}, Ũ_{⊥}) and the other the quantities (Ẽ_{1}, Ũ_{1}, Ẽ_{∥}, Ũ_{∥}). The first block gives
and the corresponding solutions with exponential decay at x_{1} → ∞ have the form
where σ_{0} is a complex constant, and where we have defined \(\lambda := \sqrt {{s^2} + \vert k{\vert ^2}}\) with the root chosen such that Re(λ) > 0 for Re(s) > 0. The second block is
with corresponding decaying solutions
with complex constants σ_{1} and σ_{2}.
On the other hand, Laplace-Fourier transformation of the boundary conditions (5.40) leads to
Inserting the solutions (5.43, 5.45) into this gives
and
In the first case, since Re(s + λ) ≥ Re(s) > 0, we obtain σ_{0} = 0 and there are no simple wave solutions in the transverse sector. In the second case, the determinant of the system is
which is different from zero if and only if \(z + \sqrt {{z^2} + 1} \neq \pm (1 + \alpha)\) for all Re(z) > 0, where z:= s/∣k∣. Since α is real, this is the case if and only if −2 ≤ α ≤ 0; see Figure 1.
We conclude that the strongly hyperbolic evolution system (3.50, 3.51) with αβ = 1 and incoming normal characteristic fields set to zero at the boundary does not give rise to a well-posed IBVP when α > 0 or α < −2. Note that this ill-posed range excludes the interval −3/2 < α < 0 for which the system is symmetric hyperbolic; that case is covered by the results in Section 5.2, which use energy estimates to show that symmetric hyperbolic problems with zero incoming normal characteristic fields are well posed.
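The criterion z + √(z² + 1) ≠ ±(1 + α) can be inverted in closed form: a violating root with Re(z) > 0 exists precisely when one of the targets w = ±(1 + α) satisfies ∣w∣ > 1 and yields z = (w − 1/w)/2 > 0. A short sketch (an added illustration; `violating_root` is a hypothetical helper name) confirming that this happens exactly for α > 0 or α < −2:

```python
import math

def violating_root(alpha):
    """Return a real z > 0 with z + sqrt(z^2 + 1) = ±(1 + alpha), i.e., a
    Lopatinsky violation, or None if no such z exists.  For real w with
    |w| > 1, the inverse of psi(z) = z + sqrt(z^2 + 1) is z = (w - 1/w)/2."""
    for w in (1.0 + alpha, -(1.0 + alpha)):
        if abs(w) > 1.0:
            z = (w - 1.0 / w) / 2.0
            if z > 0.0:
                # sanity check: z really solves psi(z) = w
                assert abs(z + math.sqrt(z * z + 1.0) - w) < 1e-12
                return z
    return None

# Ill posed exactly for alpha > 0 or alpha < -2 (with mu = 1, beta = 1/alpha):
assert violating_root(0.5) is not None
assert violating_root(-2.5) is not None
for alpha in (-2.0, -1.5, -0.5):
    assert violating_root(alpha) is None
```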
Sufficient conditions for well-posedness and boundary stability
Next, let us discuss sufficient conditions for the linear, constant coefficient IBVP (5.6, 5.7, 5.8) to be well posed. For this, we first transform the problem to trivial initial data by replacing u(t, x) with u(t, x) − e^{−t}f(x). Then, we obtain the IBVP
with \(F(t,x) = {F_0}(t,x) + {e^{-t}}[f(x) + \sum\limits_{j = 1}^n {A^j}{\partial \over {\partial {x^j}}}f(x)]\) and g(t, x) replaced by g(t, x) − e^{−t}bf(x). By applying the Laplace-Fourier transformation to it, one obtains the boundary-value problem (5.12, 5.13), which could be solved explicitly, provided the Lopatinsky condition holds. However, in view of the generalization to variable coefficients, one would like to have a method that does not rely on the explicit representation of the solution in Fourier space.
In order to formulate the next definition, let \(\Omega := [0,\infty) \times \bar \Sigma\) be the bulk and \({\mathcal T}: = [0,\infty) \times \partial \Sigma\) the boundary surface, and introduce the associated norms ∥ · ∥_{η,0,Ω} and \(\Vert \cdot \Vert_{\eta, 0, {\mathcal T}}\) defined by
where we have used the definition of u_{ η } as in Eq. (5.10). Using Parseval’s identities we may also rewrite these norms as
The relevant concept of wellposedness is the following one.
Definition 6. [258] The IBVP ( 5.50 , 5.51 , 5.52 ) is called strongly well posed in the generalized sense if there is a constant K > 0 such that each compatible data \(F \in C_0^\infty (\Omega)\) and \(g \in C_0^\infty ({\mathcal T})\) gives rise to a unique solution u satisfying the estimate
for all η > 0.
The inequality (5.55) implies that both the bulk norm ∥ · ∥_{η,0,Ω} and the boundary norm \(\Vert \cdot \Vert_{\eta, 0, {\mathcal T}}\) of u are bounded by the corresponding norms of F and g. For a trivial source term, F = 0, the inequality (5.55) implies, in particular,
which is an estimate for the solution at the boundary in terms of the norm of the boundary data g. In view of Eq. (5.54) this is equivalent to the following requirement.
Definition 7. [259, 267] The boundary problem ( 5.50 , 5.51 , 5.52 ) is called boundary stable if there is a constant K > 0 such that all solutions ũ(s,·,k) ∈ L^{2}(0,∞) of Eqs. ( 5.12 , 5.13 ) with \(\tilde F = 0\) satisfy
for all Re(s) > 0 and k ∈ ℝ^{n−1}.
Since boundary stability only requires considering solutions for trivial source terms, F = 0, it is a much simpler condition than Eq. (5.55). Clearly, strong wellposedness in the generalized sense implies boundary stability. The main result is that, modulo technical assumptions, the converse is also true: boundary stability implies strong wellposedness in the generalized sense.
Theorem 5. [258, 340] Consider the linear, constant coefficient IBVP ( 5.50 , 5.51 , 5.52 ) on the half space Σ = {(x_{1}, x_{2}, …, x_{ n }) ∈ ℝ^{n}: x_{1} > 0}. Assume that equation (5.50) is strictly hyperbolic, meaning that the eigenvalues of the principal symbol P_{0}(ik) are distinct for all k ∈ S^{n−1}. Assume that the boundary matrix A = A^{1} is invertible. Then, the problem is strongly well posed in the generalized sense if and only if it is boundary stable.
Perhaps the importance of Theorem 5 lies not so much in its statement, which concerns only the linear, constant coefficient case for which the solutions can also be constructed explicitly, but rather in the method of its proof, which is based on the construction of a smooth symmetrizer symbol and is amenable to generalizations to the variable coefficient case using pseudodifferential operators.
In order to formulate the result of this construction, define \(\rho := \sqrt{|s|^2 + |k|^2}\), \(s' := s/\rho\), \(k' := k/\rho\), such that \((s', k') \in S_+^n\) lies on the half sphere \(S_+^n := \{(s', k') \in \mathbb{C} \times \mathbb{R}^{n-1} : |s'|^2 + |k'|^2 = 1, \mathrm{Re}(s') > 0\}\) for Re(s) > 0 and k ∈ ℝ^{n−1}. Then, we have,
Theorem 6. [258] Consider the linear, constant coefficient IBVP ( 5.50 , 5.51 , 5.52 ) on the half space Σ. Assume that equation (5.50) is strictly hyperbolic, that the boundary matrix A = A^{1} is invertible, and that the problem is boundary stable. Then, there exists a family of complex m × m matrices \(H({s{\prime}},k{\prime})\), \(({s{\prime}},{k{\prime}}) \in S_ + ^n\), whose coefficients belong to the class \({C^\infty}(S_ + ^n)\), with the following properties:

(i)
H(s′, k′) = H(s′, k′)* is Hermitian.

(ii)
H(s′, k′)M(s′, k′) + M(s′, k′)*H(s′, k′) ≥ 2Re(s′)I for all \((s', k') \in S_+^n\).

(iii)
There is a constant C > 0 such that
$${\tilde u^{\ast}}H(s\prime ,k\prime)\tilde u + C\vert b\tilde u{\vert ^2}\; \geq \;\vert \tilde u{\vert ^2}$$(5.58)for all ũ ∈ ℂ^{m} and all \((s', k') \in S_+^n\).
Furthermore, H can be chosen to be a smooth function of the matrix coefficients of A^{j} and b.
Let us show how the existence of the symmetrizer H(s′, k′) implies the estimate (5.55). First, using Eq. (5.14) and properties (i) and (ii) we have
where we have used the fact that M(s, k) = ρM(s′, k′) in the second step, and the inequality \(2{\rm Re}(a^\ast b) \leq 2\vert a\vert \vert b\vert \leq {C_1}\vert a{\vert ^2} + C_1^{-1}\vert b{\vert ^2}\), valid for complex numbers a and b and any constant C_{1} > 0, in the third step. Integrating both sides from x_{1} = 0 to ∞ and choosing C_{1} = Re(s), we obtain, using (iii),
Since H is bounded, there exists a constant C_{2} > 0 such that \(\vert H{A^{ 1}}\tilde F\vert \leq {C_2}\vert \tilde F\vert\) for all \(({s{\prime}},{k{\prime}}) \in S_ + ^n\). Integrating over ξ = Im(s) ∈ ℝ and k ∈ ℝ^{n−1} and using Parseval’s identity, we obtain from this
and the estimate (5.55) follows with \({K^2}: = \max \{C_2^2,C\}\).
Example 27. Let us go back to Example 25 of the 2D Dirac equation on the half-space with boundary condition (5.34) at x = 0. The solution of Eqs. (5.35, 5.36) at the boundary is given by ũ(s, 0, k) = σ(ik, s + λ)^{T}, where \(\lambda = \sqrt {{s^2} + {k^2}}\), and
Therefore, the IBVP is boundary stable if and only if there exists a constant K > 0 such that
for all Re(s) > 0 and k ∈ ℝ. We may assume b ≠ 0; otherwise, the Lopatinsky condition is violated. For k = 0 the left-hand side is 1/∣b∣. For k ≠ 0 we can rewrite the condition as
for all Re(z) > 0, where \(\psi (z): = z + \sqrt {{z^2} + 1}\) and z:= s/∣k∣. This is satisfied if and only if the function \(\vert \psi (z) \pm i{a \over b}\vert\) is bounded away from zero, which is the case if and only if ∣a/b∣ < 1; see Figure 1.
This, together with the results obtained in Example 25, yields the following conclusions: the IBVP (5.32, 5.33, 5.34) gives rise to an ill-posed problem if b = 0 or if ∣a/b∣ > 1 and a/b ∉ ℝ, and to a problem which is strongly well posed in the generalized sense if b ≠ 0 and ∣a/b∣ < 1. The case ∣a∣ = ∣b∣ ≠ 0 is covered by the energy method discussed in Section 5.2. For the case ∣a/b∣ > 1 with a/b ∈ ℝ, see Section 10.5 in [228].
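The gap between the Lopatinsky condition (∣a/b∣ ≤ 1 or a/b ∈ ℝ) and boundary stability (∣a/b∣ < 1) sits on the imaginary axis: for ∣a/b∣ = 1 the determinant never vanishes for Re(s) > 0, but it degenerates as Re(s) → 0, so no uniform constant K exists. A numerical sketch of this borderline loss (an added illustration):

```python
import numpy as np

def psi(z):
    # psi(z) = z + sqrt(z^2 + 1), principal branch; on the imaginary axis
    # with |Im z| <= 1 its values sweep the unit circle.
    return z + np.sqrt(z * z + 1.0 + 0j)

# Borderline case a/b = i: the target -i*(a/b) = 1 has modulus one, and
# |psi(z) - 1| -> 0 as z -> 0 from the right: the Kreiss constant blows up.
for eta in (1e-2, 1e-4, 1e-6):
    assert abs(psi(eta + 0j) - 1.0) < 3.0 * eta

# Strict case |a/b| = 1/2 < 1: since |psi| >= 1 on the closed half plane,
# the distance to the modulus-1/2 target stays uniformly positive.
ys = np.linspace(-10, 10, 2001)
assert np.min(np.abs(psi(1e-8 + 1j * ys) - 0.5)) > 0.4
```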
Before discussing second-order systems, let us make a few remarks concerning Theorem 5:

The boundary stability condition (5.57) is often called the Kreiss condition. Provided the eigenvalues of the matrix M(s, k) are suitably normalized, it can be shown [258, 228, 241] that the determinant det(b_{−}(s, k)) in Eq. (5.27) can be extended to a continuous function defined for all Re(s) ≥ 0 and k ∈ ℝ^{n−1}, and condition (5.57) can be restated as the following algebraic condition:
$$\det ({b_ }(s,k)) \neq 0$$(5.64)for all Re(s) ≥ 0 and k ∈ ℝ^{n−1}. This is a strengthened version of the Lopatinsky condition, since it requires the determinant to be different from zero also for s on the imaginary axis.

As anticipated above, the importance of the symmetrizer construction in Theorem 6 relies on the fact that, based on the theory of pseudodifferential operators, it can be used to treat the linear, variable coefficient IBVP [258]. Therefore, the localization principle holds: if all the frozen coefficient IBVPs are boundary stable and satisfy the assumptions of Theorem 5, then the variable coefficient problem is strongly well posed in the generalized sense.

If the problem is boundary stable, it is also possible to estimate higher-order derivatives of the solutions. For example, if we multiply both sides of the inequality (5.59) by ∣k∣^{2}, integrate over ξ = Im(s) and k, and use Parseval’s identity as before, we obtain the estimate (5.55) with u, F and g replaced by their tangential derivatives u_{ y }, F_{ y } and g_{ y }, respectively. Similarly, one obtains the estimate (5.55) with u, F and g replaced by their time derivatives u_{ t }, F_{ t } and g_{ t } if we multiply both sides of the inequality (5.59) by ∣s∣^{2} and assume that u_{ t }(0, x) = 0 for all x ∈ Σ.^{Footnote 21} Then, a similar estimate follows for the partial derivative ∂_{1}u in the x_{1}-direction, using the evolution equation (5.6) and the fact that the boundary matrix A^{1} is invertible. Estimates for higher-order derivatives of u follow by an analogous process.

Theorem 5 assumes that the initial data f is trivial, which is not an important restriction since one can always achieve f = 0 by transforming the source term F and the boundary data g, as described below Eq. (5.52). Since the transformed F involves derivatives of f, this means that derivatives of f would appear on the right-hand side of the inequality (5.55), and at first sight it looks like one “loses a derivative” in the sense that one needs to control the derivatives of f to one degree higher than the ones of u. However, the results in [341, 342] improve the statement of Theorem 5 by allowing nontrivial initial data and by showing that the same hypotheses lead to a stronger concept of well-posedness (strong well-posedness, defined below in Definition 9, as opposed to strong well-posedness in the generalized sense).

The results mentioned so far assume strict hyperbolicity and an invertible boundary matrix, which are too restrictive conditions for many applications. Unfortunately, there does not seem to exist a general theory that removes these two assumptions. Partial results include [5], which treats strongly hyperbolic problems with an invertible boundary matrix that are not necessarily strictly hyperbolic, and [293], which discusses symmetric hyperbolic problems with a singular boundary matrix.
Second-order systems
It has been shown in [267] that certain systems of wave equations can be reformulated in such a way that they satisfy the hypotheses of Theorem 6. In order to illustrate this, we consider the IBVP for the wave equation on the half-space Σ := {(x_{1}, x_{2}, …, x_{ n }) ∈ ℝ^{n}: x_{1} > 0}, n ≥ 1,
where \(F \in C_0^\infty ([0,\infty) \times \Sigma)\) and \(g \in C_0^\infty ([0,\infty) \times \partial \Sigma)\), and where L is a first-order linear differential operator of the form
where a, b, c_{2}, …, c_{ n } are real constants. We ask under which conditions on these constants the IBVP (5.65, 5.66, 5.67) is strongly well posed in the generalized sense. Since we are dealing with a second-order system, the estimate (5.55) in Definition 6 has to be replaced with
where the norms \(\Vert \cdot \Vert _{\eta, 1, \Omega}^2\) and \(\Vert \cdot \Vert _{\eta, 1, {\mathcal T}}^2\) control the first partial derivatives of v,
with (x^{μ}) = (t, x_{1}, x_{2}, …, x_{ n }). Likewise, the inequality (5.57) in the definition of boundary stability needs to be replaced by
Laplace-Fourier transformation of Eqs. (5.65, 5.67) leads to the second-order differential problem
where we have defined \(c(k) := \sum\limits_{j = 2}^n {{c_j}{k_j}}\) and where \({\tilde F}\) and \({\tilde g}\) denote the Laplace-Fourier transformations of F and g, respectively. In order to apply the theory described in Section 5.1.2, we rewrite this system in first-order pseudodifferential form. Defining
where \(\rho := \sqrt {\vert s{\vert ^2} + \vert k{\vert ^2}}\), we find
where we have defined
with s′ := s/ρ, k′:= k/ρ. This system has the same form as the one described by Eqs. (5.14, 5.13), and the eigenvalues of the matrix M(s,k) are distinct for Re(s) > 0 and k ∈ ℝ^{n−1}. Therefore, we can construct a symmetrizer H(s′, k′) according to Theorem 6 provided that the problem is boundary stable. In order to check boundary stability, we diagonalize M(s, k) and consider the solution of Eq. (5.74) for \(\tilde f = 0\), which decays exponentially as x_{1} → ∞,
where σ_{−} is a complex constant and \(\lambda := \sqrt {{s^2} + \vert k{\vert ^2}}\), with the root chosen such that Re(λ) > 0 for Re(s) > 0. Inserting this into the boundary condition (5.75) gives
and the system is boundary stable if and only if the expression inside the square brackets is different from zero for all Re(s′) ≥ 0 and k′ ∈ ℝ^{n−1} with ∣s′∣^{2} + ∣k′∣^{2} = 1. In the one-dimensional case, n = 1, this condition reduces to (a + b)s′ = 0 with ∣s′∣ = 1, and the system is boundary stable if and only if a + b ≠ 0; that is, if and only if the boundary vector field L is not proportional to the ingoing null vector at the boundary surface,
Indeed, if a + b = 0, Lu = a(u_{ t } + u_{x1}) is proportional to the outgoing characteristic field, for which it is not permitted to specify boundary data since it is completely determined by the initial data.
When n ≥ 2, it follows that b must be different from zero, since otherwise the square brackets vanish for purely imaginary s′ satisfying as′ = ic(k′). Therefore, one can choose b = 1 without loss of generality. It can then be shown that the system is boundary stable if and only if a > 0 and \(\sum\limits_{j = 2}^n {\vert{c_j}{\vert^2} < {a^2}}\); see [267]. This is equivalent to the condition that the boundary vector field L points outside the domain and that its orthogonal projection onto the boundary surface \({\mathcal T}\),
is future-directed timelike. This includes as a particular case the “Sommerfeld” boundary condition u_{ t } − u_{x1} = 0, for which L is the null vector obtained from the sum of the time evolution vector field ∂_{ t } and the normal derivative \(N = -{\partial _{{x_1}}}\). While N is uniquely determined by the boundary surface \({\mathcal T}\), ∂_{ t } is not unique, since one can transform it to an arbitrary future-directed timelike vector field T tangent to \({\mathcal T}\) by means of an appropriate Lorentz transformation. Since the wave equation is Lorentz-invariant, it is clear that the new boundary vector field \(\hat L = T + N\) must also give rise to a well-posed IBVP, which explains why there is so much freedom in the choice of L.
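These statements can be checked numerically. For n = 2, and assuming the boundary symbol acting on the decaying mode e^{st−λx₁+iky} takes the form as + bλ + ic(k) (with b multiplying the normal derivative N, so the Sommerfeld case is a = b = 1, c = 0), the following sketch (an added illustration) exhibits a symbol uniformly bounded below for c² < a², and an explicit zero on the imaginary axis when c² > a²:

```python
import numpy as np

def lam(s, k):
    # root of lambda^2 = s^2 + k^2 with Re(lambda) > 0 for Re(s) > 0
    return np.sqrt(s * s + k * k + 0j)

def boundary_symbol(s, k, a, b, c):
    # assumed symbol of L = a*dt + b*N + c*dy on the mode e^{st - lam*x1 + iky}
    # (N = -d/dx1, so the normal term contributes +b*lam)
    return a * s + b * lam(s, k) + 1j * c * k

eta = 1e-8                                  # approach Re(s) = 0 from the right
t = np.linspace(0.0, 2.0 * np.pi, 4001)
s, k = eta + 1j * np.cos(t), np.sin(t)      # unit half sphere |s|^2 + k^2 = 1

# Admissible case a = b = 1, c = 0.5 (c^2 < a^2): uniformly bounded below.
assert np.min(np.abs(boundary_symbol(s, k, 1.0, 1.0, 0.5))) > 0.2

# Violating case c = 2 (c^2 > a^2): explicit zero at s = -1.25i, k = 1.
assert abs(boundary_symbol(eta - 1.25j, 1.0, 1.0, 1.0, 2.0)) < 1e-6
```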
For a more geometric derivation of these results, based on estimates derived from the stress-energy tensor associated with the scalar field v, which shows that the above construction for L is sufficient for strong well-posedness, see Appendix B in [263]. For a generalization to the shifted wave equation, see [369].
As pointed out in [267], the advantage of obtaining the strong well-posedness estimate (5.69) for the scalar wave problem is that it allows the treatment of systems of wave equations whose boundary conditions are coupled in a certain way through terms involving first derivatives of the fields. In order to illustrate this with a simple example, consider a system of two wave equations,
which is coupled through the boundary conditions
where N has the form
with (X^{0}, X^{1}, …, X^{n}) ∈ ℂ^{n+1} any vector. Since the wave equation and boundary condition for v_{1} decouples from the one for v_{2}, we can apply the estimate (5.69) to v_{1}, obtaining
If we set \({g_3}(t,x): = {g_2}(t,x) + X{\upsilon _1}(t,x),t \geq 0,x \in \partial \Sigma\), we have a similar estimate for v_{2},
However, since the boundary norm of v_{1} is controlled by the estimate (5.84), one also controls
with some constant C > 0 depending only on the vector field X. Therefore, the inequalities (5.84, 5.85) together yield an estimate of the form (5.69) for v = (v_{1}, v_{2}), F = (F_{1}, F_{2}) and g = (g_{1}, g_{2}), which shows strong well-posedness in the generalized sense for the coupled system. Notice that the key point allowing the coupling of v_{1} and v_{2} through the boundary matrix operator N is the fact that one controls the boundary norm of v_{1} in the estimate (5.84). The result can be generalized to larger systems of wave equations, where the matrix operator N is in triangular form with zeroes on the diagonal, or where it can be brought into this form by an appropriate transformation [267, 264].
Example 28. As an application of the theory for systems of wave equations coupled through the boundary conditions, we discuss Maxwell’s equations in their potential formulation on the half space Σ [267]. In the Lorentz gauge and in the absence of sources, this system is described by the four wave equations ∂^{μ}∂_{ μ }A_{ν} = 0 for the components (A_{ t }, A_{ x }, A_{ y }, A_{ z }) of the vector potential A_{ μ }, which are subject to the constraint C := ∂^{μ}A_{ μ } = 0, where we use the Einstein summation convention.
As a consequence of the wave equation for A_{ ν }, the constraint variable C also satisfies the wave equation, ∂^{μ}∂_{ μ }C = 0. Therefore, the constraint propagates correctly if the initial data is chosen such that C and its first time derivative vanish, and if C is set to zero at the boundary. Setting C = 0 at the boundary amounts to the following condition for A_{ ν } at x = 0:
which can be rewritten as
Together with the boundary conditions
this yields a system of the form of Eq. (5.82) with N having the required triangular form, where v is the four-component vector function v = (A_{ t } − A_{ x }, A_{ y }, A_{ z }, A_{ t } + A_{ x }). Notice that the Sommerfeld-like boundary conditions on A_{ y } and A_{ z } set the gauge-invariant quantities E_{ y } + B_{ z } and E_{ z } − B_{ y } to zero, where E and B are the electric and magnetic fields; this is compatible with an outgoing plane wave traveling in the normal direction to the boundary.
For a recent development based on the Laplace method, which allows the treatment of second-order IBVPs with more general classes of boundary conditions, including those admitting boundary phenomena like glancing and surface waves, see [262].
Maximal dissipative boundary conditions
An alternative technique for specifying boundary conditions, which requires neither Laplace-Fourier transformation nor the use of pseudodifferential operators when generalizing to variable coefficients, is based on energy estimates. In order to understand this, we go back to Section 3.2.3, where we discussed such estimates for linear, first-order symmetric hyperbolic evolution equations with symmetrizer H(t, x). We obtained the estimate (3.107), bounding the energy \(E({\Sigma _t}) = \int\nolimits_{{\Sigma _t}} {{J^0}(t,x){d^n}x}\) at any time t ∈ [0, T] in terms of the initial energy E(Σ_{0}), provided that the flux integral
was nonnegative. Here, the boundary surface is \({\mathcal T} = [0,T] \times \partial \Sigma\), and its unit outward normal e = (0, s_{1}, …, s_{ n }) is determined by the unit outward normal s to ∂Σ. Therefore, the integral is nonnegative if
where \({P_0}(t,x,s) = \sum\limits_{j = 1}^n {{A^j}(t,x){s_j}}\) is the principal symbol in the direction of the unit normal s. Hence, the idea is to specify homogeneous boundary conditions, b(t, x)u = 0 at \({\mathcal T}\), such that the condition (5.90) is satisfied.^{Footnote 22} In this case, one obtains an a priori energy estimate as in Section 3.2.3. Of course, there are many possible choices for b(t, x) that fulfill the condition (5.90); however, an additional requirement is that one should not overdetermine the IBVP. For example, setting all the components of u to zero at the boundary does not lead to a well-posed problem if there are outgoing modes, as discussed in Section 5.1.1 for the constant coefficient case. Correct boundary conditions turn out to impose a minimal condition on u for which the inequality (5.90) holds. In other words, at the boundary surface, u has to be restricted to a space for which Eq. (5.90) holds and which cannot be extended. The precise definition capturing this idea is:
Definition 8. Denote for each boundary point \(p = (t,x) \in {\mathcal T}\) the boundary space
of state vectors satisfying the homogeneous boundary condition. V_{ p } is called maximal nonpositive if

(i)
u*H(t, x)P_{0}(t, x, s)u ≤ 0 for all u ∈ V_{ p },

(ii)
V_{ p } is maximal with respect to condition (i); that is, if W_{ p } ⊃ V_{ p } is a linear subspace of ℂ^{m} containing V_{ p }, which satisfies (i), then W_{ p } = V_{ p }.
The boundary condition b(t, x)u = g(t, x) is called maximal dissipative if the associated boundary spaces V_{ p } are maximal nonpositive for all \(p \in {\mathcal T}\).
Maximal dissipative boundary conditions were proposed in [189, 275] in the context of symmetric positive operators, which include symmetric hyperbolic operators as a special case. With such boundary conditions, the IBVP is well posed in the following sense:
Definition 9. Consider the linearized version of the IBVP ( 5.1 , 5.2 , 5.3 ), where the matrix functions A^{j}(t, x) and b(t, x) and the vector function F(t, x) do not depend on u. It is called well posed if there are constants K = K(T) and ε = ε(T) ≥ 0 such that each compatible data \(f \in C_b^\infty (\Sigma, {\mathbb{C}^m})\) and \(g \in C_b^\infty ([0,T) \times \partial \Sigma, {\mathbb{C}^r})\) gives rise to a unique C^{∞}-solution u satisfying the estimate
for all t ∈ [0, T]. If, in addition, the constant ε can be chosen strictly positive, the problem is called strongly well posed.
This definition strengthens the corresponding definition in the Laplace analysis, where trivial initial data was assumed and only a time integral of the L^{2}(Σ)-norm of the solution could be estimated (see Definition 6). The main result of the theory of maximal dissipative boundary conditions is:
Theorem 7. Consider the linearized version of the IBVP ( 5.1 , 5.2 , 5.3 ), where the matrix functions A^{j}(t, x) and b(t, x) and the vector function F(t, x) do not depend on u. Suppose the system is symmetric hyperbolic, and that the boundary conditions (5.3) are maximal dissipative. Suppose, furthermore, that the rank of the boundary matrix P_{0}(t, x, s) is constant in \((t,x) \in {\mathcal T}\).
Then, the problem is well posed in the sense of Definition 9. Furthermore, it is strongly well posed if the boundary matrix P_{0}(t, x, s) is invertible.
This theorem was first proven in [189, 275, 344] for the case where the boundary surface \({\mathcal T}\) is noncharacteristic, that is, the boundary matrix P_{0}(t, x, s) is invertible for all \((t,x) \in {\mathcal T}\). A difficulty with the characteristic case is the loss of derivatives of u in the normal direction to the boundary (see [422]). This case was studied in [293, 343, 387], culminating with the regularity theorem in [387], which is based on special function spaces controlling the L^{2}-norms of 2k tangential derivatives and k normal derivatives at the boundary (see also [389]). For generalizations of Theorem 7 to the quasilinear case, see [218, 388].
A more practical way of characterizing maximal dissipative boundary conditions is the following. Fix a boundary point \((t,x) \in {\mathcal T}\), and define the scalar product (·,·) by (u, v):= u*H(t, x)v, u, v ∈ ℂ^{m}. Since the boundary matrix P_{0}(t, x, s) is Hermitian with respect to this scalar product, there exists a basis e_{1}, e_{2}, …, e_{ m } of eigenvectors of P_{0}(t, x, s), which are orthonormal with respect to (·, ·). Let λ_{1}, λ_{2}, …, λ_{ m } be the corresponding eigenvalues, where we may assume that the first r of these eigenvalues are strictly positive, and the last s are strictly negative. We can expand any vector u ∈ ℂ^{m} as \(u = \sum\limits_{j = 1}^m {{u^{(j)}}} {e_j}\), the coefficients u^{(j)} being the characteristic fields with associated speeds λ_{ j }. Then, the condition (5.90) at the point p can be written as
$$\sum\limits_{j = 1}^r {{\lambda _j}\vert {u^{(j)}}{\vert ^2}} \leq \sum\limits_{j = m - s + 1}^m {\vert {\lambda _j}\vert \,\vert {u^{(j)}}{\vert ^2}},$$(5.93)
where we have used the fact that λ_{1}, …, λ_{ r } > 0, λ_{m−s+1}, …, λ_{ m } < 0 and the remaining λ_{ j }’s are zero. Therefore, a maximal dissipative boundary condition must have the form
$${u_ +} = q\,{u_ -},$$(5.94)
with q a complex r × s matrix coupling the incoming fields u_{+} := (u^{(1)}, …, u^{(r)}) to the outgoing ones u_{−} := (u^{(m−s+1)}, …, u^{(m)}), since u_{−} = 0 must imply u_{+} = 0. Furthermore, the matrix q has to be small enough such that the inequality (5.93) holds. There can be no further conditions since an additional, independent condition on u would violate the maximality of the boundary space V_{ p }.
In conclusion, a maximal dissipative boundary condition must have the form of Eq. (5.94), which describes a linear coupling of the outgoing characteristic fields u_{−} to the incoming ones, u_{+}. In particular, there are exactly as many independent boundary conditions as there are incoming fields, in agreement with the Laplace analysis in Section 5.1.1. Furthermore, the boundary conditions must not involve the zero speed fields. The simplest choice for q is the trivial one, q = 0, in which case data for the incoming fields is specified. A nonzero value of q would be chosen if the boundary is to incorporate some reflecting properties, as in the case of a perfectly conducting surface in electromagnetism, for example.
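The characteristic construction above lends itself to a direct numerical check. The following sketch (ours, purely illustrative: the eigenvalues, the coupling matrix q and the random sampling are made-up ingredients, not taken from the source) builds a Hermitian model boundary matrix \({\mathcal B} = H{P_0}\), imposes the coupling u_{+} = q u_{−} with q sufficiently small, and verifies that the quadratic form u*\({\mathcal B}\)u is nonpositive on the resulting boundary space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Model Hermitian boundary matrix B = H P_0(s) with speeds +2, +1 (incoming),
# 0 (zero speed), -1, -3 (outgoing) in its eigenbasis.  (Illustrative values.)
lam = np.array([2.0, 1.0, 0.0, -1.0, -3.0])
m = lam.size
Q = np.linalg.qr(rng.standard_normal((m, m)))[0]  # random orthonormal eigenbasis
B = Q @ np.diag(lam) @ Q.T

pos, neg = lam > 0, lam < 0                       # incoming / outgoing labels
r, s = pos.sum(), neg.sum()                       # r incoming, s outgoing fields

# Coupling u_+ = q u_-.  For dissipativity it suffices that
# |q|^2 * max(lam_+) <= min(|lam_-|); scale a random q accordingly.
q = 0.5 * rng.standard_normal((r, s))
q *= np.sqrt(np.abs(lam[neg]).min() / lam[pos].max()) / max(np.linalg.norm(q, 2), 1.0)

# Sample the boundary space V_p = {u : u_+ = q u_-, zero-speed part free}
# and test that u* B u <= 0 on it.
worst = -np.inf
for _ in range(1000):
    u_char = np.zeros(m)
    u_char[neg] = rng.standard_normal(s)          # outgoing fields: free
    u_char[lam == 0] = rng.standard_normal()      # zero speed field: free
    u_char[pos] = q @ u_char[neg]                 # incoming fields: coupled
    u = Q @ u_char                                # back to original components
    worst = max(worst, u @ B @ u)

print("max of u*Bu over sampled V_p:", worst)     # <= 0 up to rounding
```

The scaling of q encodes the "small enough" requirement quantitatively: it suffices that ∥q∥² max λ_{+} ≤ min ∣λ_{−}∣, so that every incoming contribution to the quadratic form is dominated by the outgoing one.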
Example 29. Consider the first-order reformulation of the Klein-Gordon equation for the variables u = (Φ, Φ_{ t }, Φ_{ x }, Φ_{ y }); see Example 13. Suppose the spatial domain is x > 0, with the boundary located at x = 0. Then, s = (−1, 0) and the boundary matrix is
Therefore, the characteristic fields and speeds are Φ, Φ_{ y } (zero speed fields, λ = 0), Φ_{ t } − Φ_{ x } (incoming field with speed λ = 1) and Φ_{ t } + Φ_{ x } (outgoing field with speed λ = −1). It follows from Eqs. (5.93, 5.94) that the class of maximal dissipative boundary conditions is
where the function q satisfies ∣q(t, y)∣ ≤ 1 and g is smooth boundary data. Particular cases are:

q = 0: Sommerfeld boundary condition,

q = −1: Dirichlet boundary condition,

q = 1: Neumann boundary condition.
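The characteristic fields and speeds quoted in this example follow from a small eigenvalue computation. In the sketch below (ours; the matrix A^x is read off from the principal part of the reduction of Example 13, Φ_{t} = Π, Π_{t} = ∂_{x}v + ∂_{y}w − m²Φ, v_{t} = ∂_{x}Π, w_{t} = ∂_{y}Π with v = Φ_{x}, w = Φ_{y}), the boundary matrix P_{0}(s) = −A^x at x = 0 reproduces the speeds 0, 0, ±1 and identifies Φ_{t} − Φ_{x} as the incoming field:

```python
import numpy as np

# Principal part of the first-order reduction u = (Phi, Pi, v, w) of the
# Klein-Gordon equation (our reading of Example 13):
Ax = np.array([[0., 0., 0., 0.],
               [0., 0., 1., 0.],   # Pi_t = v_x + (y-derivative and lower order)
               [0., 1., 0., 0.],   # v_t  = Pi_x
               [0., 0., 0., 0.]])  # w_t  has no x-derivatives

# Boundary matrix in the direction of the unit outward normal s = (-1, 0):
P0 = -Ax
lam, _ = np.linalg.eigh(P0)            # P0 is symmetric here
print(np.round(np.sort(lam), 6))       # speeds -1, 0, 0, +1

# Phi_t - Phi_x (i.e., Pi - v) is the incoming field with speed +1:
e = np.array([0., 1., -1., 0.]) / np.sqrt(2.)
print(bool(np.allclose(P0 @ e, e)))    # True
```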
Example 30. For Maxwell’s equations on a domain Σ ⊂ ℝ^{3} with C^{∞}boundary ∂Σ, the boundary matrix is given by
see Example 14. In terms of the components E_{ ∥ } of E parallel to the boundary surface ∂Σ, and the ones E_{⊥}, which are orthogonal to it (and, hence, parallel to s), the characteristic speeds and fields are
Therefore, maximal dissipative boundary conditions have the form
with g_{∥} some smooth vectorvalued function at the boundary, which is parallel to ∂Σ, and q a matrixvalued function satisfying the condition ∣q∣ ≤ 1. Particular cases are:

q = −1, g_{∥} = 0: The boundary condition E_{∥} = 0 describes a perfectly conducting boundary surface.

q = 0, g_{∥} = 0: This is a Sommerfeld-type boundary condition, which, locally, is transparent to outgoing plane waves traveling in the normal direction s,
$$E(t,x) = {\mathcal E}{e^{i(\omega t - k\cdot x)}},\qquad B(t,x) = s \wedge E(t,x),$$(5.99)where ω is the frequency, k = ωs the wave vector, and \({\mathcal E}\) the polarization vector, which is orthogonal to k. The generalization of this boundary condition to inhomogeneous data g_{∥} ≠ 0 allows one to specify data on the incoming field E_{∥} + s ∧ B_{∥} at the boundary surface, which is equal to \(2{\mathcal E}{e^{i\omega t}}\) for the plane waves traveling in the normal inward direction −s.
Recall that the constraints ∇ · E = ρ and ∇ · B = 0 propagate along the time evolution vector field ∂_{ t }, (∇ · E − ρ)_{ t } = 0, (∇ · B)_{ t } = 0, provided the continuity equation holds. Since ∂_{ t } is tangent to the boundary, no additional conditions controlling the constraints need to be specified at the boundary; the constraints are automatically satisfied everywhere provided they are satisfied on the initial surface.
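The trivial propagation of the constraints rests on the vector identity ∇ · (∇ × E) = 0, so that B_{ t } = −∇ × E implies (∇ · B)_{ t } = 0. As a small illustrative aside (ours, not from the source), the identity survives discretization by centered differences on a periodic grid, because the difference operators along different axes commute:

```python
import numpy as np

rng = np.random.default_rng(1)
n, h = 32, 1.0 / 32  # periodic grid with n points per direction, spacing h

def d(f, axis):
    """Periodic centered difference along the given axis."""
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

# Arbitrary periodic vector field E on a 3D grid (illustrative random data):
E = rng.standard_normal((3, n, n, n))

curl = np.stack([d(E[2], 1) - d(E[1], 2),
                 d(E[0], 2) - d(E[2], 0),
                 d(E[1], 0) - d(E[0], 1)])
div_curl = d(curl[0], 0) + d(curl[1], 1) + d(curl[2], 2)

# The mixed differences commute, so div(curl E) vanishes up to rounding:
print("max |div(curl E)| =", np.abs(div_curl).max())
```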
Example 31. Commonly, one writes Maxwell’s equations as a system of wave equations for the electromagnetic potential A_{ μ } in the Lorentz gauge, as discussed in Example 28. By reducing the problem to a first-order symmetric hyperbolic system, one may wonder if it is possible to apply the theory of maximal dissipative boundary conditions and obtain a well-posed IBVP, as in the previous example. As we shall see in Section 5.2.1, the answer is affirmative, but the correct application of the theory is not completely straightforward. In order to illustrate why this is the case, introduce the new independent fields D_{ μν }:= ∂_{ μ }A_{ ν }. Then, the set of wave equations can be rewritten as the first-order system for the 20-component vector (A_{ ν }, D_{ tν }, D_{jν}), j = x, y, z,
which is symmetric hyperbolic. The characteristic fields with respect to the unit outward normal s = (−1, 0, 0) at the boundary are
According to Eq. (5.88) we can rewrite the Lorentz constraint in the following way:
The problem is that when written in terms of the characteristic fields, the Lorentz constraint depends not only on the in- and outgoing fields, but also on the zero speed fields D_{ yy } and D_{ zz }. Therefore, imposing the constraint on the boundary in order to guarantee constraint preservation leads to a boundary condition coupling the incoming fields to outgoing and zero speed fields,^{Footnote 23} which does not fall into the class of admissible boundary conditions.
At this point, one might ask why we were able to formulate a well-posed IBVP based on the second-order formulation in Example 28, while the first-order reduction discussed here fails. As we shall see, the reason is that there exist many first-order reductions, which are inequivalent to each other, and a slightly more sophisticated reduction works, while the simplest choice adopted here does not. See also [354, 14] for well-posed formulations of the IBVP in electromagnetism based on the potential formulation in a different gauge.
Example 32. A generalization of Maxwell’s equations is the evolution system
for the symmetric, trace-free tensor fields E_{ ij } and B_{ ij }, where here we use the Einstein summation convention, the indices i,j,k,l run over 1,2,3, (ij) denotes symmetrization over ij, and ε_{ ijk } is the totally antisymmetric tensor with ε_{123} = 1. Notice that the right-hand sides of Eqs. (5.102, 5.103) are symmetric and trace-free, such that one can consistently assume that \({E^i}_i = {B^i}_i = 0\). The evolution system (5.102, 5.103), which is symmetric hyperbolic with respect to the trivial symmetrizer, describes the propagation of the electric and magnetic parts of the Weyl tensor for linearized gravity on a Minkowski background; see, for instance, [182].
Decomposing E_{ ij } into its parts parallel and orthogonal to the unit outward normal s,
where \({\gamma _{ij}}: = {\delta _{ij}} - {s_i}{s_j}\), \(\bar E: = {s^i}{s^j}{E_{ij}}\), \({{\bar E}_i}: = \gamma _i^k{E_{kj}}{s^j}\), \({{\hat E}_{ij}}: = (\gamma _i^k\gamma _j^l - {\gamma _{ij}}{\gamma ^{kl}}/2){E_{kl}}\), and similarly for B_{ ij }, the eigenvalue problem λu = P_{0}(s)u for the boundary matrix is
from which one obtains the following characteristic speeds and fields,
Similar to the Maxwell case, the boundary condition \({{\hat E}_{ij}} - {\varepsilon _{kl(i}}{s^k}{{\hat B}^l}_{j)} = 0\) on the incoming, symmetric trace-free characteristic field is, locally, transparent to outgoing linear gravitational plane waves traveling in the normal direction s. In fact, this condition is equivalent to setting the complex Weyl scalar Ψ_{0}, computed from the adapted, complex null tetrad K:= ∂_{ t } + s, L := ∂_{ t } − s, Q, \({\bar Q}\), to zero at the boundary surface.^{Footnote 24} Variants of this condition have been proposed in the literature in the context of the IBVP for Einstein’s field equations in order to approximately control the incoming gravitational radiation; see [187, 40, 253, 378, 363, 309, 286, 384, 366].
However, one also needs to control the incoming field \({{\bar E}_i} - {\varepsilon _{kli}}{s^k}{{\bar B}^l}\) at the boundary. This field, which propagates with speed 1/2, is related to the constraints in the theory. As in electromagnetism, the fields E_{ ij } and B_{ ij } are subject to the divergence constraints P_{ j }:= ∂^{i}E_{ ij } = 0, Q_{ j }:= ∂^{i}B_{ ij } = 0. However, unlike the Maxwell case, these constraints do not propagate trivially. As a consequence of the evolution equations (5.102, 5.103), the constraint fields P_{ j } and Q_{ j } obey
which is equivalent to Maxwell’s equations, except that the propagation speed for the transverse modes is 1/2 instead of 1. Therefore, guaranteeing constraint propagation requires specifying homogeneous maximal dissipative boundary conditions for this system, which have the form of Eq. (5.98) with E ↦ P, B ↦ −Q and g_{∥} = 0. A problem is that, when rewritten as a boundary condition for the main system (5.102, 5.103), this yields conditions involving first derivatives of the fields E_{ ij } and B_{ ij }. Except in some particular cases involving totally reflecting boundaries, it is not possible to cast these conditions into maximal dissipative form.
A solution to this problem has been presented in [181] and [187], where a similar system appears in the context of the IBVP for Einstein’s field equations for solutions with anti-de Sitter asymptotics, or for solutions with an artificial boundary, respectively. The method consists in modifying the evolution system (5.102, 5.103) by using the constraint equations P_{ j } = Q_{ j } = 0 in such a way that the constraint fields for the resulting boundary-adapted system propagate along ∂_{ t } at the boundary surface. In order to describe this system, extend s to a smooth vector field on Σ with the property that ∣s∣ ≤ 1. Then, the boundary-adapted system reads:
This system is symmetric hyperbolic, and the characteristic fields in the normal direction are identical to the unmodified system with the important difference that the fields \({{\bar E}_i} \mp {\varepsilon _{kli}}{s^k}{{\bar B}^l}\) now propagate with zero speed. The induced evolution system for the constraint fields is symmetric hyperbolic, and has a trivial boundary matrix. As a consequence, the constraints propagate tangentially to the boundary surface, and no extra boundary conditions for controlling the constraints must be specified.
5.2.1 Application to systems of wave equations
As anticipated in Example 31, the theory of symmetric hyperbolic first-order equations with maximal dissipative boundary conditions can also be used to formulate well-posed IBVPs for systems of wave equations, which are coupled through the boundary conditions, as already discussed in Section 5.1.3 based on the Laplace method. Again, the key idea is to show strong well-posedness; that is, an a priori estimate, which controls the first derivatives of the fields in the bulk and at the boundary.
In order to explain how this is performed, we consider the simple case of the Klein-Gordon equation Φ_{ tt } = ΔΦ − m^{2}Φ on the half plane Σ:= {(x, y) ∈ ℝ^{2}: x > 0}. In Example 13 we reduced the problem to a first-order symmetric hyperbolic system for the variables u = (Φ, Φ_{ t }, Φ_{ x }, Φ_{ y }) with symmetrizer H = diag(m^{2}, 1, 1, 1), and in Example 29 we determined the class of maximal dissipative boundary conditions for this first-order reduction. Consider the particular case of Sommerfeld boundary conditions, where Φ_{ t } = Φ_{ x } is specified at x = 0. Then, Eq. (3.103) gives the following conservation law,
where \(E({\Sigma _t}) = \int\nolimits_{{\Sigma _t}} {{u^{\ast}}H\,u\,dxdy} = \int\nolimits_{{\Sigma _t}} {({m^2}\vert \Phi {\vert ^2} + \vert {\Phi _t}{\vert ^2} + \vert {\Phi _x}{\vert ^2} + \vert {\Phi _y}{\vert ^2})dxdy}\), and u*HP_{ 0 }(s)u = −2Re(Φ*_{ t }Φ_{ x }); see Example 29. Using the Sommerfeld boundary condition, we may rewrite −2Re(Φ*_{ t }Φ_{ x }) = −(∣Φ_{ t }∣^{2} + ∣Φ_{ x }∣^{2}), and obtain the energy equality
controlling the derivatives Φ_{ t } and Φ_{ x } at the boundary surface. However, a weakness of this estimate is that it does not control the zero speed fields Φ and Φ_{ y } at the boundary, and so one does not obtain strong well-posedness.
On the other hand, the first-order reduction is not unique, and as we now show, different reductions may lead to stronger estimates. For this, we choose a real constant b such that 0 < b ≤ 1/2 and define the new fields ū := (Φ, Φ_{ t } − bΦ_{ x }, Φ_{ x }, Φ_{ y }), which yield the symmetric hyperbolic system
with symmetrizer \(\bar H = {\rm{diag(}}{m^2},1,1 - {b^2},1)\). The characteristic fields in terms of Φ and its derivatives are Φ, Φ_{ y }, Φ_{ t } + Φ_{ x }, and Φ_{ t } − Φ_{ x }, as before. However, the fields now have characteristic speeds −b, −b, −1, +1, respectively, whereas in the previous reduction they were 0, 0, −1, +1. Therefore, the effect of the new reduction versus the old one is to shift the speeds of the zero speed fields, converting them into outgoing fields with speed −b. Notice that the Sommerfeld boundary condition Φ_{ t } = Φ_{ x } is still maximal dissipative with respect to the new reduction. Repeating the energy estimates again leads to a conservation law of the form (5.108), but where now the energy and flux quantities are \(E({\Sigma _t}) = \int\nolimits_{{\Sigma _t}} {{{\bar u}^{\ast}}\bar H\,\bar u\,dx\,dy} = \int\nolimits_{{\Sigma _t}} {({m^2}\vert \Phi {\vert ^2} + \vert {\Phi _t} - b{\Phi _x}{\vert ^2} + (1 - {b^2})\vert {\Phi _x}{\vert ^2} + \vert {\Phi _y}{\vert ^2})dx\,dy}\) and
Imposing the boundary condition Φ_{ t } = Φ_{ x } at x = 0 and using 2b ≤ 1 leads to the energy estimate
controlling Φ and all its first derivatives at the boundary surface.
Summarizing, we have seen that the most straightforward first-order reduction of the Klein-Gordon equation does not lead to strong well-posedness. However, strong well-posedness can be obtained by choosing a more sophisticated reduction, in which the time derivative of Φ is replaced by its derivative Φ_{ t } − bΦ_{ x } along the timelike vector (1, −b), which points outside the domain at the boundary surface. In fact, it is possible to obtain a symmetric hyperbolic reduction leading to strong well-posedness for any future-directed timelike vector field, which points outside the domain at the boundary. Based on the geometric definition of first-order symmetric hyperbolic systems in [205], it is possible to generalize this result to systems of quasilinear wave equations on curved backgrounds [264].
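The speed-shifting effect of the modified reduction can be verified by inspecting the principal matrices. The sketch below is ours: the matrices are our reconstruction of the reduction for ū = (Φ, Π, Φ_{x}, Φ_{y}) with Π := Φ_{t} − bΦ_{x}, written so that H̄A^x and H̄A^y come out symmetric. It confirms symmetric hyperbolicity with respect to H̄ = diag(m², 1, 1 − b², 1) and the characteristic speeds −b, −b, −1, +1 in the normal direction s = (−1, 0):

```python
import numpy as np

m2, b = 1.0, 0.5  # mass^2 and reduction parameter, 0 < b <= 1/2 (sample values)

# Our reconstruction of the modified reduction for u = (Phi, Pi, v, w)
# with Pi := Phi_t - b Phi_x, v := Phi_x, w := Phi_y:
#   Phi_t = b Phi_x + Pi
#   Pi_t  = -b Pi_x + (1-b^2) v_x + w_y - m^2 Phi
#   v_t   = Pi_x + b v_x
#   w_t   = Pi_y + b w_x          (using the integrability relation v_y = w_x)
Ax = np.array([[ b,  0.,    0., 0.],
               [0., -b, 1-b*b, 0.],
               [0., 1.,     b, 0.],
               [0., 0.,    0., b]])
Ay = np.array([[0., 0., 0., 0.],
               [0., 0., 0., 1.],
               [0., 0., 0., 0.],
               [0., 1., 0., 0.]])
H = np.diag([m2, 1., 1 - b*b, 1.])  # symmetrizer

# Symmetric hyperbolicity: H A^j must be symmetric for each j.
for A in (Ax, Ay):
    assert np.allclose(H @ A, (H @ A).T)

# Characteristic speeds in the normal direction s = (-1, 0) at x = 0:
P0 = -Ax
lam = np.linalg.eigvals(P0).real
print(np.round(np.sort(lam), 6))   # speeds -1, -b, -b, +1
```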
In order to describe the result in [264], let π: E → M be a vector bundle over \(M = [0,T] \times \bar \Sigma\) with fiber ℝ^{N}; let ∇_{ μ } be a fixed, given connection on E and let g_{ μν } = g_{ μν }(Φ) be a Lorentz metric on M with inverse g^{μν}(Φ), which depends pointwise and smoothly on a vector-valued function Φ = {Φ^{A}}_{A=1,2,…,N}, parameterizing a local section of E. Assume that each time slice Σ_{ t } = {t} × Σ is spacelike and that the boundary \({\mathcal T} = [0,T] \times \partial \Sigma\) is timelike with respect to g_{ μν }(Φ). We consider a system of quasilinear wave equations of the form
where F^{A}(Φ, ∇Φ) is a vector-valued function, which depends pointwise and smoothly on its arguments. The wave system (5.113) is subject to the initial conditions
where \(\Phi _0^A\) and \(\Pi _0^A\) are given vector-valued functions on Σ_{0}, and where n^{μ} = n^{μ}(Φ) denotes the future-directed unit normal to Σ_{0} with respect to g_{ μν }. In order to describe the boundary conditions, let \({T^\mu} = {T^\mu}(p,\Phi),p \in {\mathcal T}\), be a future-directed vector field on \({\mathcal T}\), which is normalized with respect to g_{ μν }, and let N^{μ} = N^{μ}(p, Φ) be the unit outward normal to \({\mathcal T}\) with respect to the metric g_{ μν }. We consider boundary conditions on \({\mathcal T}\) of the following form
where α = α(p, Φ) > 0 is a strictly positive, smooth function, G^{A} = G^{A}(p) is a given, vector-valued function on \({\mathcal T}\), and the matrix coefficients \({c^{\mu A}}_B = {c^{\mu A}}_B(p,\Phi)\) and \({d^A}_B = {d^A}_B(p,\Phi)\) are smooth functions of their arguments. Furthermore, we assume that \({c^{\mu A}}_B\) satisfies the following property. Given a local trivialization φ: U × ℝ^{N} → π^{−1}(U) of E such that Ū ⊂ M is compact and contains a portion \({\mathcal U}\) of the boundary \({\mathcal T}\), there exists a smooth map J: U → GL(N, ℝ), \(p \mapsto ({J^A}_B(p))\), such that the transformed matrix coefficients
are in upper triangular form with zeroes on the diagonal, that is
Theorem 8. [264] The IBVP ( 5.113 , 5.114 , 5.115 ) is well posed: given T > 0 and sufficiently small and smooth initial and boundary data \(\Phi _0^A,\Pi _0^A\) and G^{A} satisfying the compatibility conditions at the edge S = {0} × ∂Σ, there exists a unique smooth solution on M satisfying the evolution equation (5.113) , the initial condition (5.114) and the boundary condition (5.115) . Furthermore, the solution depends continuously on the initial and boundary data.
Theorem 8 provides the general framework for treating wave systems with constraints, such as Maxwell’s equations in the Lorentz gauge and, as we will see in Section 6.1, Einstein’s field equations with artificial outer boundaries.
5.2.2 Existence of weak solutions and the adjoint problem
Here, we show how to prove the existence of weak solutions for linear, symmetric hyperbolic equations with variable coefficients and maximal dissipative boundary conditions. The method can also be applied to a more general class of linear symmetric operators with maximal dissipative boundary conditions; see [189, 275]. The proof below will shed some light on the maximality condition for the boundary space V_{ p }.
Our starting point is an IBVP of the form (5.1, 5.2, 5.3), where the matrix functions A^{j}(t, x) and b(t, x) do not depend on u, and where F(t, x, u) is replaced by B(t, x)u + F(t, x), such that the system is linear. Furthermore, we can assume that the initial and boundary data is trivial, f = 0, g = 0. We require the system to be symmetric hyperbolic with symmetrizer H(t, x) satisfying the conditions in Definition 4(iii), and assume the boundary conditions (5.3) are maximal dissipative. We rewrite the IBVP on Ω_{ T }:= [0, T] × Σ as the abstract linear problem
where L: D(L) ⊂ X → X is the linear operator on the Hilbert space X:= L^{2}(Ω_{ T }) defined by the evolution equation and the initial and boundary conditions:
where we have defined A^{0}:= − I and x^{0}:= t, where V_{ p } = {u ∈ ℂ^{m} : b(t, x)u = 0} is the boundary space, and where Σ_{0}:= {0} × Σ, Σ_{ T }:= {T} × Σ and \({\mathcal T}: = [0,T] \times \partial \Sigma\) denote the initial, the final and the boundary surface, respectively.
For the following, the adjoint IBVP plays an important role. This problem is defined as follows. First, the symmetrizer defines a natural scalar product on X,
which, because of the properties of H, is equivalent to the standard scalar product on L^{2}(Ω_{ T }). In order to obtain the adjoint problem, we take u ∈ D(L) and \(\upsilon \in C_b^\infty ({\Omega _T})\), and use Gauss’s theorem to find
where we have defined the formal adjoint L*: D(L*) ⊂ X → X of L by
In order for the integrals on the right-hand side of Eq. (5.120) to vanish, such that 〈v, Lu〉_{H} = 〈L*v, u〉_{H}, we first notice that the integral over Σ_{0} vanishes, because u = 0 on Σ_{0}. The integral over Σ_{ T } also vanishes if we require v = 0 on Σ_{ T }. The last term also vanishes if we require v to lie in the dual boundary space
for each \(p \in {\mathcal T}\). Therefore, if we define
we have 〈v, Lu〉_{H} = 〈L*v, u〉_{H} for all u ∈ D(L) and v ∈ D(L*); that is, the operator L* is adjoint to L. There is the following nice relation between the boundary spaces V_{ p } and V*_{ p }:
Lemma 4. Let \(p \in {\mathcal T}\) be a boundary point. Then, V_{ p } is maximal nonpositive if and only if V*_{ p } is maximal nonnegative.
Proof. Fix a boundary point \(p = (t,x) \in {\mathcal T}\) and define the matrix \({\mathcal B}: = H(t,x){P_0}(t,x,s)\) with s the unit outward normal to ∂Σ at x. Since the system is symmetric hyperbolic, \({\mathcal B}\) is Hermitian. We decompose ℂ^{m} = E_{+} ⊕ E_{−} ⊕ E_{0} into orthogonal subspaces E_{+}, E_{−}, E_{0} on which \({\mathcal B}\) is positive, negative and zero, respectively. We equip E_{±} with the scalar products (·,·)_{±}, which are defined by
In particular, we have \({u^{\ast}}{\mathcal B}u = {({u_ +},{u_ +})_ +} - {({u_ -},{u_ -})_ -}\) for all u ∈ ℂ^{m}. Therefore, if V_{ p } is maximal nonpositive, there exists a linear transformation q: E_{−} → E_{+} satisfying ∣qu_{−}∣_{+} ≤ ∣u_{−}∣_{−} for all u_{−} ∈ E_{−}, such that (cf. Eq. (5.94))
Let v ∈ V*_{ p }. Then,
for all u ∈ V_{ p }, where q^{†}: E_{+} → E_{−} is the adjoint of q with respect to the scalar products (·,·)_{±} defined on E_{±}. Therefore, v_{−} = q^{†}v_{+}, and
Since q^{†} has the same norm as q, which is at most one, it follows that V*_{ p } is maximal nonnegative. The converse statement follows in an analogous way. □
The lemma implies that solving the original problem −Lu = F with u ∈ D(L) is equivalent to solving the adjoint problem L*v = F with v ∈ D(L*), which, since v(T, x) = 0 is held fixed at Σ_{ T }, corresponds to the time-reversed problem with the adjoint boundary conditions. From the a priori energy estimates we obtain:
Lemma 5. There is a constant δ = δ(T) such that
for all u ∈ D(L) and v ∈ D(L*), where ∥·∥_{ H } is the norm induced by the scalar product 〈·,·〉_{ H }.
Proof. Let u ∈ D(L) and set F := −Lu. From the energy estimates in Section 3.2.3 one easily obtains
for some positive constants C depending on T. Integrating both sides from t = 0 to t = T gives
which yields the statement for L upon setting δ:= (CT)^{−1/2}. The estimate for L* follows from a similar energy estimate for the adjoint problem. □
In particular, Lemma 5 implies that (strong) solutions to the IBVP and its adjoint are unique. Since L and L* are closable operators [345], their closures \(\overline L\) and \(\overline {{L^\ast}}\) satisfy the same inequalities as in Eq. (5.128). Now we are ready to define weak solutions and to prove their existence:
Definition 10. u ∈ X is called a weak solution of the problem (5.118) if
for all v ∈ D(L*).
In order to prove the existence of such u ∈ X, we introduce the linear space \(Y = D(\overline {{L^\ast}})\) and equip it with the scalar product 〈·, ·〉_{ Y } defined by
The positivity of this product is a direct consequence of Lemma 5, and since \(\overline {L^\ast}\) is closed, Y defines a Hilbert space. Next, we define the linear form J: Y → ℂ on Y by
This form is bounded, according to Lemma 5,
for all v ∈ Y. Therefore, according to the Riesz representation lemma there exists a unique w ∈ Y such that 〈w, v〉_{ Y } = J(v) for all v ∈ Y. Setting \(u: = \overline {{L^{\ast}}} w \in X\) gives a weak solution of the problem.
If u ∈ X is a weak solution, which is sufficiently smooth, it follows from the Green-type identity (5.120) that u has vanishing initial data and that it satisfies the required boundary conditions, and hence is a solution to the original IBVP (5.118). The difficult part is to show that a weak solution is indeed sufficiently regular for this conclusion to hold. See [189, 275, 344, 343, 387] for such “weak=strong” results.
5.3 Absorbing boundary conditions
When modeling isolated systems, the boundary conditions have to be chosen such that they minimize spurious reflections from the boundary surface. This means that, inside the computational domain, the solution of the IBVP should lie as close as possible to the true solution of the Cauchy problem on the unbounded domain. In this sense, the dynamics outside the computational domain is replaced by appropriate conditions on a finite, artificial boundary. Clearly, this can only work in particular situations, where the solutions outside the domain are sufficiently simple, so that they can be computed and used to construct boundary conditions, which are, at least, approximately compatible with them. Boundary conditions that give rise to a well-posed IBVP and achieve this goal are called absorbing, nonreflecting or radiation boundary conditions in the literature, and there has been a substantial amount of work on the construction of such conditions for wave problems in acoustics, electromagnetism, meteorology, and solid geophysics (see [206] for a review). Some recent applications to general relativity are mentioned in Sections 6 and 10.3.1.
One approach to the construction of absorbing boundary conditions is based on suitable series or Fourier expansions of the solution, and derives a hierarchy of local boundary conditions with increasing order of accuracy [153, 46, 240]. Typically, such higher-order local boundary conditions involve solving differential equations at the boundary surface, where the order of the differential equation increases with the order of accuracy. This problem can be dealt with by introducing auxiliary variables at the boundary surface [207, 208].
The starting point for a slightly different approach is an exact nonlocal boundary condition, which involves a convolution with an appropriate integral kernel. A method based on an efficient approximation of this integral kernel is then implemented; see, for instance, [16, 17] for the case of the 2D and 3D flat wave equations and [271, 270, 272] for the Regge-Wheeler [347] and Zerilli [453] equations describing linear gravitational waves on a Schwarzschild background. Although this method is robust, very accurate and stable, it relies on detailed knowledge of the solutions, which might not always be available in more general situations.
In the following, we illustrate some aspects of the problem of constructing absorbing boundary conditions with some simple examples [372]. Specifically, we construct local absorbing boundary conditions for the wave equation with a spherical outer boundary at radius R > 0.
5.3.1 The one-dimensional wave equation
Consider first the one-dimensional case,
The general solution is a superposition of a left- and a right-moving solution,
Therefore, the boundary conditions
are perfectly absorbing according to our terminology. Indeed, the operator b_{+} has as its kernel the right-moving solutions f↗(x − t); hence, the boundary condition (b_{+}u)(t, R) = 0 at x = R is transparent to these solutions. On the other hand, b_{+}f↖(t + x) = 2f′_{↖}(t + x), which implies that at x = R, the boundary condition requires that f↖(v) be constant, f↖(v) = f↖(R), for advanced time v = t + x > R. A similar argument shows that the left boundary condition (b_{−}u)(t, −R) = 0 implies that f↗(−u) is constant, f↗(−u) = f↗(−R), for retarded time u = t − x > R. Together with initial conditions for u and its time derivative at t = 0 satisfying the compatibility conditions, Eqs. (5.135, 5.137) give rise to a well-posed IBVP. In particular, the solution is identically zero after one crossing time, t ≥ 2R, for initial data which are compactly supported inside the interval (−R, R).
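The perfect absorption is easy to observe in a simple discretization (ours, illustrative only: leapfrog at Courant number one, with the boundary operators b_{±} discretized as exact transport). A compactly supported right-moving pulse leaves the grid without reflection, and the numerical solution is zero, up to roundoff, after one crossing time:

```python
import numpy as np

R, N = 1.0, 201
x = np.linspace(-R, R, N)
dx = x[1] - x[0]
dt = dx             # Courant number 1: leapfrog is exact for d'Alembert solutions

def pulse(y):
    """Compactly supported profile f(y), supported in |y| < 0.5."""
    return np.where(np.abs(y) < 0.5, (1 - (y / 0.5) ** 2) ** 4, 0.0)

# Initialize two time levels from the exact right-moving solution u(t,x) = f(x-t):
u_old = pulse(x)         # t = 0
u = pulse(x - dt)        # t = dt

t = dt
while t < 2 * R:         # evolve for one crossing time
    u_new = np.empty_like(u)
    # Interior: leapfrog scheme for u_tt = u_xx.
    u_new[1:-1] = u[2:] + u[:-2] - u_old[1:-1]
    # Absorbing boundaries: upwind discretization of (d_t - d_x)u = 0 at x = -R
    # and (d_t + d_x)u = 0 at x = +R; with dt = dx this is exact transport.
    u_new[0] = u[1]
    u_new[-1] = u[-2]
    u_old, u = u, u_new
    t += dt

print("max |u| after one crossing time:", np.abs(u).max())  # ~ 0, no reflection
```

At Courant number dt/dx = 1 the leapfrog scheme propagates d'Alembert solutions exactly, so any residual measures the boundary reflection alone; at other Courant numbers one would additionally see small interior discretization errors.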
5.3.2 The three-dimensional wave equation
Generalizing the previous example to higher dimensions is a nontrivial task. This is due to the fact that there are infinitely many propagation directions for outgoing waves, and not just two as in the one-dimensional case. Ideally, one would like to control all the propagation directions k, which are outgoing at the boundary (k · n > 0, where n is the unit outward normal to the boundary), but this is obviously difficult. Instead, one can try to control specific directions (starting with the one that is normal to the outer boundary). Here, we illustrate the method of [46] on the three-dimensional wave equation,
The general solution can be decomposed into spherical harmonics Y^{ℓm} according to
which yields the family of reduced equations
For ℓ = 0, this equation reduces to the one-dimensional wave equation, for which the general solution is u_{00}(t,r) = U_{00↗}(r − t) + U_{00↖}(r + t) with U_{00↗} and U_{00↖} two arbitrary functions. Therefore, the boundary condition
is perfectly absorbing for spherical waves. For ℓ ≥ 1, exact solutions can be generated from the solutions for ℓ = 0 by applying suitable differential operators to u_{00}(t,r). For this, we define the operators [92]
which satisfy the operator identities
As a consequence, for each ℓ=1,2,3, …, we have
Therefore, we have the explicit in and outgoing solutions
where V_{ ℓm } and U_{ ℓm } are arbitrary smooth functions with j’th derivatives \(V_{\ell m}^{(j)}\) and \(U_{\ell m}^{(j)}\), respectively. In order to construct boundary conditions, which are perfectly absorbing for u_{ ℓm }, one first notices the following identity:
for all ℓ = 0, 1, 2, … and all sufficiently smooth functions U. This identity follows easily from Eq. (5.144) and the fact that b^{ℓ+1}(r^{k}) = k(k + 1) ⋯ (k + ℓ)r^{k+ℓ+1}, which vanishes if k ∈ {0, −1, −2, …, −ℓ}. Therefore, given L ∈ {1, 2, 3, …}, the boundary condition
leaves the outgoing solutions with ℓ ≤ L unaltered. Notice that this condition is local in the sense that its formulation does not require the decomposition of u into spherical harmonics. Based on the Laplace method, it was proven in [46] (see also [369]) that each boundary condition \({{\mathcal B}_L}\) yields a well-posed IBVP. By uniqueness this implies that initial data corresponding to a purely outgoing solution with ℓ ≤ L yields a purely outgoing solution (without reflections). In this sense, the condition \({{\mathcal B}_L}\) is perfectly absorbing for waves with ℓ ≤ L. For waves with ℓ > L, one obtains spurious reflections; however, for monochromatic radiation with wave number k, the corresponding amplitude reflection coefficients can be calculated to decay as (kR)^{−2(L+1)} in the wave zone kR ≫ 1 [88]. Furthermore, in most scenarios with smooth solutions, the amplitudes corresponding to the lowest few ℓ’s will dominate over the ones with high ℓ, so that reflections from high ℓ’s are unimportant. For a numerical implementation of the boundary condition \({{\mathcal B}_2}\) via spectral methods and a possible application to general relativity, see [314].
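The algebraic identity quoted above can be checked mechanically by tracking monomials. The short sketch below is ours; it only assumes that b acts on powers of r as b(r^{k}) = k r^{k+1}, which is the reading consistent with the displayed formula. It verifies b^{ℓ+1}(r^{k}) = k(k+1)⋯(k+ℓ) r^{k+ℓ+1} and its vanishing precisely for k ∈ {0, −1, …, −ℓ}:

```python
from math import prod

def apply_b(c, k):
    """One application of b to the monomial c * r^k, with b(r^k) = k r^{k+1}
    (our reading, consistent with the identity quoted in the text)."""
    return c * k, k + 1

def b_power(n, k):
    """Apply b n times to r^k; returns (coefficient, exponent)."""
    c = 1
    for _ in range(n):
        c, k = apply_b(c, k)
    return c, k

# Check b^{l+1}(r^k) = k (k+1) ... (k+l) r^{k+l+1} over a range of l and k:
for l in range(5):
    for k in range(-6, 7):
        c, e = b_power(l + 1, k)
        assert c == prod(k + j for j in range(l + 1))
        assert e == k + l + 1
        # ... and it vanishes exactly for k in {0, -1, ..., -l}:
        assert (c == 0) == (k in range(-l, 1))

print("identity verified for l = 0..4, k = -6..6")
```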
The wave equation on a curved background
When the background is curved, it is not always possible to construct in- and outgoing solutions explicitly, as in the previous example. Therefore, it is not even clear how a hierarchy of absorbing boundary conditions should be formulated. However, in many applications the spacetime is asymptotically flat, and if the boundary surface is placed sufficiently far from the strong-field region, one can assume that the metric is a small deformation of the flat, Minkowski metric. To first order in M/R, with M the ADM mass and R the areal radius of the outer boundary, these correction terms are given by those of the Schwarzschild metric, and approximate in- and outgoing solutions for all (ℓ,m) modes can again be computed [372].^{Footnote 25} The M/R terms in the background metric induce two kinds of corrections in the in- and outgoing solutions u_{ℓm}. The first is a curvature correction term, which just adds M/R terms to the coefficients in the sum of Eq. (5.144). This term is local and still obeys Huygens’ principle. The second term is fast decaying (it decays as R/r^{ℓ+1}) and describes the backscatter off the curvature of the background. As a consequence, it is nonlocal (it depends on the past history of the unperturbed solution) and violates Huygens’ principle.
By construction, the boundary conditions \({{\mathcal B}_L}\) are perfectly absorbing for outgoing waves with angular momentum number ℓ ≤ L, including their curvature corrections to first order in M/R. If the first-order correction terms responsible for the backscatter are taken into account, then \({{\mathcal B}_L}\) are no longer perfectly absorbing, but the spurious reflections arising from these correction terms have been estimated in [372] to decay at least as fast as (M/R)(kR)^{−2} for monochromatic waves with wave number k satisfying M ≪ k^{−1} ≪ R.
The well-posedness of higher-order absorbing boundary conditions for wave equations on a curved background can be established using the localization principle and the Laplace method [369]. Some applications to general relativity are discussed in Sections 6 and 10.3.1.
Boundary Conditions for Einstein’s Equations
The subject of this section is the discussion of the IBVP for Einstein’s field equations. There are at least three difficulties when formulating Einstein’s equations on a finite domain with artificial outer boundaries. First, as we have seen in Section 4, the evolution equations are subject to constraints, which, in general, propagate with nontrivial characteristic speeds. As a consequence, there are in general incoming constraint fields at the boundary that need to be controlled in order to make sure that the constraints propagate correctly, i.e., that constraint-satisfying initial data yields a solution of the evolution equations and the constraints on the complete computational domain, and not just on its domain of dependence. The control of these incoming constraint fields leads to constraint-preserving boundary conditions, and a nontrivial task is to fit these conditions into one of the admissible classes of boundary conditions discussed in Section 5, for which well-posedness can be shown.
A second issue is the construction of absorbing boundary conditions. Unlike the simple examples considered in Section 5.3, for which the fields evolve on a fixed background and in- and outgoing solutions can be represented explicitly, or at least characterized precisely, in general relativity it is not even clear how to define in- and outgoing gravitational radiation, since there are no local expressions for the gravitational energy density and flux. Therefore, the best one can hope for is to construct boundary conditions that approximately control the incoming gravitational radiation in certain regimes, such as the weak-field limit, where the field equations can be linearized around, say, a Schwarzschild or Minkowski spacetime.
Finally, the third issue is related to the diffeomorphism invariance of the theory. Ideally, one would like to formulate a geometric version of the IBVP, for which the data given on the initial and boundary surfaces Σ_{0} and \({\mathcal T}\) can be characterized in terms of geometric quantities such as the first and second fundamental forms of these surfaces as embedded in the yet unknown spacetime (M, g). In particular, this means that one should be able to identify equivalent data sets, i.e., those which are related to each other by a diffeomorphism of M leaving Σ_{0} and \({\mathcal T}\) invariant, by local transformations on Σ_{0} and \({\mathcal T}\), without knowing the solution (M, g). It is currently not clear whether such a geometric uniqueness property even exists; see [186, 355] for further discussions on these points.
A well-posed IBVP for Einstein’s vacuum field equations was first formulated by Friedrich and Nagy [187] based on a tetrad formalism, which incorporates the Weyl curvature tensor as an independent field. This formulation exploits the freedom of choosing local coordinates and the tetrad orientation in order to impose very precise gauge conditions, which are adapted to the boundary surface \({\mathcal T}\) and tailored to the IBVP. These gauge conditions, together with a suitable modification of the evolution equations for the Weyl curvature tensor using the constraints (cf. Example 32), lead to a first-order symmetric hyperbolic system in which all the constraint fields propagate tangentially to \({\mathcal T}\) at the boundary. As a consequence, no constraint-preserving boundary conditions need to be specified, and the only incoming fields are related to the gravitational radiation, at least in the context of the approximations mentioned above. With this, the problem can be shown to be well posed using the techniques described in Section 5.2.
After the pioneering work of [187], there has been much effort in formulating a well-posed IBVP for metric formulations of general relativity, on which most numerical calculations are based. However, with the exception of particular cases in spherical symmetry [249], the linearized field equations [309], or the restriction to flat, totally reflecting boundaries [404, 405, 106, 98, 219, 220, 410, 29, 15], not much progress had been made towards obtaining a manifestly well-posed IBVP with non-reflecting, constraint-preserving boundary conditions. The difficulties encountered were similar to those described in Examples 31 and 32. Namely, controlling the incoming constraint fields usually resulted in boundary conditions for the main system involving either derivatives of its characteristic fields or fields propagating with zero speed, when the system was written in first-order symmetric hyperbolic form. Therefore, the theory of maximal dissipative boundary conditions could not be applied in these attempts. Instead, boundary conditions controlling the incoming characteristic constraint fields were specified and combined with more or less ad hoc conditions controlling the gauge and gravitational degrees of freedom, and verified to satisfy the Lopatinsky condition (5.27) using the Laplace method; see [395, 108, 378, 220, 363, 368].
The breakthrough in the metric case came with the work by Kreiss and Winicour [267], who formulated a well-posed IBVP for the linearized Einstein vacuum field equations with harmonic coordinates. Their method is based on the pseudo-differential first-order reduction of the wave equation described in Section 5.1.3, which, when combined with Sommerfeld boundary conditions, yields a problem that is strongly well posed in the generalized sense and, when applied to systems of equations, allows a certain hierarchical coupling in the boundary conditions. This work was then generalized to shifted wave equations and higher-order absorbing boundary conditions in [369]. Later, it was recognized that the results in [267] could also be established using the usual a priori energy estimates obtained from integration by parts [263]. Finally, it was found that the boundary conditions imposed were actually maximal dissipative for a specific nonstandard class of first-order symmetric hyperbolic reductions of the wave system; see Section 5.2.1. Unlike the reductions considered in earlier work, this nonstandard class has the property that the boundary surface is noncharacteristic, which implies that no zero-speed fields are present, and yields a strongly well-posed system. Based on this reduction and the theory of quasilinear symmetric hyperbolic formulations with maximal dissipative boundary conditions [218, 388], it was possible to extend the results in [267, 263] and formulate a well-posed IBVP for quasilinear systems of wave equations [264] with a certain class of boundary conditions (see Theorem 8), which was sufficiently flexible to treat the Einstein equations. Furthermore, the new reduction also offers the interesting possibility of extending the proof to the discretized case using finite difference operators satisfying the summation by parts property, discussed in Sections 8.3 and 9.4.
In order to parallel the presentation in Section 4, here we focus on the IBVP for Einstein’s equations in generalized harmonic coordinates and the IBVP for the BSSN system. The first case, which is discussed in Section 6.1, is an application of Theorem 8. In the BSSN case, only partial results have been obtained so far, but since the BSSN system is widely used, we nevertheless present some of these results in Section 6.2. In Section 6.3 we discuss some of the problems encountered when trying to formulate a geometric uniqueness theorem and, finally, in Section 6.4 we briefly mention alternative approaches to the IBVP, which do not require an artificial boundary.
For an alternative approach to treating the IBVP, which is based on the imposition of the Gauss-Codazzi equations at \({\mathcal T}\), see [191, 192, 194, 193]. For numerical studies, see [249, 104, 40, 404, 405, 98, 287, 244, 378, 253, 61, 362, 35, 33, 368, 57, 56], especially [366] and [369] for a comparison between different boundary conditions used in numerical relativity and [365] for a numerical implementation of higher-order absorbing boundary conditions. For review articles on the IBVP in general relativity, see [372, 355, 435].
At present, there are no numerical simulations that are based directly on the well-posed IBVP for the tetrad formulation [187] or the well-posed IBVP for the harmonic formulation [267, 263, 264] described in Section 6.1, nor is there a numerical implementation of the constraint-preserving boundary conditions for the BSSN system presented in Section 6.2. The closest example is the harmonic approach described in [286, 363, 366], which has been shown to be well posed in the generalized sense in the high-frequency limit [369]. However, as mentioned above, the well-posed IBVP in [264] opens the door to a numerical discretization based on the energy method, which can be proven to be stable, at least in the linearized case.
The harmonic formulation
Here, we discuss the IBVP formulated in [264] for the Einstein vacuum equations in generalized harmonic coordinates. The starting point is a manifold of the form M = [0, T] × Σ, with Σ a three-dimensional compact manifold with C^{∞}-boundary ∂Σ, and a given, fixed smooth background metric \({\overset \circ g _{\alpha \beta}}\) with corresponding Levi-Civita connection \(\overset \circ \nabla\), as in Section 4.1. We assume that the time slices Σ_{ t }:= {t} × Σ are spacelike and that the boundary surface \({\mathcal T}: = [0,T] \times \partial \Sigma\) is timelike with respect to \({\overset \circ g_{\alpha \beta}}\).
In order to formulate the boundary conditions, we first construct a null tetrad \(\{{K^\mu},{L^\mu},{Q^\mu},{{\bar Q}^\mu}\}\), which is adapted to the boundary. This null tetrad is based on the choice of a future-directed timelike vector field T^{μ} tangent to \({\mathcal T}\), which is normalized such that g_{ μν }T^{μ}T^{ν} = −1. One possible choice is to tie T^{μ} to the foliation Σ_{ t }, and then define it in the direction orthogonal to the cross sections ∂Σ_{ t } = {t} × ∂Σ of the boundary surface. A more geometric choice has been proposed in [186], where instead T^{μ} is chosen as a distinguished future-directed timelike eigenvector of the second fundamental form of \({\mathcal T}\), as embedded in (M, g). Next, we denote by N^{μ} the unit outward normal to \({\mathcal T}\) with respect to the metric g_{ μν } and complete T^{μ} and N^{μ} to an orthonormal basis {T^{μ}, N^{μ}, V^{μ}, W^{μ}} of T_{ p }M at each point \(p \in {\mathcal T}\). Then, we define the complex null tetrad by
where \(i = \sqrt{-1}\). Notice that the construction of these vectors is implicit, since it depends on the dynamical metric g_{αβ}, which is yet unknown. However, the dependency is algebraic and does not involve any derivatives of g_{ αβ }. We also note that the complex null vector Q^{μ} is not unique, since it can be rotated by an angle φ ∈ ℝ, Q^{μ} ↦ e^{iφ}Q^{μ}. Finally, we define a radial function r on \({\mathcal T}\) as the areal radius of the cross sections ∂Σ_{ t } with respect to the background metric.
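Since the tetrad construction is purely algebraic, its defining relations can be sanity-checked numerically. A minimal sketch on a flat background \(\eta = \mathrm{diag}(-1,1,1,1)\), assuming the standard combinations K = T + N, L = T − N (as quoted in Section 6.3) and Q = V + iW (an assumption consistent with the normalization \(Q^\mu \bar Q_\mu = 2\) used in Section 6.2):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # flat background metric (sketch)

def dot(u, v):
    return u @ eta @ v

# orthonormal basis {T, N, V, W} at a boundary point:
# T timelike unit, N outward normal, V, W tangential (trivial flat choice)
T = np.array([1.0, 0.0, 0.0, 0.0])
N = np.array([0.0, 1.0, 0.0, 0.0])
V = np.array([0.0, 0.0, 1.0, 0.0])
W = np.array([0.0, 0.0, 0.0, 1.0])

# adapted null tetrad: K = T + N, L = T - N, Q = V + iW
K = T + N
L = T - N
Q = V + 1j * W

print(dot(K, K), dot(L, L))    # 0.0 0.0   (K and L are null)
print(dot(K, L))               # -2.0
print(dot(Q, np.conj(Q)))      # (2+0j)    (normalization Q.Qbar = 2)
print(dot(Q, Q))               # 0j        (Q is null)
```

Rotating Q by a phase e^{iφ}, as noted above, leaves all of these inner products unchanged.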
Then, the boundary conditions, which were proposed in [264] for the harmonic system (4.5), are:
where \({\overset \circ \nabla _K}{h_{LQ}}: = {K^\mu}{L^\alpha}{Q^\beta}{\overset \circ \nabla _\mu}{h_{\alpha \beta}},{h_{KL}}: = {K^\alpha}{L^\beta}{h_{\alpha \beta}},{H_K}: = {K^\mu}{H_\mu}\), etc., and where q_{ K } and q_{ L } are real-valued given smooth functions on \({\mathcal T}\) and q_{ Q } and q_{ QQ } are complex-valued given smooth functions on \({\mathcal T}\). Since Q is complex, these constitute ten real boundary conditions for the metric coefficients h_{ αβ }. The content of the boundary conditions (6.2, 6.3, 6.4, 6.5) can be clarified by considering linearized gravitational waves on a Minkowski background with a spherical boundary. The analysis in [264] shows that in this context the four real conditions (6.2, 6.3, 6.4) are related to the gauge freedom, and the two conditions (6.5) control the gravitational radiation. The remaining conditions (6.6, 6.7, 6.8) enforce the constraint C^{μ} = 0 on the boundary, see Eq. (4.6), and so together with the constraint propagation system (4.14) and the initial constraints (4.15) they guarantee that the constraints are correctly propagated. Based on these observations, it is expected that these boundary conditions yield small spurious reflections in the case of a nearly-spherical boundary in the wave zone of an asymptotically-flat curved spacetime.
Well-posedness of the IBVP
The IBVP consisting of the harmonic Einstein equations (4.5), initial data (4.7) and the boundary conditions (6.2–6.8) can be shown to be well posed as an application of Theorem 8. For this, we first notice that the evolution equations (4.5) have the required form of Eq. (5.113), where E is the vector bundle of symmetric, covariant tensor fields h_{ μν } on M. Next, the boundary conditions can be written in the form of Eq. (5.115) with α = 1. In order to compute the matrix coefficients \({c^\mu}{^A_B}\), it is convenient to decompose h_{ μν } = h^{A}e_{ Aμν } in terms of the basis vectors
with \({h^1} = {h_{LL}}/4,\,{h^2} = {{\bar h}^3} = {h_{LQ}}/4,\,{h^4} = {h_{Q\bar Q}}/4,\,{h^5} = {{\bar h}^6} = {h_{QQ}}/4,\,{h^7} = {{\bar h}^8} = {h_{KQ}}/4,\,{h^9} = {h_{KL}}/4,\,{h^{10}} = {h_{KK}}/4\). With respect to this basis, the only nonzero matrix coefficients are
which has the required upper triangular form with zeros on the diagonal. Therefore, the hypotheses of Theorem 8 are verified, and one obtains a well-posed IBVP for Einstein’s equations in harmonic coordinates.
This result also applies to the modified system (4.16), since the constraint damping terms that are added modify neither the principal part of the main evolution system nor that of the constraint propagation system.
Boundary conditions for BSSN
Here we discuss boundary conditions for the BSSN system (4.52–4.59), which is used extensively in numerical calculations of spacetimes describing dynamical black holes and neutron stars. Unfortunately, to date, this system lacks an initial-boundary value formulation for which well-posedness in the full nonlinear case has been proven. Without doubt, the reason for this lies in the structure of the evolution equations, which are mixed first/second order in space, and whose principal part is much more complicated than in the harmonic case, where one deals with a system of wave equations.
A first step towards formulating a well-posed IBVP for the BSSN system was performed in [52], where the evolution equations (4.52, 4.53, 4.56–4.59) with a fixed shift and the relation f = μ ≡ (4m − 1)/3 were reduced to a first-order symmetric hyperbolic system. Then, a set of six boundary conditions consistent with this system could be formulated based on the theory of maximal dissipative boundary conditions. Although this gives rise to a well-posed IBVP, the boundary conditions specified in [52] are not compatible with the constraints, and therefore one does not necessarily obtain a solution to the full set of Einstein’s equations beyond the domain of dependence of the initial-data surface. In a second step, constraint-preserving boundary conditions for BSSN with a fixed shift were formulated in [220] and cast into maximal dissipative form for the linearized system (see also [15]). However, even at the linearized level, these boundary conditions are too restrictive, because they constitute a combination of Dirichlet and Neumann boundary conditions on the metric components, and in this sense they are totally reflecting instead of absorbing. More general constraint-preserving boundary conditions were also considered in [220] and, based on the Laplace method, shown to satisfy the Lopatinsky condition (5.27).
Radiative-type constraint-preserving boundary conditions for the BSSN system (4.52–4.59) with dynamical lapse and shift were formulated in [315] and shown to yield a well-posed IBVP in the linearized case. The assumptions on the parameters in this formulation are m = 1, f > 0, κ = 4GH/3 > 0, f ≠ κ, which guarantee that the BSSN system is strongly hyperbolic, and, as long as e^{4ϕ} ≠ 2α, they allow for the gauge conditions (4.62, 4.63) used in recent numerical calculations, where f = 2/α and κ = e^{4ϕ}/α^{2}; see Section 4.3.1. In the following, we describe this IBVP in more detail. First, we notice that the analysis in Section 4.3.1 reveals that for the standard choice m = 1 the characteristic speeds with respect to the unit outward normal s^{i} to the boundary are
where β^{s} = β^{i}s_{ i } is the normal component of the shift. According to the theory described in Section 5, it is the sign of these speeds that determines the number of incoming fields and boundary conditions that must be specified. Namely, the number of boundary conditions is equal to the number of characteristic fields with positive speed. Assuming ∣β^{s}∣ is small enough such that \(\vert {\beta ^s}/\alpha \vert < \min \{1,\sqrt f, \sqrt {GH}, \sqrt \kappa \}\), which is satisfied asymptotically if β^{s} → 0 and α → 1, it is the sign of the normal component of the shift that determines the number of boundary conditions. Therefore, in order to keep the number of boundary conditions fixed throughout the evolution^{Footnote 26} one has to ensure that either β^{s} > 0 or β^{s} ≤ 0 at the boundary surface. If the condition β^{s} → 0 is imposed asymptotically, the most natural choice is to set the normal component of the shift to zero at the boundary, β^{s} = 0 at \({\mathcal T}\). The analysis in [52] then reveals that there are precisely nine incoming characteristic fields at the boundary, and thus nine conditions have to be imposed at the boundary. These nine boundary conditions are as follows:

Boundary conditions on the gauge variables
There are four conditions that must be imposed on the gauge functions, namely the lapse and shift. These conditions are motivated by the linearized analysis, where the gauge propagation system, consisting of the evolution equations for lapse and shift obtained from the BSSN equations (4.52–4.55, 4.59), decouples from the remaining evolution equations. Surprisingly, this gauge propagation system can be cast into symmetric hyperbolic form [315], for which maximal dissipative boundary conditions can be specified, as described in Section 5.2. It is remarkable that the gauge propagation system has such a nice mathematical structure, since the equations (4.52, 4.54, 4.55) have been specified by hand and mostly motivated by numerical experiments instead of mathematical analysis.
In terms of the operator \({\Pi ^i}_j = {\delta ^i}_j  {s^i}{s_j}\) projecting onto vectors tangential to the boundary, the four conditions on the gauge variables can be written as
$${s^i}{\partial _i}\alpha = 0,$$(6.10)

$${\beta ^s} = 0,$$(6.11)

$${\Pi ^i}_j\,\left({{\partial _t} + {{\sqrt {3\kappa}} \over 2}{s^k}{\partial _k}} \right){\beta ^j} = {\kappa \over {f - \kappa}}{\Pi ^i}_j\,{\tilde \gamma ^{jk}}{\partial _k}\alpha .$$(6.12)

Eq. (6.10) is a Neumann boundary condition on the lapse, and Eq. (6.11) sets the normal component of the shift to zero, as explained above. Geometrically, this implies that the boundary surface \({\mathcal T}\) is orthogonal to the time slices Σ_{ t }. The two remaining conditions in Eq. (6.12) are Sommerfeld-like boundary conditions involving the tangential components of the shift and the tangential derivatives of the lapse; they arise from the analysis of the characteristic structure of the gauge propagation system. An alternative to Eq. (6.12), also described in [315], is to set the tangential components of the shift to zero, which, together with Eq. (6.11), is equivalent to setting β^{i} = 0 at the boundary. This alternative may be better suited for IBVPs with nonsmooth boundaries, such as cubes, where additional compatibility conditions must be enforced at the edges.
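The projector \(\Pi^i{}_j = \delta^i{}_j - s^i s_j\) and the normal/tangential split of the shift used in Eqs. (6.10–6.12) can be checked in a few lines; a sketch assuming a flat spatial metric, so that upper and lower indices may be identified:

```python
import numpy as np

rng = np.random.default_rng(0)

s = rng.normal(size=3)
s /= np.linalg.norm(s)              # unit outward normal s^i to the boundary

Pi = np.eye(3) - np.outer(s, s)     # projector onto boundary-tangential vectors

beta = rng.normal(size=3)           # a shift vector at a boundary point
beta_s = s @ beta                   # normal component beta^s = beta^i s_i
beta_tan = Pi @ beta                # tangential part, subject to Eq. (6.12)

assert np.allclose(Pi @ Pi, Pi)     # Pi is idempotent
assert np.allclose(Pi @ s, 0.0)     # Pi annihilates the normal
assert np.allclose(beta_s * s + beta_tan, beta)   # the split is exact
```

Setting β^s = 0 at the boundary, as in Eq. (6.11), thus amounts to keeping only the `beta_tan` piece of the shift there.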

Constraintpreserving boundary conditions
Next, there are three conditions requiring that the momentum constraint be satisfied at the boundary. In terms of the BSSN variables this implies
$${\tilde D^j}{\tilde A_{ij}} - {2 \over 3}{\tilde D_i}K + 6{\tilde A_{ij}}{\tilde D^j}\phi = 8\pi {G_N}{j_i}.$$(6.13)

As shown in [315], Eq. (6.13) yields homogeneous maximal dissipative boundary conditions for a symmetric hyperbolic first-order reduction of the constraint propagation system (4.74, 4.75, 4.76). Since this system is also linear and its boundary matrix has constant rank if β^{s} = 0, it follows from Theorem 7 that the propagation of constraint violations is governed by a well-posed IBVP. This implies, in particular, that solutions whose initial data satisfy the constraints exactly automatically satisfy the constraints on each time slice Σ_{ t }. Furthermore, small initial constraint violations, which are usually present in numerical applications, yield solutions for which the growth of the constraint violations can be bounded in terms of the initial violations.

Radiation controlling boundary conditions
Finally, the last two boundary conditions are intended to control the incoming gravitational radiation, at least approximately, and specify the complex Weyl scalar Ψ_{0}, cf. Example 32. In order to describe this boundary condition we first define the quantities \({{\bar {\mathcal E}}_{ij}}: = {{\tilde R}_{ij}} + R_{ij}^\phi + {e^{4\phi}}({1 \over 3}K{{\tilde A}_{ij}} - {{\tilde A}_{il}}\tilde A_j^l) - 4\pi {G_N}{\sigma _{ij}}\) and \({{\bar {\mathcal B}}_{kij}}: = {e^{4\phi}}\left[ {{{\tilde D}_k}{{\tilde A}_{ij}} - 4\left({{{\tilde D}_{(i}}\phi} \right){{\tilde A}_{j)k}}} \right]\), which determine the electric and magnetic parts of the Weyl tensor through \({E_{ij}} = {{\bar {\mathcal E}}_{ij}} - {1 \over 3}{\gamma _{ij}}{\gamma ^{kl}}{{\bar {\mathcal E}}_{kl}}\) and \({B_{ij}} = {\varepsilon _{kl(i}}{{\bar {\mathcal B}}^{kl}}{\,_{j)}}\), respectively. Here, ε_{ kij } denotes the volume form with respect to the three-metric γ_{ ij }. In terms of the operator \({P^{ij}}_{lm} = {\Pi ^i}_{(l}{\Pi ^j}_{m)} - {1 \over 2}{\Pi ^{ij}}{\Pi _{lm}}\) projecting onto symmetric trace-free tensors tangential to the boundary, the boundary condition reads
$${P^{ij}}_{lm}{\bar {\mathcal E} _{ij}} + \left({{s^k}{P^{ij}}_{lm} - {s^i}{P^{kj}}_{lm}} \right){\bar {\mathcal B} _{kij}} = {P^{ij}}_{lm}{G_{ij}},$$(6.14)

with G_{ ij } a given smooth tensor field on the boundary surface \({\mathcal T}\). The relation between G_{ ij } and Ψ_{0} is the following: if n = α^{−1}(∂_{ t } − β^{i}∂_{ i }) denotes the future-directed unit normal to the time slices, we may construct an adapted Newman-Penrose null tetrad \(\{K,L,Q,\bar Q\}\) at the boundary by defining K := n + s, L := n − s, and by choosing Q to be a complex null vector orthogonal to K and L, normalized such that \({Q^\mu}{{\bar Q}_\mu} = 2\). Then, we have Ψ_{0} = (E_{ kl } − iB_{ kl })Q^{k}Q^{l} = G_{ kl }Q^{k}Q^{l}. For typical applications involving the modeling of isolated systems one may set G_{ ij } to zero. However, since this is in general not compatible with the initial data (see the discussion in Section 10.3), an alternative is to freeze the value of G_{ ij } to the one computed from the initial data.
The boundary condition (6.14) can be partially motivated by considering an isolated system, which, globally, is described by an asymptotically-flat spacetime. Therefore, if the outer boundary is placed far enough away from the strong-field region, one may linearize the field equations on a Minkowski background to a first approximation. In this case, one is in the same situation as in Example 32, where the Weyl scalar Ψ_{0} is an outgoing characteristic field when constructed from the adapted null tetrad. Furthermore, one can also appeal to the peeling behavior of the Weyl tensor [328], in which Ψ_{0} is the fastest decaying component along outgoing null geodesics and describes the incoming radiation at past null infinity. While Ψ_{0} can only be defined in an unambiguous way at null infinity, where a preferred null tetrad exists, the boundary condition (6.14) has been successfully implemented and tested numerically for truncated domains with artificial boundaries in the context of the harmonic formulation; see, for example, [366]. Estimates on the amount of spurious reflection introduced by this condition have also been derived in [88, 89]; see also [135].
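The algebraic content of the projector \(P^{ij}{}_{lm}\) entering Eq. (6.14), namely that it extracts the symmetric, boundary-tangential, trace-free part of a tensor, can also be verified directly; a sketch, again assuming a flat spatial metric so that indices can be raised freely:

```python
import numpy as np

rng = np.random.default_rng(1)

s = rng.normal(size=3)
s /= np.linalg.norm(s)              # unit outward normal to the boundary
Pi = np.eye(3) - np.outer(s, s)     # tangential projector Pi^i_j

def P(T):
    """Apply P^{ij}_{lm} = Pi^i_(l Pi^j_m) - (1/2) Pi^{ij} Pi_{lm} to T_{lm}:
    symmetrize, project tangentially, remove the trace within the 2-surface."""
    T = 0.5 * (T + T.T)
    T = Pi @ T @ Pi
    return T - 0.5 * Pi * np.einsum('ij,ij->', Pi, T)

A = rng.normal(size=(3, 3))
PA = P(A)

assert np.allclose(P(PA), PA)                       # P is a projector
assert np.allclose(PA, PA.T)                        # output is symmetric
assert np.allclose(PA @ s, 0.0)                     # output is tangential
assert abs(np.einsum('ij,ij->', Pi, PA)) < 1e-12    # trace-free on the 2-surface
```

The two surviving degrees of freedom of `PA` are precisely the two radiative polarizations that Eq. (6.14) is meant to control.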
Geometric existence and uniqueness
The results mentioned so far concerning the well-posed IBVP for Einstein’s field equations in the tetrad formulation of [187], in the metric formulation with harmonic coordinates described in Section 6.1, or in the linearized BSSN formulation described in Section 6.2 allow one, from the PDE point of view, to construct unique solutions on a manifold of the form M = [0, T] × Σ, given appropriate initial and boundary data. However, since general relativity is a diffeomorphism-invariant theory, one needs to pose the IBVP from a geometric perspective. In particular, the following questions arise, which, for simplicity, we formulate only for the vacuum case:

Geometric existence. Let (M, g) be any smooth solution of Einstein’s vacuum field equations on the manifold M = [0, T] × Σ corresponding to initial data (h, k) on Σ_{0} and boundary data ψ on \({\mathcal T}\), where h and k represent, respectively, the first and second fundamental forms of the initial surface Σ_{0} as embedded in (M, g). Is it possible to reproduce this solution with any of the well-posed IBVPs mentioned so far, at least on a submanifold M′ = [0, T′] × Σ with 0 < T′ ≤ T? That is, does there exist initial data f and boundary data q for this IBVP and a diffeomorphism ϕ: M′ → ϕ(M′) ⊂ M, which leaves Σ_{0} and \({{\mathcal T}\prime}\) invariant, such that the metric constructed from this IBVP is equal to ϕ*g on M′?

Geometric uniqueness. Is the solution (M, g) uniquely determined by the data (h, k, ψ)? Given a well-posed IBVP for which geometric existence holds, the question about geometric uniqueness can be reduced to the analysis of this particular IBVP in the following way: let u_{1} and u_{2} be two solutions of the IBVP on the manifold M = [0, T] × Σ with corresponding data (f_{1}, q_{1}) and (f_{2}, q_{2}). Suppose the two solutions induce the same data (h, k) on Σ_{0} and ψ on \({\mathcal T}\). Does there exist a diffeomorphism ϕ: M′ = [0, T′] × Σ → ϕ(M′) ⊂ M, which leaves Σ_{0} and \({{\mathcal T}\prime}\) invariant, such that the metrics g_{1} and g_{2} corresponding to u_{1} and u_{2} are related to each other by g_{2} = ϕ*g_{1} on M′?
These geometric existence and uniqueness problems have been solved in the context of the Cauchy problem without boundaries; see [127] and Section 4.1.3. However, when boundaries are present, several new difficulties appear as pointed out in [186]; see also [187, 184]:

(i)
It is a priori not clear what the boundary data ψ should represent geometrically. Unlike the case of the initial surface, where the data represents the first and second fundamental forms of Σ_{0} as a spatial surface embedded in the constructed spacetime (M, g), it is less clear what the geometric meaning of ψ should be since it is restricted by the characteristic structure of the evolution equations, as discussed in Section 5.

(ii)
The boundary data (q_{ K }, q_{ L }, q_{ Q }, q_{ QQ }) in the boundary conditions (6.2, 6.3, 6.4, 6.5) for the harmonic formulation and the boundary data G_{ ij } in the boundary condition (6.14) for the BSSN formulation ultimately depend on the specific choice of a future-directed timelike vector field T at the boundary surface \({\mathcal T}\). Together with the unit outward normal N to \({\mathcal T}\), this vector defines the preferred null directions K = T + N and L = T − N, which are used to construct the boundary-adapted null tetrad in the harmonic case and the projection operators \({\Pi ^\mu}_\nu = {\delta ^\mu}_\nu + {T^\mu}{T_\nu} - {N^\mu}{N_\nu}\) and \({P^{\mu \nu}}_{\alpha \beta} = {\Pi ^\mu}_\alpha {\Pi ^\nu}_\beta - {1 \over 2}{\Pi ^{\mu \nu}}{\Pi _{\alpha \beta}}\) in the BSSN one. Although it is tempting to define T as the unit, future-directed timelike vector tangent to \({\mathcal T}\), which is orthogonal to the cross sections ∂Σ_{ t }, this definition would depend on the particular foliation Σ_{ t } the formulation is based on, and so the resulting vector T would be gauge-dependent. A similar issue arises in the tetrad formulation of [187].

(iii)
When addressing the geometric uniqueness issue, an interesting question is whether or not it is possible to determine from the data sets (f_{1}, q_{1}) and (f_{2}, q_{2}) alone if they are equivalent in the sense that their solutions u_{1} and u_{2} induce the same geometric data (h, k, ψ). Therefore, the question is whether or not one can identify equivalent data sets by considering only transformations on the initial and boundary surfaces Σ_{0} and \({\mathcal T}\), without knowing the solutions u_{1} and u_{2}.
Although a complete answer to these questions remains a difficult task, there has been some recent progress towards their understanding. In [186] a method was proposed to geometrically single out a preferred time direction T at the boundary surface \({\mathcal T}\). This is done by considering the trace-free part of the second fundamental form, and proving that under certain conditions, which are stable under perturbations, the corresponding linear map on the tangent space possesses a unique timelike eigenvector. Together with the unit outward normal vector N, the vector field T defines a distinguished adapted null tetrad at the boundary, from which geometrically meaningful boundary data could be defined. For instance, the complex Weyl scalar Ψ_{0} can then be defined as the contraction Ψ_{0} = C_{ αβγδ }K^{α}Q^{β}K^{γ}Q^{δ} of the Weyl tensor C_{ αβγδ } associated to the metric g_{ μν } along the null vectors K and Q, and the definition is unique up to the usual spin rotational freedom Q ↦ e^{iφ}Q; therefore, the Weyl scalar Ψ_{0} is a good candidate for forming part of the boundary data ψ.
In [355] it was suggested that the unique specification of a vector field T may not be a fundamental problem, but rather the manifestation of the inability to specify a non-incoming radiation condition correctly. In the linearized case, for example, setting to zero the Weyl scalar Ψ_{0} computed from the boundary-adapted tetrad is transparent to gravitational plane waves traveling along the specific null direction K = T + N, see Example 32, but it induces spurious reflections for outgoing plane waves traveling in other null directions. Therefore, a genuine non-incoming radiation condition should be, in fact, independent of any specific null or timelike direction at the boundary, and can only depend on the normal vector N. This is indeed the case for much simpler systems like the scalar wave equation on a Minkowski background [153], where perfectly absorbing boundary conditions are formulated as a nonlocal condition, which is independent of a preferred time direction at the boundary.
Aside from controlling the incoming gravitational degrees of freedom, the boundary data ψ should also comprise information related to the geometric evolution of the boundary surface. In [187] this was achieved by specifying the mean curvature of \({\mathcal T}\) as part of the boundary data. In the harmonic formulation described in Section 6.1 this information is presumably contained in the functions q_{ K }, q_{ L } and q_{ Q }, but their geometric interpretation is not clear.
In order to illustrate some of the issues related to the geometric existence and uniqueness problem in a simpler context, in what follows we analyze the IBVP for linearized gravitational waves propagating on a Minkowski background. Before analyzing this case, however, we make two remarks. First, it should be noted [186] that the geometric uniqueness problem, especially an understanding of point (iii), also has practical interest, since in long-term evolutions it is possible that the gauge threatens to break down at some point, requiring a redefinition. The second remark concerns the formulation of the Einstein IBVP in generalized harmonic coordinates, described in Sections 4.1 and 6.1, where general covariance was maintained by introducing a background metric \({\overset \circ g _{\mu \nu}}\) on the manifold M. IBVPs based on this approach have been formulated in [369] and [264] and further developed in [434] and [433]. However, one has to emphasize that this approach does not automatically solve the geometric existence and uniqueness problems described here: although it is true that the IBVP is invariant with respect to any diffeomorphism ϕ: M → M, which acts on the dynamical and the background metric at the same time, the question of the dependency of the solution on the background metric remains.
Geometric existence and uniqueness in the linearized case
Here we analyze some of the geometric existence and uniqueness issues of the IBVP for Einstein’s field equations in the much simpler setting of linearized gravity on Minkowski space, where the vacuum field equations reduce to
where h_{ αβ } denotes the first variation of the metric, h:= η^{αβ}h_{ αβ } its trace with respect to the Minkowski background metric η_{ αβ }, and ∇_{ μ } is the covariant derivative with respect to η_{ αβ }. An infinitesimal coordinate transformation parametrized by a vector field ξ^{μ} induces the transformation
where ξ_{ α } := η_{ αβ }ξ^{β}.
Let us consider the linearized Cauchy problem without boundaries first, where initial data is specified at the initial surface Σ_{0} = {0} × ℝ^{3}. The initial data is specified geometrically by the first and second fundamental forms of Σ_{0}, which, in the linearized case, are represented by a pair \((h_{ij}^{(0)},k_{ij}^{(0)})\) of covariant symmetric tensor fields on Σ_{0}. We assume \((h_{ij}^{(0)},k_{ij}^{(0)})\) to be smooth and to satisfy the linearized Hamiltonian and momentum constraints
where G^{ijrs}:= δ^{i(r}δ^{s)j} − δ^{ij}δ^{rs}. A solution h_{ αβ } of Eq. (6.15) with the induced data corresponding to \((h_{ij}^{(0)},k_{ij}^{(0)})\) up to a gauge transformation (6.16) satisfies
where X_{ j } = ξ_{ j } and f = ξ_{0} are smooth and represent the initial gauge freedom. Then, one has:
Theorem 9. The initial-value problem ( 6.15 , 6.18 ) possesses a smooth solution h_{ αβ }, which is unique up to an infinitesimal coordinate transformation \({\tilde h_{\alpha \beta}} = {h_{\alpha \beta}} + 2{\nabla _{(\alpha}}{\xi _{\beta)}}\) generated by a vector field ξ^{α}.
Proof. We first show the existence of a solution in the linearized harmonic gauge \({C_\beta} = {\nabla ^\mu}{h_{\beta \mu}} - {1 \over 2}{\nabla _\beta}h = 0\), for which Eq. (6.15) reduces to the system of wave equations ∇^{μ}∇_{ μ }h_{ αβ } = 0. The initial data, \(({h_{\alpha \beta}}{\vert _{{\Sigma _0}}},{\partial _t}{h_{\alpha \beta}}{\vert _{{\Sigma _0}}})\), for this system is chosen such that \({h_{ij}}{\vert _{{\Sigma _0}}} = h_{ij}^{(0)},{\partial _t}{h_{ij}}{\vert _{{\Sigma _0}}} = 2{\partial _{(i}}{h_{j)0}}{\vert _{{\Sigma _0}}} - 2k_{ij}^{(0)}\) and \({\partial _t}{h_{00}}{\vert _{{\Sigma _0}}} = 2{\delta ^{ij}}k_{ij}^{(0)},{\partial _t}{h_{0j}}{\vert _{{\Sigma _0}}} = {\partial ^i}(h_{ij}^{(0)} - {1 \over 2}{\delta _{ij}}{\delta ^{kl}}h_{kl}^{(0)}) + {1 \over 2}{\partial _j}{h_{00}}{\vert _{{\Sigma _0}}}\), where \((h_{ij}^{(0)},k_{ij}^{(0)})\) satisfy the constraint equations (6.17) and where the initial data for h_{00} and h_{0j} is chosen smooth but otherwise arbitrary. This choice implies the satisfaction of Eq. (6.18) with X_{ j } = 0 and f = 0 and the initial conditions \({C_\beta}{\vert _{{\Sigma _0}}} = 0\) and \({\partial _t}{C_\beta}{\vert _{{\Sigma _0}}} = 0\) on the constraint fields C_{ β }. Therefore, solving the wave equation ∇^{μ}∇_{ μ }h_{ αβ } = 0 with such data, we obtain a solution of the linearized Einstein equations (6.15) in the harmonic gauge with initial data satisfying (6.18) with X_{ j } = 0 and f = 0. This shows geometric existence for the linearized harmonic formulation.
As for uniqueness, suppose we had two smooth solutions of Eqs. (6.15, 6.18). Then, since the equations are linear, the difference h_{ αβ } between these two solutions also satisfies Eqs. (6.15, 6.18) with trivial data \(h_{ij}^{(0)} = 0,\,\,k_{ij}^{(0)} = 0\). We show that h_{ αβ } can be transformed away by means of an infinitesimal gauge transformation (6.16). For this, define \({\tilde h_{\alpha \beta}}: = {h_{\alpha \beta}} + 2{\nabla _{(\alpha}}{\xi _{\beta)}}\) where ξ_{ β } is required to satisfy the inhomogeneous wave equation
with initial data for ξ_{ β } defined by \({\xi _0}{\vert _{{\Sigma _0}}} = -f,\,\,{\xi _i}{\vert _{{\Sigma _0}}} = -{X_i},\,\,{\partial _t}{\xi _0}{\vert _{{\Sigma _0}}} = -{h_{00}}/2,\,\,{\partial _t}{\xi _i}{\vert _{{\Sigma _0}}} = -{h_{0i}} + {\partial _i}f\). Then, by construction, \({\tilde h_{\alpha \beta}}\) satisfies the harmonic gauge, and it can be verified that \({\tilde h_{\alpha \beta}}{\vert _{{\Sigma _0}}} = {\partial _t}{\tilde h_{\alpha \beta}}{\vert _{{\Sigma _0}}} = 0\). Therefore, \({\tilde h_{\alpha \beta}}\) is a solution of the wave equation \({\nabla ^\mu}{\nabla _\mu}{\tilde h_{\alpha \beta}} = 0\) with trivial initial data, and it follows that \({\tilde h_{\alpha \beta}} = 0\) and that \({h_{\alpha \beta}} = -2{\nabla _{(\alpha}}{\xi _{\beta)}}\) is a pure gauge mode. □
It follows from the existence part of the proof that the quantities \({h_{00}}{\vert _{{\Sigma _0}}}\) and \({h_{0j}}{\vert_{{\Sigma _0}}}\), corresponding to linearized lapse and shift, parametrize pure gauge modes in the linearized harmonic formulation.
Next, we turn to the IBVP on the manifold M = [0, T] × Σ. Let us first look at the boundary conditions (6.2–6.5), which, in the linearized case, reduce to
There is no problem in repeating the geometric existence part of the proof on M imposing these boundary conditions, and using the IBVP described in Section 6.1. However, there is a problem when trying to prove the uniqueness part. This is because a gauge transformation (6.16) induces the following transformations on the boundary data,
which overdetermines the vector field ξ_{ β } at the boundary. On the other hand, replacing the boundary condition (6.5) by the specification of the Weyl scalar Ψ_{0}, leads to [286, 369]
Since the left-hand side is gauge-invariant, there is no overdetermination of ξ_{ β } at the boundary any more, and the transformation properties of the remaining boundary data q_{ K }, q_{ L } and q_{ Q } provide a complete set of boundary data for ξ_{ K }, ξ_{ L } and ξ_{ Q }, which may be used in conjunction with the wave equation ∇^{μ}∇_{ μ }ξ_{ β } = 0 in order to formulate a well-posed IBVP [369]. Provided Ψ_{0} is smooth and the compatibility conditions are satisfied at the edge \(S = {\Sigma _0} \cap {\mathcal T}\), it follows:
Theorem 10. [355] The IBVP ( 6.15 , 6.18 , 6.21 ) possesses a smooth solution h_{ αβ }, which is unique up to an infinitesimal coordinate transformation \({{\tilde h}_{\alpha \beta}} = {h_{\alpha \beta}} + 2{\nabla _{(\alpha}}{\xi _{\beta)}}\) generated by a vector field ξ^{α}.
In conclusion, we can say that, in the simple case of linear gravitational waves propagating on a Minkowski background, we have resolved the issues (i–iii). Correct boundary data is given to the linearized Weyl scalar Ψ_{0} computed from the boundary-adapted tetrad. To linear order, Ψ_{0} is invariant with respect to coordinate transformations, and the timelike vector field T appearing in its definition can be defined geometrically by taking the future-directed unit normal to the initial surface Σ_{0} and parallel transporting it along the geodesics orthogonal to Σ_{0}.
Whether or not this result can be generalized to the full nonlinear case is not immediately clear. In our linearized analysis we have imposed no restrictions on the normal component ξ_{N} of the vector field generating the infinitesimal coordinate transformation. However, such a restriction is necessary in order to keep the boundary surface fixed under a diffeomorphism. Unfortunately, it does not seem possible to restrict ξ_{N} in a natural way with the boundary conditions constructed so far.
Alternative approaches
Although the formulation of Einstein’s equations on a finite spatial domain with an artificial timelike boundary is currently the most widely used approach in numerical simulations, there are a number of difficulties associated with it. First, as discussed above, spurious reflections from the boundary surface may contaminate the solution unless the boundary conditions are chosen with great care. Second, in principle there is a problem with wave extraction, since gravitational waves can only be defined in an unambiguous (gauge-invariant) way at future null infinity. Third, there is an efficiency problem, since in the far zone the waves propagate along outgoing null geodesics, so that hyperboloidal surfaces, which are asymptotically null, should be better adapted to the problem. These issues have become more apparent as numerical simulations have achieved higher accuracy, to the point that boundary and wave-extraction artifacts are noticeable, and they have motivated a number of alternative approaches.
One of them is that of compactification schemes, which include spacelike or null infinity in the computational domain; for schemes compactifying spacelike infinity, see [335, 336]. Conformal compactifications are reviewed in [172, 183], and a partial list of references to date includes [328, 176, 177, 180, 179, 170, 245, 172, 247, 100, 446, 447, 316, 87, 451, 452, 448, 449, 450, 305, 364, 42].
Another approach is Cauchy-characteristic matching (CCM) [99, 392, 401, 143, 148, 53], which combines a Cauchy approach in the strong-field regime (thereby avoiding the problems that the presence of caustics would cause in characteristic evolutions) with a characteristic one in the wave zone. Data from the Cauchy evolution is used as inner boundary conditions for the characteristic one and, vice versa, the latter provides outer boundary conditions for the Cauchy IBVP. An understanding of the Cauchy IBVP is still a prerequisite. CCM is reviewed in [432]. A related idea is Cauchy-perturbative matching [455, 356, 4, 370], where the Cauchy code is instead coupled to one solving gauge-invariant perturbations of Schwarzschild black holes or flat spacetime. The multipole decomposition in the Regge-Wheeler-Zerilli equations [347, 453, 376, 294, 307] implies that the resulting equations are 1+1 dimensional, so that the region of integration can be extended to very large distances from the source. As in CCM, an understanding of the IBVP for the Cauchy sector is still a prerequisite.
One way of dealing with the ambiguity of extracting gravitational waves from Cauchy evolutions at finite radii is through extrapolation procedures; see, for example, [72, 331] for some approaches and quantifications of their accuracy. Another approach is Cauchy-characteristic extraction (CCE) [350, 37, 349, 32, 34, 54]. In CCE a Cauchy IBVP is solved, and the numerical data on a world tube is used to provide inner boundary conditions for a characteristic evolution that “transports” the data to null infinity. The difference with CCM is that in CCE there is no “feedback” from the characteristic evolution to the Cauchy one, and the extraction is done as a post-processing step.
Numerical Stability
In the previous sections we have discussed continuum initial-value and initial-boundary value problems. In this section we start with the study of the discretization of such problems. In the same way that a PDE can have a unique solution yet be ill posed^{Footnote 27}, a numerical scheme can be consistent yet not convergent due to the unbounded growth of small perturbations as resolution is increased. The definition of numerical stability is the discrete version of well-posedness. One wants to ensure that small initial perturbations in the numerical solution, which naturally appear due to discretization errors and finite precision, remain bounded for all resolutions at any given time t > 0. By the classical Lax-Richtmyer theorem [276], in the linear case this property, combined with consistency of the scheme, is equivalent to convergence of the numerical solution, which then approaches the continuum one as resolution is increased (at least within exact arithmetic). Convergence of a scheme is in general difficult to prove directly, especially because the exact solution is in general not known. Instead, one shows stability.
The different definitions of numerical stability follow those of well-posedness, with the L^{2} norm in space replaced by a discrete one, which is usually motivated by the spatial approximation. For example, discrete norms under which the summation by parts property holds are natural in the context of some finite difference approximations and collocation spectral methods (see Sections 8 and 9).
We start with a general discussion of some aspects of stability, along with explicit analyses of simple, low-order schemes for test models. There follows a discussion of different variations of the von Neumann condition, including an eigenvalue version, which can be used in practice to analyze necessary conditions for IBVPs. Next, we discuss a rather general stability approach for the method of lines, the notion of time-stability, and Runge-Kutta methods, and we close the section with some references to other approaches not covered here, as well as some discussion in the context of numerical relativity.
Definitions and examples
Consider a well-posed linear initial-value problem (see Definition 3)
Definition 11. An approximation-discretization to the Cauchy problem ( 7.1 , 7.2 ) is numerically stable if there is some discrete norm in space ∥ · ∥_{d} and constants K_{ d }, α_{ d } such that the corresponding approximation v satisfies
$$\Vert v(t,\cdot) \Vert_{\rm{d}} \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}t}}\Vert f \Vert_{\rm{d}}$$(7.3)
for high enough resolutions, smooth initial data f, and t ≥ 0.
Note:

The previous definition applies both to the semidiscrete case (where space but not time is discretized) as well as to the fully-discrete one. In the latter case, Eq. (7.3) is to be interpreted at fixed time. For example, if the timestep is constant,
$${t_k} = k\Delta t,\qquad k = 0,1,2 \ldots$$(7.4)then Eq. (7.3) needs to hold for fixed t_{ k } and arbitrarily large k. In other words, the solution is allowed to grow with time, but not with the number of timesteps at fixed time when resolution is increased.

The norm ∥ · ∥_{d} in general depends on the spatial approximation, and in Sections 8 and 9 we discuss some definitions for the finite difference and spectral cases.

From Definition 11, one can see that an ill-posed problem cannot have a stable discretization, since otherwise one could take the continuum limit in (7.3) and conclude that the original system was well posed, a contradiction.

As in the continuum, Eq. (7.3) implies uniqueness of the numerical solution v.

In Section 3 we discussed that if, in a well-posed homogeneous Cauchy problem, a forcing term is added to Eq. (7.1),
$${u_t}(t,x) = P(t,x,\partial /\partial x)u(t,x)\qquad \mapsto \qquad {u_t}(t,x) = P(t,x,\partial /\partial x)u(t,x) + F(t,x),$$(7.5)then the new problem admits another estimate, related to the original one via Duhamel’s formula, Eq. (3.23). A similar concept holds at the semidiscrete level, and the discrete estimates change accordingly (in the fully-discrete case the integral in time is replaced by a discrete sum),
$$\Vert v(t,\cdot) \Vert_{\rm{d}} \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}t}}\Vert f \Vert_{\rm{d}}\qquad \mapsto \qquad \Vert v(t,\cdot) \Vert_{\rm{d}} \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}t}} \left(\Vert f \Vert_{\rm{d}} + \int\limits_0^t \Vert F(s, \cdot) \Vert_{\rm{d}} ds \right).$$(7.6)In other words, the addition of a lower-order term does not affect numerical stability, and without loss of generality one can restrict stability analyses to the homogeneous case.

The difference w:= u − v between the exact solution and its numerical approximation satisfies an equation analogous to (7.5), where F is related to the truncation error of the approximation. If the scheme is numerically stable, then in the linear and semidiscrete cases Eq. (7.6) implies
$${\Vert {w(t,\cdot)} \Vert_{\rm{d}}} \leq{K_{\rm{d}}}{e^{{\alpha _{\rm{d}}}t}}\int\limits_0^t {{{\Vert {F(s,\cdot)} \Vert}_{\rm{d}}}} ds\,.$$(7.7)If the approximation is consistent, the truncation error converges to zero as resolution is increased, and Equation (7.7) implies that so does the norm of the error ∥w(t, ·)∥_{d}. That is, stability implies convergence. The converse is also true, and this equivalence between convergence and stability is the celebrated Lax-Richtmyer theorem. The equivalence also holds in the fully-discrete case.

In the quasilinear case, one follows the principle of linearization, as described in Section 3.3. One linearizes the problem and constructs a stable numerical scheme for the linearization. The expectation, then, is that the scheme also converges for the nonlinear problem. For particular problems and discretizations this expectation can be rigorously proven (see, for example, [259]).
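The stability-implies-convergence statement can be checked directly on a model problem. The following sketch is our own illustration (the function name, grid sizes, and parameters are our choices, not from the text): it evolves the advection equation u_t = a u_x with a first-order upwind spatial difference and forward Euler in time at a fixed CFL factor below the stability limit, and measures the discrete L^2 error against the exact solution u(t, x) = f(x + at).

```python
import numpy as np

def upwind_solve(N, a=1.0, cfl=0.8, T=1.0):
    """First-order upwind discretization of u_t = a u_x (a > 0) on the
    periodic domain [0, 2*pi), marched with forward Euler at fixed CFL
    factor; returns the discrete L2 error against the exact solution
    u(t, x) = f(x + a*t)."""
    dx = 2.0 * np.pi / N
    dt = cfl * dx / a
    steps = int(round(T / dt))
    x = dx * np.arange(N)
    f = np.sin
    v = f(x)
    for _ in range(steps):
        # v_j^{k+1} = v_j^k + a*(dt/dx)*(v_{j+1}^k - v_j^k), periodic wrap
        v = v + (a * dt / dx) * (np.roll(v, -1) - v)
    exact = f(x + a * dt * steps)
    return np.sqrt(dx * np.sum((v - exact) ** 2))

# halving the grid spacing roughly halves the error (first-order convergence)
errors = [upwind_solve(N) for N in (100, 200, 400)]
```

Since the scheme is stable for this CFL factor and consistent to first order, the error decreases roughly linearly with the grid spacing, as the Lax-Richtmyer theorem predicts.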
From here on {x_{ j }, t_{ k }} denotes some discretization of space and time. This includes both finite difference and spectral collocation methods, which are the ones discussed in Sections 8 and 9, respectively. In addition, we use the shorthand notation
In order to gain some intuition into the general problem of numerical stability we start with some examples of simple, low-order approximations for a test problem. Consider uniform grids both in space and time
and the advection equation,
$${u_t}(t,x) = a\,{u_x}(t,x),$$(7.10)
on a periodic domain with 2π = N Δx, and smooth periodic initial data. Then the solution u can be represented by a Fourier series:
where
and the stability of the following schemes can be analyzed in Fourier space.
Example 33. The one-sided Euler scheme.
Eq. (7.10) is discretized with a one-sided FD approximation for the spatial derivative and evolved in time with the forward Euler scheme,
In Fourier space the approximation becomes
where
is called the amplification factor and
the Courant-Friedrichs-Lewy (CFL) factor.
Using Parseval’s identity, we find
and therefore, we see that the inequality (7.3) can only hold for all k if
for a > 0, this is the case if and only if the CFL factor satisfies
and in this case the well-posedness estimate (7.3) holds with K_{ d } = 1 and α_{ d } = 0. The upper bound in condition (7.19) for this example is known as the CFL limit, and (7.18) as the von Neumann condition. If a = 0, then \(\hat q(\omega) = 1\), while for a < 0 the scheme is unconditionally unstable, even though the underlying continuum problem is well posed.
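The von Neumann analysis of this example is easy to verify numerically. The sketch below (our illustration; the helper name and frequency sampling are ours) scans the modulus of the amplification factor q̂(ω) = 1 + aλ(e^{iωΔx} − 1) over all frequencies: the maximum stays at or below one exactly for 0 ≤ aλ ≤ 1, and exceeds one beyond the CFL limit or for a < 0.

```python
import numpy as np

def max_amplification(a_lambda, n_theta=1000):
    """Maximum over theta = omega*dx of |q(omega)| for the one-sided
    Euler scheme, with q = 1 + a*lambda*(exp(i*theta) - 1).
    One can check |q|^2 = 1 - 2*a*lambda*(1 - cos(theta))*(1 - a*lambda),
    so |q| <= 1 for all theta iff 0 <= a*lambda <= 1."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta)
    q = 1.0 + a_lambda * (np.exp(1j * theta) - 1.0)
    return np.max(np.abs(q))
```

For example, `max_amplification(0.5)` stays below one, while `max_amplification(1.2)` and `max_amplification(-0.5)` exceed one, reflecting the CFL limit and the instability for a < 0.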
Next we consider a scheme very similar to the previous one, but which turns out to be unconditionally unstable for a ≠ 0, regardless of the direction of propagation.
Example 34. A centered Euler scheme.
Consider first the semidiscrete approximation to Eq. (7.10),
it is easy to check that it is stable for all values of Δx. Next discretize time through an Euler scheme, leading to
The solution again has the form given by Eq. (7.14), now with
At fixed time t_{ k }, the norm of the solution to the fully-discrete approximation (7.21) for arbitrarily small initial data with ωΔx ∉ πℤ grows without bound as the timestep decreases.
The semidiscrete centered approximation (7.20) and the fully-discrete centered Euler scheme (7.21) constitute the simplest example of an approximation that is not fully-discrete stable, even though its semidiscrete version is. This is related to the fact that the Euler time integration is not locally stable, as discussed in Section 7.3.2.
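The unbounded growth at fixed time can be made concrete: the amplification factor of the centered Euler scheme is q̂ = 1 + iaλ sin(ωΔx), so |q̂| > 1 whenever aλ sin(ωΔx) ≠ 0, and at fixed t = kΔt with λ fixed the factor |q̂|^k diverges as Δt → 0. A small sketch (ours; parameter values are illustrative):

```python
import numpy as np

def growth_at_fixed_time(a=1.0, lam=0.5, t=1.0, theta=np.pi / 2,
                         dts=(0.1, 0.05, 0.025)):
    """|q|^k at fixed time t = k*dt for the centered Euler scheme,
    q = 1 + i*a*lam*sin(theta). Since |q| > 1 is a constant for fixed
    lam and theta, |q|^k grows without bound as dt -> 0."""
    out = []
    for dt in dts:
        k = int(round(t / dt))
        q = abs(1.0 + 1j * a * lam * np.sin(theta))
        out.append(q ** k)
    return out

growth = growth_at_fixed_time()
```

Each halving of Δt squares the accumulated amplification at fixed time, which is precisely the failure of the estimate (7.3).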
The previous two examples were one-step methods, where v^{k+1} can be computed in terms of v^{k}. The following is an example of a two-step method.
Example 35. Leapfrog.
A way to stabilize the centered Euler scheme is to approximate the time derivative by a centered difference instead of a forward, one-sided operator:
Enlarging the system by introducing
it can be cast into the one-step method
By a similar procedure, a general multi-step method can always be reduced to a one-step one. Therefore, in the stability results below we can assume without loss of generality that the schemes are one-step.
In the above example the amplification matrix \(\hat Q(\omega)\) can be diagonalized through a transformation that is uniformly bounded:
with μ_{ ± } = z ± (1 + z^{2})^{1/2}, z:= iaλ sin (ωΔx), and
The eigenvalues μ_{±} are of unit modulus, ∣μ_{ ± }∣ = 1. In addition, the norms of \(\hat T(\omega)\) and its inverse are
Therefore, the condition number of \(\hat T(\omega)\) can be bounded for all ω:
provided that
and it follows that the Leapfrog scheme is stable under the condition (7.30).
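The unit modulus of μ_± below the CFL limit, and its loss beyond it, can be checked directly. In this sketch (our illustration; the helper name is ours) the eigenvalues μ_± = z ± (1 + z²)^{1/2} with z = iaλ sin(ωΔx) are evaluated for |aλ| < 1 and |aλ| > 1.

```python
import numpy as np

def leapfrog_eigs(a_lambda, theta):
    """Eigenvalues mu_pm = z +/- (1 + z^2)^{1/2}, z = i*a*lambda*sin(theta),
    of the one-step (enlarged) leapfrog amplification matrix. The root
    1 + z^2 = 1 - (a*lambda*sin(theta))^2 is real and non-negative
    precisely when |a*lambda*sin(theta)| <= 1, in which case |mu_pm| = 1."""
    z = 1j * a_lambda * np.sin(theta)
    root = np.sqrt(1.0 + z ** 2)
    return z + root, z - root
```

For aλ = 0.9 both eigenvalues stay on the unit circle for all frequencies, while for aλ = 1.5 one of them leaves it at ωΔx = π/2, signaling instability.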
The previous examples were explicit methods, where the solution \(\upsilon _j^{k + 1}\) (or \(w_j^{k + 1}\)) can be explicitly computed from the one at the previous timestep, without inverting any matrices.
Example 36. Crank-Nicholson.
Approximating Eq. (7.10) by
with
defines an implicit method. Fourier transform leads to
The expressions inside the square brackets on both sides are different from zero and have equal magnitude. As a consequence, the amplification factor in this case satisfies
and the scheme is unconditionally stable at the expense of having to invert a matrix to advance the solution in time.
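In Fourier space the unconditional stability is a one-line computation. The sketch below (ours; the helper name is an assumption) evaluates the amplification factor q̂ = (1 + z/2)/(1 − z/2) with z = iaλ sin(ωΔx); since z is purely imaginary, numerator and denominator are complex conjugates and |q̂| = 1 for every CFL factor.

```python
import numpy as np

def cn_amplification(a_lambda, theta):
    """Amplification factor of the Crank-Nicholson scheme for u_t = a u_x:
    q = (1 + z/2) / (1 - z/2), with z = i*a*lambda*sin(theta).
    Numerator and denominator are complex conjugates, hence |q| = 1."""
    z = 1j * a_lambda * np.sin(theta)
    return (1.0 + z / 2.0) / (1.0 - z / 2.0)
```

Note that |q̂| = 1 holds even for arbitrarily large aλ, which is the unconditional stability claimed above.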
Example 37. Iterated Crank-Nicholson.
Approximating the Crank-Nicholson scheme through an iterative scheme with a fixed number of iterations is usually referred to as the iterated Crank-Nicholson (ICN) method. For Eq. (7.10) it proceeds as follows [414]:

First iteration: an intermediate variable ^{(1)}ṽ is calculated using a second-order-in-space centered difference (7.32) and an Euler, first-order forward-in-time approximation,
$${1 \over {\Delta t}}\left({{}^{(1)}\tilde v_j^{n + 1}  v_j^n} \right) = {D_0}\,v_j^n\,.$$(7.35)Next, a second intermediate variable is computed through averaging,
$$^{(1)}\bar v_j^{n + 1/2} = {1 \over 2}\left({{}^{(1)}\tilde v_j^{n + 1} + v_j^n} \right).$$(7.36)The full time step for this first iteration is
$${1 \over {\Delta t}}\left({v_j^{n + 1}  v_j^n} \right) = {D_0}{\,^{(1)}}\bar v_j^{n + 1/2}.$$(7.37) 
Second iteration: it follows the same steps. Namely, the intermediate variables
$$\begin{array}{*{20}c} {{1 \over {\Delta t}}\left({{}^{(2)}\tilde v_j^{n + 1}  v_j^n} \right) = {D_0}{\,^{(1)}}\bar v_j^{n + 1/2},\quad \quad \quad \quad \quad \quad} \\ {{}^{(2)}\bar v_j^{n + 1/2} = {1 \over 2}\left({{}^{(2)}\tilde v_j^{n + 1} + v_j^n} \right),} \\ \end{array}$$are computed, and the full step is obtained from
$${1 \over {\Delta t}}\left({v_j^{n + 1}  v_j^n} \right) = {D_0}{\,^{(2)}}\bar v_j^{n + 1/2}\,.$$(7.38) 
Further iterations proceed in the same way.
The resulting discretization is numerically stable for λ ≤ 2/a and p = 2, 3, 6, 7, 10, 11, … iterations, and unconditionally unstable otherwise. In the limit p → ∞ the ICN scheme becomes the implicit, unconditionally-stable Crank-Nicholson scheme of the previous example. For any fixed number of iterations, though, the method is explicit and stability is contingent on the CFL condition λ ≤ 2/a. The method is unconditionally unstable for p = 4, 5, 8, 9, 12, 13, … because the convergence of the amplification factor to unit modulus [cf. Eq. (7.34)] as p increases is not monotonic. See [414] for details and [380] for a similar analysis for “theta” schemes.
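The stability pattern in the number of iterations can be reproduced by iterating the scheme in Fourier space. The sketch below (ours; helper names and the frequency sampling are our choices) builds the amplification factor of p-iteration ICN from the recurrence (7.35)-(7.38) and scans its modulus over frequencies for a CFL factor below the limit aλ = 2: p = 2, 3 come out stable, while p = 4, 5 exceed unit modulus.

```python
import numpy as np

def icn_amplification(x, p):
    """Amplification factor of ICN with p iterations for u_t = a u_x.
    Here mu = i*x, with x = a*lambda*sin(omega*dx), is the Fourier symbol
    of dt times a*D0; the recurrence mirrors Eqs. (7.35)-(7.38)."""
    mu = 1j * x
    vbar = 1.0 + mu / 2.0            # first averaged intermediate variable
    for _ in range(p - 1):           # each further iteration
        vtilde = 1.0 + mu * vbar
        vbar = 0.5 * (vtilde + 1.0)
    return 1.0 + mu * vbar           # full step

def max_modulus(a_lambda, p, n=2001):
    """Maximum of |q| over frequencies for p-iteration ICN."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    return np.max(np.abs(icn_amplification(a_lambda * np.sin(theta), p)))
```

For p = 2 one can check analytically that |q|² = 1 − (x⁴/16)(4 − x²) with x = aλ sin(ωΔx), so |q| ≤ 1 exactly when aλ ≤ 2; the next pair p = 4, 5 flips the sign of the leading correction and is unstable for any λ > 0.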
Similar definitions to that of Definition 11 are introduced for the IBVP. For simplicity we explicitly discuss the semidiscrete case. In analogy with the definition of a strongly well-posed IBVP (Definition 9) one has
Definition 12. A semidiscrete approximation to the linearized version of the IBVP ( 5.1 , 5.2 , 5.3 ) is numerically stable if there are discrete norms ∥ · ∥_{d} at Σ and ∥ · ∥_{∂, d} at ∂Σ and constants K_{d} = K_{d}(T) and ε_{d} = ε_{d}(T) ≥ 0 such that for high enough resolution the corresponding approximation v satisfies
for all t ∈ [0, T]. If the constant ε_{ d } can be chosen strictly positive, the problem is called strongly stable.
In addition, the semidiscrete versions of Definitions 6 and 7 lead to the concepts of strong stability in the generalized sense and boundary stability, respectively, which we do not write down explicitly here. The definitions for the fully-discrete case are similar, with time integrals such as those in Eq. (7.39) replaced by discrete sums.
The von Neumann condition
Consider a discretization for a linear system with variable, time-independent coefficients such that
where v^{k} denotes the grid function \({{\rm{v}}^k} = \{\upsilon _j^k:j = 0,1, \ldots, N\}\) and Q is called the amplification matrix. We assume that Q is also time-independent. Then
and the approximation (7.40, 7.41) is stable if and only if there are constants K_{d} and α_{d} such that
for all k = 0, 1, 2, … and high enough resolutions.
In practice, condition (7.43) is not very manageable as a way of determining whether a given scheme is stable, since it involves computing the norm of the power of a matrix. A simpler condition, based on the eigenvalues {q_{ i }} of Q as opposed to the norm of Q^{k}, is von Neumann’s:
This condition is necessary for numerical stability: if q_{ i } is an eigenvalue of Q, \(q_i^k\) is an eigenvalue of Q^{k} and
That is,
which, in order to be valid for all k, implies Eq. (7.44).
As already mentioned, in order to analyze numerical stability, one can drop lower-order terms. Doing so typically leads to Q depending on Δt and Δx only through a quotient (the CFL factor) of the form (with p = 1 for hyperbolic equations)
then for Eq. (7.44) to hold for all Δt > 0 while keeping the CFL factor fixed (in particular, for small Δt > 0), the following condition has to be satisfied:
and one has a stronger version of the von Neumann condition, which is the one encountered in Example 33; see Eq. (7.18).
The periodic, scalar case
We return to the periodic scalar case, such as the schemes discussed in Examples 33, 34, 35, and 36, with some more generality. Suppose, in addition to the linearity and time-independence assumptions on the continuum problem, that the initial data and discretization (7.40, 7.41) are periodic on the interval [0, 2π]. Through a Fourier expansion we can write the grid function f = (f(x_{0}), f(x_{1}), …, f(x_{ N })) corresponding to the initial data as
where \({{\rm{e}}^{i\omega}} = ({e^{i\omega {x_0}}},{e^{i\omega {x_1}}}, \ldots, {e^{i\omega {x_N}}})\). The approximation becomes
Assuming that Q is diagonal in the basis e^{iω}, such that
as is often the case, we obtain, using Parseval’s identity,
If
for some constant α_{d}, then
and stability follows. Conversely, if the scheme is stable and (7.52) holds, (7.54) has to be satisfied. Take
then
or
for arbitrary k, which implies (7.54). Therefore, provided the condition (7.52) holds, stability is equivalent to the requirement (7.54) on the eigenvalues of Q.
The general, linear, time-independent case
However, as mentioned, the von Neumann condition is not sufficient for stability, neither in its original form (7.44) nor in its strong form (7.49), unless, for example, Q can be uniformly diagonalized. This means that there exists a matrix T such that
is diagonal and the condition number of T with respect to the same norm,
is bounded
for some constant K_{d} independent of resolution (an example is that of Q being normal, QQ* = Q*Q). In that case
and
Next, we discuss two examples where the von Neumann condition is satisfied but the resulting scheme is unconditionally unstable. The first one involves a well-posed underlying continuum problem and the second an ill-posed one.
Example 38. An unstable discretization, which satisfies the von Neumann condition for a trivially well-posed problem [228].
Consider the following system on a periodic domain with periodic initial data
discretized as
with D_{0} given by Eq. (7.32). The Fourier transform of the amplification matrix and its kth power are
The von Neumann condition is satisfied, since all eigenvalues are equal to 1. However, the discretization is unstable for any value of λ > 0. For the unit vector e = (0, 1)^{T}, for instance, we have
which grows without bound as k is increased for sin (ωΔx) ≠ 0.
The von Neumann condition is clearly not sufficient for stability in this example, because the amplification matrix not only cannot be uniformly diagonalized, but cannot be diagonalized at all, due to the Jordan block structure in (7.65).
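The failure mode is easy to exhibit numerically. In the sketch below (our illustration, assuming the amplification matrix in (7.65) has the form Q̂ = I + iλ sin(ωΔx) J with the Jordan block J = [[0, 1], [0, 0]], consistent with the growth stated above), both eigenvalues of Q̂ equal 1, yet ‖Q̂^k e‖ for e = (0, 1)^T grows linearly in the number of steps k.

```python
import numpy as np

def qhat_power_norm(k, lam=1.0, theta=np.pi / 2):
    """||Qhat^k e|| for Qhat = I + i*lam*sin(theta)*J, J = [[0,1],[0,0]],
    e = (0,1)^T. Since J^2 = 0, Qhat^k = I + i*k*lam*sin(theta)*J, so
    the norm grows linearly in k even though both eigenvalues are 1."""
    Q = np.array([[1.0, 1j * lam * np.sin(theta)],
                  [0.0, 1.0]])
    e = np.array([0.0, 1.0], dtype=complex)
    return np.linalg.norm(np.linalg.matrix_power(Q, k) @ e)
```

The eigenvalue spectrum is blind to this polynomial-in-time growth, which is exactly why the von Neumann condition alone is only necessary.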
Example 39. Ill-posed problems are unconditionally unstable, even if they satisfy the von Neumann condition. The following example is drawn from [107].
Consider the periodic Cauchy problem
where u = (u_{1}, u_{2})^{T}, A is a 2 × 2 constant matrix, and the following discretization. The right-hand side of the equation is approximated by a second-order centered derivative plus higher (third) order numerical dissipation (see Section 8.5),
where I is the 2 × 2 identity matrix, ϵ ≥ 0 is an arbitrary parameter regulating the strength of the numerical dissipation, and D_{+}, D_{−} are first-order forward and backward approximations of d/dx,
The resulting system of ordinary differential equations is marched in time (method of lines, discussed in Section 7.3) through an explicit method: the iterated Crank-Nicholson (ICN) one with an arbitrary but fixed number of iterations p (see Example 37).
If the matrix A is diagonalizable, as in the scalar case of Example 37, the resulting discretization is numerically stable for λ ≤ 2/a and p = 2, 3, 6, 7, 10, 11,…, even without dissipation. On the other hand, if the system (7.68) is weakly hyperbolic, as when the principal part has a Jordan block,
one can expect on general grounds that any discretization will be unconditionally unstable. As an illustration, this was explicitly shown in [107] for the above scheme and variations of it. In Fourier space the amplification matrix and its kth power take the form
with coefficients c, b depending on {a, λ, ωΔx, ϵ} such that for an arbitrarily small initial perturbation at just one grid point,
the solution satisfies
and is therefore unstable regardless of the values of λ and ϵ. On the other hand, the von Neumann condition ∣a∣ ≤ 1 is satisfied if and only if
Notice that, as expected, the addition of numerical dissipation cannot stabilize the scheme, regardless of its strength. Furthermore, adding dissipation with a strength parameter ϵ > 1/(8λ) violates the von Neumann condition (7.75), and the growth rate of the numerical instability worsens.
The method of lines
A convenient approach, both from an implementation point of view and for analyzing numerical stability or constructing numerically stable schemes, is to decouple the spatial and time discretizations. That is, one first analyzes stability under some spatial approximation, assuming time to be continuous (semidiscrete stability), and then finds conditions under which time integrators preserve stability in the fully-discrete case.
In general, this method provides only a subclass of numerically stable approximations. However, it is a very practical one, since spatial and time stability are analyzed separately; stable semidiscrete approximations and appropriate time integrators can then be combined at will, leading to modularity in implementations.
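This modularity can be sketched as follows: the spatial discretization is packaged as a right-hand-side function, which can then be handed to any ODE integrator. The names and the periodic advection example u_t = −a u_x are illustrative assumptions:

```python
import numpy as np

def advect_rhs(u, dx, a=1.0):
    """Semidiscrete operator: second-order centered difference for u_t = -a u_x."""
    return -a * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

def euler_step(u, dt, rhs):
    """Forward Euler time step."""
    return u + dt * rhs(u)

def heun_step(u, dt, rhs):
    """Second-order Runge-Kutta (Heun) time step."""
    k1 = rhs(u)
    k2 = rhs(u + dt * k1)
    return u + 0.5 * dt * (k1 + k2)

# The same semidiscrete operator can be combined with either stepper, e.g.:
# u_new = heun_step(u, dt, lambda v: advect_rhs(v, dx))
```

Swapping the time integrator requires no change to the spatial discretization, which is precisely the modularity referred to above.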
Semidiscrete stability
Consider the approximation
for the initial value problem (7.1, 7.2). The scheme is semidiscrete stable if the solution to Eqs. (7.76, 7.77) satisfies the estimate (7.3).
In the timeindependent case, the solution to (7.76, 7.77) is
and stability holds if and only if there are constants K_{d} and α_{d} such that
The von Neumann condition now states that there exists a constant α_{d}, independent of the spatial resolution (i.e., of the size of the matrix L), such that the eigenvalues ℓ_{ i } of L satisfy
This is the discrete-in-space version of the Petrovskii condition; see Lemma 2. As already pointed out, it is not always a sufficient condition for stability, unless L can be uniformly diagonalized. Also, if the lower-order terms are dropped from the analysis, then
with \(\tilde{L}\) independent of Δx, and in order for (7.80) to hold for all Δx (in particular, small Δx),
which is a stronger version of the semidiscrete von Neumann condition.
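As a concrete check of the strong condition, the centered-difference advection operator on a periodic grid is skew-symmetric, so its eigenvalues are purely imaginary and Re ℓ_i ≤ 0 holds at every resolution (a sketch with assumed parameters):

```python
import numpy as np

# Centered-difference approximation of -a d/dx on a periodic grid,
# assembled as an N x N matrix. The matrix is skew-symmetric, so its
# eigenvalues are purely imaginary and the semidiscrete von Neumann
# condition holds with alpha_d = 0, independent of N.
a, N = 1.0, 64
dx = 1.0 / N
L = np.zeros((N, N))
for j in range(N):
    L[j, (j + 1) % N] = -a / (2 * dx)  # coefficient of u_{j+1}
    L[j, (j - 1) % N] = +a / (2 * dx)  # coefficient of u_{j-1}

eigs = np.linalg.eigvals(L)
print(np.max(eigs.real))  # zero up to round-off
```

Repeating this for several N shows the bound is uniform in the resolution, which is the content of the strong form of the condition.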
Semidiscrete stability also follows if L is semibounded, that is, if there is a constant α_{d}, independent of the resolution, such that (cf. Eq. (3.25) in Theorem 1)
In that case, the semidiscrete approximation (7.76, 7.77) is numerically stable, as follows immediately from the energy estimate argument,
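Schematically, the argument runs as follows (a sketch in an unspecified discrete scalar product (·,·) with norm ‖·‖, assuming L is time-independent and semibounded with constant α_d):

```latex
\frac{d}{dt}\|u\|^{2}
  = (u, Lu) + (Lu, u)
  = \bigl(u, (L + L^{*})\,u\bigr)
  \le 2\alpha_d \|u\|^{2}
\quad\Longrightarrow\quad
\|u(t)\| \le e^{\alpha_d t}\,\|u(0)\| ,
```

which is an estimate of the form (7.79) with K_d = 1.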
For a large class of problems, which can be shown to be well posed using the energy estimate, one can construct semibounded operators L by satisfying the discrete counterpart of the properties of the differential operator P in Eq. (7.1) that were used to show wellposedness. This leads to the construction of spatial differential approximations satisfying the summation by parts property, discussed in Sections 8.3 and 9.4.
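As a preview of Sections 8.3 and 9.4, the simplest summation-by-parts operator can be verified in a few lines: with H = Δx · diag(1/2, 1, …, 1, 1/2), and D centered in the interior and one-sided at the boundaries, the matrix Q = HD satisfies Q + Qᵀ = diag(−1, 0, …, 0, 1), the discrete analogue of integration by parts (a sketch with assumed grid parameters):

```python
import numpy as np

# Standard second-order SBP first-derivative operator on [0, 1].
N, dx = 11, 0.1
H = dx * np.diag([0.5] + [1.0] * (N - 2) + [0.5])  # diagonal SBP norm

D = np.zeros((N, N))
D[0, 0], D[0, 1] = -1 / dx, 1 / dx            # one-sided at the left boundary
D[-1, -2], D[-1, -1] = -1 / dx, 1 / dx        # one-sided at the right boundary
for j in range(1, N - 1):                     # centered in the interior
    D[j, j - 1], D[j, j + 1] = -1 / (2 * dx), 1 / (2 * dx)

Q = H @ D
B = Q + Q.T  # should equal diag(-1, 0, ..., 0, 1): boundary terms only
```

The identity Q + Qᵀ = B is exactly what allows energy estimates for the semidiscrete problem to mimic the continuum integration-by-parts argument, with all growth confined to boundary terms.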
Fully-discrete stability
Now we consider explicit time integration for systems of the form (7.76, 7.77) with timeindependent coefficients. That is, if there are N points in space we consider the system of ordinary differential equations (ODEs)
where L is an N × N matrix.
In the previous Section 7.3.1 we derived necessary conditions for semidiscrete stability of such systems, namely the von Neumann condition in its weak (7.80) and strong (7.82) forms. Below we derive necessary conditions for fully-discrete stability for a large class of time integration methods, including Runge-Kutta ones. Upon time discretization, the stability analysis of (7.85) requires introducing the notion of the region of absolute stability of the ODE solver. Part of the subtlety in the stability analysis of fully-discrete systems is that the size N of the system of ODEs is not fixed; instead, it depends on the spatial resolution. However, the necessary conditions obtained for fully-discrete stability will also turn out to be sufficient when combined with additional assumptions. We will also discuss sufficient conditions for fully-discrete stability using the energy method.
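For illustration, the region of absolute stability of the classical fourth-order Runge-Kutta method applied to the scalar test equation u' = ℓu is {z = Δt ℓ : |R(z)| ≤ 1}, where R is the degree-four Taylor polynomial of the exponential. On the imaginary axis (the relevant case for skew-symmetric L) the stability interval is |z| ≤ 2√2 ≈ 2.83:

```python
def rk4_amplification(z):
    """Amplification factor of classical RK4 for u' = l*u with z = dt*l.

    One RK4 step maps u -> R(z) * u, where R is the degree-4 Taylor
    polynomial of exp(z); the scheme is absolutely stable iff |R(z)| <= 1.
    """
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# On the imaginary axis RK4 is stable up to |z| = 2*sqrt(2) ~ 2.828:
print(abs(rk4_amplification(2.8j)) <= 1)  # True
print(abs(rk4_amplification(2.9j)) <= 1)  # False
```

Combined with the spectrum of L from the semidiscrete analysis, this yields the familiar CFL-type restriction: every scaled eigenvalue Δt ℓ_i must lie inside the stability region.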
Necessary conditions. Recall the von Neumann condition for the semidiscrete system (7.85): if ℓ_{ i } is an eigenvalue of L, a necessary condition for semidiscrete stability is [cf. Eq.