Algorithms for overcoming the curse of dimensionality for certain Hamilton–Jacobi equations arising in control theory and elsewhere
 Jérôme Darbon^{1} and
 Stanley Osher^{2}
https://doi.org/10.1186/s40687-016-0068-7
© The Author(s) 2016
 Received: 10 October 2015
 Accepted: 10 May 2016
 Published: 1 September 2016
Abstract
It is well known that time-dependent Hamilton–Jacobi–Isaacs partial differential equations (HJ PDEs) play an important role in analyzing continuous dynamic games and control theory problems. An important tool for such problems when they involve geometric motion is the level set method (Osher and Sethian in J Comput Phys 79(1):12–49, 1988). This was first used for reachability problems in Mitchell et al. (IEEE Trans Autom Control 50(171):947–957, 2005) and Mitchell and Tomlin (J Sci Comput 19(1–3):323–346, 2003). The cost of these algorithms, and in fact of all PDE numerical approximations, is exponential in the space dimension and time. In Darbon (SIAM J Imaging Sci 8(4):2268–2293, 2015), some connections between HJ PDEs and convex optimization in many dimensions are presented. In this work, we propose and test methods for solving a large class of HJ PDEs relevant to optimal control problems without the use of grids or numerical approximations. Rather, we use the classical Hopf formulas for solving initial value problems for HJ PDEs (Hopf in J Math Mech 14:951–973, 1965). We have noticed that if the Hamiltonian is convex and positively homogeneous of degree one (as it is for all geometrically based level set motion and for control and differential game problems), then very fast methods exist to solve the resulting optimization problem. This is very much related to fast methods for solving problems in compressive sensing based on \(\ell _1\) optimization (Goldstein and Osher in SIAM J Imaging Sci 2(2):323–343, 2009; Yin et al. in SIAM J Imaging Sci 1(1):143–168, 2008). We seem to obtain methods which are polynomial in the dimension. Our algorithm is very fast, requires very low memory and is totally parallelizable. We can evaluate the solution and its gradient in very high dimensions at \(10^{-4}\)–\(10^{-8}\) s per evaluation on a laptop.
We carefully explain how to compute numerically the optimal control from the numerical solution of the associated initial value HJ PDE for a class of optimal control problems. We show that our algorithms compute all the quantities needed to obtain the controller easily. In addition, as a step often needed in this procedure, we have developed a new and equally fast way to find, in very high dimensions, the closest point y, lying in the union of a finite number of compact convex sets \(\Omega \), to any point x exterior to \(\Omega \). We can also compute the distance to these sets much faster than Dijkstra-type "fast methods," e.g., Dijkstra (Numer Math 1:269–271, 1959). The term "curse of dimensionality" was coined by Bellman (Adaptive control processes, a guided tour. Princeton University Press, Princeton, 1961; Dynamic programming. Princeton University Press, Princeton, 1957) when considering problems in dynamic optimization.
1 Introduction to Hopf formulas, HJ PDEs and level set evolutions
We briefly introduce Hamilton–Jacobi equations with initial data and the Hopf formulas to represent the solution. We give some examples to show the potential of our approach, including examples to perform level set evolutions.
We wish to compute the viscosity solution [11, 12] for a given \(x \in \mathbb {R}^n\) and \(t > 0\).
Using numerical approximations is essentially impossible for \(n \ge 4\). The complexity of a finite difference scheme is exponential in n because the number of grid points is also exponential in n. This has been found to be impossible, even with the use of sophisticated (e.g., ENO, WENO, DG) methods [31, 42, 52]. High-order accuracy is no cure for this curse of dimensionality [3, 4].
We propose and test a new approach, borrowing ideas from convex optimization, which arise in the \(\ell _1\)-regularized convex optimization [25, 51] used in compressive sensing [6, 17]. It has been shown experimentally that these \(\ell _1\)-based methods converge quickly when we use Bregman and split Bregman iterative methods. These are essentially the same as Augmented Lagrangian methods [26] and Alternating Direction Method of Multipliers methods [24]. These and related first-order and splitting techniques have enjoyed a renaissance since they were recently used very successfully for these \(\ell _1\) and related problems [25, 51]. One explanation for their rapid convergence is the "error forgetting" property discovered and analyzed in [43] for \(\ell _1\) regularization.
Similarly, if we take \(H = \Vert \cdot \Vert _2\) then for any \(t>0\) the resulting zero level set of \(x \mapsto \varphi (x,t)\) will be the points in \(\mathbb {R}^n{\setminus } \Omega \) whose Euclidean distance to \(\Omega \) is equal to t. This fact will be useful later when we find the projection from a point \(x \in \mathbb {R}^n\) to a compact, convex set \(\Omega \).
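As a small numerical illustration of this fact (a one-dimensional sketch of our own, not part of the paper's algorithm): with \(H = \Vert \cdot \Vert _2\) the Hopf–Lax formula reduces to minimizing the initial data over the ball of radius t around x, and taking J to be the signed distance to \(\Omega = [-1,1]\) places the zero level set of \(\varphi (\cdot ,t)\) exactly at distance t from \(\Omega \).

```python
import numpy as np

def hopf_lax_eikonal_1d(J, x, t, num=20001):
    # Hopf-Lax for H = ||.||_2 in 1-D: phi(x,t) = min over |y - x| <= t of J(y),
    # approximated here by a fine grid on the interval [x - t, x + t].
    y = np.linspace(x - t, x + t, num)
    return np.min(J(y))

J = lambda y: np.abs(y) - 1.0       # signed distance to Omega = [-1, 1]
phi = hopf_lax_eikonal_1d(J, 3.0, 1.0)
print(phi)                          # approximately 1.0, since dist(3, Omega) = 2
                                    # and phi(3, t) = 2 - t
```

The zero level set of \(\varphi (\cdot ,t)\) is \(\{ \pm (1+t)\}\), the set of points at Euclidean distance t from \(\Omega \), as stated above.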
We present here two somewhat simple but illustrative examples to show the potential power of our approach. Time results are presented in Sect. 4 and show that we can compute solutions of some HJ PDEs in fairly high dimensions at a rate below a millisecond per evaluation on a standard laptop.
We note that in the above case we were able to compute the solution analytically and the dimension n played no significant role. Of course this is rather a special problem, but this gives us some idea of what to expect in more complicated cases, discussed in Sect. 3.
These two examples will be generalized below so that we can, with extreme speed, compute the signed distance, either Euclidean, Manhattan or various generalizations, to the boundary of the union of a finite collection of compact convex sets.
The remainder of this paper is organized as follows: Sect. 2 contains an introduction to optimal control and its connection to HJ PDE. Section 3 gives the details of our numerical methods. Section 4 presents numerical results with some details. We draw some concluding remarks and give future plans in Sect. 5. The “Appendix” links our approach to the concepts of gauge and support functions in convex analysis.
2 Introduction to optimal control
First, we give a short introduction to optimal control and its connection to the HJ PDE given in (11). We also introduce Hamiltonians that are positively homogeneous of degree one and describe their relationship to optimal control problems. We explain how to recover the optimal control from the solution of the HJ PDE. An "Appendix" describes further connections between these Hamiltonians and gauges in convex analysis. Second, we present some extensions of our work.
2.1 Optimal control and HJ PDE
We are largely following the discussion in [16], see also [20], about optimal control and its link with HJ PDE. We briefly present it formally, and we specialize it to the cases considered in this paper.
We present in Sect. 3 our efficient algorithm that computes not only \(\varphi (x,t)\) but also \(\nabla _x\varphi (x,t)\). We emphasize that we do not need any numerical approximation to compute the spatial gradient. In other words, our algorithm computes all the quantities we need to get the optimal control without using any approximations.
We will devise very fast, low-memory, totally parallelizable and apparently low-time-complexity methods for solving (11) with H given by (14) in the next section.
2.2 Some extensions and future work
In this section we show that we can solve the problem for a much more general class of Hamiltonians and initial data which arise in optimal control, including an interesting class of nonconvex initial data.
We end this section by showing that explicit formulas can be obtained for the terminal value \(\mathrm {x}(T)\) and the control \(\beta (t)\) for another class of running cost L. Suppose that C is a convex compact set containing the origin and take \(f(c) = c\) for any \(c \in C\). Assume also that \(L: \mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) is strictly convex, differentiable when its subdifferential is nonempty, and that \(\mathrm{dom}\, L\) has a nonempty interior with \(\mathrm{dom}\, L \subseteq C\). The associated Hamiltonian H is defined by \(H=L^*\). Then, using the results of [13], the function \((x,t) \mapsto \varphi (x,t)\) which solves (11) is given by the Hopf–Lax formula \(\varphi (x,t) = \min _{y\in \mathbb {R}^n}\left\{ J(y) + tH^*\left(\frac{x-y}{t}\right)\right\} \), where the minimizer is unique and denoted by \(\bar{y}(x,t)\). Note that the Hopf–Lax formula corresponds to a convex optimization problem which allows us to compute \(\bar{y}(x,t)\). In addition, we can compute the gradient with respect to x since we have \(\nabla _x \varphi (x,t) = \nabla H^*\left( \frac{x-\bar{y}(x,t)}{t}\right) \in \partial J(\bar{y}(x,t))\) for any given \(x\in \mathbb {R}^n\) and \(t>0\). For any \(t \in (-\infty , T)\) and fixed \(x\in \mathbb {R}^n\), the control is given by \(\beta (t) = \nabla H(\nabla _x \varphi (x,T-t))\), while the terminal value satisfies \(\mathrm {x}(T) = \bar{y}(x,T-t) = x - (T-t)\, \nabla H(\nabla _x \varphi (x,T-t))\). Note that both the control and the terminal value can be easily computed. More details about these facts will be given in a forthcoming paper.
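To make the Hopf–Lax recipe above concrete, here is a minimal sketch with the smooth choice \(J = H = \frac{1}{2}\Vert \cdot \Vert _2^2\) (our own illustrative choice, so everything is closed-form; this H is not positively 1-homogeneous, but the Hopf–Lax formulas of this paragraph apply to it):

```python
import numpy as np

# Hopf-Lax: phi(x,t) = min_y { J(y) + t*H*((x - y)/t) }, minimizer ybar(x,t),
# and grad_x phi(x,t) = (grad H*)((x - ybar)/t).  With J = H = 0.5*||.||^2 we
# have H* = 0.5*||.||^2 and the minimizer is ybar = x/(1 + t).
def hopf_lax(x, t):
    ybar = x / (1.0 + t)
    phi = 0.5*np.dot(ybar, ybar) + 0.5*np.dot(x - ybar, x - ybar)/t
    grad = (x - ybar) / t          # = grad_x phi(x, t), here equal to ybar
    return phi, grad, ybar

x = np.array([2.0, -1.0]); t = 0.5
phi, grad, ybar = hopf_lax(x, t)

# numerical sanity check that ybar really is the Hopf-Lax minimizer
obj = lambda y: 0.5*np.dot(y, y) + 0.5*np.dot(x - y, x - y)/t
rng = np.random.default_rng(0)
assert all(obj(ybar) <= obj(ybar + 1e-3*rng.standard_normal(2)) + 1e-12
           for _ in range(100))
```

In this toy case the control formula of the text reduces to \(\beta (t) = \nabla H(\nabla _x\varphi ) = \nabla _x\varphi \), which the sketch returns as `grad`.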
3 Overcoming the curse of dimensionality for convex initial data and convex homogeneous degree one Hamiltonians: optimal control
We first present our approach for evaluating the solution of the HJ PDE and its gradient using the Hopf formula [30], Moreau's identity [38], and the split Bregman algorithm [25]. We note that the split Bregman algorithm can be replaced by other algorithms which converge rapidly for problems of this type; an example is the primal–dual hybrid gradient method [8, 53]. Then, we show that our approach can be adapted to compute a closest point, to a given point, on a closed set which is the union of disjoint closed convex sets with nonempty interior.
3.1 Numerical optimization algorithm
An evaluation of the solution at \(x\in \mathbb {R}^n\) and \(t>0\) for the examples we consider in this paper is of the order of \(10^{-8}\)–\(10^{-4}\) s on a standard laptop (see Sect. 4). The apparent time complexity seems to be polynomial in n with remarkably small constants.
The Hopf formula (20) requires only the continuity of H, but we will additionally require the Hamiltonian H to be convex. We recall that the previous section shows how to relax this condition.
Closed-form formulas exist for the proximal map in some specific cases. For instance, we have seen in the introduction that \(\left( I + {\alpha \, \partial \Vert \cdot \Vert _i} \right) ^{-1} = {\hbox {shrink}}_i(\cdot , \alpha )\) for \(i=1,2\), where we recall that \({\hbox {shrink}}_1\) and \({\hbox {shrink}}_2\) are defined by (6) and (7), respectively. Another classical example considers a quadratic form \(\frac{1}{2}\Vert \cdot \Vert _A^2 = \frac{1}{2} \langle \cdot , A \cdot \rangle \), with A a real symmetric positive definite matrix, which yields \(\left( I + {\alpha \, \partial \left( \frac{1}{2}\Vert \cdot \Vert _A ^2\right) } \right) ^{-1}= \left( I_n + \alpha \, A\right) ^{-1}\), where \(I_n\) denotes the identity matrix of size n.
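For concreteness, these closed-form proximal maps can be transcribed as a short sketch (function names are ours; the formulas are the standard ones recalled in (6)–(7) and above):

```python
import numpy as np

def shrink1(z, alpha):
    # proximal map of alpha*||.||_1, i.e. (I + alpha d||.||_1)^{-1}:
    # componentwise soft-thresholding
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

def shrink2(z, alpha):
    # proximal map of alpha*||.||_2: radial soft-thresholding
    nz = np.linalg.norm(z)
    return np.maximum(1.0 - alpha/nz, 0.0) * z if nz > 0 else np.zeros_like(z)

def prox_quadratic(z, alpha, A):
    # proximal map of alpha*(1/2)<., A .>: solve (I_n + alpha*A) y = z
    return np.linalg.solve(np.eye(len(z)) + alpha*A, z)
```

For example, `shrink2` applied to \(z=(3,4)\) with \(\alpha =1\) scales z by \(1-\alpha /\Vert z\Vert _2 = 0.8\).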
Assume f is twice differentiable with a bounded Hessian; then the proximal map can be efficiently computed using Newton's method. Algorithms based on Newton's method require us to solve a linear system involving an \(n \times n\) matrix. Note that typical high dimensions for optimal control problems are about \(n=10\); for computational purposes, values of n of this order are small.
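The Newton step referred to here solves the proximal optimality condition \(y + \alpha \nabla f(y) = z\); a sketch (our construction, with an illustrative smooth f of our own choosing) is:

```python
import numpy as np

def prox_newton(grad_f, hess_f, z, alpha, iters=20):
    # Solve y + alpha*grad_f(y) = z by Newton's method; each step solves one
    # n x n linear system, which is cheap for the moderate n (around 10)
    # typical of optimal control problems.
    y = z.copy()
    n = len(z)
    for _ in range(iters):
        F = y + alpha*grad_f(y) - z              # residual of the optimality condition
        Jac = np.eye(n) + alpha*hess_f(y)        # its Jacobian
        y = y - np.linalg.solve(Jac, F)
    return y

# illustrative smooth convex f(y) = 0.25*||y||_2^4 (our choice):
grad_f = lambda y: np.dot(y, y) * y
hess_f = lambda y: np.dot(y, y)*np.eye(len(y)) + 2.0*np.outer(y, y)
z = np.array([1.0, 2.0]); alpha = 0.5
y = prox_newton(grad_f, hess_f, z, alpha)
# check the optimality condition of min_y 0.5*||y - z||^2 + alpha*f(y)
assert np.allclose(y + alpha*grad_f(y), z)
```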
We describe an efficient algorithm to compute the proximal map of \(\Vert \cdot \Vert _\infty \) in Sect. 4.2 using parametric programming [46, Chap. 11, Section 11.M]. An algorithm to compute the proximal map for \(\frac{1}{2}\Vert \cdot \Vert _1^2\) is described in Sect. 4.4.
We shall see that Moreau’s identity (25) can be very useful to compute the proximal maps of convex and positively 1homogeneous functions.
Let us consider an example. Consider Hamiltonians of the form \(H = \Vert \cdot \Vert _A = \sqrt{\langle \cdot , A \cdot \rangle }\) where A is a symmetric positive definite matrix. Here the Wulff shape is the ellipsoid \(C = \left\{ y\in \mathbb {R}^n \,:\, \langle y, A^{-1} y \rangle \le 1 \right\} \). We describe in Sect. 4.3 an efficient algorithm for computing the projection on an ellipsoid. Thus, this allows us to compute efficiently the proximal map of norms of the form \(\Vert \cdot \Vert _A\) using (26).
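A sketch of how Moreau's identity is used here (our transcription): for a convex, positively 1-homogeneous H, \(H^*\) is the indicator of the Wulff shape C, so the proximal map of \(\alpha H\) reduces to one projection onto C. The projection routine below is a placeholder to be supplied, e.g., by the ellipsoid projection of Sect. 4.3; we check the formula on \(H = \Vert \cdot \Vert _2\), whose Wulff shape is the Euclidean unit ball.

```python
import numpy as np

def prox_support_function(project_C, z, alpha):
    # Moreau's identity for H = support function of C (so H* = indicator of C):
    #   prox_{alpha H}(z) = z - alpha * pi_C(z / alpha),
    # i.e., one projection onto C evaluates the proximal map.
    return z - alpha * project_C(z / alpha)

# check with H = ||.||_2, Wulff shape C = Euclidean unit ball
proj_ball = lambda z: z / max(1.0, np.linalg.norm(z))
z = np.array([3.0, 4.0]); alpha = 1.0
y = prox_support_function(proj_ball, z, alpha)
# must agree with shrink_2: (1 - alpha/||z||_2) z for ||z||_2 > alpha
assert np.allclose(y, (1.0 - alpha/np.linalg.norm(z)) * z)
```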
3.2 Projection on closed convex set with the level set method
We now describe an algorithm based on the level set method [41] to compute the projection \(\pi _\Omega \) on a compact convex set \(\Omega \subset \mathbb {R}^n\) with a nonempty interior. This problem appears to be of great interest for its own sake.
Recall that we need to get initial data which behaves as a level set function should, i.e., as defined by (28). We also want either L or \(L^*\) to be smooth enough, actually twice differentiable with Lipschitz continuous Hessian, so that Newton’s method can be used.
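Our reconstruction of the outer loop, as a sketch: assuming \(t \mapsto \varphi (x,t)\) is decreasing and vanishes exactly at \(t^* = \mathrm{dist}(x,\Omega )\), Newton's method in the scalar variable t locates \(t^*\), and the projection is \(\pi _\Omega (x) = x - t^*\, \nabla _x\varphi (x,t^*)\). Here \(\Omega \) is the unit ball, so closed-form \(\varphi \) and \(\nabla _x\varphi \) stand in for the Hopf-formula evaluations of Sect. 3.1.

```python
import numpy as np

def project(phi, grad_phi, x, t0=1.0, iters=50):
    # Newton's method on t -> phi(x, t); the derivative is approximated by a
    # forward difference since only evaluations of phi are assumed available.
    t = t0
    for _ in range(iters):
        val = phi(x, t)
        dval = (phi(x, t + 1e-6) - val) / 1e-6
        t = t - val / dval
    return x - t * grad_phi(x, t), t

# Omega = unit ball: phi and its gradient are known exactly, so the test is exact
phi      = lambda x, t: np.linalg.norm(x) - 1.0 - t
grad_phi = lambda x, t: x / np.linalg.norm(x)
x = np.array([3.0, 4.0])
p, t = project(phi, grad_phi, x)
# t = dist(x, Omega) = ||x||_2 - 1 = 4, and p is the closest point on the sphere
```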
 (a) \(2 \le p < + \infty \)
 (b) \(1 < p \le 2\)
4 Numerical results

We consider the following Hamiltonians and initial data:
\(H = \Vert \cdot \Vert _p\) for \(p = 1, 2, \infty \),
\(H = \sqrt{\langle \cdot , A \cdot \rangle }\) with A a symmetric positive definite matrix,
\(J = \frac{1}{2}\Vert \cdot \Vert _p^2\) for \(p = 1, 2, \infty \),
\(J = \frac{1}{2}\langle \cdot , A\cdot \rangle \) with A a positive definite diagonal matrix.
4.1 Some explicit formulas for simple specific cases
4.2 The case of \(\Vert \cdot \Vert _\infty \)
4.3 The case \(\Vert \cdot \Vert _A\) and projection on an ellipsoid
4.4 The cases \(\frac{1}{2}\Vert \cdot \Vert _1^2\) and \(\frac{1}{2}\Vert \cdot \Vert _\infty ^2\)
The function g is decreasing and piecewise affine, and the breakpoints of g (i.e., the points where g is not differentiable) are \(B= \{0\} \cup _{i=1,\ldots ,n} \{ z_i\}\). We now proceed as for the case \(\Vert \cdot \Vert _\infty \). We denote by \((l_1,\ldots , l_{m}) \in B^{m}\) the breakpoints sorted in increasing order, i.e., such that \(l_i < l_{i+1}\) for \(i=1,\ldots ,(m-1)\), where \(m \le n\) is the number of breakpoints. We use a bitonic search to find the two consecutive breakpoints \(l_i\) and \(l_{i+1}\) such that \(g(l_i) \ge 0 > g(l_{i+1})\). Since g is affine on \([l_i, l_{i+1}]\), a simple interpolation yields the value \(\bar{\beta }\). We then compute \(\left( I + {\frac{\alpha }{2} \partial \Vert \cdot \Vert _1^2} \right) ^{-1}(z)\) using (40).
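The breakpoint search can be sketched as follows (our own parametrization, not the paper's exact g; the bracketing search is written as a linear scan for clarity). The minimizer of \(\frac{1}{2}\Vert y-z\Vert _2^2 + \frac{\alpha }{2}\Vert y\Vert _1^2\) is \(y = {\hbox {shrink}}_1(z, \alpha \beta )\), where \(\beta = \Vert y\Vert _1\) is the unique root of the decreasing piecewise-affine map \(g(\beta ) = \sum _i \max (|z_i| - \alpha \beta , 0) - \beta \):

```python
import numpy as np

def shrink1(z, alpha):
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

def prox_half_l1_squared(z, alpha):
    # g is affine between consecutive breakpoints (0 and the |z_i|/alpha),
    # so bracketing its sign change and interpolating solves g(beta) = 0 exactly.
    a = np.abs(z)
    if not a.any():
        return np.zeros_like(z)
    g = lambda b: np.sum(np.maximum(a - alpha*b, 0.0)) - b
    bps = np.unique(np.concatenate(([0.0], a / alpha)))
    lo = 0.0
    for hi in bps[1:]:
        if g(hi) < 0.0:
            break
        lo = hi
    glo, ghi = g(lo), g(hi)
    beta = lo if ghi == glo else lo - glo * (hi - lo) / (ghi - glo)
    return shrink1(z, alpha * beta)

z = np.array([3.0, -1.0, 0.5]); alpha = 0.5
y = prox_half_l1_squared(z, alpha)
# optimality: y must be a fixed point of y = shrink1(z, alpha*||y||_1)
assert np.allclose(y, shrink1(z, alpha * np.sum(np.abs(y))))
```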
4.5 Time results and illustrations
We now give numerical results for several Hamiltonians and initial data. We present time results on a standard laptop using a single core, which show that our approach allows us to evaluate HJ PDE solutions very rapidly. We also present time results on a 16-core computer to show that our approach scales very well, along with some plots that depict the solutions of some HJ PDEs.

We consider the following Hamiltonians and initial data:
\(H = \Vert \cdot \Vert _p\) for \(p = 1, 2, \infty \),
\(H = \sqrt{\langle \cdot , D \cdot \rangle }\) with D a diagonal positive definite matrix,
\(H = \sqrt{\langle \cdot , A \cdot \rangle }\) with A a symmetric positive definite matrix,
\(J = \frac{1}{2}\Vert \cdot \Vert _p^2\) for \(p = 1, 2, \infty \),
\(J = \frac{1}{2}\langle \cdot , D^{-1}\cdot \rangle \) with D a positive definite diagonal matrix.
All computations are performed using IEEE double-precision floating-point arithmetic with denormalized numbers disabled. The quantities (x, t) are drawn uniformly in \([-10,10]^n \times [0,10]\). We present the average time to evaluate a solution over 1,000,000 runs.
We set \(\lambda =1\) in the split Bregman algorithm (21)–(23). We stop the iterations when the following stopping criterion is met: \(\Vert v^k - v^{k-1}\Vert _2^2 \le 10^{-8}\), \(\Vert d^k - d^{k-1}\Vert _2^2 \le 10^{-8}\) and \(\Vert d^k - v^{k}\Vert _2^2 \le 10^{-8}\).
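For the special case \(J = \frac{1}{2}\Vert \cdot \Vert _2^2\) (so \(J^* = \frac{1}{2}\Vert \cdot \Vert _2^2\)) and \(H = \Vert \cdot \Vert _2\), both proximal maps are closed-form, and the split Bregman evaluation of the Hopf formula with the stopping rule above can be sketched as follows (our transcription; variable names v, d, b as in (21)–(23), and the exact value for this case, \(\varphi (x,t) = \frac{1}{2}\max (\Vert x\Vert _2 - t, 0)^2\), serves as a reference):

```python
import numpy as np

def shrink2(z, alpha):
    nz = np.linalg.norm(z)
    return np.maximum(1.0 - alpha/nz, 0.0) * z if nz > 0 else np.zeros_like(z)

def hopf_split_bregman(x, t, lam=1.0, tol=1e-8, max_iter=10000):
    # evaluates phi(x,t) = -min_v { J*(v) + t*H(v) - <x,v> } by splitting d = v
    n = len(x)
    v = np.zeros(n); d = np.zeros(n); b = np.zeros(n)
    for _ in range(max_iter):
        v_old, d_old = v, d
        v = (x + lam*(d - b)) / (1.0 + lam)   # prox step for J*(.) - <x,.>
        d = shrink2(v + b, t/lam)             # prox step for t*H = t*||.||_2
        b = b + v - d                         # Bregman/multiplier update
        if (np.sum((v - v_old)**2) <= tol and np.sum((d - d_old)**2) <= tol
                and np.sum((d - v)**2) <= tol):
            break
    phi = np.dot(x, v) - 0.5*np.dot(v, v) - t*np.linalg.norm(v)
    return phi, v                             # v approximates grad_x phi(x,t)

x = np.array([3.0, 4.0]); t = 2.0
phi, grad = hopf_split_bregman(x, t)
# reference: phi = 0.5*max(||x||_2 - t, 0)^2 = 4.5, grad = (1.8, 2.4)
```

Note that the returned v is exactly the quantity the text identifies with \(\nabla _x\varphi (x,t)\), so the gradient comes at no extra cost.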
Time results in seconds for the average time per call for evaluating the solution of the HJ PDE with the initial data \(J = \frac{1}{2}\Vert \cdot \Vert _2^2\), several Hamiltonians and various dimensions n
\(\varvec{n}\) | \(\varvec{\Vert y\Vert _1}\) | \(\varvec{\Vert y\Vert _2}\) | \(\varvec{\Vert y\Vert _\infty }\) | \(\varvec{\Vert y\Vert _D}\) | \(\varvec{\Vert y\Vert _A}\)
4 | 6.36e−08 | 1.20e−07 | 2.69e−07 | 7.00e−07 | 8.83e−07
8 | 6.98e−08 | 1.28e−07 | 4.89e−07 | 1.07e−06 | 1.57e−06
12 | 8.72e−08 | 1.56e−07 | 7.09e−07 | 1.59e−06 | 2.23e−06
16 | 9.24e−08 | 1.50e−07 | 9.92e−07 | 2.04e−06 | 2.95e−06
Time results in seconds for the average time per call for evaluating the solution of the HJ PDE with the initial data \(J = \frac{1}{2}\Vert \cdot \Vert _\infty ^2\), several Hamiltonians and various dimensions n
\(\varvec{n}\) | \(\varvec{\Vert y\Vert _1}\) | \(\varvec{\Vert y\Vert _2}\) | \(\varvec{\Vert y\Vert _\infty }\) | \(\varvec{\Vert y\Vert _D}\) | \(\varvec{\Vert y\Vert _A}\)
4 | 1.79e−06 | 1.53e−06 | 1.84e−06 | 4.88e−06 | 7.77e−06
8 | 3.77e−06 | 2.31e−06 | 3.50e−06 | 9.73e−06 | 1.92e−05
12 | 6.31e−06 | 3.14e−06 | 5.54e−06 | 1.44e−05 | 2.91e−05
16 | 9.61e−06 | 3.88e−06 | 8.22e−06 | 1.80e−05 | 4.04e−05
Time results in seconds for the average time per call for evaluating the solution of the HJ PDE with the initial data \(J = \frac{1}{2}\Vert \cdot \Vert _1^2\), several Hamiltonians and various dimensions n
\(\varvec{n}\) | \(\varvec{\Vert y\Vert _1}\) | \(\varvec{\Vert y\Vert _2}\) | \(\varvec{\Vert y\Vert _\infty }\) | \(\varvec{\Vert y\Vert _D}\) | \(\varvec{\Vert y\Vert _A}\)
4 | 2.86e−06 | 4.42e−06 | 9.17e−06 | 1.79e−05 | 1.97e−05
8 | 9.85e−06 | 1.63e−05 | 4.38e−05 | 9.37e−05 | 1.09e−04
12 | 2.35e−05 | 3.84e−05 | 1.19e−04 | 2.63e−04 | 3.24e−04
16 | 4.35e−05 | 7.03e−05 | 2.46e−04 | 5.19e−04 | 6.92e−04
Time results in seconds for the average time per call for evaluating the solution of the HJ PDE with the initial data \(J = \frac{1}{2}\langle \cdot , D^{1}\cdot \rangle \), several Hamiltonians and various dimensions n
\(\varvec{n}\) | \(\varvec{\Vert y\Vert _1}\) | \(\varvec{\Vert y\Vert _2}\) | \(\varvec{\Vert y\Vert _\infty }\) | \(\varvec{\Vert y\Vert _D}\) | \(\varvec{\Vert y\Vert _A}\)
4 | 3.62e−07 | 5.19e−07 | 9.35e−07 | 2.79e−06 | 3.50e−06
8 | 3.83e−07 | 5.25e−07 | 1.42e−06 | 4.40e−06 | 5.75e−06
12 | 4.97e−07 | 6.62e−07 | 1.73e−06 | 5.70e−06 | 7.98e−06
16 | 5.92e−07 | 6.88e−07 | 2.27e−06 | 6.64e−06 | 1.04e−05
Time results in seconds for the average time per call for evaluating the solution of the HJ PDE with the initial data \(J = \frac{1}{2}\Vert \cdot \Vert _1^2\), and the Hamiltonian \(H=\Vert \cdot \Vert _\infty \), for various dimensions, and several cores
\(\varvec{n}\) | 1 core | 4 cores | 8 cores | 16 cores
4 | 1.11e−05 | 2.81e−06 | 1.56e−06 | 8.36e−07
8 | 4.77e−05 | 1.33e−05 | 6.81e−06 | 3.48e−06
12 | 1.35e−04 | 3.90e−05 | 1.94e−05 | 9.90e−06
16 | 3.24e−04 | 8.76e−05 | 4.40e−05 | 2.22e−05
5 Conclusion
We have designed algorithms which enable us to solve certain Hamilton–Jacobi equations very rapidly. Our algorithms not only evaluate the solution but also compute its gradient. These include equations arising in control theory, leading to Hamiltonians which are convex and positively homogeneous of degree 1. We were motivated by ideas coming from compressed sensing; we borrowed algorithms devised to solve \(\ell _1\) regularized problems, which are known to converge rapidly. We apparently extended this fast convergence to convex positively 1-homogeneous regularized problems.
There are no grids involved. Instead of complexity which is exponential in the dimension of the problem, typical of grid-based methods, ours appears to be polynomial in the dimension with very small constants. We can evaluate the solution on a laptop at about \(10^{-4}\)–\(10^{-8}\) s per evaluation for fairly high dimensions. Our algorithm requires very low memory and is totally parallelizable, which suggests that it is suitable for low-energy embedded systems. We have chosen to restrict the presentation of the numerical experiments to norm-based Hamiltonians, and we emphasize that our approach naturally extends to more elaborate positively 1-homogeneous Hamiltonians (using the min/max algebra results, as we did for instance).
As an important step in this procedure, we have also derived an equally fast method to find a closest point, to a given point, lying on \(\Omega = \cup _{i=1}^{k}\Omega _i\), a finite union of compact convex sets \(\Omega _i\) with a nonempty interior.
Of course the same approach could be used for any convex, positively 1homogeneous Hamiltonian H (instead of \(\Vert \cdot \Vert _2\)), e.g., \(H = \Vert \cdot \Vert _1\). This will give us results related to computing the Manhattan distance.
 1.
We will do experiments involving linear controls, allowing dependence on x and \(t>0\) while the Hamiltonian \((p,x,t) \mapsto H(p,x,t)\) remains convex and positively 1-homogeneous in p. The procedure was described in Sect. 2.
 2.
We will extend our fast computation of the projection in several ways. We will consider in detail the case of polyhedral regions defined by the intersection of sets \(\Omega _i = \{x\in \mathbb {R}^n \,:\, \langle a_i, x \rangle - b_i \le 0\}\), \(a_i \in \mathbb {R}^n\), \(b_i \in \mathbb {R}\), \(\Vert a_i\Vert _2=1\), for \(i=1,\ldots ,k\). This is of interest in linear programming (LP) and related problems. We expect to develop alternate approaches to several issues arising in LP, including rapidly finding the existence and location of a feasible point.
 3.
We will consider nonconvex but positively 1-homogeneous Hamiltonians. These arise in differential games, as well as in the problem of finding a closest point on the boundary of a given compact convex set \(\Omega \) to an arbitrary point in the interior of \(\Omega \).
 4. As an example of nonconvex Hamiltonians we consider the following problems arising in differential games [21, 36, 37]. We need to solve the following scalar problem for any \(x\in \mathbb {R}^n\) and any \(\alpha >0\):
$$\begin{aligned} \min _y \left\{ \frac{1}{2} \Vert y-x\Vert _2^2 - \alpha \Vert y\Vert _1\right\} . \end{aligned}$$
It is easy to see that the minimizer is the \({\hbox {stretch}}_1\) operator which we define for any \(i=1,\ldots ,n\) as:
$$\begin{aligned} \left( \hbox {stretch}_1 (x,\alpha )\right) _i = {\left\{ \begin{array}{ll} x_i+\alpha &{}\quad \hbox {if }\; x_i > 0, \\ 0 &{} \quad \hbox {if } \; x_i=0,\\ x_i-\alpha &{}\quad \hbox {if }\; x_i < 0. \end{array}\right. } \end{aligned}$$
(41)
So we are solving
$$\begin{aligned} \frac{\partial \varphi }{\partial t}(x,t) - \sum _{i=1}^n \left| \frac{\partial \varphi }{\partial x_i}(x,t)\right| = 0, \end{aligned}$$
with the previous initial data, and the Hopf formula yields
$$\begin{aligned} \varphi (x,t)= & {} -\frac{1}{2} - \min _{v\in \mathbb {R}^n}\left\{ \frac{1}{2} \sum _{i=1}^n a_i^2 v_i^2 - t \sum _{i=1}^n |v_i| + \langle x,v\rangle \right\} \\= & {} -\frac{1}{2} + \frac{1}{2} \sum _{i=1}^n \frac{x_i^2}{a_i^2} - \min _{v\in \mathbb {R}^n} \left\{ \frac{1}{2} \sum _{i=1}^n a_i^2 \left( v_i - \frac{x_i}{a_i^2}\right) ^2 - t \sum _{i=1}^n |v_i|\right\} \\= & {} -\frac{1}{2} + \frac{1}{2} \sum _{i=1}^n \frac{(|x_i| + t)^2}{a_i^2}. \end{aligned}$$
The zero level set disappears when \(t \ge \max _{i} a_i\), as it should. We note that the discontinuity in the minimizer will lead to a jump in the derivatives \((x,t)\mapsto \frac{\partial \varphi }{\partial x_i}(x,t)\), which is no surprise, given that the interface associated with this equation and the previous initial data will move inwards and characteristics will intersect. The solution \(\varphi (x,t)\) will remain locally Lipschitz continuous, even though a point inside the ellipsoid may be equally close to two points on the boundary of the original ellipsoid in the Manhattan metric.
For completeness, we also consider the nonconvex optimization problem
$$\begin{aligned} \min _{v\in \mathbb {R}^n} \left\{ \frac{1}{2} \Vert v-x\Vert _2^2 - \alpha \,\Vert v\Vert _2\right\} . \end{aligned}$$
Its minimizer is given by the \(\mathrm{stretch}_2\) operator formally defined by
$$\begin{aligned} \mathrm{stretch}_2(x, \alpha ) = {\left\{ \begin{array}{ll} x + \alpha \frac{x}{\Vert x\Vert _2} &{}\quad \mathrm{if }\; x\ne 0,\\ \alpha \theta \ \mathrm{with }\ \Vert \theta \Vert _2=1 &{}\quad \mathrm{if }\; x=0. \end{array}\right. } \end{aligned}$$
This formula, although multivalued at \(x=0\), is useful to solve the following problem: move the unit sphere inwards with normal velocity 1. The solution comes from finding the zero level set of
$$\begin{aligned} \varphi (x,t)= & {} -\min _{v\in \mathbb {R}^n} \left\{ \frac{1}{2}\Vert v\Vert _2^2 - t\Vert v\Vert _2 - \langle x,v\rangle \right\} - \frac{1}{2}\\= & {} -\min _{v\in \mathbb {R}^n} \left\{ \frac{1}{2} \Vert v-x\Vert _2^2 - t\Vert v\Vert _2\right\} + \frac{1}{2} \left( \Vert x\Vert _2^2 - 1\right) \\= & {} -\frac{1}{2} t^2 + t\Vert x\Vert _2 \left( 1 + \frac{t}{\Vert x\Vert _2}\right) + \frac{1}{2} \left( \Vert x\Vert _2^2 - 1\right) \\= & {} \frac{1}{2} \left( \Vert x\Vert _2 + t\right) ^2 - \frac{1}{2}, \end{aligned}$$
and, of course, the zero level set is the set of x satisfying \(\Vert x\Vert _2 = 1-t\) if \(t \le 1\); the zero level set vanishes for \(t > 1\).
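A direct transcription of the two stretch operators above (a minimal sketch; at the multivalued point of \(\mathrm{stretch}_2\) we simply pick the first basis vector as \(\theta \)):

```python
import numpy as np

def stretch1(x, alpha):
    # the stretch_1 operator of (41): pushes each nonzero coordinate away
    # from 0 by alpha (note np.sign(0) = 0, matching the middle case of (41))
    return x + alpha*np.sign(x)

def stretch2(x, alpha, theta=None):
    # the stretch_2 operator: multivalued at x = 0, where any unit direction
    # theta may be chosen; we default to the first basis vector
    if np.linalg.norm(x) == 0:
        theta = theta if theta is not None else np.eye(len(x))[0]
        return alpha*theta
    return x + alpha*x/np.linalg.norm(x)
```

Both are "anti-shrink" maps: they invert the shrink operators in the sense of increasing, rather than decreasing, the magnitude of the input by \(\alpha \).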
Declarations
Authors' contributions
JD and SO equally contributed to this work. Both authors read and approved the final manuscript.
Acknowledgements
The authors deeply thank Gary Hewer (Naval Air Weapons Center, China Lake) for fruitful discussions, carefully reading drafts and helping us to improve the paper. Research supported by ONR Grants N00014-14-1-0683 and N00014-12-1-0838 and DOE Grant DE-SC00183838.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
 Akian, M., Bapat, R., Gaubert, S.: Max-plus algebras. In: Hogben, L. (ed.) Handbook of Linear Algebra (Discrete Mathematics and Its Applications), Chapter 25, vol. 39. Chapman & Hall/CRC, Boca Raton (2006)
 Aubin, J.-P., Cellina, A.: Differential Inclusions. Springer, Berlin (1984)
 Bellman, R.: Adaptive Control Processes, A Guided Tour. Princeton University Press, Princeton (1961)
 Bellman, R.: Dynamic Programming. Princeton University Press, Princeton (1957)
 Brezis, H.: Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. North-Holland, Amsterdam (1973)
 Candes, E.J., Romberg, J., Tao, T.: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)
 Chambolle, A., Darbon, J.: On total variation minimization and surface evolution using parametric maximum flows. Int. J. Comput. Vis. 84(3), 288–307 (2009)
 Chambolle, A., Pock, T.: A first-order primal–dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 41(1), 120–145 (2011)
 Cheng, L.-T., Tsai, Y.-H.: Redistancing by flow of time dependent eikonal equation. J. Comput. Phys. 227(8), 4002–4017 (2008)
 Combettes, P.L., Pesquet, J.-C.: Proximal thresholding algorithm for minimization over orthonormal bases. SIAM J. Optim. 18(4), 1351–1376 (2007)
 Crandall, M.G., Evans, L.C., Lions, P.-L.: Some properties of viscosity solutions of Hamilton–Jacobi equations. Trans. AMS 282(2), 487–502 (1984)
 Crandall, M.G., Lions, P.-L.: Viscosity solutions of Hamilton–Jacobi equations. Trans. AMS 277(1), 1–42 (1983)
 Darbon, J.: On convex finite-dimensional variational methods in imaging sciences, and Hamilton–Jacobi equations. SIAM J. Imaging Sci. 8(4), 2268–2293 (2015)
 Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57(11), 1413–1457 (2004)
 Dijkstra, E.W.: A note on two problems in connexion with graphs. Numer. Math. 1, 269–271 (1959)
 Dolcetta, I.C.: Representations of solutions of Hamilton–Jacobi equations. In: Lupo, D., Pagani, C.D. (eds.) Nonlinear Equations: Methods, Models and Applications, pp. 79–90. Birkhäuser, Basel (2003)
 Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1305 (2006)
 Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. Ser. A 55(3), 293–318 (1992)
 Ekeland, I., Temam, R.: Convex Analysis and Variational Problems. North-Holland, Amsterdam (1976)
 Evans, L.C.: Partial Differential Equations, Graduate Studies in Mathematics, vol. 19. AMS, Providence (2010)
 Evans, L.C., Souganidis, P.E.: Differential games and representation formulas for solutions of Hamilton–Jacobi–Isaacs equations. Indiana Univ. Math. J. 38, 773–797 (1984)
 Figueiredo, M.A.T., Nowak, R.D.: Bayesian wavelet-based signal estimation using noninformative priors. In: Conference Record of the Thirty-Second Asilomar Conference on Signals, Systems and Computers, vol. 2, pp. 1368–1373. IEEE, Piscataway (1998)
 Fleming, W.H.: Deterministic nonlinear filtering. Annali della Scuola Normale Superiore di Pisa - Classe di Scienze 23(3–4), 435–454 (1997)
 Glowinski, R., Marrocco, A.: Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires. C. R. hebd. Séanc. Acad. Sci. Paris 278, série A, 1649–1652 (1974)
 Goldstein, T., Osher, S.: The split Bregman method for \(L_1\) regularized problems. SIAM J. Imaging Sci. 2(2), 323–343 (2009)
 Hestenes, M.R.: Multiplier and gradient methods. J. Optim. Theory Appl. 4(3), 303–320 (1969)
 Hiriart-Urruty, J.-B.: Optimisation et Analyse Convexe. Presse Universitaire de France, Paris (1998)
 Hiriart-Urruty, J.-B., Lemaréchal, C.: Convex Analysis and Minimization Algorithms Part I. Springer, Heidelberg (1996)
 Hiriart-Urruty, J.-B., Lemaréchal, C.: Convex Analysis and Minimization Algorithms Part II. Springer, Heidelberg (1996)
 Hopf, E.: Generalized solutions of nonlinear equations of the first order. J. Math. Mech. 14, 951–973 (1965)
 Hu, C., Shu, C.-W.: A discontinuous Galerkin finite element method for Hamilton–Jacobi equations. SIAM J. Sci. Comput. 21(2), 666–690 (1999)
 Kurzhanski, A.B., Varaiya, P.: Dynamics and Control of Trajectory Tubes: Theory and Computation. Birkhäuser, Boston (2014)
 Lions, P.-L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979)
 Lions, P.-L., Rochet, J.-C.: Hopf formula and multitime Hamilton–Jacobi equations. Proc. Am. Math. Soc. 96(1), 79–84 (1986)
 McEneaney, W.M.: Max-Plus Methods for Nonlinear Control and Estimation. Birkhäuser, Boston (2006)
 Mitchell, I.M., Bayen, A.M., Tomlin, C.J.: A time-dependent Hamilton–Jacobi formulation of reachable sets for continuous dynamic games. IEEE Trans. Autom. Control 50(171), 947–957 (2005)
 Mitchell, I.M., Tomlin, C.J.: Overapproximating reachable sets by Hamilton–Jacobi projections. J. Sci. Comput. 19(1–3), 323–346 (2003)
 Moreau, J.-J.: Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. France 93, 273–299 (1965)
 Oberman, A., Osher, S., Takei, R., Tsai, R.: Numerical methods for anisotropic mean curvature flow based on a discrete time variational formulation. Commun. Math. Sci. 9(3), 637–662 (2011)
 Osher, S., Merriman, B.: The Wulff shape as the asymptotic limit of a growing crystalline interface. Asian J. Math. 1(3), 560–571 (1997)
 Osher, S., Sethian, J.A.: Fronts propagating with curvature-dependent speed: algorithms based on Hamilton–Jacobi formulations. J. Comput. Phys. 79(1), 12–49 (1988)
 Osher, S., Shu, C.-W.: High-order essentially nonoscillatory schemes for Hamilton–Jacobi equations. SIAM J. Numer. Anal. 28(4), 907–922 (1991)
 Osher, S., Yin, W.: Error forgetting of Bregman iteration. J. Sci. Comput. 54(2), 684–695 (2013)
 Rockafellar, R.T.: Convex Analysis. Princeton Landmarks in Mathematics. Princeton University Press, Princeton (1997) (reprint of the 1970 original)
 Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877–898 (1976)
 Rockafellar, R.T.: Network Flows and Monotropic Optimization. Athena Scientific, Belmont (1998) (reprint of the 1984 original)
 Tsai, Y.-H.R., Cheng, L.-T., Osher, S., Zhao, H.-K.: Fast sweeping algorithms for a class of Hamilton–Jacobi equations. SIAM J. Numer. Anal. 41(2), 673–694 (2003)
 Teboulle, M.: Convergence of proximal-like algorithms. SIAM J. Optim. 7, 1069–1083 (1997)
 Tsitsiklis, J.N.: Efficient algorithms for globally optimal trajectories. IEEE Trans. Autom. Control 40(9), 1528–1538 (1995)
 Winkler, G.: Image Analysis, Random Fields and Dynamic Monte Carlo Methods. Applications of Mathematics, 2nd edn. Springer, Berlin (2006)
 Yin, W., Osher, S., Goldfarb, D., Darbon, J.: Bregman iterative algorithms for \(\ell _1\) minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1(1), 143–168 (2008)
 Zhang, Y.-T., Shu, C.-W.: High-order WENO schemes for Hamilton–Jacobi equations on triangular meshes. SIAM J. Sci. Comput. 24(3), 1005–1030 (2003)
 Zhu, M., Chan, T.F.: An efficient primal–dual hybrid gradient algorithm for total variation image restoration. UCLA CAM Report 08-34 (2008)