Kepler's Equation

- 1\. The Equation of Kepler
  - 1.1. Elliptic Orbits
  - 1.2. Parabolic Orbits
  - 1.3. Hyperbolic Orbits
- 2\. Auxiliary Quantities
- 3\. Distance and Coordinates
- 4\. Summary
- 5\. Charts of Anomalies
- 6\. Some Specific Solutions
- 7\. Finding a Solution
  - 7.1. Improving the Estimate
    - 7.1.1. First-Order Method
    - 7.1.2. Second-Order Method
  - 7.2. How Many Iterations Will We Need?
  - 7.3. Stopping Criterion
  - 7.4. Initial Estimate
    - 7.4.1. Small Eccentric Anomaly
    - 7.4.2. Large Eccentric Anomaly
    - 7.4.3. Combined
  - 7.5. How to Select the Right One for the Circumstances

\(\def\|{&}\DeclareMathOperator{\D}{\bigtriangleup\!} \DeclareMathOperator{\d}{\text{d}\!}\)

\( \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\d}{d} \DeclareMathOperator{\artanh}{artanh} \DeclareMathOperator{\arsinh}{arsinh} \DeclareMathOperator{\arcosh}{arcosh} \DeclareMathOperator{\del}{∆\!} \newcommand{\pdd}[2]{\frac{\partial #1}{\partial #2}} \)

Kepler's Equation describes the relationship between time and position for objects in an elliptic orbit under the influence of gravity. There are similar equations for parabolic and hyperbolic orbits, which for convenience we also call Kepler's Equations here.

Using Kepler's Equation, you can calculate the time at which a certain position is reached. If you want to calculate the position at a certain time, then for elliptic and hyperbolic orbits this cannot be done with a fixed, small number of evaluations of well-known functions. Instead, you must first estimate a solution and then repeatedly refine that estimate until it cannot get any closer to the true solution.

Using the methods described below, you can calculate the true anomaly
\( ν \), which describes the position of the planet (or other
celestial object) in its orbit around the Sun (or other celestial
object). The true anomaly is the angle between the direction to the
perifocus and the direction to the planet, as seen from the focus of
the orbit. The described method requires (for
double
precision floating-point numbers, to a relative accuracy of about
2 × 10^{−16}) at most 5 iterations.

For an elliptic orbit, the eccentricity \( e \) is greater than or equal to 0 and less than 1. The mean anomaly \( M \), eccentric anomaly \( E \), and true anomaly \( ν \) (Greek letter 'nu') of an object in an elliptic orbit are determined by

\begin{align} M \| = t\sqrt{\frac{Γ}{a^3}} \\ M \| = E − e \sin(E) \label{eq:ekepler} \\ τ_E \| ≡ \tan\left( \frac{1}{2}E \right) \\ τ_ν \| ≡ \tan\left( \frac{1}{2}ν \right) = \sqrt{\frac{1 + e}{1 − e}} τ_E \label{eq:eEtoν} \end{align}

where \( a \) is the length of the semimajor axis of the orbit, \( Γ \) (greek capital 'gamma') is the gravity parameter of the system, \( t \) is the time since the (last) passage through the perifocus, \( e \) is the eccentricity of the orbit, and all angles are measured in radians (2π radians equals 360 degrees). \( M \) increases at a constant rate (at 2π radians per orbital period), so it is a measure of time. \( ν \) is the angular distance of the object from its perifocus, as seen from the focus.

Equation \eqref{eq:ekepler} is Kepler's Equation (for elliptic orbits). The problem is that we want to know \( E \) given \( M \) and \( e \), and that equation cannot be rearranged into a form \( E = … \) in which the right-hand side is a finite collection of terms that do not have \( E \) in them.
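As a first illustration (a sketch of my own, not the method developed later in this text), a plain Newton iteration already solves the elliptic equation numerically for moderate eccentricities. The function name `kepler_E` and the crude starting value \( E_0 = M \) are my choices:

```python
import math

def kepler_E(M, e, tol=1e-14, max_iter=50):
    """Solve M = E - e*sin(E) for E with Newton's method.

    Minimal illustration only; the crude start E = M can converge
    slowly (or badly) for e close to 1 and large M.
    """
    E = M  # crude starting value
    for _ in range(max_iter):
        # Newton step for f(E) = E - e*sin(E) - M, f'(E) = 1 - e*cos(E)
        dE = (M - E + e * math.sin(E)) / (1.0 - e * math.cos(E))
        E += dE
        if abs(dE) < tol:
            break
    return E

E = kepler_E(0.001, 0.99)
print(E)                        # ≈ 0.088549
print(E - 0.99 * math.sin(E))   # recovers M = 0.001
```

The better initial estimates and stopping criterion described below make this basic idea reliable for all \( M \) and \( e \).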

The eccentricity \( e \) of a parabolic orbit is equal to exactly 1. For such an orbit, there is a direct method to find the position as a function of time, without approximations. This is Barker's equation:

\begin{align} W \| = t\sqrt{\frac{9Γ}{8q^3}} = \frac{3}{2\sqrt{2}} t\sqrt{\frac{Γ}{q^3}} = \frac{3}{2\sqrt{2}} M_q \label{eq:pkepler} \\ u \| = \sqrt[3]{W + \sqrt{W^2 + 1}} \\ τ_ν \| ≡ \tan\left( \frac{1}{2}ν \right) = u − \frac{1}{u} \end{align}

For \( M_q \) see Equation \eqref{eq:M_q}.
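In code, the parabolic solution is direct (a sketch of my own; `barker_nu` is not a standard name):

```python
import math

def barker_nu(t, Gamma, q):
    """True anomaly for a parabolic orbit (e = 1), via Barker's equation.

    Non-iterative: follows the W, u, tau_nu formulas above.
    """
    Mq = t * math.sqrt(Gamma / q**3)       # perifocal anomaly
    W = 1.5 / math.sqrt(2.0) * Mq          # = sqrt(9/8) * Mq
    # W + sqrt(W^2 + 1) > 0 for any W, so the real cube root is safe
    u = (W + math.sqrt(W * W + 1.0)) ** (1.0 / 3.0)
    tau = u - 1.0 / u                      # tan(nu/2)
    return 2.0 * math.atan(tau)

# the worked example further down: t = 1, Gamma = 1, q = 1
print(barker_nu(1.0, 1.0, 1.0))   # ≈ 1.1179
```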

For a hyperbolic orbit, the eccentricity \( e \) is greater than 1.

\begin{align} M \| = t\sqrt{\frac{Γ}{|a|^3}} \\ M \| = e \sinh E − E \label{eq:hkepler} \\ τ_E \| ≡ \tanh\left( \frac{1}{2}E \right) \\ τ_ν \| ≡ \tan\left( \frac{1}{2}ν \right) = \sqrt{\frac{e + 1}{e − 1}} τ_E \label{eq:hEtoν} \end{align}

Equation \eqref{eq:hkepler} has the same problem that equation \eqref{eq:ekepler} has.

We define some auxiliary quantities that will come in handy later.
Firstly the *parabolic excess* \( δ \), which is the amount by
which the eccentricity exceeds that of a parabolic orbit:

\begin{equation} δ ≡ e − 1 \label{eq:δ} \end{equation}

Then the *perifocal distance* \( q \), which is the distance
between the focus and perifocus of the orbit. The focus of the orbit
is the point that the object orbits around. The perifocus is the
point in the orbit that is closest to the focus.

\begin{equation} q ≡ |a| |1 − e| = |a| |δ| \end{equation}

Then the *perifocal anomaly* \( M_q \), which is like the mean
anomaly but based on the perifocal distance \( q \) instead of on the
length \( a \) of the semimajor axis:

\begin{equation} M_q ≡ t \sqrt{\frac{Γ}{q^3}} = \frac{M}{\sqrt{|δ|^3}} \label{eq:M_q} \end{equation}

and finally the *reduced eccentric anomaly* \( E_r \), which is

\begin{equation} E_r ≡ \frac{E}{\sqrt{|δ|}} \label{eq:E_r} \end{equation}
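Collected in code (a small helper of my own; the argument names are mine), the auxiliary quantities look like this:

```python
import math

def auxiliaries(e, a=None, q=None, t=None, Gamma=None):
    """Parabolic excess, perifocal distance, and perifocal anomaly.

    Pass whichever of a/q and t/Gamma you have.
    """
    delta = e - 1.0                       # parabolic excess
    if q is None:
        q = abs(a) * abs(delta)           # q = |a| |1 - e|
    Mq = None
    if t is not None and Gamma is not None:
        Mq = t * math.sqrt(Gamma / q**3)  # perifocal anomaly
    return delta, q, Mq

def reduced_E(E, delta):
    """Reduced eccentric anomaly E_r = E / sqrt(|delta|)."""
    return E / math.sqrt(abs(delta))

# the e = 0.99 case used later: a = 100 gives q = 1
delta, q, Mq = auxiliaries(0.99, a=100.0, t=1.0, Gamma=1.0)
print(delta, q, Mq)
```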

If you know the eccentricity \( e \), the perifocal distance \( q = a(1 − e) \) between the focal point and the perifocus, and the true anomaly \( ν \), then you can calculate the distance \( r \) from the planet to the focal point, and the coordinates \( x \) and \( y \):

\begin{eqnarray} r \| = \| \dfrac{a(1 − e^2)}{1 + e \cos ν} \| \qquad (e ≠ 1) \\ \| = \| q \dfrac{1 + e}{1 + e \cos ν} \\ x \| = \| r \cos ν \\ \| = \| q \dfrac{(1 + e) \cos ν}{1 + e \cos ν} \\ y \| = \| r \sin ν \\ \| = \| q \dfrac{(1 + e) \sin ν}{1 + e \cos ν} \end{eqnarray}

For example, if \( e = 0.5 \), \( a = 1 \), and \( ν = 30° = π/6 \text{ rad} \), then

\begin{eqnarray*} q \| = \| 1×(1 − 0.5) \| = \| 0.5 \\ r \| = \| \dfrac{1×(1 − 0.5^2)}{1 + 0.5×\cos(30°)} \| = \| 0.52337 \\ \| = \| 0.5×\dfrac{1 + 0.5}{1 + 0.5×\cos(30°)} \| = \| 0.52337 \\ x \| = \| 0.52337×\cos(30°) \| = \| 0.45325 \\ \| = \| 0.5×\dfrac{1.5×\cos(30°)}{1 + 0.5×\cos(30°)} \| = \| 0.45325 \\ y \| = \| 0.52337×\sin(30°) \| = \| 0.26169 \\ \| = \| 0.5×\dfrac{1.5×\sin(30°)}{1 + 0.5×\cos(30°)} \| = \| 0.26169 \end{eqnarray*}

And if \( e = 1.5 \), \( a = −1 \), and \( ν = 30° = π/6 \text{ rad} \), then

\begin{eqnarray*} q \| = \| −1×(1 − 1.5) \| = \| 0.5 \\ r \| = \| \dfrac{−1×(1 − 1.5^2)}{1 + 1.5×\cos(30°)} \| = \| 0.54371 \\ \| = \| 0.5×\dfrac{1 + 1.5}{1 + 1.5×\cos(30°)} \| = \| 0.54371 \\ x \| = \| 0.54371×\cos(30°) \| = \| 0.47086 \\ \| = \| 0.5×\dfrac{2.5×\cos(30°)}{1 + 1.5×\cos(30°)} \| = \| 0.47086 \\ y \| = \| 0.54371×\sin(30°) \| = \| 0.27185 \\ \| = \| 0.5×\dfrac{2.5×\sin(30°)}{1 + 1.5×\cos(30°)} \| = \| 0.27185 \end{eqnarray*}
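Both worked examples can be reproduced with a few lines of Python (a sketch of my own; `polar_to_xy` is not a standard name):

```python
import math

def polar_to_xy(e, q, nu):
    """Distance r and coordinates x, y from eccentricity,
    perifocal distance and true anomaly, using the q-based
    formula r = q (1 + e) / (1 + e cos(nu))."""
    r = q * (1.0 + e) / (1.0 + e * math.cos(nu))
    return r, r * math.cos(nu), r * math.sin(nu)

nu = math.radians(30.0)
print(polar_to_xy(0.5, 0.5, nu))   # ≈ (0.52337, 0.45325, 0.26169)
print(polar_to_xy(1.5, 0.5, nu))   # ≈ (0.54371, 0.47086, 0.27185)
```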

It isn't always necessary to calculate the true anomaly \( ν \). If what you're really after is the distance and/or the cartesian coordinates, then that can be done through \( τ_E \) and/or \( τ_ν \) without \( ν \) ― which saves a couple of trigonometric function calls.

You get the shortest calculations if you go via \( τ_ν \) and \( τ_E \):

\begin{eqnarray} r \| = \| q \dfrac{1 + τ_ν^2}{1 − \sgn(δ) τ_E^2} \\ x \| = \| q \dfrac{1 − τ_ν^2}{1 − \sgn(δ) τ_E^2} \\ y \| = \| q \dfrac{2τ_ν}{1 − \sgn(δ) τ_E^2} \end{eqnarray}

For example, when \( e = 0.5 \), \( a = 1 \), and \( ν = 30° = π/6 \text{ rad} \), then

\begin{eqnarray*} τ_ν \| = \| \tan( 0.5×30° ) = \tan(15°) \| = \| 0.26795 \\ τ_E \| = \| \sqrt{\dfrac{1 − 0.5}{1 + 0.5}} × 0.26795 \| = \| 0.15470 \end{eqnarray*}

When you solve Kepler's Equation then you don't go from \( ν \) to \( τ_ν \) to \( τ_E \) but in the exact opposite direction. We begin this example with \( ν \) so we can compare the results with those of the previous example.

With \( τ_E \) and \( τ_ν \) we now calculate the coordinates:

\begin{eqnarray*} r \| = \| 0.5×\dfrac{1 + 0.26795^2}{1 + 0.15470^2} \| = \| 0.52337 \\ x \| = \| 0.5×\dfrac{1 − 0.26795^2}{1 + 0.15470^2} \| = \| 0.45325 \\ y \| = \| 0.5×\dfrac{2×0.26795}{1 + 0.15470^2} \| = \| 0.26169 \end{eqnarray*}

And when \( e = 1.5 \), \( a = −1 \), and \( ν = 30° = π/6 \text{ rad} \), then

\begin{eqnarray*} τ_ν \| = \| \tan( 0.5×30° ) = \tan(15°) \| = \| 0.26795 \\ τ_E \| = \| \sqrt{\dfrac{1.5 − 1}{1.5 + 1}} × 0.26795 \| = \| 0.11983 \end{eqnarray*}

and then

\begin{eqnarray*} r \| = \| 0.5×\dfrac{1 + 0.26795^2}{1 − 0.11983^2} \| = \| 0.54371 \\ x \| = \| 0.5×\dfrac{1 − 0.26795^2}{1 − 0.11983^2} \| = \| 0.47086 \\ y \| = \| 0.5×\dfrac{2×0.26795}{1 − 0.11983^2} \| = \| 0.27185 \end{eqnarray*}
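In code, the τ-based route looks like this (a sketch of my own; note that `tau_nu` is derived from `tau_E` using the \( e ≠ 1 \) conversions given earlier):

```python
import math

def coords_from_tau(e, q, tau_E):
    """r, x, y directly from tau_E, without computing nu itself.
    Valid for e != 1; sgn(delta) picks the elliptic or hyperbolic form."""
    delta = e - 1.0
    s = math.copysign(1.0, delta)                       # sgn(delta)
    tau_nu = math.sqrt((e + 1.0) / abs(delta)) * tau_E  # tan(nu/2)
    denom = 1.0 - s * tau_E**2
    r = q * (1.0 + tau_nu**2) / denom
    x = q * (1.0 - tau_nu**2) / denom
    y = q * 2.0 * tau_nu / denom
    return r, x, y

print(coords_from_tau(0.5, 0.5, 0.15470))   # ≈ (0.52337, 0.45325, 0.26169)
print(coords_from_tau(1.5, 0.5, 0.11983))   # ≈ (0.54371, 0.47086, 0.27185)
```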

For reference, here are the formulas based on only \( τ_ν \), only \( τ_E \), and only \( E \).

\begin{eqnarray} r \| = \| q \dfrac{(1 + e) (τ_ν^2 + 1)}{1 + e + (1 − e) τ_ν^2} \\ \| = \| q \dfrac{1 − e − (1 + e) \sgn(δ) τ_E^2}{(1 − e)(1 − \sgn(δ) τ_E^2)} \\ \| = \| q \dfrac{1 − e \cos(E)}{1 − e} \| \qquad (e \lt 1) \\ \| = \| q \dfrac{1 − e \cosh(E)}{1 − e} \| \qquad (e \gt 1) \\ x \| = \| q \dfrac{(1 + e) (1 − τ_ν^2)}{1 + e + (1 − e) τ_ν^2} \\ \| = \| q \dfrac{1 − e + (1 + e) \sgn(δ) τ_E^2}{(1 − e)(1 − \sgn(δ) τ_E^2)} \\ \| = \| q \dfrac{\cos(E) − e}{1 − e} \| \qquad (e \lt 1) \\ \| = \| q \dfrac{\cosh(E) − e}{1 − e} \| \qquad (e \gt 1) \\ y \| = \| 2q \dfrac{(1 + e) τ_ν}{1 + e + (1 − e) τ_ν^2} \\ \| = \| 2q \sqrt{\dfrac{e+1}{|e-1|}} \dfrac{τ_E}{1 − \sgn(δ) τ_E^2} \\ \| = \| q \sqrt{\dfrac{1 + e}{1 − e}} \sin E \| \qquad (e \lt 1) \\ \| = \| q \sqrt{\dfrac{e + 1}{e − 1}} \sinh E \| \qquad (e \gt 1) \end{eqnarray}

Here is a summary of the method that is explained in detail later. The goal is to calculate the true anomaly \( ν \) and \( τ_ν ≡ \tan(ν/2) \) for time \( t \) (since passage through the perifocus) for an object in an orbit with semimajor axis \( a \) or perifocal distance \( q \), eccentricity \( e \), and gravity parameter \( Γ \); or for mean anomaly \( M \) and eccentricity \( e \); or for perifocal anomaly \( M_q \) and eccentricity \( e \).

- Calculate the parabolic excess \( δ \):
\begin{equation} δ = e − 1 \end{equation}

- Then
calculate the mean anomaly \( M \) and perifocal anomaly \( M_q
\), if they aren't known yet. If you begin with time \( t \) and
semimajor axis \( a \) (this is possible only for a non-parabolic
orbit, \( e ≠ 1 \)), then calculate
\begin{equation} M = t\sqrt{\dfrac{Γ}{a^3}} \qquad (e ≠ 1) \end{equation}

If you have the mean anomaly \( M \) but not yet the perifocal anomaly \( M_q \), then calculate

\begin{eqnarray} M \| ← \| M − 2π \left[ \dfrac{M}{2π} \right] \| \qquad ( e \lt 1 ) \\ M_q \| = \| \dfrac{M}{\sqrt{|δ|^3}} \| \qquad (e ≠ 1) \end{eqnarray}

The first formula shifts \( M \) by a multiple of \( 2π \) until it lies between \( −π \) and \( +π \). In that formula, \( [...] \) means "round to the nearest whole number". This shift is necessary only for elliptic orbits (\( e \lt 1 \)): for such orbits the positions repeat themselves whenever \( M \) increases by \( 2π \), and without the reduction the rest of this procedure performs worse. For parabolic (\( e = 1 \)) and hyperbolic (\( e \gt 1 \)) orbits the positions do not repeat themselves.

If you begin with time \( t \) and perifocal distance \( q \) (this is possible for all orbits including parabolic ones), then calculate

\begin{equation} M_q = t\sqrt{\dfrac{Γ}{q^3}} \end{equation}

If you have the perifocal anomaly \( M_q \) but not yet the mean anomaly \( M \), then calculate

\begin{eqnarray} M \| = \| M_q \sqrt{|δ|^3} \\ M \| ← \| M − 2π \left[ \dfrac{M}{2π} \right] \| \qquad (e \lt 1) \end{eqnarray}

- Calculate the intermediate value \( T \):
\begin{eqnarray} W \| = \| \sqrt{\frac{9}{8}} \frac{M_q}{e} \\ u \| = \| \sqrt[3]{W + \sqrt{W^2 + \frac{1}{e^3}}} \\ T \| = \| u − \frac{1}{eu} \end{eqnarray}

- If \( e = 1 \) (for a parabolic
orbit), then calculate \( τ_ν \) and the true anomaly \( ν \):
\begin{eqnarray} τ_ν \| = \| T \\ ν \| = \| 2\arctan(τ_ν) = 2\arctan(T) \end{eqnarray}

Now you are done.

- Otherwise (if \( e ≠ 1 \)) calculate the
initial estimate for small anomalies:
\begin{equation} E_s = T\sqrt{2|δ|} \end{equation}

If \( e \lt 1 \) (for elliptic orbits), then we use this initial estimate and call it \( E_0 \).

- If \( e \gt 1 \) (for
hyperbolic orbits), then also calculate the initial estimate for
large anomalies:
\begin{equation} E_h = \arsinh\left( \frac{M}{e} \right) \end{equation}

If

\[ |E_h| \lt 0.53 |e \sinh(E_s) − E_s − M| \]

then we continue the calculations with initial estimate \( E_h \), and otherwise initial estimate \( E_s \). We call the selected initial estimate \( E_0 \).

- Assign \( i = 0 \).
- If \(
e \lt 1 \) (for elliptic orbits), then calculate
\begin{eqnarray} s_i \| = \| e \sin(E_i) \\ c_i \| = \| 1 − e \cos(E_i) \\ d_i \| = \| M − E_i + s_i \end{eqnarray}

If \( e \gt 1 \) (for hyperbolic orbits), then calculate

\begin{eqnarray} s_i \| = \| e \sinh(E_i) \\ c_i \| = \| e \cosh(E_i) − 1 \\ d_i \| = \| M + E_i − s_i \end{eqnarray}

Then calculate

\begin{eqnarray} \del E_i \| = \| \frac{d_i}{c_i} \\ B_i \| = \| \left| \frac{2εc_i}{s_i} \right| \\ E_{i+1} \| = \| E_i + \del E_i \end{eqnarray}

If \( \del E_i^2 \lt B_i \), then you're done. Otherwise set \( i ← i + 1 \) and repeat this step.

- The last found \( E_i \) is the \( E \) that we seek.
If \( e \lt 1 \) (for elliptic orbits), then calculate

\begin{equation} τ_ν = \sqrt{\frac{e + 1}{|δ|}} \tan\left( \frac{1}{2}E \right) \end{equation}

and if \( e \gt 1 \) (for hyperbolic orbits), then calculate

\begin{equation} τ_ν = \sqrt{\frac{e + 1}{|δ|}} \tanh\left( \frac{1}{2}E \right) \end{equation}

Then calculate

\begin{equation} ν = 2 \arctan(τ_ν) \end{equation}

Now you're done.

The values of \( B_i \) are in practice nearly the same for all \( i \), and the precise value is not terribly important, so if calculation speed is very important then you could consider calculating only \( B_0 \) and using it instead of \( B_i \).
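The whole summarized procedure can be transcribed into Python (a sketch of my own, not code from this text; it assumes double-precision floats, \( e ≥ 0 \), a nonzero anomaly, and that you pass \( M_q \) for a parabolic orbit):

```python
import math

EPS = 2.0 ** -52  # machine epsilon for double precision

def kepler_nu(e, M=None, Mq=None):
    """True anomaly nu from eccentricity plus mean anomaly M or
    perifocal anomaly Mq, following the summarized procedure above."""
    delta = e - 1.0
    # relate M and Mq; reduce M to between -pi and +pi for elliptic orbits
    if e != 1.0:
        if Mq is None:
            if e < 1.0:
                M -= 2.0 * math.pi * round(M / (2.0 * math.pi))
            Mq = M / math.sqrt(abs(delta) ** 3)
        elif M is None:
            M = Mq * math.sqrt(abs(delta) ** 3)
            if e < 1.0:
                M -= 2.0 * math.pi * round(M / (2.0 * math.pi))
    # intermediate value T (this is the exact solution when e = 1)
    W = math.sqrt(9.0 / 8.0) * Mq / e
    u = (W + math.sqrt(W * W + 1.0 / e ** 3)) ** (1.0 / 3.0)
    T = u - 1.0 / (e * u)
    if e == 1.0:
        return 2.0 * math.atan(T)
    # initial estimate
    E = T * math.sqrt(2.0 * abs(delta))        # small-anomaly estimate E_s
    if e > 1.0:
        Eh = math.asinh(M / e)                 # large-anomaly estimate E_h
        if abs(Eh) < 0.53 * abs(e * math.sinh(E) - E - M):
            E = Eh
    # Newton iteration with a rounding-level stopping bound
    while True:
        if e < 1.0:
            s = e * math.sin(E)
            c = 1.0 - e * math.cos(E)
            d = M - E + s
        else:
            s = e * math.sinh(E)
            c = e * math.cosh(E) - 1.0
            d = M + E - s
        dE = d / c
        E += dE
        if dE * dE < abs(2.0 * EPS * c / s):
            break
    # convert the eccentric anomaly E to the true anomaly nu
    half = math.tan(0.5 * E) if e < 1.0 else math.tanh(0.5 * E)
    tau = math.sqrt((e + 1.0) / abs(delta)) * half
    return 2.0 * math.atan(tau)

print(kepler_nu(0.99, M=0.001))   # elliptic
print(kepler_nu(2.0, M=100.0))    # hyperbolic
print(kepler_nu(1.0, Mq=1.0))     # parabolic
```

The three calls reproduce the worked examples that follow.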

Here are some worked examples. We assume \( t = 1 \), \( Γ = 1 \), \( q = 1 \), and \( ε = 2^{−52} ≈ 2.2204×10^{−16} \), except where noted. I write the numbers with up to 5 significant figures, but take into account all significant figures in the calculations.

With \( e = 1 \) we find

\begin{eqnarray*} M_q \| = \| 1 × \sqrt{\frac{1}{1^3}} \| = 1 \\ W \| = \| \sqrt{\frac{9}{8}} × \frac{1}{1} \| = 1.0607 \\ u \| = \| \sqrt[3]{1.0607 + \sqrt{1.0607^2 + \frac{1}{1^3}}} \| = 1.3605 \\ T = τ_ν \| = \| 1.3605 − \frac{1}{1 × 1.3605} \| = 0.62552 \\ ν \| = \| 2\arctan(0.6255) \| = 1.1179 \end{eqnarray*}

With \( e = 0.99 \) we find

\begin{eqnarray*} δ \| = \| 0.99 − 1 \| = −0.01 \\ a \| = \| \frac{1}{0.01} \| = 100 \\ M \| = \| 1 × \sqrt{\frac{1}{100^3}} \| = 0.001 \\ M \| ← \| 0.001 − 2π\left[ \frac{0.001}{2π} \right] \| = 0.001 \\ M_q \| = \| 1 × \sqrt{\frac{1}{1^3}} \| = 1 \\ W \| = \| \sqrt{\frac{9}{8}} × \frac{1}{0.99} \| = 1.0714 \\ u \| = \| \sqrt[3]{1.0714 + \sqrt{1.0714^2 + \frac{1}{0.99^3}}} \| = 1.3657 \\ T \| = \| 1.3657 − \frac{1}{0.99 × 1.3657} \| = 0.62611 \\ E_s \| = \| 0.62611 × \sqrt{2×0.01} \| = 0.088545 \\ E_0 \| = \| 0.088545 \\ s_0 \| = \| 0.99\sin(0.088545) \| = 0.087545 \\ c_0 \| = \| 1 − 0.99 \cos(0.088545) \| = 0.013878 \\ d_0 \| = \| 0.001 − 0.088545 + 0.087545 \| = 4.4895 × 10^{−8} \\ \del E_0 \| = \| \frac{4.4895 × 10^{−8}}{0.013878} \| = 3.2349 × 10^{−6} \\ B_0 \| = \| \left| \frac{2 × 2.2204 × 10^{−16} × 0.013878}{0.087545} \right| \| = 7.0401 × 10^{−17} \\ E_1 \| = \| 0.088545 + 3.2349 × 10^{−6} \| = 0.088549 \end{eqnarray*}

We find \( \del E_0^2 \gt |B_0| \) so we haven't found \( E \) yet.

\begin{eqnarray*} s_1 \| = \| 0.99\sin(0.088549) \| = 0.087549 \\ c_1 \| = \| 1 − 0.99 \cos(0.088549) \| = 0.013879 \\ d_1 \| = \| 0.001 − 0.088549 + 0.087549 \| = −4.5806 × 10^{−13} \\ \del E_1 \| = \| \frac{−4.5806 × 10^{−13}}{0.013879} \| = −3.3005 × 10^{−11} \\ B_1 \| = \| \left| \frac{2 × 2.2204 × 10^{−16} × 0.013879}{0.087549} \right| \| = 7.0399 × 10^{−17} \\ E_2 \| = \| 0.088549 + (−3.3005 × 10^{−11}) \| = 0.088549 \end{eqnarray*}

Now \( \del E_1^2 \lt |B_1| \) so we've found \( E \).

\[ E − e \sin(E) = 0.088549 − 0.99 \sin(0.088549) = 0.0010000 = M \]

Then

\begin{eqnarray*} E \| = \| 0.088549 \\ τ_ν \| = \| \sqrt{\frac{1.99}{0.01}} \tan\left( \frac{0.088549}{2} \right) \| = 0.62497 \\ ν \| = \| 2\arctan(0.62497) \| = 1.1172 \end{eqnarray*}

Now we look at \( e = 2 \) and \( t = 100 \). Then

\begin{eqnarray*} δ \| = \| 2 − 1 \| = 1 \\ a \| = \| \frac{1}{1} \| = 1 \\ M \| = \| 100 × \sqrt{\frac{1}{1^3}} \| = 100 \\ M_q \| = \| 100 × \sqrt{\frac{1}{1^3}} \| = 100 \\ W \| = \| \sqrt{\frac{9}{8}} × \frac{100}{2} \| = 53.033 \\ u \| = \| \sqrt[3]{53.033 + \sqrt{53.033^2 + \frac{1}{2^3}}} \| = 4.7336 \\ T \| = \| 4.7336 − \frac{1}{2×4.7336} \| = 4.6280 \\ E_s \| = \| 4.6280 × \sqrt{2×1} \| = 6.5450 \\ E_h \| = \| \arsinh\left( \frac{100}{2} \right) \| = 4.6053 \end{eqnarray*}

Now \( |E_h| \lt 0.53 |e\sinh(E_s) − E_s − M| \) so \( E_0 = E_h \).

\begin{eqnarray*} E_0 \| = \| 4.6053 \\ s_0 \| = \| 2\sinh(4.6053) \| = 100 \\ c_0 \| = \| 2 \cosh(4.6053) − 1 \| = 99.020 \\ d_0 \| = \| 100 + 4.6053 − 100 \| = 4.6053 \\ \del E_0 \| = \| \frac{4.6053}{99.020} \| = 0.046508 \\ B_0 \| = \| \left| \frac{2 × 2.2204 × 10^{−16} × 99.020}{100} \right| \| = 4.3974 × 10^{−16} \\ E_1 \| = \| 4.6053 + 0.046508 \| = 4.6518 \\ s_1 \| = \| 2\sinh(4.6518) \| = 104.76 \\ c_1 \| = \| 2 \cosh(4.6518) − 1 \| = 103.78 \\ d_1 \| = \| 100 + 4.6518 − 104.76 \| = −0.10985 \\ \del E_1 \| = \| \frac{−0.10985}{103.78} \| = −0.0010585 \\ B_1 \| = \| \left| \frac{2 × 2.2204 × 10^{−16} × 103.78}{104.76} \right| \| = 4.3993 × 10^{−16} \\ E_2 \| = \| 4.6518 − 0.0010585 \| = 4.6507 \\ s_2 \| = \| 2\sinh(4.6507) \| = 104.65 \\ c_2 \| = \| 2 \cosh(4.6507) − 1 \| = 103.67 \\ d_2 \| = \| 100 + 4.6507 − 104.65 \| = −5.8664 × 10^{−5} \\ \del E_2 \| = \| \frac{−5.8664 × 10^{−5}}{103.67} \| = −5.6588 × 10^{−7} \\ B_2 \| = \| \left| \frac{2 × 2.2204 × 10^{−16} × 103.67}{104.65} \right| \| = 4.3993 × 10^{−16} \\ E_3 \| = \| 4.6507 + (−5.6588 × 10^{−7}) \| = 4.6507 \\ s_3 \| = \| 2\sinh(4.6507) \| = 104.65 \\ c_3 \| = \| 2 \cosh(4.6507) − 1 \| = 103.67 \\ d_3 \| = \| 100 + 4.6507 − 104.65 \| = −1.6712 × 10^{−11} \\ \del E_3 \| = \| \frac{−1.6712 × 10^{−11}}{103.67} \| = −1.6120 × 10^{−13} \\ B_3 \| = \| \left| \frac{2 × 2.2204 × 10^{−16} × 103.67}{104.65} \right| \| = 4.3993 × 10^{−16} \\ E_4 \| = \| 4.6507 + (−1.6120 × 10^{−13}) \| = 4.6507 \end{eqnarray*}

Now \( \del E_3^2 \lt |B_3| \) so we found \( E \). Let's check:

\[ e \sinh(E) − E = 2 \sinh(4.6507) − 4.6507 = 100.00 = M \]

Then

\begin{eqnarray*} E \| = \| 4.6507 \\ τ_ν \| = \| \sqrt{\frac{3}{1}} \tanh\left( \frac{4.6507}{2} \right) \| = 1.6993 \\ ν \| = \| 2\arctan(1.6993) \| = 2.0778 \end{eqnarray*}

Below are charts associated with the Kepler Equation. Only results for positive mean anomaly \( M \) or perifocal anomaly \( M_q \) are shown. The results for negative anomalies have the same magnitude but the opposite sign (except for the distance \( r \), which is unchanged).

Fig. 1: Kepler Equation: ν versus M and e

Figure 1 shows the true anomaly \( ν \) (color coded) versus mean anomaly \( M \) on the vertical axis and eccentricity \( e \) on the horizontal axis. The true anomaly is what we're really after. Solving Kepler's Equation is one step on the way to get there.

The white area at the top left represents \( M \gt π \) for elliptic orbits (\( e \lt 1 \)), for which no solution is needed because for elliptic orbits \( M \) can always be reduced to the range from −π to +π to represent the same solution. For hyperbolic orbits (\( e \gt 1 \)) the mean anomaly can get arbitrarily large. The true anomaly is at most equal to π ≈ 3.14.

The white vertical line at \( e = 1 \) represents a parabolic orbit. For a parabolic orbit no mean anomaly can be defined.

It is clear that for near-parabolic orbits (\( e ≈ 1 \)) the true anomaly for a particular mean anomaly depends strongly on the eccentricity.

Fig. 2: Kepler Equation: E versus M and e

As in the previous figure, the white vertical line at \( e = 1 \) represents a parabolic orbit. For a parabolic orbit no mean anomaly can be defined.

For near-parabolic orbits, the eccentric anomaly for a particular mean anomaly depends strongly on the eccentricity, too.

Fig. 3: Kepler Equation: ν versus M_q and e

Figure 3 is like Figure 1 but has the perifocal anomaly \( M_q \) instead of the mean anomaly \( M \) along the vertical axis.

For near-parabolic orbits, the true anomaly for a fixed perifocal anomaly does not depend strongly on the eccentricity, which shows that the perifocal anomaly is a convenient measure of time also for near-parabolic orbits.

Fig. 4: Kepler Equation: E versus M_q and e

Figure 4 is like Figure 2 but has the perifocal anomaly \( M_q \) instead of the mean anomaly \( M \) along the vertical axis.

The white vertical line at \( e = 1 \) represents a parabolic orbit. For a parabolic orbit no eccentric anomaly can be defined.

For near-parabolic orbits, the eccentric anomaly for a fixed perifocal anomaly depends strongly on the eccentricity.

Fig. 5: Kepler Equation: E_r versus M_q and e

Figure 5 is like Figure 4 but shows reduced eccentric anomaly \( E_r \) instead of (regular) eccentric anomaly \( E \).

For near-parabolic orbits, the reduced eccentric anomaly for a fixed perifocal anomaly does not depend strongly on the eccentricity, which shows that the reduced eccentric anomaly is a convenient measure of location for near-parabolic orbits.

Fig. 6: Kepler Equation: r/q versus M_q and e

Figure 6 shows the distance \( r \) between the object and the central object, divided by the perifocus distance \( q \), as a function of the perifocal anomaly \( M_q \) and eccentricity \( e \).

Fig. 7: Kepler Equation: r/q − 1 versus M_q and e

Figure 7 shows by how much the distance \( r \) is greater than the perifocal distance \( q \), divided by \( q \), as a function of the perifocal anomaly \( M_q \) and eccentricity \( e \).

The next few tables show some specific solutions to Kepler's Equation. All angles are given in radians. One radian is 180/π ≈ 57.2957795131 degrees.

Table 1 is for an anomaly of 0.0001 radians.

Table 1: Equation of Kepler: Solutions for 0.0001 Radian

\({M}\) | \({M_q}\) | \({e}\) | \({E}\) | \({E_r}\) | \({τ_ν}\) | \({ν}\) |
---|---|---|---|---|---|---|
0.000100000000 | 0.000100000000 | 0.00000000 | 0.000100000000 | 0.000100000000 | 5.00000000 × 10^{−05} | 0.000100000000 |
0.000100000000 | 0.000101518971 | 0.0100000000 | 0.000101010101 | 0.000101518971 | 5.10126517 × 10^{−05} | 0.000102025303 |
0.000100000000 | 0.00316227766 | 0.900000000 | 0.000999998500 | 0.00316227292 | 0.00217944638 | 0.00435888587 |
0.000100000000 | 0.100000000 | 0.990000000 | 0.00998358122 | 0.0998358122 | 0.0704184571 | 0.140604812 |
0.000100000000 | 3.16227766 | 0.999000000 | 0.0614230944 | 1.94236879 | 1.37355061 | 1.88299657 |
0.000100000000 | 100.000000 | 0.999900000 | 0.0819842185 | 8.19842185 | 5.80026395 | 2.80013747 |
0.000100000000 | 100.000000 | 1.00010000 | 0.0819610818 | 8.19610818 | 5.79242631 | 2.79968440 |
0.000100000000 | 3.16227766 | 1.00100000 | 0.0613913007 | 1.94136339 | 1.37266327 | 1.88238152 |
0.000100000000 | 0.100000000 | 1.01000000 | 0.00998325102 | 0.0998325102 | 0.0707679178 | 0.141300268 |
0.000100000000 | 0.00316227766 | 1.10000000 | 0.000999998167 | 0.00316227186 | 0.00229128346 | 0.00458255889 |
0.000100000000 | 1.01518971 × 10^{−07} | 100.000000 | 1.01010101 × 10^{−06} | 1.01518971 × 10^{−07} | 5.10126517 × 10^{−07} | 1.02025303 × 10^{−06} |
0.000100000000 | 1.00000150 × 10^{−13} | 1000000.00 | 1.00000100 × 10^{−10} | 1.00000150 × 10^{−13} | 5.00001000 × 10^{−11} | 1.00000200 × 10^{−10} |
9.85037563 × 10^{−05} | 0.000100000000 | 0.0100000000 | 9.94987437 × 10^{−05} | 0.000100000000 | 5.02493781 × 10^{−05} | 0.000100498756 |
3.16227766 × 10^{−06} | 0.000100000000 | 0.900000000 | 3.16227766 × 10^{−05} | 9.99999999 × 10^{−05} | 6.89202437 × 10^{−05} | 0.000137840487 |
1.00000000 × 10^{−07} | 0.000100000000 | 0.990000000 | 9.99999998 × 10^{−06} | 9.99999998 × 10^{−05} | 7.05336798 × 10^{−05} | 0.000141067359 |
3.16227766 × 10^{−09} | 0.000100000000 | 0.999000000 | 3.16227765 × 10^{−06} | 9.99999998 × 10^{−05} | 7.06929981 × 10^{−05} | 0.000141385996 |
1.00000000 × 10^{−10} | 0.000100000000 | 0.999900000 | 9.99999998 × 10^{−07} | 9.99999998 × 10^{−05} | 7.07089102 × 10^{−05} | 0.000141417820 |
| 0.000100000000 | 1.00000000 | | | 7.07106780 × 10^{−05} | 0.000141421356 |
1.00000000 × 10^{−10} | 0.000100000000 | 1.00010000 | 9.99999998 × 10^{−07} | 9.99999998 × 10^{−05} | 7.07124457 × 10^{−05} | 0.000141424891 |
3.16227766 × 10^{−09} | 0.000100000000 | 1.00100000 | 3.16227765 × 10^{−06} | 9.99999998 × 10^{−05} | 7.07283535 × 10^{−05} | 0.000141456707 |
1.00000000 × 10^{−07} | 0.000100000000 | 1.01000000 | 9.99999998 × 10^{−06} | 9.99999998 × 10^{−05} | 7.08872343 × 10^{−05} | 0.000141774468 |
3.16227766 × 10^{−06} | 0.000100000000 | 1.10000000 | 3.16227765 × 10^{−05} | 9.99999998 × 10^{−05} | 7.24568836 × 10^{−05} | 0.000144913767 |
0.0985037563 | 0.000100000000 | 100.000000 | 0.000994987271 | 9.99999833 × 10^{−05} | 0.000502493656 | 0.00100498723 |
99999.8500 | 0.000100000000 | 1000000.00 | 0.0998340290 | 9.98340789 × 10^{−05} | 0.0498756461 | 0.0996687023 |

Line 4 in Table 1 says that for \( M = 0.0001 \) radians and \( e = 0.99 \), the eccentric anomaly is equal to \( E = 0.00998358122 \) radians, and the true anomaly is equal to \( ν = 0.140604812 \) radians ≈ 8.05606°. Is that correct?

Equation \eqref{eq:ekepler} says that

\[ M = E − e \sin(E) \]

With the current values of \( E \) and \( e \) we find

\[ E − e \sin(E) = 0.00998358122 − 0.99 × \sin(0.00998358122) = 0.000100000000 \]

which is equal to \( M \). Equation \eqref{eq:eEtoν} then yields

\begin{align*} \tan\left( \frac{1}{2}ν \right) \| = \sqrt{\frac{1 + e}{1 − e}} \tan\left( \frac{1}{2}E \right) \\ \| = \sqrt{\frac{1.99}{0.01}} × \tan(0.5×0.00998358122) \\ \| = 14.106736 × \tan(0.00499179061) \\ \| = 14.106736 × 0.00499183207 \\ \| = 0.0704184571 \end{align*}

which fits what is listed in the τ_ν column.

Then

\[ ν = 2\arctan(0.0704184571) = 0.140604812 \]

which also fits what it says in the table.
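The same check can be done mechanically (a snippet of my own):

```python
import math

# check line 4 of Table 1: e = 0.99, E = 0.00998358122
e, E = 0.99, 0.00998358122
M = E - e * math.sin(E)                              # should be 0.0001
tau = math.sqrt((1 + e) / (1 - e)) * math.tan(0.5 * E)
nu = 2.0 * math.atan(tau)
print(M)     # ≈ 0.000100000000
print(tau)   # ≈ 0.0704184571
print(nu)    # ≈ 0.140604812
```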

Table 2 is for an anomaly of 1 radian.

Table 2: Equation of Kepler: Solutions for 1 Radian

\({M}\) | \({M_q}\) | \({e}\) | \({E}\) | \({E_r}\) | \({τ_ν}\) | \({ν}\) |
---|---|---|---|---|---|---|
1.00000000 | 1.00000000 | 0.00000000 | 1.00000000 | 1.00000000 | 0.546302490 | 1.00000000 |
1.00000000 | 1.01518971 | 0.0100000000 | 1.00846012 | 1.01354055 | 0.557353696 | 1.01694301 |
1.00000000 | 31.6227766 | 0.900000000 | 1.86208669 | 5.88843513 | 5.85747591 | 2.80340907 |
1.00000000 | 1000.00000 | 0.990000000 | 1.92763555 | 19.2763555 | 20.3140949 | 3.04321826 |
1.00000000 | 31622.7766 | 0.999000000 | 1.93387356 | 61.1544515 | 64.8144720 | 3.11073780 |
1.00000000 | 1000000.00 | 0.999900000 | 1.93449428 | 193.449428 | 205.143679 | 3.13184347 |
1.00000000 | 1000000.00 | 1.00010000 | 1.72897376 | 172.897376 | 98.7940852 | 3.12134922 |
1.00000000 | 31622.7766 | 1.00100000 | 1.72768618 | 54.6342340 | 31.2337093 | 3.07758114 |
1.00000000 | 1000.00000 | 1.01000000 | 1.71487376 | 17.1487376 | 9.85240023 | 2.93928924 |
1.00000000 | 31.6227766 | 1.10000000 | 1.59281168 | 5.03691279 | 3.03376885 | 2.50477756 |
1.00000000 | 0.00101518971 | 100.000000 | 0.0101008366 | 0.00101517228 | 0.00510113418 | 0.0102021799 |
1.00000000 | 1.00000150 × 10^{−09} | 1000000.00 | 1.00000100 × 10^{−06} | 1.00000150 × 10^{−09} | 5.00001000 × 10^{−07} | 1.00000200 × 10^{−06} |
0.985037563 | 1.00000000 | 0.0100000000 | 0.993416520 | 0.998421169 | 0.547483734 | 1.00181857 |
0.0316227766 | 1.00000000 | 0.900000000 | 0.282532839 | 0.893447285 | 0.619895127 | 1.10983994 |
0.00100000000 | 1.00000000 | 0.990000000 | 0.0885485963 | 0.885485963 | 0.624974249 | 1.11716160 |
3.16227766 × 10^{−05} | 1.00000000 | 0.999000000 | 0.0279769359 | 0.884708395 | 0.625467687 | 1.11787112 |
1.00000000 × 10^{−06} | 1.00000000 | 0.999900000 | 0.00884630818 | 0.884630818 | 0.625516891 | 1.11794185 |
| 1.00000000 | 1.00000000 | | | 0.625522357 | 1.11794971 |
1.00000000 × 10^{−06} | 1.00000000 | 1.00010000 | 0.00884613583 | 0.884613583 | 0.625527822 | 1.11795757 |
3.16227766 × 10^{−05} | 1.00000000 | 1.00100000 | 0.0279714858 | 0.884536046 | 0.625576995 | 1.11802825 |
0.00100000000 | 1.00000000 | 1.01000000 | 0.0883762467 | 0.883762467 | 0.626067340 | 1.11873295 |
0.0316227766 | 1.00000000 | 1.10000000 | 0.277078928 | 0.876200504 | 0.630836813 | 1.12557114 |
985.037563 | 1.00000000 | 100.000000 | 2.98623497 | 0.300127907 | 0.912981379 | 1.47988203 |
999998500. | 1.00000000 | 1000000.00 | 7.60090122 | 0.00760090502 | 0.999001498 | 1.56979733 |

Line 18 from Table 2 says that for \( M_q = 1 \) and \( e = 1 \) (i.e., for a parabolic orbit) we have that \( \tan\left( \frac{1}{2}ν \right) = 0.625522357 \). Is that correct?

According to Equations \eqref{eq:pkepler}ff and \eqref{eq:M_q} we have

\[ W = \sqrt{\frac98}M_q = 1.06066017×1 = 1.06066017 \]

and then

\begin{align*} u \| = \left( W + \sqrt{W^2 + 1} \right)^{1/3} \\ \| = \left( 1.06066017 + \sqrt{1.06066017^2 + 1} \right)^{1/3} \\ \| = (1.06066017 + 1.45773797)^{1/3} \\ \| = 2.51839815^{1/3} \\ \| = 1.36053002 \end{align*}

and then

\begin{align*} τ_ν = \tan\left( \frac{1}{2}ν \right) \| = u − \frac{1}{u} \\ \| = 1.36053002 − \frac{1}{1.36053002} \\ \| = 0.625522357 \end{align*}

which fits what it says in the τ_ν column.

Table 3 is for an anomaly of 10000 radians. That is relevant only for parabolic and hyperbolic orbits.

Table 3: Equation of Kepler: Solutions for 10000 Radians

\({M}\) | \({M_q}\) | \({e}\) | \({E}\) | \({E_r}\) | \({τ_ν}\) | \({ν}\) |
---|---|---|---|---|---|---|
10000.0000 | 1.00000000 × 10^{+10} | 1.00010000 | 9.90437751 | 990.437751 | 141.410763 | 3.12744969 |
10000.0000 | 316227766. | 1.00100000 | 9.90347791 | 313.175470 | 44.7280654 | 3.09688545 |
10000.0000 | 10000000.0 | 1.01000000 | 9.89452619 | 98.9452619 | 14.1760164 | 3.00074262 |
10000.0000 | 316227.766 | 1.10000000 | 9.80915781 | 31.0192806 | 4.58207213 | 2.71184720 |
10000.0000 | 10.1518971 | 100.000000 | 5.29887209 | 0.532556682 | 1.00000580 | 1.57080212 |
10000.0000 | 1.00000150 × 10^{−05} | 1000000.00 | 0.00999984334 | 9.99984834 × 10^{−06} | 0.00499988501 | 0.00999968669 |
| 10000.0000 | 1.00000000 | | | 27.6461704 | 3.06928143 |
0.0100000000 | 10000.0000 | 1.00010000 | 0.389974639 | 38.9974639 | 27.2318138 | 3.06818213 |
0.316227766 | 10000.0000 | 1.00100000 | 1.20643179 | 38.1507231 | 24.1257778 | 3.05874120 |
10.0000000 | 10000.0000 | 1.01000000 | 3.27015981 | 32.7015981 | 13.1393971 | 2.98967154 |
316.227766 | 10000.0000 | 1.10000000 | 6.37425935 | 20.1571779 | 4.56697679 | 2.71047028 |
9850375.63 | 10000.0000 | 100.000000 | 12.1909984 | 1.22524144 | 1.01004025 | 1.58078634 |
9.99998500 × 10^{+12} | 10000.0000 | 1000000.00 | 16.8112413 | 0.0168112497 | 1.00000090 | 1.57079723 |

Line 3 of Table 3 says that for \( M = 10000 \) radians and \( e = 1.01 \), the eccentric anomaly is equal to 9.89452619, and the true anomaly is equal to \( ν = 2×\arctan(14.1760164) = 3.00074262 \) radians = 171.929887°. Is that correct?

According to Equation \eqref{eq:hkepler} we must then have

\[ M = e \sinh(E) − E \]

With the current values of \( E \) and \( e \) we find

\[ e \sinh(E) − E = 1.01 × \sinh(9.89452619) − 9.89452619 = 10000 \]

which is exactly the value of \( M \). Equation \eqref{eq:hEtoν} then leads to

\begin{align*} τ_ν = \tan\left( \frac{1}{2}ν \right) \| = \sqrt{\frac{e + 1}{e − 1}} \tanh\left( \frac{1}{2}E \right) \\ \| = \sqrt{\frac{2.01}{0.01}} × \tanh(0.5 × 9.89452619) \\ \| = 14.1774469 × \tanh(4.9472631) \\ \| = 14.1774469 × 0.999899105 \\ \| = 14.1760164 \end{align*}

which fits the value from the τ_ν column.

The above tables show that near \( e = 1 \) the values of \( E \) and \( τ_ν \) vary much faster with \( e \) for fixed \( M \) than for fixed \( M_q \). A parabolic orbit (with \( e = 1 \)) cannot be described at all with \( M \) or \( E \) (which are then exactly 0), but it can be described with \( M_q \).

One cannot solve Kepler's Equation directly for the eccentric anomaly \( E \) given a particular time \( t \). The best we can do is to calculate a hopefully good estimate of \( E \), see how good an estimate it is, deduce a better estimate, and repeat until the estimate is good enough and cannot be improved anymore.

We need a good initial estimate for every \( M \) and \( e \), a good method for improving the estimate, and a good way to decide whether we can still improve the estimate.

Suppose we have an estimate \( E_n \) for the eccentric anomaly \( E \). How can we use it and Kepler's equation to find a better estimate \( E_{n+1} \)?

Kepler's equations for elliptic and hyperbolic orbits can be written in the form

\begin{equation} E = g(E) \end{equation}

for a suitable function \( g \):

\begin{eqnarray} g(E) \| = \| M + e \sin(E) \| \qquad (e \lt 1) \\ g(E) \| = \| \arsinh\left( \frac{M + E}{e} \right) \| \qquad (e \gt 1) \end{eqnarray}

This suggests an approximation scheme

\begin{equation} E_{n+1} = g(E_n) \end{equation}

How much better is \( E_{n+1} \) than \( E_n \)?

\begin{equation} E + \del E_{n+1} = E_{n+1} = g(E_n) = g(E + \del E_n) ≈ g(E) + g'(E)\del E_n = E + g'(E)\del E_n \end{equation}

where \( g'(E) \) is the derivative of \( g \) with respect to \( E \). So

\begin{equation} \del E_{n+1} ≈ g'(E)\del E_n \label{eq:improve1} \end{equation}

so \( E_{n+1} \) is only better than \( E_n \) if \( |g'(E)| \lt 1 \). The deviation from the true solution \( E \) gets multiplied by approximately the same factor \( g'(E) \) at every iteration, so the number of correct digits after the decimal mark increases by roughly the same amount per iteration. We can do much better than that.

Kepler's equations for elliptic and hyperbolic orbits have the form

\begin{equation} M = f(E) \label{eq:f} \end{equation}

for a suitable function \( f \), so if we make a small change \( \del E \) in \( E \), then the corresponding change \( \del M \) in \( M \) is approximately

\begin{equation} \del M ≈ f'(E) \del E \label{eq:∆M} \end{equation}

where \( f'(E) \) is the derivative of \( f \) with respect to \( E \). We seek the value of \( E \) that satisfies \( M = f(E) \), but our current guess \( E_n \) instead yields

\begin{equation} f(E_n) = M + \del M_n \end{equation}

From this we can estimate how far \( E_n \) is from \( E \), and correct our estimate by that amount in the opposite direction. The correction is

\begin{equation} \del E_n ≈ −\frac{\del M_n}{f'(E_n)} ≈ \frac{M − f(E_n)}{f'(E_n)} \label{eq:∆E} \end{equation}

so the next estimate is

\begin{equation} E_{n+1} = E_n + \del E_n = E_n + \frac{M − f(E_n)}{f'(E_n)} \end{equation}

This method breaks down when \( f'(E_n) = 0 \), but for Kepler's Equation that cannot happen, as we'll see later.

How much better is \( E_{n+1} \) than \( E_n \)? We define

\begin{equation} β(E) ≡ \frac{f''(E)}{2f'(E)} \end{equation}

We have

\begin{eqnarray} f(E_n) \| = \| f(E + \del E_n) ≈ f(E) + \del E_n f'(E) + \frac{1}{2} \del E_n^2 f''(E) \notag \\ \| = \| M + \del E_n f'(E) (1 + \del E_n β(E)) \\ f'(E_n) \| = \| f'(E + \del E_n) ≈ f'(E) + \del E_n f''(E) \notag \\ \| = \| f'(E) (1 + 2 \del E_n β(E)) \end{eqnarray}

so

\begin{eqnarray} E + \del E_{n+1} = E_{n+1} \| = \| E_n + \frac{M − f(E + \del E_n)}{f'(E + \del E_n)} \notag \\ \| ≈ \| E_n + \frac{−\del E_n f'(E) (1 + \del E_n β(E))}{f'(E) (1 + 2 \del E_n β(E))} \| \quad (|\del E_nβ(E)| \ll 1 ∧ |\del E_nf'''(E)| \ll |f''(E)|) \notag \\ \| = \| E_n − \del E_n \frac{1 + \del E_n β(E)}{1 + 2 \del E_n β(E)} \notag \\ \| ≈ \| E_n − \del E_n (1 + \del E_n β(E))(1 − 2 \del E_n β(E)) \| \quad (|\del E_nβ(E)| \ll 1) \notag \\ \| ≈ \| E_n − \del E_n (1 + \del E_n β(E) − 2 \del E_n β(E)) \| \quad (|\del E_nβ(E)| \ll 1) \notag \\ \| = \| E_n − \del E_n (1 − \del E_n β(E)) \notag \\ \| = \| E_n − \del E_n + \del E_n^2 β(E) \notag \\ \| = \| E + \del E_n^2 β(E) \end{eqnarray}

where we've used Equation \eqref{eq:f}. So, we find

\begin{eqnarray} \del E_{n+1} \| ≈ \| \del E_n^2 β(E) \\ \del E_{n+1} β(E) \| ≈ \| (\del E_n β(E))^2 \label{eq:improve} \end{eqnarray}

The next estimate \( E_{n+1} \) is better than the previous one \( E_n \) if \( |\del E_{n+1}| \lt |\del E_n| \), which corresponds to

\begin{equation} |\del E_nβ(E)| \lt 1 \end{equation}

The boundary indicated by \( β(E) \) is vague, because we derived it after neglecting certain terms, and because at the beginning we don't know the true value of \( E \) anyway, so we have to calculate \( β(E) \) from the estimate \( E_n \) instead, which is not accurate either. So if \( |\del E_n| \) isn't comfortably less than \( 1/|β(E)| \), it may still not be small enough to locate the solution quickly. But if \( |\del E_n| \ll 1/|β(E)| \), then the number of correct digits after the decimal mark roughly doubles with each next estimate. That is much faster improvement than for the first-order method described earlier. We'll only consider the second-order method from now on.

For elliptic orbits,

\begin{eqnarray} f(E) \| = \| E − e \sin(E) \\ f'(E) \| = \| 1 − e \cos(E) \\ f''(E) \| = \| e \sin(E) = E − f(E) \\ \del E_n \| = \| \frac{M − f(E_n)}{f'(E_n)} = \frac{M − E_n + e \sin(E_n)}{1 − e \cos(E_n)} \\ E_{n+1} \| = \| E_n + \del E_n = E_n + \frac{M − E_n + e \sin(E_n)}{1 − e \cos(E_n)} \label{eq:nextEe} \\ β(E_n) \| = \| \frac{f''(E_n)}{2f'(E_n)} = \frac{e \sin(E_n)}{2(1 − e\cos(E_n))} \end{eqnarray}

The calculation of \( \del E_n \) and \( β(E_n) \) would fail if \( 1 − e \cos(E_n) = 0 \), because then you'd divide by 0, but that equation corresponds to \( \cos(E_n) = 1/e \gt 1 \) which has no solution because \( |\cos(E_n)| \le 1 \).

For fixed \( e \), the greatest value of \( |β(E)| \) is reached when \( \cos(E) = e \); that greatest value is \( β_\text{max} = e/(2\sqrt{1 − e^2}) \). For near-parabolic orbits, \( β_\text{max} ≈ 1/\sqrt{8|δ|} \).

For hyperbolic orbits,

\begin{eqnarray} f(E) \| = \| e \sinh(E) − E \\ f'(E) \| = \| e \cosh(E) − 1 \\ f''(E) \| = \| e \sinh(E) = E + f(E) \\ \del E_n \| = \| \frac{M − f(E_n)}{f'(E_n)} = \frac{M + E_n − e \sinh(E_n)}{e \cosh(E_n) − 1} \\ E_{n+1} \| = \| E_n + \del E_n = E_n + \frac{M + E_n − e \sinh(E_n)}{e \cosh(E_n) − 1} \label{eq:nextEh} \\ β(E_n) \| = \| \frac{f''(E_n)}{2f'(E_n)} = \frac{e \sinh(E_n)}{2(e\cosh(E_n) − 1)} \end{eqnarray}

The calculation of \( \del E_n \) and \( β(E_n) \) would fail if \( e\cosh(E_n) − 1 = 0 \), because then you'd divide by 0, but that equation corresponds to \( \cosh(E_n) = 1/e \lt 1 \) which has no solution because \( |\cosh(E_n)| \ge 1 \).

For fixed \( e \), the greatest value of \( |β(E)| \) is reached when \( \cosh(E) = e \); that greatest value is \( β_\text{max} = e/(2\sqrt{e^2 − 1}) \). For near-parabolic orbits, \( β_\text{max} ≈ 1/\sqrt{8|δ|} \).

So, for elliptic and hyperbolic orbits,

\begin{equation} β_\text{max} = \frac{e}{2\sqrt{|e^2 − 1|}} \end{equation}

and for near-parabolic orbits

\begin{equation} β_\text{max} ≈ \frac{1}{\sqrt{8|δ|}} \end{equation}

If \( |\del E_nβ(E_n)| \lt 1 \) then the next estimate is better than the previous one.

\begin{equation} |\del E_nβ(E_n)| ≤ |\del E_nβ_\text{max}| = |\del E_n| \frac{e}{2\sqrt{|e^2 − 1|}} \end{equation}

so if \( |\del E_n| \lt 2\sqrt{|e^2 − 1|}/e \) then the next estimate is better than the previous one. For near-parabolic orbits, this corresponds to \( |\del E_n| \lt \sqrt{8|δ|} \), so for near-parabolic orbits the initial estimate for \( E \) must be quite accurate already to be able to find the solution.

For \( e = 0.02 \) and \( E_0 = M = 1 \), and using "double precision arithmetic", we find

| \({n}\) | \({E_n}\) (first order) | \({\del E_n}\) (first order) | \({E_n}\) (second order) | \({\del E_n}\) (second order) |
|---|---|---|---|---|
| 0 | 1.0000 | −0.017 | 1.0000 | −0.017013 |
| 1 | 1.0168 | −0.00018 | 1.0170 | +2.4704 × 10^{−6} |
| 2 | 1.0170 | −1.9 × 10^{−6} | 1.0170 | +5.2511 × 10^{−14} |
| 3 | 1.0170 | −2.0 × 10^{−8} | 1.0170 | 0 |
| 4 | 1.0170 | −2.1 × 10^{−10} | | |
| 5 | 1.0170 | −2.2 × 10^{−12} | | |
| 6 | 1.0170 | −2.3 × 10^{−14} | | |
| 7 | 1.0170 | −2.2 × 10^{−16} | | |
| 8 | 1.0170 | 0 | | |

where I've shown \( E_n \) to only a few figures after the decimal mark, but have calculated with all available figures.

We see that the first-order method needs 8 iterations to achieve the maximum attainable accuracy, but the second-order method needs only 3. And this is a relatively simple case.

Now assume \( e = 0.99 \) and \( E_0 = M = 1 \). Then

| \({n}\) | \({E_n}\) (first order) | \({\del E_n}\) (first order) | \({E_n}\) (second order) | \({\del E_n}\) (second order) |
|---|---|---|---|---|
| 0 | 1.0000 | −0.91 | 1.0000 | −1.7911 |
| 1 | 1.8331 | −0.095 | 2.7911 | +0.75200 |
| 2 | 1.9561 | +0.029 | 2.0391 | +0.10763 |
| 3 | 1.9174 | −0.010 | 1.9315 | +0.0038570 |
| 4 | 1.9311 | +0.0035 | 1.9276 | +5.1219 × 10^{−6} |
| 5 | 1.9264 | −0.0012 | 1.9276 | +9.0412 × 10^{−12} |
| 6 | 1.9281 | +0.00042 | 1.9276 | 0 |
| 7 | 1.9275 | −0.00014 | | |
| 8 | 1.9278 | +5.0 × 10^{−5} | | |
| 9 | 1.9276 | −1.7 × 10^{−5} | | |
| ... | ... | ... | | |
| 33 | 1.9276 | −2.2 × 10^{−16} | | |
| 34 | 1.9276 | 0 | | |

Now the first-order method needs 34 iterations where the second-order method needs only 6, even though the first-order method was closer to the correct value than the second-order method was for the first few iterations.

Now we assume that \( e = 1.01 \) and \( E_0 = M = 1 \). Then

| \({n}\) | \({E_n}\) (first order) | \({\del E_n}\) (first order) | \({E_n}\) (second order) | \({\del E_n}\) (second order) |
|---|---|---|---|---|
| 0 | 1.0000 | +0.4347 | 1.0000 | −1.4557 |
| 1 | 1.4347 | +0.1788 | 2.4557 | +0.48421 |
| 2 | 1.6135 | +0.0658 | 1.9715 | +0.21686 |
| 3 | 1.6793 | +0.0232 | 1.7547 | +0.038692 |
| 4 | 1.7025 | +0.0081 | 1.7160 | +0.0011009 |
| 5 | 1.7106 | +0.0028 | 1.7149 | +8.6813 × 10^{−7} |
| 6 | 1.7134 | +0.0010 | 1.7149 | +5.3924 × 10^{−13} |
| 7 | 1.7144 | +3.3 × 10^{−4} | 1.7149 | 0 |
| 8 | 1.7147 | +1.2 × 10^{−4} | | |
| 9 | 1.7148 | +4.0 × 10^{−5} | | |
| ... | ... | ... | | |
| 33 | 1.7149 | +2.2 × 10^{−16} | | |
| 34 | 1.7149 | 0 | | |

Now the first-order method takes 34 iterations and the second-order method takes 7.

How many iterations will we need to get as close to the value of \( E \) as we can?

To get some idea of the answer, let's assume that Equation \eqref{eq:improve} is not approximate but exact, with a fixed value for \( β(E) \). Our initial estimate has error \( \del E_0 \). Then

\begin{eqnarray} |\del E_1 β(E)| \| = \| |\del E_0 β(E)|^2 \\ |\del E_2 β(E)| \| = \| |\del E_1 β(E)|^2 = |\del E_0 β(E)|^4 = |\del E_0β(E)|^{\left( 2^2 \right)} \\ |\del E_n β(E)| \| = \| |\del E_0 β(E)|^{\left( 2^n \right)} \\ |\del E_n| \| = \| \frac{|\del E_0 β(E)|^{\left( 2^n \right)}}{|β(E)|} \end{eqnarray}

How close can we get?

A typical computer application is limited in its *relative
precision*: It cannot tell the difference between the number 1 and
nearby numbers unless the difference between those numbers and 1 is
greater than \( ε \), which is the relative precision limit. For an
arbitrary number \( x ≠ 0 \), the application cannot tell the
difference between \( x \) and some other number \( y \) unless \( |y
− x| \gt ε|x| \). For an application that uses "single precision
floating point numbers", \( ε \) is typically around \( 10^{−7} \).
For an application using "double precision floating point numbers", \(
ε \) is usually around \( 10^{−16} \).

So, if the next iteration would provide a relative improvement less than \( ε \), then the application won't be able to express that improvement, and the current estimate is as precise as it can be. Then we have found the solution, to relative precision \( ε \).

So, there's no point continuing if \( |\del E| ≤ ε|E| \). Suppose we end up at exactly \( |\del E_n| = ε|E| \); what is \( n \) then? We have

\begin{eqnarray} |εβ(E)E| \| = \| |\del E_n β(E)| = |\del E_0 β(E)|^{\left( 2^n \right)} \\ \log(|εβ(E)E|) \| = \| 2^n \log(|\del E_0β(E)|) \\ 2^n \| = \| \frac{\log(|εβ(E)E|)}{\log(|\del E_0β(E)|)} \\ n \| = \| \log_2\left( \frac{\log(|εβ(E)E|)}{\log(|\del E_0β(E)|)} \right) \label{eq:n} \end{eqnarray}

This calculation of \( n \) is only possible if \( |\del E_0β(E)| \lt 1 \)
and if \( |εβ(E)E| \lt 1 \). If the first condition is not met, then
the next estimate will be *worse* than the previous one, and
then the derivation of Equation \eqref{eq:n} isn't valid, either. For
elliptic orbits, \( |β(E)E| \lt 1 \) so the inequality is always
satisfied (because \( ε \ll 1 \)). For hyperbolic orbits, \( |β(E)E|
\gt 10 \) only when the distance \( r \gt 10^8 q \), which is so far
from the central object that the trajectory is nearly
indistinguishable from a straight line being traversed at fixed speed.

How can we tell that we cannot improve the value of \( E \) anymore? The simplest ways that come to mind have drawbacks in practice.

- Stop when \( \del E_n = 0 \). This is not a good criterion, because \( \del E_n \) might be non-zero but yet so small relative to \( E_n \) that the value of \( E_n \) doesn't change when \( \del E_n \) is added to it, and then we're in an endless loop. Or the iterations may end up in an endless loop involving two or more nearly identical values of \( E \) with a relative difference near \( ε \).
- Stop when \( |\del E_n| \) does not decrease anymore. This criterion is troublesome during the first couple of iterations, because if the initial estimate isn't close enough to the true solution already, then the estimate may get worse before it gets better, and then we'd stop too early, when the estimate is in fact still far from the solution.

Let's find a better way. If \( |\del E_n| ≤ |εE_n| \), then we
certainly cannot improve the estimate anymore. What does that tell us
about the *previous* improvement \( \del E_{n−1} \)?

\begin{equation} \begin{split} |\del E_n| \| ≤ \| |εE| \\ |\del E_nβ(E)| \| ≤ \| |εEβ(E)| \\ |\del E_{n−1}β(E)|^2 \| ≤ \| |εEβ(E)| \\ |\del E_{n−1}|^2 \| ≤ \| \left| \frac{εE}{β(E)} \right| \end{split} \end{equation}

So we should look for \( |\del E_n|^2 ≤ |εE/β(E_n)| ≈ |εE_n/β(E_n)| \). When we meet that condition, then we should do one more iteration, and then stop. The limit on \( \del E_n \) is considerably higher than \( |εE_n| \), which should avoid the endless loop problem, and it is still small enough that it should not yet apply when the estimate is so far from the solution that it might still temporarily get worse before it zooms in to the solution.

For elliptic orbits the condition is

\begin{equation} |\del E_n|^2 ≤ \left| \frac{εE_n}{β(E_n)} \right| = \left| \frac{2εE_n(1 − e\cos(E_n))}{e\sin(E_n)} \right| \end{equation}

and for hyperbolic orbits

\begin{equation} |\del E_n|^2 ≤ \left| \frac{εE_n}{β(E_n)} \right| = \left| \frac{2εE_n(e\cosh(E_n) − 1)}{e\sinh(E_n)} \right| \end{equation}

Now we still need an initial estimate of \( E \). We find such an estimate by replacing Kepler's Equation by a simpler one that we can solve directly. The closer that simpler equation is to the real one, the larger the fraction of all combinations of \( M \) and \( e \) is for which that simpler equation yields a good estimate for the real solution.

We judge the quality of an initial estimate by figuring out how many iterations it requires to find the solution (to a specified relative precision \( ε \)) for a wide range of \( M \) and \( e \). The one that needs fewer iterations is deemed better than the one that needs more iterations.

There isn't a single initial estimate that is best (or even good) for all \( M \) and \( e \). All estimates that I've looked at that are good for at least some \( M \) and \( e \) require very many iterations for other \( M \) and \( e \). Below, I describe two initial estimates that are each good for a wide range of \( M \) and \( e \), and in combination are at least reasonable for all \( M \) and \( e \).

It is possible to find other estimates that are even better for some \( M \) and \( e \), but all of the ones that I looked at are better in only a narrow range of \( M \) and \( e \), and reduce the iteration count by at most a few, so that it doesn't seem worth the effort to include them in the procedure.

The Equations of Kepler for elliptic and hyperbolic orbits are

\begin{eqnarray} M \| = \| E − e \sin(E) \| \qquad (e \lt 1) \label{eq:ekepler2} \\ M \| = \| e \sinh(E) − E \| \qquad (e \gt 1) \end{eqnarray}

We replace \( \sin \) and \( \sinh \) by their third-order Taylor expansions around zero:

\begin{eqnarray} M \| ≈ \| E − e \left( E − \frac{1}{6}E^3 \right) = \frac{1}{6}eE^3 − δE \| \qquad (e \lt 1) \\ M \| ≈ \| e \left( E + \frac{1}{6}E^3 \right) − E = \frac{1}{6}eE^3 + δE \| \qquad (e \gt 1) \end{eqnarray}

The two equations for \( M \) can be combined into

\begin{equation} M ≈ \frac{1}{6}eE^3 + |δ| E \label{eq:M3} \end{equation}

However, for near-parabolic orbits, \( M \) and \( E \) are not very convenient quantities to work with, because for fixed \( t \) and \( q \) they depend strongly on \( e \). With Equations \eqref{eq:M_q} and \eqref{eq:E_r} we can rewrite Equation \eqref{eq:M3} to

\begin{equation} M_q ≈ E_r + \frac{1}{6}eE_r^3 \end{equation}

We replace \( E_r \) by \( E_r' \) and \( ≈ \) by \( = \). Then \( E_r' \) approximates \( E_r \) and we get a third-order equation in \( E_r' \) that we can solve.

\begin{equation} M_q = E_r'+ \frac{1}{6}eE_r'^3 \label{eq:Mq3} \end{equation}

We solve this equation for \( E_r' \):

\begin{eqnarray} W \| = \| \frac{3}{2\sqrt{2}} \frac{M_q}{e} \\ u \| = \| \sqrt[3]{W + \sqrt{W^2 + \frac{1}{e^3}}} \\ E_r' \| = \| \sqrt{2} \left( u − \frac{1}{eu} \right) \label{eq:E_r3} \\ E \| ≈ \| E_r'\sqrt{|δ|} \label{eq:est_ep} \end{eqnarray}

This procedure to calculate an initial estimate for \( E \) looks a lot like Equations \eqref{eq:pkepler}ff for parabolic orbits, especially if you take into account that for small \( δ \) and small \( E \) we have \( τ_ν ≈ E_r\sqrt{2} \). In fact, if you go on to calculate \( ν \) from \( E \), and then let \( e \) go to 1 (which corresponds to \( δ \) going to 0), then you get Barker's method. This suggests that the above estimate for \( E \) should be quite good for near-parabolic orbits.

Fig. 8: Kepler Equation: Iterations Count for Small Anomalies

Figure 8 shows the iterations count for this initial estimate, as a function of eccentricity \( e \) and perifocal anomaly \( M_q \), both between 0.01 and 1000. The count was estimated using Equation \eqref{eq:n} and was not rounded to a whole number, which makes gradual changes easier to see.

The white area at the top left corresponds to \( M \gt π \) for elliptic orbits, for which we don't need a separate initial estimate because we can always reduce \( M \) to the range between −π and +π.

The dark vertical line at \( e = 1 \) corresponds to a parabolic orbit, for which no iteration is needed.

The yellow area at the top right indicates an iteration count that is larger than in any non-yellow area (and greater than 38).

The number of iterations is fairly low except for hyperbolic orbits (\( e \gt 1 \)) when the anomaly gets large.

It is not easy to see from the preceding equations how their results behave. The right-hand side of Equation \eqref{eq:Mq3} has two terms. The first one dominates when \( |E_r| \gg e|E_r|^3/6 \), which means \( |E_r| \ll \sqrt{6/e} \), from which it follows that \( |M_q| \ll \sqrt{6/e} \) and hence \( |M| \ll \sqrt{6|δ|^3/e} \). We call this region of parameter space area Ⅰ. For elliptic orbits, the whole orbit is part of area Ⅰ when \( e \ll 0.11 \). In area Ⅰ,

\begin{eqnarray} E_r \| ≈ \| M_q \| = \| \frac{M}{|δ|^{3/2}} \\ E \| ≈ \| M_q \sqrt{|δ|} \| = \| \frac{M}{|δ|} \\ τ_ν \| ≈ \| \frac{1}{2} \sqrt{1+e} M_q \| = \| \frac{1}{2} \sqrt{1+e} \frac{M}{|δ|^{3/2}} \\ ν \| ≈ \| \sqrt{1+e} M_q \| = \| \sqrt{1+e} \frac{M}{|δ|^{3/2}} \end{eqnarray}

If the second term in Equation \eqref{eq:Mq3} dominates (when \( \sqrt{6/e} \ll |E_r| \ll \sqrt{20/|δ|} \)), then we're in area Ⅱ. There

\begin{eqnarray} E_r \| ≈ \| \left( \frac{6M_q}{e} \right)^{1/3} \| = \| \frac{ \left( \frac{6M}{e} \right)^{1/3} }{\sqrt{|δ|}} \\ E \| ≈ \| \left( \frac{6M_q|δ|^{3/2}}{e} \right)^{1/3} \| = \| \left( \frac{6M}{e} \right)^{1/3} \\ τ_ν \| ≈ \| \frac{1}{2}\sqrt{1 + e} \left( \frac{6M_q}{e} \right)^{1/3} \| = \| \frac{1}{2}\sqrt{\frac{1 + e}{|δ|}} \left( \frac{6M}{e} \right)^{1/3} \\ ν \| ≈ \| \left( \frac{6M_q|δ|^{3/2}}{e} \right)^{1/3} \| = \| \left( \frac{6M}{e} \right)^{1/3} \end{eqnarray}

With this initial estimate, \( |\del E_0 β(E_0)| ≤ 0.5 \) for all \( M \) and \( e \), which is less than 1, so the initial estimate is good enough to be able to find the desired \( E \) using the second-order method.

For elliptic orbits, the case \( |E| \gg 1 \) is not relevant, because the positions keep repeating: adding an arbitrary multiple of 2π radians to \( E \) yields the same position. For hyperbolic orbits the positions do not repeat, so the case \( |E| \gg 1 \) is relevant there. Here we investigate only hyperbolic orbits (with \( e \gt 1 \)).

For a hyperbolic orbit,

\begin{equation} M = e \sinh(E) − E \end{equation}

We can rearrange this to

\begin{equation} E = \arsinh\left( \frac{M + E}{e} \right) \end{equation}

This leads to the approximation method

\begin{equation} E_{n+1} = \arsinh\left( \frac{M + E_n}{e} \right) \end{equation}

but that is a first-order approximation method, so it is much less suitable than the second-order method for finding the solution. However, this method is fine for finding a good initial estimate.

If \( |E| \gg 1 \) then \( |\sinh(E)| \gg |E| \) so \( |M| \gg |E| \), and \( E_0 = 0 \) is then a good starting point. A single application of the map (with \( n = 0 \)) gives

\begin{equation} E ≈ \arsinh\left( \frac{M}{e} \right) \label{eq:est_h} \end{equation}

Fig. 9: Kepler Equation: Iterations Count for Large Anomalies

Figure 9 shows the iterations count for this initial estimate, similar to Figure 8, estimated using Equation \eqref{eq:n}. The number of iterations is low for high \( e \) and high \( M_q \), and gets high for near-parabolic orbits (\( e ≈ 1 \)) for high \( M_q \).

For large mean anomaly \( M \), the behavior is approximately as follows:

\begin{eqnarray} E \| ≈ \| \log\left( \frac{2M}{e} \right) \\ τ_ν \| ≈ \| \sqrt{\frac{e+1}{e−1}} \left( 1 − \frac{e}{M} \right) \\ ν \| ≈ \| 2 \arctan\left( \sqrt{\frac{e+1}{e−1}} \right) − \frac{\sqrt{e^2 − 1}}{M} = \arccos\left( −\frac{1}{e} \right) − \frac{\sqrt{e^2 − 1}}{M} \end{eqnarray}

The asymptote of the hyperbola is at \( ν = \arccos(−1/e) \).

With this initial estimate, \( |\del E_0 β(E_0)| \) is at most 10.8, which is greater than 1, so the initial estimate is not always suitable for finding the desired \( E \) using the second-order method. The excessive values can only occur if \( e \) is between 0.83 and 1.26 and if in addition \( M \) is between 0.0033 and 1.91. We must not use this initial estimate for the combinations of \( M \) and \( e \) for which the value is too large. We'll see shortly that this is not a problem.

Fig. 10: Kepler Equation: Combined Iterations Count

Figure 10 shows the smallest iteration count across both initial estimates. The iterations count is small for small \( e \) and \( M_q \), and for large \( e \) and \( M_q \), and is at most about 5.5.

Fig. 11: Kepler Equation: Best Initial Estimate Selection

Figure 11 shows where each of the two initial estimates is best: the small-eccentric-anomaly estimate is best for low \( M_q \) and \( e \) and for most elliptic cases, and the large-eccentric-anomaly estimate is best for high \( M_q \) and \( e \), but also for a narrow range of \( M_q \) and \( e \) for elliptic orbits.

How do we select the best initial estimate for a given \( M \) and \( e \)? It is reasonable to compare the \( \del M \) from Equation \eqref{eq:∆M} for the two initial estimates. Whichever initial estimate gives the \( \del M \) that is smallest in magnitude is likely to be the best choice. For the current case this yields a maximum iteration count of 5.88, compared to 5.52 for the ideal combination of the initial estimates.

We can move the location of the boundary a bit by giving more weight to one or the other of the initial estimates in the comparison. Some trial and error leads to the following criterion:

Use the initial estimate based on large anomalies if

\[ e \gt 1 ∧ |e \sinh(E_h) − E_h − M| \lt 0.53 |e \sinh(E_s) − E_s − M| \]

and otherwise use the small-anomaly estimate. Here \( E_h \) is the large-anomaly estimate (according to Equation \eqref{eq:est_h}), and \( E_s \) is the small-anomaly estimate (according to Equation \eqref{eq:est_ep}), and \( ∧ \) means "and".

Because \( E_h = \arsinh(M/e) \), the condition can be simplified to

\[ e \gt 1 ∧ |E_h| \lt 0.53 |e \sinh(E_s) − E_s − M| \]

With this criterion, the maximum iteration count is 5.52, which is indistinguishable from the maximum iteration count for the ideal combination. And the greatest value of \( |\del E_0β(E_0)| \) is 0.27, which is comfortably less than 1, so with this condition the initial estimate is good enough to find \( E \) for all \( M \) and \( e \).

Fig. 12: Kepler Equation: Iterations Count

Figure 12 shows the number of iterations that are needed if the procedure described above is used. There isn't much difference compared to Figure 10.

Fig. 13: Kepler Equation: Used Initial Estimate

Figure 13 shows which initial estimate is used in the procedure described before, similar to Figure 11. The most obvious difference with respect to that figure is the absence of the small yellow area for \( e \lt 1 \) and \( M_q ≈ 3 \), but the boundary between the large red and yellow areas is also shifted slightly.

Fig. 14: Kepler Equation: Iterations Count for a Real Implementation

Figure 14 shows the iterations count for solving
Kepler's Equation using an implementation of the abovementioned
procedure in C++ on a 64-bit GNU/Linux system, calculating in "double
precision". Because this is a genuine count and not a calculated
estimate, all measurements are whole numbers. Taking that into
account, this chart looks a lot like Figure 12.
If I calculate the mean anomaly from the found eccentric anomaly and
compare it with the mean anomaly that I started with, then I find that
the greatest difference is 7 × 10^{−13} radians.

https://aa.quae.nl/en/reken/kepler.html; last updated 2020-07-18