
An iterative and shrinking generalized ridge regression for ill-conditioned geodetic observation equations

  • Original Article
  • Published:
Journal of Geodesy

Abstract

In geodesy, Tikhonov regularization and truncated singular value decomposition (TSVD) are commonly used to derive a well-defined solution for ill-conditioned observation equations. However, as single-parameter regularization methods, they may face some limitations in application due to their lack of flexibility. In this contribution, a kind of multiparameter regularization method is considered, called generalized ridge regression (GRR). Generally, GRR projects observations into several orthogonal spectral domains and then uses different regularization parameters to minimize the mean squared error of the estimated parameters in corresponding spectral domains. To find suitable regularization parameters for GRR, an iterative and shrinking generalized ridge regression (IS-GRR) is proposed. The IS-GRR procedure starts by introducing a predetermined approximation of unknown parameters. Subsequently, in each spectral domain, the signal and noise of the observations are estimated in an iterative and shrinking manner, and the regularization parameters are updated according to the estimated signal-to-noise ratio. Compared to conventional regularization schemes, IS-GRR has the following advantages: Tikhonov regularization usually oversmooths signals in the low-spectral domains and undersuppresses noise in the high-spectral domains, whereas TSVD usually undersuppresses noise in the low-spectral domains and oversmooths signals in the high-spectral domains. However, IS-GRR strikes a balance between retaining signals and suppressing noise in different spectral domains, thereby exhibiting better performance. Two experiments (simulation and mascon modelling examples) verify the effectiveness of IS-GRR for solving ill-conditioned equations in geodesy.
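The spectral filtering that GRR performs can be sketched numerically. The following is a minimal illustration only, not the paper's implementation; all names (`grr_solve`, `A`, `y`, `alphas`) are hypothetical.

```python
import numpy as np

# Minimal sketch of generalized ridge regression (GRR) via the SVD.
# Names (grr_solve, A, y, alphas) are illustrative, not from the paper.
def grr_solve(A, y, alphas):
    """GRR estimate with one regularization parameter alpha_i per spectral direction."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Each spectral component u_i^T y is damped by the filter factor
    # lambda_i / (lambda_i^2 + alpha_i): alpha_i = 0 keeps it, alpha_i -> inf drops it.
    coeffs = s * (U.T @ y) / (s**2 + alphas)
    return Vt.T @ coeffs

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(20)

# Tikhonov regularization is the special case alphas = alpha * ones(n);
# TSVD is the special case alphas in {0, +inf}. With alphas = 0, GRR is plain LS:
x_grr = grr_solve(A, y, np.zeros(5))
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.allclose(x_grr, x_ls))  # True
```

Setting all `alphas` to the same value recovers Tikhonov regularization, and setting them to 0 or infinity per component recovers TSVD, which is why both are special cases of GRR.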


Data availability

The GIA model ICE6G-D is available at https://www.atmosp.physics.utoronto.ca/~peltier/data.php; the CSR RL06 SH coefficient data can be downloaded from https://podaac.jpl.nasa.gov/dataset/GRACE_GSM_L2_GRAV_CSR_RL06; the CSR RL06 mascon solutions can be downloaded from https://www2.csr.utexas.edu/grace/RL06_mascons.html.


Acknowledgements

This work is sponsored by the Natural Science Foundation of China (42192532, 42274005, 41974002). The authors gratefully thank Prof. Peiliang Xu from Kyoto University for providing valuable suggestions and for polishing the English, which led to a significant improvement and clarification of the paper.

Author information

Authors and Affiliations

Authors

Contributions

YY proposed the key idea, designed the research, processed the data, and wrote the paper draft; YS supervised the research, revised the manuscript, and checked all the formulas; LY, BL, and QC revised the manuscript; WW provided and processed data.

Corresponding author

Correspondence to Yunzhong Shen.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Appendix of Proof

1.1 Proof of Theorem 1

To find the value of \({\alpha }_{i}\) that minimizes \(\mathrm{Tr}\left[\mathrm{MSE}\left({\hat{{\varvec{x}}}}_{\mathrm{GRR}}^{\prime}\right)\right]\), we differentiate \(\mathrm{Tr}\left[\mathrm{MSE}\left({\hat{{\varvec{x}}}}_{\mathrm{GRR}}^{\prime}\right)\right]\) with respect to \({\alpha }_{i}\) (Hoerl and Kennard 1970a, b):

$$ \frac{{\partial {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{GRR}}}}^{\prime} } \right)} \right]}}{{\partial \alpha_{i} }} = 2\lambda_{i}^{2} \frac{{\alpha_{i} \left( {{\varvec{v}}_{i}^{{\text{T}}} {{\varvec{x}}^{\prime}}} \right)^{2} - \sigma^{2} }}{{\left( {\lambda_{i}^{2} + \alpha_{i} } \right)^{3} }}. $$
(A1)

It is easy to verify that the first derivative has one and only one zero, namely \(\frac{{\sigma }^{2}}{{\left({{\varvec{v}}}_{i}^{\mathrm{T}}{{\varvec{x}}}^{\prime}\right)}^{2}}\). Furthermore, at \({\alpha }_{i}=\frac{{\sigma }^{2}}{{\left({{\varvec{v}}}_{i}^{\mathrm{T}}{\varvec{x}^\prime}\right)}^{2}}\), the second derivative is given as

$$ \frac{{\partial^{2} {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{GRR}}}}^{\prime} } \right)} \right]}}{{\partial \left( {\alpha_{i} } \right)^{2} }} = 2\lambda_{i}^{2} \frac{{\lambda_{i}^{2} \left( {{\varvec{v}}_{i}^{{\text{T}}} \varvec{x}^{\prime}} \right)^{2} + \sigma^{2} }}{{\left[ {\lambda_{i}^{2} + \frac{{\sigma^{2} }}{{\left( {{\varvec{v}}_{i}^{{\text{T}}} \varvec{x}^{\prime}} \right)^{2} }}} \right]^{4} }}. $$
(A2)

The second derivative is positive, which indicates that this point is the unique minimum of \(\mathrm{Tr}\left[\mathrm{MSE}\left({\hat{{\varvec{x}}}}_{\mathrm{GRR}}^{\prime}\right)\right]\). Therefore, the optimal \({\alpha }_{i}\) that minimizes \(\mathrm{Tr}\left[\mathrm{MSE}\left({\hat{{\varvec{x}}}}_{\mathrm{GRR}}^{\prime}\right)\right]\) is given by

$$ \frac{{\alpha_{i}^{*} }}{{\lambda_{i}^{2} }} = \frac{{\frac{{\sigma^{2} }}{{\lambda_{i}^{2} }}}}{{\left( {{\varvec{v}}_{i}^{{\text{T}}} \varvec{x}^{\prime}} \right)^{2} }}. $$
(A3)
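Theorem 1 can be spot-checked numerically. The sketch below uses hypothetical values for `lam`, `sigma`, and `vx` (standing for \(\lambda_i\), \(\sigma\), and \({\varvec{v}}_i^{\mathrm{T}}{\varvec{x}}^\prime\)) and confirms that the spectral MSE term attains its minimum at \(\alpha_i^* = \sigma^2/({\varvec{v}}_i^{\mathrm{T}}{\varvec{x}}^\prime)^2\):

```python
import numpy as np

# Numeric spot-check of Theorem 1 with hypothetical values lam, sigma, vx:
# the spectral MSE term (alpha^2*(v'x)^2 + lam^2*sigma^2) / (lam^2 + alpha)^2
# should attain its minimum at alpha* = sigma^2 / (v'x)^2.
lam, sigma, vx = 0.5, 0.3, 2.0          # singular value, noise std, v_i^T x'

def mse_term(alpha):
    return (alpha**2 * vx**2 + lam**2 * sigma**2) / (lam**2 + alpha)**2

alpha_star = sigma**2 / vx**2           # 0.0225, per Eq. (A3)
grid = np.linspace(0.0, 1.0, 100001)
alpha_grid_min = grid[np.argmin(mse_term(grid))]
print(abs(alpha_grid_min - alpha_star) < 1e-4)  # True
```

Note that the optimum does not depend on \(\lambda_i\): only the signal-to-noise ratio of the spectral component matters.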

1.2 Proof of Theorem 2

On the one hand, we argue by contradiction: suppose that

$$ \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{Tik}}}}^{\prime} } \right)} \right] > {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{LS}}}}^{\prime} } \right)} \right], $$
(A4)

in which

$$ \begin{aligned} \alpha _{0} = & \mathop {{\text{argmin}}}\limits_{\alpha } {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{Tik}}}}^{\prime} } \right)} \right] \\ = & \mathop {{\text{argmin}}}\limits_{\alpha } \mathop \sum \limits_{{i = 1}}^{n} \frac{{\alpha ^{2} \left( {\varvec{v}_{i}^{{\text{T}}} {\varvec{x}}^{\prime}} \right)^{2} + \lambda _{i}^{2} \sigma ^{2} }}{{\left( {\lambda _{i}^{2} + \alpha } \right)^{2} }}. \\ \end{aligned} $$
(A5)

If we set \({\alpha }_{0}=0\), then the Tikhonov solution reduces to the least-squares solution, so that

$$ {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{Tik}}}}^{\prime} } \right)|\alpha_{0} = 0} \right] = {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{LS}}}}^{\prime} } \right)} \right] < \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{Tik}}}}^{\prime} } \right)} \right], $$
(A6)

which contradicts the definition of the minimum. Hence the hypothesis is false, and we have

$$ \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{Tik}}}}^{\prime} } \right)} \right] \le {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{LS}}}}^{\prime} } \right)} \right]. $$
(A7)

Similarly, suppose that

$$ \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{TSVD}}}}^{\prime} } \right)} \right] > {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{LS}}}}^{\prime} } \right)} \right], $$
(A8)

in which

$$ \begin{aligned} \left\{ {\alpha_{j} } \right\} & = \mathop {{\text{argmin}}}\limits_{{\left\{ {\alpha_{j} = 0|j = 1, \ldots ,k} \right\} \cup \left\{ {\alpha_{j} = + \infty |j = k + 1, \ldots ,n} \right\}}} {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{TSVD}}}}^{\prime} } \right)} \right] \\ & = \mathop {{\text{argmin}}}\limits_{{\left\{ {\alpha_{j} = 0|j = 1, \ldots ,k} \right\} \cup \left\{ {\alpha_{j} = + \infty |j = k + 1, \ldots ,n} \right\}}} \mathop \sum \limits_{j = 1}^{n} \frac{{\alpha_{j}^{2} \left( {{\varvec{v}}_{j}^{{\text{T}}} \varvec{x}^{\prime}} \right)^{2} + \lambda_{j}^{2} \sigma^{2} }}{{\left( {\lambda_{j}^{2} + \alpha_{j} } \right)^{2} }}. \end{aligned} $$
(A9)

If we set \({\alpha }_{j}=0\) for all \(j=1,\dots ,n\) (i.e., \(k=n\)), then the TSVD solution reduces to the least-squares solution, so that

$$ {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{TSVD}}}}^{\prime} } \right)|\left\{ {\alpha_{j} = 0|j = 1, \ldots ,n} \right\}} \right] = {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{LS}}}}^{\prime} } \right)} \right] < \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{TSVD}}}}^{\prime} } \right)} \right], $$
(A10)

which contradicts the definition of the minimum. Hence the hypothesis is false, and we have

$$ \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{TSVD}}}}^{\prime} } \right)} \right] \le {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{LS}}}}^{\prime} } \right)} \right]. $$
(A11)

Combining Eqs. (A7) and (A11), we have

$$ \begin{aligned}&\max \left\{ {\min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{Tik}}}}^{\prime} } \right)} \right],\min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{TSVD}}}}^{\prime} } \right)} \right]} \right\}\\ &\quad\le {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{LS}}}}^{\prime} } \right)} \right]. \end{aligned}$$
(A12)

On the other hand, we again argue by contradiction. Suppose that

$$ \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{GRR}}}}^{\prime} } \right)} \right] > \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{Tik}}}}^{\prime} } \right)} \right], $$
(A13)

in which

$$ \begin{gathered} \left\{ {\alpha_{i} } \right\} = \mathop {{\text{argmin}}}\limits_{{\left\{ {\alpha_{i} |i = 1, \ldots ,n} \right\}}} {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{GRR}}}}^{\prime} } \right)} \right] \hfill \\ \qquad = \mathop {{\text{argmin}}}\limits_{{\left\{ {\alpha_{i} |i = 1, \ldots ,n} \right\}}} \mathop \sum \limits_{i = 1}^{n} \frac{{\alpha_{i}^{2} \left( {{\varvec{v}}_{i}^{{\text{T}}} \varvec{x}^{\prime}} \right)^{2} + \lambda_{i}^{2} \sigma^{2} }}{{\left( {\lambda_{i}^{2} + \alpha_{i} } \right)^{2} }}, \hfill \\ \end{gathered} $$
(A14)

and

$$ \begin{gathered} \alpha_{0} = \mathop {{\text{argmin}}}\limits_{\alpha } {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{Tik}}}}^{\prime} } \right)} \right] \hfill \\ \qquad = \mathop {{\text{argmin}}}\limits_{\alpha } \mathop \sum \limits_{i = 1}^{n} \frac{{\alpha^{2} \left( {{\varvec{v}}_{i}^{{\text{T}}} {{\varvec{x}}^{\prime}}} \right)^{2} + \lambda_{i}^{2} \sigma^{2} }}{{\left( {\lambda_{i}^{2} + \alpha } \right)^{2} }}. \hfill \\ \end{gathered} $$
(A15)

As \({\alpha }_{i}\) could be any non-negative value, if we set \({\alpha }_{i}={\alpha }_{0}\) for \(i=1,\dots ,n\), then the GRR solution coincides with the Tikhonov solution at \({\alpha }_{0}\), so that

$$ {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{GRR}}}}^{\prime} } \right)|\left\{ {\alpha_{i} = \alpha_{0} |i = 1, \ldots ,n} \right\}} \right] = \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{Tik}}}}^{\prime} } \right)} \right] < \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{GRR}}}}^{\prime} } \right)} \right], $$
(A16)

which contradicts the definition of the minimum. Hence the hypothesis is false, and we have

$$ \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{GRR}}}}^{\prime} } \right)} \right] \le \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{Tik}}}}^{\prime} } \right)} \right]. $$
(A17)

Similarly, suppose that

$$ \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{GRR}}}}^{\prime} } \right)} \right] > \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{TSVD}}}}^{\prime} } \right)} \right], $$
(A18)

in which

$$ \begin{gathered} \left\{ {\alpha_{i} } \right\} = \mathop {{\text{argmin}}}\limits_{{\left\{ {\alpha_{i} |i = 1, \ldots ,n} \right\}}} {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{GRR}}}}^{\prime} } \right)} \right] \hfill \\ \qquad = \mathop {{\text{argmin}}}\limits_{{\left\{ {\alpha_{i} |i = 1, \ldots ,n} \right\}}} \mathop \sum \limits_{i = 1}^{n} \frac{{\alpha_{i}^{2} \left( {{\varvec{v}}_{i}^{{\text{T}}} {{\varvec{x}}^{\prime}}} \right)^{2} + \lambda_{i}^{2} \sigma^{2} }}{{\left( {\lambda_{i}^{2} + \alpha_{i} } \right)^{2} }}, \hfill \\ \end{gathered} $$
(A19)

and

$$\begin{aligned} \left\{ {\alpha _{j} } \right\} & = \mathop {{\text{argmin}}}\limits_{{\left\{ {\alpha _{j} = 0|j = 1, \ldots ,k} \right\} \cup \left\{ {\alpha _{j} = + \infty |j = k + 1, \ldots ,n} \right\}}} {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{TSVD}}}}^{\prime } } \right)} \right] \\ & = \mathop {{\text{argmin}}}\limits_{{\left\{ {\alpha _{j} = 0|j = 1, \ldots ,k} \right\} \cup \left\{ {\alpha _{j} = + \infty |j = k + 1, \ldots ,n} \right\}}} \mathop \sum \limits_{{j = 1}}^{n} \frac{{\alpha _{j}^{2} \left( {\varvec{v}_{j}^{{\text{T}}} \varvec{x}^{\prime } } \right)^{2} + \lambda _{j}^{2} \sigma ^{2} }}{{\left( {\lambda _{j}^{2} + \alpha _{j} } \right)^{2} }}. \\ \end{aligned} $$
(A20)

As \({\alpha }_{i}\) could be any non-negative value, if we set \({\alpha }_{i}={\alpha }_{j}\) for \(i=1,\dots ,n\), then the GRR solution coincides with the TSVD solution, so that

$$ {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{GRR}}}}^{\prime} } \right)|\left\{ {\alpha_{i} = \alpha_{j} |i = 1, \ldots ,n} \right\}} \right] = \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{TSVD}}}}^{\prime} } \right)} \right] < \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{GRR}}}}^{\prime} } \right)} \right], $$
(A21)

which contradicts the definition of the minimum. Hence the hypothesis is false, and we have

$$ \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{GRR}}}}^{\prime} } \right)} \right] \le \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{TSVD}}}}^{\prime} } \right)} \right]. $$
(A22)

Combining Eqs. (A17) and (A22), we have

$$\begin{aligned} &\min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{GRR}}}}^{\prime} } \right)} \right] \\ &\quad \le \min \left\{ {\min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{Tik}}}}^{\prime} } \right)} \right],\min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{TSVD}}}}^{\prime} } \right)} \right]} \right\}.\end{aligned} $$
(A23)

Finally, combining Eqs. (A12) and (A23), we have

$$ \begin{aligned} &\, \min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{GRR}}}}^{\prime} } \right)} \right] \le \min \\ &\left\{ {\min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{Tik}}}}^{\prime} } \right)} \right],\min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{TSVD}}}}^{\prime} } \right)} \right]} \right\} \\ &\quad \le \max \left\{ {\min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{Tik}}}}^{\prime} } \right)} \right],\min {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{TSVD}}}}^{\prime} } \right)} \right]} \right\}\\ &\quad \le {\text{Tr}}\left[ {{\text{MSE}}\left( {\hat{\varvec{x}}_{{{\text{LS}}}}^{\prime} } \right)} \right]. \\ \end{aligned} $$
(A24)
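The chain of inequalities in (A24) can be checked numerically. The sketch below uses hypothetical spectrum values (`lam`, `vx`, `sigma`) and the per-component MSE expression from (A5)/(A14); the Tikhonov minimum is approximated on a grid and the TSVD minimum by scanning the truncation level:

```python
import numpy as np

# Numeric illustration of Theorem 2 with hypothetical spectrum values:
# min MSE(GRR) <= min{min MSE(Tik), min MSE(TSVD)} <= max{...} <= MSE(LS).
rng = np.random.default_rng(1)
lam = np.sort(rng.uniform(0.05, 2.0, 8))[::-1]   # singular values, descending
vx = rng.standard_normal(8)                      # spectral signal v_i^T x'
sigma = 0.5

def tr_mse(alphas):
    return np.sum((alphas**2 * vx**2 + lam**2 * sigma**2) / (lam**2 + alphas)**2)

mse_ls = tr_mse(np.zeros(8))                     # alphas = 0: least squares
mse_grr = tr_mse(sigma**2 / vx**2)               # per-component optimum (Theorem 1)
mse_tik = min(tr_mse(np.full(8, a)) for a in np.geomspace(1e-6, 1e3, 2000))
# TSVD: keep the k largest spectra (alpha = 0), truncate the rest (alpha -> inf)
mse_tsvd = min(np.sum(sigma**2 / lam[:k]**2) + np.sum(vx[k:]**2) for k in range(9))

print(bool(mse_grr <= min(mse_tik, mse_tsvd) <= max(mse_tik, mse_tsvd) <= mse_ls))
```

Because the GRR minimum is taken per spectral component, it can never exceed the minimum of any constrained choice of the parameters, which is exactly the structure of the proof above.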

1.3 Proof of Theorem 3

Assume that \({\alpha }_{i}^{\left(t\right)}>0\) for all \(i\) and that the iterative procedure converges such that (Hemmerle 1975)

$$ \mathop {\lim }\limits_{t \to \infty } \alpha_{i}^{\left( t \right)} /\lambda_{i}^{2} = \alpha_{i}^{\left( * \right)} /\lambda_{i}^{2} . $$
(A25)

According to Eq. (53), we must then have the relationship

$$ \frac{{\alpha_{i}^{\left( * \right)} }}{{\lambda_{i}^{2} }} = \frac{1}{{T_{i} }}\left( {1 + \frac{{\alpha_{i}^{\left( * \right)} }}{{\lambda_{i}^{2} }}} \right)^{2} , $$
(A26)

where

$$ T_{i} = \frac{{\left( {{\varvec{u}}_{i}^{{\text{T}}} \varvec{y}^{\prime}} \right)^{2} }}{{\hat{\sigma }^{2} }}. $$
(A27)

Solving Eq. (A26) for \({\alpha }_{i}^{\left(*\right)}\) we have

$$ \frac{{\alpha_{i}^{\left( * \right)} }}{{\lambda_{i}^{2} }} = \frac{1}{2}T_{i} - 1 \pm \sqrt {\frac{1}{4}T_{i}^{2} - T_{i} } . $$
(A28)

It is easy to find that the necessary condition for convergence is \({T}_{i}-4>0\).

Therefore, on the one hand, when \({T}_{i}-4>0\), if the initial value satisfies \(0\le {\alpha }_{i}^{\left(0\right)}/{\lambda }_{i}^{2}\le \frac{1}{2}{T}_{i}-1-\sqrt{\frac{1}{4}{T}_{i}^{2}-{T}_{i}}\), then \({\alpha }_{i}^{\left(t\right)}/{\lambda }_{i}^{2}\) increases monotonically and converges to \(\frac{1}{2}{T}_{i}-1-\sqrt{\frac{1}{4}{T}_{i}^{2}-{T}_{i}}\), i.e.,

$$ \frac{{\alpha_{i}^{\left( t \right)} }}{{\lambda_{i}^{2} }} < \frac{{\alpha_{i}^{{\left( {t + 1} \right)}} }}{{\lambda_{i}^{2} }} \le \frac{1}{2}T_{i} - 1 - \sqrt {\frac{1}{4}T_{i}^{2} - T_{i} } . $$
(A29)

Else if \(\frac{1}{2}{T}_{i}-1-\sqrt{\frac{1}{4}{T}_{i}^{2}-{T}_{i}}<{\alpha }_{i}^{\left(0\right)}/{\lambda }_{i}^{2}<\frac{1}{2}{T}_{i}-1+\sqrt{\frac{1}{4}{T}_{i}^{2}-{T}_{i}}\), then \({\alpha }_{i}^{\left(t\right)}/{\lambda }_{i}^{2}\) decreases monotonically and converges to the same limit \(\frac{1}{2}{T}_{i}-1-\sqrt{\frac{1}{4}{T}_{i}^{2}-{T}_{i}}\), i.e.,

$$ \frac{{\alpha_{i}^{\left( t \right)} }}{{\lambda_{i}^{2} }} > \frac{{\alpha_{i}^{{\left( {t + 1} \right)}} }}{{\lambda_{i}^{2} }} \ge \frac{1}{2}T_{i} - 1 - \sqrt {\frac{1}{4}T_{i}^{2} - T_{i} } . $$
(A30)

However, if the initial value satisfies \({\alpha }_{i}^{\left(0\right)}/{\lambda }_{i}^{2}>\frac{1}{2}{T}_{i}-1+\sqrt{\frac{1}{4}{T}_{i}^{2}-{T}_{i}}\), then \({\alpha }_{i}^{\left(t\right)}\) diverges, i.e.,

$$ \frac{{\alpha_{i}^{\left( t \right)} }}{{\lambda_{i}^{2} }} < \frac{{\alpha_{i}^{{\left( {t + 1} \right)}} }}{{\lambda_{i}^{2} }} < + \infty . $$
(A31)

On the other hand, when \({T}_{i}-4\le 0\), \({\alpha }_{i}^{\left(t\right)}\) always diverges, i.e.,

$$ \frac{{\alpha_{i}^{\left( t \right)} }}{{\lambda_{i}^{2} }} < \frac{{\alpha_{i}^{{\left( {t + 1} \right)}} }}{{\lambda_{i}^{2} }} < + \infty . $$
(A32)
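The convergence behaviour of Theorem 3 can be reproduced numerically. The sketch below assumes the update has the fixed-point form \(x_{t+1} = (1+x_t)^2/T_i\) with \(x = \alpha_i/\lambda_i^2\), whose fixed points are exactly the two roots in Eq. (A28); the variable names are hypothetical.

```python
import math

# Spot-check of Theorem 3. Assumption: the update has the fixed-point form
# x_{t+1} = (1 + x_t)^2 / T_i with x = alpha_i / lambda_i^2, whose fixed
# points are the two roots in Eq. (A28).
def iterate(x0, T, steps=200):
    x = x0
    for _ in range(steps):
        x = (1.0 + x)**2 / T
        if x > 1e12:                 # treat runaway growth as divergence
            return math.inf
    return x

T = 6.0                              # T_i > 4: real roots P < B exist
P = T/2 - 1 - math.sqrt(T*T/4 - T)   # smaller root (stable fixed point)
B = T/2 - 1 + math.sqrt(T*T/4 - T)   # larger root (unstable fixed point)

print(abs(iterate(0.0, T) - P) < 1e-9)      # start below P: increases to P
print(abs(iterate(B - 0.1, T) - P) < 1e-9)  # start just below B: decreases to P
print(iterate(B + 0.1, T) == math.inf)      # start above B: diverges
print(iterate(0.0, 3.0) == math.inf)        # T_i < 4: always diverges
```

All four lines print `True`, matching the three regimes of Theorem 3 and the divergence for \(T_i \le 4\).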

1.4 Proof of Corollary 3.1

According to Eqs. (36) and (54), given a certain \({\hat{\sigma }}^{2}\), the expectation of \({T}_{i}\) is given by

$$ \begin{aligned} {\text{E}}\left( {T_{i} } \right) = &\, {\text{E}}\left[ {\frac{{\frac{{\left( {{\varvec{u}}_{i}^{{\text{T}}} \varvec{y}^{\prime}} \right)^{2} }}{{\lambda_{i}^{2} }}}}{{\frac{{\hat{\sigma }^{2} }}{{\lambda_{i}^{2} }}}}} \right] \hfill \\ \approx &\, \frac{{\left( {{\varvec{v}}_{i}^{{\text{T}}} \varvec{x}^{\prime}} \right)^{2} + \frac{{{\text{E}}\left( {{\varvec{u}}_{i}^{{\text{T}}} \varvec{e}^{\prime}\varvec{e}^{{^{\prime}{\text{T}}}} {\varvec{u}}_{i} } \right)}}{{\lambda_{i}^{2} }}}}{{\frac{{\sigma^{2} }}{{\lambda_{i}^{2} }}}} \hfill \\ =&\, \frac{{\left( {{\varvec{v}}_{i}^{{\text{T}}} \varvec{x}^{\prime}} \right)^{2} + \frac{{{\text{Tr}}\left[ {{\varvec{u}}_{i} {\varvec{u}}_{i}^{{\text{T}}} {\text{E}}\left( {\varvec{e}^{\prime}\varvec{e}^{{^{\prime}{\text{T}}}} } \right)} \right]}}{{\lambda_{i}^{2} }}}}{{\frac{{\sigma^{2} }}{{\lambda_{i}^{2} }}}} \hfill \\ = &\, \frac{{\left( {{\varvec{v}}_{i}^{{\text{T}}} \varvec{x}^{\prime}} \right)^{2} }}{{\frac{{\sigma^{2} }}{{\lambda_{i}^{2} }}}} + \frac{{\sigma^{2} }}{{\sigma^{2} }} \hfill \\ = &\, {\text{SNR}}_{i} + 1. \hfill \\ \end{aligned} $$
(A33)
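Corollary 3.1 lends itself to a Monte Carlo check. The sketch below uses hypothetical values for `lam`, `vx`, and `sigma`, and models the spectral observation \({\varvec{u}}_i^{\mathrm{T}}{\varvec{y}}^\prime\) as Gaussian with mean \(\lambda_i {\varvec{v}}_i^{\mathrm{T}}{\varvec{x}}^\prime\) and variance \(\sigma^2\):

```python
import numpy as np

# Monte Carlo spot-check of Corollary 3.1 with hypothetical lam, vx, sigma:
# u_i^T y' ~ N(lambda_i * v_i^T x', sigma^2), so E(T_i) = E[(u_i^T y')^2]/sigma^2
# should equal SNR_i + 1, where SNR_i = lambda_i^2 (v_i^T x')^2 / sigma^2.
rng = np.random.default_rng(2)
lam, vx, sigma = 0.8, 1.5, 0.4
snr = lam**2 * vx**2 / sigma**2          # SNR_i = 9.0 here

samples = lam * vx + sigma * rng.standard_normal(2_000_000)
T_mean = np.mean(samples**2) / sigma**2
print(abs(T_mean - (snr + 1)) < 0.05)    # empirical mean approx SNR_i + 1
```

The "+1" term is the noise contribution: even with zero signal, the squared spectral observation has expectation \(\sigma^2\).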

1.5 Proof of Corollary 3.2

On the one hand, when \({T}_{i}>4\), the derivative of \({B}_{i}\) with respect to \({T}_{i}\) satisfies

$$ \begin{aligned} \frac{{\partial B_{i} }}{{\partial T_{i} }} = &\, \frac{1}{2} + \frac{1}{2}\left( {\frac{1}{4}T_{i}^{2} - T_{i} } \right)^{{ - \frac{1}{2}}} \left( {\frac{1}{2}T_{i} - 1} \right) \\ > &\, \frac{1}{2} + \frac{1}{2}\left( {\frac{1}{4}T_{i}^{2} } \right)^{{ - \frac{1}{2}}} \left( {\frac{1}{2}T_{i} - 1} \right) \\ = &\, \frac{1}{2} + \frac{1}{{T_{i} }}\left( {\frac{1}{2}T_{i} - 1} \right) \\ = &\, 1 - \frac{1}{{T_{i} }} \\ > &\, 0, \\ \end{aligned} $$
(A34)

which means that \({B}_{i}\) increases monotonically with \({T}_{i}\). Thus \({B}_{i}\) attains its minimum at \({T}_{i}=4\), which can be written as

$$ \begin{aligned} B_{i} = &\, \frac{1}{2}T_{i} - 1 + \sqrt {\frac{1}{4}T_{i}^{2} - T_{i} } \\ \ge &\, \frac{1}{2} \times 4 - 1 + \sqrt {\frac{1}{4} \times 4^{2} - 4} \\ = &\, 1. \\ \end{aligned} $$
(A35)

Thus, we have

$$ B_{i} \ge 1. $$
(A36)

On the other hand, for \({T}_{i}>4\),

$$ \begin{aligned} B_{i} = &\, \frac{1}{2}T_{i} - 1 + \sqrt {\frac{1}{4}T_{i}^{2} - T_{i} } \\ < &\, \frac{1}{2}T_{i} - 1 + \sqrt {\frac{1}{4}T_{i}^{2} } \\ = &\, \frac{1}{2}T_{i} - 1 + \frac{1}{2}T_{i} \\ = &\, T_{i} - 1. \\ \end{aligned} $$
(A37)
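The two bounds of Corollary 3.2, \(1 \le B_i < T_i - 1\), can be spot-checked over a range of \(T_i\) values (variable names hypothetical):

```python
import numpy as np

# Numeric spot-check of Corollary 3.2: for T_i >= 4,
# B_i = T_i/2 - 1 + sqrt(T_i^2/4 - T_i) satisfies 1 <= B_i < T_i - 1.
T = np.linspace(4.0, 1000.0, 100000)
B = T/2 - 1 + np.sqrt(T**2/4 - T)
print(bool(np.all(B >= 1.0)), bool(np.all(B < T - 1.0)))  # True True
```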

1.6 Proof of Corollary 3.3

On the one hand, when \({T}_{i}>4\), the derivative of \({P}_{i}\) with respect to \({T}_{i}\) satisfies

$$ \begin{aligned} \frac{{\partial P_{i} }}{{\partial T_{i} }} &\, = \frac{1}{2} - \frac{1}{2}\left( {\frac{1}{4}T_{i}^{2} - T_{i} } \right)^{{ - \frac{1}{2}}} \left( {\frac{1}{2}T_{i} - 1} \right) \\ &\, < \frac{1}{2} - \frac{1}{2}\left( {\frac{1}{4}T_{i}^{2} - T_{i} + 1} \right)^{{ - \frac{1}{2}}} \left( {\frac{1}{2}T_{i} - 1} \right) \\ &\, = \frac{1}{2} - \frac{1}{2}\left( {\frac{1}{2}T_{i} - 1} \right)^{ - 1} \left( {\frac{1}{2}T_{i} - 1} \right) \\ &\, = 0, \end{aligned} $$
(A38)

which means that \({P}_{i}\) decreases monotonically with \({T}_{i}\). Thus \({P}_{i}\) attains its maximum at \({T}_{i}=4\), which can be written as

$$ \begin{aligned} P_{i} = &\, \frac{1}{2}T_{i} - 1 - \sqrt {\frac{1}{4}T_{i}^{2} - T_{i} } \\ \le &\, \frac{1}{2} \times 4 - 1 - \sqrt {\frac{1}{4} \times 4^{2} - 4} \\ = &\, 1. \\ \end{aligned} $$
(A39)

Thus, we have

$$ P_{i} \le 1. $$
(A40)

On the other hand, proving \({P}_{i}>\frac{1}{{T}_{i}-1}\) is equivalent to proving

$$ \frac{1}{2}T_{i} - 1 - \frac{1}{{T_{i} - 1}} > \sqrt {\frac{1}{4}T_{i}^{2} - T_{i} } . $$
(A41)

Since both sides of the inequality are positive when \({T}_{i}>4\), it is equivalent to

$$ \left( {\frac{1}{2}T_{i} - 1 - \frac{1}{{T_{i} - 1}}} \right)^{2} > \frac{1}{4}T_{i}^{2} - T_{i} , $$
(A42)

in which the left side can be expanded to

$$ \begin{aligned} \left( {\frac{1}{2}T_{i} - 1 - \frac{1}{{T_{i} - 1}}} \right)^{2} = &\, \frac{1}{4}T_{i}^{2} - T_{i} + \left( {\frac{1}{{T_{i} - 1}}} \right)^{2} + \frac{1}{{T_{i} - 1}} \\ >&\, \frac{1}{4}T_{i}^{2} - T_{i}. \\ \end{aligned} $$
(A43)

Thus, the original inequality is proved.
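The bounds of Corollary 3.3, \(\frac{1}{T_i-1} < P_i \le 1\), can likewise be spot-checked numerically (variable names hypothetical):

```python
import numpy as np

# Numeric spot-check of Corollary 3.3: for T_i >= 4,
# P_i = T_i/2 - 1 - sqrt(T_i^2/4 - T_i) satisfies 1/(T_i - 1) < P_i <= 1.
# Note P_i * B_i = 1, since P_i and B_i are the roots of x^2 - (T_i - 2)x + 1 = 0.
T = np.linspace(4.0, 1000.0, 100000)
P = T/2 - 1 - np.sqrt(T**2/4 - T)
print(bool(np.all(P <= 1.0)), bool(np.all(P > 1.0/(T - 1.0))))  # True True
```

Because \(P_i B_i = 1\), the lower bound on \(P_i\) is just the reciprocal of the upper bound \(B_i < T_i - 1\) from Corollary 3.2.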

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yu, Y., Yang, L., Shen, Y. et al. An iterative and shrinking generalized ridge regression for ill-conditioned geodetic observation equations. J Geod 98, 3 (2024). https://doi.org/10.1007/s00190-023-01795-1

