Abstract
We introduce the notion of consistent error bound functions, which provides a unifying framework for error bounds for multiple convex sets. This framework goes beyond the classical Lipschitzian and Hölderian error bounds and includes the logarithmic and entropic error bounds found in the exponential cone. It also includes the error bounds obtainable under the theory of amenable cones. Our main result is that the convergence rate of several projection algorithms for feasibility problems can be expressed explicitly in terms of the underlying consistent error bound function. Another feature is the use of Karamata theory and regularly varying functions, which allows us to reason about convergence rates while bypassing certain complicated expressions. Finally, applications to conic feasibility problems are given, and we show that a number of algorithms have convergence rates depending explicitly on the singularity degree of the problem.
Notes
The relevant fact is that if \(\{u_k\},\{v_k\}\) are nonnegative sequences with \(\sum u_k = \infty \) and \(\sum u_kv_k < \infty \), then \(\liminf v_k = 0\).
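This fact can be checked numerically. The example below is our own illustration, not taken from the paper: with \(u_k = 1/k\) and \(v_k = 1\) at powers of 2 and \(1/k\) elsewhere, \(\sum u_k\) diverges, \(\sum u_k v_k\) stays bounded, and the running minimum of \(v_k\) tends to 0 even though \(\limsup v_k = 1\).

```python
# Numerical illustration of the footnote's fact:
# u_k, v_k >= 0, sum u_k = infinity and sum u_k v_k < infinity
# imply liminf v_k = 0.
# Illustrative choice (ours): u_k = 1/k; v_k = 1 when k is a power of 2
# (so limsup v_k = 1), and v_k = 1/k otherwise.

def u(k):
    return 1.0 / k

def v(k):
    # (k & (k - 1)) == 0 tests whether k is a power of 2
    return 1.0 if (k & (k - 1)) == 0 else 1.0 / k

N = 10**5
weighted = sum(u(k) * v(k) for k in range(1, N + 1))  # bounded (< 4)
plain = sum(u(k) for k in range(1, N + 1))            # grows like log N
running_min = min(v(k) for k in range(1, N + 1))      # liminf witness, near 0

print(weighted, plain, running_min)
```

Note that \(v_k\) itself does not converge here; only its inferior limit is forced to 0, which is exactly the strength of the stated fact.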
We note that for iterates \(x_k\) with \(k \ge 2\ell \) but \(k \le \frac{\ell c_1}{\tau }\), the rate is governed by the second expression in (4.18); overall, we have a sublinear convergence rate for all \(k \ge 2\ell \).
The only subtlety is that in the proof of Case 1 in the uniform case, (4.19) holds for all \(k \ge 2\ell \) and there is no need to impose \(k > \ell c_1/\tau \).
Acknowledgements
We thank the referees and the associate editor for their comments, which helped to improve the paper. The authors would like to thank Masaru Ito and Ting Kei Pong for the feedback and helpful comments during the writing of this paper. The first author is supported by ACT-X, Japan Science and Technology Agency (Grant No. JPMJAX210Q). The second author is partially supported by the JSPS Grant-in-Aid for Young Scientists 19K20217 and the Grant-in-Aid for Scientific Research (B)18H03206 and 21H03398.
Communicated by Jérôme Bolte.
Appendix A
Proof
The fact that \(f^{-}(0) = 0\) follows from \(f(0) = 0\) and definition (4.2). We also note that in (4.2), if we increase s, the set after the “\(\inf \)” potentially shrinks, so \(f^{-}\) is monotone nondecreasing. Next, we prove each item.
(i)
Fix any \(s\in (0,\, \sup f)\). Suppose that \(f^{-}(s) = 0\). By definition (4.2), given any \(\epsilon _k > 0\), there exists \(t_k\in [0,\, \epsilon _k]\) such that \(f(t_k)\ge s\). Consequently, there exists a sequence \(t_k\rightarrow 0_+\) with \(f(t_k)\ge s > 0\). This together with \(f(0) = 0\) contradicts the (right)-continuity of f at 0, and thus proves (i).
(ii)
Let \(s\ge 0, t \ge 0\) be such that \(s\le f(t)\). Since f is monotone increasing, \(\sup f\) is never attained, which implies \(0\le s\le f(t) < \sup f\). Furthermore, by definition (4.2), we have \(f^{-}(s)\le t\).
(iii)
Let \(s\ge 0, t \ge 0\) be such that \(s < \sup f\) and \(f(t) < s\). By definition, \(f^{-}(f(t)):=\inf \left\{ u\ge 0: f(u)\ge f(t)\right\} \), therefore \(f^{-}(f(t))\le t\). On the other hand, the strict monotonicity of f implies that there is no \(u < t\) with \(f(u)\ge f(t)\). This implies \(f^{-}(f(t))\ge t\) and thus \(f^{-}(f(t)) = t\). Together with the monotonicity of \(f^{-}\), we obtain \(t = f^{-}(f(t))\le f^{-}(s)\).
(iv)
Suppose that there exists some \(\bar{s}\in (0,\, \sup f)\) such that \(f^{-}\) is not continuous at \(\bar{s}\). Since \(f^{-}\) is monotone, both the left-sided limit \(f^{-}(\bar{s}-)\) and the right-sided limit \(f^{-}(\bar{s}+)\) exist and \(f^{-}(\bar{s}-) < f^{-}(\bar{s}+)\). Fix any \(t\in (f^{-}(\bar{s}-),\, f^{-}(\bar{s}+))\). From the monotonicity of \(f^{-}\), there exists \(\epsilon > 0\) such that whenever \(s_1,s_2\) satisfy \(0< s_1< \bar{s}< s_2 < \sup f\) we have
$$\begin{aligned} f^{-}(s_1)< t - \epsilon< t + \epsilon < f^{-}(s_2). \end{aligned}$$

We now show that \(f(t) = \bar{s}\). Suppose that \(f(t)\ne \bar{s}\). Then either \(f(t) < \bar{s}\) or \(f(t) > \bar{s}\). If \(f(t) < \bar{s}\), let \(s_1 = (f(t) + \bar{s})/2 \in (f(t),\, \bar{s})\). Thus, we know from item (iii) that \(f^{-}(s_1)\ge t\), which contradicts \(f^{-}(s_1) < t - \epsilon \). If \(f(t) > \bar{s}\), let \(s_2 = (f(t) + \bar{s})/2 \in (\bar{s},\, f(t))\). Then, from item (ii), we have \(f^{-}(s_2) \le t\), which contradicts \(t + \epsilon < f^{-}(s_2)\). This proves \(f(t) = \bar{s}\). The arbitrariness of \(t\in (f^{-}(\bar{s}-),\, f^{-}(\bar{s}+))\) contradicts the strict monotonicity of f. Consequently, \(f^{-}\) is continuous on \((0,\, \sup f)\).
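The items above can be sanity-checked numerically. The sketch below is our own illustration, not part of the paper: it approximates the generalized inverse \(f^{-}(s) = \inf \{t\ge 0: f(t)\ge s\}\) of (4.2) by bisection, using the illustrative choice \(f(t) = t^2\) (strictly increasing with \(f(0)=0\)); the helper `f_inv` and its parameters are assumptions of this sketch.

```python
# Numerical sketch of the generalized inverse in (4.2),
#   f^-(s) = inf{ t >= 0 : f(t) >= s },
# for a strictly increasing f with f(0) = 0.
# Illustrative choice: f(t) = t**2 on [0, infinity).

def f(t):
    return t * t

def f_inv(s, hi=100.0, tol=1e-12):
    """Approximate inf{t >= 0 : f(t) >= s} by bisection on [0, hi]."""
    if s <= 0.0:
        return 0.0  # f^-(0) = 0, matching the proof's first observation
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) >= s:
            hi = mid  # mid lies in the set; the infimum is to the left
        else:
            lo = mid  # mid is outside; the infimum is to the right
    return hi

# Item (iii) specialization: f^-(f(t)) = t for strictly increasing f.
t = 1.7
assert abs(f_inv(f(t)) - t) < 1e-8

# Item (ii): s <= f(t) implies f^-(s) <= t (here 2.0 <= f(1.7) = 2.89).
assert f_inv(2.0) <= 1.7

# Monotonicity of f^-, used throughout the proof.
assert f_inv(1.0) <= f_inv(2.0) <= f_inv(3.0)
print("checks passed")
```

For this smooth strictly increasing \(f\), the generalized inverse coincides with the ordinary inverse \(\sqrt{s}\); the interest of (4.2) is that it remains well defined, monotone, and (by item (iv)) continuous even without strict monotonicity assumptions on every piece of \(f\).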
Liu, T., Lourenço, B.F. Convergence Analysis under Consistent Error Bounds. Found Comput Math 24, 429–479 (2024). https://doi.org/10.1007/s10208-022-09586-4