First-Order Methods in Large-Scale Semidefinite Optimization
eBook - PDF

  1. 205 pages
  2. English
  3. PDF
About this book

Semidefinite Optimization has attracted the attention of many researchers over the last twenty years. Nowadays it has a huge variety of applications in fields as diverse as Control, Structural Design, and Statistics, as well as in the relaxation of hard combinatorial problems. In this thesis, we focus on the practical tractability of large-scale semidefinite optimization problems. From a theoretical point of view, these problems can be solved approximately by polynomial-time Interior-Point methods. The complexity estimate of Interior-Point methods grows logarithmically in the inverse of the solution accuracy, but with order 3.5 in both the matrix size and the number of constraints. The latter property prohibits the resolution of large-scale problems in practice.

In this thesis, we present new approaches based on advanced First-Order methods, such as Smoothing Techniques and Mirror-Prox algorithms, for solving structured large-scale semidefinite optimization problems up to a moderate accuracy. These methods require a very specific problem format, with which generic semidefinite optimization problems do not comply. In a preliminary step, we therefore recast slightly structured semidefinite optimization problems in an alternative form to which these methods are applicable, namely as matrix saddle-point problems. The resulting methods have a complexity estimate that depends linearly on both the number of constraints and the inverse of the target accuracy.

Smoothing Techniques constitute a two-stage procedure: we first derive a smooth approximation of the objective function and then apply an optimal First-Order method to the adapted problem. We present a refined version of this optimal First-Order method in this thesis. The worst-case complexity result for this modified scheme is of the same order as for the original method.
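The smoothing step can be illustrated on the maximum-eigenvalue function, the prototypical non-smooth objective in matrix saddle-point problems. The sketch below uses the standard entropy (log-sum-exp) smoothing from this literature; it is an illustration under that assumption, not code from the book, and the function name is ours:

```python
import numpy as np

def smoothed_lambda_max(X, mu):
    """Entropy-smoothed approximation of lambda_max(X) for symmetric X.

    f_mu(X) = mu * log(trace(exp(X / mu))) is smooth and satisfies
        lambda_max(X) <= f_mu(X) <= lambda_max(X) + mu * log(n),
    so the parameter mu trades approximation accuracy for smoothness.
    """
    lam = np.linalg.eigvalsh(X)      # eigenvalues of the symmetric matrix
    m = lam.max()                    # shift avoids overflow in exp()
    return m + mu * np.log(np.exp((lam - m) / mu).sum())

# Example: the gap to the true maximum eigenvalue shrinks with mu.
A = np.diag([3.0, 1.0, 0.0])
approx_coarse = smoothed_lambda_max(A, mu=1.0)    # within log(3) of 3.0
approx_fine = smoothed_lambda_max(A, mu=0.01)     # essentially 3.0
```

Evaluating the smoothed function already requires an eigendecomposition (equivalently, a matrix exponential), which is exactly the per-iteration cost discussed below.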
However, numerical results show that this alternative scheme needs far fewer iterations than its original counterpart to find an approximate solution in practice. Using this refined version of the optimal First-Order method in Smoothing Techniques, we are able to solve randomly generated matrix saddle-point problems involving a hundred matrices of size 12,800 x 12,800 up to an absolute accuracy of 0.0012 in about four hours.

Smoothing Techniques and Mirror-Prox methods require the computation of one or two matrix exponentials at every iteration when applied to the matrix saddle-point problems obtained from the above transformation step. Using standard techniques, the cost of exponentiating a symmetric matrix grows cubically in the size of the matrix. Clearly, this operation limits the class of problems that can be solved by Smoothing Techniques and Mirror-Prox methods in practice. We present a randomized Mirror-Prox method in which we replace the exact matrix exponential by a stochastic approximation. This randomized method outperforms all of its competitors with respect to the theoretical complexity estimate on a significant class of large-scale matrix saddle-point problems. Furthermore, we show numerical results where the randomized method needs only about 58% of the CPU time of its deterministic counterpart for approximately solving randomly generated matrix saddle-point problems with a hundred matrices of size 800 x 800.

As a side result of this thesis, we show that the Hedge algorithm, a method that is heavily used in Theoretical Computer Science, can be interpreted as a Dual Averaging scheme. Embedding the Hedge algorithm in the framework of Dual Averaging schemes allows us to derive three new versions of this algorithm. The efficiency guarantees of these modified Hedge algorithms are at least as good as, and sometimes better than, the complexity estimates of the original method.
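For readers unfamiliar with it, the classic Hedge algorithm maintains a distribution over n "experts" and exponentially down-weights those that incur loss; the thesis's observation is that this multiplicative update coincides with a Dual Averaging step using the entropy prox-function. A minimal sketch of the classic algorithm (variable names ours, not the book's):

```python
import numpy as np

def hedge(losses, eta):
    """Classic Hedge / multiplicative-weights over n experts.

    losses: (T, n) array of per-round losses in [0, 1]
    eta:    step size; eta = sqrt(2 * log(n) / T) yields the
            standard O(sqrt(T log n)) regret guarantee
    Returns the (T, n) array of distributions played each round.
    """
    T, n = losses.shape
    w = np.ones(n)                        # uniform initial weights
    played = np.empty((T, n))
    for t in range(T):
        played[t] = w / w.sum()           # normalize weights to a distribution
        w *= np.exp(-eta * losses[t])     # exponentially punish lossy experts
    return played

# Expert 0 always loses 0, expert 1 always loses 1:
# the weight mass concentrates on expert 0 over time.
losses = np.tile([0.0, 1.0], (50, 1))
p = hedge(losses, eta=0.5)
```

The matrix version discussed in Chapter 7 replaces the weight vector by a positive semidefinite matrix and the componentwise exponential by the matrix exponential, which is where the cubic per-iteration cost above comes from.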
We present numerical experiments in which the refined methods significantly outperform their vanilla counterparts.

Information

Year
2012
Print ISBN
9783954041329
eBook ISBN
9783736941328
Edition
1

Table of contents

  1. Acknowledgments
  2. Abstract
  3. Zusammenfassung
  4. Contents
  5. Chapter 1 Introduction
  6. Chapter 2 Convex Optimization and computational tractability
  7. Chapter 3 Black-Box optimization methods
  8. Chapter 4 Solution methods in Structural Optimization
  9. Chapter 5 Hedge algorithm and Dual Averaging schemes
  10. Chapter 6 An introduction to large-scale Semidefinite Optimization
  11. Chapter 7 A matrix version of the Hedge algorithm in Semidefinite Optimization
  12. Chapter 8 From semidefinite optimization problems to matrix saddle-point problems
  13. Chapter 9 Smoothing Techniques for matrix saddle-point problems
  14. Chapter 10 Applying randomized Mirror-Prox methods
  15. Chapter 11 Numerical results
  16. Chapter 12 Conclusions and outlook
  17. Appendix A Regularity of norms
  18. Appendix B Proofs
  19. Bibliography