@GuilleAngeris Ah! It's stuff downstream of https://t.co/xhFrpE8U4N, and one of the references to check was https://t.co/VUdkCKoe6X.
@GuilleAngeris Some of the posts in this chain - https://t.co/A4SO6BIaBZ - could be a bit useful. I think that one issue with many of the current conditions is that they are a bit tricky to { establish, compose, etc. }, but are nice tools for studying cert…
@fpedregosa @MatthieuTerris Note that this paper is in the convex case! If you want non-convex arguments, see https://t.co/H6051zIW2J
@konstmish @qberthet @fpedregosa From this result you trivially deduce rates for the distance to minimizers (as mentioned by @konstmish). It comes from a general result based on Lojasiewicz inequalities (valid in the non-convex setting): rates for the iterates are never worse than rates for the values, in short "iterates < values".
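To make the "iterates < values" shorthand concrete, here is a sketch of the transfer argument in LaTeX, assuming a Lojasiewicz-type error bound with exponent p >= 1 and constant c > 0 (the symbols p, c, and S below are illustrative assumptions, not taken from the thread):

```latex
% Sketch under assumptions: S = \operatorname{argmin} f, exponent p >= 1, c > 0.
% A Lojasiewicz-type error bound controls the distance to minimizers
% by the function-value gap:
\[
  \operatorname{dist}(x, S)^{p} \;\le\; c \,\bigl( f(x) - f^{*} \bigr)
  \qquad \text{for } x \text{ near } S,
\]
% so any rate r_k on the values transfers at once to the iterates:
\[
  f(x_k) - f^{*} \;\le\; r_k
  \quad\Longrightarrow\quad
  \operatorname{dist}(x_k, S) \;\le\; (c\, r_k)^{1/p} .
\]
```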
@tylermaunu @sp_monte_carlo For posterity: convergence under the KL condition is quite well studied in both smooth and non-smooth settings. See e.g. Theorem 5 on Gradient Descent in https://t.co/fznwCZBEt7 (and the older references therein) https://t.co/BRp7
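As a quick numerical illustration of that kind of statement (a sketch under stated assumptions, not the Theorem 5 cited above; the matrix A, vector b, and step size below are made up), gradient descent on a Polyak-Lojasiewicz (PL) function, i.e. a KL inequality with exponent 1/2, converges linearly in function values even without strong convexity:

```python
# Hedged sketch: gradient descent on a PL (KL with exponent 1/2) function.
# Least squares with a rank-deficient A is convex and PL, but NOT strongly convex.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) @ np.diag([1.0] * 5 + [0.0] * 5)  # rank 5
b = rng.standard_normal(20)

def f(x):
    return 0.5 * np.sum((A @ x - b) ** 2)

def grad(x):
    return A.T @ (A @ x - b)

svals = np.linalg.svd(A, compute_uv=False)
L = svals[0] ** 2                             # gradient Lipschitz constant
mu = min(s ** 2 for s in svals if s > 1e-10)  # PL constant (smallest nonzero sigma^2)
f_star = f(np.linalg.pinv(A) @ b)             # optimal value

x = np.zeros(10)
for k in range(50):
    x -= (1.0 / L) * grad(x)                  # plain gradient descent, step 1/L

# PL theory predicts f(x_k) - f* <= (1 - mu/L)**k * (f(x_0) - f*).
print(f"observed gap: {f(x) - f_star:.3e}")
print(f"PL bound    : {(1 - mu / L) ** 50 * (f(np.zeros(10)) - f_star):.3e}")
```

The observed gap should sit below the PL bound: the ratio mu/L, not strong convexity, is what drives the linear rate.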
Splitting methods with variable metric for KL functions. http://t.co/CpTmnOePyj
Pierre Frankel, Guillaume Garrigos, Juan Peypouquet: Splitting methods with variable metric for KL functions http://t.co/ppVX92vOfO
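For readers landing here from the link: a minimal sketch of what one variable-metric forward-backward step can look like, assuming f smooth and g = lam * ||x||_1 with a diagonal metric (the problem data and the diagonally dominant metric below are illustrative choices, not the paper's algorithm verbatim):

```python
# Hedged sketch: variable-metric forward-backward splitting for f(x) + g(x),
# with f(x) = 0.5 * ||Ax - b||^2 and g(x) = lam * ||x||_1, metric M = diag(d).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 15))
b = rng.standard_normal(30)
lam = 0.1

def grad_f(x):
    return A.T @ (A @ x - b)

# Diagonal metric dominating the Hessian A^T A (row sums of |A^T A| give a
# diagonally dominant, hence majorizing, diag(d); the paper allows far more
# general variable metrics).
d = np.sum(np.abs(A.T @ A), axis=1)

x = np.zeros(15)
for k in range(200):
    y = x - grad_f(x) / d   # forward (gradient) step in the metric diag(d)
    # backward step: the prox of lam*||.||_1 in metric diag(d) separates into
    # per-coordinate soft-thresholding with thresholds lam / d_i
    x = np.sign(y) * np.maximum(np.abs(y) - lam / d, 0.0)

print("final objective:", 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x)))
```

Keeping the metric diagonal keeps the backward (prox) step closed-form; a full matrix M_k would require an inner solve at each iteration.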