Following the seminal work of Nesterov, accelerated optimization methods
(sometimes referred to as momentum methods) have been used to powerfully
boost the performance of first-order, gradient-based parameter
estimation in scenarios where second-order optimization strategies are
either inapplicable or impractical. Not only does accelerated gradient
descent converge considerably faster than traditional gradient descent,
but it also performs a more robust local search of the parameter space by
initially overshooting and then oscillating back as it settles into a
final configuration, thereby selecting only local minimizers with an
attraction basin large enough to accommodate the initial overshoot. This
behavior has made accelerated search methods particularly popular within
the machine learning community where stochastic variants have been
proposed as well. So far, however, accelerated optimization methods
have been applied only to searches over finite-dimensional parameter spaces. We show how
a variational framework for these finite-dimensional methods (recently
formulated by Wibisono, Wilson, and Jordan) can be extended to the
infinite-dimensional setting and, in particular, to the manifold of
planar curves in order to yield a new class of accelerated geometric,
PDE-based active contours.
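
To make the finite-dimensional behavior described above concrete, the short sketch below contrasts plain gradient descent with a Nesterov-style momentum update on a toy quadratic objective; the momentum iterate initially overshoots the minimizer and oscillates back before settling, as described in the abstract. The quadratic, step size, and momentum coefficient are illustrative assumptions and the sketch is not the infinite-dimensional PDE formulation developed in the talk.

import numpy as np

# Toy ill-conditioned quadratic f(x) = 0.5 * x^T A x (illustrative choice).
A = np.diag([1.0, 10.0])

def grad(x):
    return A @ x

x_gd = np.array([1.0, 1.0])    # plain gradient descent iterate
x_nag = np.array([1.0, 1.0])   # accelerated (momentum) iterate
v = np.zeros(2)                # velocity / momentum term
lr, mu = 0.05, 0.9             # step size and momentum coefficient (assumed values)

for _ in range(100):
    # Plain gradient descent: step directly downhill.
    x_gd = x_gd - lr * grad(x_gd)

    # Nesterov-style accelerated step: evaluate the gradient at the
    # "look-ahead" point x + mu * v, then update the velocity and the iterate.
    # Tracking x_nag over the loop shows the overshoot-and-oscillate trajectory.
    v = mu * v - lr * grad(x_nag + mu * v)
    x_nag = x_nag + v

print("gradient descent :", x_gd)
print("accelerated (NAG):", x_nag)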