  1. Subgradient Descent

    Thu 11 April 2013

    Not far from Gradient Descent is another first-order descent algorithm (that is, an algorithm that relies only on the first derivative): Subgradient Descent. In implementation, the two are in fact identical. The only difference lies in the assumptions placed on the objective function we wish to minimize, \(f(x)\). If …
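    Since the excerpt notes that the two implementations are identical, here is a minimal sketch of the shared update, assuming a user-supplied subgradient oracle; the names `subgrad` and `step_size` and the \(|x|\) example are illustrative, not taken from the post.

    ```python
    import numpy as np

    def subgradient_descent(subgrad, x0, step_size, n_iters=100):
        # Same update rule as gradient descent, except that subgrad(x)
        # may return *any* subgradient of f at x, so f need not be
        # differentiable everywhere.  (Placeholder names, for illustration.)
        x = np.asarray(x0, dtype=float)
        for t in range(n_iters):
            g = subgrad(x)
            x = x - step_size(t) * g
        return x

    # Example: f(x) = |x| is convex but not differentiable at 0;
    # np.sign(x) is a valid subgradient everywhere.
    x_min = subgradient_descent(np.sign, x0=[5.0],
                                step_size=lambda t: 1.0 / (t + 1))
    ```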

  2. Gradient Descent

    Wed 10 April 2013

    Gradient Descent is perhaps the most intuitive of all optimization algorithms. Imagine you're standing on the side of a mountain and want to reach the bottom. You'd probably do something like this (a rough code sketch follows the steps):

    1. Look around you and see which way points the most downwards
    2. Take a step in that direction, then …
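    As a rough sketch of those two steps in code (the helper names `grad` and `lr` and the quadratic example are illustrative, not taken from the post):

    ```python
    import numpy as np

    def gradient_descent(grad, x0, lr=0.1, n_iters=100):
        # "Which way points the most downwards" is the negative gradient;
        # "take a step" scales it by a learning rate.  (Placeholder names.)
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iters):
            x = x - lr * grad(x)
        return x

    # Example: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
    x_min = gradient_descent(lambda x: 2 * (x - 3), x0=[0.0])  # approx. [3.0]
    ```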

  3. ADMM: parallelizing convex optimization

    Sun 24 June 2012

    In the previous post, we considered Stochastic Gradient Descent, a popular method for optimizing "separable" functions (that is, functions that are purely sums of other functions) in a large, distributed environment. However, Stochastic Gradient Descent is not the only algorithm out there.

    So why consider anything else? First of all …
