ICML 2009: Online Dictionary Learning for Sparse Coding (Mairal, Bach, Ponce & Sapiro)
=============================================
Sparse coding is a method that models data vectors as sparse linear combinations of basis elements, and it is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses
on learning the basis set, also called the dictionary, to adapt it to specific data. The paper proposes a new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples.
Contributions:
1. Optimization of a smooth, nonconvex objective function over a convex set.
2. An iterative online algorithm that solves this problem by efficiently minimizing, at each step, a quadratic surrogate of the empirical cost over the set of constraints (the surrogate is sketched right after this list).
3. Significantly faster than previous approaches to dictionary learning on both small and large datasets of natural images.
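To make contribution 2 concrete, here is the quadratic surrogate as I understand it from the paper (my notation: x_i is the i-th training signal, α_i the sparse code computed for it with the dictionary available at step i, λ the ℓ1 weight, and C the constraint set described in the problem statement below):

$$
\hat{f}_t(D) \;=\; \frac{1}{t}\sum_{i=1}^{t}\Big(\tfrac{1}{2}\|x_i - D\alpha_i\|_2^2 + \lambda\|\alpha_i\|_1\Big),
\qquad
D_t \;=\; \operatorname*{arg\,min}_{D \in \mathcal{C}} \hat{f}_t(D).
$$

Because the codes α_i are not recomputed for the current dictionary, the surrogate upper-bounds the empirical cost, and minimizing it over D only requires the accumulated matrices A_t = Σ_i α_i α_iᵀ and B_t = Σ_i x_i α_iᵀ rather than all past signals and codes.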
The method is presented in two parts: the problem statement and the online dictionary learning algorithm.

1. Problem Statement
We define ℓ(x, D) as the optimal value of the ℓ1-sparse coding (Lasso) problem for a signal x and a dictionary D (the exact formulation is collected below).
To keep the atoms from growing arbitrarily large, the dictionary is constrained to a convex set C of matrices whose columns have ℓ2-norm at most one.
The joint problem over the dictionary and the sparse codes is not convex, but it is convex with respect to each of the two variables when the other one is fixed, and the algorithm alternates between these two convex subproblems.
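For reference, the formulation as I understand it from the paper (λ is the regularization parameter, m the signal dimension, k the number of atoms):

$$
\ell(x, D) \;\triangleq\; \min_{\alpha \in \mathbb{R}^{k}} \tfrac{1}{2}\|x - D\alpha\|_2^2 + \lambda\|\alpha\|_1,
\qquad
\mathcal{C} \;\triangleq\; \{ D \in \mathbb{R}^{m \times k} : \|d_j\|_2 \le 1,\; j = 1,\dots,k \},
$$

and the goal is to minimize the empirical cost f_n(D) = (1/n) Σ_{i=1}^{n} ℓ(x_i, D) over D ∈ C (or, in the online setting, the expected cost f(D) = E_x[ℓ(x, D)]).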

2. Online Dictionary Learning
The authors observe that a Cholesky-based implementation of the LARS-Lasso algorithm, which provides the whole regularization path, makes the sparse coding step very efficient.
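To see how the pieces fit together, here is a minimal Python sketch of the online loop, written from my reading of the paper rather than taken from the authors' code. For simplicity the sparse coding step uses scikit-learn's coordinate-descent Lasso instead of the Cholesky-based LARS-Lasso the paper advocates, and all function names below (sparse_code, update_dictionary, online_dictionary_learning) are mine.

```python
# Minimal sketch of online dictionary learning in the spirit of the paper.
# Not the authors' implementation; the sparse coding step uses scikit-learn's
# Lasso (coordinate descent) rather than a Cholesky-based LARS-Lasso.
import numpy as np
from sklearn.linear_model import Lasso


def sparse_code(x, D, lam):
    """Solve min_a 0.5*||x - D a||_2^2 + lam*||a||_1 for a single signal x."""
    m = x.shape[0]
    # sklearn's Lasso minimizes (1/(2*n_samples))*||y - Xw||^2 + alpha*||w||_1,
    # so alpha = lam / m matches the objective above up to a constant factor.
    lasso = Lasso(alpha=lam / m, fit_intercept=False, max_iter=1000)
    lasso.fit(D, x)
    return lasso.coef_


def update_dictionary(D, A, B, n_passes=1):
    """Block coordinate descent on the columns of D, projected onto ||d_j|| <= 1."""
    k = D.shape[1]
    for _ in range(n_passes):
        for j in range(k):
            if A[j, j] < 1e-12:          # atom never used so far, skip it
                continue
            u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
            D[:, j] = u / max(np.linalg.norm(u), 1.0)
    return D


def online_dictionary_learning(X, k=100, lam=0.1, n_epochs=1, seed=0):
    """X: (n_samples, m) array of training signals; returns a dictionary of shape (m, k)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    # Initialize atoms with random training signals, scaled to unit norm.
    D = X[rng.choice(n, size=k, replace=False)].T.astype(float)
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    A = np.zeros((k, k))   # accumulated sum of alpha alpha^T
    B = np.zeros((m, k))   # accumulated sum of x alpha^T
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            x = X[i]
            alpha = sparse_code(x, D, lam)   # sparse coding step
            A += np.outer(alpha, alpha)      # update surrogate statistics
            B += np.outer(x, alpha)
            D = update_dictionary(D, A, B)   # dictionary update (surrogate minimization)
    return D
```

Only the matrices A and B, not the past signals and codes, are needed for the dictionary update, which is what makes the algorithm online.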
Optimizing the Algorithm
1. Handling Fixed-Size Datasets
When dealing with large but fixed-size training sets that are cycled over several times, it is impossible to keep adding statistics forever; since the same samples are drawn repeatedly, the old contribution of a sample can be replaced by its new one when it reappears.
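My reading of this, sketched with the names from the loop above: cache each sample's previous code and swap its contribution in the statistics when the sample is revisited. The per-sample cache old_alphas is hypothetical, and the exact bookkeeping in the paper may differ.

```python
# When revisiting sample i, replace its old contribution in A and B instead of
# stacking a second copy on top of it (old_alphas is a hypothetical cache).
alpha_new = sparse_code(X[i], D, lam)
alpha_old = old_alphas[i]                      # code computed on the previous pass
A += np.outer(alpha_new, alpha_new) - np.outer(alpha_old, alpha_old)
B += np.outer(X[i], alpha_new) - np.outer(X[i], alpha_old)
old_alphas[i] = alpha_new
D = update_dictionary(D, A, B)
```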
2. Mini-Batch Extension
Speed up the algorithm by processing several signals at each iteration rather than only one.
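A rough sketch of one mini-batch iteration, reusing the names from the loop above; the batch size and the simple averaging are my simplifications (the paper rescales the old statistics a bit more carefully):

```python
# Mini-batch step: sparse-code several signals, then do one dictionary update.
eta = 256                                              # batch size (arbitrary choice)
batch = X[rng.choice(n, size=eta, replace=False)]
alphas = np.stack([sparse_code(x, D, lam) for x in batch])   # (eta, k) codes
A += alphas.T @ alphas / eta
B += batch.T @ alphas / eta
D = update_dictionary(D, A, B)
```

As a side note, scikit-learn's MiniBatchDictionaryLearning implements essentially this algorithm if you would rather use a ready-made version than reimplement it.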
3. Purging the Dictionary from Unused Atoms
Replace dictionary atoms that are seldom (or never) used, typically with elements of the training set.
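One simple way to do this with the quantities above: the diagonal of A records how much each atom has been used, so atoms whose entry stays near zero can be replaced by random training signals (the threshold below is arbitrary).

```python
# Replace atoms that have (almost) never been used by random training signals.
usage = np.diag(A)
for j in np.where(usage < 1e-6)[0]:
    d = X[rng.integers(n)].astype(float)
    D[:, j] = d / max(np.linalg.norm(d), 1e-12)
```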
The authors also give a clever convergence proof, which is beyond my comprehension.
=====================================================
Pros and Cons:
Pros:
1. This is an outstanding piece of work in the sparse coding area; it gives an efficient, practical way to apply dictionary learning to real-world problems with very large training sets.
Cons:
1. The proofs are difficult to understand; it would be better to give more details, or even a short tutorial, for the reader.