An implementation of sparse coding with dictionary learning that achieves sparsity via an l1-norm regularizer on the codes (lasso) or an (l1+l2)-norm regularizer on the codes (the elastic net).
Public Member Functions

SparseCoding (const arma::mat &data, const size_t atoms, const double lambda1, const double lambda2=0)
Set the parameters to SparseCoding.

const arma::mat & Codes () const
Access the sparse codes.

arma::mat & Codes ()
Modify the sparse codes.

const arma::mat & Data () const
Access the data.

const arma::mat & Dictionary () const
Access the dictionary.

arma::mat & Dictionary ()
Modify the dictionary.

void Encode (const size_t maxIterations=0, const double objTolerance=0.01, const double newtonTolerance=1e-6)
Run Sparse Coding with Dictionary Learning.

double Objective () const
Compute the objective function.

void OptimizeCode ()
Sparse code each point via LARS.

double OptimizeDictionary (const arma::uvec &adjacencies, const double newtonTolerance=1e-6)
Learn the dictionary via Newton's method based on the Lagrange dual.

void ProjectDictionary ()
Project each atom of the dictionary back onto the unit ball, if necessary.

std::string ToString () const

Member Attributes

size_t atoms
Number of atoms.

arma::mat codes
Sparse codes (columns are points).

const arma::mat & data
Data matrix (columns are points).

arma::mat dictionary
Dictionary (columns are atoms).

double lambda1
l1 regularization term.

double lambda2
l2 regularization term.
An implementation of Sparse Coding with Dictionary Learning that achieves sparsity via an l1-norm regularizer on the codes (LASSO) or an (l1+l2)-norm regularizer on the codes (the Elastic Net).
Let d be the number of dimensions in the original space, m the number of training points, and k the number of atoms in the dictionary (the dimension of the learned feature space). The training data X is a d-by-m matrix where each column is a point and each row is a dimension. The dictionary D is a d-by-k matrix, and the sparse codes matrix Z is a k-by-m matrix. This program seeks to minimize the objective:
\[ \min_{D,Z} \; 0.5 \|X - D Z\|_{F}^2 + \lambda_1 \sum_{i=1}^m \|Z_i\|_1 + 0.5 \lambda_2 \sum_{i=1}^m \|Z_i\|_2^2 \]

subject to $\|D_j\|_2 \leq 1$ for $1 \leq j \leq k$, where typically $\lambda_1 > 0$ and $\lambda_2 = 0$.
This problem is solved by an algorithm that alternates between a dictionary learning step and a sparse coding step. The dictionary learning step updates the dictionary D using a Newton method based on the Lagrange dual (see the paper below for details). The sparse coding step involves solving a large number of sparse linear regression problems; this can be done efficiently using LARS, an algorithm that can solve the LASSO or the Elastic Net (papers below).
Here are those papers:
@incollection{lee2007efficient, title = {Efficient sparse coding algorithms}, author = {Honglak Lee and Alexis Battle and Rajat Raina and Andrew Y. Ng}, booktitle = {Advances in Neural Information Processing Systems 19}, editor = {B. Sch\"{o}lkopf and J. Platt and T. Hoffman}, publisher = {MIT Press}, address = {Cambridge, MA}, pages = {801--808}, year = {2007} }
@article{efron2004least, title={Least angle regression}, author={Efron, B. and Hastie, T. and Johnstone, I. and Tibshirani, R.}, journal={The Annals of Statistics}, volume={32}, number={2}, pages={407--499}, year={2004}, publisher={Institute of Mathematical Statistics} }
@article{zou2005regularization, title={Regularization and variable selection via the elastic net}, author={Zou, H. and Hastie, T.}, journal={Journal of the Royal Statistical Society Series B}, volume={67}, number={2}, pages={301--320}, year={2005}, publisher={Royal Statistical Society} }
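To make the alternation concrete, here is a simplified sketch of one way to drive it by hand using only the members documented on this page. It is illustrative only: Encode() performs this loop internally with additional bookkeeping, and the iteration cap and tolerance are arbitrary example values.

#include <cmath>
#include <limits>
#include <mlpack/methods/sparse_coding/sparse_coding.hpp>

using namespace mlpack::sparse_coding;

// Illustrative sketch of the alternating optimization; in normal use, simply
// call sc.Encode() and let it manage this loop itself.
void AlternateManually(SparseCoding<>& sc,
                       const size_t maxIterations,
                       const double objTolerance)
{
  double lastObjective = std::numeric_limits<double>::max();
  for (size_t i = 0; i < maxIterations; ++i)
  {
    // Sparse coding step: one LARS problem per point, with the dictionary fixed.
    sc.OptimizeCode();

    // Indices of nonzero codes, unrolled column by column, as expected by
    // OptimizeDictionary().
    const arma::uvec adjacencies = arma::find(sc.Codes());

    // Dictionary learning step: Newton's method on the Lagrange dual, with the
    // codes fixed; then make sure every atom lies on or inside the unit ball.
    sc.OptimizeDictionary(adjacencies);
    sc.ProjectDictionary();

    // Stop once an iteration no longer improves the objective appreciably.
    const double objective = sc.Objective();
    if (std::abs(lastObjective - objective) < objTolerance)
      break;
    lastObjective = objective;
  }
}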
Before the method is run, the dictionary is initialized using the DictionaryInitializationPolicy class. Possible choices include the RandomInitializer, which provides an entirely random dictionary, the DataDependentRandomInitializer, which provides a random dictionary based loosely on characteristics of the dataset, and the NothingInitializer, which does not initialize the dictionary -- instead, the user should set the dictionary using the Dictionary() mutator method.
Template Parameters:
DictionaryInitializationPolicy The class to use to initialize the dictionary; must have a 'void Initialize(const arma::mat& data, arma::mat& dictionary)' function.
Definition at line 119 of file sparse_coding.hpp.
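As a rough illustration only, a user-supplied policy might look like the sketch below. It follows the Initialize() interface stated above and assumes the dictionary matrix has already been sized by the caller; the exact interface (for instance, whether the atom count is passed as an extra argument) may differ between mlpack versions, and the class name here is hypothetical.

#include <mlpack/core.hpp>

// Hypothetical example policy: initialize each atom with a randomly chosen,
// unit-normalized data point.  Assumes 'dictionary' has already been sized to
// d x k before the call.
class RandomPointInitializer
{
 public:
  static void Initialize(const arma::mat& data, arma::mat& dictionary)
  {
    for (size_t j = 0; j < dictionary.n_cols; ++j)
    {
      // Pick a random column of the data matrix as the j-th atom.
      const arma::uword index = arma::randi<arma::uvec>(
          1, arma::distr_param(0, (int) data.n_cols - 1))[0];
      dictionary.col(j) = arma::normalise(data.col(index));
    }
  }
};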
Set the parameters to SparseCoding. lambda2 defaults to 0.
Parameters:
data Data matrix
atoms Number of atoms in dictionary
lambda1 Regularization parameter for l1-norm penalty
lambda2 Regularization parameter for l2-norm penalty
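For example, a hypothetical end-to-end use of the constructor might look like the following; the file name, atom count, and lambda1 value are made up for illustration.

#include <mlpack/core.hpp>
#include <mlpack/methods/sparse_coding/sparse_coding.hpp>

using namespace mlpack::sparse_coding;

int main()
{
  // Hypothetical dataset; with the default settings, Load() stores points as
  // columns, matching the layout this class expects.
  arma::mat data;
  mlpack::data::Load("data.csv", data, true);

  // 50 atoms and lambda1 = 0.1 are arbitrary example values; lambda2 keeps its
  // default of 0, so the penalty is the plain LASSO.
  SparseCoding<> sc(data, 50, 0.1);

  // Learn the dictionary and the sparse codes (see Encode() below).
  sc.Encode(100);

  const arma::mat& dictionary = sc.Dictionary();  // d x k matrix of atoms.
  const arma::mat& codes = sc.Codes();            // k x m matrix of codes.

  return 0;
}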
Access the sparse codes.
Definition at line 187 of file sparse_coding.hpp.
References mlpack::sparse_coding::SparseCoding< DictionaryInitializer >::codes.
Modify the sparse codes.
Definition at line 189 of file sparse_coding.hpp.
References mlpack::sparse_coding::SparseCoding< DictionaryInitializer >::codes.
Access the data.
Definition at line 179 of file sparse_coding.hpp.
References mlpack::sparse_coding::SparseCoding< DictionaryInitializer >::data.
Access the dictionary.
Definition at line 182 of file sparse_coding.hpp.
References mlpack::sparse_coding::SparseCoding< DictionaryInitializer >::dictionary.
Modify the dictionary.
Definition at line 184 of file sparse_coding.hpp.
References mlpack::sparse_coding::SparseCoding< DictionaryInitializer >::dictionary.
Run Sparse Coding with Dictionary Learning.
Parameters:
maxIterations Maximum number of iterations to run the algorithm. If 0, the algorithm will run until convergence (or forever).
objTolerance Tolerance for objective function. When an iteration of the algorithm produces an improvement smaller than this, the algorithm will terminate.
newtonTolerance Tolerance for the Newton's method dictionary optimization step.
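Continuing the hypothetical example above, an explicit call might be:

// Run at most 50 alternations, stop early once an iteration improves the
// objective by less than 1e-3, and use 1e-6 as the tolerance for the inner
// Newton optimization of the dictionary.  (All values are illustrative.)
sc.Encode(50, 1e-3, 1e-6);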
Compute the objective function.
Sparse code each point via LARS.
Learn the dictionary via Newton's method based on the Lagrange dual.
Parameters:
adjacencies Indices of entries (unrolled column by column) of the coding matrix Z that are non-zero (the adjacency matrix for the bipartite graph of points and atoms).
newtonTolerance Tolerance of the Newton's method optimizer.
Returns:
the norm of the gradient of the Lagrange dual with respect to the dual variables
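Because the indices are unrolled column by column, they can be produced with Armadillo's find() over the coding matrix, for example (continuing the example above):

// Indices of the nonzero entries of Z, in column-major order.
const arma::uvec adjacencies = arma::find(sc.Codes());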
Project each atom of the dictionary back onto the unit ball, if necessary.
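Conceptually this is just a column-wise renormalization; a minimal sketch of the operation described here (not mlpack's actual implementation) is:

#include <mlpack/core.hpp>

// Clip each atom (column) back onto the l2 unit ball if its norm exceeds 1.
void ProjectOntoUnitBall(arma::mat& dictionary)
{
  for (size_t j = 0; j < dictionary.n_cols; ++j)
  {
    const double norm = arma::norm(dictionary.col(j), 2);
    if (norm > 1.0)
      dictionary.col(j) /= norm;
  }
}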
Number of atoms.
Definition at line 196 of file sparse_coding.hpp.
Sparse codes (columns are points).
Definition at line 205 of file sparse_coding.hpp.
Referenced by mlpack::sparse_coding::SparseCoding< DictionaryInitializer >::Codes().
Data matrix (columns are points).
Definition at line 199 of file sparse_coding.hpp.
Referenced by mlpack::sparse_coding::SparseCoding< DictionaryInitializer >::Data().
Dictionary (columns are atoms).
Definition at line 202 of file sparse_coding.hpp.
Referenced by mlpack::sparse_coding::SparseCoding< DictionaryInitializer >::Dictionary().
l1 regularization term.
Definition at line 208 of file sparse_coding.hpp.
l2 regularization term.
Definition at line 211 of file sparse_coding.hpp.
Generated automatically by Doxygen for MLPACK from the source code.