sparse-learning.github.io - Sparse Learning

Description: Sparse Learning in Neural Networks and Robust Statistical Analysis

Example domain paragraphs

The great success of deep neural networks is built upon their over-parameterization, which smooths the optimization landscape without degrading the generalization ability. Despite the benefits of over-parameterization, the huge number of parameters makes deep networks cumbersome in everyday applications. On the other hand, training neural networks without over-parameterization faces many practical problems, e.g., being trapped in local optima. Though techniques such as pruning and distillation are devel
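
The paragraph above breaks off at its mention of pruning; as a purely illustrative sketch (not the site's method), one common way to sparsify an over-parameterized network is magnitude pruning, where the smallest-magnitude weights are zeroed out. The function name and percentile-threshold rule below are assumptions for illustration:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping the top (1 - sparsity) fraction."""
    threshold = np.quantile(np.abs(weights), sparsity)  # magnitude cutoff
    mask = np.abs(weights) >= threshold                 # keep only the largest weights
    return weights * mask, mask

# Toy usage: prune 90% of a random weight matrix.
w = np.random.default_rng(0).standard_normal((256, 256))
w_pruned, mask = magnitude_prune(w, sparsity=0.9)
print("remaining fraction of weights:", mask.mean())
```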

Deep-learning-based models have excelled in many computer vision tasks and appear to surpass human performance. However, these models require an avalanche of expensive human-labeled training data and many iterations to train their large number of parameters. This severely limits their scalability to real-world, long-tail-distributed categories in noisy training sets, some of which have a large number of instances but only a few manual annotations. Learning from such extremely limited labeled exa

Boosting, as a gradient descent method, is known as one of the 'best off-the-shelf' methods in machine learning. The Inverse Scale Space method is a Boosting-type algorithm, a restricted gradient descent for sparsity learning whose underlying dynamics are governed by differential inclusions. It is also known as Mirror Descent in optimization and (Linearized) Bregman Iteration in applied mathematics. Such algorithms generate iterative regularization paths with structural sparsity where significant features or param
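
To make the (Linearized) Bregman / Inverse Scale Space idea concrete, here is a minimal NumPy sketch of the Linearized Bregman Iteration applied to sparse least-squares regression. The least-squares loss, the function names, and the choices of kappa and step size are illustrative assumptions, not code from the project:

```python
import numpy as np

def soft_threshold(z, lam):
    """Element-wise soft-thresholding (proximal map of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def linearized_bregman(A, b, kappa=10.0, n_iters=2000):
    """Linearized Bregman Iteration for sparse least-squares regression.

    The iterates trace a regularization path: early iterates are very sparse,
    and coefficients enter the model as the iteration proceeds.
    """
    n, p = A.shape
    # Conservative step size based on the Lipschitz constant of the least-squares gradient.
    alpha = n / (2.0 * kappa * np.linalg.norm(A, 2) ** 2)
    z = np.zeros(p)   # sub-gradient (dual) variable
    w = np.zeros(p)   # sparse primal iterate
    path = []
    for _ in range(n_iters):
        grad = A.T @ (A @ w - b) / n          # gradient of the squared loss
        z -= alpha * grad                     # gradient step on the dual variable
        w = kappa * soft_threshold(z, 1.0)    # primal update via soft-thresholding
        path.append(w.copy())
    return w, np.array(path)

# Toy usage: recover a sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
w_true = np.zeros(50)
w_true[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]
b = A @ w_true + 0.1 * rng.standard_normal(100)
w_hat, path = linearized_bregman(A, b)
print("recovered support:", np.flatnonzero(np.abs(w_hat) > 1e-6))
```

In this sketch, early stopping along the path plays the role of the regularization parameter: significant coefficients typically enter the model early while noise coordinates stay at zero, which is the structural-sparsity behavior described above.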

Links to sparse-learning.github.io (2)