Saiprasad Ravishankar

Adaptive Sparse Representations and Their Applications

Saiprasad Ravishankar, 2/11/2015, 4:00-5:00pm, BI 4369

The sparsity of signals and images in a certain transform domain or dictionary has been exploited in many applications in signal and image processing, machine learning, and medical imaging. Analytical sparsifying transforms such as wavelets and the discrete cosine transform (DCT) have been widely used in compression standards. Recently, the data-driven learning of synthesis sparsifying dictionaries has become popular, especially in applications such as denoising, inpainting, and compressed sensing. In this talk, we instead focus on the data-driven adaptation of sparsifying transforms, which offers numerous advantages. We propose formulations for learning square or overcomplete (tall) sparsifying transforms from data. We also discuss specific structures for these adaptive transforms, such as double sparsity or a union of transforms, which enable their efficient learning and usage. The proposed transform learning algorithms typically alternate between a sparse coding step and a transform update step, and are highly efficient. As opposed to sparse coding in the synthesis or noisy analysis models, which is NP-hard, the sparse coding step in transform learning can be performed exactly and cheaply by zeroing out all but a certain number of the transform coefficients of largest magnitude. For the transform update step too, we derive efficient, analytical closed-form solutions in various scenarios. We discuss methods for both batch and online learning of sparsifying transforms. Online learning is particularly useful when dealing with big data, and in signal processing applications such as real-time sparse representation (compression) and denoising. We show the superiority of the various proposed transform learning methods over analytical sparsifying transforms such as the DCT or wavelets for image representation. We also show promising performance in denoising, classification, and blind compressed sensing tasks using the learnt transforms.
Importantly, we establish convergence guarantees for many of the proposed transform learning and image reconstruction schemes; such guarantees were lacking for prior adaptive synthesis dictionary-based methods. All of our proposed approaches are much faster than previous methods, such as those involving learnt synthesis or analysis dictionaries.
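As a rough illustration of why transform-model sparse coding is cheap, the sketch below (not the speaker's actual code; the transform here is just a random orthonormal matrix standing in for a learned W) keeps only the s largest-magnitude entries of Wx and zeroes the rest, which is exactly the thresholding step described in the abstract:

```python
import numpy as np

def transform_sparse_code(W, x, s):
    """Exact transform-model sparse coding: keep the s largest-magnitude
    entries of W @ x and zero out all the others."""
    z = W @ x
    if s < z.size:
        # Indices of the (z.size - s) smallest-magnitude coefficients.
        drop = np.argpartition(np.abs(z), z.size - s)[: z.size - s]
        z[drop] = 0.0
    return z

# Toy usage: a random orthonormal matrix stands in for a learned transform.
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((8, 8)))
x = rng.standard_normal(8)
z = transform_sparse_code(W, x, s=3)
assert np.count_nonzero(z) <= 3
```

Unlike synthesis sparse coding, which requires solving an NP-hard selection problem (or a convex relaxation), this step costs only a matrix-vector product plus a partial sort.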


Saiprasad Ravishankar received the B.Tech. degree in Electrical Engineering from the Indian Institute of Technology Madras in 2008. He received the M.S. and Ph.D. degrees in Electrical and Computer Engineering in 2010 and 2014, respectively, from the University of Illinois at Urbana-Champaign, where he is currently an Adjunct Lecturer in the Department of Electrical and Computer Engineering and a Postdoctoral Research Associate at the Coordinated Science Laboratory. His current research interests include signal and image processing, medical imaging, inverse problems, image analysis, dictionary learning, compressed sensing, machine learning, computer vision, and big data applications.