TR2016-079
Deep Gaussian Conditional Random Field Network: A Model-based Deep Network for Discriminative Denoising
- Vemulapalli, R., Tuzel, C.O., Liu, M.-Y., "Deep Gaussian Conditional Random Field Network: A Model-based Deep Network for Discriminative Denoising", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), DOI: 10.1109/CVPR.2016.351, June 2016, pp. 4801-4809.
@inproceedings{Vemulapalli2016jun2,
  author = {Vemulapalli, Raviteja and Tuzel, C. Oncel and Liu, Ming-Yu},
  title = {Deep Gaussian Conditional Random Field Network: A Model-based Deep Network for Discriminative Denoising},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = 2016,
  pages = {4801--4809},
  month = jun,
  doi = {10.1109/CVPR.2016.351},
  url = {https://www.merl.com/publications/TR2016-079}
}
Abstract:
We propose a novel end-to-end trainable deep network architecture for image denoising based on a Gaussian Conditional Random Field (GCRF) model. In contrast to existing discriminative denoising methods that train a separate model for each individual noise level, the proposed deep network explicitly models the input noise variance and is therefore capable of handling a range of noise levels. Our deep network, which we refer to as the deep GCRF network, consists of two sub-networks: (i) a parameter generation network that generates the pairwise potential parameters based on the noisy input image, and (ii) an inference network whose layers perform the computations involved in an iterative GCRF inference procedure. We train two deep GCRF networks (each operating over a range of noise levels: one for low input noise levels and one for high input noise levels) discriminatively by maximizing the peak signal-to-noise ratio measure. Experiments on the Berkeley segmentation and PASCAL VOC datasets show that the proposed approach produces results on par with the state-of-the-art without training a separate network for each individual noise level.
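To make the inference sub-network concrete, the sketch below illustrates the kind of computation its layers unroll: minimizing a GCRF energy with a data term weighted by the explicit noise variance and a pairwise smoothness term, via Jacobi-style fixed-point updates of the Gaussian posterior mean. This is only a toy illustration under simplifying assumptions, not the paper's architecture: here the pairwise potential is a fixed 4-connected smoothness term with a hand-set weight `lam`, whereas in the paper the pairwise parameters are predicted from the noisy image by the parameter generation network and the whole model is trained end-to-end by maximizing PSNR. The function name, `lam`, and the iteration count are illustrative choices.

# Minimal NumPy sketch of GCRF inference for denoising (toy illustration,
# not the paper's learned network).
import numpy as np

def gcrf_denoise(y, sigma2, lam=1.0, n_iters=50):
    """Denoise image y by minimizing
        sum_i (x_i - y_i)^2 / sigma2  +  lam * sum_{i~j} (x_i - x_j)^2
    over a 4-connected grid, using Jacobi fixed-point updates of the
    Gaussian posterior mean. sigma2 is the (known) noise variance."""
    x = y.copy()
    for _ in range(n_iters):
        # Sum of the four neighbors; replicate padding at the borders
        # (border pixels reuse their own value for the missing neighbor).
        up    = np.vstack([x[:1],  x[:-1]])
        down  = np.vstack([x[1:],  x[-1:]])
        left  = np.hstack([x[:, :1], x[:, :-1]])
        right = np.hstack([x[:, 1:], x[:, -1:]])
        neighbor_sum = up + down + left + right
        # Coordinate-wise optimum of the quadratic energy:
        # x_i = (y_i/sigma2 + lam * sum_j x_j) / (1/sigma2 + 4*lam)
        x = (y / sigma2 + lam * neighbor_sum) / (1.0 / sigma2 + 4.0 * lam)
    return x

# Toy usage: a constant patch corrupted by Gaussian noise.
rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)
sigma = 0.1
noisy = clean + sigma * rng.standard_normal(clean.shape)
denoised = gcrf_denoise(noisy, sigma**2, lam=25.0)
print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((denoised - clean) ** 2))

In the deep GCRF network, the fixed smoothness weight above is replaced by image-dependent pairwise potentials produced by the parameter generation network, and the iterative updates become the layers of the inference network, so the entire pipeline can be trained discriminatively.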
Related News & Events
NEWS: MERL presents three papers at the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Date: June 27, 2016 - June 30, 2016
Where: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV
MERL Contacts: Michael J. Jones; Tim K. Marks
Research Area: Machine Learning
Brief: MERL researchers in the Computer Vision group presented three papers at the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), which had a paper acceptance rate of 29.9%.