Jing Liu
- Phone: 617-621-7584
- Position: Research / Technical Staff, Visiting Research Scientist
- Education: Ph.D., University of California, San Diego, 2019
Biography
Before joining MERL, Jing was an Illinois Future Faculty Fellow in the Computer Science Department of the University of Illinois Urbana-Champaign (UIUC). Prior to that, he was a Postdoctoral Research Associate at the Coordinated Science Lab of UIUC. His research interests include Trustworthy AI, Distributed Learning and Inference, Robust and Efficient Internet-of-Things (IoT), and Green AI.
Recent News & Events
NEWS: MERL Papers and Workshops at CVPR 2024
Date: June 17, 2024 - June 21, 2024
Where: Seattle, WA
MERL Contacts: Petros T. Boufounos; Moitreya Chatterjee; Anoop Cherian; Michael J. Jones; Toshiaki Koike-Akino; Jonathan Le Roux; Suhas Lohit; Tim K. Marks; Pedro Miraldo; Jing Liu; Kuan-Chuan Peng; Pu (Perry) Wang; Ye Wang; Matthew Brand
Research Areas: Artificial Intelligence, Computational Sensing, Computer Vision, Machine Learning, Speech & Audio
Brief: MERL researchers are presenting 5 conference papers and 3 workshop papers, and are co-organizing two workshops, at the CVPR 2024 conference, which will be held in Seattle, June 17-21. CVPR is one of the most prestigious and competitive international conferences in computer vision. Details of MERL contributions are provided below.
CVPR Conference Papers:
1. "TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models" by H. Ni, B. Egger, S. Lohit, A. Cherian, Y. Wang, T. Koike-Akino, S. X. Huang, and T. K. Marks
This work enables a pretrained text-to-video (T2V) diffusion model to be additionally conditioned on an input image (first video frame), yielding a text+image to video (TI2V) model. Other than using the pretrained T2V model, our method requires no ("zero") training or fine-tuning. The paper uses a "repeat-and-slide" method and diffusion resampling to synthesize videos from a given starting image and text describing the video content.
Paper: https://www.merl.com/publications/TR2024-059
Project page: https://merl.com/research/highlights/TI2V-Zero
2. "Long-Tailed Anomaly Detection with Learnable Class Names" by C.-H. Ho, K.-C. Peng, and N. Vasconcelos
This work aims to identify defects across various classes without relying on hard-coded class names. We introduce the concept of long-tailed anomaly detection, addressing challenges like class imbalance and dataset variability. Our proposed method combines reconstruction and semantic modules, learning pseudo-class names and utilizing a variational autoencoder for feature synthesis to improve performance in long-tailed datasets, outperforming existing methods in experiments.
Paper: https://www.merl.com/publications/TR2024-040
3. "Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling" by X. Liu, Y-W. Tai, C-T. Tang, P. Miraldo, S. Lohit, and M. Chatterjee
This work presents a new strategy for rendering dynamic scenes from novel viewpoints. Our approach stratifies the scene into regions according to the extent of motion in each region, which is determined automatically; regions with higher motion are given a denser spatio-temporal sampling budget for more faithful rendering of the scene (a minimal sketch of this allocation idea appears after the conference paper list). Additionally, to the best of our knowledge, ours is the first work to enable tracking of objects in the scene from novel views, based on the preferences of a user, provided by a click.
Paper: https://www.merl.com/publications/TR2024-042
4. "SIRA: Scalable Inter-frame Relation and Association for Radar Perception" by R. Yataka, P. Wang, P. T. Boufounos, and R. Takahashi
Overcoming the limitations on radar feature extraction such as low spatial resolution, multipath reflection, and motion blurs, this paper proposes SIRA (Scalable Inter-frame Relation and Association) for scalable radar perception with two designs: 1) extended temporal relation, generalizing the existing temporal relation layer from two frames to multiple inter-frames with temporally regrouped window attention for scalability; and 2) motion consistency track with a pseudo-tracklet generated from observational data for better object association.
Paper: https://www.merl.com/publications/TR2024-041
5. "RILA: Reflective and Imaginative Language Agent for Zero-Shot Semantic Audio-Visual Navigation" by Z. Yang, J. Liu, P. Chen, A. Cherian, T. K. Marks, J. L. Roux, and C. Gan
We leverage Large Language Models (LLM) for zero-shot semantic audio visual navigation. Specifically, by employing multi-modal models to process sensory data, we instruct an LLM-based planner to actively explore the environment by adaptively evaluating and dismissing inaccurate perceptual descriptions.
Paper: https://www.merl.com/publications/TR2024-043
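To make the motion-aware sampling idea of Gear-NeRF (conference paper 3 above) concrete, the sketch below allocates a spatio-temporal sampling budget across scene regions in proportion to their motion, with a per-region floor. This is only an illustration under assumed inputs (per-region motion scores and a total sample budget); the function name and the proportional-allocation rule are assumptions, not the authors' implementation.

import numpy as np

def allocate_samples(motion_scores, total_samples, min_per_region=8):
    """Split a spatio-temporal sampling budget across scene regions,
    giving regions with larger motion a denser sampling budget."""
    scores = np.asarray(motion_scores, dtype=float)
    n = len(scores)
    budget = total_samples - min_per_region * n   # reserve a floor for every region
    assert budget >= 0, "total_samples too small for the per-region floor"
    # Split the remaining budget proportionally to motion.
    weights = scores / scores.sum() if scores.sum() > 0 else np.full(n, 1.0 / n)
    counts = min_per_region + np.floor(weights * budget).astype(int)
    # Hand any rounding leftovers to the most dynamic regions.
    for i in np.argsort(-scores)[: total_samples - counts.sum()]:
        counts[i] += 1
    return counts

# Example: three regions, the last one moving the most.
print(allocate_samples([0.1, 0.5, 2.0], total_samples=128))  # -> [12 28 88]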
CVPR Workshop Papers:
1. "CoLa-SDF: Controllable Latent StyleSDF for Disentangled 3D Face Generation" by R. Dey, B. Egger, V. Boddeti, Y. Wang, and T. K. Marks
This paper proposes a new method for generating 3D faces and rendering them to images by combining the controllability of nonlinear 3DMMs with the high fidelity of implicit 3D GANs. Inspired by StyleSDF, our model uses a similar architecture but enforces the latent space to match the interpretable and physical parameters of the nonlinear 3D morphable model MOST-GAN.
Paper: https://www.merl.com/publications/TR2024-045
2. "Tracklet-based Explainable Video Anomaly Localization" by A. Singh, M. J. Jones, and E. Learned-Miller
This paper describes a new method for localizing anomalous activity in video of a scene given sample videos of normal activity from the same scene. The method is based on detecting and tracking objects in the scene and estimating high-level attributes of the objects such as their location, size, short-term trajectory and object class. These high-level attributes can then be used to detect unusual activity as well as to provide a human-understandable explanation for what is unusual about the activity.
Paper: https://www.merl.com/publications/TR2024-057
3. "SuperLoRA: Parameter-Efficient Unified Adaptation for Large Vision Models" by X. Chen, J. Liu, Y. Wang, P. Wang, M. Brand, G. Wang, and T. Koike-Akino
This paper proposes a generalized framework called SuperLoRA that unifies and extends different variants of low-rank adaptation (LoRA). Introducing new options with grouping, folding, shuffling, projection, and tensor decomposition, SuperLoRA offers high flexibility and demonstrates superior performance, with up to a 10-fold gain in parameter efficiency for transfer learning tasks (a minimal sketch of the plain LoRA update it generalizes follows the workshop list below).
Paper: https://www.merl.com/publications/TR2024-062
MERL co-organized workshops:
1. "Multimodal Algorithmic Reasoning Workshop" by A. Cherian, K-C. Peng, S. Lohit, M. Chatterjee, H. Zhou, K. Smith, T. K. Marks, J. Mathissen, and J. Tenenbaum
Workshop link: https://marworkshop.github.io/cvpr24/index.html
2. "The 5th Workshop on Fair, Data-Efficient, and Trusted Computer Vision" by K-C. Peng, et al.
Workshop link: https://fadetrcv.github.io/2024/
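As background for the SuperLoRA workshop paper above (workshop paper 3), the following sketch shows the plain LoRA update that SuperLoRA unifies and extends: a frozen weight matrix is augmented with a trainable low-rank product scaled by alpha/r. This is a minimal sketch of that baseline only; it does not implement SuperLoRA's grouping, folding, shuffling, projection, or tensor-decomposition options, and the class and parameter names are illustrative assumptions.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = base(x) + (alpha/r) * x A^T B^T."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Example: adapt a 512->512 projection with rank-8 updates (~8k trainable
# parameters instead of ~262k for full fine-tuning of that layer).
layer = LoRALinear(nn.Linear(512, 512), r=8)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))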
TALK: [MERL Seminar Series 2024] Sanmi Koyejo presents a talk titled "Are Emergent Abilities of Large Language Models a Mirage?"
Date & Time: Wednesday, March 20, 2024; 1:00 PM
Speaker: Sanmi Koyejo, Stanford University
MERL Host: Jing Liu
Research Areas: Artificial Intelligence, Machine Learning
Abstract: Recent work claims that large language models display emergent abilities: abilities not present in smaller-scale models that are present in larger-scale models. What makes emergent abilities intriguing is two-fold: their sharpness, transitioning seemingly instantaneously from not present to present, and their unpredictability, appearing at seemingly unforeseeable model scales. Here, we present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, emergent abilities appear due to the researcher's choice of metric rather than due to fundamental changes in model behavior with scale. Specifically, nonlinear or discontinuous metrics produce apparent emergent abilities, whereas linear or continuous metrics produce smooth, continuous, predictable changes in model performance. We present our alternative explanation in a simple mathematical model. Via the presented analyses, we provide evidence that alleged emergent abilities evaporate with different metrics or with better statistics, and may not be a fundamental property of scaling AI models.
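The metric argument in the abstract can be made concrete with a toy calculation (a hedged sketch, not the speaker's analysis): assume per-token accuracy improves smoothly with model scale. A linear metric such as mean token accuracy then changes smoothly, while a discontinuous metric such as exact match over an L-token answer, which behaves roughly like (token accuracy)^L, appears to jump from near zero only at the largest scales.

import numpy as np

# Toy model: per-token accuracy improves smoothly with log10(model size).
log_params = np.linspace(7, 11, 9)                         # 10^7 .. 10^11 parameters
token_acc = 1.0 / (1.0 + np.exp(-2.0 * (log_params - 9)))  # smooth logistic curve

L = 20                                                     # answer length in tokens
exact_match = token_acc ** L                               # "all L tokens correct" metric

for n, p, em in zip(log_params, token_acc, exact_match):
    print(f"10^{n:4.1f} params   token_acc={p:.3f}   exact_match={em:.3f}")
# token_acc (a linear, per-token metric) rises gradually, while exact_match
# (a discontinuous, all-or-nothing metric) stays near 0 and only shoots up at
# the largest scales -- an apparent "emergent" ability created by the metric.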
See All News & Events for Jing
Research Highlights
MERL Publications
- "Slaying the HyDRA: Parameter-Efficient Hyper Networks with Low-Displacement Rank Adaptation", Advances in Neural Information Processing Systems (NeurIPS), December 2024.BibTeX TR2024-157 PDF
- @inproceedings{Chen2024dec,
- author = {Chen, Xiangyu and Wang, Ye and Brand, Matthew and Wang, Pu and Liu, Jing and Koike-Akino, Toshiaki}},
- title = {Slaying the HyDRA: Parameter-Efficient Hyper Networks with Low-Displacement Rank Adaptation},
- booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
- year = 2024,
- month = dec,
- url = {https://www.merl.com/publications/TR2024-157}
- }
, - "SuperLoRA: Parameter-Efficient Unified Adaptation of Large Foundation Models", British Machine Vision Conference (BMVC), November 2024.BibTeX TR2024-156 PDF
- @inproceedings{Chen2024nov,
- author = {Chen, Xiangyu and Liu, Jing and Wang, Ye and Wang, Pu and Brand, Matthew and Wang, Guanghui and Koike-Akino, Toshiaki}},
- title = {SuperLoRA: Parameter-Efficient Unified Adaptation of Large Foundation Models},
- booktitle = {British Machine Vision Conference (BMVC)},
- year = 2024,
- month = nov,
- url = {https://www.merl.com/publications/TR2024-156}
- }
, - "Forget to Flourish: Leveraging Model-Unlearning on Pretrained Language Models for Privacy Leakage," Tech. Rep., MS Thesis at Penn State University, November 2024.BibTeX
- @techreport{Rashid2024nov,
- author = {Rashid, Md Rafi Ur and Liu, Jing and Koike-Akino, Toshiaki and Mehnaz, Shagufta and Wang, Ye}},
- title = {Forget to Flourish: Leveraging Model-Unlearning on Pretrained Language Models for Privacy Leakage},
- institution = {MS Thesis at Penn State University},
- year = 2024,
- month = nov
- }
, - "Quantum Diffusion Models for Few-Shot Learning", arXiv, November 2024. ,
- "Analyzing Inference Privacy Risks Through Gradients In Machine Learning", ACM Conference on Computer and Communications Security (CCS), October 2024.BibTeX TR2024-141 PDF
- @inproceedings{Li2024oct,
- author = {Li, Zhuohang and Lowy, Andrew and Liu, Jing and Koike-Akino, Toshiaki and Parsons, Kieran and Malin, Bradley and Wang, Ye}},
- title = {Analyzing Inference Privacy Risks Through Gradients In Machine Learning},
- booktitle = {ACM Conference on Computer and Communications Security (CCS)},
- year = 2024,
- month = oct,
- url = {https://www.merl.com/publications/TR2024-141}
- }
Other Publications
- "Robust mean estimation in high dimensions: An outlier fraction agnostic and efficient algorithm", 2022 IEEE International Symposium on Information Theory (ISIT), 2022, pp. 1115-1120.BibTeX
- @Inproceedings{deshmukh2022robust,
- author = {Deshmukh, Aditya and Liu, Jing and Veeravalli, Venugopal V},
- title = {Robust mean estimation in high dimensions: An outlier fraction agnostic and efficient algorithm},
- booktitle = {2022 IEEE International Symposium on Information Theory (ISIT)},
- year = 2022,
- pages = {1115--1120},
- organization = {IEEE}
- }
, - "CoPur: Certifiably Robust Collaborative Inference via Feature Purification", Advances in Neural Information Processing Systems, 2022.BibTeX
- @Inproceedings{liu2022copur,
- author = {Liu, Jing and Xie, Chulin and Koyejo, Oluwasanmi O and Li, Bo},
- title = {CoPur: Certifiably Robust Collaborative Inference via Feature Purification},
- booktitle = {Advances in Neural Information Processing Systems},
- year = 2022
- }
, - "Rvfr: Robust vertical federated learning via feature subspace recovery", NeurIPS Workshop New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership, 2021.BibTeX
- @Inproceedings{liu2021rvfr,
- author = {Liu, Jing and Xie, Chulin and Kenthapadi, Krishnaram and Koyejo, Sanmi and Li, Bo},
- title = {Rvfr: Robust vertical federated learning via feature subspace recovery},
- booktitle = {NeurIPS Workshop New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership},
- year = 2021
- }
, - "Information flow optimization in inference networks", ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 8289-8293.BibTeX
- @Inproceedings{deshmukh2020information,
- author = {Deshmukh, Aditya and Liu, Jing and Veeravalli, Venugopal V and Verma, Gunjan},
- title = {Information flow optimization in inference networks},
- booktitle = {ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
- year = 2020,
- pages = {8289--8293},
- organization = {IEEE}
- }
, - "Sparse Bayesian learning for robust PCA: Algorithms and analyses", IEEE Transactions on Signal Processing, Vol. 67, No. 22, pp. 5837-5849, 2019.BibTeX
- @Article{liu2019sparse,
- author = {Liu, Jing and Rao, Bhaskar D},
- title = {Sparse Bayesian learning for robust PCA: Algorithms and analyses},
- journal = {IEEE Transactions on Signal Processing},
- year = 2019,
- volume = 67,
- number = 22,
- pages = {5837--5849},
- publisher = {IEEE}
- }
, - "Robust PCA via ℓ0-ℓ1 Regularization", IEEE Transactions on Signal Processing, Vol. 67, No. 2, pp. 535-549, 2018.BibTeX
- @Article{liu2018robust,
- author = {Liu, Jing and Rao, Bhaskar D},
- title = {Robust PCA via ℓ0-ℓ1 Regularization},
- journal = {IEEE Transactions on Signal Processing},
- year = 2018,
- volume = 67,
- number = 2,
- pages = {535--549},
- publisher = {IEEE}
- }
, - "Robust Linear Regression via ℓ0 Regularization", IEEE Transactions on Signal Processing, Vol. 66, No. 3, pp. 698-713, 2017.BibTeX
- @Article{liu2017robust,
- author = {Liu, Jing and Cosman, Pamela C and Rao, Bhaskar D},
- title = {Robust Linear Regression via ℓ0 Regularization},
- journal = {IEEE Transactions on Signal Processing},
- year = 2017,
- volume = 66,
- number = 3,
- pages = {698--713},
- publisher = {IEEE}
- }
Software & Data Downloads