James Queeney
- Phone: 617-621-7511
- Email:
Position:
Research / Technical Staff
Postdoctoral Research Fellow
Education:
Ph.D., Boston University, 2023
Biography
Jimmy conducts research on data-driven methods for decision making and control. During his PhD, he developed reliable deep reinforcement learning algorithms with guarantees on training stability, robustness, and safety.
Recent News & Events
NEWS: MERL researchers present 7 papers at CDC 2024
Date: December 16, 2024 - December 19, 2024
Where: Milan, Italy
MERL Contacts: Ankush Chakrabarty; Vedang M. Deshpande; Stefano Di Cairano; James Queeney; Abraham P. Vinod; Avishai Weiss; Gordon Wichern
Research Areas: Artificial Intelligence, Control, Dynamical Systems, Machine Learning, Multi-Physical Modeling, Optimization, Robotics
Brief: MERL researchers presented 7 papers at the recently concluded Conference on Decision and Control (CDC) 2024 in Milan, Italy. The papers covered a wide range of topics including safety shielding for stochastic model predictive control, reinforcement learning using expert observations, physics-constrained meta learning for positioning, variational-Bayes Kalman filtering, Bayesian measurement masks for GNSS positioning, divert-feasible lunar landing, and centering and stochastic control using constrained zonotopes.
As a sponsor of the conference, MERL maintained a booth for open discussions with researchers and students, and hosted a special session to discuss highlights of MERL research and work philosophy.
In addition, Ankush Chakrabarty (Principal Research Scientist, Multiphysical Systems Team) was an invited speaker in the pre-conference Workshop on "Learning Dynamics From Data" where he gave a talk on few-shot meta-learning for black-box identification using data from similar systems.
Internships with Jimmy
ST0134: Internship - Generalization in Reinforcement Learning
MERL is seeking a motivated and qualified individual to conduct research in the area of reinforcement learning (RL), with a focus on generalization. Topics include robustness, safety, and adaptation in single-agent or multi-agent applications. The ideal candidate will be a PhD student with a solid background in RL or imitation learning. Experience with deep RL implementations is a plus. Publication of the results produced during the internship is expected. Duration of the internship is expected to be 3 months. Start date is flexible.
MERL Publications
- "A Model-Based Approach for Improving Reinforcement Learning Efficiency Leveraging Expert Observations", IEEE Conference on Decision and Control (CDC), December 2024. TR2024-178.
@inproceedings{Ozcan2024dec,
  author = {Ozcan, Erhan Can and Giammarino, Vittorio and Queeney, James and Paschalidis, Ioannis Ch.},
  title = {A Model-Based Approach for Improving Reinforcement Learning Efficiency Leveraging Expert Observations},
  booktitle = {IEEE Conference on Decision and Control (CDC)},
  year = 2024,
  month = dec,
  url = {https://www.merl.com/publications/TR2024-178}
}
- "GRAM: Generalization in Deep RL with a Robust Adaptation Module", arXiv, December 2024.
- "Generalized Policy Improvement Algorithms with Theoretically Supported Sample Reuse", arXiv, October 2024.
- "PIETRA: Physics-Informed Evidential Learning for Traversing Out-of-Distribution Terrain", arXiv, September 2024.
@article{Cai2024sep,
  author = {Cai, Xiaoyi and Queeney, James and Xu, Tong and Datar, Aniket and Pan, Chenhui and Miller, Max and Flather, Ashton and Osteen, Philip R. and Roy, Nicholas and Xiao, Xuesu and How, Jonathan P.},
  title = {PIETRA: Physics-Informed Evidential Learning for Traversing Out-of-Distribution Terrain},
  journal = {arXiv},
  year = 2024,
  month = sep,
  url = {https://www.arxiv.org/abs/2409.03005}
}
- "Visually Robust Adversarial Imitation Learning from Videos with Contrastive Learning", arXiv, June 2024.
Other Publications
- "Opportunities and Challenges from Using Animal Videos in Reinforcement Learning for Navigation", 22nd IFAC World Congress, 2023.
@inproceedings{giammarino_2023_ifac,
  author = {Giammarino, Vittorio and Queeney, James and Carstensen, Lucas C. and Hasselmo, Michael E. and Paschalidis, Ioannis Ch.},
  title = {Opportunities and Challenges from Using Animal Videos in Reinforcement Learning for Navigation},
  booktitle = {22nd IFAC World Congress},
  year = 2023
}
- "Adversarial Imitation Learning from Visual Observations using Latent Information", 2023.
@misc{giammarino_2023_laifo,
  author = {Giammarino, Vittorio and Queeney, James and Paschalidis, Ioannis Ch.},
  title = {Adversarial Imitation Learning from Visual Observations using Latent Information},
  year = 2023
}
- "Reliable Deep Reinforcement Learning: Stable Training and Robust Deployment", 2023, Boston University.
@phdthesis{queeney_2023_dissertation,
  author = {Queeney, James},
  title = {Reliable Deep Reinforcement Learning: Stable Training and Robust Deployment},
  school = {Boston University},
  year = 2023
}
- "Optimal Transport Perturbations for Safe Reinforcement Learning with Robustness Guarantees", 2023.
@misc{queeney_2023_otp,
  author = {Queeney, James and Ozcan, Erhan Can and Paschalidis, Ioannis Ch. and Cassandras, Christos G.},
  title = {Optimal Transport Perturbations for Safe Reinforcement Learning with Robustness Guarantees},
  year = 2023
}
- "Risk-Averse Model Uncertainty for Distributionally Robust Safe Reinforcement Learning", Advances in Neural Information Processing Systems, 2023, vol. 36.
@inproceedings{queeney_2023_ramu,
  author = {Queeney, James and Benosman, Mouhacine},
  title = {Risk-Averse Model Uncertainty for Distributionally Robust Safe Reinforcement Learning},
  booktitle = {Advances in Neural Information Processing Systems},
  year = 2023,
  volume = 36,
  publisher = {Curran Associates, Inc.}
}
- "Generalized Policy Improvement Algorithms with Theoretically Supported Sample Reuse", 2022.
@misc{queeney_2022_gpi,
  author = {Queeney, James and Paschalidis, Ioannis Ch. and Cassandras, Christos G.},
  title = {Generalized Policy Improvement Algorithms with Theoretically Supported Sample Reuse},
  year = 2022
}
- "Generalized Proximal Policy Optimization with Sample Reuse", Advances in Neural Information Processing Systems, 2021, vol. 34.
@inproceedings{queeney_2021_geppo,
  author = {Queeney, James and Paschalidis, Ioannis Ch. and Cassandras, Christos G.},
  title = {Generalized Proximal Policy Optimization with Sample Reuse},
  booktitle = {Advances in Neural Information Processing Systems},
  year = 2021,
  volume = 34,
  publisher = {Curran Associates, Inc.}
}
- "Uncertainty-Aware Policy Optimization: A Robust, Adaptive Trust Region Approach", Proceedings of the AAAI Conference on Artificial Intelligence, 2021, vol. 35, pp. 9377-9385.
@inproceedings{queeney_2021_uatrpo,
  author = {Queeney, James and Paschalidis, Ioannis Ch. and Cassandras, Christos G.},
  title = {Uncertainty-Aware Policy Optimization: A Robust, Adaptive Trust Region Approach},
  booktitle = {Proceedings of the {AAAI} Conference on Artificial Intelligence},
  year = 2021,
  volume = 35,
  pages = {9377--9385},
  publisher = {{AAAI} Press}
}
Software & Data Downloads