- Date & Time: Wednesday, March 29, 2023; 1:00 PM
Speaker: Zoltan Nagy, The University of Texas at Austin
MERL Host: Ankush Chakrabarty
Research Areas: Control, Machine Learning, Multi-Physical Modeling
Abstract - The decarbonization of buildings presents new challenges for the reliability of the electrical grid because of the intermittency of renewable energy sources and the increase in grid load brought about by end-use electrification. To restore reliability, grid-interactive efficient buildings can provide flexibility services to the grid through demand response. Residential demand response programs are hindered by the need for manual intervention by customers. To maximize the energy flexibility potential of residential buildings, an advanced control architecture is needed. Reinforcement learning (RL) is well-suited for the control of flexible resources because, unlike expert systems, it can adapt to unique building characteristics. Yet, factors hindering the adoption of RL in real-world applications include its large data requirements for training, control security, and generalizability. This talk will cover some of our recent work addressing these challenges. We proposed the MERLIN framework and developed a digital twin of a real-world 17-building grid-interactive residential community in CityLearn. We show that 1) independent RL controllers for batteries improve building- and district-level KPIs compared to a reference RBC by tailoring their policies to individual buildings, 2) despite unique occupant behaviors, transferring the RL policy of any one of the buildings to the other buildings provides comparable performance while reducing the cost of training, and 3) training RL controllers on limited temporal data that does not capture the full seasonality of occupant behavior has little effect on performance. Although the zero-net-energy (ZNE) condition of the buildings could be maintained or worsened by the controlled batteries, KPIs that are typically improved by the ZNE condition (electricity price and carbon emissions) are further improved when the batteries are managed by an advanced controller.
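As a toy illustration of the kind of controller described above (a minimal sketch of our own, not the MERLIN/CityLearn code; all prices, loads, and hyperparameters below are made up), tabular Q-learning can learn a price-responsive battery policy:

```python
import random

# Toy illustration (not the MERLIN/CityLearn setup): tabular Q-learning for a
# single home battery doing price arbitrage. All numbers are hypothetical.
PRICES = [0.1] * 8 + [0.3] * 8 + [0.5] * 8   # $/kWh: cheap night, pricey evening
ACTIONS = [-1, 0, 1]                          # discharge, idle, charge (kWh)
CAPACITY = 4                                  # battery holds 0..4 kWh
LOAD = 1                                      # constant 1 kWh/h building load

def step(hour, soc, action):
    """Apply one battery action; return next state and reward (negative cost)."""
    action = max(-soc, min(CAPACITY - soc, action))  # respect state-of-charge limits
    grid = LOAD + action                             # energy bought from the grid
    return (hour + 1) % 24, soc + action, -PRICES[hour] * grid

random.seed(0)
Q = {(h, s): [0.0, 0.0, 0.0] for h in range(24) for s in range(CAPACITY + 1)}
hour, soc = 0, 0
for _ in range(100_000):                             # epsilon-greedy Q-learning
    a = (random.randrange(3) if random.random() < 0.1
         else max(range(3), key=lambda i: Q[hour, soc][i]))
    nh, ns, r = step(hour, soc, ACTIONS[a])
    Q[hour, soc][a] += 0.1 * (r + 0.95 * max(Q[nh, ns]) - Q[hour, soc][a])
    hour, soc = nh, ns

def greedy(h, s):
    """Learned policy: best action for a given hour and state of charge."""
    return ACTIONS[max(range(3), key=lambda i: Q[h, s][i])]
```

With these hypothetical settings, the learned policy discharges a full battery during the expensive evening hours; the real work lies in scaling such controllers to realistic building models, occupant behavior, and safety constraints.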
-
- Date & Time: Tuesday, March 14, 2023; 1:00 PM
Speaker: Suraj Srinivas, Harvard University
MERL Host: Suhas Lohit
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
Abstract - In this talk, I will discuss our recent research on understanding post-hoc interpretability. I will begin by introducing a characterization of post-hoc interpretability methods as local function approximators, along with the implications of this viewpoint, including a no-free-lunch theorem for explanations. Next, we shall challenge the assumption that post-hoc explanations provide information about a model's discriminative capabilities p(y|x) and demonstrate that many common methods instead rely on a conditional generative model p(x|y). This observation underscores the importance of being cautious when using such methods in practice. Finally, I will propose to resolve this via regularization of model structure, specifically by training low-curvature neural networks, resulting in improved model robustness and stable gradients.
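The local-function-approximator view can be made concrete with a minimal sketch (ours, not from the talk; the toy model and inputs are hypothetical): a saliency-style explanation of a black-box model at an input is the coefficient vector of the best local linear fit, estimated here with central finite differences:

```python
# Minimal sketch of the "explanations as local function approximators" view:
# the explanation of a black-box model f at input x is the gradient, i.e. the
# coefficients of the best local linear approximation. The model f below is a
# hypothetical stand-in, not one from the talk.
def f(x):
    # toy "model": depends strongly on x[0], weakly on x[1], not at all on x[2]
    return 3.0 * x[0] + 0.5 * x[1] ** 2

def local_linear_explanation(f, x, eps=1e-5):
    """Estimate the local linear coefficients of f at x by central differences."""
    grads = []
    for i in range(len(x)):
        hi = x[:i] + [x[i] + eps] + x[i + 1:]
        lo = x[:i] + [x[i] - eps] + x[i + 1:]
        grads.append((f(hi) - f(lo)) / (2 * eps))
    return grads

explanation = local_linear_explanation(f, [1.0, 2.0, 7.0])
```

The third coefficient comes out near zero, correctly reporting that the toy model ignores its third input; the talk's point is that what such local fits reveal about a real network is subtler than it appears.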
-
- Date & Time: Wednesday, March 1, 2023; 1:00 PM
Speaker: Shaowu Pan, Rensselaer Polytechnic Institute
MERL Host: Saviz Mowlavi
Research Areas: Computational Sensing, Data Analytics, Machine Learning
Abstract - High-dimensional spatio-temporal dynamics can often be encoded in a low-dimensional subspace. Engineering applications for modeling, characterization, design, and control of such large-scale systems often rely on dimensionality reduction to make solutions computationally tractable in real-time. Common existing paradigms for dimensionality reduction include linear methods, such as the singular value decomposition (SVD), and nonlinear methods, such as variants of convolutional autoencoders (CAE). However, these encoding techniques lack the ability to efficiently represent the complexity associated with spatio-temporal data, which often requires variable geometry, non-uniform grid resolution, adaptive meshing, and/or parametric dependencies. To resolve these practical engineering challenges, we propose a general framework called Neural Implicit Flow (NIF) that enables a mesh-agnostic, low-rank representation of large-scale, parametric, spatio-temporal data. NIF consists of two modified multilayer perceptrons (MLPs): (i) ShapeNet, which isolates and represents the spatial complexity, and (ii) ParameterNet, which accounts for any other input complexity, including parametric dependencies, time, and sensor measurements. We demonstrate the utility of NIF for parametric surrogate modeling, enabling the interpretable representation and compression of complex spatio-temporal dynamics, efficient many-spatial-query tasks, and improved generalization performance for sparse reconstruction.
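The ShapeNet/ParameterNet split can be sketched in miniature (an illustrative stand-in of our own, not the authors' implementation; both "networks" below are hand-set toy maps rather than trained MLPs): ParameterNet maps the non-spatial inputs, here just time, to the weights of ShapeNet, which is then queried at arbitrary spatial points, making the representation mesh-agnostic:

```python
import math

# Hypothetical miniature of the NIF idea (not the authors' code): a
# "ParameterNet" maps the non-spatial inputs (here just time t) to the
# weights of a tiny "ShapeNet" that is then evaluated at spatial points x.
def shapenet(weights, x):
    """ShapeNet: a one-hidden-unit MLP over space, parameters supplied externally."""
    w1, b1, w2, b2 = weights
    return w2 * math.tanh(w1 * x + b1) + b2

def parameternet(t):
    """ParameterNet: hand-set map from time to ShapeNet weights (illustrative)."""
    return (2.0, 0.0, math.sin(t), 0.0)   # amplitude of the field varies with t

def field(t, xs):
    """Mesh-agnostic evaluation: query the field at arbitrary, non-uniform points."""
    w = parameternet(t)
    return [shapenet(w, x) for x in xs]

snapshot = field(t=math.pi / 2, xs=[0.0, 0.3, 1.7])   # any grid works
```

In NIF proper, both maps are trained MLPs and ParameterNet emits the full ShapeNet weight vector, but the division of labor between spatial complexity and everything else is the same.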
-
- Date & Time: Tuesday, February 28, 2023; 12:00 PM
Speaker: Prof. Kevin Lynch, Northwestern University
MERL Host: Diego Romeres
Research Areas: Machine Learning, Robotics
Abstract - Research at the Center for Robotics and Biosystems at Northwestern University includes bio-inspiration, neuromechanics, human-machine systems, and swarm robotics, among other topics. In this talk I will focus on our work on manipulation, including autonomous in-hand robotic manipulation and safe, intuitive collaborative manipulation involving one or more humans and a team of mobile manipulators.
-
- Date: December 9, 2022
Where: Pittsburgh, PA
MERL Contact: Jonathan Le Roux
Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
Brief - MERL Senior Principal Research Scientist and Speech and Audio Senior Team Leader, Jonathan Le Roux, was invited by Carnegie Mellon University's Language Technologies Institute (LTI) to give a talk as part of the LTI Colloquium Series. The LTI Colloquium is a prestigious series of talks given by experts from across the country related to different areas of language technologies. Jonathan's talk, entitled "Towards general and flexible audio source separation", presented an overview of techniques developed at MERL towards the goal of robustly and flexibly decomposing and analyzing an acoustic scene. In particular, it described the Speech and Audio Team's efforts to extend MERL's early speech separation and enhancement methods to more challenging environments, and to more general and less supervised scenarios.
-
- Date: August 27, 2024 - August 30, 2024
Where: Kyoto, Japan
Research Areas: Control, Machine Learning, Multi-Physical Modeling, Optimization, Robotics
Brief - MERL researcher Rien Quirynen has been appointed as Vice-Chair from Industry of the International Program Committee of the 8th IFAC Conference on Nonlinear Model Predictive Control, which will be held in Kyoto, Japan, in August 2024.
IFAC NMPC is the main symposium focused on the theory, methods, and applications of model predictive control; it includes contributions on control, optimization, and machine learning research, and is held every three years.
-
- Date: February 16, 2023 - February 17, 2023
Where: Pennsylvania State University
MERL Contact: Christopher R. Laughman
Research Areas: Control, Machine Learning, Multi-Physical Modeling
Brief - On February 16 and 17, Chris Laughman, Senior Team Leader of the Multiphysical Systems Team, presented lectures for the Systems, Robotics, and Controls Seminar Series in the School of Engineering, and for the Distinguished Speaker Series in Architectural Engineering. His talk was titled "Architectural Thermofluid Systems: Next-Generation Challenges and Opportunities," and described characteristics of these systems that require specific attention in model-based system engineering processes, as well as MERL research to address these challenges.
-
- Date: January 12, 2023
Awarded to: William T. Freeman, Thouis R. Jones, and Egon C. Pasztor
Awarded by: IEEE Computer Society
Research Areas: Computer Vision, Machine Learning
Brief - The MERL paper entitled, "Example-Based Super-Resolution" by William T. Freeman, Thouis R. Jones, and Egon C. Pasztor, published in a 2002 issue of IEEE Computer Graphics and Applications, has been awarded a 2021 Test of Time Award by the IEEE Computer Society. This work was done while the principal investigator, Prof. Freeman, was a research scientist at MERL; he is now a Professor of Electrical Engineering and Computer Science at MIT.
This best paper award recognizes regular or special issue papers published by the magazine that have made profound and long-lasting research impacts in bridging the theory and practice of computer graphics. "This paper is an early example of using learning for a low-level vision task and we are very proud of the pioneering work that MERL has done in this area prior to the deep learning revolution," says Anthony Vetro, VP & Director at MERL.
-
- Date: December 15, 2022 - December 17, 2022
MERL Contacts: Jianlin Guo; Philip V. Orlik; Kieran Parsons
Research Areas: Artificial Intelligence, Data Analytics, Machine Learning
Brief - The performance of manufacturing systems is heavily affected by downtime – the time period that the system halts production due to system failure, anomalous operation, or intrusion. Therefore, it is crucial to detect and diagnose anomalies to allow predictive maintenance or intrusion detection to reduce downtime. This talk, titled "Anomaly detection and diagnosis in manufacturing systems using autoencoder", focuses on tackling the challenges arising from predictive maintenance in manufacturing systems. It presents a structured autoencoder and a pre-processed autoencoder for accurate anomaly detection, as well as a statistical-based algorithm and an autoencoder-based algorithm for anomaly diagnosis.
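The general recipe behind autoencoder-based anomaly detection can be sketched as follows (a linear stand-in of our own, not the structured or pre-processed autoencoders from the talk; the data and threshold are made up): learn a compressive reconstruction from normal operating data, then flag samples whose reconstruction error exceeds a threshold:

```python
# Sketch of the recipe behind autoencoder-based anomaly detection, using a
# rank-1 linear "autoencoder" as a stand-in for the networks in the talk.
# Data (pairs of correlated sensor readings) and threshold are hypothetical.
normal = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]]

def dominant_direction(data, iters=100):
    """Power iteration on X^T X: the direction normal data varies along."""
    v = (1.0, 0.0)
    for _ in range(iters):
        s = [a * v[0] + b * v[1] for a, b in data]          # s = X v
        w = (sum(si * a for si, (a, b) in zip(s, data)),    # w = X^T s
             sum(si * b for si, (a, b) in zip(s, data)))
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

v = dominant_direction(normal)

def reconstruction_error(sample):
    code = sample[0] * v[0] + sample[1] * v[1]   # encode: 2D -> 1D
    recon = (code * v[0], code * v[1])           # decode: 1D -> 2D
    return ((sample[0] - recon[0]) ** 2 + (sample[1] - recon[1]) ** 2) ** 0.5

THRESHOLD = 0.5   # in practice set from the error distribution on normal data

def is_anomaly(sample):
    return reconstruction_error(sample) > THRESHOLD
```

A sample that breaks the learned sensor correlation, such as (2.0, 0.0), reconstructs poorly and is flagged; samples consistent with normal operation are not. The talk's contribution lies in structuring and pre-processing the autoencoder so this idea works on real manufacturing data, and in the follow-on diagnosis algorithms.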
-
- Date: December 8, 2022
MERL Contacts: Toshiaki Koike-Akino; Pu (Perry) Wang
Research Areas: Artificial Intelligence, Communications, Computational Sensing, Machine Learning, Signal Processing
Brief - On December 8, 2022, MERL researchers Toshiaki Koike-Akino and Pu (Perry) Wang gave a 3.5-hour tutorial presentation at the IEEE Global Communications Conference (GLOBECOM). The talk, titled "Post-Deep Learning Era: Emerging Quantum Machine Learning for Sensing and Communications," addressed recent trends, challenges, and advances in sensing and communications. P. Wang presented on use cases, industry trends, signal processing, and deep learning for Wi-Fi integrated sensing and communications (ISAC), while T. Koike-Akino discussed the future of deep learning, giving a comprehensive overview of artificial intelligence (AI) technologies, natural computing, emerging quantum AI, and their diverse applications. The tutorial was conducted remotely. MERL's quantum AI technology was partly reported in the recent press release (https://us.mitsubishielectric.com/en/news/releases/global/2022/1202-a/index.html).
The IEEE GLOBECOM is a highly anticipated event for researchers and industry professionals in the field of communications. Organized by the IEEE Communications Society, the flagship conference is known for its focus on driving innovation in all aspects of the field. Each year, over 3,000 scientific researchers submit proposals for program sessions at the annual conference. The theme of this year's conference was "Accelerating the Digital Transformation through Smart Communications," and the conference featured a comprehensive technical program with 13 symposia as well as various tutorials and workshops.
-
- Date: December 2, 2022
MERL Contacts: Toshiaki Koike-Akino; Kieran Parsons; Pu (Perry) Wang; Ye Wang
Research Areas: Artificial Intelligence, Computational Sensing, Machine Learning, Signal Processing, Human-Computer Interaction
Brief - Mitsubishi Electric Corporation announced its development of a quantum artificial intelligence (AI) technology that automatically optimizes inference models to downsize the scale of computation with quantum neural networks. The new quantum AI technology can be integrated with classical machine learning frameworks for diverse solutions.
Mitsubishi Electric has confirmed that the technology can be incorporated in the world's first applications for terahertz (THz) imaging, Wi-Fi indoor monitoring, compressed sensing, and brain-computer interfaces. The technology is based on recent research by MERL's Connectivity & Information Processing team and Computational Sensing team.
Mitsubishi Electric's new quantum machine learning (QML) technology realizes compact inference models by fully exploiting the enormous capacity of quantum computers to express a state space that grows exponentially with the number of quantum bits (qubits). In a hybrid combination of quantum and classical AI, the technology can compensate for the limitations of classical AI to achieve superior performance while significantly downsizing the scale of AI models, even when using limited data.
-
- Date: December 2, 2022 - December 8, 2022
MERL Contacts: Matthew Brand; Toshiaki Koike-Akino; Jing Liu; Saviz Mowlavi; Kieran Parsons; Ye Wang
Research Areas: Artificial Intelligence, Control, Dynamical Systems, Machine Learning, Signal Processing
Brief - In addition to 5 papers in recent news (https://www.merl.com/news/news-20221129-1450), MERL researchers presented 2 papers at the NeurIPS Conference Workshop, which was held Dec. 2-8. NeurIPS is one of the most prestigious and competitive international conferences in machine learning.
- “Optimal control of PDEs using physics-informed neural networks” by Saviz Mowlavi and Saleh Nabi
Physics-informed neural networks (PINNs) have recently become a popular method for solving forward and inverse problems governed by partial differential equations (PDEs). By incorporating the residual of the PDE into the loss function of a neural network-based surrogate model for the unknown state, PINNs can seamlessly blend measurement data with physical constraints. Here, we extend this framework to PDE-constrained optimal control problems, for which the governing PDE is fully known and the goal is to find a control variable that minimizes a desired cost objective. We validate the performance of the PINN framework by comparing it to state-of-the-art adjoint-based optimization, which performs gradient descent on the discretized control variable while satisfying the discretized PDE.
- “Learning with noisy labels using low-dimensional model trajectory” by Vasu Singla, Shuchin Aeron, Toshiaki Koike-Akino, Matthew E. Brand, Kieran Parsons, Ye Wang
Noisy annotations in real-world datasets pose a challenge for training deep neural networks (DNNs), detrimentally impacting generalization performance as incorrect labels may be memorized. In this work, we probe the observations that early stopping and low-dimensional subspace learning can help address this issue. First, we show that a prior method is sensitive to the early-stopping hyper-parameter. Second, we investigate the effectiveness of PCA for approximating the optimization trajectory under noisy label information. We propose to estimate the low-rank subspace through robust and structured variants of PCA, namely Robust PCA and Sparse PCA. We find that the subspace estimated through these variants can be less sensitive to early stopping, and can outperform PCA to achieve better test error when trained on noisy labels.
- In addition, new MERL researcher Jing Liu presented a paper entitled “CoPur: Certifiably Robust Collaborative Inference via Feature Purification", based on his work prior to joining MERL. His paper was selected as a spotlight paper, to be highlighted in the lightning talks and featured paper panel.
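The loss structure of the PINN approach to optimal control in the first paper can be sketched on a toy problem (our own simplification, with a direct grid parameterization in place of neural networks; all constants are made up): for the scalar dynamics dx/dt = u(t) with x(0) = 0, gradient descent on a combined loss, cost objective plus boundary and dynamics-residual penalties, recovers a control that drives x(1) toward the target value 1:

```python
# Toy, dependency-free sketch of the PINN-style loss for optimal control
# (our simplification, not the authors' code): minimize
#   (x(1) - 1)^2 + ALPHA * integral(u^2) + x(0)^2 + LAM * integral(residual^2)
# where residual = dx/dt - u, with x and u represented on a grid instead of
# neural networks. All constants are hypothetical.
N, h = 10, 0.1
xs = [0.0] * (N + 1)          # state values at grid points t_i = i*h
us = [0.0] * N                # control values on intervals
ALPHA, LAM, LR = 0.01, 10.0, 1e-3

def residuals():
    return [(xs[i + 1] - xs[i]) / h - us[i] for i in range(N)]

for _ in range(80_000):
    r = residuals()
    gx = [0.0] * (N + 1)
    gx[0] += 2.0 * xs[0]                      # boundary penalty x(0)^2
    gx[N] += 2.0 * (xs[N] - 1.0)              # terminal cost (x(1) - 1)^2
    for i in range(N):                        # residual penalty LAM*h*sum r_i^2
        gx[i] -= 2.0 * LAM * r[i]
        gx[i + 1] += 2.0 * LAM * r[i]
    gu = [2.0 * ALPHA * h * us[i] - 2.0 * LAM * h * r[i] for i in range(N)]
    xs = [x - LR * g for x, g in zip(xs, gx)]
    us = [u - LR * g for u, g in zip(us, gu)]
```

After training, the state trajectory approximately satisfies both the dynamics and the terminal goal; the paper does the same with PDE constraints and neural-network parameterizations, and benchmarks against adjoint-based optimization.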
-
- Date: December 5, 2022
Where: Cancun, Mexico
Research Areas: Control, Machine Learning
Brief - Karl Berntorp was an invited speaker at the workshop on Gaussian Process Learning-Based Control organized at the Conference on Decision and Control (CDC) 2022 in Cancun, Mexico.
The talk was part of a tutorial-style workshop aimed at providing insight into the fundamentals of Gaussian processes for modeling and control, and at sketching some of the open challenges and opportunities in using Gaussian processes for these purposes. The talk, titled "Gaussian Processes for Learning and Control: Opportunities for Real-World Impact", described some of MERL's efforts in using Gaussian processes (GPs) for learning and control, presented several application examples, and discussed some of the key benefits and limitations of using GPs for learning-based control.
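The building block behind GP-based learning for control is standard GP regression, sketched here in one dimension (an illustrative example of ours, not MERL's code; the kernel, noise level, and data are made up):

```python
import math

# Illustrative one-dimensional Gaussian-process regression (not MERL's code):
# an RBF-kernel GP fit to three made-up observations. The posterior mean is
# k(x, X) @ K^{-1} y, the quantity a GP-based controller would use as a
# learned model of an unknown dynamics term.
def rbf(a, b, length=1.0):
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting for small A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

X, y = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]          # hypothetical training data
noise = 1e-6
K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(X)]
     for i, a in enumerate(X)]
alpha = solve(K, y)

def gp_mean(x):
    """Posterior mean prediction at a query point x."""
    return sum(rbf(x, xi) * ai for xi, ai in zip(X, alpha))
```

With near-zero observation noise the posterior mean interpolates the data; the posterior variance (not shown) is what makes GPs attractive for cautious, uncertainty-aware learning-based control.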
-
- Date & Time: Monday, December 12, 2022; 1:00pm-5:30pm ET
Location: Mitsubishi Electric Research Laboratories (MERL)/Virtual
Research Areas: Applied Physics, Artificial Intelligence, Communications, Computational Sensing, Computer Vision, Control, Data Analytics, Dynamical Systems, Electric Systems, Electronic and Photonic Devices, Machine Learning, Multi-Physical Modeling, Optimization, Robotics, Signal Processing, Speech & Audio, Digital Video
Brief - Join MERL's virtual open house on December 12th, 2022! Featuring a keynote, live sessions, research area booths, and opportunities to interact with our research team. Discover who we are and what we do, and learn about internship and employment opportunities.
-
- Date: November 29, 2022 - December 9, 2022
Where: NeurIPS 2022
MERL Contacts: Moitreya Chatterjee; Anoop Cherian; Michael J. Jones; Suhas Lohit
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio
Brief - MERL researchers are presenting 5 papers at the NeurIPS Conference, which will be held in New Orleans from Nov 29 to Dec 1, with virtual presentations in the following week. NeurIPS is one of the most prestigious and competitive international conferences in machine learning.
MERL papers in NeurIPS 2022:
1. “AVLEN: Audio-Visual-Language Embodied Navigation in 3D Environments” by Sudipta Paul, Amit Roy-Chowdhury, and Anoop Cherian
This work proposes a unified multimodal task for audio-visual embodied navigation where the navigating agent can also interact with and seek help from a human/oracle in natural language when it is uncertain of its navigation actions. We propose a multimodal deep hierarchical reinforcement learning framework for solving this challenging task that allows the agent to learn when to seek help and how to use the language instructions. AVLEN agents can interact anywhere in the 3D navigation space and demonstrate state-of-the-art performance when the audio-goal is sporadic or when distractor sounds are present.
2. “Learning Partial Equivariances From Data” by David W. Romero and Suhas Lohit
Group equivariance serves as a good prior that improves data efficiency and generalization for deep neural networks, especially in settings with data or memory constraints. However, if the symmetry groups are misspecified, equivariance can be overly restrictive and lead to poor performance. This paper shows how to build partial group convolutional neural networks that learn, directly from data, the equivariance levels at each layer that are suitable for the task at hand. This improves performance while approximately retaining equivariance properties.
3. “Learning Audio-Visual Dynamics Using Scene Graphs for Audio Source Separation” by Moitreya Chatterjee, Narendra Ahuja, and Anoop Cherian
There often exist strong correlations between the 3D motion dynamics of a sounding source and the sound being heard, especially when the source is moving towards or away from the microphone. In this paper, we propose an audio-visual scene graph that learns and leverages such correlations for improved visually-guided audio separation from an audio mixture, while also allowing prediction of the direction of motion of the sound source.
4. “What Makes a "Good" Data Augmentation in Knowledge Distillation - A Statistical Perspective” by Huan Wang, Suhas Lohit, Michael Jones, and Yun Fu
This paper presents theoretical and practical results for understanding what makes a particular data augmentation technique (DA) suitable for knowledge distillation (KD). We design a simple metric that works very well in practice to predict the effectiveness of DA for KD. Based on this metric, we also propose a new data augmentation technique that outperforms other methods for knowledge distillation in image recognition networks.
5. “FeLMi : Few shot Learning with hard Mixup” by Aniket Roy, Anshul Shah, Ketul Shah, Prithviraj Dhar, Anoop Cherian, and Rama Chellappa
Learning from only a few examples is a fundamental challenge in machine learning. Recent approaches show benefits by learning a feature extractor on the abundant and labeled base examples and transferring these to the fewer novel examples. However, the latter stage is often prone to overfitting due to the small size of few-shot datasets. In this paper, we propose a novel uncertainty-based criterion to synthetically produce “hard” and useful data by mixing up real data samples. Our approach leads to state-of-the-art results on various computer vision few-shot benchmarks.
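The group-equivariance prior discussed in paper 2 above can be illustrated with a minimal example of our own (not from the paper): a circular convolution layer commutes with cyclic shifts of its input, i.e. applying the layer and then shifting gives the same result as shifting and then applying the layer:

```python
# Minimal illustration (ours, not from the paper) of group equivariance:
# a circular convolution (cross-correlation form) commutes with cyclic
# shifts, layer(shift(x)) == shift(layer(x)). Signal and kernel are made up.
def shift(x, s):
    """Cyclically shift a list right by s positions."""
    return x[-s:] + x[:-s]

def circ_conv(x, kernel):
    """Circular cross-correlation of signal x with a small kernel."""
    n, k = len(x), len(kernel)
    return [sum(kernel[j] * x[(i + j) % n] for j in range(k)) for i in range(n)]

signal = [1.0, 2.0, 0.0, -1.0, 3.0, 0.5]
kernel = [0.25, 0.5, 0.25]

out_then_shift = shift(circ_conv(signal, kernel), 2)
shift_then_out = circ_conv(shift(signal, 2), kernel)
```

The two results coincide exactly; the paper's question is how much of this constraint each layer should keep when the assumed symmetry group only partially matches the data.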
-
- Date & Time: Tuesday, November 1, 2022; 1:00 PM
Speaker: Jiajun Wu, Stanford University
MERL Host: Anoop Cherian
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
Abstract - The visual world has its inherent structure: scenes are made of multiple identical objects; different objects may have the same color or material, with a regular layout; each object can be symmetric and have repetitive parts. How can we infer, represent, and use such structure from raw data, without hampering the expressiveness of neural networks? In this talk, I will demonstrate that such structure, or code, can be learned from natural supervision. Here, natural supervision can be from pixels, where neuro-symbolic methods automatically discover repetitive parts and objects for scene synthesis. It can also be from objects, where humans during fabrication introduce priors that can be leveraged by machines to infer regular intrinsics such as texture and material. When solving these problems, structured representations and neural nets play complementary roles: it is more data-efficient to learn with structured representations, and they generalize better to new scenarios with robustly captured high-level information; neural nets effectively extract complex, low-level features from cluttered and noisy visual data.
-
- Date: May 28, 2023 - June 1, 2023
Where: Rome, Italy
Research Areas: Artificial Intelligence, Communications, Computational Sensing, Machine Learning, Signal Processing
Brief - Kyeong Jin Kim, a Senior Principal Research Scientist in the Connectivity & Information Processing Team, is organizing the second international workshop at the 2023 IEEE International Conference on Communications (ICC). The workshop, titled "Industrial Private 5G-and-beyond Wireless Networks," aims to bring together researchers for technical discussion of fundamental and practically relevant questions on the many emerging challenges in industrial private wireless networks. The workshop is co-organized with researchers from industry and academia, including Huawei Technologies, the University of South Florida, Aalborg University, Jinan University, and South China University of Technology. IEEE ICC is one of the IEEE Communications Society's two flagship conferences.
-
- Date & Time: Thursday, October 13, 2022; 1:30pm-2:30pm
Speaker: Prof. Shaoshuai Mou, Purdue University
MERL Host: Yebin Wang
Research Areas: Control, Machine Learning, Optimization
Abstract - Modern society relies more and more on engineering advances in autonomous systems, ranging from individual systems (such as a robotic arm for manufacturing, a self-driving car, or an autonomous vehicle for planetary exploration) to cooperative systems (such as a human-robot team or a swarm of drones). In this talk we will present our most recent progress in developing a fundamental framework for learning and control in autonomous systems. The framework is based on differentiating Pontryagin's Maximum Principle and provides a unified solution to three classes of learning/control tasks: adaptive autonomy, inverse optimization, and system identification. We will also present applications of this framework to human-autonomy teaming, especially in enabling an autonomous system to take guidance from human operators, which is usually sparse and vague.
-
- Date: August 25, 2022
MERL Contact: Anthony Vetro
Research Areas: Dynamical Systems, Machine Learning, Multi-Physical Modeling
Brief - MERL researcher Saleh Nabi was interviewed by Globest.com regarding the use of airflow optimization for smarter energy use and disease prevention. The article, titled "High Tech Airflow Control for Smarter Energy Use: Reducing costs and improving effectiveness means a lot of tricky math," was recently published and describes how solutions to complex fluid-dynamical equations lead to improved HVAC control.
Globest.com is a trusted and independent team of experts providing commercial real estate professionals with comprehensive coverage and best practices necessary to innovate and build their businesses. More details about Globest can be found here: https://www.globest.com/static/about-us/
-
- Date: Thursday, October 6, 2022
Location: Kendall Square, Cambridge, MA
MERL Contacts: Anoop Cherian; Jonathan Le Roux
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio
Brief - SANE 2022, a one-day event gathering researchers and students in speech and audio from the Northeast of the American continent, was held on Thursday October 6, 2022 in Kendall Square, Cambridge, MA.
It was the 9th edition in the SANE series of workshops, which started in 2012 and was held every year, alternating between Boston and New York, until 2019. Since the first edition, the audience had grown to a record 200 participants and 45 posters in 2019. After a 2-year hiatus due to the pandemic, SANE returned with an in-person gathering of 140 students and researchers.
SANE 2022 featured invited talks by seven leading researchers from the Northeast: Rupal Patel (Northeastern/VocaliD), Wei-Ning Hsu (Meta FAIR), Scott Wisdom (Google), Tara Sainath (Google), Shinji Watanabe (CMU), Anoop Cherian (MERL), and Chuang Gan (UMass Amherst/MIT-IBM Watson AI Lab). It also featured a lively poster session with 29 posters.
SANE 2022 was co-organized by Jonathan Le Roux (MERL), Arnab Ghoshal (Apple), John Hershey (Google), and Shinji Watanabe (CMU). SANE remained a free event thanks to generous sponsorship by Bose, Google, MERL, and Microsoft.
Slides and videos of the talks will be released on the SANE workshop website.
-
- Date: October 10, 2022 - October 11, 2022
Where: University of Freiburg, Germany
Research Areas: Control, Machine Learning, Optimization
Brief - Rien Quirynen is an invited speaker at an international workshop on Embedded Optimization and Learning for Robotics and Mechatronics, which is organized by the ELO-X project at the University of Freiburg in Germany. This talk, entitled "Embedded learning, optimization and predictive control for autonomous vehicles", presents recent results from multiple projects at MERL that leverage embedded optimization, machine learning and optimal control for autonomous vehicles.
This workshop is part of the ELO-X Fall School and Workshop. Invited external lecturers will present state-of-the-art techniques and applications in the field of Embedded Optimization and Learning. ELO-X is a Marie Curie Innovative Training Network (ITN) funded by the European Commission Horizon 2020 program.
-
- Date: September 21, 2022
MERL Contacts: Philip V. Orlik; Anthony Vetro
Research Areas: Applied Physics, Artificial Intelligence, Communications, Computational Sensing, Computer Vision, Control, Data Analytics, Dynamical Systems, Electric Systems, Electronic and Photonic Devices, Machine Learning, Multi-Physical Modeling, Optimization, Robotics, Signal Processing, Speech & Audio
Brief - Mitsubishi Electric Research Laboratories (MERL) invites qualified postdoctoral candidates to apply for the position of Postdoctoral Research Fellow. This position provides early career scientists the opportunity to work at a unique, academically-oriented industrial research laboratory. Successful candidates will be expected to define and pursue their own original research agenda, explore connections to established laboratory initiatives, and publish high impact articles in leading venues. Please refer to our web page for further details.
-
- Date & Time: Tuesday, September 6, 2022; 12:00 PM EDT
Speaker: Chuang Gan, UMass Amherst & MIT-IBM Watson AI Lab
MERL Host: Jonathan Le Roux
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio
Abstract - Human sensory perception of the physical world is rich and multimodal and can flexibly integrate input from all five sensory modalities -- vision, touch, smell, hearing, and taste. However, in AI, attention has primarily focused on visual perception. In this talk, I will introduce my efforts in connecting vision with sound, which will allow machine perception systems to see objects and infer physics from multi-sensory data. In the first part of my talk, I will introduce a self-supervised approach that learns to parse images and separate sound sources by watching and listening to unlabeled videos, without requiring additional manual supervision. In the second part of my talk, I will show that we can further infer the underlying causal structure in 3D environments through visual and auditory observations. This enables agents to seek the source of a repeating environmental sound (e.g., an alarm) or to identify what object has fallen, and where, from an intermittent impact sound.
-
- Date: August 22, 2022
MERL Contacts: Chiori Hori; Jonathan Le Roux; Anthony Vetro
Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
Brief - IEEE has announced that the recipient of the 2023 IEEE James L. Flanagan Speech and Audio Processing Award will be Prof. Alex Waibel (CMU/Karlsruhe Institute of Technology), “For pioneering contributions to spoken language translation and supporting technologies.” Mitsubishi Electric Research Laboratories (MERL), which has become the new sponsor of this prestigious award in 2022, extends our warmest congratulations to Prof. Waibel.
MERL Senior Principal Research Scientist Dr. Chiori Hori, who worked with Dr. Waibel at Carnegie Mellon University and collaborated with him as part of national projects on speech summarization and translation, comments on his invaluable contributions to the field: “He has contributed not only to the invention of groundbreaking technology in speech and spoken language processing but also to the promotion of an abundance of research projects through international research consortiums by linking American, European, and Asian research communities. Many of his former laboratory members and collaborators are now leading R&D in the AI field.”
The IEEE Board of Directors established the IEEE James L. Flanagan Speech and Audio Processing Award in 2002 for outstanding contributions to the advancement of speech and/or audio signal processing. This award has recognized the contributions of some of the most renowned pioneers and leaders in their respective fields. MERL is proud to support the recognition of outstanding contributions to the field of speech and audio processing through its sponsorship of this award.
-
- Date: June 8, 2022
Where: 2022 American Control Conference
MERL Contacts: Ankush Chakrabarty; Christopher R. Laughman
Research Areas: Control, Machine Learning, Multi-Physical Modeling, Optimization
Brief - Researchers from EPFL (Wenjie Xu, Colin Jones) and EMPA (Bratislav Svetozarevic), in collaboration with MERL researchers Ankush Chakrabarty and Chris Laughman, recently won the ASME Energy Systems Technical Committee Best Paper Award at the 2022 American Control Conference for their work on "VABO: Violation-Aware Bayesian Optimization for Closed-Loop Performance Optimization with Unmodeled Constraints" out of 19 nominations and 3 finalists. The paper describes a data-driven framework for optimizing the performance of constrained control systems by systematically re-evaluating how cautiously/aggressively one should explore the search space to avoid sustained, large-magnitude constraint violations while tolerating small violations, and demonstrates these methods on a physics-based model of a vapor compression cycle.
-