- Date: October 17, 2024
Awarded to: Niccolò Turcato, Alberto Dalla Libera, Giulio Giacomuzzo, Ruggero Carli, Diego Romeres
MERL Contact: Diego Romeres
Research Areas: Artificial Intelligence, Dynamical Systems, Machine Learning, Robotics
Brief - The team composed of the control group at the University of Padua and MERL's Optimization and Robotics team ranked 1st among the 4 finalist teams that reached the 2nd AI Olympics with RealAIGym competition at IROS 2024, which focused on the control of under-actuated robots. The team was composed of Niccolò Turcato, Alberto Dalla Libera, Giulio Giacomuzzo, Ruggero Carli, and Diego Romeres. The competition was organized by the German Research Center for Artificial Intelligence (DFKI), Technical University of Darmstadt, and Chalmers University of Technology.
The competition and award ceremony were hosted at the IEEE International Conference on Intelligent Robots and Systems (IROS) on October 17, 2024 in Abu Dhabi, UAE. Diego Romeres presented the team's method, based on a model-based reinforcement learning algorithm called MC-PILCO.
-
- Date: August 29, 2024
Awarded to: Yoshiki Masuyama, Gordon Wichern, Francois G. Germain, Christopher Ick, and Jonathan Le Roux
MERL Contacts: François Germain; Jonathan Le Roux; Gordon Wichern
Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
Brief - MERL's Speech & Audio team ranked 1st out of 7 teams in Task 2 of the 1st SONICOM Listener Acoustic Personalisation (LAP) Challenge, which focused on "Spatial upsampling for obtaining a high-spatial-resolution HRTF from a very low number of directions". The team was led by Yoshiki Masuyama, and also included Gordon Wichern, Francois Germain, MERL intern Christopher Ick, and Jonathan Le Roux.
The LAP Challenge workshop and award ceremony was hosted by the 32nd European Signal Processing Conference (EUSIPCO 24) on August 29, 2024 in Lyon, France. Yoshiki Masuyama presented the team's method, "Retrieval-Augmented Neural Field for HRTF Upsampling and Personalization", and received the award from Prof. Michele Geronazzo (University of Padova, IT, and Imperial College London, UK), Chair of the Challenge's Organizing Committee.
The LAP challenge aims to explore challenges in the field of personalized spatial audio, with the first edition focusing on the spatial upsampling and interpolation of head-related transfer functions (HRTFs). HRTFs with dense spatial grids are required for immersive audio experiences, but recording them is time-consuming. Although HRTF spatial upsampling has recently shown remarkable progress with approaches involving neural fields, HRTF estimation accuracy remains limited when upsampling from only a few measured directions, e.g., 3 or 5 measurements. The MERL team tackled this problem by proposing a retrieval-augmented neural field (RANF). RANF retrieves, from a library of subjects, a subject whose HRTFs at the measured directions are close to those of the target subject. The HRTF of the retrieved subject at the target direction is then fed into the neural field in addition to the desired sound source direction. The team also developed a neural network architecture that can handle an arbitrary number of retrieved subjects, inspired by a multi-channel processing technique called transform-average-concatenate.
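As a rough illustration of the retrieval-plus-conditioning idea, here is a minimal sketch of a RANF-style model. The L2 retrieval score, the small MLP neural field, and all dimensions are illustrative assumptions, not the team's actual implementation.

```python
# Minimal sketch of a retrieval-augmented neural field (RANF) for HRTF upsampling.
import torch
import torch.nn as nn

def retrieve_subject(measured_hrtf, library_hrtfs):
    """Pick the library subject whose HRTFs at the measured directions are closest (L2)
    to the target subject's measurements.
    measured_hrtf: (D_meas, F); library_hrtfs: (S, D_meas, F)."""
    dists = ((library_hrtfs - measured_hrtf.unsqueeze(0)) ** 2).sum(dim=(1, 2))
    return dists.argmin()

class RANF(nn.Module):
    def __init__(self, num_freq=128, hidden=256):
        super().__init__()
        # Input: 3-D source direction + retrieved subject's HRTF at that direction.
        self.net = nn.Sequential(
            nn.Linear(3 + num_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_freq),
        )

    def forward(self, direction, retrieved_hrtf):
        # direction: (B, 3) unit vectors; retrieved_hrtf: (B, num_freq)
        return self.net(torch.cat([direction, retrieved_hrtf], dim=-1))

# Toy usage with random data.
library_sparse = torch.randn(50, 5, 128)   # library HRTFs at the 5 measured directions
library_target = torch.randn(50, 128)      # library HRTFs at the target direction
measured = torch.randn(5, 128)             # target subject's sparse measurements
idx = retrieve_subject(measured, library_sparse)
model = RANF()
direction = torch.tensor([[0.0, 0.0, 1.0]])            # target source direction
pred = model(direction, library_target[idx].unsqueeze(0))
```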
-
- Date: January 1, 2024
Awarded to: Jonathan Le Roux
MERL Contact: Jonathan Le Roux
Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
Brief - MERL Distinguished Scientist and Speech & Audio Senior Team Leader Jonathan Le Roux has been elevated to IEEE Fellow, effective January 2024, "for contributions to multi-source speech and audio processing."
Mitsubishi Electric celebrated Dr. Le Roux's elevation and that of another researcher from the company, Dr. Shumpei Kameyama, with a worldwide news release on February 15.
Dr. Jonathan Le Roux has made fundamental contributions to the field of multi-speaker speech processing, especially to the areas of speech separation and multi-speaker end-to-end automatic speech recognition (ASR). His contributions constituted a major advance in realizing a practically usable solution to the cocktail party problem, enabling machines to replicate humans’ ability to concentrate on a specific sound source, such as a certain speaker within a complex acoustic scene—a long-standing challenge in the speech signal processing community. Additionally, he has made key contributions to the measures used for training and evaluating audio source separation methods, developing several new objective functions to improve the training of deep neural networks for speech enhancement, and analyzing the impact of metrics used to evaluate the signal reconstruction quality. Dr. Le Roux’s technical contributions have been crucial in promoting the widespread adoption of multi-speaker separation and end-to-end ASR technologies across various applications, including smart speakers, teleconferencing systems, hearables, and mobile devices.
IEEE Fellow is the highest grade of membership of the IEEE. It honors members with an outstanding record of technical achievements, contributing importantly to the advancement or application of engineering, science and technology, and bringing significant value to society. Each year, following a rigorous evaluation procedure, the IEEE Fellow Committee recommends a select group of recipients for elevation to IEEE Fellow. Less than 0.1% of voting members are selected annually for this member grade elevation.
-
- Date: December 15, 2023
Awarded to: Lingfeng Sun, Devesh K. Jha, Chiori Hori, Siddharth Jain, Radu Corcodel, Xinghao Zhu, Masayoshi Tomizuka and Diego Romeres
MERL Contacts: Radu Corcodel; Chiori Hori; Siddarth Jain; Devesh K. Jha; Diego Romeres
Research Areas: Artificial Intelligence, Machine Learning, Robotics
Brief - MERL researchers received an Honorable Mention Award at the Workshop on Instruction Tuning and Instruction Following at the NeurIPS 2023 conference in New Orleans. The workshop focused on instruction tuning and instruction following for Large Language Models (LLMs). MERL researchers presented their work on interactive planning using LLMs for partially observable robotic tasks during the workshop's oral presentation session.
-
- Date: December 16, 2023
Awarded to: Zexu Pan, Gordon Wichern, Yoshiki Masuyama, Francois Germain, Sameer Khurana, Chiori Hori, and Jonathan Le Roux
MERL Contacts: François Germain; Chiori Hori; Sameer Khurana; Jonathan Le Roux; Gordon Wichern
Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
Brief - MERL's Speech & Audio team ranked 1st out of 12 teams in the 2nd COG-MHEAR Audio-Visual Speech Enhancement Challenge (AVSE). The team was led by Zexu Pan, and also included Gordon Wichern, Yoshiki Masuyama, Francois Germain, Sameer Khurana, Chiori Hori, and Jonathan Le Roux.
The AVSE challenge aims to design better speech enhancement systems by harnessing the visual aspects of speech (such as lip movements and gestures) in a manner similar to the brain's multi-modal integration strategies. MERL's system was a scenario-aware audio-visual TF-GridNet that incorporates the face recording of a target speaker as a conditioning factor and also recognizes whether the predominant interference signal is speech or background noise. In addition to outperforming all competing systems in terms of objective metrics by a wide margin, in a listening test, MERL's model achieved the best overall word intelligibility score of 84.54%, compared to 57.56% for the baseline and 80.41% for the next best team. The Fisher's least significant difference (LSD) was 2.14%, indicating that MERL's model offered statistically significant speech intelligibility improvements over all other systems.
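The sketch below shows one simple way such scenario-aware audio-visual conditioning could look: a visual embedding is fused into the audio features, and a small head predicts whether the interference is speech or noise. The fusion scheme, layer sizes, and two-way scenario head are illustrative assumptions; the actual system is built on TF-GridNet and is far more elaborate.

```python
# Minimal sketch of scenario-aware audio-visual conditioning (illustrative only).
import torch
import torch.nn as nn

class ScenarioAwareFusion(nn.Module):
    def __init__(self, audio_dim=256, visual_dim=512, hidden=256):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, audio_dim)
        # Classifies whether the dominant interference is speech (0) or background noise (1).
        self.scenario_head = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU(),
                                           nn.Linear(hidden, 2))
        # Per-scenario embeddings added to the fused features.
        self.scenario_emb = nn.Embedding(2, audio_dim)

    def forward(self, audio_feats, visual_feats):
        # audio_feats: (B, T, audio_dim); visual_feats: (B, T, visual_dim)
        fused = audio_feats + self.visual_proj(visual_feats)
        scenario_logits = self.scenario_head(fused.mean(dim=1))   # (B, 2)
        scenario = scenario_logits.argmax(dim=-1)
        fused = fused + self.scenario_emb(scenario).unsqueeze(1)
        return fused, scenario_logits

fusion = ScenarioAwareFusion()
audio, video = torch.randn(2, 100, 256), torch.randn(2, 100, 512)
out, logits = fusion(audio, video)
```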
-
- Date: August 25, 2023
Awarded to: Alberto Dalla Libera, Niccolo' Turcato, Giulio Giacomuzzo, Ruggero Carli, Diego Romeres
MERL Contact: Diego Romeres
Research Areas: Artificial Intelligence, Machine Learning, Robotics
Brief - A joint team consisting of members of the University of Padua and MERL ranked 1st in the IJCAI 2023 Challenge "AI Olympics With RealAIGym: Is AI Ready for Athletic Intelligence in the Real World?". The team was composed of MERL researcher Diego Romeres and a group from the University of Padua (UniPD): Alberto Dalla Libera, Ph.D., Ph.D. candidates Niccolò Turcato and Giulio Giacomuzzo, and Prof. Ruggero Carli.
The International Joint Conference on Artificial Intelligence (IJCAI) is a premier gathering for AI researchers and organizes several competitions. This year, competition CC7, "AI Olympics With RealAIGym: Is AI Ready for Athletic Intelligence in the Real World?", consisted of two stages, simulation and real-robot experiments, on two under-actuated robotic systems. The two robotic systems were treated as separate tracks, and one winner was selected for each track based on specific performance criteria in the control tasks.
The UniPD-MERL team competed in and won both tracks. The team's system made strong use of a model-based reinforcement learning algorithm called MC-PILCO, which the team recently published in the journal IEEE Transactions on Robotics.
-
- Date: June 9, 2023
Awarded to: Darius Petermann, Gordon Wichern, Aswin Subramanian, Jonathan Le Roux
MERL Contacts: Jonathan Le Roux; Gordon Wichern
Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
Brief - Former MERL intern Darius Petermann (Ph.D. candidate at Indiana University) has received a Best Student Paper Award at the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023) for the paper "Hyperbolic Audio Source Separation", co-authored with MERL researchers Gordon Wichern and Jonathan Le Roux, and former MERL researcher Aswin Subramanian. The paper presents work performed during Darius's internship at MERL in the summer of 2022. The paper introduces a framework for audio source separation using embeddings on a hyperbolic manifold that compactly represent the hierarchical relationship between sound sources and time-frequency features. Additionally, the code associated with the paper is publicly available at https://github.com/merlresearch/hyper-unmix.
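For intuition on the hyperbolic-geometry component, the sketch below maps Euclidean embeddings onto the Poincaré ball and computes geodesic distances between them; the curvature, dimensions, and data are illustrative, and the paper's full model is in the repository linked above.

```python
# Minimal sketch of embeddings on the Poincaré ball (a standard hyperbolic model).
import torch

def expmap0(v, c=1.0, eps=1e-6):
    """Exponential map at the origin: Euclidean vector -> Poincaré ball of curvature -c."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def poincare_dist(x, y, c=1.0, eps=1e-6):
    """Geodesic distance between two points in the Poincaré ball."""
    sqrt_c = c ** 0.5
    diff2 = (x - y).pow(2).sum(dim=-1)
    denom = (1 - c * x.pow(2).sum(-1)) * (1 - c * y.pow(2).sum(-1))
    arg = 1 + 2 * c * diff2 / denom.clamp_min(eps)
    return torch.acosh(arg.clamp_min(1 + eps)) / sqrt_c

# Time-frequency embeddings produced by a network (random here) are mapped onto the ball;
# hierarchy can then be expressed by radius, e.g., coarse sources near the origin and
# fine-grained ones near the boundary.
tf_embeddings = torch.randn(4, 16)          # 4 time-frequency bins, 16-dim features
ball_points = expmap0(tf_embeddings)
print(poincare_dist(ball_points[0], ball_points[1]))
```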
ICASSP is the flagship conference of the IEEE Signal Processing Society (SPS). ICASSP 2023 was held on the Greek island of Rhodes from June 4 to June 10, 2023, and was the largest ICASSP in history, with more than 4,000 participants, 6,128 submitted papers, and 2,709 accepted papers. Darius's paper was first recognized as being among the top 3% of all papers accepted at the conference, before receiving one of only 5 Best Student Paper Awards during the closing ceremony.
-
- Date: June 9, 2023
Awarded to: Cristian J. Vaca-Rubio, Pu Wang, Toshiaki Koike-Akino, Ye Wang, Petros Boufounos and Petar Popovski
MERL Contacts: Petros T. Boufounos; Toshiaki Koike-Akino; Pu (Perry) Wang; Ye Wang
Research Areas: Artificial Intelligence, Communications, Computational Sensing, Dynamical Systems, Machine Learning, Signal Processing
Brief - A MERL paper on Wi-Fi sensing was recognized as a Top 3% Paper among all 2,709 accepted papers at the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023). Co-authored by Cristian Vaca-Rubio and Petar Popovski from Aalborg University, Denmark, and MERL researchers Pu Wang, Toshiaki Koike-Akino, Ye Wang, and Petros Boufounos, the paper "MmWave Wi-Fi Trajectory Estimation with Continuous-Time Neural Dynamic Learning" was also a Best Student Paper Award finalist.
Performed during Cristian’s stay at MERL first as a visiting Marie Skłodowska-Curie Fellow and then as a full-time intern in 2022, this work capitalizes on standards-compliant Wi-Fi signals to perform indoor localization and sensing. The paper uses a neural dynamic learning framework to address technical issues such as low sampling rate and irregular sampling intervals.
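As a generic illustration of continuous-time dynamic learning with irregular sampling (not the paper's architecture), the sketch below advances a latent state with a learned derivative network using the actual time gaps between snapshots; the dimensions and Euler integrator are illustrative assumptions.

```python
# Minimal sketch of continuous-time latent dynamics for irregularly sampled data.
import torch
import torch.nn as nn

class LatentDynamics(nn.Module):
    def __init__(self, dim=8, hidden=64):
        super().__init__()
        # Network modeling the time derivative of the latent state.
        self.f = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, state, dt, n_substeps=4):
        # Euler integration over the (possibly irregular) interval dt.
        h = dt / n_substeps
        for _ in range(n_substeps):
            state = state + h * self.f(state)
        return state

dyn = LatentDynamics()
timestamps = torch.tensor([0.00, 0.03, 0.11, 0.12, 0.30])   # irregular snapshot times
state = torch.zeros(1, 8)
trajectory = [state]
for t_prev, t_next in zip(timestamps[:-1], timestamps[1:]):
    state = dyn(state, (t_next - t_prev).item())
    trajectory.append(state)
```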
ICASSP, a flagship conference of the IEEE Signal Processing Society (SPS), was hosted on the Greek island of Rhodes from June 04 to June 10, 2023. ICASSP 2023 marked the largest ICASSP in history, boasting over 4000 participants and 6128 submitted papers, out of which 2709 were accepted.
-
- Date: June 1, 2023
Awarded to: Shih-Lun Wu, Xuankai Chang, Gordon Wichern, Jee-weon Jung, Francois Germain, Jonathan Le Roux, Shinji Watanabe
MERL Contacts: François Germain; Jonathan Le Roux; Gordon Wichern
Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
Brief - A joint team consisting of members of CMU Professor and MERL alumnus Shinji Watanabe's WavLab and members of MERL's Speech & Audio team ranked 1st out of 11 teams in the DCASE2023 Challenge's Task 6A "Automated Audio Captioning". The team was led by student Shih-Lun Wu and also featured Ph.D. candidate Xuankai Chang, postdoctoral research associate Jee-weon Jung, Prof. Shinji Watanabe, and MERL researchers Gordon Wichern, Francois Germain, and Jonathan Le Roux.
The IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE Challenge), started in 2013, has been organized yearly since 2016, and gathers challenges on multiple tasks related to the detection, analysis, and generation of sound events. This year, the DCASE2023 Challenge received over 428 submissions from 123 teams across seven tasks.
The CMU-MERL team competed in the Task 6A track, Automated Audio Captioning, which aims at generating informative descriptions for various sounds from nature and/or human activities. The team's system made strong use of large pretrained models, namely a BEATs transformer as part of the audio encoder stack, an Instructor Transformer encoding ground-truth captions to derive an audio-text contrastive loss on the audio encoder, and ChatGPT to produce caption mix-ups (i.e., grammatical and compact combinations of two captions) which, together with the corresponding audio mixtures, increase not only the amount but also the complexity and diversity of the training data. The team's best submission obtained a SPIDEr-FL score of 0.327 on the hidden test set, largely outperforming the 2nd best team's 0.315.
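As a small illustration of the audio-text contrastive component, the sketch below computes a symmetric InfoNCE-style loss between matched audio and caption embeddings; the encoders feeding it, the embedding size, and the temperature are illustrative assumptions rather than the team's exact setup.

```python
# Minimal sketch of an audio-text contrastive loss used to regularize an audio encoder.
import torch
import torch.nn.functional as F

def audio_text_contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """audio_emb, text_emb: (B, D) embeddings of matched audio/caption pairs."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = audio_emb @ text_emb.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(audio_emb.size(0))
    # Symmetric InfoNCE: audio-to-text and text-to-audio directions.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy usage: 8 matched pairs of 256-dim embeddings.
loss = audio_text_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```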
-
- Date: January 12, 2023
Awarded to: William T. Freeman, Thouis R. Jones, and Egon C. Pasztor
Awarded by: IEEE Computer Society
Research Areas: Computer Vision, Machine Learning
Brief - The MERL paper entitled, "Example-Based Super-Resolution" by William T. Freeman, Thouis R. Jones, and Egon C. Pasztor, published in a 2002 issue of IEEE Computer Graphics and Applications, has been awarded a 2021 Test of Time Award by the IEEE Computer Society. This work was done while the principal investigator, Prof. Freeman, was a research scientist at MERL; he is now a Professor of Electrical Engineering and Computer Science at MIT.
This best paper award recognizes regular or special issue papers published by the magazine that have made profound and long-lasting research impacts in bridging the theory and practice of computer graphics. "This paper is an early example of using learning for a low-level vision task and we are very proud of the pioneering work that MERL has done in this area prior to the deep learning revolution," says Anthony Vetro, VP & Director at MERL.
-
- Date: March 15, 2022
Awarded to: Yukimasa Nagai, Jianlin Guo, Philip Orlik, Takenori Sumi, Benjamin A. Rolfe and Hiroshi Mineno
MERL Contacts: Jianlin Guo; Philip V. Orlik
Research Areas: Communications, Machine Learning
Brief - The MELCO/MERL research paper "Sub-1 GHz Frequency Band Wireless Coexistence for the Internet of Things" has won the 37th Telecommunications Advancement Foundation Award (Telecom System Technology Award) in Japan. The award, established in 1984, is given to research papers and works that have made significant contributions to the advancement, development, and standardization of information and telecommunications from technical and engineering perspectives. The award recognizes both the IEEE 802.19.3 standardization efforts and the technological advancements using reinforcement learning and robust access methodologies for wireless communication systems. This year, there were 43 entries, with 5 receiving awards and 3 receiving encouragement awards. This is the first time MELCO/MERL has received this award. The paper was published in IEEE Access in 2021; its authors are Yukimasa Nagai, Jianlin Guo, Philip Orlik, Takenori Sumi, Benjamin A. Rolfe, and Hiroshi Mineno.
-
- Date: November 17, 2021
Awarded to: Elevators and Escalators Division of Mitsubishi Electric US, Inc.
MERL Contacts: Daniel N. Nikovski; William S. Yerazunis
Research Areas: Data Analytics, Machine Learning, Signal Processing
Brief - The Elevators and Escalators Division of Mitsubishi Electric US, Inc. has been recognized as a 2022 CES® Innovation Awards honoree for its new PureRide™ Touchless Control for elevators, jointly developed with MERL. The CES Innovation Awards program, sponsored by the Consumer Technology Association (CTA), is associated with CES, the largest and most influential technology event in the world. PureRide™ Touchless Control provides a simple, no-touch product that enables users to call an elevator and designate a destination floor by placing a hand or finger over a sensor. MERL initiated the development of PureRide™ in the first weeks of the COVID-19 pandemic by proposing the use of infrared sensors for operating elevator call buttons, and participated actively in its rapid implementation and commercialization, resulting in a first customer installation in October 2020.
-
- Date: October 18, 2021
Awarded to: Daniel Nikovski
MERL Contact: Daniel N. Nikovski
Research Areas: Artificial Intelligence, Machine Learning
Brief - Daniel Nikovski, Group Manager of MERL's Data Analytics group, has received an Outstanding Reviewer Award from the 2021 conference on Neural Information Processing Systems (NeurIPS'21). NeurIPS is the world's premier conference on neural networks and related technologies.
-
- Date: January 25, 2021
Awarded to: Takenori Sumi, Yukimasa Nagai, Jianlin Guo, Philip Orlik, Tatsuya Yokoyama, Hiroshi Mineno
MERL Contacts: Jianlin Guo; Philip V. Orlik
Research Areas: Communications, Machine Learning, Signal Processing
Brief - MELCO and MERL researchers have won an Excellent Presentation Award at IPSJ/CDS30 (the 30th Consumer Devices and Systems conference of the Information Processing Society of Japan), held on January 25, 2021. The paper, titled "Sub-1 GHz Coexistence Using Reinforcement Learning Based IEEE 802.11ah RAW Scheduling", addresses coexistence between IEEE 802.11ah and IEEE 802.15.4g systems in the Sub-1 GHz frequency bands. The paper proposes a novel method to allocate IEEE 802.11ah RAW time slots using a Q-learning technique. MERL and MELCO have been leading IEEE 802.19.3 coexistence standard development, and this paper is a good candidate for future standard enhancement. The authors are Takenori Sumi, Yukimasa Nagai, Jianlin Guo, Philip Orlik, Tatsuya Yokoyama, and Hiroshi Mineno.
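The sketch below shows tabular Q-learning in the spirit of learning a slot-allocation policy; the state and action spaces and the toy reward are illustrative assumptions and do not reflect the 802.11ah/802.15.4g coexistence model studied in the paper.

```python
# Minimal tabular Q-learning sketch for a toy slot-allocation problem.
import random

NUM_STATES = 4      # e.g., discretized interference levels observed on the channel
NUM_ACTIONS = 3     # e.g., candidate RAW slot allocations
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = [[0.0] * NUM_ACTIONS for _ in range(NUM_STATES)]

def step(state, action):
    """Toy environment: some allocations suit some interference levels better."""
    reward = 1.0 if action == state % NUM_ACTIONS else -0.1
    next_state = random.randrange(NUM_STATES)
    return reward, next_state

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection.
    if random.random() < EPSILON:
        action = random.randrange(NUM_ACTIONS)
    else:
        action = max(range(NUM_ACTIONS), key=lambda a: Q[state][a])
    reward, next_state = step(state, action)
    # Standard Q-learning update.
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state
```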
-
- Date: January 6, 2021
Awarded to: Rushil Anirudh, Suhas Lohit, Pavan Turaga
MERL Contact: Suhas Lohit
Research Areas: Computational Sensing, Computer Vision, Machine Learning
Brief - A team of researchers from Mitsubishi Electric Research Laboratories (MERL), Lawrence Livermore National Laboratory (LLNL) and Arizona State University (ASU) received the Best Paper Honorable Mention Award at WACV 2021 for their paper "Generative Patch Priors for Practical Compressive Image Recovery".
The paper proposes a novel model of natural images as a composition of small patches which are obtained from a deep generative network. This is unlike prior approaches where the networks attempt to model image-level distributions and are unable to generalize outside training distributions. The key idea in this paper is that learning patch-level statistics is far easier. As the authors demonstrate, this model can then be used to efficiently solve challenging inverse problems in imaging such as compressive image recovery and inpainting even from very few measurements for diverse natural scenes.
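A minimal sketch of the recovery idea follows, assuming a toy untrained patch generator, a random Gaussian measurement matrix, and simple non-overlapping patch tiling; the paper's generator is pretrained on natural image patches and the assembly is more careful, so this is only an illustration of optimizing patch latent codes against compressive measurements.

```python
# Minimal sketch of compressive recovery with a generative patch prior: the image is a
# tiling of patches produced by a generator G, and the patch latent codes are optimized
# so the reassembled image matches the measurements y = A x.
import torch
import torch.nn as nn

patch, grid = 8, 4                       # 8x8 patches tiled on a 4x4 grid -> 32x32 image
n_pix, n_meas, z_dim = (patch * grid) ** 2, 300, 16

G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, patch * patch))
A = torch.randn(n_meas, n_pix) / n_meas ** 0.5       # random measurement matrix
x_true = torch.rand(n_pix)
y = A @ x_true                                        # compressive measurements

z = torch.zeros(grid * grid, z_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(200):
    patches = G(z).view(grid, grid, patch, patch)
    # Reassemble the full image from the patch grid, then compare in measurement space.
    x_hat = patches.permute(0, 2, 1, 3).reshape(n_pix)
    loss = ((A @ x_hat - y) ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```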
-
- Date: October 15, 2020
Awarded to: Ethan Manilow, Gordon Wichern, Jonathan Le Roux
MERL Contacts: Jonathan Le Roux; Gordon Wichern
Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
Brief - Former MERL intern Ethan Manilow and MERL researchers Gordon Wichern and Jonathan Le Roux won the Best Poster Award and the Best Video Award at the 2020 International Society for Music Information Retrieval Conference (ISMIR 2020) for the paper "Hierarchical Musical Source Separation". The conference was held October 11-14 in a virtual format. Both awards were determined by popular vote among the conference attendees.
The paper proposes a new method for isolating individual sounds in an audio mixture that accounts for the hierarchical relationship between sound sources. Many sounds we are interested in analyzing are hierarchical in nature, e.g., during a music performance, a hi-hat note is one of many such hi-hat notes, which is one of several parts of a drumkit, itself one of many instruments in a band, which might be playing in a bar with other sounds occurring. Inspired by this, the paper re-frames the audio source separation problem as hierarchical, combining similar sounds together at certain levels while separating them at other levels, and shows on a musical instrument separation task that a hierarchical approach outperforms non-hierarchical models while also requiring less training data. The paper, poster, and video can be seen on the paper page on the ISMIR website.
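The sketch below illustrates one simple way to make separation masks respect a hierarchy: a parent mask (e.g., "drums") is predicted, then split among its children (e.g., hi-hat, snare, kick) so the child masks always sum to the parent. The network sizes and the two-level hierarchy are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of hierarchy-consistent mask estimation for source separation.
import torch
import torch.nn as nn

class HierarchicalMasker(nn.Module):
    def __init__(self, n_freq=257, n_children=3, hidden=256):
        super().__init__()
        self.parent_net = nn.Sequential(nn.Linear(n_freq, hidden), nn.ReLU(),
                                        nn.Linear(hidden, n_freq), nn.Sigmoid())
        self.child_net = nn.Sequential(nn.Linear(n_freq, hidden), nn.ReLU(),
                                       nn.Linear(hidden, n_freq * n_children))
        self.n_children = n_children

    def forward(self, mix_mag):
        # mix_mag: (B, T, n_freq) magnitude spectrogram of the mixture
        parent_mask = self.parent_net(mix_mag)                       # e.g., all drums
        split = self.child_net(mix_mag).view(*mix_mag.shape, self.n_children)
        split = torch.softmax(split, dim=-1)                         # shares per child
        child_masks = parent_mask.unsqueeze(-1) * split              # children sum to parent
        return parent_mask, child_masks

masker = HierarchicalMasker()
parent, children = masker(torch.rand(2, 100, 257))
```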
-
- Date: December 18, 2019
Awarded to: Xuankai Chang, Wangyou Zhang, Yanmin Qian, Jonathan Le Roux, Shinji Watanabe
MERL Contact: Jonathan Le Roux
Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
Brief - MERL researcher Jonathan Le Roux and co-authors Xuankai Chang, Shinji Watanabe (Johns Hopkins University), Wangyou Zhang, and Yanmin Qian (Shanghai Jiao Tong University) won the Best Paper Award at the 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2019), for the paper "MIMO-Speech: End-to-End Multi-Channel Multi-Speaker Speech Recognition". MIMO-Speech is a fully neural end-to-end framework that can transcribe the text of multiple speakers speaking simultaneously from multi-channel input. The system is comprised of a monaural masking network, a multi-source neural beamformer, and a multi-output speech recognition model, which are jointly optimized only via an automatic speech recognition (ASR) criterion. The award was received by lead author Xuankai Chang during the conference, which was held in Sentosa, Singapore from December 14-18, 2019.
-
- Date: October 27, 2019
Awarded to: Abhinav Kumar, Tim K. Marks, Wenxuan Mou, Chen Feng, Xiaoming Liu
MERL Contact: Tim K. Marks
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
Brief - MERL researcher Tim Marks, former MERL interns Abhinav Kumar and Wenxuan Mou, and MERL consultants Professor Chen Feng (NYU) and Professor Xiaoming Liu (MSU) received the Best Oral Paper Award at the IEEE/CVF International Conference on Computer Vision (ICCV) 2019 Workshop on Statistical Deep Learning in Computer Vision (SDL-CV) held in Seoul, Korea. Their paper, entitled "UGLLI Face Alignment: Estimating Uncertainty with Gaussian Log-Likelihood Loss," describes a method which, given an image of a face, estimates not only the locations of facial landmarks but also the uncertainty of each landmark location estimate.
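The sketch below shows a generic 2-D Gaussian negative log-likelihood loss for landmarks with predicted means and covariances, the kind of objective such an uncertainty-aware estimator trains with; the Cholesky parameterization of the covariance is a common choice and an illustrative assumption (the additive constant is omitted).

```python
# Minimal sketch of a 2-D Gaussian log-likelihood loss for landmark estimation.
import torch
import torch.nn.functional as F

def gaussian_nll(mu, chol_params, target):
    """mu, target: (B, L, 2); chol_params: (B, L, 3) -> lower-triangular Cholesky factor L
    of the per-landmark covariance Sigma = L L^T."""
    l11 = F.softplus(chol_params[..., 0]) + 1e-4
    l22 = F.softplus(chol_params[..., 1]) + 1e-4
    l21 = chol_params[..., 2]
    diff = target - mu                                   # (B, L, 2)
    # Solve L z = diff for z, so that ||z||^2 is the Mahalanobis distance.
    z1 = diff[..., 0] / l11
    z2 = (diff[..., 1] - l21 * z1) / l22
    mahalanobis = z1 ** 2 + z2 ** 2
    log_det = 2 * (torch.log(l11) + torch.log(l22))      # log |Sigma|
    return 0.5 * (mahalanobis + log_det).mean()

# Toy usage: 4 images, 68 landmarks each.
loss = gaussian_nll(torch.randn(4, 68, 2), torch.randn(4, 68, 3), torch.randn(4, 68, 2))
```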
-
- Date: October 10, 2019
Awarded to: Devesh Jha, Nurali Virani, Zhenyuan Yuan, Ishana Shekhawat and Asok Ray
MERL Contact: Devesh K. Jha
Research Areas: Artificial Intelligence, Control, Data Analytics, Machine Learning, Robotics
Brief - MERL researcher Devesh Jha has won the Rudolf Kalman Best Paper Award 2019 for the paper entitled "Imitation of Demonstrations Using Bayesian Filtering With Nonparametric Data-Driven Models". The paper, published in March 2018 in a special commemorative issue of the ASME Journal of Dynamic Systems, Measurement, and Control (JDSMC) honoring Rudolf E. Kalman, uses Bayesian filtering for imitation learning in hidden-mode hybrid systems. The award is given annually by the Dynamic Systems and Control Division of ASME to the authors of the best paper published in the journal during the preceding year.
-
- Date: May 22, 2019
Awarded to: Sriramya Bhamidipati, Kyeong Jin Kim, Hongbo Sun, Philip Orlik
MERL Contacts: Hongbo Sun; Philip V. Orlik
Research Areas: Artificial Intelligence, Communications, Machine Learning, Signal Processing, Information Security
Brief - MERL researchers Kyeong Jin Kim, Hongbo Sun, and Philip Orlik, along with lead author and former MERL intern Sriramya Bhamidipati, were awarded the Smart Grid Symposium Best Paper Award at this year's International Conference on Communications (ICC), held in Shanghai, China. Their paper, titled "GPS Spoofing Detection and Mitigation in PMUs Using Distributed Multiple Directional Antennas", describes a technique to rapidly detect and mitigate GPS timing attacks and errors via hardware (antennas) and signal processing (Kalman filtering).
-
- Date: April 23, 2019
Awarded to: Teng-yok Lee
Research Areas: Artificial Intelligence, Computer Vision, Data Analytics, Machine Learning
Brief - MERL researcher Teng-yok Lee has won the Best Visualization Note Award at the PacificVis 2019 conference, held in Bangkok, Thailand, from April 23-26, 2019. The paper, entitled "Space-Time Slicing: Visualizing Object Detector Performance in Driving Video Sequences", presents a visualization method called Space-Time Slicing to assist a human developer in the development of object detectors for driving applications without requiring labeled data. Space-Time Slicing reveals patterns in the detection data that can suggest the presence of false positives and false negatives.
-
- Date: November 16, 2018
Awarded to: Ziming Zhang, Alan Sullivan, Hideaki Maehara, Kenji Taira, Kazuo Sugimoto
Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
Brief - Researchers and developers from MERL, Mitsubishi Electric, and Mitsubishi Electric Engineering (MEE) have been recognized with an R&D 100 Award for the development of a deep learning-based water detector. Automatic detection of water levels in rivers and streams is critical for early warning of flash flooding. Existing systems require that a height gauge be placed in the river or stream, something that is costly and sometimes impossible. The new deep learning-based water detector uses only images from a video camera, along with 3D measurements of the river valley, to determine water levels and warn of potential flooding. The system is robust to lighting and weather conditions, working well at night as well as in fog or rain. Deep learning is a relatively new technique that uses neural networks trained on real data to perform human-level recognition tasks. This work is powered by Mitsubishi Electric's Maisart AI technology.
-
- Date: August 4, 2017
Awarded to: David Zhuzhunashvili and Andrew Knyazev
Research Area: Machine Learning
Brief - David Zhuzhunashvili, an undergraduate student at UC Boulder, Colorado, and Andrew Knyazev, Distinguished Research Scientist at MERL, received the 2017 Graph Challenge Student Innovation Award. Their poster "Preconditioned Spectral Clustering for Stochastic Block Partition Streaming Graph Challenge" was accepted to the 2017 IEEE High Performance Extreme Computing Conference (HPEC '17), taking place 12-14 September 2017 (http://www.ieee-hpec.org/), and the paper was accepted to the IEEE Xplore HPEC proceedings.
HPEC is the premier conference in the world on the convergence of high performance and embedded computing. The DARPA/Amazon/IEEE Graph Challenge is a special HPEC event that encourages community approaches to developing new solutions for analyzing graphs derived from social media, sensor feeds, and scientific data, enabling relationships between events to be discovered as they unfold in the field. The 2017 Streaming Graph Challenge focuses on stochastic block partition: identifying optimal blocks (or clusters) in a large graph with known ground-truth clusters, with performance evaluated against baseline Python and C codes provided by the Graph Challenge organizers.
The proposed approach is spectral clustering, which performs block partitioning of graphs using eigenvectors of a matrix representing the graph. The Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) method iteratively approximates a few leading eigenvectors of the symmetric graph Laplacian for multi-way graph partitioning. Preliminary tests on all static cases of the Graph Challenge demonstrate 100% correctness of the partition under every IEEE HPEC Graph Challenge metric, while also being approximately 500-1000 times faster than the provided baseline code; e.g., the 2M-node static graph is 100% correctly partitioned in about 2,100 seconds. Warm starts of LOBPCG further cut the execution time by 2-3x for the streaming graphs.
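For a rough sense of the pipeline, the sketch below partitions a toy two-block graph by computing a few of the smallest Laplacian eigenvectors with SciPy's lobpcg solver and clustering their rows; the preconditioning and warm starts used in the challenge entry are omitted, and the graph, block count, and parameters are illustrative.

```python
# Minimal sketch of spectral block partitioning with LOBPCG on a toy graph.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg
from sklearn.cluster import KMeans

# Toy graph: two dense blocks, sparsely connected to each other.
rng = np.random.default_rng(0)
A = (rng.random((60, 60)) < 0.02).astype(float)
A[:30, :30] = rng.random((30, 30)) < 0.3
A[30:, 30:] = rng.random((30, 30)) < 0.3
A = np.triu(A, 1)
A = sp.csr_matrix(A + A.T)

degrees = np.asarray(A.sum(axis=1)).ravel()
L = sp.diags(degrees) - A                        # combinatorial graph Laplacian

k = 2                                            # number of blocks to recover
X = rng.standard_normal((A.shape[0], k))         # random initial block of vectors
eigvals, eigvecs = lobpcg(L, X, largest=False, tol=1e-6, maxiter=500)

# Cluster the rows of the eigenvector matrix to obtain the block assignment.
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(eigvecs)
```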
-
- Date: March 31, 2016
Awarded to: Andrew Knyazev
Research Areas: Control, Optimization, Dynamical Systems, Machine Learning, Data Analytics, Communications, Signal Processing
Brief - Andrew Knyazev was selected as a Fellow of the Society for Industrial and Applied Mathematics (SIAM) for contributions to computational mathematics and the development of numerical methods for eigenvalue problems.
The fellowship honors SIAM members who have made outstanding contributions to the fields served by SIAM. Andrew Knyazev was among a distinguished group of members nominated by their peers and selected for the 2016 Class of Fellows.
-
- Date: September 2, 2011
Awarded to: Fatih Porikli and Huseyin Ozkan
Awarded for: "Data Driven Frequency Mapping for Computationally Scalable Object Detection"
Awarded by: IEEE Advanced Video and Signal Based Surveillance (AVSS)
Research Area: Machine Learning
-