Erfan Aasi
I'm a Postdoctoral Associate at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, working with Professors Daniela Rus and Sertac Karaman. My research focuses on advancing the safety and intelligence of robotic systems, with a particular emphasis on autonomous vehicles. By integrating the strengths of classical deep learning algorithms with the emerging capabilities of large language models, I aim to create robust systems that can navigate complex environments, interpret human intent, and adapt to unexpected scenarios.
Research Interests: Safe Autonomy, Interpretable Decision-Making, Deep Learning and Language Models
Generating Out-Of-Distribution Scenarios Using Large Language Models
This project explores the use of Large Language Models (LLMs) to tackle Out-Of-Distribution (OOD) scenarios in autonomous driving, a challenge that is critical for ensuring safety and reliability under unpredictable conditions. Leveraging the zero-shot generalization and reasoning capabilities of LLMs, we introduce a framework that generates diverse OOD scenarios as a branching tree, with each branch representing a unique case. These scenarios are simulated in the CARLA simulator through automated scene augmentation aligned with the textual descriptions. We evaluate the framework using a diversity metric and a novel "OOD-ness" metric that quantifies deviation from typical urban conditions. We also assess the ability of Vision-Language Models (VLMs) to interpret and navigate these scenarios. This work highlights the potential of language models to strengthen the safety validation of autonomous vehicles.
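To give a flavor of the branching-tree idea, here is a minimal Python sketch. The prompt wording, the `query_llm` stand-in, and the branching factor are illustrative assumptions, not the framework's actual implementation; `query_llm` would be replaced by any chat-completion backend.

```python
# Minimal sketch: grow a branching tree of OOD driving scenarios by
# repeatedly asking an LLM for rarer variants of each node's description.
from dataclasses import dataclass, field

@dataclass
class ScenarioNode:
    description: str                       # natural-language scenario text
    children: list = field(default_factory=list)

def query_llm(prompt: str, n: int) -> list[str]:
    """Stand-in for an LLM call; swap in a real chat-completion backend."""
    return [f"{prompt} -- variant {i}" for i in range(n)]  # placeholder

def expand(node: ScenarioNode, depth: int, branching: int = 3) -> None:
    """Recursively ask for `branching` more-unusual variants per node."""
    if depth == 0:
        return
    prompt = (
        "Given the driving scenario below, propose a rarer, more "
        f"out-of-distribution variant.\nScenario: {node.description}"
    )
    for text in query_llm(prompt, branching):
        child = ScenarioNode(text)
        node.children.append(child)
        expand(child, depth - 1, branching)

root = ScenarioNode("An ego vehicle approaches a four-way urban intersection.")
expand(root, depth=2)   # each leaf is one candidate OOD scenario for CARLA
```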
Autonomous Driving in Urban Environments Under Uncertain Conditions
This project introduces a two-level hierarchical architecture for controlling autonomous vehicles in complex urban environments, prioritizing collision avoidance, adherence to traffic rules, and real-time performance. The top level integrates Signal Temporal Logic (STL) specifications into Model Predictive Control (MPC), while detailed feedback from the bottom level closes the loop and bridges the gap between the simplified control model and real-world dynamics. Simulations in the CARLA simulator demonstrate the method's effectiveness and efficiency compared to existing solutions. By directly addressing the discrepancies between simplified models and real-world conditions, the framework provides a robust foundation for safer and more dependable urban autonomous navigation.
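To illustrate how an STL requirement can enter an MPC layer, here is a toy Python sketch of a robustness check for an "always keep a safe clearance" specification. The threshold, horizon values, and function names are illustrative assumptions, not the controller's actual formulation.

```python
# Toy illustration: quantitative robustness of G_[0,H](dist > D_SAFE),
# evaluated on a candidate MPC trajectory's predicted clearances.
import numpy as np

D_SAFE = 2.0   # assumed minimum clearance [m]

def robustness_always_clearance(dists: np.ndarray) -> float:
    """Robustness of 'always dist > D_SAFE': minimum margin over the horizon.
    Positive => satisfied with that much slack; negative => violated."""
    return float(np.min(dists - D_SAFE))

# Predicted clearances to the nearest obstacle along one candidate trajectory
predicted = np.array([3.1, 2.7, 2.2, 2.05, 2.4])
rho = robustness_always_clearance(predicted)
# An MPC layer would keep only candidates with rho >= 0, or penalize -rho
print(f"robustness = {rho:.2f} -> {'feasible' if rho >= 0 else 'infeasible'}")
```

The key point is that robustness gives a signed margin rather than a yes/no answer, so the specification can be enforced as a constraint or softened into a cost.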
Interpretable Classification of Time-Series Data Using Decision Trees
Classifying time-series data is critical for the analysis and control of autonomous systems, including robots and self-driving cars, where interpretability is essential for safety and trust. Existing temporal logic-based learning methods often fall short in real-world applications due to inaccuracies or the complexity of the generated formulae. To address these challenges, we present Boosted Concise Decision Trees (BCDTs), a novel approach that combines an ensemble of simplified decision trees to produce Signal Temporal Logic (STL) classifiers. This method enhances classification accuracy while prioritizing interpretability by generating concise and comprehensible formulae. The effectiveness of BCDTs is demonstrated through naval surveillance and urban-driving case studies, highlighting its practical significance and potential for real-world impact.
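For intuition, here is a minimal Python sketch of classifying a signal with a boosted ensemble of STL "stumps" of the form F_[t1,t2](x > c). The primitives, weights, and toy signal are illustrative assumptions, not a learned BCDT model.

```python
# Minimal sketch: weighted vote of STL-primitive decision stumps,
# in the spirit of a boosted ensemble of concise decision trees.
import numpy as np

def rob_eventually_gt(signal: np.ndarray, t1: int, t2: int, c: float) -> float:
    """Robustness of F_[t1,t2](x > c): max margin inside the window."""
    return float(np.max(signal[t1:t2 + 1] - c))

def stump_predict(signal, t1, t2, c) -> int:
    """Single stump: class label from the sign of the robustness."""
    return 1 if rob_eventually_gt(signal, t1, t2, c) >= 0 else -1

def boosted_predict(signal, stumps) -> int:
    """Weighted vote over (weight, primitive-parameters) pairs."""
    score = sum(w * stump_predict(signal, *params) for w, params in stumps)
    return 1 if score >= 0 else -1

# Two hypothetical stumps with boosting weights
ensemble = [(0.8, (0, 5, 1.0)), (0.4, (5, 10, -0.5))]
x = np.linspace(-1.0, 2.0, 11)            # toy 1-D signal
print(boosted_predict(x, ensemble))       # -> +1 or -1 class label
```

Because each stump is itself an STL primitive, the ensemble's decision can be read back as a weighted combination of short, human-readable temporal formulae.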
Deep Reinforcement Learning in Complex Environments with Interpretable Specifications
This project addresses the challenges of robot navigation in cluttered environments with unknown dynamics by leveraging deep reinforcement learning (DRL) to handle tasks specified through Linear Temporal Logic (LTL) formulas. Traditional DRL approaches struggle with exploration due to sparse rewards, a common issue in obstacle-dense settings. To tackle this, we propose a novel framework that integrates path planning-guided reward schemes with sampling-based methods to enhance exploration and ensure efficient task completion. By decomposing complex LTL missions into distributed sub-goals, our approach significantly improves the performance and adaptability of robots in achieving intricate missions, demonstrating the transformative potential of reinforcement learning in robotics.
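The reward-shaping idea can be sketched in a few lines of Python: a sparse bonus for reaching the current LTL sub-goal plus a dense term for progress along a precomputed geometric path. The waypoints, shaping weight, and helper names below are illustrative assumptions, not the project's actual reward design.

```python
# Toy sketch: path-planning-guided reward for one LTL sub-goal.
import numpy as np

# Waypoints of a precomputed path to the current sub-goal (planner output)
waypoints = np.array([[0., 0.], [1., 0.], [2., 1.], [3., 1.]])

def path_progress(pos: np.ndarray) -> float:
    """Index of the nearest waypoint: a crude proxy for progress."""
    return float(np.argmin(np.linalg.norm(waypoints - pos, axis=1)))

def shaped_reward(prev_pos, pos, reached_subgoal: bool) -> float:
    """Sparse sub-goal bonus plus dense progress shaping along the path."""
    r = 10.0 if reached_subgoal else 0.0                        # sparse term
    r += 0.5 * (path_progress(pos) - path_progress(prev_pos))   # dense term
    return r

print(shaped_reward(np.array([0., 0.]), np.array([1.1, 0.1]), False))
```

The dense term counters the sparse-reward exploration problem noted above, while the sparse term keeps the optimum aligned with completing the LTL sub-goal.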
I am currently a Postdoctoral Associate at MIT, working with Professors Daniela Rus and Sertac Karaman in the Distributed Robotics Lab (DRL). My research centers on enabling autonomous vehicles to achieve common-sense reasoning and contextual understanding by leveraging the capabilities of language models and advanced machine learning algorithms. I earned my Ph.D. in Mechanical Engineering from Boston University in 2023, under the mentorship of Professors Calin Belta and Cristian Vasile. My Ph.D. thesis focused on designing motion planning and control algorithms for autonomous vehicles operating in urban environments, emphasizing safe navigation, adherence to traffic regulations, and optimized performance. I also integrated machine learning techniques to infer temporal logic properties from time-series data, enabling the generation of interpretable specifications that enhance decision-making systems. Before that, I received my Bachelor's degree in Electrical Engineering from Sharif University of Technology in Tehran, Iran, in 2018. My academic journey has been driven by a passion for robotics, artificial intelligence, and building systems that bridge theoretical insights with practical applications to improve the safety and efficiency of autonomous systems.