Previously, I was a Master's student in Robotics in the Robotics Institute at Carnegie Mellon University, advised by Prof. Sebastian Scherer as a member of the AirLab and the Field Robotics Center. Before that, I studied electrical engineering at Tufts University, where I received my BS.
I am interested in reinforcement learning and representation learning for fast adaptation and generalization in messy environments. My ultimate goal is to develop robots that adapt fast during deployment, probe when uncertain, and make mistakes only once, if at all.
VAMOS is a hierarchical vision-language-action model that decouples semantic planning from embodiment grounding, enabling robust cross-embodiment navigation with natural language steerability.
We developed a system that enables quadrupedal robots to perform continuous, precise jumps across challenging terrains like stairs and stepping stones, achieving unprecedented agility.
We propose a self-supervised method to predict traversability costmaps by combining exteroceptive environmental information with proprioceptive terrain interaction feedback.
We present an inverse reinforcement learning-based method that efficiently predicts uncertainty-aware costmaps for off-road traversability via conditional value-at-risk (CVaR).
We present a formal framework and implementation in a cognitive agent for novelty handling and demonstrate the efficacy of the proposed methods for detecting and handling a large set of novelties in a crafting task in a simulated environment.
We introduce a unified framework for creative problem solving through action discovery. We describe two methods that enable action discovery at the declarative and neurosymbolic levels: action primitive segmentation and behavior babbling, respectively.
We describe a method for discovering new action primitives through object exploration and action segmentation, which iteratively updates the robot's knowledge base on-the-fly until a solution becomes feasible.