My research lies at the intersection of Robot Learning, Reinforcement Learning, VLMs, and Field Robotics. I am interested in reinforcement learning and representation learning for fast adaptation and generalization in unstructured environments. My ultimate goal is to develop robots that adapt quickly during deployment, probe when uncertain, and make mistakes only once, if at all.
News
September 2023: I joined UW as a PhD student in Robotics!
July 2023: I attended ICML 2023, where I served as Social Chair at the LatinX in AI workshop.
July 2023: I defended my Master's thesis at Carnegie Mellon University! Here is the recorded video.
May 2023: I presented two conference papers and one workshop paper at ICRA 2023, my first in-person conference.
We developed a system that enables quadrupedal robots to perform continuous, precise jumps across challenging terrains such as stairs and stepping stones, achieving a new level of agility.
We propose a self-supervised method to predict traversability costmaps by combining exteroceptive environmental information with proprioceptive terrain interaction feedback.
We present an inverse reinforcement learning-based method that efficiently predicts uncertainty-aware costmaps for off-road traversability via conditional value-at-risk (CVaR).
We present a formal framework for novelty handling, implement it in a cognitive agent, and demonstrate the efficacy of the proposed methods for detecting and handling a large set of novelties in a simulated crafting task.
We introduce a unified framework for creative problem solving through action discovery. We describe two methods that enable action discovery at the declarative and neurosymbolic levels: action primitive segmentation and behavior babbling, respectively.
We describe a method for discovering new action primitives through object exploration and action segmentation, which iteratively updates the robot's knowledge base on the fly until a solution becomes feasible.