Having spent the summer/fall of 2011 performing computational neuroscience research in the Joe Francis Lab at SUNY Downstate in Brooklyn, NY, Nick decided to pursue graduate studies in robotics as the field combined his interests in neuroscience and mechatronics. He joined the Computational Learning and Motor Control Lab at USC in 2012 to pursue a Ph.D. in Computer Science.
Nick received an M.Sc. degree in Computer Science from USC in 2014 and passed his Ph.D. screening process in fall 2016; his proposed thesis title is "Estimation-Based Control for Humanoid Robots." His research interests include optimal estimation and control for legged robotic systems, controlling contact interactions for locomotion, and trajectory optimization for planning of complex motor control tasks.
Legged robots are expected to locomote autonomously in an uncertain and potentially dynamically changing environment. Active interaction with contacts becomes inevitable to move and apply forces in a goal-directed way and to withstand unpredicted changes in the environment. Therefore, we need to design algorithms that exploit interactions with the environment.
Planning dynamic behaviors for legged robots is a challenging task because the robot is subject to strong dynamic constraints due to its floating base (i.e., it can fall). It also needs to take into account intermittent contacts with the environment and apply contact forces in order to move.
When developing controllers for legged robots, one assumes that important quantities, such as the robot's Center of Mass (CoM), its position and orientation in space, or its joint positions and velocities, are known accurately enough to be used in feedback laws. While the estimation of such quantities is trivial in simulation, it becomes a significant challenge on a real robot.
In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages: 952-958, Chicago, IL, 2014 (inproceedings)
This paper introduces a framework for state estimation on a humanoid robot platform using only common proprioceptive sensors and knowledge of leg kinematics. The presented approach extends that detailed in prior work on a point-foot quadruped platform by adding the rotational constraints imposed by the humanoid's flat feet. As in previous work, the proposed Extended Kalman Filter accommodates contact switching and makes no assumptions about gait or terrain, making it applicable on any humanoid platform for use in any task. A nonlinear observability analysis is performed on both the point-foot and flat-foot filters and it is concluded that the addition of rotational constraints significantly simplifies singular cases and improves the observability characteristics of the system. Results on a simulated walking dataset demonstrate the performance gain of the flat-foot filter as well as confirm the results of the presented observability analysis.
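To give a flavor of the contact-aided estimation idea described in the abstract, the following is a minimal sketch, not the paper's actual filter: a toy Kalman update where the base state is corrected using leg kinematics under the assumption that a foot in contact stays fixed in the world. All state dimensions, rates, and noise values here are illustrative assumptions.

```python
import numpy as np

# Toy contact-aided filter sketch (illustrative assumptions throughout).
# State x = [base position (3), base velocity (3)]. While a foot is in
# contact, it is assumed stationary, so forward kinematics of the stance
# leg yields a measurement of the base position relative to the foothold.

DT = 0.002  # assumed 500 Hz update rate

F = np.eye(6)
F[0:3, 3:6] = DT * np.eye(3)   # constant-velocity model: p += v * dt
Q = 1e-4 * np.eye(6)           # process noise covariance (assumed)
R = 1e-3 * np.eye(3)           # kinematic measurement noise (assumed)


def predict(x, P, accel):
    """Propagate the state using measured base acceleration (e.g. IMU)."""
    x = F @ x
    x[3:6] += DT * accel
    P = F @ P @ F.T + Q
    return x, P


def update(x, P, foot_pos_world, foot_pos_from_kinematics):
    """Correct the base position: base = foothold - kinematic offset."""
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # measures position only
    z = foot_pos_world - foot_pos_from_kinematics  # implied base position
    y = z - H @ x                                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P
```

The actual filter in the paper additionally estimates base orientation and exploits the flat foot's rotational constraint; this sketch only illustrates how a stationary-foothold assumption turns leg kinematics into a position correction.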
In Proceedings of Dynamic Walking, Zürich, Switzerland, 2014, clmc (inproceedings)
State estimation plays a crucial role in humanoid locomotion; accurate estimates of the pose and velocity of the robot's base are necessary for walking tasks. Estimation in robotics has long been focused on mobile robot localization, where wheel odometry and exteroceptive sensor data are fused to provide estimates of absolute position and yaw. While wheeled robots are assumed to remain stable and in contact at all times, legged locomotion inherently involves intermittent contacts. This makes stability a concern and complicates odometry-based approaches, distinguishing estimation for legged systems from that for wheeled robots. More recent approaches on quadruped and hexapod platforms make unreasonable assumptions about walking gaits, assume knowledge of the terrain, and use exteroceptive sensor data for corrections. However, the utility of such platforms is their potential for operation in unstructured environments, in which gaits are reactive, the terrain is unknown, and such sensors are unfit for use. Motivated by the task of providing robust and generic state estimation for humanoid robots walking on unknown terrain, we introduce an estimation framework which employs only proprioceptive sensors and knowledge of leg kinematics.
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.