Humans are remarkably capable walkers, navigating everything from daily environments to uneven and uncertain terrain with efficiency and robustness. Despite
the simplicity with which humans appear to ambulate, locomotion is inherently
complex due to its highly nonlinear dynamics and forcing. Yet there is evidence that humans utilize a hierarchical subdivision among cortical control, central pattern generators in the spinal cord, and proprioceptive sensory feedback. This suggests that when humans perform motion primitives, they implement control strategies that are potentially simple and characterizable. If these fundamental mechanisms underlying
human walking can be discovered and formally understood, human-like abilities
can be imbued into the next generation of robotic devices, with far-reaching applications ranging from prostheses to legged robots for space exploration and
disaster response.
This talk presents the process of formally achieving
bipedal robotic walking through controller synthesis inspired by human
locomotion, and demonstrates these methods through examples of experimental
realization on numerous bipedal robots. Motivated
by the hierarchical control present in humans, we begin by viewing the human as
a “black box” and describe outputs, or virtual constraints, that appear to
characterize human walking.
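As a concrete, simplified illustration (the specific output choices and parameterization are assumptions here, not the exact ones used in the talk), such a virtual constraint can be written as the difference between an actual output computed from the robot's configuration and a desired output fit to human walking data:
\[
  y(q) \;=\; y^{a}(q) \;-\; y^{d}\big(\tau(q),\,\alpha\big),
\]
where $y^{a}(q)$ collects quantities such as hip position and knee and torso angles as functions of the configuration $q$, $\tau(q)$ is a monotonic phase variable, and $y^{d}$ is a parameterized fit (with parameters $\alpha$) to the corresponding human data. Driving $y \to 0$ drives the robot's outputs to those of the human.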
By considering the equivalent outputs for the bipedal robot, a novel type of
control Lyapunov function (CLF) can be constructed that drives the outputs of
the robot to the outputs of the human; moreover, the parameters of this CLF can
be optimized so that stable robotic walking is provably achieved while
simultaneously producing outputs of the robot that are as close as possible to
those of a human. This CLF forms the
basis for a Quadratic Program (QP) yielding locomotion that dynamically
accounts for torque and contact constraints.
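As a rough sketch of this construction (the precise cost, relaxation terms, and constraint forms used in practice are assumptions here), a rapidly exponentially stabilizing CLF $V(\eta)$ on the output coordinates $\eta = (y,\dot{y})$ yields a pointwise convergence inequality that can be imposed in a QP over the control input $u$, alongside torque limits and contact conditions:
\begin{align*}
  u^{*}(x) \;=\; \min_{u \in \mathbb{R}^{m}} \;\; & u^{\top} u \\
  \text{s.t.} \;\; & L_{f}V(x) + L_{g}V(x)\,u \;\le\; -\tfrac{\gamma}{\varepsilon}\,V(x) && \text{(CLF convergence)} \\
  & u_{\min} \;\le\; u \;\le\; u_{\max} && \text{(torque limits)} \\
  & A_{c}(x)\,u \;\le\; b_{c}(x) && \text{(contact conditions)}
\end{align*}
Here $\gamma,\varepsilon > 0$ set the convergence rate, and $A_{c},b_{c}$ are placeholders for the constraints keeping ground reaction forces consistent with contact (e.g., inside the friction cone). In practice the CLF inequality is often relaxed with a penalized slack variable so that the QP remains feasible when the constraints conflict.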
The end result is the generation of bipedal robotic walking that is
remarkably human-like and is experimentally realizable, together with a novel
control framework for highly dynamic behaviors on bipedal robots. This is evidenced by the demonstration of the resulting controllers on multiple robotic platforms, including AMBER 1 and 2, NAO, ATRIAS, and MABEL. Furthermore, these
methods form the basis for achieving a variety of walking behaviors, including multi-domain and rough-terrain locomotion, and have been applied to the control of prostheses.