- Talk: GRASP Special Seminar: Jeremy Gillula, University of California at Berkeley, "Guaranteeing Safe Online Machine Learning via Reachability Analysis"
Date: Thursday, January 16, 2014 - 12pm to 1pm
Presenters: Jeremy Gillula
Alternate Location: Moore 317 (inside Moore 316)
Reinforcement learning has proven itself to be a powerful technique
in robotics; however, it has rarely been employed to learn in a
hardware-in-the-loop environment because spurious
training data could cause a robot to take an unsafe (and potentially
catastrophic) action. We will present a method for overcoming this
limitation known as Guaranteed Safe Online Learning via Reachability
(GSOLR), in which the control outputs from the reinforcement
learning algorithm are wrapped inside another controller based on
reachability analysis that seeks to guarantee safety despite the
actions of the learning algorithm. After defining the relevant
backwards reachability constructs and
explaining how they can be calculated, we will formalize the concept
of GSOLR and show how it can be used on a real-world target tracking
problem, in which an observing quadrotor helicopter must keep a
target ground vehicle with unknown (but bounded) dynamics inside its
field of view at all times, while simultaneously attempting to build
a motion model of the target. Extensions to GSOLR will then be
presented, which automatically keep the system's safety constraints
neither too liberal nor too conservative, giving the machine learning
algorithm running in parallel the widest possible latitude while still
guaranteeing system safety. These extensions
will be demonstrated on the task of safely learning an altitude
controller for a quadrotor helicopter. These examples demonstrate
the GSOLR framework's robustness to errors in machine learning
algorithms, and indicate its potential for allowing high-performance
machine learning systems to be used in safety-critical situations.
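The safety-wrapping idea at the heart of GSOLR can be sketched in a few lines. The following is a minimal, assumed illustration (a 1-D double-integrator altitude model with a braking-based reachability check), not the controllers or sets from the talk:

```python
# Illustrative GSOLR-style safety wrapper (assumed dynamics and bounds).
# A learned control output is passed through only when the state is
# provably safe; otherwise a reachability-based controller overrides it.

Z_MIN, Z_MAX = 1.0, 10.0   # safe altitude band (m), assumed
A_MAX = 2.0                # max commanded acceleration (m/s^2), assumed

def inside_unsafe_backreach(z, vz):
    """Conservative check: would full braking still leave the band?

    For a double integrator, the stopping displacement under maximum
    braking is vz*|vz| / (2*A_MAX); if the stopping point exits
    [Z_MIN, Z_MAX], the state lies in the backwards reachable set of
    the unsafe set and the learner must not be trusted.
    """
    stop = vz * abs(vz) / (2.0 * A_MAX)
    return not (Z_MIN <= z + stop <= Z_MAX)

def safe_control(z, vz, learned_u):
    """Apply the learned control unless safety requires an override."""
    if inside_unsafe_backreach(z, vz):
        # safety override: brake as hard as possible back toward the band
        return -A_MAX if vz > 0 else A_MAX
    # learner acts freely, saturated to the actuator limits
    return max(-A_MAX, min(A_MAX, learned_u))
```

The key property is that the override fires based only on the reachability check, so even an arbitrarily bad learned command cannot drive the state into the unsafe set.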
- Talk: GRASP Special Seminar: Jnaneshwar Das, University of Southern California, "Data-driven Robotic Sampling for Marine Ecosystem Monitoring"
Date: Friday, January 17, 2014 - 11am to 12pm
Presenters: Jnaneshwar Das
Alternate Location: Levine 307 (3330 Walnut Street)
Robotic sampling is attractive in many field robotics applications that
require persistent collection of physical samples for ex-situ analysis.
Examples abound in the earth sciences in studies involving the
collection of rock, soil, and water samples for lab analysis. The
desirability of samples in these domains can be expressed as a property
that cannot be determined in-situ, but can be predicted by covariates
measurable in real-time using sensors carried aboard a robot. In our
test domain, marine ecosystem monitoring, accurate measurement of
plankton abundance requires lab analysis of water samples, but
predictions using physical and chemical properties measured in real-time
by sensors carried aboard an autonomous underwater vehicle (AUV) can
guide sample collection decisions. We present a principled approach to
minimize cumulative regret of plankton samples acquired by an AUV over
multiple surveys in batches of k water samples per survey. Samples are
labeled at the end of each survey, and used to update a probabilistic
model that guides sampling in subsequent surveys. The problem is
formulated in an online setting: given a predetermined survey duration
and a probabilistic model learned from earlier surveys, the AUV makes
irrevocable sample collection decisions on a sequential stream of
candidates, with no knowledge of the future. Our experimental results
are based on extensive retrospective studies emulating 100 campaigns,
each composed of 17 surveys. The campaigns were emulated by mining
historical field data collected by an AUV operating at depths of up to
100 m over a 40 sq. km area in an 8 day period. These studies establish
the efficacy of the approach - beginning with no prior, successive
surveys by the AUV result in samples with progressively higher
abundance of a pre-specified type of plankton. Additionally, we
carried out a one-day field trial with an AUV operating at depths of up
to 30 m over a 1 sq. km area. Beginning with a prior learned from data
collected and labeled in an earlier campaign, the AUV field survey
resulted in samples with a high abundance of a pre-specified type of
plankton - a potentially toxinogenic alga of interest to marine
ecologists. This is the first time such a field experiment has been
carried out in its entirety in a data-driven fashion, in effect 'closing
the loop' on a significant and relevant ecosystem monitoring problem.
Although the experimental context for this work is marine ecosystem
monitoring, the approach is well-suited for autonomous and persistent robotic
observation of any property that cannot be measured in-situ, but
possesses observable covariates, thus opening up the potential for
advanced autonomous robotic exploration of unstructured environments
that are inaccessible to humans.
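The irrevocable, streaming structure of the sampling decision can be illustrated with a minimal sketch. The threshold rule below is an assumed stand-in for the talk's probabilistic model: it takes a candidate when its predicted score beats a threshold, or when the remaining stream can no longer fill the remaining sampler slots:

```python
def select_k_online(scores, k, threshold):
    """Irrevocably pick at most k of the streamed candidates.

    `scores` is the sequence of real-time abundance predictions seen
    by the vehicle, in arrival order; decisions are made with no
    knowledge of future candidates. The fixed `threshold` is a
    simplifying assumption (in the actual approach the acceptance
    rule comes from a probabilistic model updated between surveys).
    """
    chosen = []
    n = len(scores)
    for i, s in enumerate(scores):
        slots = k - len(chosen)
        if slots == 0:
            break
        remaining = n - i
        # take if promising, or if we are forced to fill the batch
        if s >= threshold or remaining <= slots:
            chosen.append(i)
    return chosen
```

Updating the threshold (or, in the real system, the model) after each survey's samples are labeled is what drives the regret down across a campaign.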
- Talk: Spring 2014 GRASP Seminar: Joelle Pineau, McGill University, "Learning Socially Adaptive Navigation Strategies : Lessons from the SmartWheeler Project"
Date: Friday, January 31, 2014 - 11am to 12pm
Presenters: Joelle Pineau
A key skill for mobile robots is the ability to navigate efficiently
through their environment. In the case of social or assistive robots, this
involves navigating through human crowds. Typical performance criteria,
such as reaching the goal using the shortest path, are not appropriate in
such environments, where it is more important for the robot to move in a
socially acceptable manner. In this talk I will describe new methods based
on imitation and reinforcement learning which we have developed to allow
robots to achieve socially adaptive path planning in human environments.
Performance of these methods will be illustrated using a smart power
wheelchair developed in our group, called the SmartWheeler.
- Talk: Spring 2014 GRASP Seminar: Aaron Dollar, Yale University, "Reengineering the Hand: "Mechanical Intelligence" in Robotic Manipulation"
Date: Friday, February 7, 2014 - 11am to 12pm
Presenters: Aaron Dollar
Despite decades of research, current robotic systems are unable
to reliably grasp and manipulate a wide range of unstructured objects in
human environments. The somewhat traditional approach of attempting to copy
the immense mechanical complexity of the human hand in a stiff "robotic"
mechanism, and the subsequently required levels of sensing and control, has
not yet been successful. Alternatively, with careful attention to the design
of the mechanics of hands, including adaptive underactuated transmissions and
carefully tuned compliance, we have been able to achieve a level of
dexterity and reliability as yet unseen in the robotics community. I will
describe ongoing efforts to further develop grasping and dexterous
manipulation capabilities in engineered systems as well as our work in
studying human hand function to guide some of the efforts.
- Talk: Spring 2014 GRASP Seminar: Al Rizzi, Boston Dynamics, "Legged Robotics at Boston Dynamics"
Date: Friday, February 14, 2014 - 11am to 12pm
Presenters: Alfred Rizzi
Only about half the Earth's landmass is accessible to wheeled and
tracked vehicles, yet people and animals can go almost everywhere on
foot. Our goal is to develop novel locomotion systems that can go
anywhere people and animals go. The systems we build combine dynamic
control systems, actuated mechanisms and sensing to travel on terrain
that is too rocky, sandy, muddy, snowy, wet or steep for existing
conventional vehicles. This presentation will discuss progress at Boston
Dynamics in building such systems, including WildCat, LS3, Atlas, RHex,
PETMAN and others.
- Talk: MEAM / GRASP Seminar: Matthew Turpin, University of Pennsylvania, "Scalable Trajectory Computation for Large Teams of Interchangeable Robots Applied to Quadrotor MAVs"
Date: Wednesday, February 19, 2014 - 1pm to 2pm
Presenters: Matthew Turpin
Alternate Location: Levine 307 (3330 Walnut Street)
Large teams of robots have been implemented to great success in Kiva's
automated warehouses as well as UPenn's and KMel Robotics' swarms of
quadrotors. In settings such as these, robots must plan paths which
avoid collisions with other robots and obstacles in the environment.
Unfortunately, trajectory planning for large teams of robots generally
suffers from either the curse of dimensionality or lack of completeness.
I will demonstrate that relaxing the assumption of labeling each robot
and specifying a fixed assignment of robots to destinations in the
trajectory generation problem yields a number of computational and
performance benefits. My algorithm to solve this Concurrent Assignment
and Planning of Trajectories (CAPT) problem has bounded computational
complexity of O(N^3), preserves completeness properties of a user
specified single agent motion planner, and tends to minimize effort
exerted by any one robot. This algorithm generates solutions to variants
of the CAPT problem in settings ranging from kinematic robots in an
obstacle free environment to teams of robots with 4th order dynamics in a
cluttered environment. Finally, I will show experimental results of
the algorithm applied on teams of second order aquatic vehicles as well
as on quadrotor micro aerial vehicles. I will also outline how time
consuming aspects of this approach can be parallelized and discuss
possible decentralized implementations.
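The assignment step of CAPT, matching interchangeable robots to goals so that total squared travel distance is minimized, can be sketched by brute force for a tiny team. This is only an illustration of the objective; at scale the same matching is solved in O(N^3) with a Hungarian-style algorithm:

```python
from itertools import permutations

def capt_assign(starts, goals):
    """Return the goal index assigned to each robot, minimizing total
    squared travel distance.

    Brute-force over permutations, so only suitable for tiny teams;
    this sketch illustrates the unlabeled-assignment objective rather
    than the O(N^3) algorithm from the talk.
    """
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    best = min(permutations(range(len(goals))),
               key=lambda perm: sum(d2(starts[i], goals[j])
                                    for i, j in enumerate(perm)))
    return list(best)
```

Note how dropping the labels pays off: two robots heading to each other's nearest goals get swapped automatically, shortening (and, for well-separated robots, de-conflicting) the straight-line trajectories.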
- Talk: CANCELED: Spring 2014 GRASP Seminar: Kris Hauser, Indiana University, "Motion Planning for Real World Robots"
Date: Friday, February 21, 2014 - 11am to 12pm
Presenters: Kris Hauser
Motion planning -- the problem of computing physical actions to complete a
specified task -- has inspired some of the most theoretically rigorous
and beautiful results in robotics research. But as robots proliferate
in real-world applications like household service, driverless cars,
warehouse automation, minimally-invasive surgery, search-and-rescue, and
unmanned aerial vehicles, the classical theory appears to have fallen
behind the pace of practice. At odds with the "clean" assumptions of
theory, the reality is that robots must handle large amounts of noisy
sensor data, uncertainty, underspecified models, nonlinear and
hysteretic dynamic effects, exotic objective functions and constraints,
and real-time demands. This talk will describe efforts to bring theory
up to speed, in the context of three projects: 1) ladder climbing in the
DARPA Robotics Challenge; 2) intelligent user interfaces for
human-operated robots; and 3) navigation amongst many moving obstacles.
I will present new planning algorithms and architectures whose
performance is backed both by theoretical guarantees and empirical results.
- Talk: Spring 2014 GRASP Seminar: Ryan Eustice, University of Michigan, "SLAM in the Wild: Robust and Persistent Visual SLAM for Autonomous Underwater Hull Inspection"
Date: Friday, February 28, 2014 - 11am to 12pm
Presenters: Ryan Eustice
The field of simultaneous localization and mapping (SLAM) has made tremendous progress in the last couple of decades, to the point where we have mature-enough methods and algorithms to explore applications on interesting scales both spatially and temporally. In this talk we discuss some of our current efforts in deploying large-scale, long-term SLAM systems in real-world field applications, and in particular, our current work in autonomous underwater ship hull inspection. We will discuss our developments in modeling the visual saliency of underwater imagery for pose-graph SLAM, how this saliency measure can be used within an active SLAM planning paradigm, and our development of generic linear constraints---a principled framework for pose-graph reduction, which is important for controlling multi-session SLAM graph complexity.
- Talk: Spring 2014 GRASP Seminar: Leila Takayama, Google[x], "Designing for the Seemingly Nonsensical Ways People See, Treat, and Use Robots"
Date: Friday, March 7, 2014 - 11am to 12pm
Presenters: Leila Takayama
As robots are entering our everyday lives, it is becoming
increasingly important to understand how untrained people will interact
with robots. Fortunately, untrained people already interact with a
variety of robotic agents (withdrawing cash from ATMs, driving cars with
anti-lock brakes) so we are not completely starting from scratch. In
the moment of those interactions with robotic agents,
people behave in ways that do not necessarily align with the rational
belief that robots are just plain machines. Through a combination of
controlled experiments and field studies, this talk will examine the
ways that untrained people interact with robotic agents,
including (1) how we interact with personal robots, and (2) how we
interact through telepresence robots. Drawing from theories of
human-computer interaction and this type of empirical research, we
provide implications for both theory and the design of interactive robots.
- Talk: GRASP Special Seminar: Sergio Pequito, Carnegie Mellon University, "A Framework for Structural Input/Output and Control Configuration Selection of Large-Scale Systems"
Date: Thursday, March 13, 2014 - 11am to 12pm
Presenters: Sérgio Pequito
Alternate Location: Levine 307 (3330 Walnut Street)
The structural control system design
consists mainly of two steps: input/output (I/O) selection and control
configuration (CC) selection. The first is devoted to the problem of
determining how many actuators/sensors are needed and where they
should be placed in the plant to obtain some desired property. Control
configuration is related to the decentralized control problem and is
dedicated to the task of selecting which outputs (sensors) should be
available for feedback and to which inputs (actuators) in order to
achieve a predefined goal. The choice of inputs and outputs affects the
performance, complexity and costs of the control system. Due to the
combinatorial nature of the selection problem, an efficient and
systematic method is required to complement the designer intuition,
experience and physical insight.
Motivated by the above, this presentation addresses the structure
control system design taking explicitly into consideration the possible
application to large-scale systems. We provide an efficient framework to
solve the following major minimization problems: i) selection of the
minimum number of manipulated/measured variables to achieve structural
controllability/observability of the system, and ii) selection of the
minimum number of measured and manipulated variables, and feedback
interconnections between them such that the system has no structural
fixed modes. Contrary to what might be expected, we show that it is
possible to obtain the global solution of the aforementioned
minimization problems in polynomial complexity in the number of the
state variables of the system. To this effect, we propose a methodology
that is efficient (polynomial complexity) and unified in the sense that
it solves simultaneously the I/O and the CC selection problems. This is
done by exploiting the implications of the I/O selection in the solution
to the CC problem. An example illustrates the main features of the proposed framework.
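As a loosely related illustration of structural input selection, the classic maximum-matching characterization of minimum driver nodes (a simplified relative of this framework, not necessarily the talk's exact formulation) can be computed directly on the system's structure graph:

```python
def min_driver_nodes(edges, n):
    """Minimum number of inputs for structural controllability of a
    structured linear system with n state variables.

    Uses the maximum-matching characterization: unmatched state nodes
    each need a dedicated input (with at least one input overall).
    `edges` lists pairs (u, v) meaning state u appears in the update
    of state v. This is a simplified illustration of structural
    input selection, not the I/O + CC methodology from the talk.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    match = [-1] * n  # match[v] = state currently matched to v

    def augment(u, seen):
        # Kuhn's augmenting-path step for maximum bipartite matching
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match[v] == -1 or augment(match[v], seen):
                match[v] = u
                return True
        return False

    matched = sum(augment(u, set()) for u in range(n))
    return max(1, n - matched)
```

A chain of states needs a single input at its head, while a star driven from one hub leaves every leaf unmatched, so each leaf needs its own input.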
- Talk: GRASP Special Seminar (Joint Talk): Daigo Muramatsu & Ikuhisa Mitsugami, Osaka University
Date: Thursday, March 13, 2014 - 2pm to 3pm
Presenters: Daigo Muramatsu & Ikuhisa Mitsugami
Alternate Location: Levine 307
Presenter: Daigo Muramatsu
"Cross-view Gait Recognition"
Abstract: Gait recognition is a biometric method used to recognize a person from their walking style, which can be acquired from a camera. Unlike many biometric techniques such as fingerprint, iris, or face recognition, gait recognition can authenticate a person at some distance from the camera, because it retains high accuracy even when the resolution of an image sequence is relatively low. However, the accuracy of gait recognition is often degraded by view differences. In this talk, we focus on the view issue in gait recognition and discuss some solutions to the accuracy degradation caused by view differences.
Presenter: Ikuhisa Mitsugami
"3-D Measurement and Analysis of Walking Person by Range Sensing"
Abstract: Consumer depth sensors, e.g., the Microsoft Kinect, are getting more attention because of their low cost and ability to obtain 3-D measurements. We adopt such depth sensors for gait analysis. In this talk, we introduce some of our achievements. One recent achievement is full-body reconstruction of a walking person. Since the scene is dynamic, we cannot achieve full-body reconstruction simply by merging the asynchronous range data from multiple Kinects. We thus propose a synchronization method to virtually obtain depth data at the same moment. We also introduce a new gait feature representation based on range observations. It is basically an extension of an existing silhouette-based feature, but shows promising performance in the person authentication task.
- Talk: GRASP Special Seminar: Hyun Soo Park, Carnegie Mellon University, "Understanding a Social Scene from Social Cameras"
Date: Friday, March 14, 2014 - 1pm to 2pm
Presenters: Hyun Soo Park
Alternate Location: Levine 512
A social camera is a camera carried or worn by a member of a social
group (e.g., a smartphone camera, a hand-held camcorder, or a wearable
camera). These cameras are becoming increasingly immersed in our social
lives and closely capture our social activities. In this talk, I argue
that social cameras are the ideal sensors for social scene
understanding, as they inherit social signals such as the gaze behavior
of the people carrying them. I will present a computational
representation for social scene understanding from social cameras.
In the first part of my talk, I will show how visible social signals,
such as body gestures, gaze directions, or facial expression, can be
recovered in 3D from social cameras. This work includes 3D trajectory
reconstruction and motion capture from body-mounted cameras. The second
part of the talk will focus on analysis of the relationship between the
social signals using 3D joint attention. This analysis allows us to
predict social gaze behaviors.
- Talk: Spring 2014 GRASP Seminar: Martial Hebert, Carnegie Mellon University, "Challenges in Semantic Perception for Autonomous Systems"
Date: Friday, March 21, 2014 - 11am to 12pm
Presenters: Martial Hebert
Despite considerable progress in all aspects of machine perception,
using machine vision in autonomous systems remains a formidable
challenge. This is especially true in applications such as robotics, in
which even a small error rate in the perception system can have
catastrophic consequences for the overall system. This talk will review a few ideas that could be used to start
formalizing the issues revolving around integrating vision systems.
They include a systematic approach to the problem of self-assessment of
vision algorithms and predicting quality metrics on the inputs to the
vision algorithms, ideas on how to manage multiple hypotheses generated
from a vision algorithm rather than relying on a single "hard" decision,
and methods for using external (non-visual) domain- and task-dependent
information. These ideas will be illustrated with examples of recent
work in vision for scene understanding, depth estimation, and object recognition.
- Talk: Spring 2014 GRASP Seminar: Stefanie Tellex, Brown University, "Natural Language and Robotics"
Date: Friday, March 28, 2014 - 11am to 12pm
Presenters: Stefanie Tellex
Natural language can be a powerful, flexible way for people to interact with robots. A particular challenge for designers of embodied robots, in contrast to disembodied methods such as
phone-based information systems, is that natural language
understanding systems must map between linguistic elements and aspects
of the external world, thereby solving the so-called symbol grounding problem. This talk describes a probabilistic framework for robust interpretation of grounded natural language, called Generalized Grounding Graphs (G^3). The G^3 framework leverages the structure of
language to define a probabilistic graphical model that maps between elements in the language and aspects of the external world. It can
compose learned word meanings to understand novel commands that may have never been seen during training. Taking a probabilistic approach
enables the robot to employ information-theoretic dialog strategies,
asking targeted questions to reduce uncertainty about different parts
of a natural language command. By inverting the model, the robot can generate targeted natural language requests for help from a human
partner. This approach points the way toward more general models of grounded language understanding, which will lead to robots capable of
building world models from both linguistic and non-linguistic input,
following complex grounded natural language commands, and engaging in
fluid, flexible dialog with their human partners.
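The compositional scoring idea behind G^3 can be caricatured in a few lines. The word factors below are hypothetical stand-ins for learned word meanings, and the flat product is a simplification; the actual G^3 framework defines a graphical model over the command's parse structure:

```python
import math

# Hypothetical learned word models: each maps a world object to a
# compatibility score. These factors are illustrative assumptions,
# not learned parameters from the G^3 system.
WORD_FACTORS = {
    "red": lambda obj: 0.9 if obj["color"] == "red" else 0.1,
    "box": lambda obj: 0.9 if obj["shape"] == "box" else 0.1,
}

def ground(words, objects):
    """Ground a command to the object maximizing the product of its
    word factors, composing word meanings that were learned separately.
    Words without a learned factor (e.g. "the") are skipped."""
    def score(obj):
        return math.prod(WORD_FACTORS[w](obj)
                         for w in words if w in WORD_FACTORS)
    return max(objects, key=score)
```

Because the factors compose, the model can score a command like "the red box" even if that exact phrase never appeared in training, which is the property the abstract highlights for novel commands.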
- Talk: AMCS/GRASP Seminar: Marty Golubitsky, Ohio State University, "Patterns of Synchrony: From Animal Gaits to Binocular Rivalry"
Date: Friday, March 28, 2014 - 2pm to 3pm
Presenters: Marty Golubitsky
Alternate Location: Towne 337
This talk will discuss previous work on quadrupedal gaits and recent work on a generalized model for binocular rivalry proposed by Hugh Wilson. Both applications show how rigid phase-shift synchrony in periodic solutions of coupled systems of differential equations can help understand high level collective behavior in the nervous system. For gaits the symmetries predict unexpected gaits and for binocular rivalry the symmetries predict unexpected percepts.
- Talk: GRASP Special Seminar: Masaki Ogura, Texas Tech University, "Stability Analysis of Switched Linear Systems with Non-Traditional Switching Signals"
Date: Monday, April 7, 2014 - 2pm to 3pm
Presenters: Masaki Ogura
Alternate Location: Levine 307
The talk presents my recent research on the stability analysis of
switched systems, which are a class of dynamical systems whose
dynamics can abruptly change. Examples include the control of systems
over unreliable networks or with a failure-prone controller. In this
talk I will discuss a fundamental property called stability of
switched linear systems. I will in particular focus on the case when
switching is modeled by non-traditional stochastic processes, in
particular, by non-Markovian processes.
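A classic phenomenon motivating this kind of stability analysis is that switching between individually stable modes can destabilize a system. The toy discrete-time example below uses assumed matrices chosen purely for illustration: each mode is Schur stable on its own, yet alternating between them blows up:

```python
# Two discrete-time modes, each with both eigenvalues at 0.9 (stable).
A1 = [[0.9, 1.0], [0.0, 0.9]]
A2 = [[0.9, 0.0], [1.0, 0.9]]

def step(A, x):
    """One discrete-time update x <- A x for a 2x2 system."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def simulate(switches, x=(1.0, 1.0)):
    """Apply a switching signal (sequence of mode indices 0/1)."""
    x = list(x)
    for s in switches:
        x = step(A1 if s == 0 else A2, x)
    return x
```

Running either mode alone drives the state to zero, but the alternating product A2*A1 has a spectral radius above 2, so the alternating signal diverges. This is why the switching signal's structure (Markovian or not) matters so much for stability.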
- Talk: Spring 2014 GRASP Seminar: Andrea Thomaz, Georgia Institute of Technology, "Designing Learning Interactions for Robots"
Date: Friday, April 11, 2014 - 11am to 12pm
Presenters: Andrea Thomaz
In this talk I present recent work from the Socially Intelligent
Machines Lab at Georgia Tech. One of the focuses of our lab is on
Socially Guided Machine Learning, building robot systems that can learn
from everyday human teachers. We look at standard Machine Learning
interactions and redesign interfaces and algorithms to support the
collection of learning input from naive humans. This talk covers
results on building computational models of reciprocal social
interactions, high-level task goal learning, low-level skill learning,
and active learning interactions using several humanoid robot platforms.
- Talk: Spring 2014 GRASP Seminar: E. Michael Golda, Navy Sea Systems Command Carderock Division, "A Brief Overview of United States Navy Machinery Automation Challenges"
Date: Friday, April 18, 2014 - 11am to 12pm
Presenters: E. Michael Golda
A large naval warship is among the most complex structures built by man.
Technology trends over the last 70 years have made automation a necessity
for controlling the components, systems, and integrated systems of systems
that make up a warship. The presentation will provide a brief introduction
to the ship as a system of systems. The evolution of the Navy's automation
toward intelligent agent-based distributed controls will be described. In
addition, opportunities for educational support and joint research with the
Navy will be discussed.