Presenter: Martial Hebert (Homepage)
Friday March 21, 2014 from 11:00am to 12:00pm
Despite considerable progress in all aspects of machine perception,
using machine vision in autonomous systems remains a formidable
challenge. This is especially true in applications such as robotics, in
which even a small error rate in the perception system can have
catastrophic consequences for the overall system.
Martial Hebert is a Professor in the Robotics Institute at Carnegie
Mellon University. His interests include computer vision, especially
recognition in images and video data, model building and object recognition from 3D data, and
perception for mobile robots and for intelligent vehicles. His group has developed
approaches for object recognition and scene analysis in images, 3D point clouds, and video sequences.
In the area of machine perception for robotics, his group has developed techniques for people detection, tracking, and prediction, and for understanding the environment of ground vehicles from sensor data. He has served on the editorial boards of the IEEE Transactions on Robotics and Automation, the IEEE Transactions on Pattern Analysis and Machine Intelligence, and the International Journal of Computer Vision (for which he currently serves as Editor-in-Chief). He was Program Chair of the 2009 International Conference on Computer Vision, General Chair of the 2005 IEEE Conference on Computer Vision and Pattern Recognition, and Program Chair of the 2013 edition of this conference.
Presenter: Leila Takayama (Homepage)
Friday March 7, 2014 from 11:00am to 12:00pm
As robots are entering our everyday lives, it is becoming
increasingly important to understand how untrained people will interact
with robots. Fortunately, untrained people already interact with a
variety of robotic agents (withdrawing cash from ATMs, driving cars with
anti-lock brakes) so we are not completely starting from scratch. In
the moment of those interactions with robotic agents,
people behave in ways that do not necessarily align with the rational
belief that robots are just plain machines.
Leila Takayama is a senior user experience researcher at Google[x], a Google lab that aims for moonshots in science and technology. She is also a Young Global Leader and a Global Agenda Council Member for the area of robotics and smart devices for the World Economic Forum. In 2012, she was named a TR35 winner (Technology Review's Top 35 innovators under 35) and one of the 100 most creative people in business by Fast Company. Prior to joining Google[x], Leila was a research scientist and area manager for human-robot interaction at Willow Garage. With a background in Cognitive Science, Psychology, and Human-Computer Interaction, she examines human encounters with new technologies. Dr. Takayama completed her PhD in Communication at Stanford University in June 2008, advised by Professor Clifford Nass. She also holds a PhD minor in Psychology from Stanford, an MA in Communication from Stanford, and BAs in Psychology and Cognitive Science from UC Berkeley (2003). During her graduate studies, she was a research assistant in the User Interface Research (UIR) group at Palo Alto Research Center (PARC). http://www.leilatakayama.org
Presenter: Ryan Eustice (Homepage)
Friday February 28, 2014 from 11:00am to 12:00pm
The field of simultaneous localization and mapping (SLAM) has made tremendous progress in the last couple of decades, to the point where we have mature-enough methods and algorithms to explore applications on interesting scales, both spatially and temporally. In this talk we discuss some of our current efforts in deploying large-scale, long-term SLAM systems in real-world field applications, and in particular, our current work in autonomous underwater ship hull inspection. We will discuss our developments in modeling the visual saliency of underwater imagery for pose-graph SLAM.
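As a rough illustration of the pose-graph formulation the abstract mentions (not the speaker's method), here is a minimal 1D example with made-up odometry and loop-closure constraints. In 1D the relative-pose constraints are linear, so a single least-squares solve reconciles the noisy odometry with the loop closure:

```python
import numpy as np

# Hypothetical 1D pose graph: four poses, three noisy odometry edges,
# and one loop-closure edge. Edge (i, j, z) says "x_j - x_i should be z".
edges = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9),   # odometry
         (0, 3, 3.05)]                             # loop closure

n = 4
A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
for k, (i, j, z) in enumerate(edges):
    A[k, i], A[k, j], b[k] = -1.0, 1.0, z
A[-1, 0] = 1.0  # softly anchor x_0 at 0 to fix the gauge freedom

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # optimized poses reconciling odometry with the loop closure
```

Real pose-graph SLAM works in SE(2)/SE(3), where the constraints are nonlinear and are solved iteratively (e.g., Gauss-Newton), but the structure of the problem is the same.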
Ryan M. Eustice is an Associate Professor in Naval Architecture & Marine Engineering at the University of Michigan, with additional appointments in the Department of Electrical Engineering and Computer Science, and in the Department of Mechanical Engineering. He received his PhD in Ocean Engineering in 2005 from the Massachusetts Institute of Technology / Woods Hole Oceanographic Institution Joint Program, and was a postdoctoral scholar at Johns Hopkins University. His research interests include autonomous navigation and mapping, estimation, computer vision, and perception for mobile robotics on land, at sea, and in the air. He is an Associate Editor for the IEEE Transactions on Robotics, an Associate Editor for the IEEE Journal of Oceanic Engineering, and a recipient of young faculty awards from the Office of Naval Research and the National Science Foundation. He founded and directs the Perceptual Robotics Laboratory (PeRL) at the University of Michigan.
Presenter: Kris Hauser (Homepage)
Friday February 21, 2014 from 11:00am to 12:00pm
Motion planning -- the problem of computing physical actions to complete a
specified task -- has inspired some of the most theoretically rigorous
and beautiful results in robotics research. But as robots proliferate
in real-world applications like household service, driverless cars,
warehouse automation, minimally-invasive surgery, search-and-rescue, and
unmanned aerial vehicles, the classical theory appears to have fallen
behind the pace of practice. At odds with the "clean" assumptions of
theory, the reality is that robots must handle large amounts of noisy
sensor data.
Kris Hauser received his PhD in Computer Science from Stanford University in 2008, earned bachelor's degrees in Computer Science and Mathematics from UC Berkeley in 2003, and worked as a postdoctoral fellow at UC Berkeley's Automation Lab. He has held his current position as Assistant Professor in Indiana University's School of Informatics and Computing since 2009, where he directs the Intelligent Motion Lab. He is a recipient of a Stanford Graduate Fellowship, a Siebel Scholar Fellowship, and the NSF CAREER award. His research interests include robot motion planning and control, semiautonomous robots, and integrating perception and planning. Past applications of this research have included automated vehicle collision avoidance, robotic manipulation, robot-assisted medicine, and legged locomotion.
Lab website: http://www.iu.edu/~motion
Presenter: Alfred Rizzi (Homepage)
Friday February 14, 2014 from 11:00am to 12:00pm
Only about half the Earth's landmass is accessible to wheeled and tracked vehicles, yet people and animals can go almost everywhere on foot. Our goal is to develop novel locomotion systems that can go anywhere people and animals go. The systems we build combine dynamic control systems, actuated mechanisms and sensing to travel on terrain that is too rocky, sandy, muddy, snowy, wet or steep for existing conventional vehicles. This presentation will discuss progress at Boston Dynamics in building such systems, including WildCat, LS3, Atlas, RHex, PETMAN and others.
Al Rizzi is the Chief Robotics Scientist at Boston Dynamics, a company that develops some of the world's most sophisticated dynamic robots, including WildCat, LS3, BigDog, Atlas, Petman and others. These robots combine advanced locomotion control systems with innovative mechanical designs and are designed to enable travel on rough terrain. Prior to joining Boston Dynamics in 2006, he was an Associate Research Professor in the Robotics Institute at Carnegie Mellon University, where he directed research projects focused on hybrid sensor-based control of complex and distributed dynamical systems. Dr. Rizzi received the Sc.B. degree in electrical engineering from the Massachusetts Institute of Technology in 1986. He received the M.S. and Ph.D. degrees from Yale University in 1990 and 1994, respectively.
Presenter: Aaron Dollar (Homepage)
Friday February 7, 2014 from 11:00am to 12:00pm
Despite decades of research, current robotic systems are unable
to reliably grasp and manipulate a wide range of unstructured objects in
human environments. The somewhat traditional approach of attempting to copy
the immense mechanical complexity of the human hand in a stiff "robotic"
mechanism, and the subsequently required levels of sensing and control, has
not yet been successful.
Aaron Dollar is the John J. Lee Associate Professor of Mechanical Engineering and Materials Science at Yale University. He earned a B.S. in Mechanical Engineering at the University of Massachusetts at Amherst, S.M. and Ph.D. degrees in Engineering Science at Harvard University, and was a postdoctoral associate at MIT in Health Sciences and Technology and the Media Lab. He directs the Yale GRAB Lab, which conducts research into robotic hands and dexterous manipulation, prosthetics, and assistive and rehabilitation devices. Prof. Dollar is co-founder and editor of RoboticsCourseWare.org, an open repository for robotics pedagogical materials, and is the recipient of a number of awards, including young investigator awards from DARPA, the Air Force Office of Scientific Research, and the National Science Foundation.
Friday January 24, 2014 at 9:00am
Penn Robotics Industry Day January 24, 2014
Krishna P. Singh Center
3205 Walnut St., Philadelphia, PA
9:00 am – 4:00 pm
09:00 – 10:00 Registration and Continental Breakfast
10:00 – 11:00 Presentations
Presenter: Matthew Turpin (Homepage)
Wednesday February 19, 2014 from 1:00pm to 2:00pm
*Alternate Location: Levine 307 (3330 Walnut Street)*
Large teams of robots have been deployed with great success in Kiva's
automated warehouses, as well as in UPenn's and KMel Robotics' swarms of
quadrotors. In settings such as these, robots must plan paths that
avoid collisions with other robots and with obstacles in the environment.
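As a toy illustration of this collision-avoidance problem (not Turpin's algorithm, which targets much larger teams), the sketch below plans grid paths in space-time with a reservation table: robots are planned in priority order, and each later robot treats earlier robots' cells as moving obstacles. Names and the grid are made up, and swap conflicts are ignored for brevity:

```python
from collections import deque

T_MAX = 20  # planning horizon, in timesteps

def plan(grid, start, goal, reserved):
    """BFS in (cell, time) space. `reserved` maps each timestep to the
    set of cells claimed by higher-priority robots (vertex conflicts
    only; edge/swap conflicts are not checked in this sketch)."""
    queue = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while queue:
        (r, c), t, path = queue.popleft()
        if (r, c) == goal:
            return path
        if t >= T_MAX:
            continue
        for dr, dc in [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]:  # wait/move
            nr, nc = r + dr, c + dc
            if not (0 <= nr < len(grid) and 0 <= nc < len(grid[0])):
                continue
            if grid[nr][nc] == 1 or (nr, nc) in reserved.get(t + 1, set()):
                continue
            if ((nr, nc), t + 1) not in seen:
                seen.add(((nr, nc), t + 1))
                queue.append(((nr, nc), t + 1, path + [(nr, nc)]))
    return None

# Two robots on a 2x3 obstacle-free grid; robot A is planned first, and
# robot B must avoid the cells A has reserved at each timestep.
grid = [[0, 0, 0],
        [0, 0, 0]]
path_a = plan(grid, (0, 0), (0, 2), {})
reserved = {t: {cell} for t, cell in enumerate(path_a)}
path_b = plan(grid, (0, 2), (0, 0), reserved)
print(path_a, path_b)
```

Prioritized planning like this is fast but incomplete; centralized and optimal formulations trade computation for stronger guarantees, which is part of what makes the large-team setting hard.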
Matthew Turpin is a PhD candidate in the Department of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania working with Vijay Kumar and Nathan Michael. He works on formation control and trajectory planning for large teams of quadrotor micro-aerial vehicles.
Thursday January 16, 2014
Check out Dr. Vijay Kumar, Dr. Dan Lee, and Dr. Katherine Kuchenbecker participating in a live radio show Thursday, January 16th from 10 to 11 a.m. Eastern.
The show is Radio Times with Marty Moss-Coane, and they will be talking about robotics.
The show is broadcast live, and you can listen by tuning in to WHYY (90.9 FM in the Delaware Valley).
Presenter: Jnaneshwar Das (Homepage)
Friday January 17, 2014 from 11:00am to 12:00pm
*Alternate Location: Levine 307 (3330 Walnut Street)*
Robotic sampling is attractive in many field robotics applications that
require persistent collection of physical samples for ex-situ analysis.
Examples abound in the earth sciences in studies involving the
collection of rock, soil, and water samples for lab analysis. The
desirability of samples in these domains can be expressed as a property
that cannot be determined in-situ, but can be predicted by covariates
measurable in real-time using sensors carried aboard a robot.
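As a minimal sketch of that idea, assuming a simple linear model and made-up covariates (not the speaker's actual system): past lab-analyzed samples are used to fit a predictor of desirability from in-situ covariates, and the robot triggers its sampler when the prediction is high enough:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data from past missions: covariates measurable
# in situ (e.g., temperature, chlorophyll) paired with the lab-measured
# desirability of the corresponding physical samples.
X = rng.normal(size=(50, 2))
w_true = np.array([0.8, -0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

# Least-squares fit: predict sample desirability from covariates.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def should_sample(covariates, threshold=0.5):
    """Trigger the physical sampler when predicted desirability is high."""
    return float(covariates @ w) > threshold

print(should_sample(np.array([2.0, 0.0])))  # high predicted desirability
```

In practice the predictor would be probabilistic and the trigger rule would account for a limited sample budget, but this captures the basic structure: predict offline-measurable desirability from online-measurable covariates, then decide in real time.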
Jnaneshwar Das is a Ph.D. candidate in Computer Science at the Robotic Embedded Systems Laboratory, University of Southern California. His research interests are in the use of robotic assets for the earth sciences. Since the summer of 2009, he has been collaborating with the Monterey Bay Aquarium Research Institute (MBARI) on prediction of plankton distribution in the coastal ocean from in-situ data and physical water samples gathered by autonomous underwater vehicles (AUVs). Prior to this effort, he designed and deployed the first prototype of an Oceanographic Decision Support System, http://odss.mbari.org/, used actively by scientists to monitor assets during large-scale field campaigns. He received his M.S. in Computer Science from USC in 2008.