Presenter: Hyun Soo Park (Homepage)
Friday March 14, 2014 from 1:30pm to 2:30pm
*Alternate Location: Levine 512*
A social camera is a camera carried or worn by a member of a social group (e.g., a smartphone camera, a hand-held camcorder, or a wearable camera). These cameras are becoming increasingly immersed in our social lives and closely capture our social activities. In this talk, I argue that social cameras are ideal sensors for social scene understanding, as they inherently capture social signals such as the gaze behavior of the people carrying them. I will present a computational representation for social scene understanding from social cameras.
Hyun Soo Park is a Ph.D. student in Mechanical Engineering at Carnegie Mellon University under the supervision of Prof. Yaser Sheikh. He is interested in computer vision, graphics, and robotics. The main focus of his research is developing a computational basis for social scene understanding. He interned at Disney Research, Pittsburgh (2011) and Microsoft Research, Redmond (2013). He received his bachelor’s degree from POSTECH, Korea in 2007, and his master’s degree from Carnegie Mellon University in 2009.
Presenter: Marty Golubitsky (Homepage)
Friday March 28, 2014 from 2:00pm to 3:00pm
*Alternate Location: Towne 337*
This talk will discuss previous work on quadrupedal gaits and recent work on a generalized model for binocular rivalry proposed by Hugh Wilson. Both applications show how rigid phase-shift synchrony in periodic solutions of coupled systems of differential equations can help us understand high-level collective behavior in the nervous system. For gaits, the symmetries predict unexpected gaits; for binocular rivalry, they predict unexpected percepts.
Martin Golubitsky is Distinguished Professor of Natural and Mathematical Sciences at the Ohio State University, where he serves as Director of the Mathematical Biosciences Institute. He received his PhD in Mathematics from M.I.T. in 1970 and has been Professor of Mathematics at Arizona State University (1979-83) and Cullen Distinguished Professor of Mathematics at the University of Houston (1983-2008). Dr. Golubitsky works in the fields of nonlinear dynamics and bifurcation theory studying the role of symmetry in the formation of patterns in physical systems and the role of network architecture in the dynamics of coupled systems. His recent research focuses on some mathematical aspects of biological applications: animal gaits, the visual cortex, the auditory system, and coupled systems. Dr. Golubitsky is a Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science (AAAS), the American Mathematical Society (AMS), and the Society for Industrial and Applied Mathematics (SIAM). He is also the 1997 recipient of the University of Houston Esther Farfel Award, the 2001 corecipient of the Ferran Sunyer i Balaguer Prize (for The Symmetry Perspective) and the recipient of the 2009 Moser Lecture Prize of the SIAM Dynamical Systems Activity Group. Dr. Golubitsky was the founding Editor-in-Chief of the SIAM Journal on Applied Dynamical Systems and has served as President of SIAM (2005-06).
Monday February 10, 2014
Love robots and kids? GRASP is looking for two outstanding candidates interested in spending a year fighting poverty in Philadelphia with robots!
Presenter: E. Michael Golda (Homepage)
Friday April 18, 2014 from 11:00am to 12:00pm
A large naval warship is the most complex structure built by man. Technology trends over the last 70 years have made automation a necessity for controlling the components, systems, and integrated systems of systems that make up a warship. The presentation will provide a brief introduction to the ship as a system of systems. The evolution of the Navy’s automation to intelligent agent-based distributed controls will be described. In addition, opportunities for educational support and joint research with the Navy will be discussed.
E. Michael Golda, Ph.D., Chief Technologist in the Machinery Research and Engineering Department at the Naval Surface Warfare Center Carderock Division, Ship Systems Engineering Station, Philadelphia.
As the Chief Technologist of the Machinery Research and Engineering Department, Dr. Golda is responsible for the planning and execution of the research, development, and transition of new components and integrated machinery systems for future surface vessels and undersea vehicles. Dr. Golda joined the Naval Surface Warfare Center, Carderock Division in 1992. He has served in positions of increasing leadership responsibility in machinery research before being selected for his current position. In 2009, Dr. Golda was awarded the Carderock Division Captain Harold E. Saunders Award For Exemplary Technical Management.
Dr. Golda graduated from the United States Naval Academy with a Bachelor of Science in Ocean Engineering. He received a master’s degree and PhD in Materials Science and Engineering from Stevens Institute of Technology, Hoboken, New Jersey. He is the author of twenty-four reports and papers.
Presenter: Martial Hebert (Homepage)
Friday March 21, 2014 from 11:00am to 12:00pm
Despite considerable progress in all aspects of machine perception, using machine vision in autonomous systems remains a formidable challenge. This is especially true in applications such as robotics, in which even a small error rate in the perception system can have catastrophic consequences for the overall system.
Martial Hebert is a Professor at the Robotics Institute, Carnegie Mellon University. His interests include computer vision, especially recognition in images and video data, model building and object recognition from 3D data, and perception for mobile robots and intelligent vehicles. His group has developed approaches for object recognition and scene analysis in images, 3D point clouds, and video sequences. In the area of machine perception for robotics, his group has developed techniques for people detection, tracking, and prediction, and for understanding the environment of ground vehicles from sensor data. He has served on the editorial boards of the IEEE Transactions on Robotics and Automation, the IEEE Transactions on Pattern Analysis and Machine Intelligence, and the International Journal of Computer Vision (for which he currently serves as Editor-in-Chief). He was Program Chair of the 2009 International Conference on Computer Vision, General Chair of the 2005 IEEE Conference on Computer Vision and Pattern Recognition, and Program Chair of the 2013 edition of that conference.
Presenter: Leila Takayama (Homepage)
Friday March 7, 2014 from 11:00am to 12:00pm
As robots are entering our everyday lives, it is becoming increasingly important to understand how untrained people will interact with robots. Fortunately, untrained people already interact with a variety of robotic agents (withdrawing cash from ATMs, driving cars with anti-lock brakes), so we are not completely starting from scratch. In the moment of those interactions with robotic agents, people behave in ways that do not necessarily align with the rational belief that robots are just plain machines.
Leila Takayama is a senior user experience researcher at Google[x], a Google lab that aims for moonshots in science and technology. She is also a Young Global Leader and a Global Agenda Council Member for the area of robotics and smart devices for the World Economic Forum. In 2012, she was named a TR35 winner (Technology Review's Top 35 innovators under 35) and one of the 100 most creative people in business by Fast Company. Prior to joining Google[x], Leila was a research scientist and area manager for human-robot interaction at Willow Garage. With a background in Cognitive Science, Psychology, and Human-Computer Interaction, she examines human encounters with new technologies. Dr. Takayama completed her PhD in Communication at Stanford University in June 2008, advised by Professor Clifford Nass. She also holds a PhD minor in Psychology from Stanford, an MA in Communication from Stanford, and BAs in Psychology and Cognitive Science from UC Berkeley (2003). During her graduate studies, she was a research assistant in the User Interface Research (UIR) group at Palo Alto Research Center (PARC). http://www.leilatakayama.org
Presenter: Ryan Eustice (Homepage)
Friday February 28, 2014 from 11:00am to 12:00pm
The field of simultaneous localization and mapping (SLAM) has made tremendous progress in the last couple of decades, to the point where we have mature-enough methods and algorithms to explore applications on interesting scales both spatially and temporally. In this talk we discuss some of our current efforts in deploying large-scale, long-term SLAM systems in real-world field applications, and in particular, our current work in autonomous underwater ship hull inspection. We will discuss our developments in modeling the visual saliency of underwater imagery for pose-graph SLAM.
Ryan M. Eustice is an Associate Professor in Naval Architecture & Marine Engineering at the University of Michigan, with additional appointments in the Department of Electrical Engineering and Computer Science and the Department of Mechanical Engineering. He received his PhD in Ocean Engineering in 2005 from the Massachusetts Institute of Technology / Woods Hole Oceanographic Institution Joint Program, and was a postdoctoral scholar at Johns Hopkins University. His research interests include autonomous navigation and mapping, estimation, computer vision, and perception for mobile robotics on land, at sea, and in the air. He is an Associate Editor for the IEEE Transactions on Robotics and the IEEE Journal of Oceanic Engineering, and a recipient of young faculty awards from the Office of Naval Research and the National Science Foundation. He founded and directs the Perceptual Robotics Laboratory (PeRL) at the University of Michigan.
Presenter: Kris Hauser (Homepage)
Friday February 21, 2014 from 11:00am to 12:00pm
Motion planning -- the problem of computing physical actions to complete a specified task -- has inspired some of the most theoretically rigorous and beautiful results in robotics research. But as robots proliferate in real-world applications like household service, driverless cars, warehouse automation, minimally-invasive surgery, search-and-rescue, and unmanned aerial vehicles, the classical theory appears to have fallen behind the pace of practice. At odds with the "clean" assumptions of theory, the reality is that robots must handle large amounts of noisy data.
Kris Hauser received his PhD in Computer Science from Stanford University in 2008 and bachelor's degrees in Computer Science and Mathematics from UC Berkeley in 2003, and worked as a postdoctoral fellow at UC Berkeley’s Automation Lab. He has held his current position as Assistant Professor in Indiana University's School of Informatics and Computing since 2009, where he directs the Intelligent Motion Lab. He is a recipient of a Stanford Graduate Fellowship, a Siebel Scholar Fellowship, and the NSF CAREER award. His research interests include robot motion planning and control, semiautonomous robots, and integrating perception and planning. Past applications of this research have included automated vehicle collision avoidance, robotic manipulation, robot-assisted medicine, and legged locomotion.
Lab website: http://www.iu.edu/~motion