Presenter: GRASP Faculty
Friday September 6, 2013 from 11:00am to 12:00pm
Presenter: Byron Stanley
Friday December 6, 2013 from 11:00am to 12:00pm
Few, if any, autonomous ground vehicles (AGVs) navigate successfully in adverse conditions, such as snow or GPS-denied areas. A fundamental limitation is their reliance on optical sensors, such as LIDAR or imagers, fused with GPS/INS solutions to localize themselves. When the optical surfaces become distorted or obscured, as with snow, dust, or heavy rain, there is no robust way to localize the vehicle to the required accuracy.
Byron Stanley, co-inventor of the GPR Localization technology, has led the development of the autonomous systems component of the world's first autonomous vehicle to be guided via GPR localization. He has served as the Principal Investigator for several autonomous ground vehicle programs at MIT Lincoln Laboratory, including indoor mapping and outdoor navigation. He has been developing robotics and control systems for ground, maritime, and airborne applications as a full Technical Staff Member in the Control Systems Engineering group at MIT Lincoln Laboratory for the last 13 years, and he received the Engineering Division early career award in 2011. He received SM and SB degrees in mechanical engineering from the Massachusetts Institute of Technology in 2001 and 1999, respectively; his contributions there included a hardware-in-the-loop simulation for an autonomous air-drop system as a Draper Fellow and the development of a touch-sensitive chest and a series-elastic actuated hand for COG at the MIT Artificial Intelligence Laboratory.
This work is sponsored by the Assistant Secretary of Defense for Research & Engineering under Air Force Contract #FA8721‐05‐C‐0002. Opinions, interpretations, conclusions and recommendations are those of the author and are not necessarily endorsed by the United States Government.
Presenter: Joachim Buhmann (Homepage)
Friday September 27, 2013 from 11:00am to 12:00pm
The digital revolution has created unprecedented opportunities in computing and communication, but it has also generated a data deluge with an urgent demand for new pattern recognition technology. Learning patterns in data requires extracting interesting, statistically significant regularities from (large) data sets, e.g., identifying connection patterns in the brain (connectomics) or detecting cancer cells in tissue microarrays and estimating their staining as a cancer severity score.
Joachim M. Buhmann leads the Machine Learning Laboratory in the Department of Computer Science at ETH Zurich. He has been a full professor of Information Science and Engineering there since October 2003. He studied physics at the Technical University of Munich and obtained his PhD in Theoretical Physics. As a postdoc and research assistant professor, he spent 1988-92 at the University of Southern California, Los Angeles, and the Lawrence Livermore National Laboratory. He held a professorship for applied computer science at the University of Bonn, Germany, from 1992 to 2003. His research interests span the areas of pattern recognition and data analysis, including machine learning, statistical learning theory, and information theory. Application areas of his research include image analysis, medical imaging, acoustic processing, and bioinformatics. Currently, he serves as president of the German Pattern Recognition Society.
Presenter: Aleix Martinez (Homepage)
Friday September 20, 2013 from 11:00am to 12:00pm
The Bayes criterion is generally regarded as the holy grail in classification because, for known distributions, it leads to the smallest possible classification error. Unfortunately, the Bayes classification boundary is generally nonlinear, and its associated error can only be calculated under unrealistic assumptions. In this talk, we will show how these obstacles can be readily and efficiently averted, yielding Bayes-optimal algorithms in machine learning, statistics, computer vision, and other areas. We will first derive Bayes-optimal solutions in Discriminant Analysis.
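To make the first claim concrete: when the class-conditional distributions are fully known, the Bayes rule simply assigns each point to the class with the highest posterior. The sketch below (hypothetical 1-D Gaussian parameters and equal priors, chosen purely for illustration, not from the talk) shows this in plain Python; with equal variances the resulting boundary is linear, at the midpoint of the two means.

```python
import math

# Hypothetical class-conditional Gaussians (equal priors assumed).
MU0, MU1, SIGMA = 0.0, 2.0, 1.0

def pdf(x, mu, sigma):
    """Gaussian density N(mu, sigma^2) evaluated at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def bayes_classify(x):
    """Bayes rule with equal priors: pick the class with the larger likelihood."""
    return 0 if pdf(x, MU0, SIGMA) >= pdf(x, MU1, SIGMA) else 1

# Equal variances make the Bayes boundary linear, at (MU0 + MU1) / 2 = 1.0.
print(bayes_classify(0.5))  # -> 0
print(bayes_classify(1.5))  # -> 1
```

With unequal variances or non-Gaussian classes the boundary becomes nonlinear, which is exactly the difficulty the abstract refers to.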
Aleix M. Martinez is an associate professor in the Department of Electrical and Computer Engineering at The Ohio State University (OSU), where he is the founder and director of the Computational Biology and Cognitive Science Lab. He is also affiliated with the Department of Biomedical Engineering and with the Center for Brain and Cognitive Science, where he is a member of the executive committee. Aleix has served as an associate editor of IEEE Transactions on Pattern Analysis and Machine Intelligence, Image and Vision Computing, and Computer Vision and Image Understanding, of which he currently serves as editor-in-chief. He has been an area chair for many top conferences and is currently a Program co-Chair for CVPR 2014.
Presenter: Augusto Loureiro da Costa (Homepage)
Friday August 23, 2013 from 12:00pm to 1:00pm
Alternate Location: Levine 512 (3330 Walnut Street)
This talk presents a Cognitive Embedded Model for Mobile Robots, along with experimental results from embedding this cognitive model in humanoid mobile robots. This cognitive model is a computational implementation of the Generic Cognitive Model for Autonomous Agents. First, this cognitive agent was implemented in a distributed multi-robot control system for the
He is currently a Visiting Scholar at the University of Pennsylvania, Department of Electrical and Systems Engineering, GRASP Laboratory, until August 2013. His permanent position is Associate Professor in the Department of Electrical Engineering at the Federal University of Bahia, Brazil. He received his Ph.D. degree in Electrical Engineering from the Federal University of Santa Catarina, Brazil, in 2001, during which he was a Visiting PhD Student at the University of Karlsruhe, Germany, under a CAPES-PROBRAL cooperation project. He coordinated the Special Committee on Artificial Intelligence of the Brazilian Computer Society for the 2008-2010 biennium, and he also coordinated the Computer Engineering undergraduate course from 2008 until 2010. He was the Head of the Electrical Engineering Department from 2010 to 2012. He published, as Editor, the book Lecture Notes in Artificial Intelligence vol. 5249. His research is in Robotics and Artificial Intelligence, with an emphasis on autonomous robots and multi-robot systems, acting on the following topics: multi-robot systems, cooperative robotics, and autonomous robots.
Presenter: James Rehg (Homepage)
Friday November 15, 2013 from 11:00am to 12:00pm
Advances in camera miniaturization and mobile computing have enabled the development of wearable camera systems which can capture both the user's view of the scene (the egocentric, or first-person, view) and their gaze behavior. In contrast to the established third-person video paradigm, the egocentric paradigm makes it possible to easily collect examples of naturally-occurring human behavior, such as activities of daily living, from a consistent
James M. Rehg (pronounced "ray") is a Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he is co-Director of the Computational Perception Lab (http://cpl.cc.gatech.ed) and is the Associate Director for Research in the Center for Robotics and Intelligent Machines (http://robotics.gatech.edu) (RIM@GT). He received his Ph.D. from CMU in 1995 and worked at the Cambridge Research Lab of DEC (and then Compaq) from 1995-2001, where he managed the computer vision research group. He received an NSF CAREER award in 2001 and a Raytheon Faculty Fellowship from Georgia Tech in 2005. He and his students have received a number of best paper awards, including best student paper awards at ICML 2005 and BMVC 2010. Dr. Rehg serves on the Editorial Board of the Intl. J. of Computer Vision, and he served as the General co-Chair for CVPR 2009. He has authored more than 100 peer-reviewed scientific papers and holds 23 issued US patents. His research interests include computer vision, medical imaging, robot perception, machine learning, and pattern recognition. Dr. Rehg is currently leading a multi-institution effort to develop the science and technology of Behavior Imaging: the capture and analysis of social and communicative behavior using multi-modal sensing, to support the study and treatment of developmental disorders such as autism. See www.cbs.gatech.edu for details.
Presenter: Aaron Ames (Homepage)
Friday November 8, 2013 from 11:00am to 12:00pm
Humans have the ability to walk with deceptive ease, navigating everything from daily environments to uneven and uncertain terrain with efficiency and robustness. Despite the simplicity with which humans appear to ambulate, locomotion is inherently complex due to highly nonlinear dynamics and forcing. Yet there is evidence to suggest that humans utilize a hierarchical subdivision among cortical control, central pattern generators in the spinal column, and proprioceptive sensory feedback.
Dr. Aaron D. Ames is an Assistant Professor in Mechanical Engineering at Texas A&M University with a joint appointment in Electrical & Computer Engineering. His research interests center on robotics, nonlinear control, hybrid systems and cyber-physical systems, with special emphasis on foundational theory and experimental realization on bipedal robots. Dr. Ames received a BS in Mechanical Engineering and a BA in Mathematics from the University of St. Thomas in 2001, and he received an MA in Mathematics and a PhD in Electrical Engineering and Computer Sciences from UC Berkeley in 2006. At UC Berkeley, he was the recipient of the 2005 Leon O. Chua Award for achievement in nonlinear science and the 2006 Bernard Friedman Memorial Prize in Applied Mathematics. Dr. Ames served as a Postdoctoral Scholar in the Control and Dynamical Systems Department at the California Institute of Technology from 2006 to 2008. In 2010 he received the NSF CAREER award for his research on bipedal robotic walking and its applications to prosthetic devices. Dr. Ames is the head of the A&M Bipedal Experimental Robotics (AMBER) Lab that designs, builds and tests novel bipedal robots with the goal of achieving human-like bipedal robotic walking.
Presenter: Franz Hover (Homepage)
Friday October 25, 2013 from 11:00am to 12:00pm
Pursuit is a general class of perception and control problems defined by critical space and time scales: a follower that cannot maintain adequate real-time performance will simply be unable to keep up. Autonomous pursuit missions in the ocean include tracking of a marine vehicle or animal, and monitoring a large-scale ocean process like an oil plume or chemical front. The opportunity for multi-vehicle sensing systems to contribute is clear, but wireless communication has been a perennial bottleneck that prevents truly dynamic operation. Network-based control,
Franz Hover was a consultant to industry and a Principal Research Engineer at MIT before joining the MechE faculty in 2007. His research has led to commercial development of the HAUV platform for autonomous ship hull inspection, advances in computational tools for power systems, and innovations in subsea flow control technology. Current work focuses on the design and implementation of multi-agent ocean systems. Professor Hover has authored or co-authored over one hundred refereed papers. He has also supervised more than 150 undergraduate research projects, and served as advisor to the MIT Marine Robotics Team since 2004.
Presenter: Greg Gerling (Homepage)
Friday October 4, 2013 from 12:00pm to 1:30pm
Alternate Location: IRCS Conference Room (3401 Walnut Street, 400A)
In this talk, I will describe how our lab's collaborative work in understanding the neurophysiological basis of touch (skin, receptors, and neural coding; psychophysical limits) informs the applied design of neural sensors and human-machine interfaces, including neural prosthetics and training simulators in medical environments.
Gregory J. Gerling is an Assistant Professor in the Department of Systems and Information Engineering at the University of Virginia in Charlottesville. He received his Ph.D. degree from the Department of Mechanical and Industrial Engineering at The University of Iowa in the summer of 2005. Before returning to graduate school, he had industry experience in software engineering at Motorola, NASA Ames Research Center, and Rockwell Collins. His research interests relate broadly to the fields of haptics, computational neuroscience, human factors and ergonomics, biomechanics, and human–machine interaction. The application of his research seeks to advance neural prosthetics, aid people whose sense of touch is deteriorating, and improve human–robot interfaces, particularly in medicine.
Presenter: Manuela Veloso (Homepage)
Friday September 13, 2013 from 11:00am to 12:00pm
We envision ubiquitous autonomous mobile robots that coexist and interact with humans while performing tasks. Such robots are still far from common, as our environments offer great challenges to robust autonomous robot perception, cognition, and action. In this talk, I present symbiotic robot autonomy, in which robots are robustly autonomous in their localization and navigation, and also handle their limitations by proactively asking for help from humans, accessing the web for missing knowledge, and coordinating with other robots.
Manuela M. Veloso is the Herbert A. Simon Professor in the Computer Science Department at Carnegie Mellon University. Her research is in Artificial Intelligence and Robotics. She founded and directs the CORAL research laboratory, for the study of multiagent systems where agents Collaborate, Observe, Reason, Act, and Learn, www.cs.cmu.edu/~coral. Professor Veloso is an IEEE Fellow, AAAS Fellow, and AAAI Fellow. She is the current President of AAAI and the past President of RoboCup. She received the 2009 ACM/SIGART Autonomous Agents Research Award for her contributions to agents in uncertain and dynamic environments, including distributed robot localization and world modeling, strategy selection in multiagent systems in the presence of adversaries, and robot learning from demonstration. Professor Veloso and her students have worked with a variety of autonomous robots, for robot soccer, education, and service robots. See www.cs.cmu.edu/~mmv for further information, including publications.