Presenter: Luca Carlone (Homepage)
Thursday April 9, 2015 from 11:00am to 12:00pm
*Alternate Location: Levine 307*
In many application fields (robotics, computer vision, sensor networks, etc.) we find inference problems in which the variables live on the nodes of a graph that has a spatial embedding. Often the variables to be estimated are elements of Lie groups, and the available measurements are relative measurements corresponding to the edges of the graph.
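The setup the abstract describes — unknowns on the nodes, relative measurements on the edges — can be made concrete with a toy example. The sketch below (a hypothetical illustration, not Carlone's method) uses SO(2), where the Lie group structure reduces to angle arithmetic, and solves a three-node cycle with slightly inconsistent relative measurements by linear least squares:

```python
import numpy as np

def solve_pose_graph(n_nodes, edges):
    """Least-squares estimate of node orientations (angles, i.e. so(2))
    from relative measurements on graph edges. Node 0 is fixed at 0 to
    anchor the otherwise unobservable global rotation."""
    rows, rhs = [], []
    for (i, j, meas) in edges:          # measurement model: theta_j - theta_i = meas
        row = np.zeros(n_nodes)
        row[i], row[j] = -1.0, 1.0
        rows.append(row)
        rhs.append(meas)
    anchor = np.zeros(n_nodes)          # anchor constraint: theta_0 = 0
    anchor[0] = 1.0
    rows.append(anchor)
    rhs.append(0.0)
    A, b = np.vstack(rows), np.array(rhs)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta

# Three nodes in a cycle; the loop closes with a small error (-0.02),
# which least squares spreads evenly around the loop.
edges = [(0, 1, 0.50), (1, 2, 0.30), (2, 0, -0.82)]
theta = solve_pose_graph(3, edges)
```

With genuine Lie groups such as SO(3) or SE(3) the same idea applies, but the problem becomes nonlinear and is typically solved by iterating this linear step in the tangent space.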
Luca Carlone is a postdoctoral fellow at the College of Computing at Georgia Tech. He was a visiting researcher at the University of California Santa Barbara (2011) and at the University of Zaragoza (2010). He received his M.Sc. degrees (summa cum laude) in Mechatronic Engineering from Politecnico di Torino and in Automation Engineering from Politecnico di Milano in 2008, and a Ph.D. from Politecnico di Torino in 2012. He is broadly interested in robotic perception, estimation over graphs, and numerical optimization.
Presenter: Sangbae Kim (Homepage)
Friday March 20, 2015 from 9:30am to 10:30am
*Alternate Location: Towne 337*
Realizing animals’ magnificent dynamic movements in robots is the next big challenge in many future robot applications. In contrast to manufacturing, the main task for conventional robots, mobile robots’ tasks, including disaster response, often involve exploring unexpected areas and performing physical work in dangerous environments. The process of ‘principle extraction’ from biology is a critical step toward the practical adoption of nature’s designs.
Prof. Sangbae Kim is the director of the Biomimetic Robotics Laboratory and an Associate Professor of Mechanical Engineering at MIT. His research focuses on bio-inspired robotic platform design by extracting principles from complex biological systems. Kim’s achievements in bio-inspired robot development include the world’s first directional adhesive, inspired by gecko lizards, and a climbing robot, Stickybot, that uses the directional adhesives to climb smooth surfaces and was featured among TIME’s best inventions of 2006. The MIT Cheetah achieves stable outdoor running at an efficiency comparable to that of animals, employing biomechanical principles from studies of the best runners in nature. This achievement was covered by more than 200 articles. He is a recipient of the King-Sun Fu Memorial Best Transactions on Robotics Paper Award (2008), a DARPA YFA (2013), and an NSF CAREER award (2014).
Tuesday March 3, 2015
Vijay Kumar Named Dean of Penn Engineering
Media Contact: Ron Ozio | firstname.lastname@example.org | 215-898-8658 | March 3, 2015
Tuesday February 24, 2015
This page can be viewed online at: http://www.upenn.edu/almanac/volumes/v61/n24/honors-other-things.html#lee
Honors & Other Things
Friday January 23, 2015
Presenter: David Balduzzi (Homepage)
*Alternate Location: Levine 307*
The main problem of distributed learning is credit assignment, which was solved in the 1980s with the invention of error backpropagation. Thirty years later, Backprop, along with a few more recent tricks, is the major workhorse underlying machine learning and remains state-of-the-art for supervised learning. However, weight updates under Backprop depend on recursive computations that require distinct output and error signals -- features not shared by biological neurons, and perhaps unnecessary.
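The recursive computation and the output/error distinction the abstract refers to can be seen in a few lines. The following minimal sketch (illustrative only; variable names are my own) implements Backprop for a tiny two-layer network: the output-layer error signal is propagated backward through the weights to produce the hidden-layer error signal, a quantity distinct from the neuron's forward output:

```python
import numpy as np

rng = np.random.default_rng(0)

# tiny two-layer network: x -> h = tanh(W1 x) -> y = W2 h
W1 = rng.normal(size=(3, 2))
W2 = rng.normal(size=(1, 3))

def forward(x):
    h = np.tanh(W1 @ x)
    return h, W2 @ h

def backprop(x, target):
    """Gradients of the squared error 0.5 * ||y - target||^2.
    The output-layer error signal delta2 is propagated recursively
    back through W2 to obtain the hidden-layer error signal delta1 --
    a signal separate from the neuron's output h."""
    h, y = forward(x)
    delta2 = y - target                     # output-layer error signal
    grad_W2 = np.outer(delta2, h)
    delta1 = (W2.T @ delta2) * (1 - h**2)   # recursively propagated error
    grad_W1 = np.outer(delta1, x)
    return grad_W1, grad_W2
```

A finite-difference check confirms these are the true gradients; the biological implausibility lies in delta1 needing to travel backward along the same weights W2 used in the forward pass.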
David Balduzzi is a Senior Lecturer in Mathematics and Statistics at Victoria University of Wellington. He received a PhD in algebraic geometry from the University of Chicago, after which he worked on computational neuroscience at UW-Madison and machine learning at the Max Planck Institute for Intelligent Systems and ETH Zürich.
Presenter: Shree Nayar (Homepage)
Computational imaging uses new optics to capture a coded image, and an appropriate algorithm to decode the captured image. This approach of manipulating images before they are recorded and processing recorded images before they are presented has three key benefits. First, it enables us to implement imaging functionalities that would be difficult, if not impossible, to achieve using traditional imaging. Second, it can be used to significantly reduce the hardware complexity of an imaging system.
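The capture-then-decode pipeline the abstract describes can be illustrated with a toy 1D example (a hypothetical sketch, not any of Nayar's actual systems): the "optics" convolve the scene with a known coding kernel, and the algorithm inverts that known code in the Fourier domain:

```python
import numpy as np

def encode(signal, kernel):
    """'Optical' stage: the sensor records the scene only after circular
    convolution with a known coding kernel (a stand-in for coded optics
    such as a coded aperture)."""
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, len(signal))))

def decode(coded, kernel, eps=1e-3):
    """Computational stage: invert the known code with a regularized
    inverse filter in the Fourier domain."""
    K = np.fft.fft(kernel, len(coded))
    return np.real(np.fft.ifft(np.fft.fft(coded) * np.conj(K) / (np.abs(K) ** 2 + eps)))

scene = np.zeros(64)
scene[20] = 1.0                       # a single point source
kernel = np.array([0.6, 0.3, 0.1])    # broadband code: no spectral zeros, easy to invert
coded = encode(scene, kernel)         # what the sensor records
recovered = decode(coded, kernel)     # what the algorithm reconstructs
```

The choice of code matters: a kernel whose spectrum has no zeros makes decoding well conditioned, which is precisely the kind of design freedom coded optics provide.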
Shree K. Nayar is the T. C. Chang Professor of Computer Science at Columbia University. He heads the Columbia Vision Laboratory (CAVE), which develops advanced computer vision systems. His research is focused on three areas: the creation of novel cameras that provide new forms of visual information, the design of physics-based models for vision and graphics, and the development of algorithms for understanding scenes from images. His work is motivated by applications in the fields of digital imaging, computer graphics, robotics, and human-computer interaction.
Presenter: Katie Byl (Homepage)
Katie Byl received her B.S., M.S., and Ph.D. degrees in mechanical engineering from MIT. Her research is in dynamic systems and control, with particular interest in modeling and control techniques to deal with the challenges of underactuation, stochasticity, and dimensionality reduction that characterize bio-inspired robot locomotion and manipulation in real-world environments. She is the recipient of an NSF CAREER award (2013), a Hellman Foundation Fellowship (2012), and an Alfred P. Sloan Research Fellowship in Neuroscience (2011). Katie has worked on a wide range of research topics in the control of dynamic systems, including magnetic bearing control, flapping-wing microrobotics, piezoelectric noise cancellation for aircraft, and vibration isolation for gravity wave detection, and she was once a professional gambler on the now-infamous MIT Blackjack Team.
Presenter: Sanja Fidler (Homepage)
A successful autonomous system needs to not only understand the visual world but also communicate its understanding to humans. To make this possible, language can serve as a natural link between high-level semantic concepts and low-level visual perception. In this talk, I'll present our recent work on 3D scene understanding, and show how natural sentential descriptions can be exploited to improve 3D visual parsing, and, vice versa, how image information can help resolve ambiguities in language.
Sanja Fidler is an Assistant Professor in the Department of Computer Science, University of Toronto. Previously she was a Research Assistant Professor at TTI-Chicago, a philanthropically endowed academic institute located on the campus of the University of Chicago. She completed her PhD in computer science at the University of Ljubljana in 2010 and was a postdoctoral fellow at the University of Toronto during 2011-2012. In 2010 she visited UC Berkeley as a visiting student. She has served on the program committees of numerous international conferences and has received three outstanding reviewer awards (ECCV 2008, CVPR 2012, ECCV 2012). She also served as presentations chair of CVPR 2010, and publications chair of CVPR 2013, 2014, and 2015. Her main research interests are large-scale object detection, 3D scene understanding, and combining language and vision.