Presenter: David Balduzzi (Homepage)

Event Dates:
  Thursday January 22, 2015 from 10:30am to 11:30am

*Alternate Location: Levine 307*

The main problem of distributed learning is credit assignment, which was solved in the 1980s with the invention of error backpropagation. Thirty years later, backprop, along with a few more recent tricks, is the major workhorse underlying machine learning and remains state-of-the-art for supervised learning. However, weight updates under backprop depend on recursive computations that require distinct output and error signals -- features not shared by biological neurons, and perhaps unnecessary.


Presenter's Biography:

David Balduzzi is a Senior Lecturer in Mathematics and Statistics at Victoria University Wellington. He received a PhD in algebraic geometry from the University of Chicago, after which he worked on computational neuroscience at UW-Madison and machine learning at the Max Planck Institute for Intelligent Systems and ETH Zürich.

Presenter: Shree Nayar (Homepage)

Event Dates:
  Friday April 3, 2015 from 11:00am to 12:00pm

Computational imaging uses new optics to capture a coded image, and an appropriate algorithm to decode the captured image. This approach of manipulating images before they are recorded and processing recorded images before they are presented has three key benefits. First, it enables us to implement imaging functionalities that would be difficult, if not impossible, to achieve using traditional imaging. Second, it can be used to significantly reduce the hardware complexity of an imaging system.

Presenter's Biography:

Shree K. Nayar is the T. C. Chang Professor of Computer Science at Columbia University. He heads the Columbia Vision Laboratory (CAVE), which develops advanced computer vision systems. His research is focused on three areas: the creation of novel cameras that provide new forms of visual information, the design of physics-based models for vision and graphics, and the development of algorithms for understanding scenes from images. His work is motivated by applications in the fields of digital imaging, computer graphics, robotics, and human-computer interfaces.

Nayar received his PhD degree in Electrical and Computer Engineering from the Robotics Institute at Carnegie Mellon University. For his research and teaching he has received several honors including the David Marr Prize (1990 and 1995), the David and Lucile Packard Fellowship (1992), the National Young Investigator Award (1993), the NTT Distinguished Scientific Achievement Award (1994), the Keck Foundation Award for Excellence in Teaching (1995), the Columbia Great Teacher Award (2006), and the Carnegie Mellon Alumni Achievement Award (2009). For his contributions to computer vision and computational imaging, he was elected to the National Academy of Engineering in 2008, the American Academy of Arts and Sciences in 2011, and the National Academy of Inventors in 2014.

Presenter: Katie Byl (Homepage)

Event Dates:
  Friday March 27, 2015 from 11:00am to 12:00pm


Presenter's Biography:

Katie Byl received her B.S., M.S., and Ph.D. degrees in mechanical engineering from MIT. Her research is in dynamic systems and control, with particular interest in modeling and control techniques to deal with the challenges of underactuation, stochasticity, and dimensionality reduction that characterize bio-inspired robot locomotion and manipulation in real-world environments. She is the recipient of an NSF CAREER award (2013), a Hellman Foundation Fellowship (2012), and an Alfred P. Sloan Research Fellowship in Neuroscience (2011). Katie has worked on a wide range of research topics in the control of dynamic systems, including magnetic bearing control, flapping-wing microrobotics, piezoelectric noise cancellation for aircraft, and vibration isolation for gravity wave detection, and she was once a professional gambler on the now-infamous MIT Blackjack Team.

 

Presenter: Sanja Fidler (Homepage)

Event Dates:
  Friday March 20, 2015 from 11:00am to 12:00pm

A successful autonomous system needs not only to understand the visual world but also to communicate its understanding to humans. To make this possible, language can serve as a natural link between high-level semantic concepts and low-level visual perception. In this talk, I'll present our recent work on 3D scene understanding, and show how natural sentential descriptions can be exploited to improve 3D visual parsing and, vice versa, how image information can help resolve ambiguities in text.

Presenter's Biography:

Sanja Fidler is an Assistant Professor in the Department of Computer Science, University of Toronto. Previously, she was a Research Assistant Professor at TTI-Chicago, a philanthropically endowed academic institute located on the campus of the University of Chicago. She completed her PhD in computer science at the University of Ljubljana in 2010 and was a postdoctoral fellow at the University of Toronto during 2011-2012. In 2010 she visited UC Berkeley as a visiting student. She has served on the program committees of numerous international conferences and has received three outstanding reviewer awards (ECCV 2008, CVPR 2012, ECCV 2012). She also served as presentations chair at CVPR 2010 and publication chair of CVPR 2013, 2014, and 2015. Her main research interests are large-scale object detection, 3D scene understanding, and combining language and vision.

Presenter: Noah Cowan (Homepage)

Event Dates:
  Friday April 17, 2015 from 11:00am to 12:00pm

Control systems engineering commonly relies on the "separation principle", which allows designers to independently design state observers and controllers. Biological control systems, however, routinely violate the requirements for separability. Indeed, animals often rely on a strategy known as "active sensing" in which organisms use their own movements to alter spatiotemporal patterns of sensory information to improve task-level control performance.

Presenter's Biography:

Noah Cowan earned his Ph.D. in Electrical Engineering from the University of Michigan, Ann Arbor, in 2001. Following his Ph.D., he was a Postdoctoral Fellow in Integrative Biology at the University of California, Berkeley for two years. In 2003, he joined Johns Hopkins University, where he is now an associate professor of Mechanical Engineering and directs the LIMBS Laboratory. Noah's research is devoted to understanding navigation and control in machines and animals. This research program has been recognized by a Presidential Early Career Award in Science and Engineering (PECASE) and a James S. McDonnell Complex Systems Scholar award.

Event Dates:
  Friday February 27, 2015 at 9:00am

GRASP Lab - Industry Day February 27, 2015

Krishna P. Singh Center
3205 Walnut St., Philadelphia, PA
9:00 am – 5:00 pm

Agenda

09:00 – 10:00             Registration and Continental Breakfast

10:00 – 11:00             Presentations (Glandt Forum, 3rd Floor)

Presenter: Ayanna Howard (Homepage)

Event Dates:
  Friday February 20, 2015 from 11:00am to 12:00pm

Robots for therapy applications can increase the quality of life for children who experience disabling circumstances, for example by becoming therapeutic playmates for children with neurological disorders. There are numerous challenges, though, that must be addressed: determining the roles and responsibilities of clinician, child, and robot; developing interfaces that allow clinicians to interact with robots without extensive training; and developing methods that allow the robot to learn from its child counterpart.

Presenter's Biography:

Ayanna Howard is the Motorola Foundation Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. She received her B.S. in Engineering from Brown University, her M.S.E.E. from the University of Southern California, and her Ph.D. in Electrical Engineering from the University of Southern California in 1999. Her research is centered around the concept of humanized intelligence, the process of embedding human cognitive capability into the control path of autonomous systems. This work, which addresses issues of autonomous control as well as aspects of interaction with humans and the surrounding environment, has resulted in over 180 peer-reviewed publications across a number of projects, from scientific rover navigation in glacier environments to assistive robots for the home. Her accomplishments have been highlighted through a number of awards and articles, including features in USA Today, Upscale, and TIME Magazine, as well as being named an MIT Technology Review top young innovator of 2003, recognized as NSBE Educator of the Year in 2009, and receiving the Georgia Tech Outstanding Interdisciplinary Activities Award in 2013. In 2013, she also founded Zyrobotics, which is currently licensing technology derived from her research lab and has released its first suite of educational technology products. From 1993 to 2005, Dr. Howard was at NASA's Jet Propulsion Laboratory, California Institute of Technology. She joined Georgia Tech in July 2005 and founded the Human-Automation Systems Lab. She is currently the Associate Director of Research for the Georgia Tech Institute for Robotics and Intelligent Machines; prior to that, she served as Chair of the multidisciplinary Robotics Ph.D. program at Georgia Tech from 2010 to 2013.

 

Presenter: Soon-Jo Chung (Homepage)

Event Dates:
  Friday February 13, 2015 from 11:00am to 12:00pm

The rapid and ubiquitous proliferation of reliable rotorcraft platforms such as quadcopters has resulted in a boom in aerial robotics. However, rotorcraft suffer from safety issues, high noise levels, and low efficiency in forward flight.

Presenter's Biography:

Prof. Soon-Jo Chung received the S.M. degree in Aeronautics and Astronautics and the Sc.D. degree in Estimation and Control from MIT in 2002 and 2007, respectively. He received the B.S. degree from KAIST in 2000 (school class rank 1/120). He is currently an Assistant Professor in the Department of Aerospace Engineering and the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign. He is also a Beckman Fellow of the U. of Illinois Center for Advanced Study (2014-2015). His research areas include nonlinear control and estimation theory and optimal/robust flight controls, with applications to aerial robotics, distributed spacecraft systems, and computer vision-based navigation. He is a recipient of the 2014 UIUC Engineering Dean's Award for Excellence in Research, the AFOSR Young Investigator Award, the NSF CAREER Award, and two best conference paper awards from IEEE and AIAA. He has also received multiple teaching recognitions, including the UIUC List of Teachers Ranked as Excellent, and served as instructor/advisor for the 1st-place team in the AIAA Undergraduate Team Space Design Competition. Prof. Chung has been a Member of the Guidance & Control Analysis Group at the Jet Propulsion Laboratory as a JPL Summer Faculty Fellow/Affiliate, working on distributed small satellites during the summers of 2010-2014. http://publish.illinois.edu/aerospacerobotics/

Presenter: Dana Ballard (Homepage)

Event Dates:
  Friday February 6, 2015 from 11:00am to 12:00pm

The human motor control system is an extraordinarily complex system that consists of layers of neural control systems that address different demands of motive behavior. A primary distinction between these systems can be made on the basis of time. Systems in the forebrain operate on the order of

Presenter's Biography:

Dr. Dana Ballard is currently a Professor in Computer Science at the University of Texas at Austin. He received his PhD in 1974 from the University of California, Irvine. Dr. Ballard's main research interest is in computational theories of the brain, with emphasis on human vision. In 1985, Chris Brown and Dr. Ballard led a team that designed and built a high-speed binocular camera control system capable of simulating human eye movements. The system was mounted on a robotic arm that allowed it to move at one meter per second in a two-meter-radius workspace. This system has led to an increased understanding of the role of behavior in vision. The theoretical aspects of that system were summarized in a paper, "Animate Vision," which received the Best Paper Award at the 1989 International Joint Conference on Artificial Intelligence. Currently, Dr. Ballard is interested in pursuing this research by using model humans in virtual reality environments. In addition, he is interested in models of the brain that relate to detailed neural codes. A position paper on this work appeared in the Behavioral and Brain Sciences.

Presenter: Hanumant Singh (Homepage)

Event Dates:
  Friday January 30, 2015 from 11:00am to 12:00pm

The Arctic and Antarctic remain among the least explored parts of the world's oceans. In this talk we look at efforts over the last decade to explore under-ice areas that have traditionally been difficult to access. The focus of the talk will be on the robots, the role of communications over low-bandwidth acoustic links, and navigation and mapping methodologies, all within the context of real data collected on several expeditions to the Arctic and Antarctic.

Presenter's Biography:

Hanumant Singh graduated from the MIT-WHOI Joint Program in 1995, after which he joined the staff at the Woods Hole Oceanographic Institution. His interests lie at the intersection of imaging and robotics underwater. Over the course of his career he has participated in more than 50 research expeditions all over the world in support of marine archaeology, marine geology and geophysics, marine chemistry, coral reef ecology, fisheries, and polar studies.