Presenter: Tadayoshi Aoyama (hosted by CJ Taylor)
Friday June 10, 2011 from 2:00pm to 3:00pm
*Alternate Location: Levine 512 (3330 Walnut Street)*
First, the concept of a "multi-locomotion robot," a robot capable of
multiple types of locomotion, is introduced. The robot is designed to
achieve bipedal walking, quadrupedal walking, and brachiation,
mimicking the locomotion of a gorilla. It thereby achieves higher
mobility by selecting the locomotion type appropriate to its
environment and task. I will also show experimental videos of the
motions realized so far.
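As a hedged illustration of the selection idea (not the actual controller of the robot described in the talk), a mode selector might map simple environment features to a locomotion type; the feature names and thresholds below are assumptions for illustration only.

```python
# Illustrative sketch: choosing a locomotion mode from simple environment
# features, in the spirit of the multi-locomotion concept above. The features
# and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Terrain:
    ceiling_bars: bool   # overhead bars available (brachiation possible)
    slope_deg: float     # ground slope in degrees
    clearance_m: float   # free height above the ground in meters

def select_locomotion(t: Terrain) -> str:
    """Pick a mode by a simple priority: brachiate if bars exist,
    quadruped walk on steep or low-clearance terrain, else biped walk."""
    if t.ceiling_bars:
        return "brachiation"
    if t.slope_deg > 20 or t.clearance_m < 1.0:
        return "quadruped"
    return "biped"

print(select_locomotion(Terrain(False, 5.0, 2.0)))  # biped
```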
Presenter: Silvia Ferrari (Homepage)
Friday December 2, 2011 from 11:00am to 12:00pm
Unmanned ground, aerial, and underwater vehicles equipped
with on-board wireless sensors are becoming crucial to both civilian and
military applications because of their ability to replace or assist humans in
carrying out dangerous yet vital missions.
As they are often required to operate in unstructured and uncertain
environments, these mobile sensor networks must be adaptive and reconfigurable,
and decide future actions intelligently based on their sensor measurements.
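As one hedged illustration of this kind of measurement-driven decision making, a mobile sensor might greedily choose its next position to cover the largest share of a target belief. The rule and all names below are illustrative assumptions, not Prof. Ferrari's algorithms.

```python
# Toy greedy rule: move the sensor to the candidate position whose sensing
# disk covers the most probability mass of a discrete target belief.
def greedy_next_position(belief, candidates, sensing_radius):
    """belief: dict {(x, y): prob}; candidates: list of (x, y) positions.
    Returns the candidate whose sensing disk covers the most belief mass."""
    def coverage(pos):
        px, py = pos
        return sum(p for (x, y), p in belief.items()
                   if (x - px) ** 2 + (y - py) ** 2 <= sensing_radius ** 2)
    return max(candidates, key=coverage)

belief = {(0, 0): 0.1, (3, 3): 0.6, (6, 0): 0.3}
print(greedy_next_position(belief, [(0, 0), (3, 3), (6, 0)], 1.5))  # (3, 3)
```

A real planner would look further ahead than this myopic step, e.g. via the approximate dynamic programming methods mentioned in the speaker's bio.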
Silvia Ferrari is the Paul Ruffin Scarborough Associate
Professor of Engineering at Duke University, where she directs the Laboratory
for Intelligent Systems and Controls (LISC).
Her principal research interests include robust adaptive control of
aircraft, learning and approximate dynamic programming, and optimal control of
mobile sensor networks. She received the
B.S. degree from Embry-Riddle Aeronautical University
and the M.A. and Ph.D. degrees from Princeton
University. She is a senior member of the IEEE, and a
member of ASME, SPIE, and AIAA. She is
the recipient of the ONR Young Investigator Award (2004), the NSF CAREER Award
(2005), and the Presidential Early Career Award for Scientists and Engineers
(PECASE) (2006).
Presenter: Alex Stoytchev (Homepage)
Friday October 14, 2011 from 11:00am to 12:00pm
Developmental robotics is an emerging field that blurs the boundaries between robotics,
artificial intelligence, developmental psychology, and philosophy. The basic
research hypothesis of developmental robotics is that truly intelligent robot
behavior cannot be achieved in the absence of a prolonged interaction with a
physical or a social environment. In other words, robots must undergo a
developmental period similar to that of humans and animals.
Alex Stoytchev is an Assistant Professor
of Electrical and Computer Engineering and the Director of the Developmental
Robotics Laboratory at Iowa State University. He received his MS and PhD
degrees in computer science from the Georgia Institute of Technology in 2001
and 2007, respectively. His research interests are in the areas of
developmental robotics, autonomous robotics, and computational perception.
Presenter: David Brainard (Homepage)
Friday October 7, 2011 from 11:00am to 12:00pm
The human visual system shares with most digital cameras the design
feature that color information is acquired via spatially interleaved
sensors with different spectral properties. That is, the human retina
contains three distinct spectral classes of cone photoreceptors, the
L-, M-, and S-cones, and cones of these three classes are spatially
interleaved in the retina. Similarly, most digital cameras employ a
design with interleaved red, green, and blue sensors. In each case,
generating a full color image requires application of a demosaicing
algorithm that uses the responses of neighboring sensors to estimate
the missing color values at each location.
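The interpolation step described above can be sketched with the textbook bilinear baseline on an RGGB Bayer mosaic. The pattern choice and the normalized-convolution scheme are illustrative assumptions; real camera pipelines (and, per the talk, the visual system) use far more sophisticated, edge-aware inference.

```python
# Minimal bilinear demosaicing for an RGGB Bayer mosaic: each channel keeps
# its known samples and fills missing pixels from weighted neighbors.
import numpy as np

def conv2_same(img, kernel):
    """3x3 'same' convolution with zero padding (kernel is symmetric here)."""
    p = np.pad(img, 1)
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def demosaic_bilinear(mosaic):
    """mosaic: 2D array sampled on an RGGB pattern. Returns an HxWx3 image."""
    h, w = mosaic.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r_mask = ((yy % 2 == 0) & (xx % 2 == 0)).astype(float)
    b_mask = ((yy % 2 == 1) & (xx % 2 == 1)).astype(float)
    g_mask = 1.0 - r_mask - b_mask
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        # Normalized convolution averages the known samples of each channel;
        # known pixels are passed through unchanged.
        est = conv2_same(mosaic * mask, k) / np.maximum(conv2_same(mask, k), 1e-9)
        rgb[:, :, c] = np.where(mask > 0, mosaic, est)
    return rgb
```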
David Brainard received his AB in physics from Harvard University
(1982) and MS (electrical engineering) and PhD (psychology) from
Stanford University in 1989. He is currently Professor of Psychology
at the University of Pennsylvania and his research focuses on human
color vision and color image processing. He is a fellow of the Optical
Society of America and the Association for Psychological Science.
Presenter: Srikumar Ramalingam (Homepage)
Friday September 30, 2011 from 11:00am to 12:00pm
This seminar will focus on localization in GPS-challenged urban
environments. In our experimental setup, a fisheye camera is used
to capture images of the immediate skyline, which generally
serves as a fingerprint for a specific location in a city. We
estimate the global position by matching skylines extracted from
images to skyline segments from coarse 3D city models.
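A toy version of the matching idea makes it concrete: represent a skyline as elevation angle sampled over azimuth bins, and score a query against skylines predicted from candidate locations, allowing a circular shift for the unknown camera heading. This is an illustrative simplification with invented names and an L1 score, not the speaker's actual method.

```python
# Toy skyline matching: compare 1D azimuth-elevation profiles under an
# unknown heading (circular shift), then pick the best-matching location.
import numpy as np

def skyline_distance(query, candidate):
    """query, candidate: 1D arrays of elevation over N azimuth bins.
    Returns (best mean-L1 distance, best heading shift in bins)."""
    best = (float("inf"), 0)
    for s in range(len(query)):
        d = np.abs(np.roll(candidate, s) - query).mean()
        if d < best[0]:
            best = (d, s)
    return best

def localize(query, model_skylines):
    """model_skylines: dict {location_id: profile}. Returns best location id."""
    return min(model_skylines,
               key=lambda loc: skyline_distance(query, model_skylines[loc])[0])
```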
Srikumar Ramalingam is a Research
Scientist at Mitsubishi
Electric Research Laboratories (MERL). He received his Ph.D. from INRIA
(France) in 2007, funded by a Marie Curie Fellowship from the European
Union. His doctoral thesis on generic imaging models received the INPG
best thesis prize and an honorable mention for the thesis prize of
AFRIF, the French Association for Pattern Recognition. His research
interests include non-conventional camera models, discrete optimization,
localization, and robotics applications.
Presenter: Russell Epstein (Homepage)
Friday September 23, 2011 from 11:00am to 12:00pm
Spatial navigation is the ability to get from point A to point B in large-scale space.
Humans and animals use a variety of strategies to solve this problem. One such
strategy is landmark-based wayfinding, which is the use of fixed landmarks to
determine one’s location and orientation in the world.
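The geometric core of landmark-based wayfinding can be made concrete with a minimal triangulation sketch: assuming an observer can measure world-frame bearings to two known landmarks, its 2D position follows from intersecting the two bearing lines. This is an illustrative idealization, not a model of the neural mechanisms discussed in the talk.

```python
# Recover a 2D position from absolute bearings to two known landmarks.
# Assumes the two bearings are not parallel (otherwise the system is singular).
import numpy as np

def locate(l1, l2, bearing1, bearing2):
    """l1, l2: landmark positions (2-vectors). bearing1/2: world-frame angles
    (radians) from the observer to each landmark. Returns observer position."""
    d1 = np.array([np.cos(bearing1), np.sin(bearing1)])
    d2 = np.array([np.cos(bearing2), np.sin(bearing2)])
    # Observer x satisfies x + t1*d1 = l1 and x + t2*d2 = l2 for ranges t1, t2.
    # Subtracting: t1*d1 - t2*d2 = l1 - l2; solve for (t1, t2), then recover x.
    A = np.column_stack([d1, -d2])
    t1, t2 = np.linalg.solve(A, np.asarray(l1, float) - np.asarray(l2, float))
    return np.asarray(l1, float) - t1 * d1
```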
Russell Epstein is Associate Professor of Psychology at Penn. He
is a member of the Center for Cognitive Neuroscience, the Institute for
Neurological Sciences, and the Institute for Research in Cognitive Science. He
received his PhD from Harvard in Computer Vision and did postdoctoral work in
cognitive neuroscience at MIT and Cambridge University before joining the Penn
faculty in 2002. His research focuses on the neural systems mediating visual
scene recognition and spatial navigation in humans.
Presenter: R. Andrew Hicks (Homepage)
Friday September 16, 2011 from 11:00am to 12:00pm
The first photograph was created in 1827 by Joseph Nicéphore Niépce. In 1828, William Rowan Hamilton's founding papers on geometric optics began to appear. This seems a remarkable coincidence, and one would expect the two sibling disciplines, photography and geometric optics, to have each contributed to the growth of the other. But this never happened. Optical design in the 19th century was largely empirical, and today design is mostly performed by optimizing a cost function that is defined via ray tracing.
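The modern workflow of optimizing a ray-traced cost function can be shown in miniature: trace parallel rays off a concave spherical mirror and scan for the image plane that minimizes the RMS spot size. The geometry and the brute-force scan are illustrative assumptions, not anything from the speaker's own designs.

```python
# Toy ray-traced merit function: RMS spot size of a spherical mirror,
# minimized over the position of the image plane.
import numpy as np

def spot_rms(zp, R=100.0, heights=np.linspace(1.0, 15.0, 8)):
    """RMS transverse spot radius at the plane z = zp for rays parallel to
    the axis, reflecting off a mirror with vertex at z = 0, center at z = R."""
    spread = []
    for h in heights:
        z = R - np.sqrt(R * R - h * h)      # mirror sag at height h
        n = np.array([-h, R - z]) / R       # unit normal toward the center
        d = np.array([0.0, -1.0])           # incoming ray direction (x, z)
        d = d - 2 * np.dot(d, n) * n        # specular reflection
        t = (zp - z) / d[1]                 # propagate to the plane z = zp
        spread.append(h + t * d[0])
    return float(np.sqrt(np.mean(np.square(spread))))

# Crude 1-D optimization: scan candidate image planes near the paraxial
# focus z = R/2 = 50; spherical aberration pulls the optimum slightly closer.
planes = np.linspace(45.0, 52.0, 141)
best = min(planes, key=spot_rms)
```

Real design codes optimize many surface parameters at once with damped least squares, but the structure (ray trace inside a cost function, optimizer outside) is the same.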
R. Andrew Hicks graduated from Queens College CUNY with a BA in mathematics in 1988. He received his Ph.D. in Mathematics from the University of Pennsylvania in 1995, in the field of differential geometry. He was enrolled in the CIS Masters program at Penn from 1995-96. From 1996-99 he was a postdoc at the GRASP Laboratory of UPenn under Ruzena Bajcsy. He is currently a professor of mathematics at Drexel University. His research interests include optical design, numerical analysis, and computing.
Presenter: Russell H. Taylor (Homepage)
Friday April 20, 2012 from 11:00am to 12:00pm
This talk will discuss ongoing NIH-funded research at Johns Hopkins University and
Carnegie-Mellon University to develop technology and systems addressing fundamental
limitations in current microsurgical practice, using vitreoretinal surgery as
our focus. Vitreoretinal surgery is the
most technically demanding ophthalmologic discipline and addresses prevalent
sight-threatening conditions in areas of growing need. At the center of our planned approach is a
“surgical workstation” system interfaced to a stereo visualization subsystem
and a family of novel sensors and instruments.
Russell H. Taylor received his Ph.D. in
Computer Science from Stanford in 1976.
He joined IBM Research in 1976, where he developed the AML robot
language and managed the Automation Technology Department and (later) the
Computer-Assisted Surgery Group before moving in 1995 to Johns Hopkins, where
he is a Professor of Computer Science with joint appointments in Mechanical
Engineering, Radiology, and Surgery and is also Director of the NSF Engineering
Research Center for Computer-Integrated Surgical Systems and Technology. He is the author of approximately 275
refereed publications, a Fellow of the IEEE, of the AIMBE, of the MICCAI
Society, and of the Engineering School of the University of Tokyo. He is also a recipient of the IEEE Robotics
Pioneer Award, of the MICCAI Society Enduring Impact Award, and of the Maurice
Müller award for excellence in computer-assisted orthopaedic surgery.
Monday May 23, 2011
Menglong Zhu at Penn has given PR2 a fantastic new skill: the ability to read.
Using the literate_pr2
software he wrote, PR2 can drive around and read aloud the signs that it sees.
Whether it's writing on a whiteboard, nameplates on a door, or posters
advertising events, the ability to recognize text in the real world is an
important skill for robots.
Wednesday May 11, 2011
Joe Romano is a GRASP PhD student under Katherine Kuchenbecker. His PR2_props
code was demonstrated live at the Google I/O 2011 Developer Conference,
May 10th and 11th, both on stage and as a demo. Hundreds of Google I/O
attendees got to experience the joy of fist-bumping a robot (plus the
high-five at the start of the I/O talk). View the YouTube video.