Seminars From Year 2013


All seminars will be held in Wu & Chen Auditorium, Levine Hall (3330 Walnut Street) unless otherwise indicated.

  • Talk: Spring 2013 GRASP Seminar: Raquel Urtasun, Toyota Technological Institute at Chicago, "Efficient Algorithms for Semantic Scene Parsing"
    Date: Friday, February 1, 2013 - 11am to 12pm
    Presenters: Raquel Urtasun
    Developing autonomous systems that are able to assist humans in everyday tasks is one of the grand challenges in modern computer science. Notable examples are personal robotics for the elderly and people with disabilities, as well as autonomous driving systems which can help decrease fatalities caused by traffic accidents. In order to perform tasks such as navigation, recognition and manipulation of objects, these systems should be able to efficiently extract 3D knowledge of their environment. While a variety of novel sensors have been developed in the past few years, in this work we focus on the extraction of this knowledge from visual information alone. In this talk, I'll show how Markov random fields provide a great mathematical formalism to extract this knowledge. In particular, I'll focus on a few examples, namely 3D reconstruction, 3D layout estimation, 2D holistic parsing and object detection, and show representations and inference strategies that allow us to achieve state-of-the-art performance as well as several orders of magnitude speed-ups.
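The MRF formulation behind parsing tasks like these assigns each site (pixel, segment, etc.) a label by minimizing a sum of unary and pairwise costs. As a minimal illustration, here is a generic 1D Potts model solved with iterated conditional modes; this is a textbook sketch, not the speaker's representations or inference strategies:

```python
import numpy as np

def icm(unary, pairwise_weight, n_iters=10):
    """Iterated conditional modes for a 1D chain MRF with a Potts pairwise term.

    unary: (n_sites, n_labels) array of unary costs.
    Returns a labeling that locally minimizes
      E(y) = sum_i unary[i, y_i] + w * sum_i [y_i != y_{i+1}].
    """
    n, k = unary.shape
    labels = unary.argmin(axis=1)          # initialize from unary terms alone
    for _ in range(n_iters):
        for i in range(n):
            cost = unary[i].copy()
            for lab in range(k):           # add Potts penalties from neighbors
                if i > 0 and labels[i - 1] != lab:
                    cost[lab] += pairwise_weight
                if i < n - 1 and labels[i + 1] != lab:
                    cost[lab] += pairwise_weight
            labels[i] = cost.argmin()
    return labels
```

With a pairwise weight large enough, an isolated noisy label gets smoothed away by its neighbors, which is exactly the regularizing role the pairwise terms play in scene parsing.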
  • Talk: Special GRASP Seminar: Philippos Mordohai, Stevens Institute of Technology, "The Roles of Uncertainty in 3D Reconstruction"
    Date: Thursday, February 7, 2013 - 3pm to 4pm
    Presenters: Philippos Mordohai
    Alternate Location: Levine 512 (3330 Walnut Street)
    3D reconstruction from two or more images is one of the most well-studied problems in computer vision. Due to the inverse nature of the problem, the reconstructed models typically suffer from various errors. In this talk, I will distinguish between two types of uncertainty that can cause these errors, namely correspondence and geometric uncertainty. The former refers to the uncertainty in determining the correct match for a given pixel while the latter refers to the uncertainty in the coordinates of the reconstructed 3D point, assuming that correct correspondences have been established. Based on this analysis, I will present an approach for depth map fusion and a solution to the next-best-view problem in target localization that benefit from explicit uncertainty modeling.
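The geometric uncertainty described here has a simple closed form in the textbook two-view (stereo) model, where depth is z = f·b/d for focal length f, baseline b, and disparity d; first-order error propagation shows why depth error grows quadratically with depth. A sketch under those standard assumptions (not the speaker's exact formulation):

```python
def depth_and_sigma(disparity_px, sigma_d_px, focal_px, baseline_m):
    """Stereo depth z = f*b/d and its first-order uncertainty.

    Differentiating z with respect to disparity d gives
    dz/dd = -f*b/d**2 = -z**2/(f*b), so to first order
    sigma_z ~= (z**2 / (f*b)) * sigma_d: error grows quadratically with depth.
    """
    z = focal_px * baseline_m / disparity_px
    sigma_z = (z ** 2 / (focal_px * baseline_m)) * sigma_d_px
    return z, sigma_z
```

Halving the disparity doubles the depth but quadruples its standard deviation, which is why distant points dominate the geometric uncertainty in fused depth maps.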
  • Talk: Spring 2013 GRASP Seminar: Pedro Ortega, Max Planck Institute for Intelligent Systems and Biological Cybernetics, "Adaptive Coding of Actions and Observations"
    Date: Friday, February 8, 2013 - 11am to 12pm
    Presenters: Pedro A. Ortega
    The application of expected utility theory to construct adaptive agents is both computationally intractable and statistically questionable. To overcome these difficulties, agents need the ability to delay the choice of the optimal policy to a later stage when they have learned more about the environment. How should agents do this optimally? An information-theoretic answer to this question is given by the Bayesian control rule - the solution to the adaptive coding problem when there are not only observations but also actions. We review the central ideas behind the Bayesian control rule.
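A concrete way to see "delaying the choice of the optimal policy" is the Bernoulli bandit case, where sampling an environment hypothesis from the posterior and acting optimally for that sample (Thompson sampling, which is closely related to the Bayesian control rule in this setting) yields adaptive behavior without ever committing to a single model. A minimal sketch, purely illustrative:

```python
import numpy as np

def thompson_bandit(true_probs, n_rounds, rng=None):
    """Beta-Bernoulli Thompson sampling: each round, sample an environment
    hypothesis from the posterior and act optimally for that sample,
    so random beliefs induce random (but increasingly good) actions."""
    rng = np.random.default_rng(rng)
    k = len(true_probs)
    wins = np.ones(k)     # Beta(1, 1) uniform priors
    losses = np.ones(k)
    pulls = np.zeros(k, dtype=int)
    for _ in range(n_rounds):
        theta = rng.beta(wins, losses)   # one posterior sample per arm
        arm = int(theta.argmax())        # optimal action for the sample
        reward = rng.random() < true_probs[arm]
        wins[arm] += reward
        losses[arm] += 1 - reward
        pulls[arm] += 1
    return pulls
```

As evidence accumulates, the posterior concentrates and the sampled policies converge to the optimal one, which is the adaptive-coding behavior the abstract describes.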
  • Talk: Spring 2013 GRASP Seminar: Calin Belta, Boston University, "Formal Methods for Discrete-Time Linear Systems"
    Date: Friday, February 15, 2013 - 11am to 12pm
    Presenters: Calin Belta
    In control theory, “complex” models of physical processes, such as systems of differential equations, are usually checked against “simple” specifications, such as stability and set invariance. In formal methods, “rich” specifications, such as languages and formulae of temporal logics, are checked against “simple” models of software programs and digital circuits, such as finite transition graphs. With the development and integration of cyber physical and safety critical systems, there is an increasing need for computational tools for verification and control of complex systems from rich, temporal logic specifications. The formal verification and synthesis problems have been shown to be undecidable even for very simple classes of infinite-space continuous and hybrid systems. However, provably correct but conservative approaches, in which the satisfaction of a property by a dynamical system is implied by the satisfaction of the property by a finite over-approximation (abstraction) of the system, have received a lot of attention in recent years. The focus of this talk is on discrete-time linear systems, for which it is shown that finite abstractions can be constructed through polyhedral operations only. By using techniques from model checking and automata games, this allows for verification and control from specifications given as Linear Temporal Logic (LTL) formulae over linear predicates in the state variables. The usefulness of these computational tools is illustrated with various examples such as verification and synthesis of biological circuits in synthetic biology and motion planning and control in robotics.
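The claim that finite abstractions can be built "through polyhedral operations only" can be glimpsed in the simplest such check: set invariance of a polytope under discrete-time linear dynamics reduces to testing the images of its vertices, since a linear map sends the polytope to the convex hull of the mapped vertices. A sketch with a hypothetical example system, not the tools from the talk:

```python
import numpy as np

def is_invariant(A, H, h, vertices, tol=1e-9):
    """Check invariance of a polytope P = {x : Hx <= h} = conv(vertices)
    under the dynamics x+ = A x.  Because A maps P to the convex hull of
    the mapped vertices, P is invariant iff A v lies in P for every vertex v."""
    return all(np.all(H @ (A @ v) <= h + tol) for v in vertices)
```

For the unit box and a contractive map the check passes; scale the dynamics up and it fails, exactly as the "simple specification" of set invariance demands.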
  • Talk: 2013 Heilmeier Lecturer for Excellence in Faculty Research - Dr. Vijay Kumar, "Aerial Robot Swarms"
    Date: Friday, February 22, 2013 - 11am to 12pm
    Presenters: Vijay Kumar
    Autonomous micro aerial robots can operate in three-dimensional environments and offer many opportunities for environmental monitoring, search and rescue, and first response. In this lecture, Dr. Kumar will describe his recent work with small, agile aerial robots and discuss the challenges in deploying large numbers of aerial robots, with applications to cooperative manipulation and transport, construction, and exploration and mapping.
  • Talk: Spring 2013 GRASP Seminar: David Sontag, New York University, "Method-of-Moment Algorithms for Learning Bayesian Networks"
    Date: Friday, March 1, 2013 - 11am to 12pm
    Presenters: David Sontag
    We present new algorithms for unsupervised learning of probabilistic topic models and noisy-OR Bayesian networks. Probabilistic topic models are frequently used to learn thematic structure from large document collections without human supervision, and the Bayesian networks that we study are often used for medical diagnosis. We circumvent the computational intractability of maximum likelihood learning by making the assumption that the observed data is drawn from a distribution within the model family that we are attempting to learn, such as Bayesian networks with latent variables. We demonstrate a set of structural constraints that make learning possible, yet are still realistic for many real-world applications. The new algorithms produce results comparable to the best MCMC implementations while running orders of magnitude faster. Joint work with Sanjeev Arora, Rong Ge, Yoni Halpern, David Mimno, Ankur Moitra, Yichen Wu, and Michael Zhu.
  • Talk: Special GRASP Seminar: Mubarak Shah, University of Central Florida, "Representing Human Actions As Motion Patterns"
    Date: Wednesday, March 13, 2013 - 3pm to 4pm
    Presenters: Mubarak Shah
    Alternate Location: Levine 307 (3330 Walnut Street)
    Automatic analysis of videos is one of the most challenging problems in computer vision. In this talk I will introduce the problem of action, event, and activity representation and recognition from video sequences. I will begin by giving a brief overview of a few interesting methods to solve this problem, including representations based on trajectories, volumes, and local interest points. The main part of the talk will focus on a newly developed framework for the discovery and statistical representation of motion patterns in videos, which can act as primitive, atomic actions. These action primitives are employed as a generalizable representation of articulated human actions, gestures, and facial expressions. The motion primitives are learned by hierarchical clustering of observed optical flow in a four-dimensional spatial and motion-flow space, and a sequence of these primitives can be represented as a simple string, a histogram, or a hidden Markov model. I will then describe methods that extend the motion-pattern framework to the problem of multi-agent activity recognition. First, I will discuss similarity-invariant matching of motion patterns in order to recognize simple events in surveillance scenarios. I will end the talk by presenting a framework in which a motion pattern represents the behavior of a single agent, while a multi-agent activity takes the form of a graph that can be compared to other activity graphs by attributed inexact graph matching. This method is applied to the problem of recognizing American football plays.
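Clustering optical flow in (x, y, u, v) space can be sketched with a flat k-means in place of the hierarchical clustering used in the talk: each centroid plays the role of a prototypical "where + which way" motion atom. All names here are illustrative:

```python
import numpy as np

def flow_clusters(flow_samples, k, n_iters=20, rng=0):
    """Cluster optical-flow samples (x, y, u, v) into k motion primitives.
    A flat k-means stand-in for hierarchical clustering: each centroid is
    a prototypical position-plus-velocity motion atom."""
    rng = np.random.default_rng(rng)
    centroids = flow_samples[rng.choice(len(flow_samples), k, replace=False)]
    for _ in range(n_iters):
        # assign each 4D sample to its nearest centroid
        d = np.linalg.norm(flow_samples[:, None] - centroids[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):               # recompute centroids
            pts = flow_samples[assign == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0)
    return centroids, assign
```

A sequence of frames then maps to a sequence of cluster indices, which is exactly the "simple string" representation the abstract mentions.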
  • Talk: Spring 2013 GRASP Seminar: Rene Vidal, Johns Hopkins University, "Sparse and Low-Rank Subspace Clustering"
    Date: Friday, March 15, 2013 - 11am to 12pm
    Presenters: Rene Vidal
    In the era of data deluge, the development of methods for discovering structure in high-dimensional data is becoming increasingly important. Traditional approaches often assume that the data is sampled from a single low-dimensional manifold. However, in many applications in signal/image processing, machine learning and computer vision, data in multiple classes lie in multiple low-dimensional subspaces of a high-dimensional ambient space. In this talk, I will present methods from algebraic geometry, sparse representation theory and rank minimization for clustering and classification of data in multiple low-dimensional subspaces. I will show how these methods can be extended to handle noise, outliers as well as missing data. I will also present applications of these methods to video segmentation and face clustering.
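For the noiseless case, one classical low-rank construction in this line of work is the shape interaction matrix of Costeira and Kanade: build the affinity from the top right singular vectors of the data matrix, and for independent subspaces the result is exactly block diagonal. A sketch (not the speaker's sparse/low-rank programs, which also handle noise, outliers, and missing data):

```python
import numpy as np

def shape_interaction_affinity(X, rank):
    """Affinity |V_r V_r^T| from the top-r right singular vectors of the
    data matrix X (columns are points).  For noiseless points drawn from
    independent subspaces of total dimension r, the matrix is block
    diagonal: entries between points of different subspaces vanish."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Vr = Vt[:rank].T                  # (n_points, rank)
    return np.abs(Vr @ Vr.T)
```

The block-diagonal affinity can then be fed to spectral clustering to recover the subspace membership of each point.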
  • Talk: Spring 2013 GRASP Seminar: Volkan Isler, University of Minnesota, "Robotic Sensor Networks for Environmental Monitoring"
    Date: Friday, March 22, 2013 - 11am to 12pm
    Presenters: Volkan Isler
    Robotic Sensor Networks composed of robots and wireless sensing devices hold the potential to revolutionize environmental sciences by enabling researchers to collect data across expansive environments, over long, sustained periods of time. In this talk, I will report our progress on building such systems for two applications. The first application is on monitoring invasive fish (common carp) in inland lakes. In the second application, the robots act as data mules and collect data from sparsely deployed wireless sensors. After presenting results from field experiments, I will focus on two algorithmic challenges: planning robot paths to minimize the time to collect data from all sensors, and designing search strategies for finding (possibly mobile) targets.
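The data-mule path-planning problem mentioned above is TSP-like: visit every sensor while keeping the path short. A greedy nearest-neighbor baseline (purely illustrative, not the speaker's algorithm, which TSP-style planners would improve on) looks like:

```python
import numpy as np

def mule_tour(sensor_xy, start=0):
    """Greedy nearest-neighbor tour over sensor locations: from the current
    sensor, always visit the closest unvisited one next.  A simple baseline
    for the data-mule problem of collecting data from every sensor."""
    n = len(sensor_xy)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        cur = sensor_xy[tour[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(sensor_xy[j] - cur))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```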
  • Talk: MEAM/GRASP/PRECISE Seminar: Gary Fedder, Carnegie Mellon University, "Advanced Manufacturing Institutes – A $2B National Experiment in Government-Industry-University Private-Public Partnerships"
    Date: Friday, March 22, 2013 - 12pm to 1pm
    Presenters: Gary K. Fedder
    Alternate Location: Berger Auditorium, Skirkanich Hall
    While the United States is a leading manufacturer in the world, our nation has been losing manufacturing jobs to overseas operations for the last three decades. This trend accelerated after 2000. Revitalizing our manufacturing sector is important for three compelling reasons: manufacturing provides high-paying jobs that spawn service-sector jobs, product innovation is facilitated by co-location of design and production processes, and domestic manufacturing capability is vital to national security*. To address these concerns, in March 2012, President Obama announced a national initiative to create up to 15 institutes for advanced manufacturing as part of the National Network for Manufacturing Innovation (NNMI). Through a swift competition and selection process, a pilot institute for NNMI, called the National Additive Manufacturing Innovation Institute (NAMII), was awarded in August 2012, with the winning team centered in the Ohio, Pennsylvania and West Virginia region. Competitions for three more NNMI institutes are forecast for this year. NAMII and the NNMI institutes are instances of unique government-industry-university private-public partnerships that amount to an interesting national experiment to address the gap in R&D activities between applied research and productization. I will walk through the events leading to these national manufacturing initiatives, draw on some lessons already learned, and point to future opportunities for advanced manufacturing R&D. I will also describe a unique program, called Research for Advanced Manufacturing in Pennsylvania (RAMP) and led by Carnegie Mellon and Lehigh University, which seeds university R&D projects that are driven by industry needs. * Report to the President on Ensuring American Leadership in Advanced Manufacturing, President’s Council of Advisors on Science and Technology.
  • Talk: GRASP Special Seminar: Sohee Lee, Technische Universität München, "Dynamics-Based Motion Planning, Control, and Task Programming for Mobile Manipulation"
    Date: Friday, March 22, 2013 - 3pm to 4pm
    Presenters: Sohee Lee
    Alternate Location: Moore 317
    The talk is divided into three parts: (i) modular and reusable software architectures for motion programming; (ii) motion control laws that take into account limited computing resources; and (iii) online rollover prevention in high-speed mobile manipulation based on the Lie group dynamics formulation. (i) I describe a unified framework for task planning, motion planning, and control of wheeled mobile manipulators. This involves setting up a low-level (control-level) database of motion primitives that is user-friendly and reusable and is designed to combine with higher-level components for reasoning and intelligence. The result is an integrated, hierarchical programming environment for autonomous robots. (ii) In the human motor control literature, it is well known that humans select the optimal motion among the many feasible motions between an initial pose and a goal pose. Various optimality criteria (e.g., minimum energy, minimum jerk, minimum torque change) have been evaluated to explain human motor coordination. I suggest minimum attention, which takes into account the cost of control, as a paradigm for human-like movement generation in a robot. (iii) I briefly introduce the Lie group dynamics formulation and, as an application, propose a real-time dynamic balancing control law for wheeled mobile manipulators. For the zero-moment-point dynamic stability criterion, a corrected formulation is developed that makes the definition of a potential function mathematically consistent and physically plausible. I also derive efficient recursive algorithms for computing exact analytic gradients of the zero-moment-point functions, which leads to marked improvements in convergence and computational performance over existing approaches.
  • Talk: Spring 2013 GRASP Seminar - Bruno Siciliano, University of Naples Federico II, "Grasping and Control of Multi-fingered Hands"
    Date: Friday, March 29, 2013 - 11am to 12pm
    Presenters: Bruno Siciliano
    After a brief overview of the research carried out at PRISMA Lab, along with highlights from current projects, the talk reports some recent results achieved within the framework of the European project DEXMART. An important issue in controlling a multi-fingered robotic hand grasping an object is the synthesis of the optimal contact points and the evaluation of the minimal contact forces able to guarantee the stability and feasibility of the grasp. Both of these problems can be solved online if suitable sensing information is available. In detail, using images taken by a camera mounted in an eye-in-hand configuration, a surface reconstruction algorithm and a grasp planner evolving in a synchronized, parallel way have been designed for fast visual grasping of objects of unknown geometry. On the other hand, using finger tactile information and contact force measurements, an efficient algorithm was developed to compute the optimal contact forces, assuming that, during the execution of a manipulation task, both the positions of the contact points on the object and the wrench to be balanced by the contact forces may change with time. Another goal pursued in DEXMART was the development of a human-like grasping approach inspired by neuroscience studies. To simplify the synthesis of a grasp, a configuration subspace based on a few predominant postural synergies of the robotic hand has been computed. This approach was evaluated at the kinematic level, showing that power and precision grasps can be performed using up to the third predominant synergy. The talk concludes by outlining active trends and perspectives in the field of robotics.
  • Talk: Spring 2013 GRASP Seminar: Luis Sentis, University of Texas, "Rough Terrain Mobility and Manipulation"
    Date: Friday, April 5, 2013 - 11am to 12pm
    Presenters: Luis Sentis
    Everyday environments, such as urban or industrial settings, contain clutter and rough terrain topography. Traditionally, the approach to these scenarios has been to avoid the clutter or to use low-profile mobile robotic systems that can maneuver in the terrain. An exception to these approaches is the use of legged humanoid robots, whose small footprint and highly articulated bodies enable them to maneuver in difficult environments. However, research addressing those sophisticated skills is still at a very early developmental stage. In this talk, I will present my research in two areas related to these problems: (1) enabling mobile humanoid robots to maneuver in rough terrain while making contact with the cluttered environment, and (2) planning locomotion and multicontact trajectories of legged humanoids in extreme terrain. In particular, I will focus on compliant models for mobility, robust control of gaits in rough terrain, and the whole-body control software architecture. Examples using our humanoid robots Dreamer and Hume will be shown and discussed.
  • Talk: Special GRASP Seminar: Zexiang Li, Hong Kong University of Science and Technology, "From Da Vinci to Five-Axes Machines"
    Date: Tuesday, April 9, 2013 - 3pm to 4pm
    Presenters: Zexiang Li
    Alternate Location: Levine 307 (3330 Walnut Street)
    A five-axes machine can rotate its tool spindle through a large angle, and therefore greatly increases efficiency by machining multiple faces of a workpiece with a single setup. The rotational motion type of five-axes machining is a two-dimensional submanifold of the special orthogonal group SO(3). For decades, the Hooke joint model has been taken for granted as a description of this motion type. However, the Hooke joint model has the topology of a torus, which contradicts the fact that the spindle rotation has the topology of a sphere. The Hooke joint model is therefore not an ideal model for type synthesis of parallel kinematic machines with omni-directional large-angle capability. Inspired by a recent survey study, we discovered that there exists a unified model for motion types ranging from the kinesiology of Da Vinci and Listing's law of eye movement to Rzeppa's constant velocity joint, Zlatanov's multi-mode parallel mechanism, and five-axes machining. This newly discovered model, known as an "exponential submanifold", has the correct spherical topology and successfully explains the inconsistency of the Hooke joint model. The exponential submanifold has unique topological and geometrical properties that naturally lead to omni-directional large-angle capability. It is therefore not only the correct kinematic model for five-axes machines, rehabilitation robotics, eye movement, constant velocity joints, etc., but also an indispensable mathematical tool for type synthesis.
  • Talk: Spring 2013 GRASP Seminar: National Robotics Week Event
    Date: Friday, April 12, 2013 - 11am to 12pm
    GRASP Lab K-16 Student Open House, April 12th, 2013. In celebration of National Robotics Week, the GRASP Lab will host an open house on Friday, April 12th. Attendees will have a chance to explore GRASP facilities, view robot demonstrations, and speak with robotics researchers. GRASP graduate students will be showing off robots that fly, perform surgery, play soccer, high-five, walk on two legs, and climb poles. The event is free and open to K-16 groups. Registration is required and space is limited. Update (2/15/13): We have reached capacity for this event. If you are still interested, please fill out the registration form and we will put you on our waiting list.
  • Talk: Spring 2013 GRASP Seminar: Noah Snavely, Cornell University, "The Distributed Camera: Modeling the World from Everyone's Online Photos"
    Date: Friday, April 19, 2013 - 11am to 12pm
    Presenters: Noah Snavely
    We live in a world of ubiquitous imagery, in which the number of images at our fingertips is growing at a seemingly exponential rate. These images come from a wide variety of sources, including mapping sites, webcams, and millions of photographers around the world uploading billions and billions of images to social media and photo-sharing websites such as Facebook. Taken together, these sources of imagery can be thought of as constituting a distributed camera capturing the entire world at unprecedented scale, continually documenting its cities, mountains, buildings, people, and events. This talk will focus on how we might use this distributed camera as a fundamental new tool for science, engineering, and environmental monitoring, and on how a key problem is *calibration* -- determining the geometry of each photo and relating it to all other photos in an efficient, automatic way. I will describe our ongoing work on using automated 3D reconstruction algorithms to recover such geometry from massive photo collections, with the goal of using these photos to gain a better understanding of our world.
  • Talk: GRASP Special Seminar: Giuseppe Loianno, University of Naples Federico II, "Visual Navigation, 3D Mapping and Reconstruction for MAVs"
    Date: Friday, April 19, 2013 - 2pm to 3pm
    Presenters: Giuseppe Loianno
    Alternate Location: Levine 307 (3330 Walnut Street)
    Giuseppe Loianno's research focuses on sensor fusion algorithms, visual environment reconstruction, and visual control for micro aerial vehicles (MAVs). Camera measurements arrive too slowly for some control applications, so it is necessary to fuse them with those of other sensors such as an IMU (Inertial Measurement Unit) and, when available, GPS. In the monocular case, this problem has been solved by combining different landmarks with IMU measurements to obtain a closed-form solution for scale-factor estimation. The result has been used with optical flow to control the vehicle along a corridor while avoiding lateral obstacles. An interesting application of optical flow is to use the average flow to control the vehicle in contact with the environment, realizing a wall approach for docking and grasping objects. Other sensor fusion techniques based on Kalman filtering and Pareto optimization have been implemented, tested, and compared in simulation, showing an improvement from the Pareto technique at the price of increased computational cost. Low-cost range sensors are an attractive alternative to expensive laser scanners and 3D cameras in research domains such as indoor navigation and mapping, surveillance, and autonomous robotics. Consumer-grade range sensing technology offers a choice between different devices available on the market. The newest ASUS Xtion sensor is light compared to the first generation of RGB-D cameras (around 70g without the external casing), needs no external power beyond the USB connection, and is very compact. These properties give the device unique characteristics suitable, for example, for unmanned aerial vehicle applications. The new sensor is employed by coupling a monocular multi-map visual odometry algorithm with depth, estimating the scale factor and obtaining a dense, absolutely scaled colored map. To keep memory usage from growing in large environments, a spatial multi-resolution approach is proposed that acquires point cloud data according to local environment distance. Finally, a high-level map of the environment is built for supervisory control and used to estimate a planar wall to be inspected by the vehicle.
  • Talk: Spring 2013 GRASP Seminar: Kristin Dana, Rutgers University, "Illumination Modeling for Camera-Display Communication"
    Date: Friday, May 3, 2013 - 11am to 12pm
    Presenters: Kristin Dana
    Our modern society has pervasive electronic displays such as billboards, computers, tablets, signage, and kiosks. The prevalence of these displays provides opportunities to develop photographic methods for active scenes, where intentional information is encoded in the displayed images and must be recovered by a camera. These active scenes are fundamentally different from traditional passive scenes because image formation is based on display emittance, not surface reflectance. QR codes on billboards are one example of an active scene with intentional information, albeit a very simple one. The problem becomes more challenging when the message is hidden and dynamic. Detecting and decoding the message requires careful photometric modeling for computational message recovery. We present a novel method for communicating between a camera and a display by embedding and recovering information within a displayed image. A handheld camera pointed at the display can receive not only the display image but also the underlying message. Unlike standard watermarking and steganography, which lie outside the domain of computer vision, our message recovery algorithm uses illumination in order to optically communicate hidden messages in real-world scenes. The key innovation of our approach is an algorithm that performs simultaneous radiometric calibration and message recovery in one convex optimization problem. By modeling the photometry of the system with a camera-display transfer function (CDTF), we derive a physics-based kernel function for support vector machine classification. We demonstrate that our method of optimal online radiometric calibration (OORC) leads to an efficient and robust algorithm for computational messaging between various commercial cameras and displays. Results are evaluated using video messaging with nine different combinations of commercial cameras and displays.
  • Talk: Special GRASP Seminar: Pete Shull, Shanghai Jiao Tong University, "Wearable Haptics for Clinical Applications"
    Date: Friday, May 10, 2013 - 2pm to 3pm
    Presenters: Pete Shull
    Alternate Location: Levine 307 (3330 Walnut Street)
    Movement is an essential part of human life. An average day flows with thousands of movements, from rolling out of bed to walking down the stairs to getting into and out of the car. Many diseases and injuries hinder or are exacerbated by movement. Osteoarthritis often worsens due to joint loading during movement. Stroke, spinal cord injury and other neurological disorders inhibit motion and reduce the sense of movement control. In sports, poor landing and cutting techniques can tear the anterior cruciate ligament (ACL), and improper running mechanics can lead to tibial stress fractures. Wearable haptic feedback can guide and train human movements to treat disease and prevent injury. This seminar explores wearable haptics for two important and debilitating diseases: knee osteoarthritis and stroke. Recent research will be presented involving real-time feedback movement training through wearable haptic devices, kinematic and kinetic sensing, system control and biomechanical modeling. Implementation of such systems has enabled knee osteoarthritis patients to walk with less joint loading and less knee pain and has facilitated upper extremity rehabilitation for stroke victims.
  • Talk: Spring 2013 GRASP Seminar: Sangbae Kim, Massachusetts Institute of Technology, "Toward Highly Dynamic Locomotion : Actuation, Structure and Control of the MIT Cheetah Robot"
    Date: Friday, May 17, 2013 - 11am to 12pm
    Presenters: Sangbae Kim
    Robot designers are increasingly searching for ideas from biology. The talk will introduce bio-inspired robots that embody hypothesized principles drawn from insights obtained in animal studies. Through these examples, the intricate process of extracting design principles will be discussed. Current research in the MIT Biomimetics Lab is centered on the development of a cheetah-inspired running robot. The three major associated research thrusts are optimal actuator design, biotensegrity structure design, and an impulse-based control architecture for stable galloping. Each research component is guided by the biomechanics of runners such as dogs and cheetahs, which are capable of fast traversal of rough and unstructured terrain.
  • Talk: GRASP Special Seminar: Dimitrios Kanoulas, Northeastern University, "From Noisy Point Clouds to Curved Contact Patches"
    Date: Monday, June 10, 2013 - 11am to 12pm
    Presenters: Dimitrios Kanoulas
    Alternate Location: Levine 512 (3330 Walnut Street)
    In this talk I describe perception algorithms that use curved patches to model contact surfaces in uneven environments like rocky trails. First I introduce a set of 10 patch models for contact areas both in the environment and on an articulated robot, and an algorithm for fitting these to point cloud data with estimated uncertainty both in the input points and the output patch. Then I describe an algorithm for sparsely covering nearby environment surfaces with patches appropriate for a robot to touch. The algorithm keeps only those patches which pass several validation checks to ensure fidelity to the sensed point cloud data. I also introduce a notion of saliency of a patch with respect to a locomotion task using local surface properties like normal vectors and curvatures. I present results on datasets of natural rocky terrain taken with a Kinect and compare point neighborhoods based on k-d trees vs. triangle meshes.
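Local surface properties like the normals and curvatures used for patch saliency are commonly estimated by PCA over a point neighborhood. A generic sketch of that step (not the paper's ten patch models or their uncertainty propagation):

```python
import numpy as np

def fit_plane_pca(points):
    """Fit a plane to a point neighborhood by PCA.

    Returns (centroid, unit_normal, surface_variation): the normal is the
    eigenvector of the covariance matrix with smallest eigenvalue, and
    surface_variation = lam_min / (lam_0 + lam_1 + lam_2) is a common
    curvature proxy (0 for a perfectly planar neighborhood)."""
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]
    variation = eigvals[0] / eigvals.sum()
    return c, normal, variation
```

Thresholding the surface-variation value is one simple way to decide whether a neighborhood is flat enough to be a candidate contact patch.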
  • Talk: GRASP Special Seminar: Lee White, University of Washington, "Quantitative Objective Assessment of Preoperative Warm-up for Robotic Surgery"
    Date: Thursday, July 25, 2013 - 4pm to 6pm
    Presenters: Lee White
    Alternate Location: Levine 307 (3330 Walnut Street)
    Here I present the application of three established methods for quantitatively and objectively assessing robotic surgical performance, as well as the development and application of a fourth. These four tools are used to assess the hypothesis that a certain surgical warm-up protocol improves the performance of surgeons on a da Vinci robotic surgical system. In the protocol, surgeons perform a brief warm-up task on the Mimic dV-Trainer virtual reality simulator prior to performing one of two robotic surgery practice tasks. The three established techniques consist of basic measures (task time, tool path length, economy of motion, and errors), algorithmic assessment (using trained hidden Markov model machine learning algorithms), and surgeon assessment (using the Global Evaluative Assessment of Robotics Surgery). The newly proposed technique, called Crowd-Sourced Assessment of Technical Skill (C-SATS), draws on crowds of people on the Internet to assess surgical performance. Evidence that warm-up improves surgical performance is presented, along with an analysis of the strong agreement between C-SATS and grades provided by a group of surgeons trained to assess surgical performance.
  • Talk: GRASP REU Site Oral Presentations - Summer 2013
    Date: Tuesday, August 6, 2013 - 1pm to 3pm
    Presenters: GRASP REU Site Oral Presentations
    Alternate Location: Wu & Chen Auditorium (Levine 101)
    GRASP REU Site Oral Presentations
    Tuesday, August 6, 2013, 1:00pm - 3:00pm, Wu and Chen Auditorium
    Welcome by Katherine J. Kuchenbecker and Max Mintz, GRASP REU Site Co-Directors
    1:00 p.m. Mitchell Breitbart, rising junior in Computer Science and English at Williams College. Advised by Dr. Daniel Koditschek; mentored by Dr. Dan Guralnik. "A Simulation Engine of a Memory Model for Autonomous Mapping and Navigation"
    1:15 p.m. Karena Cai, rising junior in Mechanical and Aerospace Engineering at Princeton University. Advised by Dr. Daniel Koditschek; mentored by Jeff Duperret. "Modeling and Simulation of a Flexible Spine Quadruped"
    1:30 p.m. Lizzie Halper, rising junior in Mathematics and Scientific Computing at Kenyon College. Advised by Dr. Camillo J. Taylor; mentored by David Isele. "Robotic Olfaction: Chemical Sensing and Analysis"
    1:45 p.m. Patrick Husson, rising senior in Computer Engineering at the University of Maryland, Baltimore County. Advised by Dr. Daniel Lee; mentored by Alex Burka and Joe Trovato. "Localizing and Positioning Quadrotors using Visual Fiducial Markers"
    2:00 p.m. Jean Mendez, rising senior in Computer Engineering at the University of Puerto Rico, Mayaguez Campus. Advised by Dr. Mark Yim; mentored by Tarik Tosun. "Kinematic Retargeting: Making Robots Move More Like Humans"
    2:15 p.m. Julie Ochoa-Canizares, rising senior in Robotics Engineering at Worcester Polytechnic Institute (WPI). Advised by Dr. Katherine J. Kuchenbecker; mentored by Vivienne Clayton. "Electromechanical Design and Control of a Novel Cable-Based Gait Rehabilitation System"
    2:30 p.m. Camille Ramseur, rising senior in Computer Science at the University of South Florida. Advised by Dr. George Pappas; mentored by Chinwendu Enyioha. "Redundancy in Networks"
    2:45 p.m. Megan Tienjaroonkul, rising junior in Mechanical Engineering at Carnegie Mellon University. Advised by Dr. Vijay Kumar; mentored by Philip Dames. "Graphical User Interface for Multi-Target Localization Simulation"
    Many thanks to the advisors, mentors, colleagues, staff, GRASP Lab, and larger Penn community for helping make the second year of the GRASP REU Site such a success. We are especially indebted to Charity Payne and Yibin Zhang for their excellent work in running the program this summer. Congratulations to all eight 2013 participants!
  • Talk: GRASP Special Seminar: Augusto Loureiro da Costa, Federal University of Bahia, Brazil, "A Cognitive Embedded Model for Mobile Robots"
    Date: Friday, August 23, 2013 - 12pm to 1pm
    Presenters: Augusto Loureiro da Costa
    Alternate Location: Levine 512 (3330 Walnut Street)
    A cognitive embedded model for mobile robots, and experimental results from embedding this cognitive model in humanoid mobile robots, are presented in this talk. The cognitive model is a computational implementation of the Generic Cognitive Model for Autonomous Agents. The cognitive agent was first implemented in a distributed multi-robot control system for the RoboCup Soccer simulation league, incorporating several important features of the cognitive model. Experiments with the simulated multi-robot system showed that it is a well-suited cognition model for providing robots with high-complexity task execution. We therefore chose to embed the cognitive agent system in humanoid robots, giving them full decision autonomy for high-level task execution. Additionally, the features of the presented cognitive agent are fully consistent with the desirable features of the Generic Cognitive Model hypotheses. A locomotion task in a dynamic environment with predictable and unpredictable obstacles was used for these experiments.
  • Talk: GRASP Faculty Research Introductions
    Date: Friday, September 6, 2013 - 11am to 12pm
    Presenters: GRASP Faculty
  • Talk: Fall 2013 GRASP Seminar: Manuela Veloso, Carnegie Mellon University, "Symbiotic Autonomy: Robots, Humans, and the Web"
    Date: Friday, September 13, 2013 - 11am to 12pm
    Presenters: Manuela Veloso
    We envision ubiquitous autonomous mobile robots that coexist and interact with humans while performing tasks. Such robots are still far from common, as our environments offer great challenges to robust autonomous robot perception, cognition, and action. In this talk, I present symbiotic robot autonomy, in which robots are robustly autonomous in their localization and navigation, and handle their limitations by proactively asking humans for help, accessing the web for missing knowledge, and coordinating with other robots. Such symbiotic autonomy has enabled our CoBot robots to move through our multi-floor buildings performing a variety of service tasks, including escorting visitors and transporting packages between locations. I will describe CoBot's fully autonomous, effective mobile robot indoor localization and navigation algorithms, its human-centered task planning, and its symbiotic interaction with humans, the web, and other robots, namely other CoBots and Baxter. I will further present our ongoing research on knowledge learning from our speech-based robot interaction with humans. The talk will be illustrated with results and examples from many hours-long runs of the robots in our buildings. The work is joint with Joydeep Biswas, Brian Coltin, Stephanie Rosenthal, Mehdi Samadi, Tom Kollar, Vittorio Perera, Robin Soetens, and Yichao Sun. Special thanks to Cetin Mericli and Daniele Nardi.
  • Talk: GRASP Special Seminar: Serafin Diaz, Qualcomm Research, "Augmented Reality and Computer Vision at Qualcomm"
    Date: Monday, September 16, 2013 - 11am to 12pm
    Presenters: Serafin Diaz
    Alternate Location: Levine 307 (3330 Walnut Street)
    Augmented reality (AR) is a technology in which computer-generated objects are viewed in the context of the physical world. It adds information and meaning to real objects. To do this, AR relies on computationally intensive Computer Vision (CV) to accurately detect and track objects in our environment. AR was originally limited to the lab and fixed setups, but it is now free to enhance the experience of mobile users. This is possible only thanks to recent developments in mobile technology, which have enabled this computationally intensive technology to be deployed on mobile devices. This talk will summarize the AR and CV research activities taking place at Qualcomm, explaining how and why a communications company took on the challenge of producing the now-leading AR SDK known as Vuforia.
  • Talk: Fall 2013 GRASP Seminar: Aleix Martinez, Ohio State University, "My Adventures with Bayes: Searching for Bayes optimal solutions in machine learning, statistics, computer vision, neuroscience and beyond"
    Date: Friday, September 20, 2013 - 11am to 12pm
    Presenters: Aleix Martinez
    The Bayes criterion is generally regarded as the holy grail in classification because, for known distributions, it leads to the smallest possible classification error. Unfortunately, the Bayes classification boundary is generally nonlinear and its associated error can only be calculated under unrealistic assumptions. In this talk, we will show how these obstacles can be readily and efficiently averted yielding Bayes optimal algorithms in machine learning, statistics, computer vision and others. We will first derive Bayes optimal solutions in Discriminant Analysis. We will then extend the notion of homoscedasticity (meaning of the same variance) to spherical-homoscedasticity (meaning of the same variance up to a rotation) and show how this allows us to generalize the Bayes criterion beyond previously defined domains. This will lead to a new concept of kernel mappings with applications in classification (machine learning), shape analysis (statistics), and structure from motion (computer vision). We will conclude with an outline of ongoing research for nonparametric kernel learning.
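To make the talk's starting point concrete, here is a minimal sketch of the Bayes classifier for two classes with known Gaussian densities and equal priors (a case where the boundary happens to be linear, since the covariances are shared); the talk concerns the much harder general case. All distribution parameters below are invented for illustration.

```python
import numpy as np

# Two known Gaussian class densities with a shared covariance.
mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
inv = np.linalg.inv(np.eye(2))  # shared inverse covariance

def log_density(x, mu):
    # Log of the Gaussian density up to a constant; the shared
    # normalizer and equal priors cancel in the comparison.
    d = x - mu
    return -0.5 * d @ inv @ d

def bayes_classify(x):
    # Bayes rule with known densities: pick the larger posterior.
    return int(log_density(x, mu1) > log_density(x, mu0))

print(bayes_classify(np.array([0.2, -0.1])))  # → 0 (near mu0)
print(bayes_classify(np.array([1.9, 2.3])))   # → 1 (near mu1)
```

With known distributions, no classifier can achieve a lower expected error than this rule, which is why it serves as the benchmark the talk's methods aim to attain.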
  • Talk: Fall 2013 GRASP Seminar: Joachim Buhmann, ETH Zurich, "Big Data: where is the information?"
    Date: Friday, September 27, 2013 - 11am to 12pm
    Presenters: Joachim Buhmann
    The digital revolution has created unprecedented opportunities in computing and communication, but it has also generated a data deluge with an urgent demand for new pattern recognition technology. Learning patterns in data requires extracting interesting, statistically significant regularities from (large) data sets, e.g., identifying connection patterns in the brain (connectomics) or detecting cancer cells in tissue microarrays and estimating their staining as a cancer severity score. Admissible solutions or hypotheses specify the context of pattern analysis problems, which have to cope with model mismatch and noise in the data. A statistical theory of discriminative learning is developed based on information theory, in which the precision of inferred solution sets is estimated in a noise-adapted way. The tradeoff between "informativeness" and "robustness" is mirrored by the balance between high information content and identifiability of solution sets, giving rise to a new notion of context-sensitive information. Cost functions that rank solutions and, more abstractly, algorithms are considered as noisy channels with a data-dependent approximation capacity. The effectiveness of this concept is demonstrated by model validation for spectral clustering based on different variants of graph cuts. The concept also enables us to measure how many bits are extracted by sorting algorithms when the input and the pairwise comparisons are subject to fluctuations.
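For readers unfamiliar with the graph-cut clustering the talk validates, the following toy sketch shows the core of spectral clustering: build a graph Laplacian from pairwise affinities and split the nodes by the sign of the Fiedler vector (the eigenvector of the second-smallest eigenvalue). The affinity matrix is synthetic; the talk's contribution is the validation framework, not this standard procedure.

```python
import numpy as np

# Affinity graph: two tight cliques joined by a weak bridge.
n = 6
W = np.zeros((n, n))
W[:3, :3] = 1.0   # clique A
W[3:, 3:] = 1.0   # clique B
np.fill_diagonal(W, 0.0)
W[2, 3] = W[3, 2] = 0.1  # weak link between the cliques

# Unnormalized graph Laplacian L = D - W.
D = np.diag(W.sum(axis=1))
L = D - W

# Eigenvectors in ascending eigenvalue order; column 1 is the
# Fiedler vector, whose sign pattern approximates the minimum cut.
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
print(labels)
```

Variants of this construction (ratio cut, normalized cut) differ in how the Laplacian is scaled, which is exactly the family of graph cuts the talk's model validation compares.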
  • Talk: GRASP Special Seminar: Domenico Daniele Bloisi, Sapienza University of Rome, "Intelligent Surveillance Applications"
    Date: Wednesday, October 2, 2013 - 1pm to 2pm
    Presenters: Domenico Daniele Bloisi
    Alternate Location: Levine 512 (3330 Walnut Street)
    In this talk, a set of intelligent surveillance systems is presented. Possible solutions for automatic video surveillance challenges such as gradual and sudden illumination changes, modifications in background geometry, dynamic backgrounds, shadows, and reflections are discussed. Different state-of-the-art approaches for detecting and recognizing objects of interest in the monitored scene, tracking them over time, and handling events are presented, along with examples and results from real systems.
  • Talk: IRCS / GRASP Seminar: Greg Gerling, University of Virginia, "Computational Models of Tactile Mechanotransduction & the Design of Medical Simulators"
    Date: Friday, October 4, 2013 - 12pm to 1pm
    Presenters: Greg Gerling
    Alternate Location: IRCS Conference Room (3401 Walnut Street, 400A)
    In this talk, I will describe how our lab’s collaborative work in understanding the neurophysiological basis of touch (skin, receptors and neural coding; psychophysical limits) informs the applied design of neural sensors and human-machine interfaces, including neural prosthetics and training simulators in medical environments. Our sense of touch, while not yet as well understood as vision and audition, is essential for behaviors that range from avoiding bodily harm to vital social interactions. Discoveries in this field may help restore sensory function for disabled populations and enhance human performance and information processing capability. In particular, I will discuss work using computational models (finite element, neural transduction) and artificial sensor correlates to capture the neural behavior of the skin-mechanics and receptor end-organ interaction for the slowly adapting type I tactile afferent. This work spans science and engineering, where modeling of intact sensory systems is used to define transfer functions for application to upper-limb neural prosthetics and to define the appropriate range of sensory stimuli for medical simulators.
  • Talk: GRASP Special Seminar: Cesar Cadena, George Mason University, "Semantic Segmentation for Mobile Robots"
    Date: Monday, October 7, 2013 - 12pm to 1pm
    Presenters: Cesar Cadena
    Alternate Location: Levine 512 (3330 Walnut Street)
    The semantic mapping of the environment requires simultaneous segmentation and categorization of the acquired stream of sensory information. Existing methods typically treat semantic mapping as the final goal and differ in the number and types of semantic categories considered. We envision semantic understanding of the environment as an ongoing process and seek representations that can be refined and adapted depending on the task and the robot's interaction with the environment. The proposed approach uses the Conditional Random Field framework to infer the semantic categories in a scene (e.g., ground, structure, furniture, and props indoors, or ground, sky, building, vegetation, and objects outdoors). Using visual and 3D data, a novel graph structure and an effective set of features are exploited for efficient learning and inference, obtaining better or comparable results at a fraction of the computational cost on publicly available RGB-D and 3D lidar datasets. The chosen representation naturally lends itself to on-line recursive belief updates with a simple soft data association mechanism, and can seamlessly integrate evidence from multiple sensors with overlapping but possibly different fields of view (FOV), account for missing data, and predict semantic labels over the spatial union of the sensors' coverage.
  • Talk: Fall 2013 GRASP Seminar: Ted Zobeck, U.S. Department of Agriculture (USDA), "Wind Erosion and Dust Emissions Processes and Study Methods"
    Date: Friday, October 18, 2013 - 11am to 12pm
    Presenters: Ted Zobeck
    Introduction and context setting will be provided by Dr. Daniel Koditschek, University of Pennsylvania. Humanity's growing need to instrument the desert represents a new opportunity for robotics to impact society. Sand and dust storms have emerged as a growing worldwide menace, impacting increasingly large human populations on nearly every continent, damaging habitation, disrupting transportation, threatening agriculture, human health, and life, and leaving behind a permanently altered “desertified” environment. Soil erodibility is a key determinant of spatio-temporal wind erosion patterns, but few metrics, and still less empirical data, have been developed to map out erodibility at the landscape scale over the days-to-weeks timescales of chief relevance. Empirical studies at the ~acre/day scale are presently underway in several geographical regions, but there is growing evidence that far more data at still higher spatiotemporal resolution will be required to adequately inform emerging theoretical models. Continuing advances in satellite remote measurement technology respond in some measure to these needs, but it is clear that the heterogeneous theory (e.g., the dust chemistry and flux) associated with desertification models is only poorly constrained by such coarse-grained measurements. An emerging new generation of field-portable systems (e.g., miniaturized wind tunnels and in situ wind erosion apparatus, portable spectroradiometers, or laser particle counters) can provide information at the requisite spatial and temporal scale.
  • Talk: GRASP / CG@Penn Special Seminar: Ariel Shamir, The Interdisciplinary Center, Herzliya (now visiting Disney Research Boston & MIT), "Smart tools for photos and 3D models manipulation"
    Date: Thursday, October 24, 2013 - 10am to 11am
    Presenters: Ariel Shamir
    Alternate Location: Levine 307 (3330 Walnut Street)
    Powerful computer applications today allow manipulating and fabricating digital objects in unimaginable ways. However, these tools are often sophisticated and very difficult to use. One of the challenges in graphics and design today is to create simpler tools that allow even novice users to manipulate photographs and 3D objects more naturally. In this talk I will present several such efforts, including sketch2photo, sketch2-3D, and the recent 3-sweep technology. The key factor in all these works is utilizing humans specifically for semantic, high-level tasks that are very simple for them yet still extremely difficult for machines, while utilizing the machine for tasks that are hard and tedious for humans.
  • Talk: Fall 2013 GRASP Seminar: Franz Hover, Massachusetts Institute of Technology, "PLUME-CHASERS: Designing Fast Robot Teams Underwater"
    Date: Friday, October 25, 2013 - 11am to 12pm
    Presenters: Franz Hover
    Pursuit is a general class of perception and control problems defined by critical space and time scales:  a follower that cannot maintain adequate real-time performance will simply be unable to keep up.  Autonomous pursuit missions in the ocean include tracking of a marine vehicle or animal, and monitoring a large-scale ocean process like an oil plume or chemical front.  The opportunity for multi-vehicle sensing systems to contribute is clear, but wireless communication has been a perennial bottleneck that prevents truly dynamic operation.  Network-based control, a major research area over the last ten years, offers some solutions since packet loss, quantization, and delay are all relevant to gateway arrangements and acoustic modems in use today. I will discuss some of the framework and leading approaches for disciplined design of marine vehicle teams operating under severe communication constraints.  Our work includes the multi-armed bandit for stochastic adaptive positioning, target pursuit with joint estimation and coordinated control through acoustic modems, and an extension of target pursuit to follow ocean features.  This integrated “plume-chaser” mission is made possible by projecting a predictive field model onto vehicle coordinates, and applying strong synthesis tools within a linear time-invariant framework.
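The talk mentions the multi-armed bandit for stochastic adaptive positioning; as a hedged illustration of that formulation (not the speaker's algorithm), the sketch below runs the standard UCB1 index policy on three Bernoulli "positions" with invented reward rates, balancing exploration against exploitation of the best-sensing location.

```python
import math
import random

random.seed(0)
probs = [0.2, 0.5, 0.8]            # Bernoulli reward rate per "position"
counts = [0] * len(probs)          # pulls per arm
sums = [0.0] * len(probs)          # accumulated reward per arm

def ucb_pick(t):
    # Play each arm once, then maximize mean + exploration bonus.
    for a, c in enumerate(counts):
        if c == 0:
            return a
    return max(range(len(probs)),
               key=lambda a: sums[a] / counts[a]
                             + math.sqrt(2.0 * math.log(t) / counts[a]))

for t in range(1, 2001):
    a = ucb_pick(t)
    r = 1.0 if random.random() < probs[a] else 0.0
    counts[a] += 1
    sums[a] += r

print(counts)  # the best arm (index 2) receives most of the pulls
```

In the marine-sensing setting, each "arm" would be a candidate vehicle position and the "reward" a measure of information gained there, which is what makes the bandit a natural fit for adaptive positioning under uncertainty.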
  • Talk: GRASP Special Seminar: Svetlana Lazebnik, University of Illinois at Urbana-Champaign, "Towards Open-Universe Image Parsing with Broad Coverage"
    Date: Monday, November 4, 2013 - 1pm to 2pm
    Presenters: Svetlana Lazebnik
    Alternate Location: Levine 307 (3330 Walnut Street)
    Joint work with J. Tighe. I will present our work on image parsing, or labeling each pixel in an image with its semantic category (e.g., sky, ground, tree, person, etc.). Our aim is to achieve broad coverage across hundreds of object categories in large-scale datasets that can continuously evolve. I will first describe our baseline nonparametric region-based parsing system, which can easily scale to datasets with tens of thousands of images and hundreds of labels. Next, I will describe our approach to combining this region-based system with per-exemplar sliding window detectors to improve parsing performance on small object classes, which achieves state-of-the-art results on several challenging datasets. Time permitting, I may mention new extensions just submitted to CVPR.
  • Talk: Fall 2013 GRASP Seminar: Aaron Ames, Texas A&M, "Controlling the Next Generation of Bipedal Robots"
    Date: Friday, November 8, 2013 - 11am to 12pm
    Presenters: Aaron Ames
    Humans have the ability to walk with deceptive ease, navigating everything from daily environments to uneven and uncertain terrain with efficiency and robustness.  Despite the simplicity with which humans appear to ambulate, locomotion is inherently complex due to highly nonlinear dynamics and forcing.  Yet there is evidence to suggest that humans utilize a hierarchical subdivision among cortical control, central pattern generators in the spinal column, and proprioceptive sensory feedback. This indicates that when humans perform motion primitives, potentially simple and characterizable control strategies are implemented.  If these fundamental mechanisms underlying human walking can be discovered and formally understood, human-like abilities can be imbued into the next generation of robotic devices with far-reaching applications ranging from prostheses to legged robots for space exploration and disaster response. This talk presents the process of formally achieving bipedal robotic walking through controller synthesis inspired by human locomotion, and demonstrates these methods through examples of experimental realization on numerous bipedal robots.  Motivated by the hierarchical control present in humans, we begin by viewing the human as a “black box” and describe outputs, or virtual constraints, that appear to characterize human walking.  By considering the equivalent outputs for the bipedal robot, a novel type of control Lyapunov function (CLF) can be constructed that drives the outputs of the robot to the outputs of the human; moreover, the parameters of this CLF can be optimized so that stable robotic walking is provably achieved while simultaneously producing outputs of the robot that are as close as possible to those of a human.  This CLF forms the basis for a Quadratic Program (QP) yielding locomotion that dynamically accounts for torque and contact constraints.
The end result is the generation of bipedal robotic walking that is remarkably human-like and experimentally realizable, together with a novel control framework for highly dynamic behaviors on bipedal robots.  This is evidenced by the demonstration of the resulting controllers on multiple robotic platforms, including AMBER 1 and 2, NAO, ATRIAS, and MABEL.  Furthermore, these methods form the basis for achieving a variety of walking behaviors—including multi-domain and rough terrain locomotion—and have demonstrated application to the control of prostheses.
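To give a flavor of the CLF-based QP idea on the simplest possible example (a scalar system, not the talk's full bipedal dynamics), the sketch below computes the pointwise min-norm input enforcing exponential decay of a quadratic Lyapunov function; in one dimension the QP has a closed-form solution. The dynamics and gains are invented for illustration.

```python
# Scalar system xdot = f(x) + u with CLF V = x^2 / 2.
# Pointwise QP:  min u^2  s.t.  x*(f(x) + u) <= -gamma * V.
gamma = 2.0
f = lambda x: x  # unstable drift

def clf_qp(x):
    # a > 0 means the decay constraint is violated with u = 0.
    a = x * f(x) + gamma * 0.5 * x * x
    if a <= 0.0:
        return 0.0                      # constraint inactive: no input
    return -a / x if x != 0.0 else 0.0  # active: minimal correcting input

# Forward-Euler rollout: the closed loop should drive x to the origin.
x, dt = 1.0, 0.01
for _ in range(1000):
    x += dt * (f(x) + clf_qp(x))
print(round(x, 4))  # → 0.0
```

The full method replaces this scalar constraint with the CLF decrease condition on the robot's output dynamics and adds torque and contact constraints, so the QP must be solved numerically at each control step rather than in closed form.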
  • Talk: Fall 2013 GRASP Seminar: James Rehg, Georgia Institute of Technology, "Egocentric Recognition of Objects and Activities"
    Date: Friday, November 15, 2013 - 11am to 12pm
    Presenters: James Rehg
    Advances in camera miniaturization and mobile computing have enabled the development of wearable camera systems which can capture both the user's view of the scene (the egocentric, or first-person, view) and their gaze behavior. In contrast to the established third-person video paradigm, the egocentric paradigm makes it possible to easily collect examples of naturally-occurring human behavior, such as activities of daily living, from a consistent vantage point. Moreover, there exist a variety of egocentric cues which can be extracted from these videos and used for weakly-supervised learning of objects and activities. We focus on activities requiring hand-eye coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. We demonstrate that gaze measurement can provide a powerful cue for recognition. In addition, we present an inference method that can predict gaze locations and use the predicted gaze to infer action labels. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on a new dataset containing egocentric videos of daily activities and gaze. We will also describe some applications in psychology, where we are developing methods for automating the measurement of children's behavior, as part of a large effort targeting autism and other behavioral disorders. This is joint work with Alireza Fathi, Yin Li, and Agata Rozga.
  • Talk: Fall 2013 GRASP Seminar: Viktor Gruev, Washington University in St. Louis, "Spectral Polarization Focal-Plane Sensing for Functional Neural Imaging"
    Date: Friday, November 22, 2013 - 11am to 12pm
    Presenters: Viktor Gruev
    Recording neural activity using light has opened up unprecedented possibilities in the quest to understand the functionality of the nervous system. Light offers great advantages over electrophysiology, such as incredible spatial resolution (limited only by the diffraction of light), contact-less probing that avoids physical damage and interference with neural activity during recording, and simultaneous recording from large ensembles of neurons. However, in order to record an optical signal from a neuron, the electrical signal must be converted into an optical signal via a molecular reporter. The use of a reporter to translate the language of the neurons from electrons to photons currently has two major limitations: photobleaching and photodamage. To address these limitations of current state-of-the-art optical neural recording devices, we have developed a novel imaging technique that avoids molecular reporters and instead relies on the neuron’s intrinsic changes during an action potential. The main premise of our work is the following: light reflected from the surface of a neuron is partially linearly polarized, and the degree of linear polarization is a function of neural activity. To capture this neural activity, we have developed a polarization-sensitive imaging sensor with high spatial and temporal resolution. In this talk, I will describe the key components of our imaging system, such as nanofabrication of sub-wavelength metallic nanostructures acting as linear polarization filters, monolithic integration of these nanostructures with imaging arrays, image processing algorithms tailored for this new class of sensors, and validation of the imaging technique via in-vivo recording of neural activity from the antenna lobe of a locust.
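As background for the quantity the talk's sensor measures, the sketch below shows the standard Stokes-parameter computation used with division-of-focal-plane polarization imagers: four pixels with linear polarizers at 0, 45, 90, and 135 degrees yield the degree and angle of linear polarization. The input intensities are synthetic, and this is textbook polarimetry rather than the speaker's processing pipeline.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    # Linear Stokes parameters from the four polarizer channels.
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.hypot(s1, s2) / s0        # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)      # angle of polarization (radians)
    return dolp, aop

# Fully polarized light at 0 degrees follows Malus's law I ∝ cos^2(theta).
dolp, aop = linear_stokes(1.0, 0.5, 0.0, 0.5)
print(round(dolp, 3), round(aop, 3))  # → 1.0 0.0
```

In the neural-imaging application, small activity-driven fluctuations in the degree of linear polarization computed this way are the signal of interest.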
  • Talk: Fall 2013 GRASP Seminar: Byron Stanley, Massachusetts Institute of Technology - Lincoln Labs, "Enabling Robust AGV Localization in Adverse Conditions"
    Date: Friday, December 6, 2013 - 11am to 12pm
    Presenters: Byron Stanley
    Few, if any, autonomous ground vehicles (AGVs) navigate successfully in adverse conditions, such as snow or GPS denied areas. A fundamental limitation is that they are using optical sensors, such as LIDAR or imagers, to fuse with GPS/INS solutions to localize themselves. When the optical surfaces become distorted or obscured, such as with snow, dust, or heavy rain, there is no robust way to localize the vehicle to the required accuracy. GPS/INS solutions, which are in themselves insufficient to maintain a vehicle within a lane for extended time periods, also fail around significant RF noise or jamming, tall buildings, trees, and other blocking or multipath scenarios. This talk presents a new MIT Lincoln Laboratory developed mode of vehicle localization that has low sensitivity to the failure modes of LIDAR, camera, and GPS/INS sensors. We have demonstrated that a uniquely designed Localizing Ground Penetrating Radar (LGPR) array can map the relatively static area below the road surface and use that map as a reference, in previously mapped areas, to localize an autonomous vehicle at over 60Hz to an accuracy of approximately 2 cm rms. Implications for robust autonomous ground vehicle localization and utility to other industries will be discussed.