# Research

## Current Projects

The Autonomous Aerial Vehicles research project focuses on autonomous navigation of unmanned air vehicles. The challenge is to design systems that exhibit goal-driven behavior while sensing and reacting to a changing environment. This project is a collaboration between students and faculty from the University of Pennsylvania and industry experts from Dragonfly Pictures, Inc.

Project Homepage: http://reu.grasp.upenn.edu/

The GRASP Laboratory at the University of Pennsylvania is proud to be operating a Research Experience for Undergraduates (REU) Site entitled Perception, Planning, Mobility, and Interaction for Next Generation Robotics. This REU Site opened in 2012 and is funded by the National Science Foundation (NSF).

Please see the GRASP REU Site's website for all details on this program.

Haptography, like photography in the visual domain, enables an individual to quickly record the haptic feel of a real object and reproduce it later for others to interact with in a variety of contexts. Establishing the approach of haptography would let doctors and dentists create haptic records of medical afflictions, such as a decayed tooth surface, to assist in diagnosis and patient health tracking; improve the realism and consequent training efficacy of haptic surgical simulators and other computer-based education tools; allow a wide range of people, such as museum goers and online shoppers, to touch realistic virtual copies of valuable items; facilitate a haptographic approach to low-bandwidth and time-delayed teleoperation, as found in space exploration; and enable new insights into human and robot touch capabilities.
The primary hypothesis of this research is that the feel of tool-mediated contact with real and virtual objects is directly governed by the high-frequency accelerations that occur during the interaction, as opposed to the low-frequency impedance of the contact. Building on our knowledge of the human haptic sensory system, our approach will use measurement-based mathematical modeling to derive perceptually relevant haptic surface models and dynamically robust haptic display paradigms, which will be tested via both experimental validation and human-subject studies.
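As a minimal illustration of this hypothesis, the sketch below high-pass filters a synthetic accelerometer trace to separate the high-frequency contact transients (which, per the hypothesis, govern the feel) from the low-frequency arm motion. The filter, signal parameters, and function names are illustrative assumptions, not the project's actual processing pipeline.

```python
import numpy as np

def high_pass(signal, fs, cutoff_hz):
    """First-order high-pass filter (illustrative): isolates the
    high-frequency accelerations hypothesized to govern contact feel."""
    dt = 1.0 / fs
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    alpha = rc / (rc + dt)
    out = np.zeros_like(signal)
    for i in range(1, len(signal)):
        out[i] = alpha * (out[i - 1] + signal[i] - signal[i - 1])
    return out

fs = 1000.0                       # assumed 1 kHz accelerometer sampling
t = np.arange(0, 1.0, 1.0 / fs)
# Synthetic trace: slow 2 Hz arm motion plus a 200 Hz contact transient
slow = np.sin(2 * np.pi * 2 * t)
transient = 0.5 * np.sin(2 * np.pi * 200 * t) * (t > 0.5)
filtered = high_pass(slow + transient, fs, cutoff_hz=20.0)
```

After filtering, the slow motion component is strongly attenuated while the 200 Hz transient passes nearly unchanged, mirroring the claim that the perceptually relevant content lives in the high-frequency band.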
Publications:

W. McMahan and K. J. Kuchenbecker. Haptic Display of Realistic Tool Contact Via Dynamically Compensated Control of a Dedicated Actuator. In Proceedings, IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3171-3177, October 2009.

K. J. Kuchenbecker, J. Romano, and W. McMahan. Haptography: Capturing and Recreating the Rich Feel of Real Surfaces (Invited Paper). In Proceedings, International Symposium on Robotics Research, August 2009.

More than 780,000 Americans suffer a stroke each year, and approximately 80% of these individuals survive and require rehabilitation to regain motor functionality, though the optimal treatment method is not yet known. This project aims to create a new low-cost rehabilitation system that measures the user's arm movements in real time and uses a combination of graphical and tactile feedback to guide him or her through a set of motions chosen by the therapist. He or she views the posture or motion to master on a screen and attempts to move his or her body to match. The movements of all the body segments are tracked through a motion capture system, displayed on the screen, and compared with the target body configuration in real time. When he or she deviates more than a small amount from this target, tactors on the associated limb segment provide feedback, helping the user know how to translate or rotate that part of his or her body toward the correct configuration.
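The per-segment guidance logic described above (fire a tactor only when a tracked segment deviates from the target pose by more than a small amount, signed so the cue indicates the corrective direction) can be sketched as follows. The function name, units, and cue labels are hypothetical, not the published system's API.

```python
import numpy as np

def tactor_commands(current, target, threshold):
    """Illustrative sketch: compare each tracked joint angle (degrees)
    against the therapist-chosen target pose and return a directional
    cue for every segment whose error exceeds the threshold."""
    error = np.asarray(current, dtype=float) - np.asarray(target, dtype=float)
    return {i: ("rotate_negative" if e > 0 else "rotate_positive")
            for i, e in enumerate(error) if abs(e) > threshold}

# Example: shoulder, elbow, wrist angles; only the elbow (index 1)
# deviates by more than 5 degrees, so only its tactor fires.
cues = tactor_commands([92.0, 45.0, 10.0], [90.0, 60.0, 10.5], threshold=5.0)
```

Keeping the comparison per segment matches the system description: feedback is localized to the limb segment that needs correcting rather than delivered as a single global error signal.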

Publications:
P. Kapur, S. Premakumar, S. A. Jax, L. J. Buxbaum, A. M. Dawson, and K. J. Kuchenbecker. Vibrotactile feedback system for intuitive upper-limb rehabilitation. In Proceedings, IEEE World Haptics Conference, pages 621–622, March 2009.

P. Kapur, M. Jensen, S. A. Jax, L. J. Buxbaum, and K. J. Kuchenbecker. Spatially Distributed Tactile Feedback for Kinesthetic Motion Guidance. In Proceedings, IEEE Haptics Symposium, March 2010.

In this project we research algorithms for reasoning efficiently in the large information spaces generated by the suites of rich sensors with which single- and multi-agent robotic systems are typically equipped nowadays. Examples of problems studied under this project include scalable planning under uncertainty, high-dimensional planning, multi-agent planning, ways to compress information spaces, and making predictions based on heterogeneous information, such as tracking multiple people in highly cluttered spaces.

This project is concerned with developing high-dimensional motion planners that can control mobile manipulation robotic systems. The challenge is to develop planners that run in real time while also providing theoretical guarantees on performance, such as completeness. Example problems include fully autonomous door opening and mobile manipulation of objects in cluttered spaces. This project is in collaboration with the company Willow Garage.

Control and decision-making for independent legged robotic agents. RoboCup is an international robotics competition that draws teams from all over the world to build and program robots that play soccer. The overarching aim of the competition is to field, by the mid-21st century, a team of eleven autonomous robots that can beat the human soccer world champions.

RHex is a biologically inspired hexapedal robot invented and first characterized at the dawn of the century as part of a large DARPA-funded consortium. A variety of RHex platforms have been developed since that time, and our lab has been particularly active in developing new versions for studying biologically inspired locomotion, gait control, and sensor-based navigation, as well as for developing novel courses and other educational materials.

Publications:

Gait Transitions for Quasi-Static Hexapedal Locomotion on Level Ground - G. C. Haynes, F. R. Cohen, and D. E. Koditschek, International Symposium on Robotics Research, August 2009.

Rapid Pole Climbing with a Quadrupedal Robot - G. C. Haynes, A. Khripin, G. Lynch, J. Amory, A. Saunders, A. A. Rizzi, and D. E. Koditschek, IEEE International Conference on Robotics and Automation, May 2009.

HUNT is a collaboration of researchers from the University of Pennsylvania, Arizona State University, the University of California, Berkeley, the Georgia Institute of Technology, and the Office of Naval Research. The grand challenge for HUNT is to push the state of the art in complex, time-critical mission planning and execution for large numbers of heterogeneous vehicles collaborating with warfighters. Sophisticated cooperation among intelligent biological organisms, including humans, offers critical insight and solution templates for many hard engineering problems. To meet this challenge, we have assembled an interdisciplinary team of leading researchers who have pioneered work in artificial intelligence, vehicle control and robotics, cognitive psychology and human factors, biology, and political economics.

Micro Autonomous Systems Technologies (MAST) is a collaboration with the University of Maryland, the University of Michigan, BAE Systems, and the Army Research Laboratory. Our vision is to develop Autonomous Multifunctional Mobile Microsystems (Am3), a networked group of small vehicles and sensors operating in dynamic, resource-constrained, adversarial environments. While individual units may be specialized, Am3 will be multifunctional because of its heterogeneity, the ability of individual units to automatically reconfigure and adapt to the environment and to human commands, and its distributed intelligence. Am3 will need to operate with little or no direct human supervision, because groups like this will be very difficult, if not impossible, to manage or control efficiently by programming or by tele-operation. The deployment, monitoring, and tasking of such multifunctional groups will be challenging and will require the application of new, yet-to-be-developed methods of communication, control, computation, and sensing, specifically tailored to MAST applications.

Omnidirectional vision systems can provide panoramic alertness in surveillance, improve navigational capabilities, and produce panoramic images for multimedia.

This project is investigating and developing methods for the recovery of 3D underground structures from subsurface non-invasive measurements obtained with ground penetrating radar, magnetometry, and conductivity sensors. The results will not only provide hints for further excavation but also 3D models that can be studied as if they were already excavated. The three fundamental challenges investigated are the inverse problem of recovering the volumetric material distribution, the segmentation of the underground volumes, and the reconstruction of the surfaces that comprise interesting structures.

Tele-Immersion will enable users at geographically distributed sites to collaborate in real time in a shared environment as if they were in the same physical room. This new paradigm for human-computer interaction is the ultimate synthesis of networking and media technologies.

Publications:

X. Zabulis and K. Daniilidis. Multi-camera reconstruction based on surface normal estimation and best viewpoint selection. In 2nd International Symposium on 3D Data Processing, Visualization and Transmission, pages 733-740, 2004.

O. Naroditsky and K. Daniilidis. 3d scanning using spatiotemporal orientation. In Proceedings of the 17th International Conference on Pattern Recognition, volume 1, pages 5-9, 2004.

J. Mulligan, N. Kelshikar, X. Zampoulis, and K. Daniilidis. Stereo-based environment scanning for immersive telepresence. Submitted to IEEE Transactions on Circuits and Systems for Video Technology.

The Ben Franklin Racing Team's goal is to build fast, reliable, safe, autonomous vehicles that will revolutionize transportation systems in urban environments. We will leverage state-of-the-art advances in sensing, control theory, machine learning, automotive technology, and artificial intelligence to build robotic cars. The team will participate in the 2007 DARPA Urban Challenge.

Publications:

J. Bohren, T. Foote, J. Keller, A. Kushleyev, D. Lee, A. Stewart, P. Vernaza, J. Derenick, J. Spletzer, and B. Satterfield. Little Ben: The Ben Franklin Racing Team's entry in the 2007 DARPA Urban Challenge. Journal of Field Robotics, vol. 25, 2008.

Arvind Bhusnurmath and CJ Taylor have been working on approaches that recast many classic image matching problems, including stereopsis, motion estimation, image registration, and 3D volumetric matching, as convex optimization problems that can be solved effectively using the interior point method. More specifically, they construct piecewise-linear convex approximations of the original image matching functions and then reformulate the matching problems as linear programs. Importantly, in each case they are able to exploit the structure of the resulting linear program to develop efficient algorithms that allow them to solve optimization problems involving hundreds of thousands of variables more efficiently than standard codes such as TOMLAB and MOSEK.
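The core reformulation idea can be shown on a toy 1-D problem: an l1 data term plus an l1 smoothness term becomes a linear program once each absolute value is replaced by a slack variable with two inequality constraints. This is a minimal sketch of that standard trick using an off-the-shelf solver, not the authors' structure-exploiting algorithm or code.

```python
import numpy as np
from scipy.optimize import linprog

def l1_matching_lp(observations, lam):
    """Minimize sum_i |d_i - c_i| + lam * sum_i |d_{i+1} - d_i| as an LP.
    Variable layout: x = [d (n), u (n), v (n-1)], where u bounds the data
    residuals and v bounds the neighbor differences."""
    c_obs = np.asarray(observations, dtype=float)
    n = len(c_obs)
    nv = n + n + (n - 1)
    obj = np.concatenate([np.zeros(n), np.ones(n), lam * np.ones(n - 1)])
    A, b = [], []
    for i in range(n):            # |d_i - c_i| <= u_i  (two inequalities)
        row = np.zeros(nv); row[i] = 1; row[n + i] = -1
        A.append(row); b.append(c_obs[i])
        row = np.zeros(nv); row[i] = -1; row[n + i] = -1
        A.append(row); b.append(-c_obs[i])
    for i in range(n - 1):        # |d_{i+1} - d_i| <= v_i
        row = np.zeros(nv); row[i + 1] = 1; row[i] = -1; row[2 * n + i] = -1
        A.append(row); b.append(0.0)
        row = np.zeros(nv); row[i + 1] = -1; row[i] = 1; row[2 * n + i] = -1
        A.append(row); b.append(0.0)
    res = linprog(obj, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(None, None)] * nv, method="highs")
    return res.x[:n]

observed = [0.0, 0.0, 5.0, 5.0, 5.0]
d_hat = l1_matching_lp(observed, lam=0.4)
```

With the smoothness weight below the data weight, the solver keeps the step edge rather than blurring it, which is exactly the edge-preserving behavior that motivates l1 formulations in image matching. The authors' contribution lies in exploiting the banded structure of such LPs to scale far beyond what generic solvers handle.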

Publications:

Solving Image Registration Problems Using Interior Point Methods - European Conference on Computer Vision, October 2008: [ pdf ]

Graph Cuts via $\ell_1$ Norm Minimization - IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: 30, No: 10, Pgs: 1866-1871, October 2008: [ pdf ]

It is increasingly attractive to consider the deployment of smart camera networks. Such camera networks could be used to support a wide variety of applications including environmental modeling, 3D model construction and surveillance.

Publications:

Using Smart Cameras to Localize Self-Assembling Modular Robots - First ACM/IEEE International Conference on Distributed Smart Cameras, Pgs: 76-80, Sept 2007: [ pdf ]

Self Localizing Smart Camera Networks and Their Applications to 3D Modeling - ACM SenSys/First Workshop on Distributed Smart Cameras (DSC 06), October 2006: [ pdf ]

Faculty: George Pappas

The SUBTLE team brings together researchers with expertise in a wide range of disciplines: computational linguistics, including formal language theory, computational semantics and parsing; syntax, semantics and pragmatics within linguistic theory; probabilistic modeling and machine learning; robotics; and human–robot interaction (HRI). At the heart of our proposal is fundamental new research (but extending previous work by each of us) to develop methods for constructing a computationally tractable end-to-end system for a habitable subset of English, one that takes us from utterances all the way to the understanding of them, including both a formal representation of the implicit meaning of utterances and the generation of control programs for a robot platform, here an iRobot ATRV-JR. In parallel, we will also develop a virtual simulation of the USAR environment to enable inexpensive large-scale corpus collection to proceed during many stages of system development.

Faculty: Vijay Kumar

This project is an effort at the GRASP Laboratory to develop a new technology in the form of a smart wheelchair. This device is equipped with a virtual interface and on-board cameras that enable the subject to navigate on the ground by interacting with the virtual system interface or use one of the built-in control algorithms.

Project Homepage: http://modlabupenn.org/

The Modular Robotics Lab (ModLab) is a subgroup of the GRASP Lab and the Mechanical Engineering and Applied Mechanics Department at the University of Pennsylvania. A modular robot is a versatile system consisting of many simple modules that can change their configuration to suit a given task. These systems are inherently robust due to their redundancy, adaptability, and ability to self-repair. While originally focused on continuing research in the field of modular robotics, recent work at the lab has expanded to include micro/nano air vehicles and tunable stiffness for legged robot locomotion. The ModLab comprises undergraduate and graduate students from multiple disciplines, including mechanical, electrical, and computer systems engineering.

Project Homepage: http://www.swarms.org/

The SWARMS project brings together experts in artificial intelligence, control theory, robotics, systems engineering and biology with the goal of understanding swarming behaviors in nature and applications of biologically-inspired models of swarm behaviors to large networked groups of autonomous vehicles. Our main goal is to develop a framework and methodology for the analysis of swarming behavior in biology and the synthesis of bio-inspired swarming behavior for engineered systems.

HURT is a multi-vehicle controller that coordinates and collaboratively plans urban RSTA (reconnaissance, surveillance, and target acquisition) missions for autonomous vehicles. It implements augmented autonomy for teams of arbitrary vehicle platforms.

The goal of the research is to develop a framework and the support tools for the deployment of multiple autonomous robots in an unstructured and unknown environment with applications to reconnaissance, surveillance, target acquisition, and the removal of explosive ordnance. The current state-of-the-art in control software allows for supervised autonomy, a paradigm in which a human user can command and control one robot using teleoperation and close supervisory control. The objective here is to develop the software framework and tools for a new generation of autonomous robots.

SToMP is a 4-year, \$7.98M grant from DARPA's Defense Sciences Office. The goal is to create and utilize mathematical innovations to deduce global structure from local information in distributed and coordinated sensing platforms. The tools used come mostly from topology and geometry. Application domains include sensor networks, multi-agent robot coordination, and pursuit-evasion scenarios.

Faculty: Jianbo Shi

We are developing computer algorithms to recognize humans at multiple levels of abstraction: from basic body-limb tracking, to human identification, to gesture recognition, to activity inference. The ultimate goal is to develop computational algorithms to understand human behavior in video.

The rapid growth in size of storage devices allows us to store hours, days, or even months of video data. Watching and analyzing videos of such length is no longer feasible. In order to summarize or index videos (for search purposes), we need to develop algorithms that detect and classify events happening in the video without human supervision. To identify and describe various types of events, we seek important features and ways of extracting/learning them from the video data.

Faculty: Vijay Kumar

The DaVinci project brings together mathematicians and engineers from the Universities of Iowa, Maryland, and Pennsylvania and Rensselaer Polytechnic Institute to address the urgent need for a thorough understanding of the mathematics of engineering systems that can be modeled by Differential Algebraic Inequalities and Differential Complementarity Problems. The project will open a new chapter in applied mathematics in which classical differential equation theory is merged with contemporary mathematical programming methods. The deliverables of our research are a set of broadly applicable mathematical theories, algorithms, and computational tools that will have a direct impact on an array of engineering and scientific disciplines.