In this talk, I will give an overview of our recent work on gesture-based haptic guidance for physical human-robot interaction (pHRI), applied to virtual minimally invasive surgery (MIS) training. I will present an approach in which the knowledge and experience of experts are modeled and used to correct the unpredictable motions of novice trainees. Two statistical models, the hidden Markov model (HMM) and the hidden conditional random field (HCRF), are used to train gesture models for a virtual MIS-related task. These models also support automatic gesture segmentation and recognition, as well as the generation of guidance forces. The forces are computed adaptively in real time based on the gestural similarity between the user's motions and the gesture models.
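To make the idea of similarity-driven guidance concrete, here is a minimal sketch of one plausible scheme, not the authors' actual method: a small discrete-observation HMM stands in for an expert gesture model, the scaled forward algorithm scores how well a trainee's motion matches it, and a corrective force toward a reference pose is scaled up as the match degrades. The model parameters, the similarity-to-gain mapping, and the helper names (`hmm_loglik`, `guidance_force`) are all illustrative assumptions.

```python
import numpy as np

# Hypothetical expert gesture model: a 2-state, 2-symbol HMM
# (transition matrix A, emission matrix B, initial distribution pi).
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([0.6, 0.4])

def hmm_loglik(obs):
    """Scaled forward algorithm: log P(obs | expert model)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    alpha /= c
    loglik = np.log(c)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        alpha /= c
        loglik += np.log(c)
    return loglik

def guidance_force(user_pos, expert_pos, obs, k_max=5.0):
    """Guidance force that grows as the trainee's gesture
    diverges from the expert model (an assumed mapping)."""
    # per-sample average log-likelihood as a gestural-similarity score in (0, 1]
    sim = np.exp(hmm_loglik(obs) / len(obs))
    gain = k_max * (1.0 - sim)  # less similar -> stronger corrective force
    return gain * (np.asarray(expert_pos) - np.asarray(user_pos))
```

In this sketch the force always points from the trainee's current pose toward the expert reference, and only its magnitude is modulated by gestural similarity; a real system would also have to bound the gain for stability of the haptic loop.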