From factories to households, we envision a future where robots work safely and efficiently alongside humans. For robots to be truly adopted in such dynamic environments, we must i) minimize the human effort required to communicate and transfer tasks to robots; ii) endow robots with the capability to adapt to changes in the environment, task objectives, and human intentions; and iii) ensure safety for both the human and the robot. Combining these objectives is challenging, however, as a single optimal solution can be intractable or even infeasible due to problem complexity and conflicting goals.

In my research, I seek to unify robot learning and control strategies to provide safe and fluid physical human-robot interaction (pHRI) while theoretically guaranteeing task success and stability. To achieve this, I devise techniques that step over traditional disciplinary boundaries, seamlessly blending concepts from control theory, robotics, and machine learning.

In this talk, I will present contributions that leverage Bayesian non-parametrics together with dynamical system (DS) theory to solve challenging open problems in the Learning from Demonstration (LfD) and pHRI domains. By formulating and learning motion policies as DS with convergence guarantees, a single motion policy (or sequence thereof) can be used to solve a myriad of robotics problems. I will present novel DS formulations and efficient learning schemes capable of executing i) continuous complex motions, such as pick-and-place and trajectory-following tasks; ii) sequential household manipulation tasks, such as rolling dough or peeling vegetables; and iii) more dynamic scenarios, such as object hand-overs from humans and catching objects in flight. Finally, I will show how these techniques scale to more complex scenarios and domains, such as navigation and co-manipulation with humanoid robots.