Autonomous robot systems have to make perceptual and control decisions at every moment, and have to learn and adapt to improve their performance. High-dimensional continuous state-action spaces still pose significant scaling problems for learning algorithms seeking (approximately) optimal solutions, and appropriate task descriptions or cost functions require a large amount of human guidance. In order to address autonomous skillful movement generation in complex robot and task scenarios, we have been working on a variety of subproblems that facilitate robust task achievement. Among these topics are general representations of movement in the form of movement primitives, trajectory-based reinforcement learning with path integral methods, and inverse reinforcement learning to extract the “intent” of observed behavior. However, this action-centric view of skill acquisition needs to be extended with a stronger perceptual component, since ultimately the entire perception-action learning loop, rather than isolated components of this loop, is the key element to address. In some tentative initial research, we have been exploring Associative Skill Memories, i.e., the simple idea of memorizing all sensory events and their statistics together with each movement skill. This concept opens a wide spectrum of predictive, corrective, and switching behaviors in motor skills, and may provide an interesting foundation for automatically generating the graphs underlying complex sequential motor skills.
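To make the two central ideas concrete, the sketch below combines a minimal one-dimensional dynamic movement primitive (one common movement-primitive representation) with a toy associative skill memory that records per-timestep sensor statistics across repeated executions and flags deviations. All function names, parameter values, and the anomaly rule are illustrative assumptions for exposition, not the authors' actual implementation.

```python
import numpy as np


def dmp_rollout(g, y0, w, n_steps=100, alpha=25.0, beta=6.25, tau=1.0):
    """Integrate a 1-D dynamic movement primitive (assumed standard form):
    tau*dv = alpha*(beta*(g - y) - v) + forcing, driven by a decaying phase x."""
    dt = 1.0 / n_steps
    y, v, x = y0, 0.0, 1.0                 # position, velocity, phase
    centers = np.linspace(0.0, 1.0, len(w))
    traj = []
    for _ in range(n_steps):
        psi = np.exp(-50.0 * (x - centers) ** 2)          # RBF features on phase
        f = x * (g - y0) * (psi @ w) / (psi.sum() + 1e-10)  # learned forcing term
        dv = (alpha * (beta * (g - y) - v) + f) / tau
        v += dv * dt
        y += v * dt / tau
        x += -2.0 * x * dt / tau                          # canonical-system decay
        traj.append(y)
    return np.array(traj)


class SkillMemory:
    """Toy associative skill memory: store sensor traces aligned with a skill,
    summarize them by per-timestep mean/std, and flag large deviations."""

    def __init__(self):
        self.traces = []

    def record(self, sensor_trace):
        self.traces.append(np.asarray(sensor_trace))

    def statistics(self):
        data = np.stack(self.traces)
        return data.mean(axis=0), data.std(axis=0)

    def anomaly(self, trace, n_std=3.0):
        mu, sd = self.statistics()
        return np.abs(np.asarray(trace) - mu) > n_std * sd + 1e-9


# Record noisy "force sensor" traces over 20 executions of the same skill,
# then probe with an execution containing a simulated unexpected contact.
rng = np.random.default_rng(0)
skill = dmp_rollout(g=1.0, y0=0.0, w=np.zeros(10))
mem = SkillMemory()
for _ in range(20):
    mem.record(skill + 0.01 * rng.standard_normal(skill.shape))
probe = skill.copy()
probe[60:] += 0.5                  # unexpected sensory event in the second half
flags = mem.anomaly(probe)         # True where the probe leaves learned statistics
```

A memory like this is what makes corrective and switching behaviors possible: a flagged deviation at a known phase of the skill can trigger a correction or a transition to another node in a sequential skill graph.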