Nick Malone, Aleksandra Faust, Brandon Rohrer, John Wood, Lydia Tapia, "Efficient Motion-based Task Learning," Accepted to Robot Motion Planning Workshop, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Portugal, October 2013.

Abstract— Generating motions for robot arms in real-world complex tasks requires a combination of approaches to cope with the task structure, environmental noise, and hardware imperfections. In this paper we present an efficient framework for adaptive motion task learning on real hardware that consists of task transfer, a probabilistic roadmap (PRM), and an online reinforcement learning algorithm. Online refers to the agent receiving feedback on each decision immediately after it is made, rather than learning from a complete training set. The task transfer jump-starts training on the hardware with knowledge learned in simulation. To achieve faster training, we integrate a PRM with the learning agent. For motion-based task learning, we use a reinforcement learning algorithm loosely based on human cognition. We demonstrate the framework by applying it to two pointing tasks on a 7-degree-of-freedom Barrett Whole Arm Manipulator (WAM) robot. The first task has a stationary target and illustrates the ability of the framework to quickly adapt and compensate for hardware noise. The second task goes a step further and introduces a non-stationary target, demonstrating the framework's ability to adapt quickly to a new environment and a new task.
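
The sketch below is illustrative only: it shows a generic online, per-step value update over a discrete set of roadmap nodes, which is the sense of "online" used in the abstract. The node set, reward function, and tabular Q-style update are assumptions introduced here for illustration; they are not the cognition-inspired learning algorithm described in the paper.

    # Minimal sketch of online learning over hypothetical PRM nodes.
    # Everything here (node indices, reward, epsilon-greedy Q-update) is assumed,
    # not taken from the paper's method.
    import random

    NODES = range(10)            # hypothetical PRM node indices
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

    # Q[(node, next_node)] -> estimated value of moving along that roadmap edge
    Q = {(s, a): 0.0 for s in NODES for a in NODES}

    def reward(node, target=7):
        """Hypothetical reward: 1 when the chosen node is the target."""
        return 1.0 if node == target else 0.0

    def choose(state):
        """Epsilon-greedy choice of the next roadmap node."""
        if random.random() < EPS:
            return random.choice(list(NODES))
        return max(NODES, key=lambda a: Q[(state, a)])

    state = 0
    for step in range(1000):
        action = choose(state)             # make a decision
        r = reward(action)                 # feedback arrives immediately...
        best_next = max(Q[(action, a)] for a in NODES)
        Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
        state = action                     # ...and the next decision uses the updated estimate

The key contrast with batch learning is that each decision's outcome updates the value estimates before the next decision is made, so no complete training set is required up front.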