Description
Environments in which robots assist humans both in production tasks and in everyday tasks will demand advanced capabilities of these robotic systems for cooperating with humans and other robots. To achieve this, robots must be able to navigate and manipulate safely in dynamic environments. It is therefore essential that a robot can accurately determine its pose (i.e., its position and orientation) in the environment based on optical sensors. However, both the map of the robot's surroundings and its sensors can contain inaccuracies, which can have problematic consequences. The work presented here addresses this issue by introducing several novel computer vision-based methods. This gives rise to a set of challenges that are tackled in this book: First, how accurately can a robot estimate its pose in a known environment, i.e., assuming that a precise map of its surroundings is available? Second, how can a model of the robot's surroundings be created if no map is known a priori? Third, how can this be done if neither a priori environment models nor models of the robot's internal state are available? The introduced methods are experimentally evaluated throughout this book on different mobile robotic systems, ranging from industrial manipulators to humanoid robots. Going beyond traditional robotics, this work examines how the presented methods can also be applied to human-machine interaction. It shows that, by solely visually observing the movement of the muscles in the human forearm and by employing machine learning methods, the corresponding hand gestures can be determined, opening entirely new possibilities for the control of robotic hands and hand prostheses.