In applications involving the deployment of robots with locomotion capabilities, e.g., humanoids, drones, and automated guided vehicles (AGVs), it is a fundamental requirement that the robot can perceive its environment and decide whether it is navigable and, more generally, what affordances it offers for navigation. Nowadays, almost every mobile robot is equipped with sensors to observe its workspace and gather rich information. However, when robots are starved of computational and sensing resources, simple passive perception approaches, such as building a complete 3D map before acting, fall apart. To this end, active vision, which builds algorithmic engines around the perception-action synergy loop, offers a promising solution. Active vision enables the robot to actively aim its sensor toward several viewpoints according to a specific scanning strategy. A vital issue in active vision systems is therefore that the agent must decide “where to look” following a plan; that is, the vision sensor must be purposefully configured and placed at several positions, through the robot’s actions, to observe a target. These intentional acts give purposive perception planning its active and purposeful character. In particular, four modes of activeness have been formally identified: moving the agent itself, employing an active sensor, moving a part of the agent’s body, and hallucinating active movements.
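To make the perception-action loop concrete, the following is a minimal, hypothetical sketch of a “where to look” strategy: the agent maintains a probabilistic belief over its workspace, scores candidate viewpoints by the uncertainty they would resolve, and moves the sensor to the most informative one. The belief grid, toy sensor model, and all function names here are illustrative assumptions, not taken from any specific active vision system.

```python
import numpy as np

# Hypothetical sketch of an active-vision perception-action loop:
# plan "where to look", act (move the sensor), observe, update belief.

rng = np.random.default_rng(0)

# Belief over a 10x10 occupancy grid: P(cell occupied); 0.5 = unknown.
belief = np.full((10, 10), 0.5)

def entropy(p):
    """Binary entropy (bits) of occupancy probabilities."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def visible_cells(viewpoint, size=3):
    """Toy sensor model: a viewpoint reveals a size x size patch."""
    r, c = viewpoint
    return np.s_[r:r + size, c:c + size]

def expected_gain(viewpoint):
    """Score a candidate viewpoint by the total uncertainty it observes."""
    return entropy(belief[visible_cells(viewpoint)]).sum()

def observe(viewpoint):
    """Simulated measurement: observed cells become (nearly) certain."""
    patch = visible_cells(viewpoint)
    belief[patch] = rng.choice([0.05, 0.95], size=belief[patch].shape)

# Candidate viewpoints form the scanning strategy's search space.
candidates = [(r, c) for r in range(0, 8, 2) for c in range(0, 8, 2)]
for step in range(5):
    best = max(candidates, key=expected_gain)  # decide "where to look"
    observe(best)                              # aim the sensor and measure
    print(f"step {step}: viewpoint {best}, "
          f"remaining entropy {entropy(belief).sum():.1f} bits")
```

Greedily maximizing expected information gain, as above, is only one possible scanning strategy; the point of the sketch is the loop structure itself, in which each sensing action is chosen purposefully from the current belief rather than following a fixed, passive scan.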