HySeLAM (Hybrid Semantic Localization and Mapping): About


The author of these pages is Filipe Baptista Neves Dos Santos.


Research Institute / Group:

Faculdade de Engenharia da Universidade do Porto (FEUP) – Groundsys – INESCPorto

Research Laboratory / Room:

FEUP I-109

Name of the Supervisors:

Prof. Paulo Gomes da Costa and Prof. António Paulo Moreira

Start date/Last update:


Brief description of the research work:

When moving towards greater autonomy for a group of robots, and when trying to have them solve tasks such as "Robot, go to room A and pick up the box placed on the brown table", most localization and mapping approaches are not in tune with the higher layers of the robots' planning system. It is proposed here that a hybrid (metric and semantic) localization and mapping layer (HLML) should: estimate the robot's position and orientation in the navigation frame; estimate the free space around it; represent in a semantic map the knowledge about the physical world acquired during its movement; and acquire/share map and object knowledge with other entities.

For robustness and versatility, the HLML must work in low-structured or non-structured environments and, like humans and animals, should be able to gather a 3D view of the surrounding world by resorting to stereo images, IR sensors and other sources (INS, lasers). Due to the intended application, there are real-time concerns: the free space around the robot (obstacles) and the positioning data must be available within a short time frame.

From top to bottom, there is the need to:
- define a generic and adaptive architecture for the HLML;
- define the semantic language and the specification for classification;
- evaluate, model and optimize existing image processing and data fusion algorithms to process laser sensors and stereo and multiple camera images, in both the visible and IR spectrum, in order to track objects and artificial and natural landmarks.
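To make the hybrid (metric + semantic) map idea concrete, the sketch below shows one possible way such a layer could link symbolic object descriptions to metric poses, so a command like "the brown table in room A" resolves to a navigation-frame position. All class and method names here are hypothetical illustrations, not part of the actual HySeLAM design.

```python
# Illustrative sketch (all names hypothetical): a minimal hybrid map entry
# pairing a semantic label and attributes with a metric pose, so symbolic
# queries from the planning layer resolve to navigation-frame coordinates.
from dataclasses import dataclass, field


@dataclass
class Pose2D:
    x: float       # metres, navigation frame
    y: float       # metres, navigation frame
    theta: float   # heading in radians


@dataclass
class SemanticObject:
    label: str         # e.g. "box", "table"
    attributes: dict   # e.g. {"colour": "brown"}
    pose: Pose2D       # metric anchor of the object


@dataclass
class HybridMap:
    # place name (e.g. "room A") -> objects known to be in that place
    places: dict = field(default_factory=dict)

    def add(self, place: str, obj: SemanticObject) -> None:
        self.places.setdefault(place, []).append(obj)

    def find(self, place: str, label: str, **attrs) -> list:
        """Resolve a symbolic query, e.g. the brown table in room A,
        to the matching objects and their metric poses."""
        return [
            o for o in self.places.get(place, [])
            if o.label == label
            and all(o.attributes.get(k) == v for k, v in attrs.items())
        ]


# Usage: register an object, then resolve a planner-style query.
m = HybridMap()
m.add("room A", SemanticObject("table", {"colour": "brown"},
                               Pose2D(2.0, 3.5, 0.0)))
hits = m.find("room A", "table", colour="brown")
```

The point of the sketch is the coupling: the semantic entry carries a metric pose, so a task planner can reason over labels and places while the navigation layer consumes coordinates.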
