Depth-Visual-Inertial Dataset for 3D Indoor Reconstruction


Figure: Top and middle: ground-truth map. Bottom left: sample visualization of some of the RGB and depth sensor streams. Bottom right: sensor platform.

About

This site presents the dataset collected for our ongoing research on portable 3D mapping systems based on depth-visual-inertial cameras. Targeting resource- and time-constrained mapping operations in adverse conditions (e.g. search-and-rescue), our sequences include aggressive motion, people moving in the field of view of the sensors, and abrupt lighting changes.

Our sensor platform features an extensive set of sensors:

  • 3D LiDAR
  • Active stereo camera
  • Passive stereo camera
  • Time-of-flight camera
  • Multiple Inertial Measurement Units (IMUs)

With this sensor suite, our dataset allows the comparison of various depth sensing modalities for the task of 3D reconstruction in a challenging, large-scale scenario. The data was captured in a bunker-like location resembling a typical search-and-rescue operation site: a large, 1600 m² environment composed of narrow corridors, rooms of various sizes and shapes, and diverse objects.

Authors

  • Charles Hamesse1, 2 (corresponding author - charles.hamesse@mil.be)
  • Michiel Vlaminck2
  • Hiep Luong2
  • Rob Haelterman1

Special thanks to Alain Vanhove, Mario Malizia, and Timothée Fréville.

  1. Royal Military Academy of Belgium
  2. Ghent University

Citation

This work is currently under review. We will update this section with the citation once it is available.