Description

In recent years, environment mapping has garnered significant interest in both industrial and academic settings as a viable means of generating comprehensive virtual models of the physical world. These maps are created using simultaneous localization and mapping (SLAM) algorithms that combine depth contours with visual imaging information to create rich, layered point clouds. Given recent advances in virtual reality technology, these point clouds can be imported into the Oculus Rift or a similar headset for virtual reality viewing. This project deals with the robotic implementation of RGB-D SLAM algorithms on mobile ground robots to generate complete point clouds that can be processed offline and imported into virtual reality engines for viewing in the Oculus Rift. A ground robot equipped with a Kinect sensor collects RGB-D data of the surrounding environment, from which point cloud maps are built using SLAM software. These point clouds are then exported as object (OBJ) or polygon (PLY) files for post-processing in software such as MeshLab or Unity. The point clouds generated from the SLAM software can be viewed in the Oculus Rift as-is; however, these maps are mostly empty space and can be further optimized for virtual viewing. Additional techniques such as meshing and textured meshing were applied to the raw point cloud maps and tested on the Oculus Rift. The aim of this project was to increase the potential applications for virtual reality by taking a robotic mapping approach to virtual reality environment development, and this objective was achieved. The following report details the processes used in developing a remotely controlled robotic platform that can scan its environment and generate viable point cloud maps, which are then processed offline and ported into virtual reality software for viewing through the Oculus Rift.
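
The central data product described above is a colored point cloud assembled from registered depth and color frames. As a rough illustration of that step only (not the SLAM software used in the project), the sketch below back-projects a single RGB-D frame through a pinhole camera model and writes the result as an ASCII PLY file of the kind that MeshLab or Unity importers can read; the camera intrinsics, array shapes, and file names are assumptions made for the example.

    # Minimal sketch: back-project one Kinect RGB-D frame into a colored
    # point cloud and save it as an ASCII PLY file. Intrinsics and file
    # names are illustrative placeholders, not values from the project.
    import numpy as np

    # Approximate Kinect v1 pinhole intrinsics (pixels); a real pipeline
    # would use calibrated values.
    FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

    def rgbd_to_point_cloud(depth_m, rgb):
        """depth_m: (H, W) depths in meters; rgb: (H, W, 3) uint8 colors."""
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        valid = depth_m > 0                   # drop pixels with no depth return
        z = depth_m[valid]
        x = (u[valid] - CX) * z / FX          # pinhole back-projection
        y = (v[valid] - CY) * z / FY
        return np.stack([x, y, z], axis=1), rgb[valid]

    def write_ply(path, points, colors):
        """Write an ASCII PLY file readable by MeshLab or Unity importers."""
        header = (
            "ply\nformat ascii 1.0\n"
            f"element vertex {len(points)}\n"
            "property float x\nproperty float y\nproperty float z\n"
            "property uchar red\nproperty uchar green\nproperty uchar blue\n"
            "end_header\n"
        )
        with open(path, "w") as f:
            f.write(header)
            for (x, y, z), (r, g, b) in zip(points, colors):
                f.write(f"{x} {y} {z} {int(r)} {int(g)} {int(b)}\n")

    if __name__ == "__main__":
        # Synthetic frame stands in for a captured Kinect image pair.
        depth = np.full((480, 640), 1.5, dtype=np.float32)
        color = np.full((480, 640, 3), 128, dtype=np.uint8)
        pts, cols = rgbd_to_point_cloud(depth, color)
        write_ply("frame.ply", pts, cols)

In a full pipeline, frames would be transformed by the SLAM-estimated camera poses before being merged into one map, and the merged cloud would then be meshed (for example, by Poisson surface reconstruction) before export to the virtual reality engine.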


Download restricted.
Restrictions Statement

Barrett Honors College theses and creative projects are restricted to ASU community members.


Details

Title
  • Robotic 3D Mapping for Virtual Reality Implementation
Contributors
Date Created
2017-05
Resource Type
  • Text