yiyuan.space

Immersive Spatial Perception Experience Through Interactive 3D Scanning

2019

This research is conducted at Obuchi Lab, the University of Tokyo. Collaborators: Alex Orsholits and Joe Li.
Visit archived.space for more information.


/3d-scan-1.gif

Based on an HTC VIVE hardware set and an RPLiDAR A1M8, we propose a new method of spatial perception: it filters out color and texture, presenting only distance information in the form of a monochrome point cloud.




For a visual spatial perception experience in which VR technology filters out unnecessary visual information, a depth-sensing device is needed. In our case, we choose the RPLiDAR A1M8. Compared to other depth-sensing devices such as structured-light scanners, a LiDAR produces a smaller amount of data, which makes real-time post-processing feasible. It is also relatively cost-effective.

As the data the LiDAR captures is limited to the plane it physically sits in, another dimension is needed to obtain 3D spatial data. In our case, we choose to mount the LiDAR on an HTC VIVE Controller, which provides relatively precise position tracking with the assistance of the HTC VIVE Base Stations, as well as an intuitive handle with interactive control buttons.


/3d-scan-2.jpg
/3d-scan-3.gif

To process the data from the LiDAR and the Controller, we run Python on a desktop computer to calculate the Cartesian coordinates of each point.

From SteamVR and OpenVR (the Python bindings for the Valve OpenVR SDK), we get the real-time Cartesian coordinates x_c, y_c and z_c of the Controller (with the base station considered as the origin), together with its Euler angles α (roll), β (pitch) and γ (yaw). At the same time, from RPLidar (the Python module for RPLidar rangefinder scanners), we get the real-time scan angle θ and its corresponding distance d.
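
A minimal sketch of how these two streams can be read together is shown below. The package names (openvr, i.e. pyopenvr, and rplidar) follow those above, but the serial port, the device-index lookup and the loop structure are illustrative assumptions rather than the exact code we ran.

import openvr
from rplidar import RPLidar

openvr.init(openvr.VRApplication_Other)
vr = openvr.VRSystem()

# Find the first tracked device that SteamVR reports as a controller.
controller_index = next(
    i for i in range(openvr.k_unMaxTrackedDeviceCount)
    if vr.getTrackedDeviceClass(i) == openvr.TrackedDeviceClass_Controller)

def controller_pose():
    # 3x4 pose matrix of the Controller: rotation in the left 3x3 block,
    # position (x_c, y_c, z_c) in the last column.
    poses = vr.getDeviceToAbsoluteTrackingPose(
        openvr.TrackingUniverseStanding, 0, openvr.k_unMaxTrackedDeviceCount)
    m = poses[controller_index].mDeviceToAbsoluteTracking
    return m, (m[0][3], m[1][3], m[2][3])

lidar = RPLidar('/dev/ttyUSB0')  # serial port is an assumption
for scan in lidar.iter_scans():
    pose_matrix, position = controller_pose()
    for quality, theta, d in scan:  # theta in degrees, d in millimetres
        pass  # combine (theta, d) with the Controller pose -- see below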


/3d-scan-4.jpg

Given the way the LiDAR is mounted onto the Controller, the horizontal offset is a = 86.93 mm and the vertical offset is b = 67.28 mm; the angle between the 2D LiDAR polar coordinate plane and the xy plane of the Controller's Cartesian coordinates is ϕ = 32.139°.

Together with the displacement and rotation of the Controller, the Cartesian coordinates of each scanned point in 3D space, the vector p' = (x, y, z), can be calculated through a quaternion rotation followed by a translation: p' = q p q^(-1) + v, where p is the point in the Controller's local frame, q is the quaternion describing the Controller's rotation, and v is the Controller's position.
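
A worked sketch of this per-point transform is given below. The offsets a and b and the tilt ϕ are those listed above, while the axis assignment of the mounting, the Euler-angle convention and the helper names are assumptions made for illustration rather than the exact code we ran.

import math
import numpy as np

A, B = 86.93, 67.28           # mounting offsets in mm (horizontal, vertical)
PHI = math.radians(32.139)    # tilt between the LiDAR scan plane and the Controller xy plane

def lidar_point_to_controller_frame(theta_deg, d):
    # (θ, d) in the LiDAR's polar plane -> point p in the Controller frame (mm).
    # Assumed axis assignment: the scan plane is tilted by ϕ about the
    # Controller's x axis, with offset a along x and b along z.
    t = math.radians(theta_deg)
    return np.array([d * math.cos(t) + A,
                     d * math.sin(t) * math.cos(PHI),
                     d * math.sin(t) * math.sin(PHI) + B])

def quat_from_euler(roll, pitch, yaw):
    # Unit quaternion q = (w, x, y, z) from the Controller's Euler angles,
    # assuming a yaw-pitch-roll rotation order.
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (cr*cp*cy + sr*sp*sy,
            sr*cp*cy - cr*sp*sy,
            cr*sp*cy + sr*cp*sy,
            cr*cp*sy - sr*sp*cy)

def rotate(q, p):
    # p' = q p q^(-1) for a unit quaternion q and a 3-vector p,
    # expanded so no explicit quaternion inverse is needed.
    w, qv = q[0], np.array(q[1:])
    t = 2.0 * np.cross(qv, p)
    return p + w * t + np.cross(qv, t)

def scanned_point_world(theta_deg, d, euler, v):
    # World coordinates p' of one scanned point, given the Controller's
    # Euler angles and position v: p' = q p q^(-1) + v.
    p = lidar_point_to_controller_frame(theta_deg, d)
    q = quat_from_euler(*euler)
    return rotate(q, p) + np.asarray(v)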


/3d-scan-5.jpg

For interaction, we use the existing trigger and touchpad on the Controller.


/3d-scan-6.jpg
/3d-scan-7.jpg
/3d-scan-8.jpg

We save the coordinates of each scanned point while the trigger is pushed. Gradually, this forms a point cloud of the scanned object or space. In most cases, the point cloud does not appear homogeneous.
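
The trigger-gated capture can be sketched as follows; the button constant comes from the OpenVR bindings, while controller_euler_and_position is a hypothetical helper wrapping the pose readout shown earlier and scanned_point_world is the sketch from above.

point_cloud = []

def trigger_pressed():
    # Returns True while the Controller's trigger button is held down.
    ok, state = vr.getControllerState(controller_index)
    return ok and bool((state.ulButtonPressed >> openvr.k_EButton_SteamVR_Trigger) & 1)

for scan in lidar.iter_scans():
    if not trigger_pressed():
        continue
    euler, v = controller_euler_and_position()  # hypothetical helper
    for quality, theta, d in scan:
        point_cloud.append(scanned_point_world(theta, d, euler, v))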


/3d-scan-9.gif
/3d-scan-10.jpg

To display the point cloud back to the user, we use Processing with the ViveP5 library. The program tracks the user's HTC VIVE Headset position and creates an immersive VR visual experience accordingly. As a result, the user sees the point cloud through the Headset, overlaid onto the position where the object or space actually is.


/3d-scan-11.jpg

To validate the feasibility of the proposed method and to clarify issues for real-world application, we set up a space in which a single user at a time, equipped with the full toolset, experiences the space without physically seeing it. Specifically, we set up a 3 x 3 m area with foam cubes and cardboard boxes as "obstacles", forming a path with one entrance and one exit.

To set a "challenge", we limit the total number of points displayed at a time: as new points are displayed, old points are replaced. With this limitation, no complete image of the space accumulates over time; users have to comprehend the space from short-term memory and re-scan if they forget.
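
One simple way to implement this replacement policy is a fixed-size buffer that drops the oldest points as new ones arrive; the cap below is an arbitrary placeholder, not the value used in our test.

from collections import deque

MAX_POINTS = 2000                       # placeholder cap on displayed points
point_cloud = deque(maxlen=MAX_POINTS)  # appending past the cap discards the oldest point

def add_points(new_points):
    point_cloud.extend(new_points)      # old points are replaced as new ones are displayed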


/3d-scan-12.jpg
/3d-scan-13.jpg

Among the 10 users invited to navigate the constructed route with the help of our immersive spatial perception toolset, 9 successfully exited the play area, in varying amounts of time. During these tests, users perceived the space in different ways according to their individual characteristics, ranging from spatial cognition ability and personality to body height and width.

The figure above depicts the paths of two different users as they navigate through the space.

The feedback provided by the test users gave us important insight into assessing the experience system. User 2 stated "it is very interesting to view the space in a new way". User 6, one of the fastest test users and one with prior VR experience, deemed the system "intuitive, but still harder than seeing normally with eyes". User 5, who did not complete the route, pointed out that the system is hard for first-time VR users to understand, but enjoyed it nonetheless.