Stray Toolkit

Stray Scanner

Stray Scanner is an iOS app for collecting RGB-D datasets. It can be downloaded from the App Store.

The recorded datasets contain:

  • color images
  • depth frames from the LiDAR sensor
  • depth confidence maps
  • camera position estimates for each frame
  • camera calibration matrix
  • IMU measurements

They can be converted into our scene data format with the stray dataset import command.

Exporting Data

There are two ways of exporting the data from the device. The first is to connect your phone to a computer with a Lightning cable. The other is through the iOS Files app.

Exporting Using Cable

To access data collected using Stray Scanner, connect your iPhone or iPad to your computer using a Lightning cable. On macOS, open Finder. Select your device from the sidebar. Click on the "Files" tab beneath your device description. Under "Stray Scanner", you should see one directory per dataset you have collected. Drag these to wherever you want to place them.

(Screenshot: accessing Stray Scanner data in Finder.) In this image, you can see the two datasets "ac1ed2228f" and "c26b6838a9". These are the folders you should drag to your desired destination.

On Windows, a similar process can be followed, but the device is accessed through iTunes.

Exporting Through the Files App

In the Files app, under "Browse > On My iPhone > Stray Scanner" you can see a folder for each recorded dataset. You can export a folder by moving it to your iCloud Drive or by sharing it with another app.

Data Specification

This document describes the data format recorded by the Stray Scanner iOS app. Note that it is slightly different from the dataset format; Stray Scanner datasets can be converted using the import command.

The collected datasets are each contained in a folder, named after a random hash, for example 71de12f9. A dataset folder has the following directory structure:

camera_matrix.csv
odometry.csv
imu.csv
depth/
  - 000000.png
  - 000001.png
  - ...
confidence/
  - 000000.png
  - 000001.png
  - ...
rgb.mp4

rgb.mp4 is an HEVC-encoded video containing the frames recorded by the iPhone's camera.
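Since the depth maps are stored one file per frame but the color stream is a video, you will usually want to unpack rgb.mp4 into individual images. A minimal sketch using OpenCV (the function names and the output layout are illustrative assumptions, not part of the app's tooling):

```python
import os

def frame_name(index):
    """Zero-padded file name matching the depth/ and confidence/ folders."""
    return f"{index:06d}.png"

def extract_frames(video_path, out_dir):
    """Decode rgb.mp4 into numbered PNGs that line up with the depth maps."""
    import cv2  # pip install opencv-python
    os.makedirs(out_dir, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    count = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video
            break
        cv2.imwrite(os.path.join(out_dir, frame_name(count)), frame)
        count += 1
    capture.release()
    return count
```

After running this, frame 000005.png in the output directory corresponds to depth/000005.png.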

The depth/ directory contains the depth maps, one 16-bit grayscale .png file per rgb frame. Each image is 192 pixels high and 256 pixels wide, and each value is the measured depth in millimeters at that pixel position. In OpenCV, these can be read with cv2.imread(depth_frame_path, -1).
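For example, reading a depth frame and converting it to meters might look like this (the float32 conversion is a convenience assumption, not something the format requires):

```python
import numpy as np

def depth_to_meters(depth_mm):
    """Convert 16-bit millimeter depth values to float32 meters."""
    return depth_mm.astype(np.float32) / 1000.0

def load_depth(depth_frame_path):
    """Read one 16-bit depth frame and return it in meters."""
    import cv2  # pip install opencv-python
    depth_mm = cv2.imread(depth_frame_path, -1)  # -1 preserves the 16-bit values
    return depth_to_meters(depth_mm)
```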

The confidence/ directory contains confidence maps corresponding to each depth map. They are grayscale png files encoding 192 x 256 element matrices. The values are 0, 1 or 2; a higher value means a higher confidence.
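A common use of the confidence maps is to discard low-confidence depth pixels before further processing. A sketch, where the threshold choice is up to you:

```python
import numpy as np

def filter_depth(depth_m, confidence, min_confidence=2):
    """Zero out depth values whose confidence is below min_confidence."""
    filtered = depth_m.copy()  # leave the input array untouched
    filtered[confidence < min_confidence] = 0.0
    return filtered
```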

The camera_matrix.csv file contains the 3 x 3 camera intrinsic matrix.
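With the intrinsics, a pixel with a known depth can be lifted to a 3D point in the camera frame. Note that the intrinsics are given for the color camera's resolution, so if you index into the 192 x 256 depth maps you likely need to rescale them first (an assumption worth verifying against your recording). A sketch of the standard pinhole back-projection:

```python
import numpy as np

def backproject(u, v, depth_m, K):
    """Lift pixel (u, v) with depth in meters to a 3D point in the camera frame."""
    fx, fy = K[0, 0], K[1, 1]  # focal lengths in pixels
    cx, cy = K[0, 2], K[1, 2]  # principal point
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])
```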

The odometry.csv file contains the camera pose for each frame. The first line is a header. The fields are:

Field      Meaning
timestamp  Timestamp in seconds
frame      Frame number this pose corresponds to, e.g. 000005
x          x coordinate in meters, relative to where the session was started
y          y coordinate in meters, relative to where the session was started
z          z coordinate in meters, relative to where the session was started
qx         x component of the quaternion representing the camera pose rotation
qy         y component of the quaternion representing the camera pose rotation
qz         z component of the quaternion representing the camera pose rotation
qw         w component of the quaternion representing the camera pose rotation
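These rows can be assembled into 4 x 4 transformation matrices, e.g. for trajectory visualization. A minimal sketch; the quaternion-to-matrix conversion below assumes the common Hamilton convention, which you should verify against your data:

```python
import csv
import numpy as np

def quat_to_rotation(qx, qy, qz, qw):
    """Convert a unit quaternion to a 3x3 rotation matrix."""
    return np.array([
        [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qz * qw), 2 * (qx * qz + qy * qw)],
        [2 * (qx * qy + qz * qw), 1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qx * qw)],
        [2 * (qx * qz - qy * qw), 2 * (qy * qz + qx * qw), 1 - 2 * (qx * qx + qy * qy)],
    ])

def load_poses(odometry_path):
    """Read odometry.csv into a list of (timestamp, frame, 4x4 pose matrix)."""
    poses = []
    with open(odometry_path) as f:
        reader = csv.reader(f)
        next(reader)  # skip the header line
        for row in reader:
            timestamp, frame = float(row[0]), row[1]
            x, y, z, qx, qy, qz, qw = map(float, row[2:9])
            T = np.eye(4)
            T[:3, :3] = quat_to_rotation(qx, qy, qz, qw)
            T[:3, 3] = [x, y, z]
            poses.append((timestamp, frame, T))
    return poses
```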

The imu.csv file contains timestamps, linear acceleration readings and angular velocity readings. The first line is a header. The fields are:

Field      Meaning
timestamp  Timestamp in seconds
a_x        Acceleration in m/s^2 along the x-axis
a_y        Acceleration in m/s^2 along the y-axis
a_z        Acceleration in m/s^2 along the z-axis
alpha_x    Angular velocity in rad/s around the x-axis
alpha_y    Angular velocity in rad/s around the y-axis
alpha_z    Angular velocity in rad/s around the z-axis
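The IMU stream can be loaded with plain NumPy, for example as input to gravity alignment or sensor fusion. A sketch; the column split assumes the field order listed above:

```python
import numpy as np

def load_imu(imu_csv):
    """Split imu.csv into timestamps, accelerations and angular velocities."""
    data = np.loadtxt(imu_csv, delimiter=",", skiprows=1, ndmin=2)
    timestamps = data[:, 0]          # seconds
    acceleration = data[:, 1:4]      # m/s^2, columns (a_x, a_y, a_z)
    angular_velocity = data[:, 4:7]  # rad/s, columns (alpha_x, alpha_y, alpha_z)
    return timestamps, acceleration, angular_velocity
```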