Install

The Stray Command Line Tool and Stray Studio can be installed using our install script. We currently support macOS and Linux based systems.

The script installs the tool and Studio into a folder called .stray in your home directory. Some commands are implemented as Docker containers (e.g. calibration, model and studio integrate), which means you need to have Docker installed and the daemon running.

To install Docker, follow the instructions here.

Other commands are implemented as Python scripts. These are installed into a Python environment that the installer downloads into the .stray directory.

To install the toolkit run this command in your shell:

curl --proto '=https' --tlsv1.2 -sSf https://stray-builds.ams3.digitaloceanspaces.com/cli/install.sh | bash

Then source your environment with source ~/.bashrc or source ~/.zshrc if you are using zsh.

Uninstall

If you want to uninstall the toolkit, simply delete the .stray directory with rm -rf ~/.stray.

Help

Visit our issue tracker for help and direct support.

Commands

stray dataset

The dataset subcommand is used to display and manipulate data. It can also be used to import data from the Stray Scanner app.

stray studio

The studio subcommand is used to create integrated scenes from datasets that can be annotated in the Studio visual interface.

stray model

The model subcommand is used to generate and train models that can be evaluated and used for different tasks.

stray calibration

The calibration subcommand is used to generate calibration targets and find intrinsic parameters of cameras.

Help

Visit our issue tracker for help and direct support.

Dataset

The dataset subcommand is used to import data from the Stray Scanner app to be used with the rest of the Stray toolkit.

Available commands

stray dataset import

Imports data from the Stray Scanner app into the scene format

Options

| name | default | choices | description |
| --- | --- | --- | --- |
| &lt;scenes&gt; | | | Paths to the raw scenes. The rgb.mp4 file needs to be present at minimum for scenes to be used in camera calibration; the depth directory also needs to exist for scenes to be integrated |
| --out, -o | | | Directory where to save the imported scenes |
| --every | 1 | | Skip frames |
| --width | 1920 | | Width of the imported images. Make sure to match the camera_intrinsics.json width in case --intrinsics is passed |
| --height | 1440 | | Height of the imported images. Make sure to match the camera_intrinsics.json height in case --intrinsics is passed |
| --intrinsics | None | | Path to a custom camera_intrinsics.json (for example, generated by the calibration command) to include in the imported scene instead of the parameters found in camera_matrix.csv (if present) |
| --help, -h | | | Show help |

stray dataset show

  • Displays the dataset optionally with labels of different types

Options

| name | default | choices | description |
| --- | --- | --- | --- |
| &lt;scenes&gt; | | | Paths to the Stray scene(s) to show |
| --bbox | False | (flag) | Render 2D bounding boxes based on the annotations.json file |
| --segmentation | False | (flag) | Render segmentation masks on the images (masks can be created with stray dataset bake) |
| --bbox-from-mask | False | (flag) | Determine the 2D bounding box from the segmentation mask (masks can be created with stray dataset bake) |
| --save | False | (flag) | Save the shown images to scene/labeled_examples |
| --rate, -r | 30 | | Frame rate |
| --help, -h | | | Show help |

stray dataset bake

  • Bake different assets (such as segmentation masks) into the dataset

Options

| name | default | choices | description |
| --- | --- | --- | --- |
| &lt;scenes&gt; | | | Paths to the scenes into which assets should be baked |
| --segmentation | False | (flag) | Bake segmentation masks into the scene. The masks are saved as pickles into scene/segmentation/instance_i folders based on the number of bounding boxes in the scene |
| --help, -h | | | Show help |

Help

Visit our issue tracker for help and direct support.

Studio

The studio subcommand is used to integrate scenes from datasets and provides a visual interface to annotate the scene.

Stray Studio Interface

Available commands

stray studio integrate <scenes-directory>

Reads color and depth images from a scene to compute the trajectory of the camera and produces a mesh of the scene. After this has been done, the scene can be opened in the Studio with the open command.

The scene directory has to follow the dataset format.

Options

| name | default | choices | description |
| --- | --- | --- | --- |
| scenes | | | Path to the directory containing the scenes to integrate |

stray studio open <scene>

Opens a scene in the Studio graphical interface. Before a scene can be opened, it has to be integrated with the integrate command.

stray studio preview <scene>

Plays through the images in the scene with overlaid 3D annotations.

Options

| name | default | choices | description |
| --- | --- | --- | --- |
| scene | | | Path to a single scene to open |

Keyboard Shortcuts for the Studio graphical user interface

cmd+s to save.

k switches to the keypoint tool.

v switches to the move tool.

b switches to the bounding box tool.

Help

Visit our issue tracker for help and direct support.

Model

The model subcommand is used to generate and train models that can be evaluated and used for different tasks.

Available commands

stray model generate

Options

| name | default | choices | description |
| --- | --- | --- | --- |
| --model-type | detectron2 | detectron2 | The model type to use. Currently only detectron2 is supported |
| --repository | | | Where to save the newly created models |
| --help, -h | | | Show help |

stray model bake

  • "Bakes" a dataset into a given model. Saves the model as model.pth into the model directory. The model training happens inside a Docker container.

Options

| name | default | choices | description |
| --- | --- | --- | --- |
| &lt;scenes&gt; | | | Paths to the scenes which are used in model training |
| --model | | | Path to the model to be used in baking |
| --num-gpus | 0 | | Number of GPUs to use in baking |
| --resume | False | (flag) | Resume training from a previous run |
| --segmentation | False | (flag) | Train a segmentation model; requires segmentation masks to exist (masks can be created with stray dataset bake) |
| --bbox-from-mask | False | (flag) | Determine the 2D bounding boxes from segmentation masks; requires segmentation masks to exist (masks can be created with stray dataset bake) |
| --help, -h | | | Show help |

stray model evaluate

  • Evaluates model performance against the given evaluation dataset

Options

| name | default | choices | description |
| --- | --- | --- | --- |
| &lt;scenes&gt; | | | Paths to the scenes to use for evaluation |
| --model | | | Path to the model to evaluate |
| --weights | model/output/model_final.pth | | Path to the weights file to use for evaluation |
| --threshold | 0.7 | | Prediction confidence threshold |
| --help, -h | | | Show help |

Help

Visit our issue tracker for help and direct support.

Calibration

The calibration command is a thin usability wrapper around Kalibr to help with calibrating cameras and generating calibration targets.

stray calibration generate

This command creates a calibration target that you can print and use to calibrate your cameras.

Options

| name | default | choices | description |
| --- | --- | --- | --- |
| &lt;target-yaml&gt; | | | Path to the target.yaml file |

First you should define a calibration target using a yaml configuration file (saved as target.yaml). Here is an example configuration for a board that can be printed on an A2 size poster:

target_type: 'aprilgrid'
tagCols: 8
tagRows: 5
tagSize: 0.05
tagSpacing: 0.3
  • target_type defines the type of calibration board to generate and can be either "aprilgrid", "checkerboard" or "circlegrid". We recommend "aprilgrid"
  • tagCols determines how many tags to place in the horizontal direction in the grid
  • tagRows determines how many tags to place in the vertical direction in the grid
  • tagSize determines the size of each individual tag
  • tagSpacing determines the spacing between the tags as a fraction of the tag size; the actual spacing is equal to tagSize * tagSpacing

If you are generating a target for a specific print size, it helps to make sure the configured dimensions actually fit the paper. This avoids having to scale the target when printing, and you can skip measuring it again afterwards. The total width of the target is calculated as tagCols * tagSize + (tagCols - 1) * tagSpacing * tagSize. The total height is tagRows * tagSize + (tagRows - 1) * tagSize * tagSpacing.
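
As a sanity check, the dimensions of the example configuration above can be computed directly from these formulas (a small Python sketch, not part of the toolkit):

tag_cols, tag_rows = 8, 5
tag_size, tag_spacing = 0.05, 0.3

# Total dimensions in meters, following the formulas above
width = tag_cols * tag_size + (tag_cols - 1) * tag_spacing * tag_size    # 0.505 m
height = tag_rows * tag_size + (tag_rows - 1) * tag_size * tag_spacing   # 0.31 m

# 0.505 x 0.31 m fits on a landscape A2 sheet (0.594 x 0.42 m)
print(width, height)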

Running the stray calibration generate <target-yaml> command will create a target.pdf file in your current directory, which you can print.

We recommend using a larger target, but one that is still small enough to observe easily from all angles. We have found that targets between the size of an A2 sheet of paper and 0.75 x 0.75 m are convenient to handle, yet large enough to capture accurately.

Be sure to check that no scaling is applied when printing the target. After printing, make sure that the tag did not get scaled by measuring the tag with a ruler. If needed, update the tagSize field in the target.yaml file to reflect the actual size, as the file and tag size will be used again in the calibration step.

stray calibration run <type> <scene>

This command runs camera calibration. It can calibrate intrinsic parameters of the camera as well as camera-to-imu calibration.

Options

| name | default | choices | description |
| --- | --- | --- | --- |
| &lt;type&gt; | none | intrinsics, camera_imu | The type of calibration to run, see below for a description |
| &lt;scene&gt; | | | Path to the scene to use in calibration |
| --target | | | Path to the target.yaml file |
| --camera | | | Path to the camchain.yaml file (only for camera_imu calibration) |
| --imu | | | Path to the imu_noise.yaml file (only for camera_imu calibration) |

Camera Intrinsics Calibration

Now that we have a calibration board, we can move on to the actual intrinsics calibration step. In this step, we will collect a dataset where we observe the calibration board from many different viewpoints, covering as many orientations and angles as possible. From this dataset, we can estimate the intrinsic parameters of the camera.

First, mount your calibration board on a flat surface, for example a wall or a table. Make sure that the calibration grid is perfectly flat on the surface and wrinkle free.

Record a dataset with your camera covering as many views as possible. A few things to keep in mind:

  • Try to capture the whole board on every frame
  • Capture the board from as many different camera poses as possible
  • Ensure an even distribution of the different poses, so as not to bias the dataset
  • Make sure the calibration board is entirely visible in the image
  • Use images of the same size that you intend to use with stray studio integrate (or alternatively scale the calibration afterwards with stray calibration scale to match the image size)

Here is an example:

Convert your dataset into the Stray scene format. Only the color directory is needed for running intrinsics calibration. We recommend capturing frames at somewhere between 5 and 10 Hz, as higher frame rates will needlessly slow down computing the calibration without much benefit.

Run the intrinsics calibration step with the command stray calibration run intrinsics <scene-path> <target-yaml>.

The command will extract the calibration target from each image and recover the intrinsic parameters through optimization. Once done, the command will create a camera_intrinsics.json file in the scene data directory, which contains the intrinsic parameters of the camera, including the intrinsics matrix and distortion coefficients. You can then copy or import this file over to all other scenes captured with this camera.

The command will output a calibration-report.pdf file into the scene directory you used. You can check the report to make sure the reprojection errors are less than a few pixels. The smaller the better. If they are large, try recording a new dataset or run the calibration with a higher resolution.

Another output, camchain.yaml, is a YAML file containing the intrinsic parameters. It can be used in the camera-imu calibration step.

Now you are done, and can proceed to integrate and annotate some scenes!

Camera-imu Calibration

Camera-imu calibration is for computing the transformation from your IMU to the camera sensor. This is needed if you want to do visual-inertial SLAM.

For camera-imu calibration you will need a scene with a color directory, an imu.csv file with IMU readings, and a frames.csv file with timestamps for each frame. Additionally, you will need an IMU noise configuration file (call it imu_noise.yaml), the intrinsics calibration file camchain.yaml generated by the intrinsics calibration step, and a calibration target file target.yaml as specified in the generate step.

For this type of calibration you will need camera images recorded at 20 Hz and an IMU rate as high as possible. For a tutorial on how to collect the dataset, check out the Kalibr wiki.

The imu noise configuration file describes the noise properties of your inertial sensor. Here is an example file for the imu on an iPhone 12 Pro:

#Accelerometers
accelerometer_noise_density: 4.25e-03   #Noise density (continuous-time)
accelerometer_random_walk:   2.97e-04   #Bias random walk

#Gyroscopes
gyroscope_noise_density:     1.4e-04    # Noise density (continuous-time)
gyroscope_random_walk:       5.86e-06   # Bias random walk

update_rate:                 100.0      # Hz frequency of imu measurements.

You should be able to use the same values for other iPhones. The manufacturer of your IMU sensor might report these values; if not, you can use a tool such as imu_utils to compute them.

Once you have collected your dataset and configured the IMU, you can run the command:

stray calibration run camera_imu <path-to-scene> --target target.yaml --camera camchain.yaml --imu imu_noise.yaml

to compute the camera-to-IMU transformation.

Outputs include:

  • report-imucam.pdf, a report with details on how well the calibration succeeded.
  • camchain-imucam.yaml, which contains the estimated camera-to-IMU transformation and time shift.

stray calibration scale

You might want to run calibration at full resolution while only storing your datasets at a smaller resolution. If you calibrated your camera at a different resolution than your dataset, you can fix this by scaling the calibration to the size of the current dataset.

Options

| name | default | choices | description |
| --- | --- | --- | --- |
| &lt;scenes&gt; | | | Paths to the scenes. The camera_intrinsics.json file needs to be present at minimum |
| --width | | | Desired new width of the calibration |
| --height | | | Desired new height of the calibration |

The stray calibration scale <scenes> --width <new-width> --height <new-height> command reads the current camera_intrinsics.json file in each scene and scales it to the new width and height.
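
Conceptually, scaling a calibration to a new resolution multiplies the focal lengths and principal point by the resize ratios. The sketch below illustrates the idea in Python; it is not the command's actual implementation:

def scale_intrinsics(fx, fy, cx, cy, old_width, old_height, new_width, new_height):
    # Scale factors between the calibrated resolution and the target resolution
    sx = new_width / old_width
    sy = new_height / old_height
    return fx * sx, fy * sy, cx * sx, cy * sy

# E.g. scaling a 1920x1440 calibration down to 960x720 halves all four parameters
print(scale_intrinsics(1500.0, 1500.0, 960.0, 720.0, 1920, 1440, 960, 720))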

Help

Visit our issue tracker for help and direct support.

Formats

Data

The Stray data format documentation

Model

The Stray model format documentation

Help

Visit our issue tracker for help and direct support.

Dataset Format

Stray operates on a standard dataset format. A dataset is a directory that contains one or more scene directories.

Scene Format

Each scene directory should contain:

color

Contains numbered (000.jpg, 001.jpg, ..) color images (jpg/png) of the image sequence used to produce the scene

depth

Contains numbered (000.png, 001.png, ...) png files which contain depth maps used to produce the scene.

Depth maps are encoded as 16 bit grayscale png images, where each value corresponds to depth in millimeters. The depth maps do not have to be the same size as the color images, but they do need to have the same aspect ratio.
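
For example, a single depth frame can be read and converted to meters like this (a minimal sketch assuming OpenCV is installed; any image library that preserves 16-bit values works):

import cv2

# IMREAD_UNCHANGED keeps the 16-bit values instead of converting to 8-bit
depth = cv2.imread("scene/depth/000.png", cv2.IMREAD_UNCHANGED)  # uint16, millimeters
depth_meters = depth.astype("float32") / 1000.0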

frames.csv

CSV file containing timestamps for each frame. Columns:

  • timestamp A timestamp in seconds of when the frame was captured
  • frame The number of the frame. E.g. 000012

imu.csv

CSV file containing imu readings. The columns are as follows:

  • timestamp a timestamp in seconds, should be synchronized with the values in frames.csv
  • a_x acceleration in x direction in m/s^2
  • a_y acceleration in y direction in m/s^2
  • a_z acceleration in z direction in m/s^2
  • alpha_x rotation along the x axis in rad/s
  • alpha_y rotation along the y axis in rad/s
  • alpha_z rotation along the z axis in rad/s

camera_intrinsics.json

Contains the intrinsic parameters of the camera that was used to collect the color and depth files. It should contain a single object, with the following fields:

  • depth_format string, the data format of depth frames, currently only Z16 is supported, meaning 16-bit grayscale
  • depth_scale number, the depth scale of the depth maps. The depth value divided by this value should equal the depth in meters.
  • fps number, the frame rate (fps) used to collect the color and depth files
  • width number, width of the color and depth files
  • height number, height of the color and depth files
  • intrinsic_matrix array of numbers, the intrinsic matrix of the camera used to collect the color and depth files. Details about the intrinsic matrix can be found, for example, on Wikipedia
  • camera_model string, should be pinhole for now.
  • distortion_model string (optional) currently, only KannalaBrandt is supported.
  • distortion_coefficients list of 4 floats, these are the distortion coefficients for the camera model. See camera calibration for details on how to obtain these.

Here is an example of a camera_intrinsics.json file:

{
    "depth_format": "Z16",
    "depth_scale": 1000.0,
    "fps": 60.0,
    "height": 480,
    "width": 640,
    "intrinsic_matrix": [
        483.9207283436,
        0.0,
        0.0,
        0.0,
        484.2223165574,
        0.0,
        308.8264255133,
        240.4719135967,
        1.0
    ],
    "camera_model": "pinhole",
    "distortion_model": "KannalaBrandt",
    "distortion_coefficients": [0.4930586782521112, -0.42050294868589483, 1.2586663628718142, -1.1575906751296825]
}

The width and height have to correspond to the size of the color images.
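
The file can be read with any JSON library. In the example above, the intrinsic matrix appears to be flattened in column-major order (fx and fy on the diagonal, cx and cy in the last column), so a hedged Python sketch for recovering the 3 x 3 matrix looks like this; verify the ordering against your own data:

import json
import numpy as np

with open("scene/camera_intrinsics.json") as f:
    intrinsics = json.load(f)

# order="F" interprets the flat list as column-major, matching the example above
K = np.array(intrinsics["intrinsic_matrix"]).reshape(3, 3, order="F")
fx, fy = K[0, 0], K[1, 1]
cx, cy = K[0, 2], K[1, 2]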

In addition, the following data can be created with various Stray commands:

scene

  • Contains a mesh file called `integrated.ply`
  • Contains a camera pose trajectory file called `trajectory.log`
  • Can be created with the `stray studio integrate` [command](/commands/studio.md#stray-studio-integrate)

annotations.json

A json file created by Studio which contains annotations (keypoints, bounding boxes etc.) that have been added to the scene.

Here is an example annotations.json file:

{
  "bounding_boxes":[{
    "instance_id": 0,
    "dimensions": [0.07500000298023224, 0.07500000298023224, 0.2919999957084656],
    "orientation": {"w": -0.36170855164527893, "x": 0.30457407236099243, "y": 0.8716252446174622, "z": -0.12911593914031982},
    "position": [-0.030162816867232323, 0.02697429060935974, 0.5071253776550293]
  }],
  "keypoints":[{
    "instance_id": 0,
    "position": [-0.1353698968887329, 0.027062859386205673, 0.413930207490921]
  }]
}
  • bounding_boxes are the bounding boxes that have been placed in the scene.
    • instance_id is the numerical id of the object class.
    • dimensions is the size of the bounding box in meters along the x, y and z directions in the local coordinate frame of the bounding box.
    • orientation w, x, y, z are components of a quaternion that rotate the bounding box from world to object coordinates.
    • position is the translation from world to the center of the bounding box.
  • keypoints are individual keypoints that have been placed with the keypoint tool. They are points and have a position, but no rotation.
    • instance_id is the numerical id of the keypoint type.
    • position is the position of the keypoint in the scene's coordinate frame.
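
For example, the pose of the bounding box above can be assembled into a 4 x 4 transformation matrix (a sketch assuming NumPy and SciPy are available; note that SciPy expects quaternions in x, y, z, w order):

import json
import numpy as np
from scipy.spatial.transform import Rotation

with open("scene/annotations.json") as f:
    annotations = json.load(f)

box = annotations["bounding_boxes"][0]
q = box["orientation"]

# 4x4 homogeneous transform built from the orientation quaternion and position
T = np.eye(4)
T[:3, :3] = Rotation.from_quat([q["x"], q["y"], q["z"], q["w"]]).as_matrix()
T[:3, 3] = box["position"]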

<primitive>_labels

Directories containing labels (semantic masks, keypoint annotations etc.) that can be created with the stray label generate command

Available primitive types are:

  • semantic, semantic segmentation masks saved as png files
  • bbox_3d, 3D bounding boxes saved as csv
  • bbox_2d, 2D bounding boxes saved as csv
  • keypoints, 3D keypoints saved as csv

Scene Configuration

In addition to scene folders, a dataset directory can contain a metadata.json file which details how many object classes there are and what these correspond to. You can also specify the size of each object type, which speeds up labeling and reduces errors.

A metadata.json file should contain a single object with the following fields:

  • num_classes integer -- how many different classes are in the dataset
  • instances list of instance objects
    • An instance object contains the following fields:
      • instance_id non-negative integer -- these should start from 0 and increase
      • name string -- the name of the class
      • size array with 3 float values -- extents of the object in meters in the x, y and z directions, used as the default bounding box size

Here is an example configuration.

{
  "num_classes": 2,
  "instances": [{
    "instance_id": 0,
    "name": "Wine Bottle",
    "size": [0.075, 0.075, 0.292]
  }, {
    "instance_id": 1,
    "name": "33cl Can",
    "size": [0.066, 0.066, 0.115]
  }]
}

Help

Visit our issue tracker for help and direct support.

Model Format

Stray operates on a standard model format. The model directory can be produced and modified with different Stray commands. A model directory consists of the following items:

  • output
    • Weights and other data are saved here during training. Models can be created and trained with the stray model command
  • config.yaml
    • Contains the model type specific configuration for training and inference
    • Initialized when running the stray model generate command
  • dataset_metadata.json
    • Metadata (e.g. class labels) from training data is stored here

Help

Visit our issue tracker for help and direct support.

Stray Scanner

Stray Scanner is an iOS app for collecting RGB-D datasets. It can be downloaded from the App Store.

The recorded datasets contain:

  • color images
  • depth frames from the LiDAR sensor
  • depth confidence maps
  • camera position estimates for each frame
  • camera calibration matrix
  • IMU measurements

They can be converted into our scene data format with the stray dataset import command.

Exporting Data

There are two ways of exporting the data from the device: connecting your phone to a computer with a Lightning cable, or using the iOS Files app.

Exporting Using Cable

To access data collected using Stray Scanner, connect your iPhone or iPad to your computer using a lightning cable. Open Finder.app. Select your device from the sidebar. Click on the "Files" tab beneath your device description. Under "Stray Scanner", you should see one directory per dataset you have collected. Drag these to wherever you want to place them.

How to access Stray Scanner data

In this image, you can see the two datasets "ac1ed2228f" and "c26b6838a9". These are the folders you should drag to your desired destination.

On Windows, a similar process can be followed, but the device is accessed through iTunes.

Exporting Through the Files App

In the Files app, under "Browse > On My iPhone > Stray Scanner" you can see a folder for each recorded dataset. You can export a folder by moving it to your iCloud drive or share it with some other app.

Data Specification

The collected datasets are each contained in a folder, named after a random hash, for example 71de12f9. A dataset folder has the following directory structure:

camera_matrix.csv
odometry.csv
imu.csv
depth/
  - 000000.npy
  - 000001.npy
  - ...
confidence/
  - 000000.png
  - 000001.png
  - ...
rgb.mp4

rgb.mp4 is an HEVC encoded video, which contains the recorded data from the iPhone's camera.

The depth/ directory contains the depth maps, one .npy file per RGB frame. Each is a NumPy matrix file containing uint16 values, with a height of 192 elements and a width of 256 elements. The values are the measured depth in millimeters for that pixel position. These can be loaded with NumPy using the np.load function.
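
A minimal sketch for loading one of these depth frames (the folder name is the example hash from above):

import numpy as np

depth = np.load("71de12f9/depth/000000.npy")  # uint16 array of shape (192, 256), millimeters
depth_meters = depth.astype(np.float32) / 1000.0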

The confidence/ directory contains confidence maps corresponding to each depth map. They are grayscale png files encoding 192 x 256 element matrices. The values are either 0, 1 or 2. A higher value means a higher confidence.

The camera_matrix.csv file contains a 3 x 3 matrix with the camera intrinsic parameters.

The odometry.csv file contains the camera pose for each frame. The first line is a header. The fields are:

| Field | Meaning |
| --- | --- |
| timestamp | Timestamp in seconds |
| frame | Frame number to which this pose corresponds, e.g. 000005 |
| x | x coordinate in meters from when the session was started |
| y | y coordinate in meters from when the session was started |
| z | z coordinate in meters from when the session was started |
| qx | x component of the quaternion representing the camera pose rotation |
| qy | y component of the quaternion representing the camera pose rotation |
| qz | z component of the quaternion representing the camera pose rotation |
| qw | w component of the quaternion representing the camera pose rotation |

The imu.csv file contains timestamps, linear acceleration readings and angular velocity readings. The first line is a header. The fields are:

| Field | Meaning |
| --- | --- |
| timestamp | Timestamp in seconds |
| a_x | Acceleration in m/s^2 in x direction |
| a_y | Acceleration in m/s^2 in y direction |
| a_z | Acceleration in m/s^2 in z direction |
| alpha_x | Rotation in rad/s around the x-axis |
| alpha_y | Rotation in rad/s around the y-axis |
| alpha_z | Rotation in rad/s around the z-axis |
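
Both files can be read with standard CSV tooling. Here is a small sketch using pandas, assuming the CSV headers match the field names listed above:

import pandas as pd

odometry = pd.read_csv("71de12f9/odometry.csv")  # timestamp, frame, x, y, z, qx, qy, qz, qw
imu = pd.read_csv("71de12f9/imu.csv")            # timestamp, a_x, a_y, a_z, alpha_x, alpha_y, alpha_z

first_pose = odometry.iloc[0]
print(first_pose["x"], first_pose["y"], first_pose["z"])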

Tutorials

In these tutorials, we walk you through the different workflows of the Stray toolkit.

Tutorial: Recording and importing data from Stray Scanner

In this tutorial, we cover how to import data from the Stray Scanner app into the Stray Command Line Tool and Stray Studio.

To walk through this tutorial, you will need:

  1. A LiDAR enabled iOS device, such as an iPhone 12 Pro, an iPhone 13 Pro or an iPad Pro with a LiDAR sensor
  2. The Stray Scanner app installed on the device
  3. A computer with the Stray CLI installed

While this tutorial covers the Stray Scanner app, you can import data from any other depth sensor. Here is an example on how to record data using an Intel RealSense sensor.

The goal of this tutorial is to scan a scene using a depth sensor and convert it into a dataset that follows our scene and dataset format. If you have some other depth sensor, reach out to us and we can hopefully add support for it. If you are dealing with some other dataset format that you would like to import, you can always write your own data format conversion script.

Recording a scene using Stray Scanner

First, we need to record a scene to process. This is done by opening the app, tapping "Record a new session", and then pressing the red button to start a recording. Then scan the scene by filming a short clip that views the relevant parts of the scene from different viewpoints.

Pro tip: you can tap on the video view to switch between depth and rgb mode.

Some suggestions to get the best possible results:

  • Make sure to avoid shaking and fast motion
    • Blurred images will make it hard for the reconstruction pipeline to localize the frames
  • Keep clips short and to the point
    • The more frames in the clips, the longer it will take to process
  • Make sure that recognizable features are visible in every frame
    • Avoid recording close to featureless objects such as walls
    • If no features are visible or the view is covered, the software might not be able to localize the camera
  • Observe the scanning target from multiple viewpoints
    • This ensures that the target can be properly reconstructed in the integration step

Moving the data over to your computer

Now that we have a scene recorded, we can move it over to our computer.

Here, we use a macOS computer with Finder. If you are on Linux, use the iOS Files app to access the Stray Scanner folder and move it over through a cloud service or share it through some other app.

First, we create two folders: a dataset folder which will contain our processed imported scenes and a staging folder where we temporarily keep the Stray Scanner scans. To create these, we run:

mkdir dataset/
mkdir staging/

To move the files over to the staging folder:

  1. Connect your iPhone or iPad to your computer using a Lightning cable
  2. Open Finder.app
  3. Select your device from the sidebar
  4. Click on the "Files" tab beneath your device description
  5. Under "Stray Scanner", you should see one directory per scene you have collected. Drag the scanned folders to the staging folder

How to access Stray Scanner data


Note: The directories are named using random hashes, for example "ac1ed2228f". This prevents conflicts with scenes collected using other devices when you are collaborating with other people, and avoids having to rename them later, though we do agree that it can sometimes be hard to keep track of which scene is which. Feel free to rename the folders however you like.


Now that we have moved over the scenes, we can import and convert them to our data format and into our dataset. This is done with the stray dataset import command:

stray dataset import staging/* --out dataset/

Optionally, you can specify the resolution at which you want to import the dataset by appending --width=<width> --height=<height> to the command. For example, stray dataset import staging/* --out dataset --width=1920 --height=1440. Generally, we recommend a larger resolution, but sometimes a smaller one can be easier to work with and good enough quality-wise.

To verify that the dataset was imported correctly, you can play through the image frames with the stray dataset show dataset/* command. This will play through the scenes one image at a time.

Concluding

Now we have successfully imported our first scene! It's time to move on to the next step, which is integrating your scenes. The integration step takes a scene, recovers the camera poses and creates a 3D reconstruction of the scene. This allows us to label the scenes in 3D.

Tutorial: Integrating a scene for 3D labeling

First, make sure you have the Stray Toolkit installed and that you have imported a scene. If you haven't, check out the importing tutorial.

To proceed, you will need a dataset with at least one scene. An example directory structure might look like this:

dataset/
    scene1/
    scene2/

Where scene1 and scene2 are scenes following the scene dataset format.

Check that the Stray Toolkit is installed and loaded in your shell with stray --version. This should print something similar to Stray Robots CLI version 1.0.0.

If not, check out the installation guide.

Integrating the scene

Scenes are integrated with the stray studio integrate command.

With the above directory structure, we run:

stray studio integrate dataset/scene1

to integrate scene1.

Checking the results

To check the result of the integration run stray studio open dataset/scene1.

Studio Electric Scooter

That's it! Now you can start creating entire datasets and adding your annotations using Studio.

Telemetry

The Stray Command Line Tool collects anonymous usage statistics about general usage of the tool. Participation is optional and you can opt out if you like.

The collected data is very coarse and only includes the commands that are run and the version of the tool. No information is collected about the machine. There is no way in which we can associate the data with any particular user. We also never send any of the data that you process to our servers.

Our servers do not log IP addresses or otherwise try to associate the events with any particular address, and the usage data is truly anonymous.

Why do we collect data?

Telemetry allows us to accurately gauge which features are being used.

Without telemetry, we would have no idea which features of our tool people actually use. The data simply informs us which tools are providing value to our users and which ones are not being used. If we see that a tool that isn't being used takes up a lot of our time, we might consider axing it. On the other hand, if a tool which we think is not that great sees a lot of usage, we can invest further in its development.

Opting out

You can opt out of telemetry by setting the DO_NOT_TRACK environment variable to any non-empty value in your shell. This can be done by adding:

export DO_NOT_TRACK=true

to your shell's rc file and sourcing it for the current session. For example, .bashrc for Bash or .zshrc for zsh.

If you opt out, no usage events will be sent to us.

Support

If you have bug reports or experience any issues with any of the Stray tools, visit our issue tracker or send an email to [email protected].