The calibration command is a thin usability wrapper around Kalibr to help with calibrating cameras and generating calibration targets.
This command creates a calibration target that you can print and use to calibrate your cameras.
| Argument | Description |
|---|---|
| `<target-yaml>` | Path to the `target.yaml` configuration file describing the target |
First you should define a calibration target using a yaml configuration file (saved as `target.yaml`). Here is an example configuration for a board that can be printed on an A2 size poster:
```yaml
target_type: 'aprilgrid'
tagCols: 8
tagRows: 5
tagSize: 0.05
tagSpacing: 0.3
```
- `target_type` defines the type of calibration board to generate and can be either "aprilgrid", "checkerboard" or "circlegrid". We recommend "aprilgrid".
- `tagCols` determines how many tags to place in the horizontal direction in the grid.
- `tagRows` determines how many tags to place in the vertical direction in the grid.
- `tagSize` determines the size of each individual tag.
- `tagSpacing` determines the spacing between the tags as a fraction of the tag size; the actual spacing is equal to `tagSize * tagSpacing`.
If you are generating a target for a specific print size, it helps to make sure the target dimensions fit within the print area. This avoids having to scale the target when printing, so you can skip re-measuring it afterwards. The total width of the target is calculated as `tagCols * tagSize + (tagCols - 1) * tagSpacing * tagSize`. The total height is `tagRows * tagSize + (tagRows - 1) * tagSpacing * tagSize`.
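The formulas above can be sketched as a small helper; applied to the example configuration, they show the target fits on a landscape A2 sheet (0.594 x 0.420 m):

```python
def target_dimensions(tag_cols, tag_rows, tag_size, tag_spacing):
    """Compute the printed width and height of an aprilgrid target in meters.

    There are tag_cols tags and (tag_cols - 1) gaps of tag_spacing * tag_size
    between them, and likewise for rows.
    """
    width = tag_cols * tag_size + (tag_cols - 1) * tag_spacing * tag_size
    height = tag_rows * tag_size + (tag_rows - 1) * tag_spacing * tag_size
    return width, height

# The example configuration above: 8x5 grid of 5 cm tags with 30% spacing.
width, height = target_dimensions(8, 5, 0.05, 0.3)
print(f"{width:.3f} m x {height:.3f} m")  # 0.505 m x 0.310 m
```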
The `stray calibration generate <target-yaml>` command will create a `target.pdf` file in your current directory, which you can print.
We recommend using a larger target, but one still small enough that you can easily observe it from many different angles. We have found targets between the size of an A2 sheet of paper and 0.75 x 0.75 meters to be convenient to handle, yet large enough to capture accurately.
Be sure to check that no scaling is applied when printing the target. After printing, make sure that the tags did not get scaled by measuring one with a ruler. If needed, update the `tagSize` field in the `target.yaml` file to reflect the actual size, as the file and tag size will be used again in the calibration step.
This command runs camera calibration. It can calibrate the intrinsic parameters of the camera and compute the camera-to-imu transformation.
| Argument | Default | Description |
|---|---|---|
| type | none | The type of calibration to run, see below for a description |
| scene | | Path to the scene to use in calibration |
| `--target` | | Path to the `target.yaml` calibration target file |
| `--camera` | | Path to the `camchain.yaml` intrinsics file |
| `--imu` | | Path to the `imu_noise.yaml` configuration file |
Now that we have a calibration board, we can move on to the actual intrinsics calibration step. In this step, we will collect a dataset where we observe the calibration board from many different viewpoints, covering as many orientations and angles as possible. From this dataset, we can estimate the intrinsic parameters of the camera.
First, mount your calibration board on a flat surface, for example a wall or a table. Make sure that the calibration grid lies perfectly flat on the surface and is wrinkle-free.
Record a dataset with your camera covering as many views as possible. A few things to keep in mind:
- Try to capture the whole board on every frame
- Capture the board from as many different camera poses as possible
- Ensure an even distribution of the different poses, so as not to bias the dataset
- Make sure the calibration board is entirely visible in the image
- Use images of the same size that you intend to use with `stray studio integrate` (or alternatively scale the calibration afterwards with `stray calibration scale` to match the image size)
Convert your dataset into the Stray scene format. Only the color directory is needed for running intrinsics calibration. We recommend capturing frames at somewhere between 5 and 10 Hz, as higher frame rates will needlessly slow down computing the calibration without much benefit.
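If your camera records at a higher rate, one simple way to hit the recommended 5 to 10 Hz is to keep only every n-th frame. A sketch of this, assuming frames are stored as sequentially numbered images in a color directory (the paths and function name here are illustrative, not part of the Stray CLI):

```python
import shutil
from pathlib import Path

def subsample(color_dir, out_dir, src_hz, target_hz=5.0):
    """Copy every n-th frame so the effective rate drops to roughly target_hz.

    Assumes frames sort in capture order by filename, as sequentially
    numbered images in a scene's color directory do.
    """
    step = max(1, round(src_hz / target_hz))
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    frames = sorted(Path(color_dir).iterdir())
    for i, frame in enumerate(frames[::step]):
        # Renumber the kept frames so the output directory stays contiguous.
        shutil.copy(frame, out / f"{i:06}{frame.suffix}")
```

For example, a 30 Hz recording subsampled with `target_hz=5.0` keeps every 6th frame.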
Run the intrinsics calibration step with the command `stray calibration run intrinsics <scene-path> <target-yaml>`.
The command will extract the calibration target from each image and recover the intrinsic parameters through optimization. Once done, the command will create a `camera_intrinsics.json` file in the scene data directory, which contains the intrinsic parameters of the camera, including the intrinsics matrix and distortion coefficients. You can then copy or import this file over to all other scenes captured with this camera.
The command will output a `calibration-report.pdf` file into the scene directory you used. You can check the report to make sure the reprojection errors are less than a few pixels. The smaller the better. If they are large, try recording a new dataset or running the calibration at a higher resolution.
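The reprojection error in the report is the pixel distance between where a target corner was detected in the image and where the estimated camera model projects it. A minimal sketch of the metric (not Stray's actual report code):

```python
import numpy as np

def mean_reprojection_error(observed, reprojected):
    """Mean Euclidean distance, in pixels, between detected corner positions
    and corners reprojected through the estimated camera model."""
    diff = np.asarray(observed) - np.asarray(reprojected)
    return float(np.linalg.norm(diff, axis=1).mean())

# Two hypothetical corners, each reprojected half a pixel off:
observed = [[100.0, 200.0], [340.0, 212.0]]
reprojected = [[100.3, 200.4], [339.7, 212.4]]
print(mean_reprojection_error(observed, reprojected))  # 0.5
```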
The command also outputs a `camchain.yaml` file containing the intrinsics parameters. It can be used in the camera-imu calibration step.
Now you are done, and can proceed to integrate and annotate some scenes!
Camera-imu calibration computes the transformation from your IMU to the camera sensor. This is needed if you want to do visual-inertial SLAM.
For camera-imu calibration you will need a scene with an `imu.csv` file containing imu readings and a `frames.csv` file with timestamps for each frame. Additionally, you will need an imu noise configuration file, call it `imu_noise.yaml`, an intrinsics calibration yaml file `camchain.yaml` (generated by the intrinsics calibration step), and a calibration target file `target.yaml` as specified in the target generation step above.
For this type of calibration you will need camera images recorded at 20 Hz and an imu rate as high as possible. For a tutorial on how to collect the dataset, check out the Kalibr wiki.
The imu noise configuration file describes the noise properties of your inertial sensor. Here is an example file for the imu on an iPhone 12 Pro:
```yaml
# Accelerometers
accelerometer_noise_density: 4.25e-03  # Noise density (continuous-time)
accelerometer_random_walk: 2.97e-04    # Bias random walk

# Gyroscopes
gyroscope_noise_density: 1.4e-04       # Noise density (continuous-time)
gyroscope_random_walk: 5.86e-06        # Bias random walk

update_rate: 100.0                     # Hz frequency of imu measurements
```
You should be able to use the same values for other iPhones. The manufacturer of your IMU sensor might report these values; if not, you can use a tool such as imu_utils to compute them.
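Note that datasheets often report the discrete-time noise standard deviation rather than the continuous-time density used in the config. The two are related through the sampling rate, as a quick check:

```python
import math

def discrete_noise_sigma(noise_density, update_rate):
    """Convert a continuous-time noise density to the discrete-time standard
    deviation at a given sampling rate: sigma_d = sigma_c * sqrt(rate)."""
    return noise_density * math.sqrt(update_rate)

# Accelerometer from the example config above, sampled at 100 Hz:
print(discrete_noise_sigma(4.25e-03, 100.0))  # 0.0425
```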
Once you have collected your dataset and configured the imu, you can run the command:
`stray calibration run camera_imu <path-to-scene> --target target.yaml --camera camchain.yaml --imu imu_noise.yaml`
to compute the camera-imu calibration. It produces two output files:
- `report-imucam.pdf`: a report with details on how well the calibration succeeded.
- `camchain-imucam.yaml`: contains the estimated camera-to-imu transformation and time shift.
You might want to run calibration at full resolution while only storing your datasets at a smaller resolution. If you calibrated your sensor at a specific resolution, you can fix the mismatch by scaling the calibration to the size of the current dataset.
| Argument | Description |
|---|---|
| scenes | Paths to the scenes. The `camera_intrinsics.json` file in each scene will be scaled. |
| `--width` | Desired new width of the calibration |
| `--height` | Desired new height of the calibration |
The `stray calibration scale <scenes> --width <new-width> --height <new-height>` command reads the current `camera_intrinsics.json` file in each scene and scales it to the new width and height.
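The underlying math is straightforward: focal lengths and the principal point scale linearly with image size, while distortion coefficients are resolution-independent. A sketch of the principle (not Stray's actual implementation, and the example numbers are made up):

```python
def scale_intrinsics(fx, fy, cx, cy, old_size, new_size):
    """Scale pinhole intrinsics from one image resolution to another.

    old_size and new_size are (width, height) in pixels. Distortion
    coefficients are left out because they do not change with resolution.
    """
    sx = new_size[0] / old_size[0]
    sy = new_size[1] / old_size[1]
    return fx * sx, fy * sy, cx * sx, cy * sy

# e.g. calibrated at 1920x1440, dataset stored at 960x720:
fx, fy, cx, cy = scale_intrinsics(1445.0, 1445.0, 960.0, 720.0, (1920, 1440), (960, 720))
print(fx, fy, cx, cy)  # 722.5 722.5 480.0 360.0
```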
Visit our issue tracker for help and direct support.