DAIR-V2X-C Complemented Annotations

About

Originally, DAIR-V2X annotates 3D boxes only within the camera's field of view on the vehicle side. We supplement the missing 3D box annotations to enable 360-degree detection. With the fully complemented vehicle-side labels, we also regenerate the cooperative labels, which follow the original cooperative label format.

Original Cooperative Labels

Updated Cooperative Labels



Usage

Download the annotation files from Google Drive, which contain the complemented vehicle-side labels, the regenerated cooperative labels, and individual train/val split files derived from the official splits. Suppose you have downloaded the DAIR-V2X-C dataset and unzipped it under YOUR_DAIR_V2X_C_PATH.


Step 1: Vehicle-side annotations


- Rename `YOUR_DAIR_V2X_C_PATH/cooperative-vehicle-infrastructure/vehicle-side/label/lidar` to `YOUR_DAIR_V2X_C_PATH/cooperative-vehicle-infrastructure/vehicle-side/label/lidar_backup`.

- Copy `new_labels/vehicle-side_label/lidar` to `YOUR_DAIR_V2X_C_PATH/cooperative-vehicle-infrastructure/vehicle-side/label/lidar`.
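The two operations above can be sketched as a small shell helper. This is a minimal sketch, not part of the official tooling: `install_vehicle_labels` is a hypothetical function name, and `new_labels` is assumed to be the unzipped annotation download in the current directory.

```shell
# Hypothetical helper for Step 1: back up the original vehicle-side lidar
# labels, then install the complemented ones. Run from the directory that
# contains the unzipped new_labels folder.
install_vehicle_labels() {
  local label_dir="$1/cooperative-vehicle-infrastructure/vehicle-side/label"
  mv "$label_dir/lidar" "$label_dir/lidar_backup"               # keep originals
  cp -r new_labels/vehicle-side_label/lidar "$label_dir/lidar"  # complemented labels
}

# Usage: install_vehicle_labels YOUR_DAIR_V2X_C_PATH
```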

Step 2: Cooperative annotations


- Rename `YOUR_DAIR_V2X_C_PATH/cooperative-vehicle-infrastructure/cooperative/label_world` to `YOUR_DAIR_V2X_C_PATH/cooperative-vehicle-infrastructure/cooperative/label_world_backup`.

- Copy `new_labels/cooperative_label/label_world` to `YOUR_DAIR_V2X_C_PATH/cooperative-vehicle-infrastructure/cooperative/label_world`.
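Analogously to Step 1, Step 2 can be sketched as a shell helper. Again a minimal sketch: `install_cooperative_labels` is a hypothetical name, and `new_labels` is assumed to be the unzipped download in the current directory.

```shell
# Hypothetical helper for Step 2: back up the original cooperative labels,
# then install the regenerated ones.
install_cooperative_labels() {
  local coop_dir="$1/cooperative-vehicle-infrastructure/cooperative"
  mv "$coop_dir/label_world" "$coop_dir/label_world_backup"               # keep originals
  cp -r new_labels/cooperative_label/label_world "$coop_dir/label_world"  # regenerated labels
}

# Usage: install_cooperative_labels YOUR_DAIR_V2X_C_PATH
```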

Step 3 (optional): Individual splits
Put the provided train.json and val.json under YOUR_DAIR_V2X_C_PATH/cooperative-vehicle-infrastructure/. These are individual splits derived from the official splits. Note that the original train split contains one frame (vehicle frame id "002234") without infrastructure files, so we omit it in `train.json`.
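Step 3 is a plain copy; the sketch below also adds an optional sanity check that the omitted frame did not slip into the train split. `install_splits` is a hypothetical helper name, and the split files are assumed to be in the current directory.

```shell
# Hypothetical helper for Step 3: install the provided split files and
# verify that the omitted frame id 002234 is absent from train.json.
install_splits() {
  local dst="$1/cooperative-vehicle-infrastructure"
  cp train.json val.json "$dst/"
  if grep -q '002234' "$dst/train.json"; then
    echo "warning: frame 002234 should not appear in train.json" >&2
  fi
}

# Usage: install_splits YOUR_DAIR_V2X_C_PATH
```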

Video

Original Cooperative Labels

Updated Cooperative Labels

Citation

If you use the DAIR-V2X dataset and our complemented annotations, please cite DAIR-V2X:
@inproceedings{yu2022dairv2x,
  title={DAIR-V2X: A Large-Scale Dataset for Vehicle-Infrastructure Cooperative 3D Object Detection},
  author={Yu, Haibao and Luo, Yizhen and Shu, Mao and Huo, Yiyi and Yang, Zebang and Shi, Yifeng and Guo, Zhenglong and Li, Hanyu and Hu, Xing and Yuan, Jirui and Nie, Zaiqing},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month={June},
  year={2022}
}
and cite CoAlign:
@article{lu2022robust,
  title={Robust Collaborative 3D Object Detection in Presence of Pose Errors},
  author={Lu, Yifan and Li, Quanhao and Liu, Baoan and Dianati, Mehrdad and Feng, Chen and Chen, Siheng and Wang, Yanfeng},
  journal={arXiv preprint arXiv:2211.07214},
  year={2022}
}

Acknowledgement

We thank DAIR-V2X, the first large-scale real-world dataset for vehicle-infrastructure cooperative perception, for greatly advancing collaborative perception research.