I want to build a KITTI-like dataset without images, so I studied how annotations are converted from the BasicAI LiDAR Fusion format to the KITTI 3D Object Detection format.
I used the "LiDAR without any configs" dataset provided at https://docs.basic.ai/docs/upload-data and annotated it. An annotation example is shown in the figure below.
When exporting, I saved both the BasicAI LiDAR Fusion format and the KITTI 3D Object Detection format, then compared the 3D bounding box annotations in the JSON file and the TXT file. As shown in the figure, the box dimensions are identical apart from their order. How is the center position obtained?
Is it computed as txt_position = camera_external * json_position? When I calculate it this way, the result is wrong. I read the description of camera extrinsic parameters at https://docs.basic.ai/docs/camera-intrinsic-extrinsic-and-distortion-in-camera-calibration#practice-in-basicai-, shown in the figure. The ordering of the extrinsic parameters given on that page seems to be different.
The figure below shows the parameters corresponding to the data set above. This is the second extrinsic parameter in LiDAR_Fusion_without_any_configs/camera_config/08.json.
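One possible source of the discrepancy is the direction of the stored extrinsic: some tools store the LiDAR-to-camera transform, others the camera-to-LiDAR transform, and applying the matrix without inverting it when it goes the other way gives a wrong but plausible-looking result. A quick way to check is to apply the 4x4 extrinsic to the JSON position both as-is and inverted, and see which result matches the TXT file. The matrix and point below are hypothetical placeholders, not the actual values from camera_config/08.json:

```python
import numpy as np

# Hypothetical extrinsic standing in for the one in camera_config/08.json;
# substitute the real 4x4 matrix from that file.
camera_external = np.array([
    [0.0, -1.0,  0.0, 0.1],
    [0.0,  0.0, -1.0, 0.2],
    [1.0,  0.0,  0.0, 0.3],
    [0.0,  0.0,  0.0, 1.0],
])

def transform(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    return (T @ np.append(p, 1.0))[:3]

json_position = np.array([5.0, 2.0, -1.0])  # hypothetical box center from the JSON

# Candidate 1: the stored extrinsic maps LiDAR -> camera directly.
cam_p = transform(camera_external, json_position)

# Candidate 2: the stored extrinsic maps camera -> LiDAR, so invert first.
cam_p_inv = transform(np.linalg.inv(camera_external), json_position)

print("as-is:   ", cam_p)
print("inverted:", cam_p_inv)
```

Whichever candidate reproduces the TXT value (up to the bottom-center offset KITTI uses, see below) tells you the convention of the stored matrix.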
So how should the position conversion from the BasicAI LiDAR Fusion format to the KITTI 3D Object Detection format be calculated? Is my formula wrong, or is there a problem with the camera extrinsic parameters?
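For reference, even with the correct extrinsic, a plain txt_position = camera_external * json_position will not match KITTI labels exactly, because the official KITTI format defines `location` as the bottom-center of the box in camera coordinates (x right, y down, z forward), lists dimensions in the order (h, w, l), and measures `rotation_y` around the camera y axis. A hedged sketch of the full conversion for one box, assuming the extrinsic maps LiDAR points into the camera frame and assuming the commonly used yaw mapping (both assumptions would need checking against BasicAI's export):

```python
import numpy as np

def lidar_box_to_kitti(center_lidar, dims_lwh, yaw_lidar, T_lidar_to_cam):
    """Sketch: convert one LiDAR-frame 3D box to KITTI label fields.

    Assumptions (not confirmed by the BasicAI docs):
      - T_lidar_to_cam is a 4x4 matrix mapping LiDAR points into the
        camera frame; if the stored extrinsic goes the other way,
        invert it before calling this.
      - KITTI `location` is the bottom-center of the box, so half the
        height is added along the camera y (down) axis.
      - KITTI dimensions are ordered (h, w, l).
      - The yaw relation below is the mapping commonly used for
        velodyne->camera conversions; verify it on your data.
    """
    l, w, h = dims_lwh
    center_cam = (T_lidar_to_cam @ np.append(center_lidar, 1.0))[:3]
    location = center_cam + np.array([0.0, h / 2.0, 0.0])  # bottom-center
    rotation_y = -yaw_lidar - np.pi / 2.0
    return (h, w, l), location, rotation_y
```

If the TXT positions differ from the transformed JSON positions by exactly half the box height along one axis, that would confirm the bottom-center convention is the missing piece.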