
camera extrinsic matrix plot problem #292

Open
LuizGuzzo opened this issue Oct 28, 2024 · 21 comments
@LuizGuzzo

I'm trying to add the camera's extrinsic matrix correctly to calibrate the LiDAR and camera, but Xtreme1 (and basic.ai) always places the camera in a wildly wrong position. As a test, I plotted a cross at the transform where the camera should be, yet the tool renders the camera somewhere essentially random, and I have no idea how to debug this. Does anyone have any suggestions? Below is the image (I used the screenshot from basic.ai because it shows the camera's line of sight, but the results are the same with Xtreme1).

image

As you can see, the cross is on the right and the camera's view line on the left, below the ground and facing the opposite direction (the other crosses are other sensors).

image

And here is the plot produced by another program from the same JSON I passed to Xtreme1/basic.ai, showing the correct camera placement.

@guhaomine
Collaborator

When using basic.ai, we derive the camera position from your camera's extrinsic (cameraExternal) matrix. You can check whether the camera position encoded in that matrix is correct.
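One way to do that check is to recover the camera center directly from the extrinsic matrix. A minimal sketch with NumPy, assuming cameraExternal is a 4x4 world-to-camera transform (if it is camera-to-world instead, the translation column already is the camera position); the helper name is mine, not part of any Xtreme1 API:

```python
import numpy as np

def camera_center_from_extrinsic(ext_4x4):
    """Recover the camera position in world (point-cloud) coordinates
    from a 4x4 world-to-camera extrinsic matrix: C = -R^T @ t."""
    R = ext_4x4[:3, :3]
    t = ext_4x4[:3, 3]
    return -R.T @ t

# Example: a camera at (1, 2, 3) with identity rotation.
# The world-to-camera extrinsic then carries t = -C.
ext = np.eye(4)
ext[:3, 3] = [-1.0, -2.0, -3.0]
print(camera_center_from_extrinsic(ext))  # → [1. 2. 3.]
```

Plotting the returned point next to the cross in the point cloud shows immediately whether the matrix (or its inverse) encodes the position you expect.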

@LuizGuzzo
Author

LuizGuzzo commented Nov 8, 2024

Hello, I’d like to update with a proof of concept that demonstrates the accuracy of my sensor transformation matrices, which I believe are set up correctly. Despite this, when testing in Xtreme1, the camera appears in an unexpected, seemingly random position.

Problem Description

I am encountering issues when setting up the camera transformation relative to the LiDAR point cloud in Xtreme1 (after transitioning from basic.ai). The setup should allow for precise visualization of the camera and LiDAR point cloud together in the scene, but in Xtreme1, the camera appears misaligned.

Configuration and Procedure

  1. Sensor Positions and Orientations:

    • I have set up the following positional and rotational data for each sensor relative to a shared reference frame on the truck:
      • LiDAR0:
        • Position (x, y, z): (0.11, 0.0, 0.19)
        • Orientation (roll, pitch, yaw): (0.0231, 0.000679, -0.00151)
      • Camera (Intelbras3):
        • Position (x, y, z): (0.55, -1.25, -1.9)
        • Orientation (roll, pitch, yaw): (0.0, -0.03, 0.06)
      • Sensor Board 1:
        • Position (x, y, z): (4.8, 0.0, 2.61)
        • Orientation (roll, pitch, yaw): (0.0, 0.03, 0.0)
  2. Transformation Process:

    • I generated transformation matrices by chaining these poses. First, I applied the sensor board transformation to the truck’s coordinate frame. Then I multiplied this by the LiDAR transformation matrix to obtain the final LiDAR position in the truck’s coordinate system.
    • The same method was used for the camera, ensuring that all transformations are relative to the same shared reference frame on the truck.
  3. Verification Using Custom Code:

    • I implemented a Python script that takes these raw data (point clouds and sensor poses) and performs all necessary transformations. The script plots the resulting sensor positions and orientations together with the point cloud, showing consistent and accurate alignment.
    • I have added flags to toggle between using the raw point cloud and pre-transformed data intended for Xtreme1, allowing a direct comparison of the configurations. When running locally, all sensors appear correctly positioned and aligned with the point cloud data.
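The chaining described in step 2 can be sketched as follows. This is an illustration with the values from step 1, assuming the common ZYX (yaw·pitch·roll) Euler convention; the convention must match whatever produced the roll/pitch/yaw values, and the helper names are mine:

```python
import numpy as np

def rpy_to_matrix(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw, assuming the ZYX
    (yaw * pitch * roll) convention -- verify against your data source."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def pose_to_matrix(xyz, rpy):
    """Homogeneous 4x4 transform from a position plus roll/pitch/yaw pose."""
    T = np.eye(4)
    T[:3, :3] = rpy_to_matrix(*rpy)
    T[:3, 3] = xyz
    return T

# Sensor board pose in the truck frame, then camera pose relative to it
# (values from step 1 above; the chaining order follows step 2).
T_truck_board = pose_to_matrix((4.8, 0.0, 2.61), (0.0, 0.03, 0.0))
T_board_cam = pose_to_matrix((0.55, -1.25, -1.9), (0.0, -0.03, 0.06))
T_truck_cam = T_truck_board @ T_board_cam  # camera pose in the truck frame
```

A mismatch between this Euler convention (or the multiplication order) and the one Xtreme1 assumes would produce exactly the kind of "random" placement described above, so it is worth checking both.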

Testing in Xtreme1

I created a test case that can be easily uploaded to Xtreme1:

  • To use the test case: Simply compress the following folders:
    • camera_config - Contains the camera’s configuration and pose information.
    • camera_image_0 - Stores the images captured by the camera.
    • lidar_point_cloud_0 - Contains the LiDAR point cloud data, already transformed.
  • Name the compressed file Scene_0.zip. Xtreme1 should recognize and interpret this automatically.
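The packaging step above can be scripted; a small sketch using only the standard library, assuming (as in the layout above) that the three folders must sit at the root of the archive rather than under a parent directory. The helper name is hypothetical:

```python
import os
import zipfile

def make_scene_zip(out_path="Scene_0.zip",
                   folders=("camera_config", "camera_image_0",
                            "lidar_point_cloud_0")):
    """Zip the three dataset folders with the folders at the archive root."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for folder in folders:
            for root, _dirs, files in os.walk(folder):
                for name in files:
                    path = os.path.join(root, name)
                    zf.write(path, arcname=path)  # keep the folder/ prefix
    return out_path
```

Run it from the directory that contains the three folders, so the archive entries start with `camera_config/`, `camera_image_0/`, and `lidar_point_cloud_0/`.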

Current Issue in Xtreme1

Despite the transformations appearing correct locally, when uploaded to Xtreme1, the camera is displayed in an incorrect position. It does not match the expected alignment seen in local testing, and I am unsure of the cause.

Any assistance in troubleshooting this would be greatly appreciated. Please let me know if additional details or access to my code would help clarify the setup.

here is the zip with the files and the code inside it.
Scene_test.zip

And this is the result inside the xtreme1, completely uncalibrated.
image

It's worth noting that I applied a rotation to the camera to align the camera's frame with the LiDAR's. This only corrects the orientation; the XYZ translation is not affected.

@guhaomine
Collaborator

I noticed distortion in your image. Was it taken with a fisheye camera? If so, distortion parameters need to be added.

@LuizGuzzo
Author

LuizGuzzo commented Nov 18, 2024

You are correct, I had forgotten to include the distortion parameters in the tests. I have now added them and ensured they are being used. Here are the intrinsic and distortion parameters for the camera:

  • fx: 442.8260000473373
  • fy: 590.3518315647152
  • cu: 329.2672574940965
  • cv: 235.1941019102088
  • k1: -0.4012898929082725
  • k2: 0.2119039649987176
  • k3: -0.07115199650013011
  • p1: -0.0017767161262766081
  • p2: -0.0033067056085569397
  • Image width: 640
  • Image height: 480

I have applied these parameters correctly to the image for distortion correction. However, even with the distortion parameters in place, the result remains the same. I suspect the issue might still be related to how the transformation is interpreted by Xtreme1.
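For reference, here is how those parameters map onto the usual Brown-Conrady distortion model; a sketch with NumPy, assuming the OpenCV-style meaning of (k1, k2, p1, p2, k3), with the function name being mine:

```python
import numpy as np

# Values from the list above.
fx, fy = 442.8260000473373, 590.3518315647152
cu, cv = 329.2672574940965, 235.1941019102088
k1, k2, k3 = -0.4012898929082725, 0.2119039649987176, -0.07115199650013011
p1, p2 = -0.0017767161262766081, -0.0033067056085569397

def project_distorted(x, y):
    """Project a normalized camera-plane point (x, y) to distorted pixels."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fx * xd + cu, fy * yd + cv

print(project_distorted(0.0, 0.0))  # the principal point: (cu, cv)
```

Note that distortion only bends pixel rays; it cannot move the rendered camera body, which is consistent with the result below.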

I wonder if it could be a unit conversion problem, such as the system expecting values in a different base (e.g., meters to inches or similar). I'm not sure, just speculating.

Here is the updated result after applying the distortion parameters:

image

Unfortunately, the camera position remains incorrect. Let me know if you have any additional suggestions or insights into this issue.

Thank you for your assistance!

@guhaomine
Collaborator

I tried it and found that the distortion parameters are not the root cause. There may be an issue with cameraExternal in camera_config.
The following image may show the correct camera position:
image
But it is displayed here:
Screenshot 2024-11-19 145344

@guhaomine
Collaborator

The impact of the distortion parameters is minimal, and the open-source project currently does not support distorted images.

@LuizGuzzo
Author

Thank you for the response. I understand that the distortion parameters do not have a significant effect. Regarding the camera's position: it should actually be further to the right and lower than where you marked in the provided image.

image

To confirm this, I plotted a cross in the point cloud to visualize where the extrinsic matrix transformation should ideally place the camera. However, Xtreme1 does not seem to deliver this position accurately, as the resulting camera placement remains inconsistent with the expected transformation.

Would it be possible to define the camera position manually within Xtreme1? If so, I could attempt to work backward from this manual definition to understand how the matrix is being calculated. This might help me identify if there's an underlying issue in the processing pipeline or an adjustment needed on my side. I'm also open to any specific sequence of steps or commands you might recommend to help refine or verify the extrinsic matrix.

@VeeranjaneyuluToka

VeeranjaneyuluToka commented Nov 22, 2024

@LuizGuzzo, you are using camera and LiDAR feeds, but when I check your camera config, the extrinsics differ in each .json file. I would think all the .json files should have the same content, since the calibration parameters are the same for all the images and point clouds, shouldn't they?

BTW, I am also trying to build my own dataset to feed to Xtreme1.

@LuizGuzzo
Author

LuizGuzzo commented Nov 22, 2024

@VeeranjaneyuluToka
Yes, the extrinsics in the .json files are slightly different because I account for the timestamp of each image relative to the sensor sweep: I interpolate the camera's pose to match the exact timestamp of the LiDAR sweep.

This approach refines the extrinsics by interpolating the camera's pose relative to the time and the global pose, aiming for higher precision. However, I believe it should work if you define fixed extrinsics instead.
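That per-frame interpolation can be sketched as a linear interpolation of the translation plus a spherical interpolation (slerp) of the rotation. A minimal NumPy version, assuming quaternions in (x, y, z, w) order; the function names are mine, not from the author's script:

```python
import numpy as np

def slerp(q0, q1, alpha):
    """Spherical linear interpolation between two unit quaternions (x,y,z,w)."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:          # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:       # nearly parallel: fall back to normalized lerp
        q = q0 + alpha * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - alpha) * theta) * q0 +
            np.sin(alpha * theta) * q1) / np.sin(theta)

def interpolate_pose(t0, p0, q0, t1, p1, q1, t):
    """Camera pose at LiDAR sweep time t, between stamped poses at t0 and t1."""
    alpha = (t - t0) / (t1 - t0)
    pos = (1 - alpha) * np.asarray(p0, float) + alpha * np.asarray(p1, float)
    return pos, slerp(q0, q1, alpha)
```

The interpolated pose then replaces the fixed extrinsic for that frame before it is written into the per-frame .json.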

@VeeranjaneyuluToka

VeeranjaneyuluToka commented Nov 23, 2024

@LuizGuzzo, thanks for the quick reply.

I followed the document, prepared the dataset, and verified it against your files as well. I have a camera_config in the same format as yours (all files with the same content), plus camera_image_0 and lidar_point_cloud_0. When I upload, both the camera and LiDAR feeds show as soon as I open the dataset, but when I try to annotate or view, only the LiDAR feed appears. Do you know any reason for this behavior?

@LuizGuzzo
Author

This happens because the LiDAR and the camera are not calibrated; if they were, the bounding box in the LiDAR would be projected onto the image. That's exactly what I'm trying to solve: I'm looking for an extrinsic matrix to calibrate the sensors, enabling mutual annotation in the point cloud and the image.

Or are you saying that you don’t have the image? If that’s the case, it might be because the point cloud, image, and configuration don’t share the same name. They need to have exactly the same name for Xtreme1 to link them together.
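A quick sanity check for that naming requirement can be scripted; a sketch using only the standard library, assuming the folder layout from earlier in this thread (the helper name is hypothetical):

```python
import os

def check_names(root="."):
    """Verify that the base file names match across the three dataset
    folders, since Xtreme1 links frames by identical names."""
    def stems(d):
        return {os.path.splitext(f)[0]
                for f in os.listdir(os.path.join(root, d))}
    cfg = stems("camera_config")
    img = stems("camera_image_0")
    pcd = stems("lidar_point_cloud_0")
    missing = (cfg ^ img) | (cfg ^ pcd)  # names not present everywhere
    if missing:
        print("Unmatched base names:", sorted(missing))
    return not missing
```

Running this before zipping catches the silent "image never loads" failure mode.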

@VeeranjaneyuluToka

VeeranjaneyuluToka commented Nov 27, 2024

Yes, I do not see any image in the annotation pane; I only see the LiDAR point cloud. BTW, I cloned the codebase from https://github.com/xtreme1-io/xtreme1.git and started using it; isn't that the same codebase you are using?

I am sure the file names are the same across camera_config, camera_image_0, and lidar_point_cloud_0.

@LuizGuzzo
Author

Strange... it should be working. Unless the calibration is completely off, the image should appear.

What I can suggest (because it took me a while to notice this) is to check if there is a button like ">" on the left side of the screen, vertically in the middle. If you click on it, it should display the images you uploaded. Here's a photo demonstrating it.

image

If it doesn't appear, try advancing a few frames to see if any image loads. If not... I'm not sure what could be wrong. If the file names are identical and the calibration is at least minimally acceptable, it should work.

@VeeranjaneyuluToka

VeeranjaneyuluToka commented Nov 27, 2024

Thanks for the quick reply. I have captured screenshots for your reference. I cannot see the image when I open the dataset, in neither the view nor the annotate tab.
Screenshot from 2024-11-27 15-51-04
Screenshot from 2024-11-27 15-51-23
Screenshot from 2024-11-27 15-51-48

This is how it looks for me. Could you tell me the format of the .json file?

@LuizGuzzo
Author

Download the file I sent at the beginning of the issue and see if you can run it. Make sure to select "annotate" instead of "view" (I’m not sure if this will make a difference, but it’s what I usually select).

If my example works, then there might be something wrong with your data.

@VeeranjaneyuluToka

Same behavior with your files as well. I believe you might have used the released codebase, then, as mentioned in the README.

@VeeranjaneyuluToka

VeeranjaneyuluToka commented Nov 28, 2024

Sorry, it does work with your files, but I do not see the camera feed when I press annotate; rather, when I press next, the ">" button appears, and when I press it the image gets displayed.

I also figured out there was an issue in my .json files; after fixing it, everything works fine. And I am not facing the shift issue you point out above. Thanks for your prompt reply!

@LuizGuzzo
Author

I'm glad you solved the image display problem. About the shift: did you make any changes to solve it? When you tested with my data, did the error persist?

I'm about to give up on this tool and go to Scalabel >_>

@VeeranjaneyuluToka

VeeranjaneyuluToka commented Nov 29, 2024

I did not make any changes to my data. I passed the following:
  • Camera feeds
  • LiDAR feeds
  • Camera config containing (I had already calibrated the cameras and LiDAR-to-camera, so I have the transformation matrix):
    • Camera intrinsics
    • LiDAR-to-camera transformation

I was facing the issues I mentioned because my config was not in the right format.

I did notice the issue you are facing with your data.

@LuizGuzzo
Author

I'm glad to hear that you managed to solve your issue. There seems to be some strange error in my data: I replicated the same scenario in Scalabel (another annotation tool) and hit the same problem. While testing, transforming the point cloud into the camera's perspective seemed to fix it, but I then had to reset the camera's position. I'll take a closer look at our data and try to solve the problem by placing the cameras in the correct position instead of moving the points to them. I'm confident the issue is with my dataset. Thank you everyone for your help and for being so attentive.

@LuizGuzzo
Author

> I did not do any changes wrt my data, passed the following -> Camera feeds -> LiDAR feeds -> Camera config that contains the following (I already calibrated cameras, LiDAR2Camera and I have transformation matrix) camera intrinsics, LiDAR-to-camera transformation
>
> I was facing issues that I mentioned because my config is not in the right format.
>
> I did notice the issue that you are facing wrt your data.

@VeeranjaneyuluToka ,

I had previously closed this issue, but I would like to ask you something specific. I noticed that you uploaded a dataset to Xtreme1. I would like to know if this dataset is custom. If so, would it be possible for you to share 10 frames of LiDAR and camera data with me? I am interested in studying a custom scenario where camera and LiDAR calibration was successfully achieved.

Additionally, I wanted to ask if you have ever tested the process of annotating a dataset, exporting it, deleting it from the tool, and then re-importing it along with the annotations in order to continue editing. I am curious to know if it is possible to fully reconstruct an annotated scenario after it has been exported and re-imported.

The reason I am asking is that I have not been able to calibrate my data properly, and I also faced issues when trying to re-import annotated data to rebuild the scenario. I would appreciate any guidance or help you can offer on this.

Thank you in advance for your assistance!

@LuizGuzzo LuizGuzzo reopened this Dec 12, 2024