camera extrinsic matrix plot problem #292
When using basic.ai, the camera position is derived from your camera's extrinsic matrix. You can check whether the camera position encoded in that matrix is correct.
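If it helps with that check, below is a minimal numpy sketch (with placeholder values, not the actual calibration) that recovers the camera position from a 4x4 extrinsic under the two common conventions; which formula applies depends on whether the matrix maps camera coordinates into the LiDAR frame or the other way around.

```python
import numpy as np

# Hypothetical 4x4 extrinsic (row-major); values for illustration only.
T = np.array([
    [0.0, -1.0,  0.0,  0.5],
    [0.0,  0.0, -1.0,  1.2],
    [1.0,  0.0,  0.0, -0.3],
    [0.0,  0.0,  0.0,  1.0],
])
R, t = T[:3, :3], T[:3, 3]

# Convention A: T maps camera coordinates into the LiDAR frame ->
# the camera sits at the translation column itself.
cam_pos_if_cam_to_lidar = t

# Convention B: T maps LiDAR coordinates into the camera frame ->
# the camera position in the LiDAR frame comes from inverting the transform.
cam_pos_if_lidar_to_cam = -R.T @ t

print(cam_pos_if_cam_to_lidar, cam_pos_if_lidar_to_cam)
```

If neither result lands where the camera physically sits, the matrix itself is suspect; if one of them does, the tool and the data may simply disagree on the convention.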
Hello, I’d like to update with a proof of concept that demonstrates the accuracy of my sensor transformation matrices, which I believe are set up correctly. Despite this, when testing in Xtreme1, the camera appears in an unexpected, seemingly random position.

Problem Description
I am encountering issues when setting up the camera transformation relative to the LiDAR point cloud in Xtreme1 (after transitioning from basic.ai). The setup should allow the camera and the LiDAR point cloud to be visualized together precisely in the scene, but in Xtreme1 the camera appears misaligned.

Configuration and Procedure
Testing in Xtreme1: I created a test case that can be easily uploaded to Xtreme1.

Current Issue in Xtreme1
Despite the transformations appearing correct locally, when uploaded to Xtreme1 the camera is displayed in an incorrect position. It does not match the expected alignment seen in local testing, and I am unsure of the cause. Any assistance in troubleshooting this would be greatly appreciated. Please let me know if additional details or access to my code would help clarify the setup. Here is the zip with the files and the code inside it. And this is the result inside Xtreme1, completely uncalibrated. It's worth noting that I applied a rotation to the camera to correct the camera's perspective relative to the LiDAR, but this only adjusts the rotation; the XYZ translation is not affected.
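A minimal sketch of how such a per-frame camera config could be written, assuming the basic.ai/Xtreme1-style `camera_internal`/`camera_external` fields with a row-major flattened 4x4 extrinsic (the field names and ordering are assumptions to verify against the docs, and all numeric values and file names are placeholders):

```python
import json
import numpy as np

# Placeholder extrinsic and intrinsics; replace with the real calibration.
T = np.eye(4)                                  # 4x4 camera extrinsic
fx, fy, cx, cy = 1000.0, 1000.0, 960.0, 540.0  # pinhole intrinsics

config = [{
    "camera_internal": {"fx": fx, "fy": fy, "cx": cx, "cy": cy},
    "camera_external": T.flatten().tolist(),   # 16 values, row-major (assumed)
}]

# One config file per frame, named to match the image and point cloud files.
with open("000001.json", "w") as f:
    json.dump(config, f, indent=2)
```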
I noticed distortion in your image. Was it taken with a fisheye camera? If so, distortion parameters need to be added.
You are correct; I had forgotten to include the distortion parameters in the tests. I have now added them and ensured they are being used. Here are the distortion parameters for the camera:
I have applied these parameters to the image for distortion correction. However, even with the distortion parameters in place, the result remains the same. I suspect the issue might still be related to how Xtreme1 interprets the transformation. I wonder if it could be a unit-conversion problem, such as the system expecting values in a different unit (e.g., meters vs. inches or similar); I'm not sure, just speculating. Here is the updated result after applying the distortion parameters. Unfortunately, the camera position remains incorrect. Let me know if you have any additional suggestions or insights into this issue. Thank you for your assistance!
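As a minimal sketch of the undistortion step with OpenCV, assuming a standard pinhole model with placeholder intrinsics, distortion coefficients, and file paths (for a true fisheye lens, `cv2.fisheye.undistortImage` with a 4-coefficient model would be the appropriate call instead):

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients (k1, k2, p1, p2, k3);
# substitute the real values from the calibration. File paths are hypothetical.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])

img = cv2.imread("camera_image_0/000001.jpg")
undistorted = cv2.undistort(img, K, dist)
cv2.imwrite("undistorted.jpg", undistorted)
```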
The impact of the distortion parameters is minimal, and the open-source project does not currently support distorted images.
Thank you for the response. I understand that the distortion parameters do not have a significant effect. Regarding the camera's position: it should actually be further to the right and lower than where you marked in the provided image. To confirm this, I plotted a cross in the point cloud to visualize where the extrinsic-matrix transformation should ideally place the camera (see the sketch below). However, Xtreme1 does not seem to reproduce this position; the resulting camera placement remains inconsistent with the expected transformation. Would it be possible to define the camera position manually within Xtreme1? If so, I could work backward from that manual definition to understand how the matrix is being interpreted. This might help me identify whether there is an underlying issue in the processing pipeline or an adjustment needed on my side. I'm also open to any specific sequence of steps or commands you might recommend to help refine or verify the extrinsic matrix.
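A minimal sketch of how such a cross marker can be generated and appended to the cloud with numpy, using the camera position derived from the extrinsic (see the earlier sketch); the marker size and the variable names are illustrative only:

```python
import numpy as np

def cross_marker(center, size=0.5, step=0.02):
    """Three short axis-aligned segments of points centered on `center`."""
    s = np.arange(-size, size + step, step)
    zeros = np.zeros_like(s)
    x_arm = np.stack([s, zeros, zeros], axis=1)
    y_arm = np.stack([zeros, s, zeros], axis=1)
    z_arm = np.stack([zeros, zeros, s], axis=1)
    return np.vstack([x_arm, y_arm, z_arm]) + np.asarray(center)

# cam_pos: camera position derived from the extrinsic (see the earlier sketch)
# points:  Nx3 LiDAR cloud loaded from the .pcd file
# marked_cloud = np.vstack([points, cross_marker(cam_pos)])
```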
@LuizGuzzo, you are using camera and LiDAR feeds, but when I check your camera config, there are different extrinsics in each .json file. I am thinking that all the .json files should have the same calibration parameters, and the same goes for the images and point clouds, shouldn't they? BTW, I am also trying to build my own dataset to feed to Xtreme1.
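One quick way to confirm whether the extrinsics really differ across the per-frame configs is to diff the `camera_external` entries programmatically; the sketch below assumes the basic.ai-style schema (a list with one dict per camera) and a hypothetical `camera_config/` folder:

```python
import glob
import json
import numpy as np

# Quick consistency check: do all per-frame configs carry the same extrinsic?
paths = sorted(glob.glob("camera_config/*.json"))
externals = []
for path in paths:
    with open(path) as f:
        cfg = json.load(f)
    externals.append(np.array(cfg[0]["camera_external"], dtype=float).reshape(4, 4))

reference = externals[0]
for path, ext in zip(paths, externals):
    if not np.allclose(ext, reference, atol=1e-6):
        print("extrinsic differs from the first frame:", path)
```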
@VeeranjaneyuluToka This approach refines the extrinsics by interpolating the camera's pose relative to the time and the global pose, aiming for higher precision. However, I believe it should work if you define fixed extrinsics instead.
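For completeness, a minimal sketch of that kind of pose interpolation using SciPy (SLERP for the rotation, linear interpolation for the translation); the timestamps, quaternions, and positions are made-up values, not taken from the actual dataset:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Made-up timestamps and poses bracketing the camera capture time.
t0, t1 = 0.00, 0.10        # pose timestamps (seconds)
t_cam = 0.04               # camera capture timestamp (seconds)

rotations = Rotation.from_quat([[0.0, 0.0, 0.00, 1.0],       # pose at t0 (x, y, z, w)
                                [0.0, 0.0, 0.05, 0.99875]])  # pose at t1
positions = np.array([[10.0, 5.0, 0.0],
                      [10.8, 5.2, 0.0]])

# SLERP the rotation, linearly interpolate the translation.
alpha = (t_cam - t0) / (t1 - t0)
rot_cam = Slerp([t0, t1], rotations)(t_cam)
pos_cam = (1.0 - alpha) * positions[0] + alpha * positions[1]

print(rot_cam.as_matrix(), pos_cam)
```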
@LuizGuzzo, thanks for the quick reply. I followed the document, prepared the dataset, and verified it against your files as well. I have a camera_config in the same format as yours (all files have the same content), plus camera_image_0 and lidar_point_cloud_0. But when I upload, it shows me both the camera and LiDAR feeds when I just open the dataset; when I try to annotate or view, it shows only the LiDAR feed. Do you know any reason for this behavior?
This happens because the LiDAR and the camera are not calibrated. If they were, the bounding box drawn in the LiDAR would be projected onto the image. That's exactly what I'm trying to solve: I'm looking for an extrinsic matrix that calibrates the sensors, enabling mutual annotation in the point cloud and the image. Or are you saying that you don't see the image at all? If that's the case, it might be because the point cloud, image, and configuration don't share the same name; they need exactly the same file name for Xtreme1 to link them together.
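The projection that links a LiDAR box to the image boils down to applying the extrinsic and then the intrinsic. A minimal sketch, assuming the extrinsic maps LiDAR coordinates into the camera frame (if it is stored the other way around, invert it first):

```python
import numpy as np

def project_points(points_lidar, T_lidar_to_cam, K):
    """Project Nx3 LiDAR points into pixel coordinates.

    Returns the pixel coordinates of the points lying in front of the camera,
    plus the boolean mask selecting them.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0
    uvw = (K @ pts_cam[in_front].T).T
    pixels = uvw[:, :2] / uvw[:, 2:3]
    return pixels, in_front
```

If the projected points do not land on the corresponding structures in the image, the extrinsic (or its convention) is the likely culprit.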
Yes, I do not see any image in the annotation panel; I only see the LiDAR point cloud. BTW, I cloned the codebase from this link https://github.com/xtreme1-io/xtreme1.git and started using it. Is that not the same codebase you are using? I am sure the file names are the same among camera_config, camera_image_0, and lidar_point_cloud_0.
Strange... it should be working. Unless the calibration is completely off, the image should appear. What I can suggest (because it took me a while to notice this) is to check whether there is a ">" button on the left side of the screen, vertically centered. If you click on it, it should display the images you uploaded; here's a photo demonstrating it. If it doesn't appear, try advancing a few frames to see if any image loads. If not... I'm not sure what could be wrong. If the file names are identical and the calibration is at least minimally acceptable, it should work.
Download the file I sent at the beginning of the issue and see if you can run it. Make sure to select "annotate" instead of "view" (I’m not sure if this will make a difference, but it’s what I usually select). If my example works, then there might be something wrong with your data.
I see the same behavior with your files as well. I believe you might have used the released codebase then, as they mentioned in the README.
Sorry, it does work with your files, but I do not see the camera feed when I press annotate; rather, when I press next, the ">" appears, and when I press on ">", the image gets displayed. I was also able to figure out that there was some issue in my .json files; I fixed it and it works fine now. And I am not facing the shift issue that you are pointing out above. Thanks for your prompt reply!
I'm glad you solved the image display problem. As for the shift: did you make any changes to solve it? When you tested with my data, did the error persist? I'm about to give up on this tool and move to Scalabel >_>
I did not make any changes to my data; I passed the following. I was facing the issues I mentioned because my config was not in the right format. I did notice the issue that you are facing with your data.
I'm glad to hear that you managed to solve your issue. It seems there is some strange error with my data: I replicated the same scenario in Scalabel (another annotation tool) and encountered the same problem. While testing, transforming the point cloud into the camera's perspective seemed to solve the issue, but I then had to reset the camera's position. I'll take a closer look at our data and try to solve the problem by placing the cameras in the correct position instead of moving the points to them (see the sketch below). I'm confident that the issue is with my dataset. I would like to thank everyone for their help, assistance, and attention.
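For what it's worth, the two formulations should be equivalent: if applying some transform T to the point cloud aligns it with the camera, then placing the camera at inv(T) in the LiDAR frame should give the same result. A minimal sketch with placeholder values:

```python
import numpy as np

# Hypothetical transform that, applied to the point cloud, made the overlay
# line up locally (values for illustration only).
T_cam_from_lidar = np.array([
    [0.0, -1.0,  0.0,  0.5],
    [0.0,  0.0, -1.0,  1.2],
    [1.0,  0.0,  0.0, -0.3],
    [0.0,  0.0,  0.0,  1.0],
])

# The equivalent camera pose in the LiDAR frame is the inverse of that transform.
T_lidar_from_cam = np.linalg.inv(T_cam_from_lidar)
print("camera position in LiDAR frame:", T_lidar_from_cam[:3, 3])
```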
I had previously closed this issue, but I would like to ask you something specific. I noticed that you uploaded a dataset to Xtreme1. I would like to know if this dataset is custom; if so, would it be possible for you to share 10 frames of LiDAR and camera data with me? I am interested in studying a custom scenario where camera and LiDAR calibration was successfully achieved.

Additionally, I wanted to ask if you have ever tested the process of annotating a dataset, exporting it, deleting it from the tool, and then re-importing it along with the annotations in order to continue editing. I am curious to know whether it is possible to fully reconstruct an annotated scenario after it has been exported and re-imported.

The reason I am asking is that I have not been able to calibrate my data properly, and I also faced issues when trying to re-import annotated data to rebuild the scenario. I would appreciate any guidance or help you can offer. Thank you in advance for your assistance!
I'm trying to add the camera's extrinsic matrix correctly to calibrate the LiDAR and camera, but when I go to Xtreme1 (or basic.ai) it always gives me a wildly wrong position. In my test, the cross on the right marks where the transformation says the camera should be, yet the tool places the camera in an apparently random position, and I have no idea how to debug this to make it work correctly. Does anyone have any suggestions? Below is the image (I used the screenshot from basic.ai because it shows the camera's line of sight, but the results are the same with Xtreme1).
As you can see, the cross is on the right and the camera line is on the left; the camera ends up below the ground and facing the opposite direction (the other crosses are other sensors).
And here is the plot made by another program, using the same JSON that I passed to Xtreme1/basic.ai, showing the correct camera placement.