Hello! Thank you for your work!
I ran into a problem recently. Can this code run on an Orin AGX (ARM architecture)? I installed CUDA 11.4 and a PyTorch build matching the Orin AGX, and the third-party libraries installed normally, but running the code fails:
nvidia@nvidia-desktop:~/ssd/zzq/MonoGS$ python slam.py --config ./configs/rgbd/replica/office0.yaml
MonoGS: saving results in results/Replica_office0/2024-12-01-20-13-29
MonoGS: training_setup Done.
Traceback (most recent call last):
  File "slam.py", line 263, in <module>
    slam = SLAM(config, save_dir=save_dir)
  File "slam.py", line 112, in __init__
    backend_process.start()
  File "/home/nvidia/miniconda3/envs/SLAM/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/home/nvidia/miniconda3/envs/SLAM/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/home/nvidia/miniconda3/envs/SLAM/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/home/nvidia/miniconda3/envs/SLAM/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/home/nvidia/miniconda3/envs/SLAM/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/home/nvidia/miniconda3/envs/SLAM/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/home/nvidia/miniconda3/envs/SLAM/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "/home/nvidia/miniconda3/envs/SLAM/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 270, in reduce_tensor
    event_sync_required) = storage._share_cuda_()
  File "/home/nvidia/miniconda3/envs/SLAM/lib/python3.8/site-packages/torch/storage.py", line 1034, in _share_cuda_
    return self._untyped_storage._share_cuda_(*args, **kwargs)
RuntimeError: CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
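For context on where this fails: with the spawn start method, `multiprocessing` serializes every `Process` argument with a `ForkingPickler` before the child starts. PyTorch hooks that pickler so CUDA tensors are shared via CUDA IPC (`_share_cuda_()`), and CUDA IPC is, as far as I know, not supported by the integrated-GPU driver on Jetson/Tegra boards, which would explain the `operation not supported` error. Below is a minimal stdlib-only sketch of that serialization step (the payload is a hypothetical stand-in for the state passed to the backend process; no PyTorch required):

```python
import io
import pickle
from multiprocessing.reduction import ForkingPickler

# Hypothetical stand-in for the objects MonoGS hands to its backend process.
# Plain CPU data pickles fine; a CUDA tensor would instead route through
# torch's registered reducer and call _share_cuda_() at exactly this step.
payload = {"iteration": 0, "points": [0.1, 0.2, 0.3]}

buf = io.BytesIO()
# Same call as in the traceback: reduction.dump -> ForkingPickler(...).dump(obj)
ForkingPickler(buf, pickle.HIGHEST_PROTOCOL).dump(payload)

buf.seek(0)
restored = pickle.load(buf)
print(restored == payload)  # True
```

Keeping the tensors that cross the process boundary on the CPU (or sharing them some other way) would avoid the CUDA IPC path entirely, though whether that is practical for this codebase is a separate question.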