
Building my own policy fails with TypeError: '>' not supported between instances of 'int' and 'dict' #555

Open
zhouzhq2021 opened this issue Dec 7, 2024 · 2 comments

@zhouzhq2021

I improved the ACT policy in the lerobot framework and created a new policy named myact. I mainly did the following (a minimal sketch of the configuration file follows the list):
Created a my_act folder under the lerobot/common/policies/ path
Created configuration_my_act.py and modeling_my_act.py inside the my_act folder
Created lerobot/configs/policy/myact.yaml, with its name field changed to: myact
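
For reference, here is a minimal sketch of what the configuration dataclass in configuration_my_act.py could look like, assuming it mirrors the dataclass pattern of configuration_act.py. Only the fields and the __post_init__ check that appear in the traceback below are shown; the class name and defaults are illustrative, not the actual lerobot code:

    # configuration_my_act.py -- minimal illustrative sketch, not the real file.
    from dataclasses import dataclass

    @dataclass
    class MyACTConfig:
        # Only the fields involved in the error below are shown.
        chunk_size: int = 100       # number of actions predicted per model invocation
        n_action_steps: int = 100   # number of those actions actually executed

        def __post_init__(self):
            # Same sanity check as configuration_act.py: you cannot execute more
            # action steps per invocation than the predicted chunk contains.
            if self.n_action_steps > self.chunk_size:
                raise ValueError(
                    "The chunk size is the upper bound for the number of action steps "
                    f"per model invocation. Got {self.n_action_steps} for n_action_steps "
                    f"and {self.chunk_size} for chunk_size."
                )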

But when I'm done and run the following command, I get an error:

xvfb-run python lerobot/scripts/train.py \
    hydra.run.dir=mypolicy/train/AlohaInsertion-v0 \
    policy=myact \
    dataset_repo_id=lerobot/aloha_sim_insertion_human \
    env=aloha \
    env.task=AlohaInsertion-v0

INFO 2024-12-07 17:01:50 n/logger.py:106 Logs will be saved locally.
INFO 2024-12-07 17:01:50 ts/train.py:337 make_dataset
Fetching 56 files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 56/56 [00:00<00:00, 9842.48it/s]
INFO 2024-12-07 17:01:56 ts/train.py:350 make_env
INFO 2024-12-07 17:01:56 __init__.py:88 MUJOCO_GL is not set, so an OpenGL backend will be chosen automatically.
INFO 2024-12-07 17:01:57 __init__.py:96 Successfully imported OpenGL backend: %s
INFO 2024-12-07 17:01:57 __init__.py:31 MuJoCo library version is: %s
INFO 2024-12-07 17:02:03 ts/train.py:353 make_policy

Error executing job with overrides: ['policy=act', 'dataset_repo_id=lerobot/aloha_sim_insertion_human', 'env=aloha', 'env.task=AlohaInsertion-v0']
Traceback (most recent call last):
  File "/root/autodl-tmp/lerobot/lerobot/scripts/train.py", line 677, in train_cli
    train(
  File "/root/autodl-tmp/lerobot/lerobot/scripts/train.py", line 354, in train
    policy = make_policy(
  File "/root/autodl-tmp/lerobot/lerobot/common/policies/factory.py", line 105, in make_policy
    policy = policy_cls(policy_cfg, dataset_stats)
  File "<string>", line 26, in __init__
  File "/root/autodl-tmp/lerobot/lerobot/common/policies/act/configuration_act.py", line 158, in __post_init__
    if self.n_action_steps > self.chunk_size:
TypeError: '>' not supported between instances of 'int' and 'dict'

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

The same error also occurs when I run lerobot's own ACT policy. Do you know how to solve this? Thank you!

@zhouzhq2021 (Author)

I added print statements to check and found that chunk_size had been reassigned, but I could not find where:

def __post_init__(self):
    print(f"chunk_size type: {type(self.chunk_size)}")
    print(f"chunk_size value: {self.chunk_size}")
    if self.n_action_steps > self.chunk_size:
        raise ValueError(
            f"The chunk size is the upper bound for the number of action steps per model invocation. Got "
            f"{self.n_action_steps} for n_action_steps and {self.chunk_size} for chunk_size."
        )

chunk_size type: <class 'int'>
chunk_size value: 100
chunk_size type: <class 'dict'>
chunk_size value: {'action': {'max': tensor([...]), 'mean': tensor([...]), 'min': tensor([...]), 'std': tensor([...])}, 'episode_index': {...}, 'frame_index': {...}, 'index': {...}, 'next.done': {...}, 'observation.images.top': {...}, 'observation.state': {...}, 'timestamp': {...}}

So on the second call, chunk_size no longer holds an integer: it holds the dataset statistics dictionary.
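
Here is a minimal, self-contained reproduction of this failure mode, assuming the config is a plain dataclass like configuration_act.py (the names below are hypothetical): when a dictionary is passed positionally where the chunk_size int is expected, __post_init__ raises exactly this TypeError.

    # repro_sketch.py -- hypothetical example, not lerobot code.
    from dataclasses import dataclass

    @dataclass
    class TinyConfig:
        chunk_size: int = 100
        n_action_steps: int = 100

        def __post_init__(self):
            if self.n_action_steps > self.chunk_size:
                raise ValueError("n_action_steps must not exceed chunk_size")

    # A stand-in for the dataset statistics dictionary printed above.
    dataset_stats = {"action": {"mean": [0.0], "std": [1.0]}}

    # The dict binds to chunk_size (the first positional field), so the check in
    # __post_init__ compares 100 > dict and fails with:
    # TypeError: '>' not supported between instances of 'int' and 'dict'
    TinyConfig(dataset_stats)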

@zhouzhq2021 (Author)

I found the problem: there was a mistake in my changes to the factory.py file. It is now fixed. Anyway, anyone out there trying to create their own policies in the lerobot framework is welcome to discuss!
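
For anyone hitting the same error: based only on the call visible in the traceback (factory.py runs policy = policy_cls(policy_cfg, dataset_stats)), the usual culprit is letting dataset_stats reach the config constructor, where it binds positionally to an int field such as chunk_size. A hedged, self-contained sketch of the distinction (all names below are placeholders, not the actual lerobot classes):

    # fix_sketch.py -- hypothetical illustration, not lerobot code.
    from dataclasses import dataclass

    @dataclass
    class TinyConfig:
        chunk_size: int = 100
        n_action_steps: int = 100

    class TinyPolicy:
        def __init__(self, config: TinyConfig, dataset_stats: dict | None = None):
            # The dataset statistics belong to the policy (e.g. for input/output
            # normalization), not to the config dataclass.
            self.config = config
            self.dataset_stats = dataset_stats

    dataset_stats = {"action": {"mean": [0.0], "std": [1.0]}}  # stand-in for the real stats

    # Wrong: TinyConfig(dataset_stats) would bind the dict to chunk_size.
    # Right: build the config from the policy hyperparameters only, and pass the
    # statistics to the policy constructor, mirroring the call in the traceback.
    cfg = TinyConfig(chunk_size=100, n_action_steps=100)
    policy = TinyPolicy(cfg, dataset_stats)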
