
BatchNorm doesn't work as expected #2650

Answered by laggui
wangjiawen2013 asked this question in Q&A

The result is not wrong 🙂

The difference lies in the training vs. inference computation of a batch norm module. If you call m.eval() on the PyTorch module, you should get equivalent results.
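As a rough illustration (generic PyTorch, not the snippet from the original issue): in training mode batch norm normalizes with the current batch's statistics, while in eval mode it uses the tracked running statistics, so the two modes give different outputs for the same input.

```python
import torch
import torch.nn as nn

# Sketch: the same BatchNorm1d layer produces different outputs in
# training mode (per-batch statistics) vs. eval mode (running statistics).
m = nn.BatchNorm1d(4)
x = torch.randn(8, 4)

m.train()
y_train = m(x)  # normalized with this batch's mean/variance

m.eval()
y_eval = m(x)   # normalized with the running mean/variance

print(torch.allclose(y_train, y_eval))  # usually False
```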

With Burn you have to be explicit when using autodiff. In PyTorch it's kind of the opposite: it tracks gradients and keeps the autodiff graph by default unless you use the torch.no_grad() context (see the sketch below).
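A small sketch of that default (again generic PyTorch, not the original code): the forward pass records an autodiff graph unless wrapped in torch.no_grad(), which is roughly the mirror image of opting in via Burn's Autodiff backend wrapper.

```python
import torch
import torch.nn as nn

m = nn.BatchNorm1d(4)
m.eval()
x = torch.randn(8, 4)

# By default PyTorch records operations for autodiff, so the output is
# attached to a graph through the layer's parameters.
y = m(x)
print(y.requires_grad)  # True

# Inside no_grad() no graph is recorded, similar in spirit to running a
# Burn module on a plain (non-Autodiff) backend.
with torch.no_grad():
    y = m(x)
print(y.requires_grad)  # False
```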

This is also explained in the autodiff section.

This discussion was converted from issue #2642 on January 02, 2025 14:32.