v0.4.5: Faster, memory efficient Wide component

@jrzaurin released this 09 Aug 10:15

Version 0.4.5 includes a new implementation of the Wide (linear) component via an Embedding layer. Previous versions implemented this component as a Linear layer that received one-hot encoded features. For large datasets this was slow and memory inefficient (see #18). We have therefore replaced that implementation with an Embedding layer that receives label encoded features. Although the two implementations are mathematically equivalent, the latter is faster and significantly more memory efficient, since the one-hot matrix never needs to be materialised.
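
To see why the two are equivalent, note that multiplying a one-hot vector by a weight matrix simply selects one column of that matrix, which is exactly what an embedding lookup does. The following is a minimal sketch in plain PyTorch (not pytorch-widedeep's actual code; the layer sizes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_categories, pred_dim = 5, 1

linear = nn.Linear(n_categories, pred_dim)
embedding = nn.Embedding(n_categories, pred_dim)
# share weights so both layers represent the same linear model
embedding.weight.data = linear.weight.data.t().clone()

labels = torch.tensor([0, 3, 4])                    # label encoded features
one_hot = F.one_hot(labels, n_categories).float()   # dense one-hot matrix

out_linear = linear(one_hot)                        # dense matmul
out_embedding = embedding(labels) + linear.bias     # direct lookup

assert torch.allclose(out_linear, out_embedding)
```

The lookup avoids both building the one-hot matrix and the matrix multiplication, which is where the speed and memory gains come from.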

Note also that the printed loss for regression problems is now MSE rather than RMSE. This keeps the printed loss consistent with the metrics saved in the History callback.
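
For reference, the two quantities differ only by a square root, so the old value is easy to recover. A minimal sketch in plain PyTorch (not part of the library's code):

```python
import torch

y_true = torch.tensor([3.0, 5.0, 2.5])
y_pred = torch.tensor([2.5, 5.0, 3.0])

mse = torch.mean((y_true - y_pred) ** 2)  # now printed and saved in History
rmse = torch.sqrt(mse)                    # the value printed in previous versions
```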

NOTE: this does not change how the package is used; pytorch-widedeep can be used in exactly the same way as in previous versions. However, since the model components have changed, models generated with previous versions are not compatible with this one.