pad = None in ops.py throws error for checks that require iterables #2377

Open
gudgud96 opened this issue Oct 29, 2024 · 1 comment
Labels: bug (Unexpected behaviour that should be corrected), triaged (Reviewed and examined, release has been assigned if applicable)

@gudgud96

🐞Describing the bug

Currently in converters/mil/frontend/torch/ops.py, for convolution, pad is set to None when all paddings in the convolution/transposed convolution are zero. See this line of code.

Several checks in the code require pad to be an iterable, so having it set to None throws errors. Examples are the sum check and the pad copy.

In this case, setting pad to [0] would work, but I am not sure of the intention behind using None instead.
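
As a rough illustration of the workaround I have in mind (a minimal sketch only; _normalize_pad is a hypothetical helper, not an existing coremltools function, and how many zeros pad should actually contain per spatial dimension is an open question):

def _normalize_pad(pad, num_spatial_dims=1):
    # Hypothetical helper, not part of coremltools: coerce pad=None into a
    # zero-padding list so iterable-based checks such as sum(pad) don't raise.
    if pad is None:
        return [0] * num_spatial_dims
    return list(pad)


# Mimics the failing check from _convolution (see the stack trace below)
# for the Conv1d/ConvTranspose1d case with padding=0:
pad, output_padding = None, [1]
pad = _normalize_pad(pad)
if sum(pad) == 0 and any(output_padding):
    print("output_padding branch reached without a TypeError")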

Stack Trace

File "/Users/haohao/miniconda3/envs/xxx/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 1226, in _convolution
    if sum(pad) == 0 and any(output_padding):
TypeError: 'NoneType' object is not iterable

To Reproduce

import torch
from torch import nn


class SimpleUNetLike(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(1, 16, 3, padding=0, stride=2)
        self.convt1 = nn.ConvTranspose1d(16, 1, 3, padding=0, stride=2)
    
    def forward(self, x):
        x = self.conv1(x)
        print("conv1", x.shape)
        x = self.convt1(x, output_size=(1024,))     # (1023,) if without output_padding
        print("convt1", x.shape)
        return x


if __name__ == "__main__":
    model = SimpleUNetLike()
    model.eval()
    x = torch.randn(1, 1, 1024)
    
    traced_model = torch.jit.trace(model, x)
    import coremltools as ct
    ct_model = ct.convert(
        traced_model,
        convert_to="mlprogram",
        inputs=[
            ct.TensorType(name="x", shape=x.shape),
        ],
        outputs=[
            ct.TensorType(name="y"),
        ],
    )

System environment (please complete the following information):

  • coremltools version: 8.0 (the code is still problematic on the current main branch)
  • OS (e.g. MacOS version or Linux type): Sonoma
  • Any other relevant version information (e.g. PyTorch or TensorFlow version):

@YifanShenSZ
Collaborator

YifanShenSZ commented Oct 31, 2024

Nice catch 🚀 I missed it, and our unit tests missed these cases.

YifanShenSZ self-assigned this Nov 1, 2024