Error when training a new model in CPU mode #16

Closed
GenevieveBuckley opened this issue Jun 17, 2022 · 2 comments

Comments

@GenevieveBuckley
When using the empanada-napari plugin to train a new model from scratch in CPU mode, I get this error at the end of the training iterations (presumably when the model is being exported?):

File ~/mambaforge/envs/napari-empanada/lib/python3.9/site-packages/torch/cuda/__init__.py:210, in _lazy_init()
    206     raise RuntimeError(
    207         "Cannot re-initialize CUDA in forked subprocess. To use CUDA with "
    208         "multiprocessing, you must use the 'spawn' start method")
    209 if not hasattr(torch._C, '_cuda_getDeviceCount'):
--> 210     raise AssertionError("Torch not compiled with CUDA enabled")
    211 if _cudart is None:
    212     raise AssertionError(
    213         "libcudart functions unavailable. It looks like you have a broken build?")

AssertionError: Torch not compiled with CUDA enabled
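
The failure is easy to reproduce on any CPU-only PyTorch build, since calling .cuda() on any module raises the same assertion. A minimal sketch (not from the original report):

import torch
import torch.nn as nn

# On a CPU-only PyTorch build, .cuda() triggers torch.cuda._lazy_init(),
# which raises the AssertionError shown above.
model = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3)

try:
    model.cuda()
except AssertionError as err:
    print(err)  # -> Torch not compiled with CUDA enabled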
Full traceback (truncated at the top by terminal scrollback):
[... model repr truncated at the top by scrollback; the visible tail shows
the QuantizablePanopticBiFPNPR model: stacked BiFPNLayer blocks (TopDownFPN
and BottomUpFPN, each with Resample2d resamplings and SeparableConv2d +
BatchNorm2d + SiLU combine blocks), a BiFPNDecoder semantic decoder,
PanopticDeepLabHead heads for semantic segmentation, instance centers, and
instance offsets, and a QuantizablePointRendSemSegHead ...]
)
        device = None

File ~/mambaforge/envs/napari-empanada/lib/python3.9/site-packages/torch/nn/modules/module.py:578, in Module._apply(self=QuantizablePanopticBiFPNPR(...), fn=<function Module.cuda.<locals>.<lambda>>)
    576 def _apply(self, fn):
    577     for module in self.children():
--> 578         module._apply(fn)
        module = QuantizableResNet(
  (conv1): ConvReLU2d(
    (0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
    (1): ReLU()
  )
  (bn1): Identity()
  (relu): Identity()
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  [... layer1-layer4: Sequential stacks of QuantizableBottleneck blocks; full repr omitted ...]
)
        fn = <function Module.cuda.<locals>.<lambda> at 0x16eb559d0>
    580     def compute_should_use_set_data(tensor, tensor_applied):
    581         if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):
    582             # If the new tensor has compatible tensor type as the existing tensor,
    583             # the current behavior is to change the tensor in-place using `.data =`,
   (...)
    588             # global flag to let the user control whether they want the future
    589             # behavior of overwriting the existing tensor or not.

File ~/mambaforge/envs/napari-empanada/lib/python3.9/site-packages/torch/nn/modules/module.py:578, in Module._apply(self=QuantizableResNet(...), fn=<function Module.cuda.<locals>.<lambda>>)
    576 def _apply(self, fn):
    577     for module in self.children():
--> 578         module._apply(fn)
        module = ConvReLU2d(
  (0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
  (1): ReLU()
)
        fn = <function Module.cuda.<locals>.<lambda> at 0x16eb559d0>
    580     def compute_should_use_set_data(tensor, tensor_applied):
    581         if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):
    582             # If the new tensor has compatible tensor type as the existing tensor,
    583             # the current behavior is to change the tensor in-place using `.data =`,
   (...)
    588             # global flag to let the user control whether they want the future
    589             # behavior of overwriting the existing tensor or not.

File ~/mambaforge/envs/napari-empanada/lib/python3.9/site-packages/torch/nn/modules/module.py:578, in Module._apply(self=ConvReLU2d(
  (0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
  (1): ReLU()
), fn=<function Module.cuda.<locals>.<lambda>>)
    576 def _apply(self, fn):
    577     for module in self.children():
--> 578         module._apply(fn)
        module = Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
        fn = <function Module.cuda.<locals>.<lambda> at 0x16eb559d0>
    580     def compute_should_use_set_data(tensor, tensor_applied):
    581         if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):
    582             # If the new tensor has compatible tensor type as the existing tensor,
    583             # the current behavior is to change the tensor in-place using `.data =`,
   (...)
    588             # global flag to let the user control whether they want the future
    589             # behavior of overwriting the existing tensor or not.

File ~/mambaforge/envs/napari-empanada/lib/python3.9/site-packages/torch/nn/modules/module.py:601, in Module._apply(self=Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3)), fn=<function Module.cuda.<locals>.<lambda>>)
    597 # Tensors stored in modules are graph leaves, and we don't want to
    598 # track autograd history of `param_applied`, so we have to use
    599 # `with torch.no_grad():`
    600 with torch.no_grad():
--> 601     param_applied = fn(param)
        param = Parameter containing:
tensor([[[[-1.3827e-01,  2.2338e-01, -8.5514e-03,  ...,
          [... full 64x1x7x7 conv1 weight dump omitted ...]
            6.9371e-02, -6.8876e-02]]]], requires_grad=True)
        fn = <function Module.cuda.<locals>.<lambda> at 0x16eb559d0>
    602 should_use_set_data = compute_should_use_set_data(param, param_applied)
    603 if should_use_set_data:

File ~/mambaforge/envs/napari-empanada/lib/python3.9/site-packages/torch/nn/modules/module.py:688, in Module.cuda.<locals>.<lambda>(t=Parameter containing: tensor(...))
    671 def cuda(self: T, device: Optional[Union[int, device]] = None) -> T:
    672     r"""Moves all model parameters and buffers to the GPU.
    673
    674     This also makes associated parameters and buffers different objects. So
   (...)
    686         Module: self
    687     """
--> 688     return self._apply(lambda t: t.cuda(device))
        device = None
        t = Parameter containing:
tensor([...], requires_grad=True)  [same conv1 weight dumped again; duplicate omitted]

File ~/mambaforge/envs/napari-empanada/lib/python3.9/site-packages/torch/cuda/__init__.py:210, in _lazy_init()
    206     raise RuntimeError(
    207         "Cannot re-initialize CUDA in forked subprocess. To use CUDA with "
    208         "multiprocessing, you must use the 'spawn' start method")
    209 if not hasattr(torch._C, '_cuda_getDeviceCount'):
--> 210     raise AssertionError("Torch not compiled with CUDA enabled")
    211 if _cudart is None:
    212     raise AssertionError(
    213         "libcudart functions unavailable. It looks like you have a broken build?")

AssertionError: Torch not compiled with CUDA enabled
@GenevieveBuckley (Author)

I'm sure this problem doesn't come up very often, since models would typically be trained only on the GPU. (I found it because I'm testing out empanada at the hackathon and only have a Mac laptop with me at the moment.)

I'll see if I can increase the amount of history my terminal stores and re-upload a more detailed traceback.
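
A generic way to capture the full traceback regardless of terminal scrollback is to write it to a file from Python; a sketch with a hypothetical run_training() entry point, not empanada's actual API:

import traceback

try:
    run_training()  # hypothetical stand-in for the actual training call
except Exception:
    # Write the complete traceback to disk, independent of scrollback limits.
    with open("traceback.log", "w") as f:
        traceback.print_exc(file=f)
    raise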

@conradry (Contributor)

The error was caused by a typo in the model export code; the fix is to guard the .cuda() call:

if torch.cuda.is_available():
    model.cuda()
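
A slightly more general pattern for the same guard (a sketch of the idea, not the exact patch) resolves the device once and moves the model with .to(), which works on CPU-only builds:

import torch

# Fall back to CPU when CUDA isn't available, instead of calling
# .cuda() unconditionally. `model` is the model being exported, as above.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)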
