C:\Users\Oikura.conda\envs\g37two\python.exe C:\Users\Oikura\Desktop\drive\code\hyperseg-main\configs\train\cityscapes_efficientnet_b1_hyperseg-m.py
100%|██████████| 2975/2975 [01:02<00:00, 47.65files/s]
100%|██████████| 500/500 [00:10<00:00, 47.87files/s]
C:\Users\Oikura.conda\envs\g37two\lib\site-packages\torch\utils\data\dataloader.py:490: UserWarning: This DataLoader will create 16 worker processes in total. Our suggested max number of worker in current system is 8 (cpuset is not taken into account), which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
cpuset_checked))
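The warning above already points at one problem: the config requests 16 DataLoader workers while the machine reports only 8 usable CPUs. A minimal sketch of clamping the requested worker count before it is passed to the DataLoader (the `safe_num_workers` helper is hypothetical, not part of hyperseg; wherever the config sets its worker count, the clamped value would go there):

```python
import multiprocessing

def safe_num_workers(requested: int) -> int:
    """Clamp a requested DataLoader worker count to the machine's CPU count.

    Hypothetical helper: on the reporter's machine (8 CPUs), a request
    for 16 workers would be reduced to 8, silencing the warning above
    and cutting the per-worker memory pressure in half.
    """
    return max(0, min(requested, multiprocessing.cpu_count()))

print(safe_num_workers(16))  # at most the CPU count, never 16 on an 8-CPU box
```

On Windows each worker is a full new process that re-imports the training stack, so the worker count directly multiplies memory use; clamping it is usually the first thing to try.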
Loading pretrained weights for efficientnet-b1...
C:\Users\Oikura.conda\envs\g37two\lib\site-packages\torch\functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
=> no checkpoint found at 'checkpoints/cityscapes\cityscapes_efficientnet_b1_hyperseg-m'
  0%|          | 0/250 [00:00<?, ?batches/s]
  0%|          | 0/250 [00:04<?, ?batches/s]

Traceback from the spawned worker process:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "C:\Users\Oikura.conda\envs\g37two\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\Users\Oikura.conda\envs\g37two\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Users\Oikura.conda\envs\g37two\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\Oikura\Desktop\drive\code\hyperseg-main\configs\train\cityscapes_efficientnet_b1_hyperseg-m.py", line 4, in <module>
    import torch.optim as optim
  File "C:\Users\Oikura.conda\envs\g37two\lib\site-packages\torch\__init__.py", line 126, in <module>
    raise err
OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\Users\Oikura.conda\envs\g37two\lib\site-packages\torch\lib\cudnn_cnn_infer64_8.dll" or one of its dependencies.

Traceback from the main process:
Traceback (most recent call last):
  File "C:\Users\Oikura\Desktop\drive\code\hyperseg-main\configs\train\cityscapes_efficientnet_b1_hyperseg-m.py", line 48, in <module>
    scheduler=scheduler, pretrained=pretrained, model=model, criterion=criterion, batch_scheduler=batch_scheduler)
  File "C:\Users\Oikura\Desktop\drive\code\hyperseg-main\hyperseg\train.py", line 248, in main
    epoch_loss, epoch_iou = proces_epoch(train_loader, train=True)
  File "C:\Users\Oikura\Desktop\drive\code\hyperseg-main\hyperseg\train.py", line 104, in proces_epoch
    for i, (input, target) in enumerate(pbar):
  File "C:\Users\Oikura.conda\envs\g37two\lib\site-packages\tqdm\std.py", line 1178, in __iter__
    for obj in iterable:
  File "C:\Users\Oikura.conda\envs\g37two\lib\site-packages\torch\utils\data\dataloader.py", line 368, in __iter__
    return self._get_iterator()
  File "C:\Users\Oikura.conda\envs\g37two\lib\site-packages\torch\utils\data\dataloader.py", line 314, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\Oikura.conda\envs\g37two\lib\site-packages\torch\utils\data\dataloader.py", line 927, in __init__
    w.start()
  File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\Oikura.conda\envs\g37two\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
Process finished with exit code 1
Why does this happen?
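The real failure is the OSError in the worker: WinError 1455 ("the paging file is too small") is raised while a spawned worker loads `cudnn_cnn_infer64_8.dll`; the BrokenPipeError in the main process is just the follow-on symptom of that worker dying. On Windows, multiprocessing uses the spawn start method, so every DataLoader worker re-imports the main module, and here the config script runs its training code at module level, meaning each of the 16 workers re-executes the whole import chain and maps the CUDA DLLs again until the paging file is exhausted. A minimal sketch of the import guard that prevents the re-execution (the `train()` body is a placeholder, not hyperseg's actual entry point):

```python
import multiprocessing

def train() -> str:
    # Placeholder for the real work: building DataLoaders, loading the
    # model, and running the training loop.
    return "training started"

if __name__ == "__main__":
    # Only the parent process enters this block. On Windows, spawned
    # DataLoader workers re-import this module but skip everything here,
    # so the training setup is not re-run in every worker.
    print(train())
```

Beyond the guard, the usual remedies for WinError 1455 itself are lowering the worker count (8 or fewer here) and/or enlarging the Windows paging file in the system settings, since each worker still maps the CUDA libraries once.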