Result table

This table was generated on 2025-11-09 at 02:36.

Results

| project_name | group_name | hostname | status | time | FID | Precision | Recall | error_msg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BestOfBachelor2024 | profs | coktailjet | Success | 439.4 | 12.66 | 0.87 | 0.66 | None |
| BestOf2023-1 | profs | coktailjet | Success | 85.21 | 17.95 | 0.57 | 0.22 | None |
| BestOf2024-1 | profs | coktailjet | Success | 85.68 | 18.51 | 0.72 | 0.59 | None |
| brigand | Master-IASD | coktailjet | Success | 95.7 | 28.94 | 0.81 | 0.44 | None |
| penebarthanouk | Master-IASD | coktailjet | Success | 88.17 | 30.81 | 0.53 | 0.21 | None |
| 3-bet_light | Master-IASD | coktailjet | Success | 91.46 | 34.2 | 0.51 | 0.24 | None |
| shape_shifters | Master-IASD | coktailjet | Success | 87.75 | 54.17 | 0.56 | 0.15 | None |
| supergan | Master-IASD | coktailjet | Success | 97.99 | 113.81 | 0.48 | 0.05 | None |
| crous-de-chatelet | Master-IASD | coktailjet | Success | 90.97 | 383.94 | 0.0 | 0.0 | None |
| arigan | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| clash-of-gans | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| cyril-gane | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| dsl | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| fc-gan | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | FileNotFoundError: checkpoints/adagan_0/G.pth |
| ganarchy | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| gang | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| gan_gsters | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory (AcceleratorError) |
| ganiants | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| ganibal | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| ganimals | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| ganki-dama | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory (AcceleratorError) |
| ganumber | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| gentlemen | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | ModuleNotFoundError: No module named 'torch' |
| hero-to-zero | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| mansit_walou | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | ModuleNotFoundError: No module named 'torch' |
| nano_kiwi | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| organ | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| ouragan | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| tzatziki | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| yes-we-gan | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| BestOf2023-2 | profs | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |
| BestOf2024-2 | profs | upnquick | Error | 0 | 1000 | 0 | 0 | CUDA out of memory |

For failed runs the error_msg column above is summarised; the full messages are reproduced per project under Error details.

Error details

arigan (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 405, in forward x = self.conv(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/conv.py", line 548, in forward return self._conv_forward(input, self.weight, self.bias) ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/conv.py", line 543, in _conv_forward return F.conv2d( ~~~~~~~~^ input, weight, bias, self.stride, self.padding, self.dilation, self.groups ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 348.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 214.50 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Process 2697684 has 726.00 MiB memory in use. Including non-PyTorch memory, this process has 840.00 MiB memory in use. Process 2697171 has 1.11 GiB memory in use. Of the allocated memory 217.10 MiB is allocated by PyTorch, and 12.90 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
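
Every CUDA out-of-memory entry on upnquick (including the two AcceleratorError cases) failed while another process already held 75.84 GiB of the 79.14 GiB card. Beyond freeing that process, the tracebacks themselves suggest the expandable_segments allocator hint. The sketch below is only an illustration of how a submission could apply that hint and generate in smaller chunks; generate_in_chunks, latent_dim and the batch size are assumptions, not code from any submission or from the test platform.

```python
# A minimal sketch, not the platform's harness: set the allocator hint quoted in
# the tracebacks before torch creates its CUDA context, and generate in small
# chunks so a GPU that is already almost full is less likely to OOM.
import os

os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch  # must be imported after the environment variable is set


@torch.no_grad()
def generate_in_chunks(generator, n_samples=10000, batch_size=128, latent_dim=100):
    # generator, latent_dim and batch_size are illustrative assumptions.
    generator = generator.to("cuda").eval()
    chunks = []
    for start in range(0, n_samples, batch_size):
        n = min(batch_size, n_samples - start)
        z = torch.randn(n, latent_dim, device="cuda")
        chunks.append(generator(z).cpu())  # move samples off the GPU immediately
        torch.cuda.empty_cache()           # hand cached blocks back to the driver
    return torch.cat(chunks)
```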

clash-of-gans (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 406, in forward x = self.bn(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward return F.batch_norm( ~~~~~~~~~~~~^ input, ^^^^^^ ...<11 lines>... self.eps, ^^^^^^^^^ ) ^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 2813, in batch_norm return torch.batch_norm( ~~~~~~~~~~~~~~~~^ input, ^^^^^^ ...<7 lines>... torch.backends.cudnn.enabled, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 348.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 154.50 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Process 2697684 has 726.00 MiB memory in use. Process 2697170 has 840.00 MiB memory in use. Including non-PyTorch memory, this process has 1.16 GiB memory in use. Of the allocated memory 563.99 MiB is allocated by PyTorch, and 14.01 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

cyril-gane (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 406, in forward x = self.bn(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward return F.batch_norm( ~~~~~~~~~~~~^ input, ^^^^^^ ...<11 lines>... self.eps, ^^^^^^^^^ ) ^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 2813, in batch_norm return torch.batch_norm( ~~~~~~~~~~~~~~~~^ input, ^^^^^^ ...<7 lines>... torch.backends.cudnn.enabled, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 474.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 383.62 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Including non-PyTorch memory, this process has 2.48 GiB memory in use. Of the allocated memory 933.43 MiB is allocated by PyTorch, and 990.57 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

dsl (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 406, in forward x = self.bn(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward return F.batch_norm( ~~~~~~~~~~~~^ input, ^^^^^^ ...<11 lines>... self.eps, ^^^^^^^^^ ) ^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 2813, in batch_norm return torch.batch_norm( ~~~~~~~~~~~~~~~~^ input, ^^^^^^ ...<7 lines>... torch.backends.cudnn.enabled, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 338.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 134.50 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Process 2701024 has 528.00 MiB memory in use. Process 2701105 has 710.00 MiB memory in use. Including non-PyTorch memory, this process has 1.50 GiB memory in use. Of the allocated memory 901.63 MiB is allocated by PyTorch, and 24.37 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

fc-gan (Master-IASD, upnquick):
Traceback (most recent call last):
  File "/home/lamsade/testplatform/test-platform-a2/repos/Master-IASD/fc-gan/generate.py", line 33, in <module>
    adagan_generate(n_samples=10000, batch_size=args.batch_size, T=T, device=device)
  File "/home/lamsade/testplatform/test-platform-a2/repos/Master-IASD/fc-gan/adagan_generate.py", line 14, in adagan_generate
    model = load_model(model, f"checkpoints/adagan_{i}", device)
  File "/home/lamsade/testplatform/test-platform-a2/repos/Master-IASD/fc-gan/utils.py", line 62, in load_model
    ckpt = torch.load(ckpt_path, map_location=device)
  File "/home/lamsade/testplatform/test-platform-a2/repos/Master-IASD/fc-gan/venv/lib/python3.13/site-packages/torch/serialization.py", line 1484, in load
    with _open_file_like(f, "rb") as opened_file:
  File "/home/lamsade/testplatform/test-platform-a2/repos/Master-IASD/fc-gan/venv/lib/python3.13/site-packages/torch/serialization.py", line 759, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/home/lamsade/testplatform/test-platform-a2/repos/Master-IASD/fc-gan/venv/lib/python3.13/site-packages/torch/serialization.py", line 740, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'checkpoints/adagan_0/G.pth'
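
fc-gan fails before any generation because torch.load is handed the relative path checkpoints/adagan_0/G.pth, which only resolves if generate.py happens to be launched from the project root. The sketch below shows one hedged way to make the loader robust to the working directory; load_checkpoint and its error message are illustrative, only the checkpoints/adagan_0/G.pth layout comes from the traceback above.

```python
# Sketch only: resolve checkpoint paths relative to this file rather than the
# current working directory, and fail early with an actionable message.
from pathlib import Path

import torch

BASE_DIR = Path(__file__).resolve().parent  # directory containing this script


def load_checkpoint(rel_path="checkpoints/adagan_0/G.pth", device="cpu"):
    ckpt_path = BASE_DIR / rel_path
    if not ckpt_path.is_file():
        raise FileNotFoundError(
            f"Expected a checkpoint at {ckpt_path}; "
            "make sure the checkpoints/ directory is committed with the submission."
        )
    return torch.load(ckpt_path, map_location=device)
```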

ganarchy (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 406, in forward x = self.bn(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward return F.batch_norm( ~~~~~~~~~~~~^ input, ^^^^^^ ...<11 lines>... self.eps, ^^^^^^^^^ ) ^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 2813, in batch_norm return torch.batch_norm( ~~~~~~~~~~~~~~~~^ input, ^^^^^^ ...<7 lines>... torch.backends.cudnn.enabled, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 676.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 340.06 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Process 2701105 has 710.00 MiB memory in use. Including non-PyTorch memory, this process has 1.82 GiB memory in use. Of the allocated memory 1.20 GiB is allocated by PyTorch, and 23.26 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

gang (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 406, in forward x = self.bn(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward return F.batch_norm( ~~~~~~~~~~~~^ input, ^^^^^^ ...<11 lines>... self.eps, ^^^^^^^^^ ) ^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 2813, in batch_norm return torch.batch_norm( ~~~~~~~~~~~~~~~~^ input, ^^^^^^ ...<7 lines>... torch.backends.cudnn.enabled, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 474.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 383.62 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Including non-PyTorch memory, this process has 2.48 GiB memory in use. Of the allocated memory 933.43 MiB is allocated by PyTorch, and 990.57 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

gan_gsters (Master-IASD, upnquick):
Traceback (most recent call last):
  File "/home/lamsade/testplatform/test-platform-a2/repos/Master-IASD/gan_gsters/generate.py", line 32, in <module>
    model = Generator(g_output_dim=mnist_dim).to(device)
  File "/home/lamsade/testplatform/test-platform-a2/repos/Master-IASD/gan_gsters/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1371, in to
    return self._apply(convert)
  File "/home/lamsade/testplatform/test-platform-a2/repos/Master-IASD/gan_gsters/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 930, in _apply
    module._apply(fn)
  File "/home/lamsade/testplatform/test-platform-a2/repos/Master-IASD/gan_gsters/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 957, in _apply
    param_applied = fn(param)
  File "/home/lamsade/testplatform/test-platform-a2/repos/Master-IASD/gan_gsters/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1357, in convert
    return t.to(
        device,
        dtype if t.is_floating_point() or t.is_complex() else None,
        non_blocking,
    )
torch.AcceleratorError: CUDA error: out of memory
Search for `cudaErrorMemoryAllocation' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

ganiants (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 406, in forward x = self.bn(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward return F.batch_norm( ~~~~~~~~~~~~^ input, ^^^^^^ ...<11 lines>... self.eps, ^^^^^^^^^ ) ^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 2813, in batch_norm return torch.batch_norm( ~~~~~~~~~~~~~~~~^ input, ^^^^^^ ...<7 lines>... torch.backends.cudnn.enabled, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 338.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 18.50 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Process 2703440 has 688.00 MiB memory in use. Process 2703629 has 666.00 MiB memory in use. Including non-PyTorch memory, this process has 1.50 GiB memory in use. Of the allocated memory 901.63 MiB is allocated by PyTorch, and 24.37 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

ganibal (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 150, in forward x = 2 * x - 1 # Scale from range (0, 1) to range (-1, 1) ~~~~~~^~~ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 132.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 78.06 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Process 2703354 has 1.82 GiB memory in use. Including non-PyTorch memory, this process has 972.00 MiB memory in use. Of the allocated memory 348.06 MiB is allocated by PyTorch, and 13.94 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

ganimals (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 406, in forward x = self.bn(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward return F.batch_norm( ~~~~~~~~~~~~^ input, ^^^^^^ ...<11 lines>... self.eps, ^^^^^^^^^ ) ^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 2813, in batch_norm return torch.batch_norm( ~~~~~~~~~~~~~~~~^ input, ^^^^^^ ...<7 lines>... torch.backends.cudnn.enabled, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 676.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 538.06 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Including non-PyTorch memory, this process has 1.82 GiB memory in use. Process 2702821 has 512.00 MiB memory in use. Of the allocated memory 1.20 GiB is allocated by PyTorch, and 23.26 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

ganki-dama (Master-IASD, upnquick):
AcceleratorError: CUDA error: out of memory Search for `cudaErrorMemoryAllocation' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

ganumber (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 406, in forward x = self.bn(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward return F.batch_norm( ~~~~~~~~~~~~^ input, ^^^^^^ ...<11 lines>... self.eps, ^^^^^^^^^ ) ^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 2813, in batch_norm return torch.batch_norm( ~~~~~~~~~~~~~~~~^ input, ^^^^^^ ...<7 lines>... torch.backends.cudnn.enabled, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 676.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 384.06 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Process 2706257 has 666.00 MiB memory in use. Including non-PyTorch memory, this process has 1.82 GiB memory in use. Of the allocated memory 1.20 GiB is allocated by PyTorch, and 23.26 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

gentlemen (Master-IASD, upnquick):
Traceback (most recent call last):
  File "/home/lamsade/testplatform/test-platform-a2/repos/Master-IASD/gentlemen/generate.py", line 1, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'
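
gentlemen and mansit_walou fail at import time because torch is missing from the virtual environment built for the project, which usually means it is absent from the project's declared requirements. A small sanity check a group could run with their own venv's interpreter is sketched below; it is an assumption-laden illustration, not part of the platform.

```python
# Sketch: confirm that torch is importable by the interpreter that will run
# generate.py, and point at the interpreter path if it is not.
import importlib.util
import sys

if importlib.util.find_spec("torch") is None:
    sys.exit(
        f"torch is not installed for {sys.executable}; "
        "add it to the project's requirements so the venv build installs it."
    )
print(f"torch is available for {sys.executable}")
```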

hero-to-zero (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 406, in forward x = self.bn(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward return F.batch_norm( ~~~~~~~~~~~~^ input, ^^^^^^ ...<11 lines>... self.eps, ^^^^^^^^^ ) ^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 2813, in batch_norm return torch.batch_norm( ~~~~~~~~~~~~~~~~^ input, ^^^^^^ ...<7 lines>... torch.backends.cudnn.enabled, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 474.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 383.62 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Including non-PyTorch memory, this process has 2.48 GiB memory in use. Of the allocated memory 933.43 MiB is allocated by PyTorch, and 990.57 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

mansit_walou (Master-IASD, upnquick):
Traceback (most recent call last):
  File "/home/lamsade/testplatform/test-platform-a2/repos/Master-IASD/mansit_walou/generate.py", line 1, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'

nano_kiwi (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 406, in forward x = self.bn(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward return F.batch_norm( ~~~~~~~~~~~~^ input, ^^^^^^ ...<11 lines>... self.eps, ^^^^^^^^^ ) ^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 2813, in batch_norm return torch.batch_norm( ~~~~~~~~~~~~~~~~^ input, ^^^^^^ ...<7 lines>... torch.backends.cudnn.enabled, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 338.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 40.50 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Process 2708769 has 564.00 MiB memory in use. Process 2708770 has 768.00 MiB memory in use. Including non-PyTorch memory, this process has 1.50 GiB memory in use. Of the allocated memory 901.63 MiB is allocated by PyTorch, and 24.37 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

organ (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 406, in forward x = self.bn(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward return F.batch_norm( ~~~~~~~~~~~~^ input, ^^^^^^ ...<11 lines>... self.eps, ^^^^^^^^^ ) ^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 2813, in batch_norm return torch.batch_norm( ~~~~~~~~~~~~~~~~^ input, ^^^^^^ ...<7 lines>... torch.backends.cudnn.enabled, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 474.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 383.62 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Including non-PyTorch memory, this process has 2.48 GiB memory in use. Of the allocated memory 933.43 MiB is allocated by PyTorch, and 990.57 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

ouragan (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 406, in forward x = self.bn(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward return F.batch_norm( ~~~~~~~~~~~~^ input, ^^^^^^ ...<11 lines>... self.eps, ^^^^^^^^^ ) ^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 2813, in batch_norm return torch.batch_norm( ~~~~~~~~~~~~~~~~^ input, ^^^^^^ ...<7 lines>... torch.backends.cudnn.enabled, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 676.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 282.06 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Process 2708770 has 768.00 MiB memory in use. Including non-PyTorch memory, this process has 1.82 GiB memory in use. Of the allocated memory 1.20 GiB is allocated by PyTorch, and 23.26 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

tzatziki (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 406, in forward x = self.bn(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward return F.batch_norm( ~~~~~~~~~~~~^ input, ^^^^^^ ...<11 lines>... self.eps, ^^^^^^^^^ ) ^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 2813, in batch_norm return torch.batch_norm( ~~~~~~~~~~~~~~~~^ input, ^^^^^^ ...<7 lines>... torch.backends.cudnn.enabled, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 676.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 384.06 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Process 2710643 has 666.00 MiB memory in use. Including non-PyTorch memory, this process has 1.82 GiB memory in use. Of the allocated memory 1.20 GiB is allocated by PyTorch, and 23.26 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

yes-we-gan (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 153, in forward x = block(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/inception.py", line 406, in forward x = self.bn(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py", line 193, in forward return F.batch_norm( ~~~~~~~~~~~~^ input, ^^^^^^ ...<11 lines>... self.eps, ^^^^^^^^^ ) ^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 2813, in batch_norm return torch.batch_norm( ~~~~~~~~~~~~~~~~^ input, ^^^^^^ ...<7 lines>... torch.backends.cudnn.enabled, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 474.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 383.62 MiB is free. Process 2527836 has 436.00 MiB memory in use. Process 2615505 has 75.84 GiB memory in use. Including non-PyTorch memory, this process has 2.48 GiB memory in use. Of the allocated memory 933.43 MiB is allocated by PyTorch, and 990.57 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
BestOf2023-2 | profs | upnquick | Error | time: 0 | FID: 1000 | Precision: 0 | Recall: 0
error_msg: OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacity of 79.14 GiB, of which 12.06 MiB is free. Process 2527836 uses 436.00 MiB, process 2615505 uses 75.84 GiB, and process 2713322 uses 2.36 GiB; including non-PyTorch memory, this process uses 492.00 MiB (73.01 MiB allocated by PyTorch, 4.99 MiB reserved but unallocated). If reserved-but-unallocated memory is large, try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See https://pytorch.org/docs/stable/notes/cuda.html#environment-variables
BestOf2024-2 | profs | upnquick | Error | time: 0 | FID: 1000 | Precision: 0 | Recall: 0
error_msg: OutOfMemoryError caught in replica 0 on device 0 (torch.nn.parallel), raised during the FID Inception forward pass (extra/fid/inception.py:153 → torchvision inception.py:406, self.bn → F.batch_norm): torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 474.00 MiB. GPU 0 has a total capacity of 79.14 GiB, of which 469.69 MiB is free. Process 2527836 uses 436.00 MiB, process 2615505 uses 75.84 GiB, process 2713321 uses 18.00 MiB, and process 2713323 uses 12.00 MiB; including non-PyTorch memory, this process uses 2.36 GiB (933.43 MiB allocated by PyTorch, 864.57 MiB reserved but unallocated). If reserved-but-unallocated memory is large, try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See https://pytorch.org/docs/stable/notes/cuda.html#environment-variables
VanillaGAN | profs | upnquick | Error | time: 0 | FID: 1000 | Precision: 0 | Recall: 0
error_msg: AcceleratorError: CUDA error: out of memory. Search for `cudaErrorMemoryAllocation` in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information. CUDA kernel errors may be reported asynchronously at a later API call, so the reported stack trace can be misleading; for debugging, consider passing CUDA_LAUNCH_BLOCKING=1 and compiling with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
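
Every failure on upnquick reports the same underlying condition: the evaluation GPU is almost entirely occupied by another process (2615505 alone holds 75.84 GiB of the card's 79.14 GiB), so even small allocations during the FID Inception pass fail. Below is a minimal, hedged sketch of the usual mitigations on the evaluation side, assuming a generic eval-mode Inception feature extractor; the function name, batch size, and device choice are illustrative and are not taken from the test platform's code. It combines the allocator setting suggested in the error messages with a small-batch, single-device, no-grad forward pass so peak memory stays low.

```python
# Sketch only: reduce peak CUDA memory for an FID-style Inception feature pass.
# Assumptions (hypothetical, not from the platform): `inception` is an eval-mode
# Inception feature extractor returning a tensor; `images` is (N, 3, H, W) on CPU.
import os

# The OOM messages suggest this allocator mode; it must be set before the
# first CUDA allocation in the process to reduce fragmentation.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch


@torch.no_grad()  # inference only: no autograd buffers are kept alive
def inception_features(inception, images, batch_size=32, device="cuda:0"):
    """Run the feature extractor in small batches on a single device
    (avoiding the DataParallel replicas in which the tracebacks fail)."""
    inception = inception.to(device).eval()
    feats = []
    for start in range(0, images.shape[0], batch_size):
        batch = images[start:start + batch_size].to(device, non_blocking=True)
        feats.append(inception(batch).cpu())  # move features off the GPU right away
        del batch
    torch.cuda.empty_cache()  # hand cached blocks back to the driver
    return torch.cat(feats)
```

None of this helps, though, when another job already holds roughly 76 GiB of the device; scheduling the evaluation on a GPU with free memory (or isolating jobs, for example with torch.cuda.set_per_process_memory_fraction on the long-running process) is the more reliable fix.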

Plots

Sample image grids (Image 1 through Image 9 per project; the images themselves are not reproduced in this export) are available for:
BestOfBachelor2024
BestOf2023-1
BestOf2024-1
brigand
penebarthanouk
3-bet_light
shape_shifters
supergan
crous-de-chatelet