Result table

This table was generated on 2025-11-13 at 02:42. See more results here. See last results here.

Results

| project_name | group_name | hostname | status | time | FID | Precision | Recall | error_msg |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| arigan | Master-IASD | coktailjet | Success | 91.3 | 8.12 | 0.8 | 0.71 | None |
| BestOfBachelor2024 | profs | coktailjet | Success | 443.71 | 12.61 | 0.87 | 0.65 | None |
| ganki-dama | Master-IASD | upnquick | Success | 81.37 | 13.38 | 0.81 | 0.59 | None |
| gang | Master-IASD | coktailjet | Success | 78.24 | 14.88 | 0.77 | 0.6 | None |
| cyril-gane | Master-IASD | coktailjet | Success | 96.59 | 17.68 | 0.48 | 0.34 | None |
| BestOf2024-1 | profs | coktailjet | Success | 88.03 | 18.22 | 0.72 | 0.58 | None |
| BestOf2023-1 | profs | coktailjet | Success | 87.26 | 18.33 | 0.56 | 0.28 | None |
| ganarchy | Master-IASD | upnquick | Success | 67.83 | 27.35 | 0.59 | 0.56 | None |
| organ | Master-IASD | upnquick | Success | 72.28 | 30.5 | 0.56 | 0.23 | None |
| gentlemen | Master-IASD | coktailjet | Success | 77.44 | 33.91 | 0.49 | 0.23 | None |
| gan_gsters | Master-IASD | coktailjet | Success | 80.95 | 34.6 | 0.48 | 0.21 | None |
| 3-bet_light | Master-IASD | upnquick | Success | 90.21 | 34.94 | 0.53 | 0.23 | None |
| crous-de-chatelet | Master-IASD | upnquick | Success | 23.91 | 42.71 | 0.31 | 0.23 | None |
| VanillaGAN | profs | upnquick | Success | 80.94 | 50.39 | 0.47 | 0.2 | None |
| fc-gan | Master-IASD | upnquick | Success | 83.63 | 54.03 | 0.51 | 0.17 | None |
| shape_shifters | Master-IASD | coktailjet | Success | 78.22 | 57.46 | 0.5 | 0.14 | None |
| ganiants | Master-IASD | coktailjet | Success | 80.33 | 66.54 | 0.5 | 0.14 | None |
| yes-we-gan | Master-IASD | coktailjet | Success | 84.56 | 68.58 | 0.52 | 0.17 | None |
| supergan | Master-IASD | upnquick | Success | 86.46 | 133.86 | 0.28 | 0.0 | None |
| clash-of-gans | Master-IASD | coktailjet | Success | 93.92 | 143.61 | 0.56 | 0.02 | None |
| nano_kiwi | Master-IASD | coktailjet | Success | 87.1 | 313.18 | 0.0 | 0.0 | None |
| hero-to-zero | Master-IASD | coktailjet | Success | 75.59 | 384.08 | 0.0 | 0.0 | None |
| brigand | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | OutOfMemoryError (see Error details) |
| dsl | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | OutOfMemoryError (see Error details) |
| ganibal | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | OutOfMemoryError (see Error details) |
| ganimals | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | AcceleratorError: CUDA out of memory (see Error details) |
| ganumber | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | OutOfMemoryError (see Error details) |
| mansit_walou | Master-IASD | coktailjet | Error | 0 | 1000 | 0 | 0 | ModuleNotFoundError: No module named 'torch' (see Error details) |
| ouragan | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | OutOfMemoryError (see Error details) |
| penebarthanouk | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | OutOfMemoryError (see Error details) |
| tzatziki | Master-IASD | upnquick | Error | 0 | 1000 | 0 | 0 | OutOfMemoryError (see Error details) |
| BestOf2023-2 | profs | upnquick | Error | 0 | 1000 | 0 | 0 | OutOfMemoryError (see Error details) |
| BestOf2024-2 | profs | upnquick | Error | 0 | 1000 | 0 | 0 | AcceleratorError: CUDA out of memory (see Error details) |
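
For reference, FID is the Fréchet Inception Distance; lower is better, and the successful rows above are sorted by increasing FID, with failed runs assigned the placeholder value 1000. Precision and Recall score, roughly, how realistic and how diverse the generated samples are relative to the reference set; for both, higher is better. The tracebacks under Error details below show the platform resizing samples to 299×299 in its inception.py for FID and running a torchvision VGG forward pass, presumably for the Precision/Recall features, though that last point is an inference. A minimal sketch of the FID computation itself, FID = ||mu_r − mu_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}), assuming the Inception features of the real and generated images have already been extracted (function and variable names here are illustrative, not the platform's actual API):

```python
import numpy as np
from scipy import linalg


def fid_from_features(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two feature sets.

    real_feats, gen_feats: arrays of shape (n_samples, feat_dim), e.g. Inception
    pool features of the reference images and of the generated images.
    """
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)

    diff = mu_r - mu_g
    # Matrix square root of the covariance product; tiny imaginary parts
    # caused by numerical error are discarded.
    covmean, _ = linalg.sqrtm(cov_r @ cov_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    return float(diff @ diff + np.trace(cov_r) + np.trace(cov_g) - 2.0 * np.trace(covmean))
```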
Error details

brigand (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 144, in forward x = F.interpolate(x, size=(299, 299), mode='bilinear', align_corners=False) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 4768, in interpolate return torch._C._nn.upsample_bilinear2d( ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ input, output_size, align_corners, scale_factors ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 132.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 76.06 MiB is free. Process 2076579 has 66.74 GiB memory in use. Process 2132156 has 8.15 GiB memory in use. Process 2132162 has 3.45 GiB memory in use. Including non-PyTorch memory, this process has 708.00 MiB memory in use. Of the allocated memory 86.14 MiB is allocated by PyTorch, and 11.86 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

dsl (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/vgg.py", line 66, in forward x = self.features(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/conv.py", line 548, in forward return self._conv_forward(input, self.weight, self.bias) ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/conv.py", line 543, in _conv_forward return F.conv2d( ~~~~~~~~^ input, weight, bias, self.stride, self.padding, self.dilation, self.groups ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.53 GiB. GPU 0 has a total capacity of 79.14 GiB of which 484.06 MiB is free. Process 2076579 has 66.74 GiB memory in use. Process 2134702 has 5.45 GiB memory in use. Including non-PyTorch memory, this process has 2.99 GiB memory in use. Process 2136667 has 3.46 GiB memory in use. Of the allocated memory 2.17 GiB is allocated by PyTorch, and 218.84 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

ganibal (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 144, in forward x = F.interpolate(x, size=(299, 299), mode='bilinear', align_corners=False) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 4768, in interpolate return torch._C._nn.upsample_bilinear2d( ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ input, output_size, align_corners, scale_factors ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 132.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 72.06 MiB is free. Process 2076579 has 66.74 GiB memory in use. Process 2134702 has 7.07 GiB memory in use. Process 2136667 has 4.53 GiB memory in use. Including non-PyTorch memory, this process has 708.00 MiB memory in use. Of the allocated memory 86.14 MiB is allocated by PyTorch, and 11.86 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

ganimals (Master-IASD, upnquick):
AcceleratorError: CUDA error: out of memory Search for `cudaErrorMemoryAllocation' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

ganumber (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/vgg.py", line 66, in forward x = self.features(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/conv.py", line 548, in forward return self._conv_forward(input, self.weight, self.bias) ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/conv.py", line 543, in _conv_forward return F.conv2d( ~~~~~~~~^ input, weight, bias, self.stride, self.padding, self.dilation, self.groups ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.53 GiB. GPU 0 has a total capacity of 79.14 GiB of which 608.06 MiB is free. Process 2076579 has 66.74 GiB memory in use. Process 2140520 has 8.06 GiB memory in use. Including non-PyTorch memory, this process has 2.99 GiB memory in use. Process 2146485 has 742.00 MiB memory in use. Of the allocated memory 2.17 GiB is allocated by PyTorch, and 218.84 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

mansit_walou (Master-IASD, coktailjet):
bytes: b'Traceback (most recent call last):\n File "/home/lamsade/testplatform/test-platform-a2/repos/Master-IASD/mansit_walou/generate.py", line 1, in \n import torch\nModuleNotFoundError: No module named \'torch\'\n'

ouragan (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/vgg.py", line 66, in forward x = self.features(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/conv.py", line 548, in forward return self._conv_forward(input, self.weight, self.bias) ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/conv.py", line 543, in _conv_forward return F.conv2d( ~~~~~~~~^ input, weight, bias, self.stride, self.padding, self.dilation, self.groups ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.53 GiB. GPU 0 has a total capacity of 79.14 GiB of which 1.23 GiB is free. Process 2076579 has 66.74 GiB memory in use. Process 2146343 has 8.15 GiB memory in use. Including non-PyTorch memory, this process has 2.99 GiB memory in use. Of the allocated memory 2.17 GiB is allocated by PyTorch, and 218.84 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

penebarthanouk (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a2/extra/fid/inception.py", line 144, in forward x = F.interpolate(x, size=(299, 299), mode='bilinear', align_corners=False) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/functional.py", line 4768, in interpolate return torch._C._nn.upsample_bilinear2d( ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^ input, output_size, align_corners, scale_factors ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 132.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 72.06 MiB is free. Process 2076579 has 66.74 GiB memory in use. Process 2146343 has 8.15 GiB memory in use. Process 2147042 has 3.46 GiB memory in use. Including non-PyTorch memory, this process has 708.00 MiB memory in use. Of the allocated memory 86.14 MiB is allocated by PyTorch, and 11.86 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

tzatziki (Master-IASD, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/vgg.py", line 66, in forward x = self.features(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/conv.py", line 548, in forward return self._conv_forward(input, self.weight, self.bias) ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/conv.py", line 543, in _conv_forward return F.conv2d( ~~~~~~~~^ input, weight, bias, self.stride, self.padding, self.dilation, self.groups ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.53 GiB. GPU 0 has a total capacity of 79.14 GiB of which 1.32 GiB is free. Process 2076579 has 66.74 GiB memory in use. Process 2149674 has 8.06 GiB memory in use. Including non-PyTorch memory, this process has 2.99 GiB memory in use. Of the allocated memory 2.17 GiB is allocated by PyTorch, and 218.84 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

BestOf2023-2 (profs, upnquick):
OutOfMemoryError: Caught OutOfMemoryError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/parallel/parallel_apply.py", line 99, in _worker output = module(*input, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torchvision/models/vgg.py", line 66, in forward x = self.features(x) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/container.py", line 250, in forward input = module(input) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl return forward_call(*args, **kwargs) File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/conv.py", line 548, in forward return self._conv_forward(input, self.weight, self.bias) ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/lamsade/testplatform/test-platform-a1/venv/lib/python3.13/site-packages/torch/nn/modules/conv.py", line 543, in _conv_forward return F.conv2d( ~~~~~~~~^ input, weight, bias, self.stride, self.padding, self.dilation, self.groups ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ) ^ torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.53 GiB. GPU 0 has a total capacity of 79.14 GiB of which 478.06 MiB is free. Process 2076579 has 66.74 GiB memory in use. Process 2153831 has 5.46 GiB memory in use. Process 2153833 has 3.46 GiB memory in use. Including non-PyTorch memory, this process has 2.99 GiB memory in use. Of the allocated memory 2.17 GiB is allocated by PyTorch, and 218.84 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

BestOf2024-2 (profs, upnquick):
AcceleratorError: CUDA error: out of memory Search for `cudaErrorMemoryAllocation' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1 Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
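
Every failure above except mansit_walou is a CUDA out-of-memory error raised during metric computation, either in the F.interpolate call of the platform's inception.py (FID) or in the torchvision VGG forward pass, on a GPU whose 79.14 GiB were almost entirely held by another process (2076579, 66.74 GiB in every traceback). The two AcceleratorError entries report the same out-of-memory condition from the CUDA runtime; their messages suggest rerunning with CUDA_LAUNCH_BLOCKING=1 for an accurate stack trace. mansit_walou failed for a different reason: its generate.py could not import torch in the environment it ran in, so the fix there is a dependency issue, not a memory one. The OOM tracebacks themselves suggest PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to limit fragmentation; whether that alone would help with so little memory free is doubtful, but generating samples in small batches is a generic way to bound the peak GPU footprint. A minimal sketch under that assumption (the generator interface and names are hypothetical, not the platform's API):

```python
import os

import torch

# Suggested by the error messages to reduce allocator fragmentation;
# must be set before the first CUDA allocation in the process.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")


@torch.no_grad()
def sample_in_batches(generator: torch.nn.Module, n_samples: int, latent_dim: int,
                      batch_size: int = 64, device: str = "cuda") -> torch.Tensor:
    """Draw n_samples images from a latent-vector generator in small batches,
    moving each batch to the CPU so the GPU footprint stays bounded."""
    generator = generator.to(device).eval()
    chunks = []
    for start in range(0, n_samples, batch_size):
        n = min(batch_size, n_samples - start)
        z = torch.randn(n, latent_dim, device=device)
        chunks.append(generator(z).cpu())
    return torch.cat(chunks, dim=0)
```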

Plots

One grid of generated sample images per successfully evaluated project (the images are not reproduced in this export): arigan, BestOfBachelor2024, ganki-dama, gang, cyril-gane, BestOf2024-1, BestOf2023-1, ganarchy, organ, gentlemen, gan_gsters, 3-bet_light, crous-de-chatelet, VanillaGAN, fc-gan, shape_shifters, ganiants, yes-we-gan, supergan, clash-of-gans, nano_kiwi, hero-to-zero.
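
For reference, a sample sheet like these can be written from a batch of generated images with torchvision's grid utilities; a minimal sketch, assuming the samples are scaled to [-1, 1] (the function name and scaling are assumptions, not the platform's code):

```python
import torch
from torchvision.utils import save_image


def save_sample_sheet(samples: torch.Tensor, path: str, per_row: int = 10) -> None:
    """Write a batch of generated images (N, C, H, W), scaled to [-1, 1],
    as a single grid image with `per_row` samples per row."""
    save_image(samples, path, nrow=per_row, normalize=True, value_range=(-1, 1))
```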