Result table

This table was generated on 2025-11-20 at 11:23.

| project_name | group_name | hostname | status | time | time_per_image_ms | acc_nat | acc_pgdlinf | acc_pgdl2 | agg | error_msg |
|---|---|---|---|---|---|---|---|---|---|---|
| BestOf2023-1 | profs | upnquick | Success | 128.84 | 6.44 | 62.5 | 70.19 | 70.81 | 141.0 | None |
| BestOf2024-1 | profs | upnquick | Success | 1397.73 | 69.89 | 62.5 | 51.35 | 63.4 | 114.74 | None |
| BestOf2024-2 | profs | coktailjet | Success | 1154.0 | 57.7 | 81.25 | 52.49 | 59.51 | 112.0 | None |
| exocet | Master-IASD | coktailjet | Success | 478.79 | 23.94 | 98.75 | 41.91 | 58.32 | 100.23 | None |
| BestOf2023-2 | profs | coktailjet | Success | 109.3 | 5.46 | 56.25 | 41.02 | 53.42 | 94.44 | None |
| attack_mesonet | Master-IASD | coktailjet | Success | 72.69 | 3.63 | 50.0 | 25.67 | 32.9 | 58.57 | None |
| attack-of-babrumen | Master-IASD | upnquick | Success | 84.04 | 4.2 | 37.5 | 25.99 | 27.43 | 53.42 | None |
| invisible_attack | Master-IASD | coktailjet | Success | 98.43 | 4.92 | 62.5 | 6.05 | 25.14 | 31.19 | None |
| ciclose-10 | Master-IASD | upnquick | Success | 86.4 | 4.32 | 56.25 | 6.02 | 25.16 | 31.18 | None |
| counter_attack | Master-IASD | coktailjet | Success | 97.17 | 4.86 | 56.25 | 6.02 | 25.16 | 31.18 | None |
| blast_attack | Master-IASD | upnquick | Success | 85.69 | 4.28 | 43.75 | 6.05 | 25.12 | 31.17 | None |
| base_model | profs | coktailjet | Success | 108.08 | 5.4 | 43.75 | 6.03 | 25.12 | 31.15 | None |
| attackonnetworks | Master-IASD | coktailjet | Success | 72.31 | 3.62 | 56.25 | 6.0 | 25.14 | 31.14 | None |
| attackus | Master-IASD | coktailjet | Success | 73.04 | 3.65 | 62.5 | 6.03 | 25.11 | 31.14 | None |
| jogabonito | Master-IASD | upnquick | Success | 86.38 | 4.32 | 68.75 | 6.01 | 25.11 | 31.12 | None |
| attaqueoudefense | Master-IASD | upnquick | Success | 82.27 | 4.11 | 62.5 | 5.98 | 25.13 | 31.11 | None |
| attackonpixels | Master-IASD | upnquick | Success | 82.67 | 4.13 | 68.75 | 6.0 | 25.08 | 31.08 | None |
| the-taithon-canon | Master-IASD | upnquick | Success | 51.05 | 2.55 | 6.25 | 10.0 | 10.0 | 20.0 | None |
| BestOfMiles | profs | upnquick | Error | 0 | 0 | 0 | 0 | 0 | 0 | OutOfMemoryError: CUDA out of memory. Tried to allocate 248.00 MiB. GPU 0 has a total capacity of 79.14 GiB of which 220.06 MiB is free. Process 567211 has 73.54 GiB memory in use. Process 662892 has 600.00 MiB memory in use. Including non-PyTorch memory, this process has 3.20 GiB memory in use. Process 662894 has 1.58 GiB memory in use. Of the allocated memory 2.56 GiB is allocated by PyTorch, and 152.30 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables) |
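
The header does not state units or how `agg` is computed. From the successful rows, `agg` matches `acc_pgdlinf + acc_pgdl2`, and `time` divided by `time_per_image_ms` comes out to roughly 20,000 for every run, which would make `time` seconds over a roughly 20,000-image evaluation set. A minimal sanity check of those two relationships (they are inferred from the table, not documented), using the BestOf2023-1 row:

```python
# Sanity-check two inferred relationships in the table, using the BestOf2023-1 row.
# Assumptions (inferred from the data, not documented): `time` is in seconds, and
# `agg` is the sum of the two robust accuracies.
time_s = 128.84             # time
time_per_image_ms = 6.44    # time_per_image_ms
acc_pgd_linf = 70.19        # acc_pgdlinf
acc_pgd_l2 = 70.81          # acc_pgdl2
agg = 141.0                 # agg

implied_images = time_s * 1000 / time_per_image_ms
print(f"implied evaluation set size: {implied_images:.0f}")             # ~20000
print(f"acc_pgdlinf + acc_pgdl2 = {acc_pgd_linf + acc_pgd_l2:.2f}")     # 141.00
print(f"matches agg: {abs((acc_pgd_linf + acc_pgd_l2) - agg) < 0.01}")  # True
```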
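
The BestOfMiles run failed with the CUDA out-of-memory error shown in its error_msg cell; the message itself suggests trying `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` to reduce allocator fragmentation (note, though, that most of the card's 79.14 GiB was held by another process at the time). A minimal sketch of applying that setting before re-running the evaluation; the `run_evaluation()` entry point is hypothetical:

```python
# Sketch: apply the allocator setting suggested in the error message, then re-run.
# PYTORCH_CUDA_ALLOC_CONF must be set before the first CUDA allocation, so set it
# before importing torch (or export it in the shell that launches the job).
import os

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch

if torch.cuda.is_available():
    # Check how much memory is actually free on the current GPU before starting.
    free, total = torch.cuda.mem_get_info()
    print(f"GPU: {free / 2**30:.2f} GiB free of {total / 2**30:.2f} GiB")

# run_evaluation("BestOfMiles")  # hypothetical entry point for the failed run
```

The equivalent shell form would be to export `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` in the environment that launches the evaluation script.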

Plots