ChanServ changed the topic of #freedesktop to: https://www.freedesktop.org infrastructure and online services || for questions about freedesktop.org projects, please see each project's contact || for discussions about specifications, please use https://gitlab.freedesktop.org/xdg or xdg@lists.freedesktop.org
ppascher has joined #freedesktop
Haaninjo has quit [Quit: Ex-Chat]
<DavidHeidelberg[m]> what most likely happens when I s3cp to the same file? (except caches — the runner proxy cache would keep it)
___nick___ has quit []
___nick___ has joined #freedesktop
___nick___ has quit []
jadahl has quit [Remote host closed the connection]
___nick___ has joined #freedesktop
Prf_Jakob has quit [Remote host closed the connection]
Prf_Jakob has joined #freedesktop
jadahl has joined #freedesktop
jarthur has quit [Quit: Textual IRC Client: www.textualapp.com]
ppascher has quit [charon.oftc.net helix.oftc.net]
ppascher has joined #freedesktop
ximion has quit []
alpernebbi has quit [Quit: alpernebbi]
alpernebbi has joined #freedesktop
___nick___ has quit []
___nick___ has joined #freedesktop
alanc has quit [Remote host closed the connection]
alanc has joined #freedesktop
danvet has joined #freedesktop
MajorBiscuit has joined #freedesktop
<ishitatsuyuki> I'm seeing a bunch of ERROR: Job failed (system failure): Error response from daemon: container create: allocating lock for new container: allocation failed; exceeded num_locks (2048) (docker.go:534:0s)
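For context: podman backs each container, pod, and volume with a lock from a fixed SHM pool, and "exceeded num_locks (2048)" means that pool is exhausted, typically because stale containers never got cleaned up. A minimal sketch of raising the limit on a runner host, assuming the stock containers.conf location; the value is illustrative only:

    # /etc/containers/containers.conf
    [engine]
    # default is 2048; each container/pod/volume consumes one lock
    num_locks = 4096

After editing, the lock table has to be reallocated (e.g. via `podman system renumber` with no containers running) before the new limit takes effect.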
Major_Biscuit has joined #freedesktop
MajorBiscuit has quit [Ping timeout: 480 seconds]
<MrCooper> seems to be the fdo-equinix-m3l-11 runner
mvlad has joined #freedesktop
<MrCooper> bentiss: any idea why ccache from F37 hangs in the futex syscall in CI?
<bentiss> MrCooper: ouch, that's way too many words for me to parse in the morning :)
<MrCooper> sorry :)
<bentiss> heh, no worries
<MrCooper> https://gitlab.freedesktop.org/daenzer/mesa/-/jobs/36120295 was hanging for minutes before I cancelled it
<bentiss> FWIW, fdo-equinix-m3l-11 has a lot of crosvm jobs running
<bentiss> fdo-equinix-m3l-12 is pretty much unused now but has a load average of 15, meaning that it was quite busy not so long ago
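Load averages are reported over 1-, 5-, and 15-minute windows, so an otherwise idle machine still showing a 15-minute average of 15 was saturated only a short while ago. Illustrative output, not taken from the actual runner:

    $ uptime
     09:12:37 up 21 days,  2:04,  1 user,  load average: 0.35, 4.20, 15.02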
<bentiss> let me reboot/upgrade them. Though I am afraid virglrenderer is messing with the runner
<MrCooper> to be clear, ccache hanging seems unrelated to overloaded runners
<bentiss> well, the runners do not seem to be in very good shape, so a reboot might help
<MrCooper> didn't seem to affect ccache from F34 though, or I would have expected screaming on #dri-devel
<MrCooper> still hanging on fdo-equinix-m3l-12: https://gitlab.freedesktop.org/daenzer/mesa/-/jobs/36124305
<bentiss> anyway, m3l-11 is now rebooting, we'll see if this one fails
<bentiss> m3l-12 still not updated/rebooted FWIW
<bentiss> I prefer not killing 2 out of the 3 runners at the same time
<bentiss> MrCooper: mind if I kill that job?
<MrCooper> not at all
<bentiss> k, thanks
<bentiss> damn, it seems that the reboot did not help: https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/jobs/36124753
<ishitatsuyuki> it did get my radv job through though
<bentiss> yeah some are passing, most are not
<bentiss> I am tempted to just dump m3l-11 and respin a new one
* bentiss just doesn't have the time to debug this today
<bentiss> OK, so all 3 x86 runners have been updated, but m3l-11 is still behaving badly, so I disabled it. I'll respin a new one this afternoon when I get a little bit more time
<bentiss> MrCooper: ^^ please tell me if m3l-12 is still acting badly
<MrCooper> I don't think it's related to the other runner issues
<bentiss> MrCooper: it could be a f37 issue
<MrCooper> some kind of bad interaction between f37 and the CI environment, yeah
<bentiss> have you tried running it locally in a container?
<MrCooper> I'll try if it happens with f36 as well
<bentiss> I got to go for an errand, bbl
<MrCooper> bentiss: doesn't hang on my personal gitlab-runner (which is an old version though due to Debian, 13.3.1): https://gitlab.freedesktop.org/daenzer/mesa/-/jobs/36125610
<MrCooper> also note that it uses docker, not podman
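The log implies the shared fdo runners execute jobs under podman (hence the podman lock error above), while MrCooper's personal runner talks to real dockerd, which is one reason the two can behave differently. A hedged sketch of one common way to back GitLab's docker executor with podman — the runner name, socket path, and image are assumptions, not taken from the actual fdo configuration:

    # /etc/gitlab-runner/config.toml (sketch)
    [[runners]]
      name     = "fdo-equinix-m3l-12"
      executor = "docker"
      [runners.docker]
        # docker executor speaking to rootful podman's Docker-compatible API socket
        host  = "unix:///run/podman/podman.sock"
        image = "fedora:37"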
MrCooper has quit [Quit: Leaving]
MrCooper has joined #freedesktop
kxkamil2 has quit []
ppascher has quit [Ping timeout: 480 seconds]
phasta has joined #freedesktop
phasta has quit [Ping timeout: 480 seconds]
bnieuwenhuizen has quit [Quit: Bye]
bnieuwenhuizen has joined #freedesktop
phasta has joined #freedesktop
<bentiss> Alright. I have now spun up m3l-14 and will burn m3l-11 with fire asap
<zmike> is it normal for the sanity job to take 40 minutes to start
<daniels> no
vkareh has joined #freedesktop
<daniels> see ongoing fire above
<zmike> k just checking if same issue
vkareh has quit []
vkareh has joined #freedesktop
phasta has quit [Ping timeout: 480 seconds]
phasta has joined #freedesktop
bilboed0 has joined #freedesktop
bilboed has quit [Ping timeout: 480 seconds]
phasta has quit [Ping timeout: 480 seconds]
nous has joined #freedesktop
nous has quit []
vyivel has quit [Remote host closed the connection]
vyivel has joined #freedesktop
vkareh has quit [Remote host closed the connection]
vkareh has joined #freedesktop
<bentiss> indeed, that runner doesn't have its disks set up properly :(
ximion has joined #freedesktop
MajorBiscuit has joined #freedesktop
Major_Biscuit has quit [Ping timeout: 480 seconds]
lileo_ has quit []
lileo has joined #freedesktop
<bentiss> alright, I respun a new one; the cloud-init config file was completely wrong, and fixing it would have taken more time than just bringing up a new one
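For illustration only, a minimal cloud-init fragment of the kind used to set up a scratch disk for a runner; the device, mount point, filesystem, and label are assumptions, not the actual fdo configuration:

    #cloud-config
    disk_setup:
      /dev/nvme1n1:
        table_type: gpt
        layout: true
        overwrite: false
    fs_setup:
      - device: /dev/nvme1n1
        partition: auto
        filesystem: ext4
        label: ci-scratch
    mounts:
      - [ /dev/nvme1n1p1, /var/lib/containers, ext4, "defaults,noatime" ]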
MajorBiscuit has quit [Ping timeout: 482 seconds]
<alatiera> one of the gst runners is out of space too
<alatiera> fixing
Haaninjo has joined #freedesktop
Lyude has quit [Read error: Connection reset by peer]
Lyude has joined #freedesktop
AbleBacon has joined #freedesktop
vkareh has quit [Quit: WeeChat 3.6]
___nick___ has quit []
___nick___ has joined #freedesktop
Kayden has quit [Quit: to jf]
Kayden has joined #freedesktop
danvet has quit [Ping timeout: 480 seconds]
mvlad has quit [Remote host closed the connection]
Lyude has quit [Quit: Bouncer restarting]
Lyude has joined #freedesktop