ChanServ changed the topic of #freedesktop to: https://www.freedesktop.org infrastructure and online services || for questions about freedesktop.org projects, please see each project's contact || for discussions about specifications, please use https://gitlab.freedesktop.org/xdg or xdg@lists.freedesktop.org
Haaninjo has quit [Quit: Ex-Chat]
columbarius has joined #freedesktop
co1umbarius has quit [Ping timeout: 480 seconds]
<whot>
pendingchaos: it's not listed as an enabled project, so damspam doesn't handle it - all projects need to request handling individually
dsrt^ has quit [Remote host closed the connection]
blatant has quit [Quit: WeeChat 4.0.0]
<pq>
half-hour job queue times for weston on the x86 runners - is it normal for that to happen occasionally?
<zmike>
seems like runner availability is very low today
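(A quick way to check the backlog yourself is the GitLab API; a minimal sketch, assuming python-gitlab and anonymous read access to a public project - the project path here is just an example:)

    import gitlab

    gl = gitlab.Gitlab("https://gitlab.freedesktop.org")
    proj = gl.projects.get("wayland/weston")

    # Jobs that have been created but not yet picked up by a runner.
    pending = proj.jobs.list(scope="pending", per_page=100)
    print(f"{len(pending)} jobs waiting for a runner")
    for job in pending:
        print(job.name, job.created_at)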
<daniels>
yeah it's bad
<daniels>
looked like NM was smashing it to bits
<daniels>
I have nfi if their test suite is just hugely single-threaded or something, but their jobs take like 45min to complete, and they have a ton of them
<zmike>
can we vote them off the island
<bentiss>
FWIW, it was not great this morning, and when I checked, a couple of people had manually started mesa pipelines, which also take a lot of capacity
<bentiss>
in addition to marge
<zmike>
I think people running manual mesa pipelines is pretty normal?
<zmike>
or at least I run them regularly most days and I know others do too
<bentiss>
not when you run the full pipeline
<bentiss>
the problem is that mesa is heavy, and saying NM is bad because they run a pipeline when they do a tag is not fair IMO
<zmike>
haha
<zmike>
all CI jobs are bad obviously
<zmike>
if nobody ran them we wouldn't have problems!
<daniels>
mesa is heavy but pretty ephemeral on the x86 runners afaict
<bentiss>
and back to the mesa one: if every MR submitter ran the full pipeline, we would not have enough runners. Luckily the mesa pipeline is smart enough to detect what changed and skip the rest
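(That change detection lives in mesa's .gitlab-ci.yml rules; reduced to a hedged Python sketch with made-up path patterns, the idea is to map changed files to the job groups that actually need to run:)

    from fnmatch import fnmatch

    # Hypothetical patterns; mesa's real rules are far more fine-grained.
    RULES = {
        "src/amd/*": {"radv jobs"},
        "src/intel/*": {"anv jobs", "iris jobs"},
        "src/compiler/*": {"all driver jobs"},
    }

    def jobs_for_changes(changed_files):
        """Return only the job groups a change set actually touches."""
        needed = set()
        for path in changed_files:
            for pattern, jobs in RULES.items():
                if fnmatch(path, pattern):
                    needed |= jobs
        return needed

    print(jobs_for_changes(["src/amd/vulkan/radv_pipeline.c"]))
    # {'radv jobs'} - jobs for unrelated drivers never get scheduled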
<daniels>
like even the manual jobs get distributed across the hw runners no-one else uses, and the rest is in placeholder jobs ... apart from build jobs, which are generally <10min
<daniels>
and yeah, I guess the issue with NM is that they tend to work in pretty huge batches
<bentiss>
well, NM usually only works on one platform for regular pipelines, except when they do a tag, in which case they test on everything
<bentiss>
and this takes a bit
<daniels>
like today there have been testing runs on main + 1.42 + 1.38, all of which occupy a huge number of job slots for 1h continuously
<daniels>
(we do need to figure out wtf shader-db takes so long in mesa's debian-build-testing job tho)
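(Pulling numbers for that is straightforward; a sketch, again assuming python-gitlab - the job name is the one mentioned above, and since shader-db is a step inside the job, this only bounds it:)

    import gitlab

    gl = gitlab.Gitlab("https://gitlab.freedesktop.org")
    mesa = gl.projects.get("mesa/mesa")

    # Survey debian-build-testing durations over recent pipelines.
    durations = []
    for pipeline in mesa.pipelines.list(per_page=20):
        for job in pipeline.jobs.list(all=True):
            if job.name == "debian-build-testing" and job.duration:
                durations.append(job.duration)

    if durations:
        print(f"{len(durations)} runs, avg {sum(durations)/len(durations):.0f}s")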
<bentiss>
__tim: more seriously, I think you should have gitlab admin rights. The more the merrier
<bentiss>
the hard part is not gitlab itself, the hard part is the hosting IMO
<__tim>
I don't really expect or plan to do anything tbh, but it would be nice to be able to look at what runners are up to and such if there are issues
<daniels>
++++
<bentiss>
__tim, daniels: done :)
<__tim>
ta
<bentiss>
__tim: the only thing to be aware of is that you now have visibility into all of the projects on the instance. So be careful with your api tokens too, because you can now nuke users
<__tim>
right, good to know
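(One habit that limits the blast radius - a sketch; read_api is a real GitLab token scope, the token value is a placeholder: do day-to-day inspection with a token that only has read_api, and keep the full api-scoped admin token out of scripts:)

    import gitlab

    # A read_api-scoped token can inspect runners, jobs and users,
    # but destructive calls (e.g. deleting a user) are rejected.
    gl = gitlab.Gitlab("https://gitlab.freedesktop.org",
                       private_token="glpat-placeholder")
    gl.auth()

    for runner in gl.runners.all():  # instance-wide listing (admin)
        print(runner.description, runner.status)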
<daniels>
__tim: just don't let him trick you into finding out how the storage cluster works
agd5f has joined #freedesktop
<bentiss>
daniels: I guess __tim will help me connect the 2 ceph clusters if we do the migration to the DC :)
<daniels>
agreed
<hakzsam>
are you aware of the sanity job being stuck?
<daniels>
hakzsam: yes
<hakzsam>
ok
<daniels>
demand > capacity
<daniels>
there's no service problem per se, just not enough of it
<MrCooper>
satisfying people's expectations of how long it should take for a runner to pick up a job requires over-provisioning runner capacity in general
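(That's the standard queueing-theory trade-off: as utilization approaches 100%, wait times blow up. A back-of-the-envelope sketch under M/M/c assumptions, with made-up numbers:)

    from math import factorial

    def erlang_c(servers, offered_load):
        """Probability an arriving job must wait (M/M/c queue)."""
        a, c = offered_load, servers
        top = a**c / factorial(c) * c / (c - a)
        return top / (sum(a**k / factorial(k) for k in range(c)) + top)

    # Made-up load: 60 jobs/hour, each needing 45min of runner time
    # = 45 runner-hours of work arriving per hour.
    rate, svc = 60.0, 0.75
    load = rate * svc
    for runners in (46, 50, 60, 70):
        wait_h = erlang_c(runners, load) / (runners / svc - rate)
        print(f"{runners} runners: avg wait {wait_h * 60:5.1f} min")

(The exact numbers are invented, but the shape is the point: queueing delay only falls off once capacity comfortably exceeds average demand.)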