ChanServ changed the topic of #freedesktop to: https://www.freedesktop.org infrastructure and online services || for questions about freedesktop.org projects, please see each project's contact || for discussions about specifications, please use https://gitlab.freedesktop.org/xdg or xdg@lists.freedesktop.org
alpernebbi has quit [Read error: Connection reset by peer]
alpernebbi has joined #freedesktop
ximion1 has quit [Remote host closed the connection]
ximion1 has joined #freedesktop
ximion1 has quit []
agd5f_ has joined #freedesktop
agd5f has quit [Ping timeout: 480 seconds]
itoral has joined #freedesktop
danvet has joined #freedesktop
ryanpavlik has quit []
ocrete has quit [Quit: Ping timeout (120 seconds)]
sergi has quit [Quit: Ping timeout (120 seconds)]
italove has quit [Quit: Ping timeout (120 seconds)]
MajorBiscuit has quit [Read error: Connection reset by peer]
dcunit3d has joined #freedesktop
MajorBiscuit has joined #freedesktop
<MrCooper>
eric_engestrom: right now, enabling LTO for binaries used in CI test jobs would result in MRs being "randomly" unmergeable, because enabling LTO has a chance strictly between 0 and 1 of breaking stuff due to as-yet-unknown factors (the particular commit alone not being sufficient to determine which side the coin will end up on)
<MrCooper>
(that's for Mesa)
MajorBiscuit has quit [Quit: WeeChat 3.6]
<eric_engestrom>
MrCooper: yeah I know, we wouldn't be able to merge an MR that enables LTO for testing until we've fixed these bugs
<eric_engestrom>
daniels: indeed, the runner went down again; it looks like disk issues. We'll keep it offline until we've fixed that, so half capacity until further notice
MajorBiscuit has joined #freedesktop
<eric_engestrom>
DavidHeidelberg[m]: about LTO taking longer (and that being an issue if tests have to wait on it): I expect it takes longer to build with LTO enabled, but I don't have an idea of how much longer; do you have before/after numbers?
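For reference: Mesa builds with Meson, so the LTO switch under discussion is presumably Meson's built-in b_lto base option rather than anything Mesa-specific. A minimal sketch, assuming a standard Meson build directory (paths and values illustrative):

    # fresh build with link-time optimization enabled
    meson setup build/ -Db_lto=true
    ninja -C build/

    # or flip it on an existing build directory
    meson configure build/ -Db_lto=true
    ninja -C build/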
rgallaispou has joined #freedesktop
Haaninjo has joined #freedesktop
AbleBacon has quit [Read error: Connection reset by peer]
rgallaispou has left #freedesktop [#freedesktop]
<karolherbst>
daniels (or any other admin): mind banning dariaamanda769 real quick? I've filed a report and it's a spam bot
<karolherbst>
just one comment tho :D
<daniels>
done
<karolherbst>
thx
<DavidHeidelberg[m]>
eric_engestrom: I would say 1-3 minutes extra, depending on how many components are linking
itoral has quit [Remote host closed the connection]
<eric_engestrom>
DavidHeidelberg[m]: that's not that much (although more than I expected); IMO that shouldn't be a blocker
<DavidHeidelberg[m]>
container -> build (currently 2-4 min; with LTO 3-8 min) -> test (15 min). There are two risky things. First is the slowdown before the tests get started. Second is increased flakiness in CI.
<DavidHeidelberg[m]>
On the other hand, it has one upside I should have factored into the equation: the tests get faster. What I subjectively saw on the LTO pipeline is roughly a 10% increase in performance (I guess because we're largely CPU-bound)
<DavidHeidelberg[m]>
btw, yesterday I found out that another distro, Chimera Linux, uses Mesa with LTO.
pendingchaos_ has joined #freedesktop
pendingchaos has quit [Ping timeout: 480 seconds]
pendingchaos has joined #freedesktop
pendingchaos_ has quit [Ping timeout: 480 seconds]
<MrCooper>
DavidHeidelberg[m]: since we can't enable LTO in the build jobs which produce binaries for test jobs, enabling LTO in the remaining build-test jobs (whose artifacts no test job uses) shouldn't affect the total runtime of pipelines with test jobs
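A hedged sketch of the split MrCooper describes, with hypothetical job names rather than the actual Mesa .gitlab-ci.yml: LTO is enabled only in a build job whose artifacts no test job consumes, so test jobs keep using the non-LTO binaries and the pipeline's critical path through container -> build -> test is unchanged.

    build-for-tests:        # artifacts feed the test jobs, so no LTO here
      stage: build
      script:
        - meson setup build/ -Db_lto=false
        - ninja -C build/
      artifacts:
        paths: [build/]

    build-lto-check:        # only verifies that the LTO build succeeds
      stage: build
      script:
        - meson setup build/ -Db_lto=true
        - ninja -C build/

    test:
      stage: test
      needs: [build-for-tests]   # never waits on build-lto-check
      script:
        - ./run-tests.sh build/  # hypothetical test entry point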
<daniels>
it was triggered at 2:54pm and it's still running
<eric_engestrom>
robclark: waiting time counts, it's not just running time
ybogdano has joined #freedesktop
<robclark>
hmm, maybe marge should wait longer.. the CI jobs themselves have timeouts.. as it is, this is just going to result in yet another CI run and more wait time
<daniels>
yeah, we could set it higher, but it's also an indication of something catastrophically wrong
<daniels>
in this case, it's that NetworkManager is doing a release and sort of DoSing everything with long-running tests which don't parallelise well
<robclark>
bleh
<daniels>
having someone actually spend the time to fix that would be awesome, as would someone being able to sit down and justify to Equinix why we need more CI resources
<MrCooper>
robclark: longer timeout for Marge means more time wasted when something goes wrong like this
<MrCooper>
well, I guess not exactly like this
<robclark>
I think if it were a real timeout, marge would see that the pipeline failed (or at least that is my assumption)
damian has quit []
<MrCooper>
not always, e.g. Marge also hits the timeout when the pipeline never starts in the first place
<MrCooper>
in that case, Marge's timeout limits the time wasted doing literally nothing
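Two different timeouts are being contrasted here: GitLab's per-job timeout, which only starts counting once a job is actually running, and Marge's own overall wait for the pipeline it triggered, which also covers time spent queued or never starting. A hedged sketch, assuming marge-bot's --ci-timeout option and the standard .gitlab-ci.yml timeout keyword; values and the job name are illustrative:

    # marge-bot side: total time to wait for a triggered pipeline
    # (other required options such as auth token and project omitted)
    marge-bot --ci-timeout 60min ...

    # .gitlab-ci.yml side: per-job limit, applies only once the job runs
    some-test-job:
      timeout: 30 minutes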
siddh has joined #freedesktop
<MrCooper>
siddh: hi, feel free to ask your question about dri-devel list DMARC failures anytime