ChanServ changed the topic of #freedesktop to: https://www.freedesktop.org infrastructure and online services || for questions about freedesktop.org projects, please see each project's contact || for discussions about specifications, please use https://gitlab.freedesktop.org/xdg or xdg@lists.freedesktop.org
Kayden has quit [Quit: -> home]
guludo has quit [Ping timeout: 480 seconds]
shbrngdo has quit [Remote host closed the connection]
shbrngdo has joined #freedesktop
AbleBacon has quit [Read error: Connection reset by peer]
ximion has quit [Remote host closed the connection]
swatish2 has joined #freedesktop
swatish2 has quit [Ping timeout: 480 seconds]
scrumplex_ has joined #freedesktop
pi3 has joined #freedesktop
scrumplex has quit [Ping timeout: 480 seconds]
rvr has quit [Ping timeout: 480 seconds]
Kayden has joined #freedesktop
BeutifullScience has joined #freedesktop
konstantin_ has joined #freedesktop
konstantin is now known as Guest12100
dcunit3d has quit [Quit: Quitted]
Guest12100 has quit [Ping timeout: 480 seconds]
BeutifullScience has quit []
eluks has quit [Remote host closed the connection]
eluks has joined #freedesktop
AbleBacon has joined #freedesktop
Kayden has quit [Ping timeout: 480 seconds]
swatish2 has joined #freedesktop
swatish2 has quit [Remote host closed the connection]
swatish2 has joined #freedesktop
swatish21 has joined #freedesktop
swatish2 has quit [Ping timeout: 480 seconds]
swatish21 is now known as swatish2
sghuge has quit [Remote host closed the connection]
sghuge has joined #freedesktop
swatish21 has joined #freedesktop
swatish2 has quit [Ping timeout: 480 seconds]
sima has joined #freedesktop
jsa1 has joined #freedesktop
swatish21 is now known as swatish2
tzimmermann has joined #freedesktop
Kayden has joined #freedesktop
AbleBacon has quit [Read error: Connection reset by peer]
airlied has quit [Remote host closed the connection]
airlied has joined #freedesktop
<sergi> eric_engestrom: Now, with !34120 merged and the farms affected by the nginx-proxy/cache problem disabled, we can look into it calmly but with commitment. None of my experiments to understand the problem gave me any useful information. How can we sync up to find the root cause affecting those proxies?
kxkamil has quit []
mripard has joined #freedesktop
swatish2 has quit [Ping timeout: 480 seconds]
Thymo has quit [Quit: ZNC - http://znc.in]
Thymo has joined #freedesktop
kxkamil has joined #freedesktop
<eric_engestrom> sergi: a workaround has been found by jasuarez, MR coming soon :)
<sergi> great! thanks!
dwt has left #freedesktop [#freedesktop]
Thymo has quit [Quit: ZNC - http://znc.in]
<eric_engestrom> sergi, daniels: please send this to anyone with a lava or baremetal farm: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/34174
<eric_engestrom> the igalia farm already has that change and is about to be re-enabled
fomys_ has joined #freedesktop
Thymo has joined #freedesktop
<eric_engestrom> (igalia farm re-enablement: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/34175)
swatish2 has joined #freedesktop
<daniels> enunes: ^ for Lima
<daniels> blu: ^ for vmw
<daniels> robclark: ^ for fdno
<eric_engestrom> thanks!
f_ is now known as funderscore
funderscore is now known as f_
f_ is now known as funderscore
funderscore is now known as f_
guludo has joined #freedesktop
<enunes> daniels: eric_engestrom: I had the old snippet indeed, applied the fix now
<blu> heya, so I've patched my linux branch to upload artifacts via curl now, and it seems to upload fine, but for some reason I cannot consume any of the new artifacts under https://s3.freedesktop.org/mesa-lava/.. -- the files are just 404. There used to be a better folder for these artifacts, right?
<valentine> blu: Hey, I checked your branch, and the problem is that your S3 URL contains the filename again
swatish2 has quit [Ping timeout: 480 seconds]
<valentine> with the recent changes, the upload URL should simply be "https://${S3_PATH}/"
<blu> valentine: oops. thanks!
<valentine> No problem :)
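For reference, the fix valentine describes comes down to letting curl append the file name itself instead of repeating it in the URL. A minimal sketch, not the actual job script; the artifact name and the token variable are placeholders:

    # With --upload-file and a URL ending in "/", curl appends the local
    # file name to the URL, so the name must not appear in the URL again.
    # kernel.tar.zst and S3_JWT are illustrative placeholders.
    curl --fail --retry 3 \
         --header "Authorization: Bearer ${S3_JWT}" \
         --upload-file kernel.tar.zst \
         "https://${S3_PATH}/"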
todi has quit []
todi has joined #freedesktop
<daniels> blu: please please rebase your kernel so it goes to mesa-rootfs and not mesa-lava
<daniels> mesa-lava expires after a month; mesa-rootfs does not
<daniels> so if you do this, we don't have to worry about the vmware jobs all failing because someone needs to regenerate the kernel again
<blu> daniels: ah, that's the folder I was looking for. thanks for that too!
<daniels> (using the kernel from gfx-ci/linux would also be really great)
swatish2 has joined #freedesktop
pjakobsson has quit [Remote host closed the connection]
<robclark> eric_engestrom: is that a change I need to make locally to the farm?
pjakobsson has joined #freedesktop
kasper93 has joined #freedesktop
kasper93 has quit [Remote host closed the connection]
kasper93_ has quit [Ping timeout: 480 seconds]
kasper93 has joined #freedesktop
fomys_ has quit []
pjakobsson has quit [Remote host closed the connection]
swatish2 has quit [Ping timeout: 480 seconds]
<__tim> do we need to bump the ci-templates commit in our ci pipelines for the image jobs to work again or will that fix itself once the remaining registry stuff is sorted?
<__tim> (current fails with 'operation not permitted' doing 'podman login', but mesa pipelines seem to be working)
<bentiss> __tim: job link?
* bentiss looks
<bentiss> jobs tagged placeholder-job are not privileged
<__tim> ah, so need to add the kvm label?
<bentiss> remove the placeholder label basically
<__tim> hrm ok
<bentiss> I'm seriously considering how to solve that "placeholder" problem
<bentiss> it used to be "let's keep some long-standing jobs around without limiting our capacity", but now it's used as "fast-forward the queue and have the job running now"
<__tim> it was handy imho that these image jobs, which 99.999% of the time do nothing and finish within seconds, had priority, because they block the rest of the pipeline
<bentiss> it's a valid use case, but probably the implementation is wrong
<__tim> ok
<bentiss> __tim: FWIW, slapping kvm should give you a fast forward too ATM, but not in the long run
<__tim> would you recommend we do that for now, or should we just wait until some other solution is found later?
<__tim> we don't want to game/abuse the system either of course
<bentiss> yeah, that should be fine. I don't have a full solution for this. We briefly talked about this at Plumbers with daniels, but right now I'm not sure I'll have the bandwidth to draw up a new solution :/
<__tim> alright, I'll add 'kvm' for now then, and please shout if it causes problems
<bentiss> __tim: thanks :)
<__tim> Thank *you*!
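For context, the change __tim agrees to make here is just a runner-tag swap on the image job. A rough .gitlab-ci.yml sketch, with a made-up job name and script, assuming the job previously carried the placeholder-job tag:

    # Illustrative fragment only; the real job name and script differ.
    container-image:
      stage: prep
      tags:
        - kvm              # was: placeholder-job, which is not privileged
      script:
        - podman login --username "$CI_REGISTRY_USER" --password "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"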
<blu> robclark: that change needs to go into the nginx config of your farm. please see set_by_lua here: https://docs.mesa3d.org/ci/bare-metal.html#caching-downloads
<robclark> ok, yeah, I eventually figured that out.. the next challenge is that I don't actually have sudo on the box ;-)
<blu> robclark: oh :/
<robclark> ok, I think I got that sorted
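To make the farm-side change concrete: set_by_lua comes from the nginx Lua module used by these caching proxies, and the authoritative snippet is the one in the Mesa docs blu linked above. A rough sketch of the shape of that configuration only; the location path, variable names, and cache zone here are made up:

    # Illustration only -- copy the real snippet from
    # https://docs.mesa3d.org/ci/bare-metal.html#caching-downloads
    location / {
        resolver 8.8.8.8;                 # proxy_pass to a variable needs a resolver
        set_by_lua_block $proxy_uri {
            -- compute the upstream URI to fetch and cache from the request
            return ngx.unescape_uri(ngx.var.arg_uri)
        }
        proxy_cache     downloads;        # assumes a proxy_cache_path zone named "downloads"
        proxy_cache_key $proxy_uri;
        proxy_pass      $proxy_uri;
    }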
<__tim> ooc, is runner capacity already at the full planned level? I'm seeing jobs queued for 25-35 minutes in the mesa pipelines
<Ford_Prefect> PipeWire pipelines are also crawling rn
<__tim> ah, I guess I should have re-read the ticket