ChanServ changed the topic of #freedesktop to: https://www.freedesktop.org infrastructure and online services || for questions about freedesktop.org projects, please see each project's contact || for discussions about specifications, please use https://gitlab.freedesktop.org/xdg or xdg@lists.freedesktop.org
Kayden has quit [Quit: -> home]
guludo has quit [Ping timeout: 480 seconds]
shbrngdo has quit [Remote host closed the connection]
shbrngdo has joined #freedesktop
AbleBacon has quit [Read error: Connection reset by peer]
ximion has quit [Remote host closed the connection]
swatish2 has joined #freedesktop
swatish2 has quit [Ping timeout: 480 seconds]
scrumplex_ has joined #freedesktop
pi3 has joined #freedesktop
scrumplex has quit [Ping timeout: 480 seconds]
rvr has quit [Ping timeout: 480 seconds]
Kayden has joined #freedesktop
BeutifullScience has joined #freedesktop
konstantin_ has joined #freedesktop
konstantin is now known as Guest12100
dcunit3d has quit [Quit: Quitted]
Guest12100 has quit [Ping timeout: 480 seconds]
BeutifullScience has quit []
eluks has quit [Remote host closed the connection]
eluks has joined #freedesktop
AbleBacon has joined #freedesktop
Kayden has quit [Ping timeout: 480 seconds]
swatish2 has joined #freedesktop
swatish2 has quit [Remote host closed the connection]
swatish2 has joined #freedesktop
swatish21 has joined #freedesktop
swatish2 has quit [Ping timeout: 480 seconds]
swatish21 is now known as swatish2
sghuge has quit [Remote host closed the connection]
sghuge has joined #freedesktop
swatish21 has joined #freedesktop
swatish2 has quit [Ping timeout: 480 seconds]
sima has joined #freedesktop
jsa1 has joined #freedesktop
swatish21 is now known as swatish2
tzimmermann has joined #freedesktop
Kayden has joined #freedesktop
AbleBacon has quit [Read error: Connection reset by peer]
airlied has quit [Remote host closed the connection]
airlied has joined #freedesktop
<sergi>
eric_engestrom: Now, with !34120 merged and the farms affected by the nginx-proxy/cache problem disabled, we can work on it calmly but persistently. None of my experiments to understand the problem gave me any useful information. How can we sync up to track down the root cause affecting those proxies?
<enunes>
daniels: eric_engestrom: I had the old snippet indeed, applied the fix now
<blu>
heya, so I've patched my linux branch to upload artifacts via curl now, and it seems to upload fine, but for some reason I cannot consume any of the new artifacts under https://s3.freedesktop.org/mesa-lava/.. -- requests for the files just return 404. There used to be a better folder for these artifacts, right?
<valentine>
blu: Hey, I checked your branch, and the problem is that your S3 URL contains the filename again
swatish2 has quit [Ping timeout: 480 seconds]
<valentine>
with the recent changes, the upload URL should simply be "https://${S3_PATH}/"
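A minimal sketch of what the corrected upload could look like, assuming the pipeline already exports S3_PATH and the path to the kernel image; the job name and the KERNEL_IMAGE variable are placeholders, and the only real point is the trailing-slash URL (with curl's --upload-file, a URL ending in "/" gets the local file name appended automatically):

  # hypothetical job sketch; only the trailing-slash URL reflects the fix above
  upload-kernel:
    script:
      # curl appends the basename of ${KERNEL_IMAGE} to the trailing "/",
      # so the file name must not be repeated in the URL itself
      - curl --fail --retry 3 --upload-file "${KERNEL_IMAGE}" "https://${S3_PATH}/"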
<daniels>
blu: please please rebase your kernel so it goes to mesa-rootfs and not mesa-lava
<daniels>
mesa-lava expires after a month; mesa-rootfs does not
<daniels>
so if you do this, we don't have to worry about the vmware jobs all failing because someone needs to regenerate the kernel again
<blu>
daniels: ah, that's the folder I was looking for. thanks for that too!
<daniels>
(using the kernel from gfx-ci/linux would also be really great)
swatish2 has joined #freedesktop
pjakobsson has quit [Remote host closed the connection]
<robclark>
eric_engestrom: is that a change I need to make locally to the farm?
pjakobsson has joined #freedesktop
kasper93 has joined #freedesktop
kasper93 has quit [Remote host closed the connection]
kasper93_ has quit [Ping timeout: 480 seconds]
kasper93 has joined #freedesktop
fomys_ has quit []
pjakobsson has quit [Remote host closed the connection]
swatish2 has quit [Ping timeout: 480 seconds]
<__tim>
do we need to bump the ci-templates commit in our ci pipelines for the image jobs to work again or will that fix itself once the remaining registry stuff is sorted?
<__tim>
(currently fails with 'operation not permitted' during 'podman login', but mesa pipelines seem to be working)
<bentiss>
I'm seriously considering how to solve that "placeholder" problem
<bentiss>
it used to be "let's keep some long-standing jobs around without limiting our capacity", but now it's used as "fast-forward the queue and have the job running now"
<__tim>
it was handy imho that these image jobs, which 99.999% of the time do nothing and finish within seconds, had priority, because they block the rest of the pipeline
<bentiss>
it's a valid use case, but probably the implementation is wrong
<__tim>
ok
<bentiss>
__tim: FWIW, slapping the kvm tag on should give you a fast-forward too ATM, but not in the long run
<__tim>
would you recommend we do that for now, or should we just wait until some other solution is found later?
<__tim>
we don't want to game/abuse the system either of course
<bentiss>
yeah, that should be fine. I don't have a full solution for this. We briefly talked about it at Plumbers with daniels, but right now I'm not sure I'll have the bandwidth to draw up a new solution :/
<__tim>
alright, I'll add 'kvm' for now then, and please shout if it causes problems
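For illustration, a minimal sketch of what adding that tag could look like for one of the image jobs, assuming a typical .gitlab-ci.yml layout; the job name and the extends target are placeholders, only the "kvm" tag line is the change discussed above:

  # hypothetical image job sketch; everything except the "kvm" tag is a placeholder
  container-image:
    stage: container
    extends:
      - .container-image-template
    tags:
      - kvm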