ChanServ changed the topic of #freedesktop to: https://www.freedesktop.org infrastructure and online services || for questions about freedesktop.org projects, please see each project's contact || for discussions about specifications, please use https://gitlab.freedesktop.org/xdg or xdg@lists.freedesktop.org
ybogdano has joined #freedesktop
alanc has quit [Remote host closed the connection]
alanc has joined #freedesktop
ybogdano has quit [Ping timeout: 480 seconds]
<karolherbst>
who do I need to ask about what email address I had configured for my @x.org one?
<mupuf>
karolherbst: IIRC, emersion is working on the email side
<mupuf>
oh, and if anyone is poking around there, I wouldn't mind if mupuf@x.org redirected to martin.roukala@mupuf.org rather than martin.peres@free.fr :)
<karolherbst>
and do I actually have to use my x.org address for submitting CTS runs?
<karolherbst>
(for the account yes, but also for the submission?)
<airlied>
I think once you are logged in, you don't
<karolherbst>
I have to specify this in the submission document
<bentiss>
and it turns out the mesa git cache has been broken for the past month
<daniels>
bentiss: huh, so that policy doesn't actually do what it claims then ... I wonder when that happened, because it certainly used to not allow anon listings
<daniels>
I guess that's fine then
<bentiss>
yeah, we might as well just move on, because I am not sure how I could filter that out from the headers only
<bentiss>
actually, I can just prevent anon access to the bucket URL itself, because if you call https://s3.freedesktop.org/artifacts/bentiss for instance, it returns no such key
<bentiss>
and I think there is no value in allowing any anon listing of any dir
<bentiss>
works!
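A minimal way to double-check the behaviour described above, assuming the endpoint and path quoted in the discussion (the exact error body may vary):

    # anonymous request, no credentials
    curl -s https://s3.freedesktop.org/artifacts/bentiss
    # expected: an S3-style "no such key" error rather than a bucket/prefix listing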
<bentiss>
mupuf: welcome to the hidden world!
<bentiss>
mupuf: now you get to finish the migration from minio-packet.fd.o to s3.fd.o :)
<daniels>
bentiss: nice!
<bentiss>
daniels: so now the question: do we want/need to bring in all of the data currently on minio-packet over to s3, or should we leave some aside
<bentiss>
I suspect we can leave artifacts and git-cache behind, since we can recreate them easily
<bentiss>
we probably need to pull the mesa-* buckets
<daniels>
we don't need mesa-lava/ there, that's just a cache really
<bentiss>
OK, so that would be just mesa-tracie-*?
<daniels>
yep :)
<bentiss>
maybe we should also update the various scripts to not use ci-fairy minio but plain curl instead
<bentiss>
And I'll probably need help fixing all of the CIs (we need to change minio-packet.freedesktop.org to s3.freedesktop.org, and the access)
* daniels
nods, that'll take some doing
<daniels>
I'm still stuck in project hell this week but can find someone to help out
<daniels>
so for the JWT, just to be really sure, istio validates that against the JWK before it gets to OPA, right?
<bentiss>
daniels: the other way around. The JWT is validated after, but it doesn't change a thing: an invalid JWT or a JWT from an unknown issuer is rejected
<bentiss>
so worst case we could be DoSed at the OPA level, but it won't do much harm
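A rough way to sanity-check that rejection, sketched under the assumption that the istio gateway fronts s3.freedesktop.org and answers 401 before the request ever reaches OPA (URL and status code are illustrative):

    # a malformed or unsigned token should be rejected at the JWT filter
    curl -s -o /dev/null -w '%{http_code}\n' \
         -H 'Authorization: Bearer not.a.valid.jwt' \
         https://s3.freedesktop.org/artifacts/anything
    # expected: 401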
<bentiss>
I'll try to do the git-cache migration of mesa, and we'll start from there
<daniels>
bentiss: ok cool, thank you!
* bentiss
wonders if he should not patch ci-fairy minio first
<bentiss>
so it stays transparent for the users
ximion has joined #freedesktop
<bentiss>
daniels: I am tempted to *not* update ci-fairy minio, because it is a completely different system. However, if I use plain curl, the token might be seen in the logs. Is there a bash way to hide the token when it is stored in a file and "set -x" is enabled?
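One possible bash-side answer, as a minimal sketch (token file path, header file name and upload URL are made up; reading a header from a file needs curl >= 7.55):

    #!/bin/bash
    set -x

    # temporarily disable tracing around the secret handling;
    # the redirect hides the "set +x" line itself
    { set +x; } 2>/dev/null
    printf 'Authorization: Bearer %s\n' "$(cat /path/to/token)" > auth-header
    set -x

    # let curl read the header from the file, so the token never
    # appears on a traced command line
    curl -s -H @auth-header "$UPLOAD_URL"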
chipxxx has joined #freedesktop
<daniels>
by 'completely different system', I guess you mean STS vs. pure-JWT?
fahien has joined #freedesktop
MajorBiscuit has quit [Ping timeout: 480 seconds]
MajorBiscuit has joined #freedesktop
<bentiss>
daniels: yeah. On one hand, you have to log in to get the STS token and then rely on the S3 API, while now we just have a PUT request with the Bearer token
<bentiss>
the S3 API requires adding a bunch of headers, while here we just need to set the public-read ACL and we're done :)
<bentiss>
anyway, I'm working on 'ci-fairy s3cp', which handles those cases; that should be enough
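For reference, the plain PUT being described might look roughly like this; the bucket path is an assumption based on the discussion above, not the actual ci-fairy s3cp implementation, and the auth-header file is the one prepared in the earlier sketch:

    # single PUT with a bearer token and a public-read ACL
    curl -f -X PUT \
         -H @auth-header \
         -H "x-amz-acl: public-read" \
         --data-binary @results.tar.gz \
         https://s3.freedesktop.org/artifacts/project/pipeline-1234/results.tar.gz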
<mupuf>
bentiss: thank you! I think the first project will be to update indico and finally add backups before moving to much harder projects!
<mupuf>
what is the policy for accepting gitlab users?
chipxxx has quit [Ping timeout: 480 seconds]
chipxxx has joined #freedesktop
thaller has quit [Ping timeout: 480 seconds]
<bentiss>
mupuf: as long as they are not spammers, we accept them :)
<mupuf>
and we detect spammers by their silly email addresses?
<bentiss>
mupuf: and you should just ignore those emails; there is automation in place to accept people
<mupuf>
I see!
<mupuf>
I moved them to a different folder, but I could just as well delete them.
<bentiss>
basically after a few minutes, we ask them to validate their email, and after another few minutes, we accept them
<bentiss>
this filters out some bot/spam accounts that never validate their email and just start sending junk
<bentiss>
heh, I just realized there are 3 French people among the admins...
thaller has joined #freedesktop
<bentiss>
the plan to secretly rule the world is all in place :)
<daniels>
sacre bleu
<karolherbst>
sooo.. CL CTS submission is out :)
<mupuf>
daniels: ROFL
<mupuf>
bentiss: good job, seems like a nice working solution :)
<daniels>
karolherbst: \o/
<mupuf>
karolherbst: congrats!
* mupuf
removed all the stale runners he inadvertently created over the last 2 years
<bentiss>
tintou: I see a bunch of 2022-10-13 15:20:19,281 WARNING Suspicious CI status: 'manual'
<bentiss>
so I would say, technically it's not marge, just the CI in the project which is not correctly defined, as it requires manual actions that marge cannot trigger
<bentiss>
tintou: the pipeline was run in the submitter's namespace, not in the upstream project by marge, so it seems marge was not able to trigger the pipeline
<tintou>
bentiss: Yeah but she usually first does a rebase/amend/force-push
<bentiss>
I see she is trying to force push, but I suspect that since the trailers are already there, she didn't change any commit and pushed the same one
<bentiss>
that's weird, but I would try removing those trailers (Part-of) first, force pushing, and then assigning to marge
<tintou>
ah, thanks, that's probably it then; I'll ask the MR owner to remove the "Part-of" trailer from the commit message
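A hypothetical way to strip such a trailer before force-pushing (the sed expression and the origin/main base are illustrative; an interactive rebase with edited messages would achieve the same thing):

    # drop any "Part-of:" trailer line from the commits on this branch
    git filter-branch -f --msg-filter 'sed "/^Part-of:/d"' origin/main..HEAD
    git push --force-with-lease origin HEAD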
<tintou>
bentiss: We just retried without the trailer part and it still seems to be stuck
chipxxx has quit [Read error: Connection reset by peer]
<bentiss>
tintou: AFAICT she's still waiting on the previous CI to finish
<bentiss>
let me kick her
a-l-e has joined #freedesktop
<tintou>
thank you, that worked!
Guest2970 has quit []
a-l-e has quit []
a-l-e has joined #freedesktop
* bentiss
can now tick that box on his resume: "knows the deep internals of marge bot" :) (by turning it off and on again)
a-l-e has quit []
<tintou>
😅
ybogdano has joined #freedesktop
Major_Biscuit has joined #freedesktop
MajorBiscuit has quit [Ping timeout: 480 seconds]
Major_Biscuit has quit [Ping timeout: 480 seconds]
fahien has quit [Ping timeout: 480 seconds]
kem has quit [Ping timeout: 480 seconds]
ybogdano has quit [Ping timeout: 480 seconds]
kem has joined #freedesktop
chipxxx has joined #freedesktop
ybogdano has joined #freedesktop
fahien has joined #freedesktop
fahien has quit [Ping timeout: 480 seconds]
spawacz has joined #freedesktop
<spawacz>
Hello, I've been digging through the net but did not find an answer. Is it possible to run an X session on a remote machine AND have it render everything on the GPU, then stream it to my main PC? I've tried vncserver but it does not use the GPU (software rendering only). Forwarding X over ssh also does not seem to work.
<mattst88>
yeah, that sounds like VNC is what you want
ybogdano has quit [Ping timeout: 480 seconds]
Haaninjo has quit [Quit: Ex-Chat]
<airlied>
spawacz: there was a vnc module you loaded into the X server that did that
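An alternative to the in-server module airlied mentions, sketched with x11vnc (assumes x11vnc is installed on the remote machine and an X server is already running on the GPU at display :0; adjust the display, port and user as needed):

    # on the remote machine: export the existing GPU-rendered display
    x11vnc -display :0 -auth guess -localhost -forever

    # on the local machine: tunnel the VNC port over ssh and connect
    ssh -N -L 5900:localhost:5900 user@remote-host &
    vncviewer localhost:0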
ybogdano has joined #freedesktop
Ford_Prefect has quit []
Ford_Prefect has joined #freedesktop
chipxxx has quit [Read error: Connection reset by peer]
<bentiss>
anholt, robclark, tomeu: Hey, so since Sept 8 the mesa git cache archive has not been working. I managed to find out that 8aae8dc5342337c4d307c3c61e9af2e8792e81f1 is the culprit; basically there is something wrong in the freedreno CI, which doesn't have the no_scheduled_pipelines rules. Would any of you mind having a look?
<robclark>
bentiss: hmm, 8aae8dc5342337c4d307c3c61e9af2e8792e81f1 doesn't seem like a valid commit-id?
<bentiss>
robclark: f46064d40fc1e321490d6dfffc9e85a1277bc773 sorry (that was my local revert)
<daniels>
55724c2a5e6225003d04c875c1cd04ee46c9199c is probably the suspicious one ...
<daniels>
robclark: I thought perf-traces was post-hoc triggered, not scheduled
<bentiss>
daniels: if I revert f46064d40fc1e321490d6dfffc9e85a1277bc773 I can run the scheduled pipeline in my fork, so it doesn't seem to be the ci-templates bump
<robclark>
daniels: hmm, you could be right about how it is triggered.. I'm only repeating how I ASSumed it worked ;-)
<daniels>
heh
<daniels>
in fairness it is half-scheduled
<daniels>
there's a scheduled pipeline in another project which scrapes the Mesa projects and then manually triggers all the perf jobs from merged MRs
<daniels>
bentiss: I wish I could see why scheduling it does nothing ...
<bentiss>
daniels: IIRC I once found the answer in the logs, but I can't remember where it was (I can't seem to get anything out of sidekiq)
ybogdano has joined #freedesktop
danvet has quit [Ping timeout: 480 seconds]
chomwitt has quit [Ping timeout: 480 seconds]
anholt has quit [Remote host closed the connection]