daniels changed the topic of #freedesktop to: https://www.freedesktop.org infrastructure and online services || for questions about freedesktop.org projects, please see each project's contact || for discussions about specifications, please use https://gitlab.freedesktop.org/xdg or xdg@lists.freedesktop.org
todi has quit [Ping timeout: 480 seconds]
todi has joined #freedesktop
Kayden has joined #freedesktop
mvlad has quit [Remote host closed the connection]
<pinchartl>
are containers persistent on the runners? if I run the same job twice, will the second run have access to state from the first run? I thought the whole point of the containers was to start from a pristine state every time, is that wrong?
<pinchartl>
I don't know what dark magic is running in those containers, but
<pinchartl>
running "pip3 install --prefix=/usr --force-reinstall ." to install virtme-ng
<pinchartl>
+ ls -al /usr/bin/vng
<pinchartl>
ls: cannot access '/usr/bin/vng': No such file or directory
<pinchartl>
+ ls -al /usr/local/bin/vng
<pinchartl>
-rwxr-xr-x 1 root root 212 Nov 30 01:56 /usr/local/bin/vng
<pinchartl>
I assume that's automatic best effort cleanup of the project's git checkout
<pinchartl>
so I shouldn't install anything outside of /builds/$project/ in any job except the container build job if I wanted to keep things reproducible ?
<MrCooper>
you just can't rely on anything (not) being there from a previous run
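A takeaway from the exchange above, sketched as a minimal GitLab CI fragment. The package and `vng` binary are just the example from this log, and `PIP_PREFIX` is pip's standard environment override for `--prefix`; treat this as a sketch, not a verified fdo job:

```yaml
test-job:
  variables:
    # Install under the project dir, which CI re-checks-out each run,
    # rather than /usr/local, which may survive in a reused container volume.
    PIP_PREFIX: "$CI_PROJECT_DIR/.local"
  script:
    # --force-reinstall guards against stale installs from a previous run
    - pip3 install --force-reinstall .
    - PATH="$CI_PROJECT_DIR/.local/bin:$PATH" vng --help
```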
privacy has quit [Remote host closed the connection]
<pinchartl>
MrCooper: thanks
<pinchartl>
can I rely on /builds/$project/ not being affected by previous runs, or not even that ?
<MrCooper>
nothing
<pinchartl>
scary
<MrCooper>
it may be a new container from scratch, or it may be an existing one from a previous run
<MrCooper>
though the cleanup you pointed out in the job log might make sure the project checkout is pristine, not sure if that's an fdo script or a GitLab improvement
<pinchartl>
I thought the gitlab git clone strategy meant that the project's directory would be a fresh checkout
<daniels>
err?
<daniels>
so it doesn't reuse previous containers, no, it always creates a new container with the specified image
<pinchartl>
daniels: in my experience, things installed to /usr/local/ are kept between runs
<pinchartl>
I had to give --force-reinstall to pip3
<MrCooper>
yeah, "it always creates a new container with the specified image" is definitely incorrect
gert31 has joined #freedesktop
<pinchartl>
I don't know if it's on purpose, but that's what happened
<pinchartl>
(I can double-check)
<MrCooper>
"Reinitialized existing Git repository in /builds/pinchartl/libcamera/.git/" wouldn't be possible with a fresh container, would it?
<bentiss>
so... what happens is at the end of the run, the gitlab-runner takes a snapshot of the container and creates a "volume". Next time you re-run roughly the same job on the same runner, gitlab-runner picks up that volume and uses it as a not so fresh start
<bentiss>
so if you install stuff anywhere, and happen to reuse the same runner, there's a chance that the stuff is still installed
<bentiss>
but if you are on a completely different runner you'll end up with a completely fresh start, which explains why you can not rely on anything previously done
<bentiss>
(not to mention that the heuristic of picking up the previous volume is obscure to me)
<bentiss>
pinchartl, MrCooper ^^
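Given that behaviour, a job cannot assume a clean filesystem outside the checkout. A minimal defensive sketch, assuming the reused-volume behaviour described above; `/usr/local/bin/vng` is simply the stale artifact seen earlier in this log:

```shell
# Detect leftovers from a previously snapshotted container volume.
state=fresh
if [ -e /usr/local/bin/vng ]; then
    state=reused
fi
echo "container state: $state"
# Either way, force-reinstall so the job behaves identically on fresh
# and reused volumes, e.g.:
#   pip3 install --prefix=/usr --force-reinstall .
```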
<bentiss>
karolherbst: good news, I just had a meeting with whot this morning, and we found a path forward regarding spam detection!
<bentiss>
karolherbst: in gitlab 16.6 we can now "trust" a user, and this will bypass any spam detection mechanism
<bentiss>
karolherbst: the idea is that when a user has been internal for 14 days, we mark them as trusted and they'll never see a recaptcha anymore
<bentiss>
I 'just' need to update to gitlab 16.6... which I'll probably do in a bit
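The 14-day bulk-trust step could be sketched from a rails console roughly like this; the `trusted` field and the query are assumptions inferred from this discussion, not a confirmed GitLab schema:

```ruby
# Hypothetical gitlab-rails console sketch: trust every non-external
# account older than 14 days so it bypasses spam detection.
# Field and scope names are assumptions, not verified GitLab internals.
cutoff = 14.days.ago
User.where(external: false).where('created_at < ?', cutoff).find_each do |user|
  user.update!(trusted: true)
end
```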
<MrCooper>
cool
Inline has joined #freedesktop
ascent12 has joined #freedesktop
<karolherbst>
bentiss: cool
ascent12_ has quit [Ping timeout: 480 seconds]
<karolherbst>
bentiss: could we also trust any user using oauth? I don't think any spam uses that...
<bentiss>
there are a lot of spammers using oauth and gmail
<bentiss>
unfortunately
<karolherbst>
ohh really..
<karolherbst>
than non gootla oauth :D
<karolherbst>
*then *google
<bentiss>
yeah, the couple of times I looked deeply into the logs, it was 50/50
<karolherbst>
I see
<karolherbst>
well.. 14 days should be good enough
<karolherbst>
it's at least better than what we have
<bentiss>
yeah, we can also reduce it, and there'll be the option of manually trusting the user from the gitlab spam UI
<bentiss>
so worst case, we get pinged, we click on the user, done
<karolherbst>
could we also trust users through that trust account request thing?
<karolherbst>
but I guess there is no api still
<bentiss>
yes, but let's see first how this behaves, and yes, there is no API right now
<bentiss>
so I'll have to run a daily job to run the ruby changes
<bentiss>
starting the gitlab 16.6 upgrade now
<bentiss>
and it's completely done now
<bentiss>
karolherbst: please refrain from "trusting" users right now, we'll need to test a little bit first
<karolherbst>
fair enough
<bentiss>
well... I'm not sure I'll have time right now to do it, so maybe we can start trusting a few folks caught in spam logs right now
<karolherbst>
maybe update the gitlab notice and state "if you get annoyed by spam, ping us and we'll remove you from the spam checking" or something
<karolherbst>
and then we'll just do it for every user who complains/asks
<bentiss>
well, we need to do tests first, but that's the idea, yeah
<karolherbst>
*complains
<bentiss>
setting the "trusted" field through ruby is easy enough that I can solve the problem for all regular users
<bentiss>
but we just want to see how this shows up in the logs
<karolherbst>
fair
<bentiss>
by we, I mean whot mostly, but it's night time for him now
<karolherbst>
heh
<bentiss>
also, I'll probably wipe the spam logs clear once this process is set up properly
<bentiss>
(through ruby, again, because the UI is shit)
<karolherbst>
what can you even do through ruby? everything? Could we also clear up some of the report entries through ruby?
<bentiss>
karolherbst: yes, everything can be done
<karolherbst>
cool
<bentiss>
I'll also plan on clearing the extra abuses reports
<karolherbst>
:)
<karolherbst>
cool thanks
<bentiss>
but I need to grab lunch first
<karolherbst>
luckily those reports have ids.... so we could e.g. ping you in the future if there are new stuck ones and you could remove them?
<karolherbst>
I suspect ruby needs extra power to do this kinda stuff
<bentiss>
you need access to the cluster, yeah
<karolherbst>
anyway.. 2688 and 2675 would need to be closed :)
<bentiss>
yeah, saw them
<karolherbst>
though it might be simple to scan all open ones for deleted users and be smart about it...
<bentiss>
but there are actually a little bit more, I suspect 2675 is actually 5 in 1
<bentiss>
smart is overrated...
<karolherbst>
:D
<karolherbst>
yeah.. no idea how that shows up in ruby, but I think closing that one would close it entirely
<karolherbst>
I _think_ if it says "by 5 users" that there were multiple reports
<bentiss>
I can try right now
<bentiss>
yeah, they are showing as individual un-closed reports
<karolherbst>
but the UI is kinda broken there
<bentiss>
I guess the reports were from a different version of gitlab and it got stuck
<karolherbst>
maybe...
<karolherbst>
though I kinda wish the UI would allow you to close the report anyway
<bentiss>
karolherbst: it's as simple as `AbuseReport.find(2726).close!`
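Building on that one-liner, the "scan all open ones for deleted users" idea mentioned earlier might look roughly like this; the `status` scope and `user` association names are assumptions, not verified GitLab internals:

```ruby
# Hypothetical extension of the close! one-liner above: close every
# still-open abuse report whose reported user no longer exists.
AbuseReport.where(status: :open).find_each do |report|
  report.close! if report.user.nil?  # user deleted => report is stale
end
```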
<karolherbst>
I guess it's smart about not showing the UI element to close it...
gert31 has quit [Quit: Leaving]
<karolherbst>
yeah.. it's all gone now
<bentiss>
there you go, no extra deleted-user abuse reports :)
<karolherbst>
nice
<bentiss>
anyway, lunch time now, bbl
<karolherbst>
have fun
<karolherbst>
ohhh
<karolherbst>
the new gitlab is much faster
<karolherbst>
somehow
<pinchartl>
bentiss: thanks for the explanation
* pinchartl
is still puzzled as to why 'pip3 install --prefix=/usr .' installs into /usr/local/ in the container, while it honours the prefix when testing locally
<eric_engestrom>
every time I assigned it to Marge, Marge broke
<eric_engestrom>
oh, I should've checked sooner: the error marge is giving ("these changes already exist in branch main") is actually correct, so this MR should be closed
<eric_engestrom>
but Marge should not start misbehaving like this when two people try to merge the same change...
<bentiss>
eric_engestrom: do you want the powers to restart it yourself?
<bentiss>
it's kind of blind right now, but at least you are not depending on me ;)
<bentiss>
sure, what could be wrong?
<eric_engestrom>
indeed!
vkareh has joined #freedesktop
<eric_engestrom>
haha, button clicked
<eric_engestrom>
does this clear the cache then?
<bentiss>
oh, no, marge got an error :)
<eric_engestrom>
or whatever you did last time?
<bentiss>
and it's back
<bentiss>
yeah, basically this kills the container it's running in, asks to respin it, and then given that the repos are cloned in /tmp, well, fresh start
<eric_engestrom>
ack
<eric_engestrom>
perfect, thanks!
<bentiss>
eric_engestrom: also if you merge anything on the main branch of this repo (fdo/marge-bot) it will get automatically deployed
<eric_engestrom>
yep, I figured :)
<bentiss>
just saying in case you want to push stuff there
<eric_engestrom>
speaking of, do we want to keep our fork in sync with upstream?
<bentiss>
not my problem tbh :)
<eric_engestrom>
haha
<bentiss>
check with DavidHeidelberg, daniels, and others
<eric_engestrom>
ack
<eric_engestrom>
I'll create an issue on the repo, so that everyone watching it gets notified
<bentiss>
sounds like a good plan
<bentiss>
karolherbst: FWIW, I got the logs/tests so we can start adding trusted users IMO
<bentiss>
whot: ^^
agd5f has quit [Read error: Connection reset by peer]
agd5f has joined #freedesktop
utsweetyfish has joined #freedesktop
ximion has joined #freedesktop
<pinchartl>
is there a known trick to be able to use in FDO_DISTRIBUTION_EXEC a script that is in another git tree than $CI_REPOSITORY_URL ?
<pinchartl>
I do so in before_script: for other jobs by cloning the other tree manually
<bentiss>
you can always curl it then bash
<pinchartl>
but for the .fdo.container-build, that doesn't work well
<pinchartl>
I can try that yes
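The curl-then-run suggestion can go straight into the template variable; a sketch only, with a placeholder URL, and the `FDO_DISTRIBUTION_EXEC` specifics should be checked against the fdo ci-templates docs:

```yaml
variables:
  # Runs during the .fdo.container-build job, where the other git tree
  # is not checked out, so fetch the script over HTTPS instead.
  FDO_DISTRIBUTION_EXEC: 'bash -c "curl -fsSL https://example.invalid/setup.sh | bash"'
```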
* pinchartl
wonders if the runners are particularly busy for some reason, or if it's within the normal standard deviation
<pinchartl>
btw do we have stats about the runners workload over time ? I'm curious if there are particular time windows I should target to avoid disturbing work loads
<MrCooper>
when North Americans are asleep tends to be good :)
<pinchartl>
so I should switch from my personal time zone to my geographical local time zone and it should be fine
* pinchartl
makes a note to start the day before 14:00am
privacy has joined #freedesktop
<MrCooper>
hehe
ximion has quit [Quit: Detached from the Matrix]
flom84 has joined #freedesktop
tzimmermann has quit [Quit: Leaving]
bmodem has joined #freedesktop
* bentiss
is mass trusting non-external people whose accounts were created more than 14 days ago
<bentiss>
that's 32477 users who will not see recaptcha anymore
flom84 has quit [Quit: Leaving]
<MrCooper>
wow, that's a lot
damian has quit []
<karolherbst>
how many of those are spam users ....
<karolherbst>
ohh wait
<karolherbst>
that's the "not external" bit I guess
<karolherbst>
is still confused on what external really means here
thaller is now known as Guest8664
thaller has joined #freedesktop
<bentiss>
yeah, given that we enabled that a year ago (or so), we can assume that non-external people are not spammers (or they forgot they have an account)
<bentiss>
and even if they are, we can catch them quickly enough with the Spam label (and the :do_not_litter: emoji which is ready but not deployed entirely)
Guest8664 has quit [Read error: No route to host]
<bentiss>
karolherbst: only a little bit more than 10 pages of spamlog :)
thaller is now known as Guest8665
thaller has joined #freedesktop
<bentiss>
I've kept the last 14 days in case we want to do more correlations with the gitlab logs
<bentiss>
but that's still better than 732 pages IIRC