foureyes has quit [Remote host closed the connection]
pzanoni has quit [Remote host closed the connection]
pzanoni has joined #freedesktop
meowo has joined #freedesktop
meowo has quit [autokilled: Suspected spammer. Mail support@oftc.net with questions (2021-06-04 02:28:32)]
flagrama22 has joined #freedesktop
flagrama22 has quit [Remote host closed the connection]
pzanoni has quit [Remote host closed the connection]
pzanoni` has joined #freedesktop
Diag has joined #freedesktop
Diag has quit [Remote host closed the connection]
ximion1 has quit []
Payhn has joined #freedesktop
Payhn has quit [Remote host closed the connection]
blue__penquin has joined #freedesktop
wilywizard has joined #freedesktop
wilywizard has quit [Remote host closed the connection]
chomwitt has joined #freedesktop
scummos has joined #freedesktop
scummos has quit [autokilled: Suspected spammer. Mail support@oftc.net with questions (2021-06-04 05:45:48)]
xyproto has joined #freedesktop
xyproto has quit [Remote host closed the connection]
<psychon>
I just did a "git pull" expecting new commits, but "nothing happened"
<psychon>
did git://anongit.freedesktop.org/git/cairo break? I get newer commits from git@gitlab.freedesktop.org:cairo/cairo.git (but apparently I never pulled from that before since there were lots of new branches)
danvet has joined #freedesktop
<bentiss>
psychon: I am finishing migrating the repos from the old cluster to the new one. The git hooks should have been restored this morning, but we then need new commits on those repos for the sync to happen
<bentiss>
and the transfer just finished!
<psychon>
okay, thanks a lot for keeping things running!
<psychon>
sorry that I only ever show up and complain, and complain about stuff you already know
<bentiss>
psychon: no worries, and I prefer having users "complaining" about broken stuff because I am not shielded from mistakes
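For anyone hitting the same stale-mirror symptom, a minimal sketch of repointing a clone from the anongit mirror to the GitLab repo mentioned above (the remote name origin and the https URL form are assumptions):

    # check where the clone currently pulls from
    git remote -v
    # repoint it at the GitLab repo the newer commits came from
    git remote set-url origin https://gitlab.freedesktop.org/cairo/cairo.git
    # fetch everything, including branches the mirror never carried
    git fetch origin --prune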
bdiddy has joined #freedesktop
bdiddy has quit [Remote host closed the connection]
blue__penquin has quit []
blue__penquin has joined #freedesktop
MrCooper_ has joined #freedesktop
LBlaboon has joined #freedesktop
yk has quit [Remote host closed the connection]
LBlaboon has quit [Remote host closed the connection]
MrCooper has quit [Ping timeout: 480 seconds]
cosmouser1 has joined #freedesktop
cosmouser1 has quit [Remote host closed the connection]
adjtm has quit [Ping timeout: 481 seconds]
adjtm has joined #freedesktop
MrCooper_ is now known as MrCooper
yk has joined #freedesktop
adjtm has quit [Ping timeout: 480 seconds]
adjtm has joined #freedesktop
fzakaria26 has joined #freedesktop
fzakaria26 has quit [autokilled: Suspected spammer. Mail support@oftc.net with questions (2021-06-04 09:05:22)]
<daniels>
bentiss: FWIW I'm going to be away until Sun night
<bentiss>
daniels: oh, ok
<bentiss>
I have a couple of questions before you leave, if you don't mind though :)
<bentiss>
daniels: do you mind if I delete the images we made on google cloud before you leave? they cost $120 a month for nothing (I think we can consider them obsolete now)
<bentiss>
daniels: and I have now finished migrating the data from no-replicas, but there are still some snippets and 2 "design" repos around that gitlab says have either already been migrated (the snippets) or don't exist (the design repos)
<daniels>
bentiss: heh, sure
<bentiss>
I was thinking of making a copy of no-replicas (129MB) and storing it somewhere before killing the pod
<daniels>
yep, no issue with deleting the old disk snapshots, they were only a safety net for when I was moving storage
* daniels
nods
<daniels>
that seems sensible
<bentiss>
OK thanks and enjoy your week end
<daniels>
if the repos break we can manually restore them somewhere else
<daniels>
thanks, you too!
adjtm has quit [Ping timeout: 482 seconds]
chomwitt has quit [Ping timeout: 480 seconds]
chomwitt has joined #freedesktop
wwalker13 has joined #freedesktop
wwalker13 has quit [Remote host closed the connection]
chomwitt has quit [Ping timeout: 480 seconds]
adjtm has joined #freedesktop
anthepro has joined #freedesktop
anthepro has quit [autokilled: Suspected spammer. Mail support@oftc.net with questions (2021-06-04 11:04:13)]
shbrngdo has quit [Remote host closed the connection]
shbrngdo has joined #freedesktop
adjtm has quit [Quit: Leaving]
ximion has joined #freedesktop
<emersion>
how much memory and disk space do we have inside GitLab runners?
<alatiera>
depends on the runner
<alatiera>
gst-* ones are very beefy
<__tim>
why do you ask? :)
<emersion>
i'm running a VM inside a runner, and I'm wondering what good defaults would be
<emersion>
and it seems like 5GiB of disk space isn't enough for graphviz
<psychon>
I only ever manage to hit time limits, but I also don't run VMs in CI...
<__tim>
psychon, those time limits are set in the project settings though, no?
<bentiss>
emersion: disk space is tight because there is a lot of caching done by gitlab runner and we need to cache the docker images too
<alatiera>
there are multiple time limits, but usually you are hitting project ones
<bentiss>
emersion: we *should* have roughly 66% of the disk free on the runners (that's ~450GB, because raid5)
blue__penquin has quit []
<bentiss>
but I remember whot complaining that sometimes the disks were full, so it depends on the load of the others
<emersion>
right. but here we'd also be growing the VM disk size quite a bit
<bentiss>
I mean other people running jobs
<emersion>
since our containers are cached
<bentiss>
what do you mean?
<bentiss>
(BTW, ci-templates handles qemu loads and does all the caching for you)
<emersion>
yeah, but ci-templates is hella complicated
<emersion>
i don't grok it
<emersion>
i mean, if i have a big VM disk inside my cached container, it's probably an issue?
<bentiss>
sorry, I really don't have the time to look into your problem :( Otherwise I would have proposed submitting an MR
<emersion>
yeah np at all
<emersion>
i originally wanted to submit an MR for ci-templates
<emersion>
but it just ended up blocking everything
<bentiss>
emersion: in theory no, if you use ci-templates and/or set the correct labels on the container itself
<emersion>
ok, so big containers aren't too much of an issue
<emersion>
good to know, thanks
<bentiss>
well, they can be, but we already have some, so...
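One way to keep a VM disk from bloating a cached container is a sparse qcow2 image, where the size given is an upper bound rather than an upfront allocation; a minimal sketch (the 20G cap and the file name are illustrative):

    # create a sparse disk: 20G is a ceiling, not allocated space
    qemu-img create -f qcow2 vm-disk.qcow2 20G
    # 'disk size' in the output shows the space actually used on the host
    qemu-img info vm-disk.qcow2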
blue__penquin has joined #freedesktop
blue__penquin has quit []
blue__penquin has joined #freedesktop
<psychon>
__tim: yes, they are, but if I open an MR, CI runs in the context of my fork
<psychon>
the "main project's" time limit are only used after a MR is accepted, it seems, not for the actual "test this MR"
<__tim>
ah, right
<psychon>
I got my last MR to pass this way (and afterwards I re-set the time limit back to 1h)
<psychon>
sadly, cairo-svg is awfully close to needing one hour even when run in its own job :(
<MrCooper>
psychon: FWIW this depends on the MR author's access level in the target project; if it's developer or higher, pre-merge MR pipelines already run in the target project context
<MrCooper>
(or maybe it even depends on the access to the specific target branch?)
<psychon>
hm, okay
<psychon>
sadly CI finished in 57 minutes after I bumped the limit of my own cairo fork to 2h...
<MrCooper>
the URL of the pipeline/job pages shows which context is used
<MrCooper>
.../psychon/<project>/... vs .../<target namespace>/<project>/...
<gitlab-bot>
cairo issue (Merge request) 187 in cairo "Move test failure lists into separate files with one test name per line" [Merged]
<psychon>
and I bet I have quite a lot of access in the cairo namespace
<__tim>
psychon, didn't ebassi have some patches for the test suite as well that might speed up some things (because they would allow making use of all the cores)? (might be dependent on dropping autotools though, don't remember)
<psychon>
anyway, all of this basically means that "bumping the timeout" is not an option since it wouldn't be used for MRs from "non-members"
<gitlab-bot>
cairo issue (Merge request) 188 in cairo "CI: Split test execution into per-backend jobs" [Opened]
<MrCooper>
what I described can only work for pipelines which are created when the MR already exists
<psychon>
instead of having one "build & test everything" job, that MR creates one "build everything" and seven "run tests" jobs
<MrCooper>
it might even only apply when there are separate MR vs branch pipelines, as is the case in Mesa
<psychon>
...and four "test jobs" are done in less than 5 minutes, two take around 20 minutes and cairo-svg takes a whooping 52 minutes
<psychon>
MrCooper: could you briefly comment on what mesa is doing in MR pipelines and branch pipelines? is one of them "less heavy stuff"?
<MrCooper>
having separate pipelines isn't a goal, it's a consequence of guarding jobs by changes:
<MrCooper>
because those apply differently to branches vs MRs
<MrCooper>
well, that and only running jobs automatically in pre-merge pipelines for Marge Bot
<MrCooper>
not sure that helps, sorry :)
<psychon>
sounds more complicated than what cairo has so far :)
<MrCooper>
yeah, I suspect it's one of the most complex CI schemes so far
<MrCooper>
(in our GitLab)
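A hedged sketch of the two mechanisms MrCooper mentions, guarding a job with changes: and only auto-running it for Marge Bot (the job name, script, and paths are invented; the real Mesa config is more involved):

    build-vulkan:
      script: ./build.sh vulkan        # hypothetical
      rules:
        # auto-run in Marge Bot's pre-merge pipelines when relevant files changed
        - if: '$GITLAB_USER_LOGIN == "marge-bot"'
          changes:
            - src/vulkan/**/*
          when: on_success
        # everyone else starts the job by hand; allow_failure keeps a
        # never-started manual job from blocking the pipeline
        - changes:
            - src/vulkan/**/*
          when: manual
          allow_failure: true
        # no matching rule -> the job is not added to the pipeline at all

Because changes: is evaluated against different baselines in branch pipelines and MR pipelines, the same rules produce different job sets, which is where the separate pipelines come from.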
<bentiss>
mupuf (and tanty maybe?): I am going to take indico down for a few minutes (15 min tops) to move it to the new cluster
<mupuf>
bentiss: sweet!
<mupuf>
good luck!
<bentiss>
mupuf: that also means you'll lose credentials, but we can figure this out later
<bentiss>
credentials to k3s
<mupuf>
yes, I'll just need them back before July
<bentiss>
k, that should be doable
<MrCooper>
psychon: the complexity is necessary to keep Mesa from clogging up the CI resources all the time (it actually did, when the pipeline and MR throughput were much smaller still :)
<bentiss>
mupuf: also, I am using velero (https://velero.io/) to do the transfer, which could come in very handy as a backup solution for you: I can just add a scheduled backup
<mupuf>
that could indeed be a great tool for this! I wonder how this works with the DB, but I guess we could dump it as sql and backup that
<bentiss>
mupuf: "it just works"
<bentiss>
I tried it twice already, and the db works just fine
<mupuf>
bentiss: I wonder if psql writes to disk can be considered atomic though
<bentiss>
mupuf: plan is: disconnect ingress/IP -> wait 2 min, backup, restore in new cluster, bind new cluster to old IP, done
<bentiss>
so there will not be any writers during the backup, and we should be good
<mupuf>
ah, sure!
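The velero workflow being described, as a hedged sketch (the backup/schedule names and the namespace are assumptions; the flags are standard velero CLI):

    # one-off backup of the namespace before the move
    velero backup create indico-migration --include-namespaces indico
    # on the new cluster, restore from the shared object store
    velero restore create --from-backup indico-migration
    # the scheduled backup offered above: cron syntax, here daily at 03:00
    velero schedule create indico-daily --schedule "0 3 * * *" --include-namespaces indico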
<bentiss>
OK, disconnecting indico now
chomwitt has joined #freedesktop
<bentiss>
backup in progress...
<bentiss>
completed...
hikiko has quit [Remote host closed the connection]
<bentiss>
restoring
<bentiss>
mupuf: and done
<bentiss>
mupuf: could you check if there is anything wrong, please?
<mupuf>
sure, on it!
<bentiss>
thanks!
<mupuf>
so far, so good!
<mupuf>
let's try to make edits too
<bentiss>
\o/
hikiko has joined #freedesktop
<mupuf>
worked too
<mupuf>
I'll consider it good-enough
<bentiss>
<3 <3
<bentiss>
that was easy :)
<mupuf>
well, great to hear!
* bentiss
will not say that he has been trying to make velero work since this morning
ximion has quit []
<mupuf>
and I'm happy to hear about velero!
<mupuf>
ha ha ha ha
<bentiss>
mupuf: actually, there is still a change of storage class to do in the config: the new cluster has both ssd and hdd, and I forced the restore to happen on ssd, but we need to reflect that in the config
<mupuf>
the config of... the cluster?
<bentiss>
the helmfile deployment
<bentiss>
the deployment says one particular class, the config says another
<bentiss>
I bet we want them to be the same or you might end up in a bad state at some point
karolherbst has quit [Quit: Konversation terminated!]
karolherbst has joined #freedesktop
karolherbst has quit []
karolherbst has joined #freedesktop
<mupuf>
bentiss: I see! Thanks for handling it... I feel like I have been dropping the ball here :s
<mupuf>
I'll get back to it, I promise!
<mupuf>
too many things happening in our renovation, and I can't set the pace for others
<bentiss>
mupuf: don't tell me
<bentiss>
well I got a pretty good incentive: if I had done nothing we would have lost quite some data on June 1st, so...
<mupuf>
yeah, that's a pretty good one
<mupuf>
ok. time to go out of this furnace of an office and go chill downstairs
<mupuf>
have a good weekend guys!
<bentiss>
thanks see you!
ninja[m]2 has joined #freedesktop
ninja[m]2 has quit [Remote host closed the connection]
raoel has joined #freedesktop
raoel has quit [Remote host closed the connection]
shawn-ogg has joined #freedesktop
shawn-ogg has quit [Remote host closed the connection]
patwid4 has joined #freedesktop
patwid4 has quit [Remote host closed the connection]
blue_penquin is now known as Guest865
vmesons has quit [Read error: Connection reset by peer]
vmesons has joined #freedesktop
jarthur has joined #freedesktop
Sumera[m] has quit []
Sumera[m] has joined #freedesktop
pzanoni` is now known as pzanoni
chomwitt has quit [Ping timeout: 480 seconds]
blue__penquin has quit []
cmk_zzz has joined #freedesktop
cmk_zzz has quit [autokilled: Suspected spammer. Mail support@oftc.net with questions (2021-06-04 16:31:15)]
ngcortes has joined #freedesktop
VelcroPad has joined #freedesktop
alanc has quit [Remote host closed the connection]
alanc has joined #freedesktop
Zenton has joined #freedesktop
Zenton has quit [Remote host closed the connection]
chomwitt has joined #freedesktop
ximion has joined #freedesktop
karolherbst_ has joined #freedesktop
karolherbst is now known as Guest881
karolherbst_ is now known as karolherbst
Guest881 has quit [Ping timeout: 480 seconds]
ngcortes has quit [Remote host closed the connection]
jpsamaroo has joined #freedesktop
pastly-antispam has quit [Quit: time for a tune up]
<jpsamaroo>
is this a good place to ask dbus-c questions?
<jpsamaroo>
assuming it is, is there a reason why dbus_message_iter_recurse aborts with 'You can't recurse into an empty array or off the end of a message body' even though the arg is an array and get_element_count > 0?
pastly-antispam has joined #freedesktop
NickG365 has joined #freedesktop
NickG365 has quit [Remote host closed the connection]
ngcortes has joined #freedesktop
vmesons has quit [Remote host closed the connection]
ChmEarl8 has joined #freedesktop
ChmEarl8 has quit [Remote host closed the connection]
<jpsamaroo>
nevermind, i was holding my C compiler wrong
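For reference, the usual safe pattern in C around the call that was aborting, sketched under the assumption that the mistake was recursing before the iterator was positioned on the array (a common one); this is illustrative, not jpsamaroo's actual code:

    #include <dbus/dbus.h>

    static void walk_array_arg(DBusMessage *msg)
    {
        DBusMessageIter iter, sub;

        if (!dbus_message_iter_init(msg, &iter))
            return; /* message carries no arguments at all */

        /* only recurse once the current argument is known to be an array */
        if (dbus_message_iter_get_arg_type(&iter) != DBUS_TYPE_ARRAY)
            return;

        dbus_message_iter_recurse(&iter, &sub);
        while (dbus_message_iter_get_arg_type(&sub) != DBUS_TYPE_INVALID) {
            /* read each element with dbus_message_iter_get_basic() ... */
            dbus_message_iter_next(&sub);
        }
    }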
enilflah has quit [Ping timeout: 480 seconds]
pastly-antispam has quit [Remote host closed the connection]
shbrngdo has quit [Remote host closed the connection]
shbrngdo has joined #freedesktop
gauge has joined #freedesktop
gauge has quit [Remote host closed the connection]
<bl4ckb0ne>
will the monado chan move to OFTC like the rest of the freenode chans?
<imirkin>
each group is free to do whatever they please
* bl4ckb0ne
re adds freenode to his bouncer
pastly-antispam has joined #freedesktop
jstein has joined #freedesktop
ngcortes has quit [Remote host closed the connection]
reillybrogan has joined #freedesktop
<reillybrogan>
So I just logged into the FD gitlab (using oauth to gitlab.com). I'm trying to subscribe to a few issues to keep track of their progress, but the button is disabled for me (it shows a crossed-out circle when I hover over it). Anyone know if there's anything I missed?
<reillybrogan>
My email is showing as "verified" and "default notification email" under the email preferences
<gitlab-bot>
GitLab.org issue (Merge request) 61953 in gitlab "Fix ability for non project member to subscribe to an issue" [Bug, Devops::Plan, Frontend, Group::Project Management, Priority::2, Section::Dev, Severity::2, Workflow::Production, Merged]
<gitlab-bot>
GitLab.org issue 330033 in gitlab "Notifications switch cannot be enabled for projects where it previously could be" [Backend, Backend Complete, Bug, Devops::Plan, Group::Project Management, Priority::2, Section::Dev, Severity::2, Workflow::Production, Closed]
jpsamaroo has left #freedesktop [I <3 Microsoft Windows]
<reillybrogan>
pendingchaos, Thanks! That workaround worked perfectly
nedbat has joined #freedesktop
nedbat has quit [Read error: Connection reset by peer]
ngcortes has joined #freedesktop
danvet has quit [Ping timeout: 480 seconds]
saint__ has joined #freedesktop
saint__ has quit [Remote host closed the connection]