daniels changed the topic of #freedesktop to: GitLab is currently down for upgrade; will be a while before it's back || https://www.freedesktop.org infrastructure and online services || for questions about freedesktop.org projects, please see each project's contact || for discussions about specifications, please use https://gitlab.freedesktop.org/xdg or xdg@lists.freedesktop.org
nnm has quit []
nnm has joined #freedesktop
killpid_ has joined #freedesktop
FireBurn has quit [Quit: Konversation terminated!]
co1umbarius has joined #freedesktop
columbarius has quit [Ping timeout: 480 seconds]
DragoonAethis has quit [Quit: hej-hej!]
DragoonAethis has joined #freedesktop
PuercoPop has joined #freedesktop
killpid_ has quit [Quit: Quit.]
lsd|2 has joined #freedesktop
karolherbst_ has joined #freedesktop
karolherbst has quit [Ping timeout: 480 seconds]
PuercoPop has quit [Ping timeout: 480 seconds]
AbleBacon has quit [Read error: Connection reset by peer]
epony has quit [Remote host closed the connection]
epony has joined #freedesktop
dcunit3d_ has quit [Ping timeout: 480 seconds]
dcunit3d has joined #freedesktop
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #freedesktop
ximion has quit [Quit: Detached from the Matrix]
bmodem has joined #freedesktop
tzimmermann has joined #freedesktop
i-garrison has quit []
i-garrison has joined #freedesktop
sima has joined #freedesktop
bmodem has quit [Quit: bmodem]
bmodem has joined #freedesktop
<alatiera>
hmm, I've noticed that the build job keeps rebuilding my image with the following in the log:
<alatiera>
time="2023-08-31T06:17:05Z" level=fatal msg="fetching blob: blob unknown to registry"
<alatiera>
any ideas?
<alatiera>
might be a one-off post-migration hiccup, hopefully
<alatiera>
the tag does show up in the fork's registry, it's weird
<alatiera>
bentiss for gst images, anything that's not gstreamer/gstreamer, gstreamer/cerbero and maybe gstreamer/meson-ports/*, you can mass delete from the registry and user forks
<bentiss>
alatiera: I am just re-fetching your fedora image...
<alatiera>
bentiss oh it's fine if it was just swept up by some hiccup, I'm more worried about the template having a bug
<alatiera>
like we had with the mesa rebuilds on windows recently
<bentiss>
alatiera: no, it was a misconfiguration on my part where I was pointing the registry at the wrong data backend (still on google cloud)
<bentiss>
so we had 24h of pushes to GCP that are "lost" and that need manual fetching
<alatiera>
ah okay
<bentiss>
alatiera: anyway, your image should be fixed now (hopefully)
<alatiera>
bentiss awesome, thanks!
<alatiera>
for expires-after btw, I was thinking we could probably add it to the template as is
<alatiera>
and default to 'if the upstream_repo image exists, move on; else rebuild the one in the fork registry with expires-after by default'
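A minimal sketch of that default, assuming skopeo and buildah are available in the job image; `UPSTREAM_REPO`, `IMAGE_TAG`, and the expiry label name are illustrative, not the actual template variables:

```yaml
# Hypothetical sketch: reuse the upstream image if it exists, otherwise
# rebuild it in the fork's registry with an expiry label.
container:
  stage: container
  variables:
    UPSTREAM_IMAGE: "$CI_REGISTRY/$UPSTREAM_REPO/build:$IMAGE_TAG"
    FORK_IMAGE: "$CI_REGISTRY_IMAGE/build:$IMAGE_TAG"
  script:
    - |
      # Nothing to do if the tag is already in the upstream registry.
      if skopeo inspect "docker://$UPSTREAM_IMAGE" >/dev/null 2>&1; then
        echo "upstream image exists, moving on"
        exit 0
      fi
      # Otherwise build into the fork registry; the label name is a guess
      # (ci-templates exposes this knob as FDO_EXPIRES_AFTER).
      buildah bud --label "fdo.expires-after=4w" -t "$FORK_IMAGE" .
      buildah push "$FORK_IMAGE"
```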
<alatiera>
hah, so now it does find the image indeed but tries to copy it to the upstream registry 🤦
<mupuf>
hakzsam: don't, Marge will fail anyway if it takes longer than 60 minutes
<alatiera>
mupuf no, the timeout is configurable per instance
<mupuf>
alatiera: per Marge instance?
<alatiera>
yes
<mupuf>
ok, but do we really want that?
<alatiera>
she has a different timeout set, and it also takes the whole pipeline duration into account
<mupuf>
90 minutes is a looooong time
<hakzsam>
well, if it requires more than 60 min, it should be bumped?
<alatiera>
also marge doesn't know about the queued vs. executing distinction
<hakzsam>
otherwise, how do I create that container?
<alatiera>
she just sees the total number
lsd|2 has quit []
<MrCooper>
hakzsam: you reassign to Marge once the container is built
<MrCooper>
building containers is an exceptional case, tuning Marge's timeout for that means potentially wasting a lot of time when something goes wrong in a pipeline
<mupuf>
+1
<mupuf>
but then... the question is: why do we still create rootfses?
<mupuf>
can't we just extract a container, add the kernel/initrd and be done?
<mupuf>
why do we duplicate all of this work?
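A rough sketch of that idea, assuming podman is available in the job; the image name, file paths, and job name are purely illustrative, and this is not how the existing rootfs jobs work:

```yaml
# Hypothetical sketch: derive a bootable rootfs from an existing CI
# container image instead of building one from scratch.
rootfs-from-container:
  stage: container
  script:
    - |
      # Create (without running) a container from the already-built image
      # and export its filesystem as a tarball.
      ctr=$(podman create "$CI_REGISTRY_IMAGE/debian/x86_64_test:$TAG")
      podman export "$ctr" -o rootfs.tar
      podman rm "$ctr"
      # Put the kernel and initramfs next to it for the boot/LAVA job.
      cp /lava-files/bzImage /lava-files/initramfs.cpio.gz .
  artifacts:
    paths:
      - rootfs.tar
      - bzImage
      - initramfs.cpio.gz
```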
<cwabbott>
seems like something's very wrong with CI atm
<bentiss>
also I don't have the faintest idea how ./artifacts/lava/lava-submit.sh is generated. I can only see ./.gitlab-ci/lava/lava-submit.sh and no other reference. I wonder if it only works because we are caching the volumes and it was in the repo at some point
<DavidHeidelberg[m]>
bentiss: it's not generated, it's passed from artifacts
<bentiss>
DavidHeidelberg[m]: yeah, but there is no job that creates it or places it in the artifacts
<DavidHeidelberg[m]>
The rootfs job should check if the container exists, which in this case does nothing (since the container is already in place)
<bentiss>
DavidHeidelberg[m]: but we get a 404 after, so it's not there, no?
<DavidHeidelberg[m]>
The artifacts are prepared by `debian-testing` or any other `debian-.*` jobs
<DavidHeidelberg[m]>
Btw. Afk food, I'll be back in 40 minutes, then 1 meeting and then I'll look into it :)
<bentiss>
DavidHeidelberg[m]: k, no worries and enjoy!
<cwabbott>
bentiss: fwiw, seems like prepare-artifacts.sh does "cp -Rp .gitlab-ci/lava artifacts/"
<cwabbott>
afaict this got changed recently and it's supposed to be produced by alpine/x86_64_lava_ssh_client
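That would match the plain GitLab artifact flow: the consuming job only sees the file if the producing job actually ran its script and saved it. Roughly, with the job names taken from the discussion and the rest illustrative:

```yaml
# Sketch of the artifact hand-off being discussed.
debian-testing:
  stage: build
  script:
    - .gitlab-ci/prepare-artifacts.sh   # copies .gitlab-ci/lava into artifacts/
  artifacts:
    paths:
      - artifacts/

lava-test:
  stage: test
  # "needs" pulls the artifacts from debian-testing; if that job silently
  # skipped its script, artifacts/lava/lava-submit.sh simply isn't there.
  needs:
    - debian-testing
  script:
    - ./artifacts/lava/lava-submit.sh
```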
<bentiss>
found it (I think) -> https://gitlab.freedesktop.org/mesa/mesa/-/jobs/48288325 the step script is doing nothing because we failed to pull the gating script, and then given that the gitlab CI steps were not even run, it is considered a passing job, and there are no artifacts
<bentiss>
testing my theory by restarting this job
<bentiss>
regarding the "too many connections" on the db, it seems we are using maybe too many webservice workers, as they account for roughly 50% of the max available connections. The rest is used by sidekiq's jobs
<bentiss>
I'm reducing the number of webservice pods from 16 to 10, we'll see if that changes anything
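For reference, with the GitLab Helm chart that is normally just a replica-count change in the values file; a sketch, with the exact keys to be double-checked against the chart documentation:

```yaml
# Sketch: cap the webservice deployment at 10 replicas instead of 16.
gitlab:
  webservice:
    minReplicas: 10
    maxReplicas: 10
```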
Guest1341 is now known as koike
koike is now known as koike-lounge
koike has joined #freedesktop
sravn has left #freedesktop [WeeChat 3.5]
<DavidHeidelberg[m]>
cwabbott: good catch, the new dependency removes the dependency on the debian-.* artifacts
<DavidHeidelberg[m]>
so it works because the artifacts are fast, but not all the time
<cwabbott>
woah, I actually said something useful!
<cwabbott>
when I look at CI stuff I'm mostly flailing around
<DavidHeidelberg[m]>
... maybe
<DavidHeidelberg[m]>
I'm just looking into it more deeply, maybe it's ok.. but I see a large area where the issue could be
<DavidHeidelberg[m]>
From what I recall, this previously happened to me when I invoked some hack triggering a pipeline with the ci_run_n_monitor script and then re-enabled jobs, where gitlab loses dependencies and triggers the job even when it's missing artifacts from previous stages
<DavidHeidelberg[m]>
if it happens now in a regular pipeline, everything should be in place for this job, so it could be some gitlab bug, or it wrongly parses the needs/dependencies keywords in this case
<bentiss>
DavidHeidelberg[m]: I think in that particular case needs/dependencies wasn't the problem
<bentiss>
the problem was that the job that was supposed to run and produce the artifacts did not even execute, and was still marked as passed
<bentiss>
because we had a timeout error while fetching the gating script
bmodem has joined #freedesktop
nuclearcat2 has joined #freedesktop
An0num0us has quit [Ping timeout: 480 seconds]
MrCooper has quit [Remote host closed the connection]
MrCooper has joined #freedesktop
Haaninjo has joined #freedesktop
Ahuj has quit [Ping timeout: 480 seconds]
rpavlik has joined #freedesktop
AbleBacon has joined #freedesktop
bmodem has quit [Ping timeout: 480 seconds]
<DavidHeidelberg[m]>
bentiss: that would make sense. Is it possible to catch the failure at that point and fail the job?
tzimmermann has quit [Quit: Leaving]
killpid_ has quit [Ping timeout: 480 seconds]
bmodem has joined #freedesktop
An0num0us has joined #freedesktop
<bentiss>
DavidHeidelberg[m]: that's the weird part. This is supposed to fail if the script fails, like when you don't have enough privileges. But this time it just went through
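One way to surface that failure, sketched with an illustrative URL and assuming the gating script is fetched with curl inside the job script:

```yaml
# Hypothetical sketch: abort the job loudly if the gating script cannot
# be fetched, instead of silently running nothing and passing.
.run-gating-script:
  script:
    - |
      set -eu
      # --fail makes curl exit non-zero on HTTP errors; combined with
      # "set -e" that fails the job right here.
      curl --fail --retry 3 --silent --show-error \
        -o gating.sh "https://example.invalid/gating.sh"
      chmod +x gating.sh
      ./gating.sh
```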
i509vcb has quit [Quit: Connection closed for inactivity]
bmodem has quit [Ping timeout: 480 seconds]
bmodem has joined #freedesktop
bmodem has quit [Excess Flood]
bmodem has joined #freedesktop
tnt has left #freedesktop [#freedesktop]
i509vcb has joined #freedesktop
ximion has joined #freedesktop
mvlad has quit [Remote host closed the connection]
alanc has quit [Remote host closed the connection]
alanc has joined #freedesktop
Kayden has quit [Quit: -> lunch]
bmodem has quit [Ping timeout: 480 seconds]
flom84 has joined #freedesktop
jani has quit []
jani has joined #freedesktop
ximion has quit [Quit: Detached from the Matrix]
jani has quit []
jani has joined #freedesktop
jani has quit []
jani has joined #freedesktop
jani has quit []
mattst88 has joined #freedesktop
<mattst88>
could someone point me at a hopefully-simple .gitlab-ci.yml I could copy from to enable arm/aarch64 CI builds for pixman?
<mattst88>
(people keep submitting arm and aarch64 fixes, but in the process break the other one, and I'm getting tired of it)
Haaninjo has quit [Quit: Ex-Chat]
jani has joined #freedesktop
jani has quit []
systwi_ has joined #freedesktop
systwi_ has quit [Remote host closed the connection]
<anholt_>
oh, pixman doesn't have much CI, does it.
<anholt_>
ci-templates would be useful if you want to cache all that dnf and pip setup, so it doesn't take so long to set up the build
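A minimal sketch of what that could look like with ci-templates; the include ref, tag, and package list are placeholders, and the exact template and variable names should be checked against the ci-templates documentation:

```yaml
# Sketch: build the Fedora image once, cached in the registry by tag, so
# jobs don't redo the dnf/pip setup on every run.
include:
  - project: 'freedesktop/ci-templates'
    ref: 'master'                 # normally pinned to a specific commit
    file: '/templates/fedora.yml'

variables:
  FDO_UPSTREAM_REPO: 'pixman/pixman'
  FDO_DISTRIBUTION_VERSION: '38'
  FDO_DISTRIBUTION_TAG: '2023-08-31.0'   # bump to force a rebuild

fedora-image:
  extends: '.fdo.container-build@fedora'
  stage: container
  variables:
    FDO_DISTRIBUTION_PACKAGES: 'gcc meson ninja-build python3-pip'
```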
vkareh has quit [Quit: WeeChat 3.6]
<anholt_>
the equivalent of what you have now would be to add something like "arm-build: image: fedora:28:arm64 tag: arm64" with the same "script" -- use an f28 arm docker image from Docker Hub, run it on fd.o's arm64 runners.
<mattst88>
presumably as the result of not having a tag declaration
<anholt_>
mattst88: amd64-build was on equinix-m3l, which is x86
<anholt_>
your arm64-build does need a tag
<anholt_>
amd64's fail looks like just intermittent fdo fail
<mattst88>
ah, okay
epony has quit [Remote host closed the connection]
epony has joined #freedesktop
ximion has joined #freedesktop
An0num0us has quit [Ping timeout: 480 seconds]
<DavidHeidelberg[m]>
mattst88: you must add `tags: - aarch64`
<DavidHeidelberg[m]>
and change the container tag, since otherwise it's the x86 one
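Putting anholt's and DavidHeidelberg's suggestions together, roughly; the job name, image, and build commands are illustrative:

```yaml
# Sketch of an aarch64 build job next to the existing x86_64 one.
build-aarch64:
  image: fedora:38      # multi-arch image; the arm64 variant is pulled on an arm64 runner
  tags:
    - aarch64           # run on fd.o's aarch64 runners instead of the default x86 ones
  script:
    - dnf install -y gcc meson ninja-build python3-pip
    - meson setup build
    - ninja -C build
    - meson test -C build
```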
<mattst88>
thanks, I'll give that a try
<DavidHeidelberg[m]>
mattst88: btw. for the last failure, you need to install something like `python3-pip` (on Debian it's named that way, on fedora it's probably slightly different)
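i.e. something along these lines in the job or image setup, assuming a Fedora-based image (where the package does in fact also carry the name python3-pip):

```yaml
# Sketch: install pip before the build; adjust the package manager and
# package name for non-Fedora images.
before_script:
  - dnf install -y python3-pip
```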