daniels changed the topic of #freedesktop to: https://www.freedesktop.org infrastructure and online services || for questions about freedesktop.org projects, please see each project's contact || for discussions about specifications, please use https://gitlab.freedesktop.org/xdg or xdg@lists.freedesktop.org
vkareh has quit [Quit: WeeChat 4.1.1]
thaller has quit [Read error: Connection reset by peer]
thaller has joined #freedesktop
alatiera has quit [Quit: Connection closed for inactivity]
<pinchartl>
bentiss: following your advice, I'm "nearly" there
<pinchartl>
I'm using .fdo.b2c-image@debian
<pinchartl>
I was expecting the host container, inside which the b2c VM runs, to be based on the container I've created with .fdo.container-build@debian
<pinchartl>
but it looks like only the container running
<pinchartl>
*inside* the VM is based on that
<pinchartl>
so my question is, is it possible to use .fdo.b2c-image and customize the packages present in the host container ?
<pinchartl>
or does .fdo.b2c-image assume pretty much everything will be run inside the VM ?
<pinchartl>
I was planning to run the build in the host container and the unit tests in the VM. this is what gstreamer does to run some of its tests, using virtme-ng
<pinchartl>
is that a bad idea ?
<whot>
pinchartl: iirc the host container is a "random" one since it only needs to run the b2c container image in qemu.
<whot>
pinchartl: this one to be precise: image: quay.io/freedesktop.org/ci-templates:qemu-base-2023-11-24.1
<whot>
pinchartl: the best approach for what you want to do is to use the image in a normal job to build, then pass the build artifacts to the b2c job. I think the hid-tools job bentiss linked to yesterday(-ish) does that
<whot>
pinchartl: that builds in the host (like you want) and starts b2c from that image which forwards $PWD so you have access to the artefacts and can just run the tests. can't remember why we did it this way though, I think it predates the fdo.b2c-image
nektro has quit [Remote host closed the connection]
nektro has joined #freedesktop
vyivel has quit [Read error: Connection reset by peer]
vyivel has joined #freedesktop
bmodem has joined #freedesktop
alatiera has joined #freedesktop
ximion has quit [Quit: Detached from the Matrix]
sima has joined #freedesktop
tzimmermann has joined #freedesktop
<bentiss>
pinchartl: it's like what whot said. You basically have 3 options: 1. build your sources in a previous stage, export them as artifacts, use plain .fdo.b2c-image to run the tests through qemu (like in hid-tools, though hid-tools has no "build" step), 2. use plain .fdo.b2c-image, start the VM, build everything in the VM and then test it, or 3. have a custom container capable of
<bentiss>
running qemu, build your project, curl vm2c.py, run the VM (like libinput)
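The three options above can be hard to picture without the job layout, so here is a rough `.gitlab-ci.yml` sketch of option 1. All job names, stage names, scripts, and the meson commands are illustrative assumptions, not taken from hid-tools; how the test command actually reaches the VM depends on the b2c template version, so treat the `script:` of the test job as a placeholder:

```yaml
# Option 1 sketch: build once on the host, export the build tree as
# artifacts, then run the tests in the b2c VM job.
stages:
  - build
  - test

build:
  stage: build
  extends: .fdo.distribution-image@debian
  script:
    - meson setup builddir
    - meson compile -C builddir
  artifacts:
    paths:
      - builddir/

test:
  stage: test
  extends: .fdo.b2c-image@debian
  needs:
    - build        # pulls builddir/ into the test job's working directory
  script:
    - meson test -C builddir   # placeholder; the b2c template decides how this runs in the VM
```

As bentiss notes below, the main payoff of this split is that the test job can be retried without rebuilding.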
<bentiss>
pinchartl, whot: for 2. this is what I tried initially for libinput, but this led to some weird compilation errors, so it was safer to build outside the VM
<bentiss>
(and it's also using less resources, because why would you need to use the VM to compile when you can just use the host)
<bentiss>
1. has the advantage of being able to retry the test job without having to rebuild, so it can come in handy in situations where tests are flaky
mvlad has joined #freedesktop
nnm has quit []
nnm has joined #freedesktop
pjakobsson has joined #freedesktop
<MrCooper>
whot: FYI, needs: has been usable with jobs in the same stage for a while now
<MrCooper>
since GitLab 14.2; Mesa has made use of this for two years
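For reference, same-stage `needs:` looks like this (job names and scripts are hypothetical):

```yaml
# Both jobs share one stage; 'test' still starts as soon as 'build'
# finishes, because the explicit needs: relationship (GitLab >= 14.2)
# overrides stage ordering.
stages:
  - ci

build:
  stage: ci
  script:
    - ./build.sh
  artifacts:
    paths:
      - build/

test:
  stage: ci
  needs: [build]
  script:
    - ./run-tests.sh
```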
<pinchartl>
whot: I had considered building and testing in two separate jobs, but the artifacts are ~500MB in size
<pinchartl>
bentiss: ^^
<pinchartl>
so option 1 isn't a good fit
<pinchartl>
it's mostly due to the fact that meson assumes unit tests are run from the build directory
<pinchartl>
so I have to package the whole build in artifacts if I want to do that
<pinchartl>
workarounds are likely possible to drop some files, but that sounds a bit fragile
<pinchartl>
I'll give option 3 a try
<bentiss>
pinchartl: k, so your best bet is to add qemu to your build&test image, and either store vm2c in the image once and for all, or just curl it every time like libinput does
<bentiss>
pinchartl: FWIW, for fedora, libinput adds "qemu-img qemu-system-x86-core qemu-system-aarch64-core" so that should be what you roughly want
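In ci-templates terms, adding those qemu packages to the build&test image would look roughly like this. `FDO_DISTRIBUTION_PACKAGES` and `FDO_DISTRIBUTION_VERSION` are the standard ci-templates variables; the job name, the Fedora version, and the build tools listed alongside the qemu packages are assumptions for illustration:

```yaml
# Fedora container build that includes qemu, so the job itself can
# boot the test VM (option 3 from the discussion above).
qemu-container:
  extends: .fdo.container-build@fedora
  variables:
    FDO_DISTRIBUTION_VERSION: '39'      # placeholder version
    FDO_DISTRIBUTION_PACKAGES: >-
      gcc meson ninja-build
      qemu-img qemu-system-x86-core qemu-system-aarch64-core
```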
<pinchartl>
thanks
<pinchartl>
I'm also looking at an example from gstreamer
<pinchartl>
interestingly, they compile the guest kernel for the VM in the container preparation step, and store it in the container
<pinchartl>
while I store it as a package
<pinchartl>
using a package was nice, as I could delete it and re-run the job from an existing pipeline
<pinchartl>
but that's mostly something useful during development of the CI, less so once the CI scripts are more stable
<bentiss>
using a package also means you can reuse it in a separate pipeline :)
<bentiss>
there is no "good" answer, only the one that matches your needs
<bentiss>
As long as you stay on the fdo runners, the network impact is ~0
<pinchartl>
I'll need my own runners at some point, but those will handle much smaller artifacts (~35MB)
AbleBacon has quit [Read error: Connection reset by peer]
<alatiera>
reprovisioned one of the windows runners that died; if there are issues, they're probably due to that
<alatiera>
also expect hiccups
ximion has joined #freedesktop
<bentiss>
PSA: I've redeployed marge using hookiedookie. It should still behave the same but when there is a push on main on https://gitlab.freedesktop.org/freedesktop/marge-bot, this will kill all current marges, and restart them with the fresh code
<pinchartl>
no copyright lawsuit from Fox yet ? :-)
<bentiss>
not AFAICT
* bentiss
crosses gingers
<bentiss>
fingers even :)
bmodem has quit [Ping timeout: 480 seconds]
AbleBacon has joined #freedesktop
lsd|2 has joined #freedesktop
blatant has quit [Quit: WeeChat 4.1.1]
tzimmermann has quit [Quit: Leaving]
DodoGTA has quit [Quit: DodoGTA]
DodoGTA has joined #freedesktop
gert31 has joined #freedesktop
bmodem has joined #freedesktop
tzimmermann has joined #freedesktop
i509vcb has joined #freedesktop
Haaninjo has joined #freedesktop
pkira has joined #freedesktop
<pinchartl>
continuing with newbie questions, is there an easy way to run manual commands in a container after a job has finished ? I'm debugging the CI scripts, and having to run a pipeline every time is quite slow (not to mention that it wastes resources)
tzimmermann has quit [Quit: Leaving]
thaller is now known as Guest8563
thaller has joined #freedesktop
pkira has quit []
Guest8563 has quit [Read error: No route to host]
* pinchartl
is puzzled
<pinchartl>
bentiss: I'm looking at the hid-tools CI
<pinchartl>
why are things always complicated ? :-)
* pinchartl
wonders what the best option is
<bentiss>
and regarding your other question: no, you cannot re-run manual commands after a job has ended. Well, you can restart the job, and if the job fetches the script through curl, then you can cheat :)
<bentiss>
pinchartl: do you need systemd?
<pinchartl>
I've cheated a few times with fetching a script through curl indeed :-)
<pinchartl>
I don't need a full systemd, but I need udev
<bentiss>
(also, note that FDO_EXPIRES_AFTER: 4h doesn't mean your image will be removed from the registry, just that after that time the runners know they can uncache it)
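For context, that variable is set per container-build job in the ci-templates setup, along the lines of:

```yaml
# FDO_EXPIRES_AFTER tells the runners the cached image may be evicted
# after this delay; the image itself stays in the registry until it is
# deleted there explicitly.
variables:
  FDO_EXPIRES_AFTER: 4h
```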
<pinchartl>
ah, I didn't know that
<pinchartl>
thanks
<bentiss>
well, it doesn't change much now. If you want, you can clean up the registry of your project in the gitlab-UI, and the images will be purged after 24/48h
gert31 has quit [Quit: Leaving]
<pinchartl>
I've deleted a few images from the registry already
<bentiss>
pinchartl: thanks, appreciated. But TBH it might be a drop: right now we have 11TB of registry data, with ~100TB available. But if everybody cleans up their repo from time to time, that's better ;)
<pinchartl>
I'm sure it's a drop at the moment, yes :-)
<pinchartl>
when we have an 80GB image to build a chrome os package, however... :-)
<pinchartl>
but if we go that way, I think we'll have our own runner
<pinchartl>
with a registry local to the runner
<bentiss>
we would appreciate that as well :)
<pinchartl>
does gitlab-runner handle caching of images and artifacts locally, or is that something fdo had to implement in the runner machines ?
<bentiss>
images are cached by docker, so nothing to worry about. artifacts are pulled every time, by the gitlab-runner
<pinchartl>
ok
<pinchartl>
out of curiosity, does gitlab-runner support podman, or does it require docker ?
<bentiss>
pinchartl: it does, but last deployment we had we were having issues, and finally going back to docker fixed it. It could have been related to the podman version shipped in debian, but we were having pretty bad network issues with coreOS, so we switched back to debian
<bentiss>
pinchartl: but my own runner is using podman and I know others also are using it (even in rootless mode for some)
suporte has joined #freedesktop
agd5f has quit [Remote host closed the connection]
agd5f has joined #freedesktop
ds` has quit [Quit: ...]
ds` has joined #freedesktop
thaller has quit [Remote host closed the connection]
thaller has joined #freedesktop
<karolherbst>
anybody else seeing AI based spam hitting gitlab? Just curious if others think they've seen some in the past as well or not...
alanc has quit [Remote host closed the connection]
alanc has joined #freedesktop
thaller has quit [Ping timeout: 480 seconds]
ximion has joined #freedesktop
lsd|2 has quit [Remote host closed the connection]