ChanServ changed the topic of #freedesktop to: https://www.freedesktop.org infrastructure and online services || for questions about freedesktop.org projects, please see each project's contact || for discussions about specifications, please use https://gitlab.freedesktop.org/xdg or xdg@lists.freedesktop.org
keithp has joined #freedesktop
keithp has quit [Quit: ZNC 1.8.2+deb2+b1 - https://znc.in]
keithp has joined #freedesktop
systwi_ has joined #freedesktop
co1umbarius has joined #freedesktop
columbarius has quit [Ping timeout: 480 seconds]
systwi has quit [Ping timeout: 480 seconds]
keithp has quit [Quit: ZNC 1.8.2+deb2+b1 - https://znc.in]
ximion has quit [Quit: Detached from the Matrix]
keithp has joined #freedesktop
keithp has quit [Quit: ZNC 1.8.2+deb3.1 - https://znc.in]
keithp has joined #freedesktop
tzimmermann has joined #freedesktop
keypresser86 has quit []
Leopold__ has joined #freedesktop
sima has joined #freedesktop
Leopold_ has quit [Ping timeout: 480 seconds]
lyudess has joined #freedesktop
alanc has quit [Remote host closed the connection]
alanc has joined #freedesktop
Lyude has quit [Ping timeout: 480 seconds]
ofourdan has joined #freedesktop
AbleBacon has quit [Read error: Connection reset by peer]
ximion has joined #freedesktop
ximion has quit []
DodoGTA is now known as Guest2803
DodoGTA has joined #freedesktop
Guest2803 has quit [Ping timeout: 480 seconds]
<karolherbst>
daniels: all the arm builders are kinda... broken
<pq>
failed several jobs in the past 10 mins or so
<pq>
still failing - daniels ?
<mupuf>
pq: I paused the runner
<pq>
thanks!
<mupuf>
bentiss: FYI ^
<daniels>
ugh
* daniels
kicks it
<bentiss>
daniels: do you want me to migrate arm-7 to coreos?
<bentiss>
might be just easier
<bentiss>
(and that's something I planned to do eventually)
<bentiss>
arm-7 is now deleted. arm-10 is spinning up
<daniels>
heh, sure
<emersion>
thanks mupuf!
<mupuf>
emersion: you're welcome :)
<DavidHeidelberg[m]>
bentiss: Hey! Have you heard about our lord and saviour ccache, and a shared cache across runners? I'll sum up my idea a bit. I'm working on gfx-ci kernel builds, and when `ccache`d it's like 1 minute instead of 10. The catch is that the cache is only local. The `ccache` itself takes around ~150M. I was thinking that, with some mindful approach, it could work as global FDO caching.
<DavidHeidelberg[m]>
Let me drop an example: you build a kernel (6.3.x, uprevving one by one), it's ~200M; you generate the ccache on the first build, then use it for a month. After a month, one job wipes the ccache (to not keep the cruft) and re-caches. Would an approach like that be acceptable, if it doesn't abuse the infra with an upload per job, but still gets a pretty huge benefit?
<DavidHeidelberg[m]>
It could work that way for projects which are not in extra active development but still need to be rebuilt often.
<DavidHeidelberg[m]>
*active development = don't change buildsystem/compiler/linker options
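The monthly wipe-and-recache policy described above could be sketched as a small helper run at the start of each job. This is a hypothetical illustration, not fd.o tooling: the `.ccache/stamp` path and the 30-day threshold are made-up assumptions standing in for whatever the project would configure.

```shell
#!/bin/sh
# Hypothetical helper: decide whether this job should rebuild the shared
# ccache from scratch or reuse it, based on the age of a stamp file.
# The stamp path and 30-day threshold are illustrative assumptions.
stamp=".ccache/stamp"
max_age_days=30

if [ ! -f "$stamp" ]; then
  decision=rebuild          # no cache yet: build it and upload
else
  now=$(date +%s)
  mtime=$(date -r "$stamp" +%s)   # GNU date: mtime of the stamp file
  if [ $(( (now - mtime) / 86400 )) -ge "$max_age_days" ]; then
    decision=rebuild        # older than a month: wipe the cruft, re-cache
  else
    decision=reuse          # fresh enough: download and use as-is
  fi
fi
echo "$decision"
```

A project in more active development could simply lower `max_age_days`, matching the "the project can adjust that just fine" point later in the discussion.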
<bentiss>
DavidHeidelberg[m]: I would be happy with anything that reduces the time spent in jobs, yes (and ccache is already activated; mesa uses it a lot). But be mindful that runners are for now considered insecure, and so a shared cache is problematique
<bentiss>
problematic
<DavidHeidelberg[m]>
bentiss: oh right, supply chain. You could eventually inject some crafted data into ccache
<bentiss>
yeah, and you could simply mess with the ccache server, because you would need to have credentials somewhere on the runner
<DavidHeidelberg[m]>
bentiss: I'll be back, have to grab a bit of food
<DavidHeidelberg[m]>
bentiss: maybe a different workflow from a specific repo, where there would be "verified people" who could push into the `ccache`?
<bentiss>
DavidHeidelberg[m]: we could reuse s3.fd.o with the job token for that, but it would not be a generic runner capability IMO
<DavidHeidelberg[m]>
it doesn't have to be generic, if it looks easily implementable
<bentiss>
yeah, we can easily add a new ccache bucket (or just reuse the ones we have if that works), make a permission rule for it, and then the code can push/pull the ccache in the before_ or after_ scripts
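Sketched as CI config, the bucket approach bentiss describes might look like this. Everything here is hypothetical: the bucket name, the `s3.example.fd.o` host, and the auth header are assumptions, and the real s3.fd.o paths and permission rules would differ.

```yaml
# Hypothetical .gitlab-ci.yml fragment -- host, bucket, and auth are made up.
build:
  variables:
    CCACHE_DIR: "$CI_PROJECT_DIR/.ccache"
  before_script:
    # pull the shared cache; -f plus "|| true" makes a missing object non-fatal
    - curl -sf "https://s3.example.fd.o/ccache-bucket/$CI_PROJECT_PATH/ccache.tar.zst" | tar --zstd -x || true
  after_script:
    # push the cache back, authenticating with the job token
    - tar --zstd -c .ccache > ccache.tar.zst
    - curl -sf -X PUT -H "Authorization: Bearer $CI_JOB_TOKEN" --upload-file ccache.tar.zst "https://s3.example.fd.o/ccache-bucket/$CI_PROJECT_PATH/ccache.tar.zst"
```

Keying the object path on `$CI_PROJECT_PATH` is what would tie the cache to a namespace/project, in line with the "tied to `namespace/project`" idea raised below.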
keypresser86 has joined #freedesktop
AbleBacon has joined #freedesktop
ximion has joined #freedesktop
<DavidHeidelberg[m]>
bentiss: so the workflow could be `before: if [ -z "$CCACHE_REBUILD" ]; then curl https://s3.../namespace/proj/ccache.tar.zst | tar -x -C ccache/; fi` and the `.gitlab-ci.yml` would have one pipeline a month which would set `CCACHE_REBUILD`? If it's tied to `namespace/project`, it could be safe?
<DavidHeidelberg[m]>
I still have a feeling I'm missing something important in the workflow.
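Expanded, that before-script one-liner might look like the following sketch. The URL and `CCACHE_REBUILD` variable are assumptions carried over from the message above, and the actual download is stubbed out so only the control flow is shown.

```shell
#!/bin/sh
# Hypothetical expansion of the `before:` one-liner sketched in the chat.
# In a real job, fetch_ccache would be something like:
#   curl -sf "https://s3.../namespace/proj/ccache.tar.zst" | tar --zstd -x -C .ccache
# It is stubbed here so the CCACHE_REBUILD branching is visible and testable.
fetch_ccache() {
  echo "fetched"
}

if [ -z "${CCACHE_REBUILD:-}" ]; then
  # normal pipeline: reuse the shared cache
  result=$(fetch_ccache)
else
  # monthly CCACHE_REBUILD pipeline: start from an empty cache dir
  mkdir -p .ccache
  result="rebuilding"
fi
echo "$result"
```

The monthly pipeline would then be a scheduled pipeline that sets `CCACHE_REBUILD`, while every other pipeline leaves it unset and takes the fetch path.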
<bentiss>
would once a month be enough?
<bentiss>
shouldn't this be updated on every push on main?
<DavidHeidelberg[m]>
for the kernel, for example, yes; the incremental changes are not that bad. Maybe once a week, but the project can adjust that just fine
<bentiss>
outside of that, I think that's good enough
tzimmermann has quit [Quit: Leaving]
<DavidHeidelberg[m]>
bentiss: with each kernel patch, for example? I see some gain, but the overhead would probably be higher?
<bentiss>
because you are not uploading to the ccache with credentials written somewhere in the runner
<bentiss>
DavidHeidelberg[m]: FWIW, on my kernel CI, I plan (had actually but got busted) to rebuild the git archive whenever a push to the master branch is done
<bentiss>
in hid.git, the master branch is only updated when Linus pulls it, which happens at most once a week
<bentiss>
so not all patches, just when the common ancestor gets updated
<bentiss>
DavidHeidelberg[m]: the missing bit in your approach is that I don't think you can write to https://s3.../namespace/proj/ccache.tar.zst atm, the common artifacts have a dedicated name IIRC
<DavidHeidelberg[m]>
bentiss: for a git tree, pushing the tarball will probably cost more than fetching ~200 commits or something, no?
bionade24 has quit [Remote host closed the connection]