ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
clamps has quit [Remote host closed the connection]
pcercuei has quit [Quit: dodo]
ap51 has quit [Ping timeout: 480 seconds]
iive has quit [Quit: They came for me...]
sima has quit [Ping timeout: 480 seconds]
heat has joined #dri-devel
heat_ has quit [Read error: No route to host]
danylo has joined #dri-devel
i509vcb has quit [Quit: Connection closed for inactivity]
flynnjiang has joined #dri-devel
yuq825 has joined #dri-devel
yyds has joined #dri-devel
co1umbarius has joined #dri-devel
flynnjiang1 has joined #dri-devel
flynnjiang has quit [Remote host closed the connection]
columbarius has quit [Ping timeout: 480 seconds]
heat has quit [Ping timeout: 480 seconds]
flynnjiang1 has quit [Ping timeout: 480 seconds]
camus has joined #dri-devel
glennk has quit [Ping timeout: 480 seconds]
crabbedhaloablut has quit []
flynnjiang has joined #dri-devel
flynnjiang1 has joined #dri-devel
flynnjiang has quit [Read error: Connection reset by peer]
Company has quit [Quit: Leaving]
flynnjiang1 has quit [Ping timeout: 480 seconds]
jernej has quit [Quit: Free ZNC ~ Powered by LunarBNC: https://LunarBNC.net]
jernej has joined #dri-devel
kts has joined #dri-devel
YuGiOhJCJ has joined #dri-devel
kts has quit [Quit: Leaving]
kzd has quit [Ping timeout: 480 seconds]
Duke`` has joined #dri-devel
fab has joined #dri-devel
fab is now known as Guest11760
<Venemo> Lynne: sorry for the late reply I was away from the keyboard. I am the right person to ping about radv mesh shaders, and I will look into it after I return from my holidays. until then please open a mesa issue if you haven't already so it won't get forgotten
Guest11760 has quit []
fab_ has joined #dri-devel
fab_ is now known as Guest11761
rppt has quit [Quit: ZNC 1.7.5+deb4 - https://znc.in]
sghuge has quit [Remote host closed the connection]
sghuge has joined #dri-devel
glennk has joined #dri-devel
Guest11761 has quit [Ping timeout: 480 seconds]
nektro has quit [Remote host closed the connection]
nektro has joined #dri-devel
crabbedhaloablut has joined #dri-devel
fab has joined #dri-devel
pcercuei has joined #dri-devel
mort_ has quit [Quit: The Lounge - https://thelounge.chat]
tyalie has quit []
mort_ has joined #dri-devel
tyalie has joined #dri-devel
<mareko> DavidHeidelberg: it's still broken, temporarily removing radeonsi from the CI is being considered https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/26643
fab has quit [Quit: fab]
fab has joined #dri-devel
<mareko> and radv
mort_ has quit [Remote host closed the connection]
mort_ has joined #dri-devel
xroumegue has quit [Ping timeout: 480 seconds]
fab has quit [Ping timeout: 480 seconds]
sima has joined #dri-devel
xroumegue has joined #dri-devel
mort_ has quit []
<mareko> This is an announcement that radeonsi and all jobs that depend on it will be removed from the CI on December 29, 2023 due to a libdrm upgrade issue. 4 days should be more than enough time for interested parties to resolve it.
<Venemo> mareko: what exactly is the issue with libdrm?
<mareko> Venemo: the latest libdrm is required, but the CI was changed to use libdrm from the distro, which was obviously a mistake
<Venemo> ouch
<Venemo> we should just revert that CI change then, no?
<mareko> too many conflicts and it's like 5 changes in 1 commit, so not cleanly revertible
<mareko> I switched it back to building from source manually, but some jobs still have errors
<mareko> radeonsi and radv are good in there, but swrast and layered ones can't find that libdrm, this is the pipeline: https://gitlab.freedesktop.org/mesa/mesa/-/pipelines/1064179
junaid has joined #dri-devel
<mareko> usually when we release a new libdrm, we do it because we want to require it in Mesa 30 seconds later
<mareko> we never release libdrm if it's not required by Mesa immediately
<Venemo> ouch
<Venemo> I think such a change should not have been made during the holidays
<mareko> it'll be remembered for years to come though
<mareko> I'm entertaining the idea of using gallium/rtasm to convert preamble NIR to x86 bytecode and run it on the CPU
junaid has quit [Remote host closed the connection]
yyds has quit [Remote host closed the connection]
mclasen has joined #dri-devel
<Venemo> whoah
rppt has joined #dri-devel
rppt has quit []
rasterman has joined #dri-devel
rppt has joined #dri-devel
Company has joined #dri-devel
mclasen has quit [Ping timeout: 480 seconds]
djbw has quit [Read error: Connection reset by peer]
mclasen has joined #dri-devel
rsalvaterra has quit []
rsalvaterra has joined #dri-devel
dviola has joined #dri-devel
nashpa has joined #dri-devel
Net147_ has quit []
Net147 has joined #dri-devel
dliviu has quit [Ping timeout: 480 seconds]
yyds has joined #dri-devel
yuq825 has left #dri-devel [#dri-devel]
alyssa has joined #dri-devel
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
<alyssa> mareko: surely interpreting NIR would be easier and not much more expensive?
<alyssa> though JIT'ing it is cooler :D
<DavidHeidelberg> mareko: just keep the DRM as is and wait until I fix it. On christmas day I was not really excited to start changing CI.
<DavidHeidelberg> *Christmas day and weekend.
heat has joined #dri-devel
rppt has quit [Quit: ZNC 1.7.5+deb4 - https://znc.in]
rppt has joined #dri-devel
mclasen has quit [Ping timeout: 480 seconds]
mclasen has joined #dri-devel
kts has joined #dri-devel
yyds has quit [Remote host closed the connection]
yyds has joined #dri-devel
kts has quit [Quit: Leaving]
alyssa has quit [Quit: alyssa]
kts has joined #dri-devel
iive has joined #dri-devel
rasterman has quit [Quit: Gettin' stinky!]
yyds has quit [Remote host closed the connection]
heat has quit [Remote host closed the connection]
heat has joined #dri-devel
kts has quit [Read error: Connection reset by peer]
<DavidHeidelberg> Ok, since we have libdrm under our control, we could generate artifacts or put the build into S3 and just install it from there in our CI.
<DavidHeidelberg> I understand we want a fresh libdrm in the CI; on the other hand, it's a dependency for multiple projects and it makes sense to keep it within packaging. This way we could use the package we generate in the mesa/drm repository.
<DavidHeidelberg> mareko: what do you say?
<DavidHeidelberg> is it acceptable to you?
mclasen has quit []
mclasen has joined #dri-devel
mclasen has quit [Ping timeout: 480 seconds]
junaid has joined #dri-devel
tlwoerner_ has joined #dri-devel
tlwoerner has quit [Ping timeout: 480 seconds]
fab has joined #dri-devel
fab has quit [Remote host closed the connection]
fab has joined #dri-devel
xerpi[m] has quit []
mclasen has joined #dri-devel
i-garrison has quit [Remote host closed the connection]
i-garrison has joined #dri-devel
sima has quit [Ping timeout: 480 seconds]
i509vcb has joined #dri-devel
fab has quit [Quit: fab]
fab has joined #dri-devel
glennk has quit [Ping timeout: 480 seconds]
mclasen has quit [Ping timeout: 480 seconds]
glennk has joined #dri-devel
mclasen has joined #dri-devel
tobiasjakobi has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
junaid has quit [Ping timeout: 480 seconds]
fab has quit [Quit: fab]
fab has joined #dri-devel
junaid has joined #dri-devel
fab has quit [Quit: fab]
fab has joined #dri-devel
fab is now known as Guest11806
DodoGTA has quit [Quit: DodoGTA]
DodoGTA has joined #dri-devel
Haaninjo has joined #dri-devel
gouchi has joined #dri-devel
gouchi has quit [Remote host closed the connection]
Guest11806 has quit []
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
mclasen has quit [Ping timeout: 480 seconds]
Mary has quit []
Mary has joined #dri-devel
Mary has quit []
Mary has joined #dri-devel
tobiasjakobi has quit []
tlwoerner_ has quit []
tlwoerner has joined #dri-devel
cocomo has joined #dri-devel
<karolherbst> DavidHeidelberg: can it be done in a way that any developer can bump the version of those packages?
Duke`` has quit [Ping timeout: 480 seconds]
Duke`` has joined #dri-devel
<karolherbst> could we even CI it? Like something that checks whether the requested version is available and, if not, triggers a build and packages it (while keeping the other versions)?
<karolherbst> (unless we don't care if we accidentally bump the required version on stable branches)
<karolherbst> (then the latest would do)
<DavidHeidelberg> I'm not saying it's as straightforward as what we had in CI, but on the other hand our CI isn't one small script with two deps to compile, it's a pretty huge monster
<karolherbst> DavidHeidelberg: no, I meant that mesa requests a version and triggers an external pipeline to wait for the build to finish rather
<karolherbst> also
<karolherbst> I don't think we can go with "always latest" unless the stable maintainers say it doesn't matter
<DavidHeidelberg> hmm, can you rephrase, not sure about your idea
<karolherbst> CI should also catch that backporting certain commits could require a newer libdrm without us noticing
<karolherbst> like.. in mesa CI we check if a _specific_ libdrm version is in the repo, if not, we ask an external CI pipeline to build it for us and add it to the repo
<karolherbst> but as I said, it's only needed if we can't go with "always latest" also on stable branches
<karolherbst> but anyway.. personally I don't see the point of adding this complexity
<karolherbst> what's the benefit here anyway?
<DavidHeidelberg> we could manually do dpkg -i _libdrm-2.4.118_, but building infra for libdrm with minimal changes seems to be overkill
<karolherbst> using deb files also seems overkill to me honestly
<DavidHeidelberg> not if other software depends on it
<DavidHeidelberg> or do we plan to compile all the software above libdrm?
<karolherbst> why? is that an apt limitation?
<karolherbst> other build systems have ways of marking something as installed without actually installing that package so deps are resolved, but other packages depending on it can be installed regardless
<karolherbst> *package
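What karolherbst describes is, on Debian-based systems, usually done with the equivs tool: build a dummy package that declares the right name and version so apt's dependency resolution succeeds, while the real files come from a source build. A sketch, with an illustrative package name and version (not taken from the actual CI):

```shell
# Create a control file for a dummy "libdrm-dev" package; the name and
# version below are illustrative.
cat > libdrm-dummy.ctl <<'EOF'
Section: libs
Priority: optional
Standards-Version: 4.6.2
Package: libdrm-dev
Version: 2.4.118-0local1
Description: dummy libdrm-dev
 Satisfies dependencies; the real libdrm is installed from source.
EOF
# With the "equivs" package installed, this produces an installable .deb:
#   equivs-build libdrm-dummy.ctl
#   dpkg -i libdrm-dev_2.4.118-0local1_all.deb
```

Packages that depend on libdrm-dev then install normally, even though the actual library in /usr came from ninja install.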
<DavidHeidelberg> I understand we compile a few last-tier test suites and utils by hand. I don't understand why we'd mess with core system packages which other installed packages rely on
<DavidHeidelberg> if we want to manage everything by hand, we can invent a new distro for our CI
<karolherbst> what are those packages?
cocomo has left #dri-devel [#dri-devel]
<DavidHeidelberg> 2-3 years back, when we did some simple tests and pointed our builds to /usr/local/lib... I get it. Now we have 10 scenarios, 4 different containers with different package sets, multiple architectures
<karolherbst> sure, but what packages do we need which depend on libdrm?
Mangix has quit [Read error: Connection reset by peer]
Mangix has joined #dri-devel
<DavidHeidelberg> I know there are. I had my fun with this like a year ago and it was a lot of pain
flom84 has joined #dri-devel
<karolherbst> couldn't we just build the deb files inside mesa CI and install it there?
<DavidHeidelberg> now I'll get to my main point, and that's not spending 3x 1 hour building the armhf/arm64 and x86_64 containers, but just installing prepared packages from one repo
<DavidHeidelberg> it's really not fun to work with CI (as a CI developer).
<karolherbst> we don't have to build all deb packages in one go
<DavidHeidelberg> and yet, libdrm in CI gets downloaded and compiled in 30 seconds
<karolherbst> the deb file could be the artifact of one job and then the container building ones just pull in all those deb files
<DavidHeidelberg> but when we have other packages which can be installed in a few seconds, we can do the same for libdrm
<DavidHeidelberg> I was preparing this for the last few months, but this "let's bump libdrm and reintroduce all the ugliness" made me enjoy my Christmas hacking on this
<mareko> DavidHeidelberg: if there is a timeframe for implementing the solution, I'll drop my earlier statements
<DavidHeidelberg> yes, I have the libdrm done in the repo, but I'm doing some final tests
<karolherbst> DavidHeidelberg: well, nobody forced you to work on this, or did somebody?
<DavidHeidelberg> right
<DavidHeidelberg> I need to do a few more force pushes, since currently the repo doesn't contain the hash it was created from in the commit msg
<karolherbst> nothing against a deb repo in itself, I just prefer that we have all the logic inside mesa, as otherwise we could run into other issues we haven't hit before (like e.g. a mismatch between the libdrm APIs used and the declared dependency version)
<DavidHeidelberg> karolherbst: yeah, the problem is we already compile & build a double-digit number of projects
<DavidHeidelberg> of course, we could do better ccaching
<DavidHeidelberg> but then we would have to spend a lot of time preparing some nice framework for each project to be properly ccached
<karolherbst> yeah.. so that was my building the deb package idea
<karolherbst> just have a job for each package
<DavidHeidelberg> so it's easier to just rebuild the one package which gets affected
<DavidHeidelberg> yes. Welcome to ci-deb-repo
<mareko> distro-provided libdrm can be overwritten by our own, just build it and ninja install
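mareko's build-and-ninja-install route amounts to roughly the following; the tag, prefix, and use of sudo are illustrative, and the commands need meson, ninja, and network access, so treat this as an untested sketch:

```shell
# Overwrite the distro libdrm with a source build installed into /usr.
git clone --depth 1 --branch libdrm-2.4.118 \
    https://gitlab.freedesktop.org/mesa/drm.git
cd drm
meson setup build --prefix=/usr --buildtype=release
ninja -C build
sudo ninja -C build install   # clobbers the distro-provided files
```

The catch the rest of the discussion circles around: the package manager still believes the distro version is installed, which is exactly why dummy packages or a proper repo come up.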
<karolherbst> but that's outside of mesa
<DavidHeidelberg> mareko: alpine has it with alpine:latest so that can be quickly resolved
<DavidHeidelberg> for Fedora... hmm. I have to check
<DavidHeidelberg> karolherbst: yeah, but as you can see there are like 8 packages. But in reality in Mesa we have like 20
<mareko> why are we talking about other packages?
<DavidHeidelberg> mesa CI shouldn't be machinery for building 20 packages. it should install stuff, run a few scripts and give us the containers
<karolherbst> the thing is just.. what do we then do with the fedora/other containers?
<karolherbst> what if we come to the situation we have to bump it in all distributions?
<DavidHeidelberg> it doesn't matter. Fedora just does a build. Alpine just does a build. All the other testing is done on Debian.
<karolherbst> sure
<karolherbst> but they might also need updated libdrm
<DavidHeidelberg> we just bump to recent versions. The testing will be unaffected (except the build test, which isn't very sensitive to new libs & stuff)
<mareko> when you update libdrm, you don't need to update any other packages
<DavidHeidelberg> I can bump alpine in upstream in matter of minutes
<karolherbst> so maybe we shouldn't use debian then?
<DavidHeidelberg> mareko: that's why you update only libdrm
<DavidHeidelberg> you don't need to touch other packages
<DavidHeidelberg> karolherbst: enjoy rewriting our CI :)
<karolherbst> ohh, I would totally do it to get rid of debian
<DavidHeidelberg> guess why we have Debian. Because it's like immutable
<DavidHeidelberg> we don't have to care that something there will change
<karolherbst> ?
<DavidHeidelberg> except what we change
<karolherbst> how is that different with other repos
<karolherbst> *distros
<karolherbst> sure they update packages more often, but debian also could pull in some update breaking us
<DavidHeidelberg> the difference is how often. In Debian this happens rarely
<karolherbst> okay, so getting rid of debian isn't an option anymore?
<DavidHeidelberg> Also what I was working on was improving our situation to have stable repo for mesa-ci
<karolherbst> yeah....
<DavidHeidelberg> I mean, it'll cost you an enormous amount of work with little to zero benefit
<karolherbst> probably
krumelmonster has quit [Ping timeout: 480 seconds]
<karolherbst> but debian annoys me personally
<DavidHeidelberg> what is annoying with debian is the packaging, and I tried to "work around it" with the repo I was working on
<DavidHeidelberg> as you could see from the yaml file I sent into chat, I think it's much better than f**** with the debian/ directory :D
<karolherbst> yeah...
<karolherbst> debian packaging is pure pain
<DavidHeidelberg> Mesa has a few people involved in Debian; if you need a quick bump, you'll just increase the number, which should work, let's be pessimistic, 90% of the time.
<DavidHeidelberg> when it doesn't, you can ping me or someone from Debian packaging and we'll fix it asap :) (it'll have to be fixed for Debian anyway later, so...)
<karolherbst> "quick bump" never worked for me in debian
<karolherbst> even for critical bug fixes
<DavidHeidelberg> the trick is we have the yaml and some automated magic behind this in the ci-deb-repo
<karolherbst> does debian packaging allow us to have multiple versions of the same package in the repo and choose which version to install?
Haaninjo has quit [Quit: Ex-Chat]
<karolherbst> anyway.. I'm a little concerned about the idea of managing a single repo (probably used by multiple projects) while not being able to install a specific version
<DavidHeidelberg> karolherbst: I thought about it 😉
<karolherbst> and just installing it inside `/usr` and forcing the package system to assume it's installed is the solution with the lowest friction imho
Duke`` has quit [Ping timeout: 480 seconds]
pzanoni has quit [Ping timeout: 480 seconds]
<DavidHeidelberg> developer laptop: "deb [trusted=yes] https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo/-/raw/bookworm/ bookworm main"
<DavidHeidelberg> CI scripts: "deb [trusted=yes] https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo/-/raw/$HASH/ bookworm main"
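The difference between the two sources lines is just the ref in the raw URL: developer laptops track the branch tip, while CI pins an exact commit. A sketch of how a CI script might materialize the pinned entry (the hash and the local file name are illustrative):

```shell
# Write an apt sources entry pinned at one ci-deb-repo commit, so every
# pipeline run sees the identical package set. The hash is illustrative;
# in a real container this file would go to /etc/apt/sources.list.d/
# and be followed by apt-get update.
CI_DEB_REPO_HASH=0123abcd
echo "deb [trusted=yes] https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo/-/raw/${CI_DEB_REPO_HASH}/ bookworm main" \
    > ci-deb-repo.list
```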
<karolherbst> but if we can't have like multiple versions in the same deb repo, we could just have a http server where you download specific versions from
<DavidHeidelberg> so we'll have the repo always fixed at some specific software-set
<karolherbst> mhhhhh
<DavidHeidelberg> and when you want to bump, you just update the hash
<karolherbst> I'd rather download deb files then
<DavidHeidelberg> again, 3 archs, 50 packages (libdrm for example builds into multiple)
<DavidHeidelberg> you don't want to go one by one
<karolherbst> I mean.. it just differs in how you pull the packages, right?
<DavidHeidelberg> yeah, you can still browse the GitLab UI and see the packages in repo
<DavidHeidelberg> but the CI will pull them as needed
krumelmonster has joined #dri-devel
<karolherbst> instead of adding them into a deb repo, you upload them somewhere, and instead of installing them, you download those; sure it's more painful, but again, how would we implement "install a _specific_ version of X" then?
<karolherbst> what if the version of libdrm is 2.4.119 in the deb repo, but I want to CI that it still builds with 2.4.110 (because that's the required version in the build system)?
<DavidHeidelberg> you can fork ci-deb-repo
<DavidHeidelberg> then you edit libdrm.yml to the old version
<DavidHeidelberg> and you point CI to your fork @your_hash
<karolherbst> so in the worst case, every project using that, creates a fork pinning specific versions?
<karolherbst> what if you also want to juggle with stable/main branches?
<karolherbst> so you mirror that branching in your work as well?
<DavidHeidelberg> the stable branches will be fixed @some_commit
<DavidHeidelberg> you can make stable branch
<DavidHeidelberg> so, let's say I create a branch from hash_a; then I'll push a new commit; and I'll point to the new hash from the bookworm_23_1 branch
<DavidHeidelberg> it's much better than what we have now
<DavidHeidelberg> and the trick is, you can make stable branch only when you need some bump
<karolherbst> couldn't we simplify this with external pipelines?
<karolherbst> like
<karolherbst> you have your config file of all packages as your input, the downstream deb repo CI does its magic, only building what it needs to, and has a mapping of config_file -> deb repo container
<karolherbst> mhhh or maybe other idea
<DavidHeidelberg> I see an issue, how can you use it externally (for example to test on your laptop)?
<DavidHeidelberg> git checkout $hash_a; git checkout -b bookworm_23_3; # do change; git push
<DavidHeidelberg> how can you beat this?
<karolherbst> why would I want to use it locally?
<DavidHeidelberg> well, why would someone run any tests locally
<karolherbst> I didn't mean running tests, I mean install deb packages from that repo
<DavidHeidelberg> but these packages contain libs, llvm, tests, whatever :)
<karolherbst> for llvm we should just use the upstream repo honestly
<DavidHeidelberg> also, debugging CI isn't very exciting with all the waiting and abuse of FDO
<karolherbst> but anyway, not everybody uses debian locally
<DavidHeidelberg> sure
<DavidHeidelberg> I'm not saying my solution solves all the problems of the universe, but on the other hand it gives us software at the versions we need, while we keep a stable system underneath
<DavidHeidelberg> and it gives us very precise control over which version is in which pipeline, while we don't have to download, compile and install it by hand
<DavidHeidelberg> and it also allows us nice rollbacks without rebuilding stuff (for example a new package causing problems, but we also did some other changes to CI meanwhile.. we just change the hash and keep the changes in our CI untouched).
<karolherbst> right...
<DavidHeidelberg> While I get that this is not that strong an argument, I've spent like 2 years (incl. people from my team) playing with CI and I believe I see the pain points which can be solved. I wouldn't want to make my life worse... of course I can overlook something, but I hope this changes things for the better.
<DavidHeidelberg> there are other solutions which would also yield nice improvements, maybe a bit better, maybe a bit worse, but so far this one seems doable and workable to me :D
<karolherbst> yeah, I just kinda wish that managing multiple versions weren't so annoying, but apparently there are patches to reprepro to do that...
heat has quit [Remote host closed the connection]
heat has joined #dri-devel
<DavidHeidelberg> karolherbst: I'm thinking I could eventually keep the old versions inside the repository forever
<DavidHeidelberg> then you could just pin it in our CI
<karolherbst> yeah..., that would make it easier at least
<karolherbst> maybe have a config which lists which versions to keep or something..
<karolherbst> or rather...
<DavidHeidelberg> we could keep everything, it's like git lfs
<karolherbst> each project's build job checks if all versions exist and only builds the missing ones
<karolherbst> or we rely on storage and to never mess it up
<DavidHeidelberg> it's just the reference
<karolherbst> mhh git lfs might indeed be an option
<karolherbst> but we might also have to rebuild versions
<DavidHeidelberg> I see, experimental reprepro has it :)
<DavidHeidelberg> that's why I like the whole repo-hash downgrading mechanism: you have all packages built against each other (and we can be sure that Debian as a base won't surprise us)
flom84 has quit [Quit: Leaving]
<karolherbst> well, worst case we'll see if it checks out or not
<daniels> karolherbst: (yes you can have multiple versions of the same package present in a single repo, or overlapping in multiple repos, and select either a specific version, or latest from repo foo)
<daniels> apt install libdrm1=2.4.119-1
junaid has quit [Remote host closed the connection]
<daniels> or apt install libdrm1/karol-weird-experiment-repo
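Beyond a one-off `apt install pkg=version` as daniels shows, a version can also be held in place with an apt preferences ("pinning") entry, so later upgrade runs don't pull in a newer build from the repo. A sketch; the package names and version are illustrative:

```shell
# apt pinning entry: a priority above 1000 makes apt select exactly this
# version, even if that means downgrading. Destined for
# /etc/apt/preferences.d/ on a real system.
cat > libdrm-pin <<'EOF'
Package: libdrm2 libdrm-dev
Pin: version 2.4.119-1
Pin-Priority: 1001
EOF
```

See apt_preferences(5) for the exact priority semantics.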
<DavidHeidelberg> daniels: yup, reprepro though has it only in a recent (this year) experimental version :P
<daniels> DavidHeidelberg: really?
<DavidHeidelberg> yup, I just checked the https://packages.debian.org/experimental/reprepro
<karolherbst> is there another tool to manage repos, or would you have done it through this "take this directory and make it into a repo" thing?
<daniels> we can just install reprepro from experimental?
<DavidHeidelberg> daniels: yup :)
<DavidHeidelberg> anyway, I'm going to sleep, any idea how to reset gitlab-ci GIT caching?
<daniels> huh?
<daniels> I’m missing some context for the last question
<DavidHeidelberg> I do a force-push to the git repo where a CI pipeline later pushes... but it forgets about my force push all the time
<DavidHeidelberg> so it looks like I didn't do git push --force from my PC
<DavidHeidelberg> daniels: does it make sense?
<daniels> DavidHeidelberg: sorry I was just getting home but also -EPARSE … I assume you don’t mean GitLab CI itself pulling the repo the pipeline is in, but some kind of internal cache?
<DavidHeidelberg> yup, it seems gitlab caches the remote (I do git remote add; git fetch; git checkout)
<DavidHeidelberg> daniels: there should be 2 commits (initial and push), nothing in between
<fluix> I'm also confused. what are these gitlab-ci bot (?) commits
<fluix> like, this just looks like you pushed and then whatever gitlab bot it is pushed more commits
<DavidHeidelberg> fluix: it's from the CI job
<fluix> and what's wrong
<DavidHeidelberg> the trick is, when I locally push a 1st empty commit (git push --force), it'll ignore that on the next push from CI and restore the previous commits
<DavidHeidelberg> and it just appends to what I force pushed onto the initial commit
<DavidHeidelberg> if you look at the same link, this is how the branch should look
<fluix> "it'll ignore it" what's the first it
<DavidHeidelberg> now it's empty, then +1 commit from CI (but without these old ones)
<fluix> the link right now only has your empty commit
<DavidHeidelberg> yes, it should. but the problem is, when CI pushes, it restores the "removed" commits
<DavidHeidelberg> even when it shouldn't
<fluix> example gitlab ci log?
<fluix> however it's cloning, it means it has these commits. unless you explicitly configure some cache I don't think it's doing anything wrong. if you want your behaviour it sounds like you need a git reset
<daniels> DavidHeidelberg: yeah, that does get cached; there's a YAML variable to explicitly disable that iirc, but you really just want to fetch a branch and check out a specific revision
<DavidHeidelberg> good idea for a workaround :)
<DavidHeidelberg> thanks!
<DavidHeidelberg> even better, recalled I could remove the branch
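daniels' and fluix's advice (fetch the branch explicitly, then force the local state to a specific revision instead of trusting cached remote state) can be sketched as follows; the scratch repositories here stand in for the real GitLab remote, and the remote name matches the one in DavidHeidelberg's script:

```shell
set -e
# Scratch stand-ins for the real remote and the CI working copy.
tmp=$(mktemp -d)
git init -q --bare "$tmp/upstream.git"
git init -q "$tmp/work"
cd "$tmp/work"
git -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "initial"
git push -q "$tmp/upstream.git" HEAD:bookworm

# The workaround: re-add the remote, fetch the branch explicitly, and
# force the local branch to whatever the remote has right now.
git remote remove origin_gitlab 2>/dev/null || true
git remote add origin_gitlab "$tmp/upstream.git"
git fetch -q origin_gitlab bookworm
git checkout -q -B bookworm FETCH_HEAD   # -B resets an existing branch
```

After this, any commits a stale cached checkout remembered on `bookworm` are gone; the branch is exactly what the remote holds.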
<fluix> gitlab auto caches repositories? like you're saying https://gitlab.freedesktop.org/gfx-ci/ci-deb-repo/-/jobs/53124278#L36 already pulls in the extra commits?
<fluix> anyways, glad you solved it. this does look like a rather complex setup though
<DavidHeidelberg> fluix: yup, GitLab loves to do some magic to "improve stuff"
<daniels> fluix: the working directory gets cached by default
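The default daniels refers to is GitLab's `fetch` strategy, which reuses the runner's cached working copy between jobs; a job can opt out with `GIT_STRATEGY` in `.gitlab-ci.yml`. A minimal sketch (the job name is illustrative):

```yaml
# Force a fresh clone so stale refs from a cached checkout can't leak in.
update-repo:
  variables:
    GIT_STRATEGY: clone   # default "fetch" reuses the cached working copy
  script:
    - git log --oneline -5
```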
<DavidHeidelberg> it's not that bad in complexity terms; anyway, I don't want to pollute a repo that will be heavily used in the future with some testing :)
<fluix> how does `git remote add` not fail with `error: remote origin_gitlab already exists.` then?
<fluix> nvm, there's a git remote remove
<fluix> gotcha, thanks!
<DavidHeidelberg> fatal: couldn't find remote ref bookworm ... 2 lines ...fatal: a branch named 'bookworm' already exists
<DavidHeidelberg> F$#@