ChanServ changed the topic of #freedesktop to: https://www.freedesktop.org infrastructure and online services || for questions about freedesktop.org projects, please see each project's contact || for discussions about specifications, please use https://gitlab.freedesktop.org/xdg or xdg@lists.freedesktop.org
<mkurz>
Hi! I just signed at your GitLab instance (https://gitlab.freedesktop.org/) but an admin needs to confirm me? Is that correct?
<mkurz>
sorry, I meant "...signed up..."
AbleBacon has quit [Read error: Connection reset by peer]
<mupuf>
mkurz: no, we shouldn't need to accept you
<mupuf>
how long have you been waiting?
<mkurz>
mupuf: not sure anymore...
<mkurz>
mupuf: My email is confirmed but when I try to log in it says "Your account is pending approval from your GitLab administrator and hence blocked. Please contact your GitLab administrator if you think this is an error."
<mkurz>
would be nice if you could unlock me
<mkurz>
same username as here
<mupuf>
done
<mupuf>
sorry for the delay, it should have been automatic
<mkurz>
mupuf: Thanks! Got the Email "Your GitLab account request has been approved!"
<mkurz>
mupuf: It's ok ;) Thanks!
<mupuf>
you're welcome!
mkurz has quit [Quit: Leaving]
Leopold has quit []
Leopold has joined #freedesktop
thaller has joined #freedesktop
Leopold___ has joined #freedesktop
Leopold has quit [Ping timeout: 480 seconds]
MajorBiscuit has quit [Read error: Connection reset by peer]
MajorBiscuit has joined #freedesktop
___nick___ has joined #freedesktop
karolherbst_ is now known as karolherbst
<bentiss>
mupuf: is there an easy way to fetch the b2c initramfs version from the downloaded binary?
<mupuf>
bentiss: hmm, no. I think the easiest would be for us to put the version in the name
<bentiss>
nah... because people can just change it
<mupuf>
or, if you are ready to decompress the image, then we *could* put a file in there
<mupuf>
but that means xz -d, then extract the cpio archive
<mupuf>
if that works for you, then we can do it
<bentiss>
can't we add some metadata to xz?
<mupuf>
not as far as I know, but if we can, that would be better, indeed!
<bentiss>
actually we could add a file and use xz -l...
<mupuf>
bentiss: we need to check if linux is fine with this though
<mupuf>
but if it is, that sounds like a very good plan!
<mupuf>
but if we can't b2c already has `/etc/b2c.version` in the cpio
<bentiss>
oh... probably uncompressing just that file should be easy enough, no?
genpaku has quit [Read error: Connection reset by peer]
genpaku has joined #freedesktop
<mupuf>
bentiss: is this easy-enough? bsdtar -Oxf out/initramfs.linux_amd64.cpio.xz etc/b2c.version
<bentiss>
mupuf: don't have bsdtar locally
<mupuf>
ok, let's see if there are other ways using cpio
<bentiss>
well, not on all machines
<bentiss>
ideally a python module would be easier
<mupuf>
I see
<bentiss>
well, maybe not
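[Editor's note: the bsdtar one-liner above has a dependency-free alternative along the lines bentiss suggests. A minimal Python sketch, assuming the standard newc cpio layout and the `etc/b2c.version` path mentioned later in the chat; the helper names here are invented:]

```python
import lzma

NEWC_MAGIC = b"070701"

def _pad4(n):
    # newc entries are padded to 4-byte boundaries
    return (4 - n % 4) % 4

def cpio_read_file(data, wanted):
    """Scan an uncompressed newc cpio archive for one entry's contents."""
    pos = 0
    while True:
        header = data[pos:pos + 110]
        if header[:6] != NEWC_MAGIC:
            raise ValueError("not a newc cpio archive")
        filesize = int(header[54:62], 16)   # c_filesize field
        namesize = int(header[94:102], 16)  # c_namesize field (incl. NUL)
        pos += 110
        name = data[pos:pos + namesize - 1].decode()
        pos += namesize + _pad4(110 + namesize)
        if name == "TRAILER!!!":
            raise KeyError(wanted)
        body = data[pos:pos + filesize]
        pos += filesize + _pad4(filesize)
        if name == wanted:
            return body

def read_b2c_version(initramfs_path):
    # xz-decompress the whole initramfs, then pull out etc/b2c.version
    with lzma.open(initramfs_path) as f:
        return cpio_read_file(f.read(), "etc/b2c.version").decode().strip()
```

Decompressing the whole image into memory is roughly the cost bsdtar would also pay; a streaming parse over `lzma.LZMAFile` would avoid holding the full archive at once.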
<mupuf>
bentiss: the reason you are asking this is because you would like vm2c to use the right command line based on the version of the initramfs?
<mupuf>
for sure, it would be nice to be able to know, rather than being forced to keep backwards compatibility ... forever?
<mupuf>
especially since we say we won't (even though we do)
<bentiss>
mupuf: yes, but OTOH, we don't really care I think
* mupuf
wanted to just keep vm2c fine with both the latest release and the current version of the code
<mupuf>
and have CI enforce that
<bentiss>
mupuf: yes, and I thank you for that
<bentiss>
it's my usage that is wrong
<bentiss>
so I'm trying to see what would be acceptable
<mupuf>
it will still fail if people do not upgrade their b2c, but we could indeed try to take that into account by reading the b2c version from the archive
<bentiss>
the problem with reading the version is that we will end up keeping a list of endless compat flags, which is not better
<mupuf>
well, not so endless as I want to keep backwards compatibility for a given major version
<bentiss>
right now, I'm thinking more: compat if the version is not released, ditch it ASAP (when released)
<mupuf>
so, we'll just need one distinction when we do remove support for `b2c.container` (not sure why I would, given that it has literally no maintenance cost)
<bentiss>
because we can always "lock" the versions by specifying the actual download urls
<bentiss>
(as a user)
<mupuf>
compat if the version is not released, ditch it ASAP (when released) --> this would mean that b2c would make a backwards incompatible change without a deprecation notice
<mupuf>
err, a transition period*
<bentiss>
well, the other option is to deprecate the option in the initramfs, and remove it after the next release
<bentiss>
which might be what you did
<mupuf>
yep, and I am happy with doing that
<mupuf>
yes, it is what I did, you may still use b2c.container... and I may keep that forever
<bentiss>
oh, right I missed that part
<mupuf>
basically, b2c.run and b2c.container are aliases
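[Editor's note: since b2c options arrive as kernel command-line parameters, such an alias can be resolved at parse time. A hedged sketch — `parse_cmdline` and the `b2c.cache_device` example are invented for illustration; only `b2c.run`/`b2c.container` come from the discussion:]

```python
import shlex

# Older spellings mapped to their canonical option name (per the chat,
# b2c.container is an alias of b2c.run).
ALIASES = {"b2c.container": "b2c.run"}

def parse_cmdline(cmdline):
    """Collect key=value kernel parameters, folding aliases together."""
    args = {}
    for token in shlex.split(cmdline):
        if "=" not in token:
            continue  # bare flags are ignored in this sketch
        key, value = token.split("=", 1)
        key = ALIASES.get(key, key)
        args.setdefault(key, []).append(value)
    return args
```

Resolving the alias at the parse boundary means the rest of the code only ever sees `b2c.run`, which is what makes keeping the old name essentially free to maintain.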
<bentiss>
so now the only question is "how do we remember to update vm2c after a release"
<mupuf>
simple: with the CI patch I proposed, if someone removes a parameter vm2c requires, the job will fail
<bentiss>
yeah, I was writing roughly the same
<mupuf>
so... not much of an overhead IMO
Leopold___ has quit [Ping timeout: 480 seconds]
<bentiss>
still we should fail the job if it is using a deprecated API IMO
<bentiss>
instead of passing it and waiting for a breakage
djrscally has quit [Ping timeout: 480 seconds]
<bentiss>
OK, so let's just merge your MR now, and work on the details in a separate issue
<mupuf>
I guess we can add a deprecation notice for some of the parameters, and have vm2c fail if any deprecation notice is found... but that means as soon as I generate a new release, CI would fail until I update it
<mupuf>
update vm2c to stop using the deprecated options*
<mupuf>
or we could enable this only for release pipelines, which is probably better
<mupuf>
but yeah, I believe we should be able to merge what I proposed, and support the usecase you have
<bentiss>
yes, and nope. We can detect that the deprecation happens in the "args" case, but not in the "non-args" case
<mupuf>
what do you mean?
<bentiss>
basically, if the run against `vm2c tests amd64 (latest release)` shows a deprecation, we fail
<bentiss>
if testing against the just produced initramfs shows deprecations, we paper over
<bentiss>
another option is to have a special stage for tags, that creates an issue automatically if there is a deprecation
<mupuf>
what you proposed first would break CI right after a release. The latter would work, but I think it is just best to enforce "No deprecated options used" in every release pipeline.
<mupuf>
that may slow down the release process, but this is fine by me :)
<bentiss>
mupuf: "allowed to fail"?
<mupuf>
hmm, but then I need to monitor for failures... Well, that's another option, yeah :)
<mupuf>
let's just write an issue about this, as for now, this is all theoretical :)
<bentiss>
yep
Leopold_ has joined #freedesktop
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #freedesktop
miracolix has joined #freedesktop
djrscally has joined #freedesktop
Leopold_ has quit [Remote host closed the connection]
<eric_engestrom>
danvet: a new git release added a security feature that broke the mirroring
<eric_engestrom>
all the github mirrors are stuck to that date
<emersion>
maybe we should just ditch them?
<eric_engestrom>
the github mirrors? I kinda agree, especially given what github has been doing lately
<emersion>
i wonder what use-case they serve
<emersion>
if it's about providing a fancy UI to browse the code online, maybe gitlab.fdo mirrors would be better
<eric_engestrom>
I believe the argument was "everyone's on github, we should have a mirror there so people find our code", but this isn't something I really agree with anymore, and this "discoverability" has the downside of people sending PRs on github instead of reading the README or repo description to see where the real repo is
<alanc>
it was a nicer UI when all we had was cgit on fd.o, but much less so now that we have gitlab
<emersion>
yeah, i don't think gitlab.fdo is any less discoverable than github
<eric_engestrom>
agreed with both
<eric_engestrom>
I'm not strongly opposed to having github mirrors, but I would slightly prefer if we dropped them (but a clean delete, not leaving them in the current "no longer synced" state)
<eric_engestrom>
I don't feel strongly enough about it to really push for that though
<emersion>
yeah, i'd be in favor of deleting or archiving them as well
<emersion>
in any case, the current state is misleading
<__tim>
what's the problem with github mirroring? The gstreamer mirrors seem fine?
<eric_engestrom>
oh right, archiving is a thing, I forgot about that
<emersion>
wayland repos are stuck as well
<emersion>
daniels, thoughts? ^
<daniels>
I don't mind deleting them
<daniels>
we put them up there when we had everything on cgit as a reasonable way for people to fork up-to-date things from GitHub and be able to better discover what they were, since that's what people were doing anyway
<eric_engestrom>
__tim: is that mirror tied to kemper (cgit) pushes as well? that's where I found the "new git version refuses to push" issue
<daniels>
I don't think there's much point in doing that these days though
Leopold___ has quit [Ping timeout: 480 seconds]
<daniels>
eric_engestrom: there are three sets of mirrors
<daniels>
there's the mesa3d org, then there's the wayland-project org, then there's the freedesktop org which mirrors everything
<__tim>
ah no, the gstreamer github mirror is something github does by pulling from gitlab
<daniels>
*four sets :)
<__tim>
not tied to cgit or a post-push hook
<danvet>
so if the mirrors are dead, should we nuke them to avoid confusion?
<daniels>
I'd happily do that, yeah
<eric_engestrom>
I agree
<danvet>
I honestly don't care
<eric_engestrom>
should we ask the mailing list first?
<danvet>
just less confusion pls
<danvet>
tbh I think if we want a mirror, maybe on our gitlab instead ...
<eric_engestrom>
actually, that would be a bunch of mailing lists
<eric_engestrom>
(so no)
<daniels>
eric_engestrom: could you please email freedesktop@ + wayland-devel@ + dri-devel@ + mesa-dev@ announcing that we're going to nuke them?
<daniels>
the others can figure it out from freedesktop@
<eric_engestrom>
ok, doing that
<emersion>
thanks
<emersion>
where is the script doing the mirror thing?
<emersion>
(once we make a decision, would be nicer to remove that as well)
<daniels>
emersion: it's in the repos on kemper.fd.o
<emersion>
oh, so for wayland it was a gitlab -> cgit -> github chain ?
<emersion>
fun :P
<daniels>
yeah ...
<daniels>
so on kemper, they're all hooks/post-receive.d - mesa/wayland have their own to push to the org-specific ones, and then every other repo has something which just echoes a path into a socket created by github-mirror.service
<emersion>
cool, thanks for the info
<daniels>
alanc: I'm pretty sure that's due to libX11 having commits which GitHub considers to be invalid (because we started with a truly ancient version of git which sometimes wrote out commits that newer versions reject)
<daniels>
emersion: np
<eric_engestrom>
haven't checked the other ones, but mesa does the echo in a named pipe
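[Editor's note: the mechanism described here — a post-receive hook that just echoes its repo path into a pipe, with a long-running mirror service on the other end — can be sketched with a named pipe. The paths and helper names below are invented for illustration:]

```python
import os
import tempfile
import threading

def notify_mirror(fifo_path, repo_path):
    """What a post-receive hook would do: append the repo path to the pipe."""
    with open(fifo_path, "w") as fifo:
        fifo.write(repo_path + "\n")

def drain(fifo_path, out):
    """What the mirror service would do: read repo paths, then push each."""
    with open(fifo_path) as fifo:
        for line in fifo:
            out.append(line.strip())

# Tiny self-contained demo using a temporary FIFO.
fifo_path = os.path.join(tempfile.mkdtemp(), "github-mirror.fifo")
os.mkfifo(fifo_path)
seen = []
reader = threading.Thread(target=drain, args=(fifo_path, seen))
reader.start()
notify_mirror(fifo_path, "/srv/git/mesa/mesa.git")
reader.join()
```

The appeal of this design is that the hook stays trivial and fast (one write), while the slow network push happens asynchronously in the service.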
<alanc>
daniels: it's not just libX11 but a ton of Xorg projects, like when we converted to github we changed the naming scheme of the mirrors to drop the subdirectory
<emersion>
i am not sure we have mirroring turned on in gitlab
<eric_engestrom>
emersion: ah, didn't realise it could be turned off; could you check?
<emersion>
ah, it seems like it's fine
<eric_engestrom>
(you're an admin, right?)
<eric_engestrom>
ah, thanks :)
<eric_engestrom>
also, while writing this I remembered that in Mesa we use the github mirror to run the macOS CI, so we/I'll set up that push mirror and let's not delete it
<DavidHeidelberg[m]>
etag: "6c138244d440da10d8a2b5e2b77ce4c3-13" # doesn't look like MD5, is it because the file was uploaded a long time ago?
<DavidHeidelberg[m]>
but that doesn't make much sense, since migration to s3 was kinda recent
<DavidHeidelberg[m]>
rest of the traces passed just fine with MD5 tag checks
Leopold has quit []
Leopold has joined #freedesktop
miracolix has quit [Remote host closed the connection]
<DavidHeidelberg[m]>
there are ~5 traces with this type of e-tag
<bentiss>
DavidHeidelberg[m]: multi-part upload? with the number of parts at the end (maybe, I vaguely remember something)
<bentiss>
and in case of multi-part upload, the md5 is individual per chunk, the total doesn't mean a lot
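[Editor's note: multipart ETags of this shape are commonly computed as the MD5 of the concatenated per-part MD5 digests, suffixed with the part count — which is why "6c1382…-13" is not a plain MD5 of the file. A sketch; the 8 MiB default part size is an assumption, since S3 records whatever part size the uploader actually chose:]

```python
import hashlib

def multipart_etag(data, part_size=8 * 1024 * 1024):
    """Compute an S3-style ETag for data uploaded in part_size chunks."""
    digests = [hashlib.md5(data[i:i + part_size]).digest()
               for i in range(0, len(data), part_size)]
    if len(digests) == 1:
        return hashlib.md5(data).hexdigest()  # single part: plain MD5
    # multipart: MD5 over the concatenated binary part digests, plus "-N"
    return hashlib.md5(b"".join(digests)).hexdigest() + f"-{len(digests)}"
```

So verifying such a file requires knowing the original part size; the whole-file MD5 alone can never reproduce the "-13" ETag.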
<DavidHeidelberg[m]>
When downloading, the file is already whole, right? I cannot download each chunk standalone.
<bentiss>
yep
<bentiss>
see the code in ci-fairy s3cp, it's a bit clunky
<DavidHeidelberg[m]>
Any chance I can force-rewrite these values to real md5s?
<bentiss>
not that I can think of
<bentiss>
but there is a way to recreate it
<DavidHeidelberg[m]>
I can handle scenarios where the etag is empty, but not where it contains invalid md5 :( (and also these traces would be vulnerable to damage on nfs)
<DavidHeidelberg[m]>
Recreating sounds good to me
<DavidHeidelberg[m]>
It's around 5 files that I noticed currently
<DavidHeidelberg[m]>
+2 in restricted traces
djrscally has quit [Ping timeout: 480 seconds]
<bentiss>
I am pretty sure I managed to recreate it somehow. I thought it was in the pytest, but apparently not