ChanServ changed the topic of #freedesktop to: https://www.freedesktop.org infrastructure and online services || for questions about freedesktop.org projects, please see each project's contact || for discussions about specifications, please use https://gitlab.freedesktop.org/xdg or xdg@lists.freedesktop.org
co1umbarius has joined #freedesktop
columbarius has quit [Ping timeout: 480 seconds]
alyssa has joined #freedesktop
<alyssa>
daniels: bentiss: I'd like to plumb planet.fd.o into fediverse. Do you think it's sane to hack up Venus to make that happen? Or should I just write a cron job to scrape https://planet.freedesktop.org/rss20.xml once an hour and call that good enough?
<alyssa>
(The latter is certainly easier for me since I don't have to learn a foreign codebase, lol. Less efficient but maybe that's ok.)
<alyssa>
And if the latter -- is there a preferred fd.o server to run the script on? (Bearing in mind it needs our mastodon api key so shouldn't be world-readable to random xorg members)
<alyssa>
(I don't particularly care that sysadmins would be able to extract the key, there are far worse things you could do..)
<alyssa>
(After a few minutes thought really leaning towards the latter since scraping rss.xml seems downright reasonable. And it'd be an excuse to learn some rust or something.)
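[A minimal sketch of the scrape-and-toot idea alyssa floats above, in Python rather than rust for brevity: pull rss20.xml, then post each entry through Mastodon's status endpoint. The feedparser dependency, the floss.social base URL, and the MASTODON_TOKEN variable are assumptions, and dedup is deferred (it comes up later in the discussion).]

```python
import os

import feedparser  # assumption: feedparser is available for RSS parsing
import requests

FEED_URL = "https://planet.freedesktop.org/rss20.xml"
MASTO_BASE = "https://floss.social"        # hypothetical instance
TOKEN = os.environ["MASTODON_TOKEN"]       # API key kept out of the repo

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Title + author + link, the format settled on later in the channel.
    status = f"{entry.title} by {entry.get('author', 'unknown')}\n{entry.link}"
    resp = requests.post(
        f"{MASTO_BASE}/api/v1/statuses",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={"status": status},
        timeout=30,
    )
    resp.raise_for_status()
```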
<airlied>
I'd think scraping it is fine
<airlied>
I'd rather we don't hack the venus install, I barely maintain what's there as it is
<airlied>
though I mostly only look after kernelplanet.org
ximion has quit [Quit: Detached from the Matrix]
AbleBacon has quit [Read error: Connection reset by peer]
<bentiss>
daniels: if it needs to be a one shot, we could use a webhook on pipeline results on the planet fdo project, and run the script in the cluster itself, but reusing the gitlab pipeline is way easier IMO
<bentiss>
well, to be complete, we *can* run cronjobs in the k8s cluster, but it's a PITA to setup, so better not to :)
<daniels>
I would strongly prefer a solution which didn’t scrape and second-guess Venus’s output though; given that Venus is hideously unmaintained and needs love, and this will presumably be immediately ‘gifted’ to admins to maintain forever, having it depend on reverse-engineering Venus’s output would be bad
mvlad has joined #freedesktop
tanty has quit [Remote host closed the connection]
<bentiss>
BTW, gitlab security update pending, as well as new minor version. Doing that in a bit
damian has quit []
Haaninjo has joined #freedesktop
<bentiss>
migration in "only" 37 min, \o/
<bentiss>
welcome to gitlab 15.11.1 (last minor release before 16.0 FWIW)
<mupuf>
bentiss: \o/
todi has joined #freedesktop
MrCooper has quit [Ping timeout: 480 seconds]
ivyl has quit [Quit: end of flowers]
ivyl has joined #freedesktop
vkareh has joined #freedesktop
karolherbst_ is now known as karolherbst
<alyssa>
airlied: valid about not hacking venus, also would rather that
<alyssa>
emersion: oh, nice
nnm has joined #freedesktop
<DavidHeidelberg[m]>
bentiss: wooo!
<mupuf>
alyssa, emersion: Would the summary need to be abbreviated to fit in the 500 char limit?
<mupuf>
unless we want to run our own instance, without any limits?
<emersion>
i'm not sure including the summary is worth it here
<emersion>
the summary is usually just the first few sentences
<emersion>
i'm not sure how well that fits into a toot format
<mupuf>
Yeah
<mupuf>
Maybe the first paragraph could be used, but that's about it
<alyssa>
mupuf: ostensibly, we can raise the limit if we like, at least if we pay for hosting an instance instead of using floss.social
<alyssa>
(Hard NAK on adding "run a fedi instance" to the list of sysadmin roles. But paid hosting with our own domain was reasonably affordable IIRC.)
<alyssa>
that being said, I mostly envisioned just title + author + link to original
<emersion>
yeah, that sounds sufficient to me
<alyssa>
I want to promote people's source blogs, not just aggregate the content. So that should be enough, plus maybe a sentence of teaser or something
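[A sketch of that toot shape, with the teaser trimmed to the 500-character limit mupuf raised; the function name and separator choices are illustrative. Mastodon actually counts any URL as 23 characters, so plain len() here over-counts and errs on the safe side.]

```python
TOOT_LIMIT = 500  # stock Mastodon default; a paid/own instance could raise it

def format_toot(title: str, author: str, link: str, teaser: str = "") -> str:
    # Mastodon counts every URL as 23 characters toward the limit, so
    # len() on the full link is a conservative over-estimate.
    base = f"{title} by {author}\n{link}"
    if teaser:
        room = TOOT_LIMIT - len(base) - 2  # reserve the "\n\n" separator
        if room > 0:
            snippet = teaser if len(teaser) <= room else teaser[: room - 1] + "…"
            base += "\n\n" + snippet
    return base
```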
<daniels>
eh?
<daniels>
why are admins banned from running a fediverse instance?
<daniels>
running a thing that serves HTTP isn't the hard part, it's the moderation and community gardening and whatever that is
<alyssa>
daniels: oh I mean if the admins want to, I won't complain
<daniels>
dunno, you'd have to ask him
<alyssa>
I mean, hard NAK on board voting that "we're going to tell bentiss that congrats he now gets to also host a fedi instance on top of his usual CI firefighting responsibilities"
<daniels>
tbf I don't think you get to veto a consensus vote, but yeah I see where you're going
<alyssa>
i mean
<alyssa>
i would assume anholt at least would agree that redirecting admin time from gitlab/CI to social media is, suboptimal for our priorities :p
* bentiss
stays hidden while things are flying
<alyssa>
bentiss: hi you
<bentiss>
nope, not here :)
<alyssa>
:D
<alyssa>
anyway the extent of the discussion about instance hosting was "we could host one i guess" "that sounds like work how about we use floss.social" "cool and good yes ok"
<alyssa>
I don't really have strong feelings other than "don't further burden the sysadmins when we can just, not"
<mupuf>
+1 for not having our own instance
<mupuf>
maybe in 5 years if it becomes clear mastodon will be the IRC of social media... but I wouldn't hold my breath!
<daniels>
tbf fixing the root cause of being terrified of our infrastructure collapsing if one person burns out and walks away would also solve this problem
<daniels>
but hey, not my circus
<emersion>
i don't really see a lot of value in hosting our instance
<emersion>
hosting mastodon does take a lot of time
<emersion>
stuff breaks, upgrades need to be done, etc
<alyssa>
daniels: extremely valid.
<alyssa>
but yeah, I don't see value in us hosting an instance on fd.o infrastructure.
<alyssa>
(I do see value in paying a nominal amount for a hosted instance with the x.org domain. But that's a different discussion.)
<alyssa>
(And I assume just from an economies of scale perspective, the dedicated hosting companies are likely going to be cheaper since they can do bulk upgrades for all instances and such.)
<mupuf>
Sure, but if we get a company to host, we'll still need to moderate the instance
<mupuf>
otherwise...
* mupuf
looks at the spam issue in gitlab
<alyssa>
yeah, this is true.
<emersion>
i think it also depends what we want the instance to be
<alyssa>
locking registrations (since it would just be official board run accounts, and maybe member accounts if there's interest in that)
<alyssa>
would help half of the problem
<emersion>
yeah
<alyssa>
doesn't help with people replying with "DOWNLOAD FREE MOVIES!!!" spam, but, meh.
<emersion>
anyway, it's no big deal, since mastodon makes it very easy to migrate accounts
<alyssa>
:+1:
<alyssa>
--
<alyssa>
Anyway, back to the original question about "least gross way to plumb planet into fedi"
<alyssa>
To see if I can summarize
<alyssa>
airlied suggests scraping and not hacking venus
<alyssa>
daniels says no cronjob, either one-shot or long-running
<alyssa>
("Alyssa, you're younger than the rest of us")
<mupuf>
Hehe
<emersion>
i feel so old when 40 YO devs explain to me how k8s works
<mupuf>
Thx for the link, Arek!
<zmike>
Part-of: <f{merge_request.web_url}>
<zmike>
🤔
<alyssa>
cron jobs but with more yaml
<alyssa>
seems legit, I can work with this
<alyssa>
thanks all! :)
<daniels>
zmike: was wondering if anyone else would notice that
<zmike>
you know I have an eye for detail
<daniels>
literally just that one MR
<alyssa>
daniels: To clarify, do you object to parsing the rss20.xml? (not scraping the html)
<daniels>
alyssa: do what you like
<alyssa>
was hoping for admin approval *sweats*
<alyssa>
bentiss: ^
<bentiss>
alyssa: do what you like :)
* alyssa
gets water to avoid dehydration from the massive production of sweat
<alyssa>
("Gross")
<bentiss>
alyssa: IMO using a new job in the planet project has the benefits of: you're in charge, you get to look at the logs, and you don't need us for anything :)
<alyssa>
bentiss: \o/
<alyssa>
I'm sold on the new job in the CI pipeline part, just a question of where the input comes from
<alyssa>
and with 1 vote for "don't hack venus" and 1 vote for "don't scrape venus output", y'know.
* emersion
gives approval to alyssa
<alyssa>
thanks simon
<alyssa>
:p
<bentiss>
if you add a new job that depends on the pages one, there is a high chance the output of Venus will be available directly as if you were in the Venus job
<emersion>
RSS is standard, so i don't count it as hacking venus
<emersion>
IOW, if we replace venus with something else at some point, that something else likely supports RSS too, and your masto bridge still works with minimal effort to adapt it to the new thing
<bentiss>
basically planet has a pages job that exports an artifact that is used as a gitlab page. If you add a job on top that depends on it, gitlab should pull the artifacts from the previous job, and you just hack on your RSS file locally
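[A hedged sketch of the job layout bentiss describes, assuming the planet repo's existing pages job; the "toot" job name, stage, and script path are hypothetical, and only the pages job and its public/ artifact come from the discussion.]

```yaml
toot:
  stage: deploy
  needs:
    - job: pages
      artifacts: true      # pull the generated site, rss20.xml included
  script:
    # GitLab Pages publishes from public/, so the fresh feed is local
    - ./post_to_mastodon.py public/rss20.xml
```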
<alyssa>
ok, neat. and then the timing is synced, which is nice.
<bentiss>
yeah
<alyssa>
(instead of getting a cascading rube goldberg effect)
<alyssa>
one potential issue I see is knowing what the most recent post was to avoid double posting
<alyssa>
ostensibly that wants a sideband data store
<alyssa>
but grabbing the list of mastodon posts and checking against that might work just as well
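[That dedup idea might look something like this, fetching the account's recent statuses over the Mastodon REST API; the account_id plumbing and the 40-status lookback window are arbitrary illustrative choices.]

```python
import requests

def already_posted(link: str, base: str, account_id: str, token: str) -> bool:
    """Check whether a feed entry's link appears in the account's recent toots."""
    resp = requests.get(
        f"{base}/api/v1/accounts/{account_id}/statuses",
        headers={"Authorization": f"Bearer {token}"},
        params={"limit": 40},  # arbitrary lookback window
        timeout=30,
    )
    resp.raise_for_status()
    # Each status's HTML content keeps the full href that was tooted earlier.
    return any(link in status["content"] for status in resp.json())
```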
<bentiss>
alyssa: or s3.fd.o?
<alyssa>
slightly more scraping involved but, scraping the output of the script itself doesn't seem so bad :D
<alyssa>
(although weirdly recursive and not enough yaml)
<bentiss>
alyssa: or even simpler, if you need a side storage, use a git project
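[If the git-project route were taken, the side storage could be as small as one file recording the last-tooted entry; everything here (paths, remote, commit message) is hypothetical.]

```python
import pathlib
import subprocess

STATE = pathlib.Path("state/last_posted.txt")  # hypothetical state-repo checkout

def load_last_posted() -> str:
    return STATE.read_text().strip() if STATE.exists() else ""

def save_last_posted(guid: str) -> None:
    STATE.write_text(guid + "\n")
    # Assumes the CI job has push credentials for the state repo; `git
    # commit` fails on a no-op, so only call this when the guid changed.
    for cmd in (["add", "last_posted.txt"],
                ["commit", "-m", "update last posted"],
                ["push"]):
        subprocess.run(["git", "-C", "state", *cmd], check=True)
```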
<emersion>
eh
<daniels>
venus does already have a per-feed cache which tells you what's new and what isn't
<emersion>
👌
<emersion>
but difficult to extract eh
<emersion>
i mean without hacking venus
<daniels>
right, but I mean we do maintain venus, so
<__tim>
no Spam label on freedesktop/freedesktop?
jarthur has joined #freedesktop
<alyssa>
..did I miss a gitlab update
<mupuf>
Alyssa: gitlab has a cache feature, to keep data between jobs
<mupuf>
No need for an external git tree
<daniels>
mupuf: you can't rely on gitlab cache tho
<bentiss>
mupuf: isn't the cache per runner?
<mupuf>
Why not?
<daniels>
indeed
<daniels>
I can't remember if it's per-runner-per-slot, but it's certainly per-runner
<mupuf>
There is a way to share it, using S3
<bentiss>
mupuf: not adding S3 credentials statically on the runners
<Venemo>
the CI pipeline is all-green indeed, but the MR isn't merged
<Venemo>
I asked here about the same bug a few weeks ago and I was told that this should be fixed
<Venemo>
but apparently it isn't
<Kayden>
yeah looks like marge gave up on it 22 minutes ago and 6 minutes ago the pipeline finished
<Venemo>
this is a waste of CI resources IMO
<daniels>
no disagreement here
<daniels>
going to guess that the latest-finishing job in that pipeline was stoney
<Venemo>
I don't know how to check that
<daniels>
if you click on the pipeline at the top it shows you the pipeline; if you click on the jobs tab it shows you the jobs
<daniels>
and from a brief look, a618 taking 43m to complete is the obvious candidate
<daniels>
anyway that is being fixed
sima has quit [Ping timeout: 480 seconds]
<Venemo>
I don't see times on the pipeline page
<Venemo>
oh I see it's on a different tab
<daniels>
if you're curious, the stoney+a618 Chromebooks have spectacularly unreliable UART, so we've been splitting the job into one job which runs on UART but does nothing other than set up network + rootfs + etc, then another which SSHes into the machine once it's set up and actually runs the tests + monitors it
<anholt_>
daniels: hope you're hitting tgl too at the same time.
<daniels>
anholt_: yeah, all of ours tbf
<anholt_>
cool
<anholt_>
how many 660s did we have?
<daniels>
we've got 9 running
<robclark>
hmm, now "venus" is not just virtgpu vk driver and qcom video enc/dec which already causes enough confusion.. but also $something_else? -ETOOMANYVENUS
<daniels>
robclark: oh?
* robclark
was reading scrollback
<robclark>
at least I assume we are not using vulkan for something RSS related?
<Sachiel>
server side rendering of the RSS feed
<Venemo>
daniels: wouldn't it be better to use a proper GFX8 desktop GPU instead of that chromebook?
<robclark>
ahh..
<Sachiel>
uh.. I hope you didn't think I was serious
<robclark>
Oh, you meant server side rendering w/ gl (rather than just turning into html)..
<daniels>
robclark: oh yeah, venus is the thing that runs planet.fd.o
<daniels>
Venemo: sure, if someone wants to supply it
<robclark>
ahh
<daniels>
Venemo: *supply and maintain
<Venemo>
should be easier to maintain than what you have now
<daniels>
it's extremely not ...
<daniels>
booting x86 desktop systems is very different to booting systems where you e.g. have full control over the bootloader
<daniels>
so Valve went and wrote b2c from scratch to solve that usecase, but it doesn't work for other stuff we (Collabora) do with non-Mesa CI, and it doesn't work for a lot of the boards we have in Mesa CI right now either, so we'd either have to throw away our existing system and go make b2c work everywhere, or maintain b2c in parallel to LAVA
<daniels>
and starting from scratch isn't super appealing with ~300 devices
DodoGTA has quit [Quit: DodoGTA]
DodoGTA has joined #freedesktop
<Venemo>
interesting. I thought mupuf made sure that our solution is reusable
<daniels>
well, there's nothing conceptually preventing b2c from working everywhere, but someone would have to go do all that work, and it's ... not a small amount
<daniels>
plus I get the sense that it wouldn't necessarily scale down super well to devices with limited I/O bandwidth
DodoGTA has quit [Quit: DodoGTA]
DodoGTA has joined #freedesktop
<airlied>
daniels: just put desktop gpus into arm boards, problem solved, just wire me my consultant's fee
<anholt_>
daniels: also the "assume local storage" thing is kind of a big deal.
<daniels>
yeah, I mean competent NVMe would cost more than most of the boards, but the main thing is that half the time it doesn't make much difference even if you do eat the cost, since the I/O bandwidth is dire anyway
<daniels>
so just setting the container up would I think hurt a fair bit, and you can't do it in tmpfs because there's not enough RAM
Kayden has quit [Quit: leave office]
alanc has quit [Remote host closed the connection]