ChanServ changed the topic of #asahi-dev to: Asahi Linux: porting Linux to Apple Silicon macs | Non-development talk: #asahi | General development | GitHub: https://alx.sh/g | Wiki: https://alx.sh/w | Logs: https://alx.sh/l/asahi-dev
bps has joined #asahi-dev
<chadmed> hm literally just having DCP out of shutdown slaughters idle battery runtime, tty or desktop doesn't matter
<chadmed> 12+ hours on t6000 down to like 6-7 hours
bps has quit [Ping timeout: 480 seconds]
<chadmed> only thing i can think of is that 1hz mode that comes with VRR, macos very aggressively uses that
<chadmed> you can sometimes notice it even with things that should be interactive; the apple music app for example has animated "now playing" graphics that give away the 1hz mode unless you're wiggling the cursor
compassion has quit [Quit: lounge quit]
cylm_ has quit [Ping timeout: 480 seconds]
compassion has joined #asahi-dev
gabuscus has quit []
pthariensflame has joined #asahi-dev
gabuscus has joined #asahi-dev
abd has quit [Ping timeout: 480 seconds]
pthariensflame has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
tobhe_ has joined #asahi-dev
tobhe has quit [Ping timeout: 480 seconds]
Z750 has quit [Quit: bye]
Z750 has joined #asahi-dev
chadmed has quit [Remote host closed the connection]
chadmed has joined #asahi-dev
hanss has joined #asahi-dev
hanss has quit [Quit: WeeChat 3.8]
JSkilton has joined #asahi-dev
JSkilton was banned on #asahi-dev by ChanServ [*!*@149.19.169.187]
JSkilton was kicked from #asahi-dev by ChanServ [You are not permitted on this channel]
JSkilton has quit [Remote host closed the connection]
nsklaus has joined #asahi-dev
Retr0id has joined #asahi-dev
elvishjerricco has quit [Read error: No route to host]
knedlik has joined #asahi-dev
elvishjerricco has joined #asahi-dev
<knedlik> The assumption of me reading the logs was correct indeed
elvishjerricco has quit [Read error: No route to host]
elvishjerricco has joined #asahi-dev
<knedlik> To be honest, the part giving me a hard time is mainly the metadata of the plugin
elvishjerricco has quit [Ping timeout: 480 seconds]
elvishjerricco has joined #asahi-dev
<marcan> jannau: nice!
<knedlik> I'm still failing to grasp how LV2 works... LADSPA is no different. Is there perhaps something else I could help with that's a bit easier for your regular software dev?
<marcan> honestly, of all the things we need to do, "write an LV2 plugin" is probably the most "regular software dev" task we have...
<marcan> everything else is going to be drivers and systems stuff
<knedlik> That was very awkward wording from me
<knedlik> I'm perfectly capable (at least it has seemed like it so far) of making low-level stuff - I find it fun even. The problem with the LV2 plugin is the very niche SDK that's just hard for me to grasp
<marcan> https://github.com/RustAudio/rust-lv2 looks pretty simple fwiw
<knedlik> Oh that looks definitely easier than the original C library
<marcan> A modern language with modern macro features makes life easier, who'dve thought :-)
<knedlik> Rust has some uniqueness of its own, but that's better than taking hours to finally compile without errors only to segfault
<chadmed> Knedlik: if it helps, we dont really care about a gui for the plugin at this stage since i can manipulate its ports/controls directly
<knedlik> Well I'm still struggling with the metadata lol
<chadmed> looks like that rust lib has a macro for that
<knedlik> Then their guide translation is wrong :-)
<knedlik> Seems the turtle metadata still has to be there
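For orientation, a minimal rust-lv2 plugin skeleton along the lines of the amp example in the rust-lv2 book looks roughly like this; the plugin URI, struct names, and port layout below are illustrative, and as noted above a companion Turtle (.ttl) file still has to declare the plugin and its ports:

    use lv2::prelude::*;

    // Ports are declared both here and in the plugin's .ttl metadata.
    #[derive(PortCollection)]
    struct Ports {
        gain: InputPort<Control>, // control value in dB
        input: InputPort<Audio>,
        output: OutputPort<Audio>,
    }

    // Placeholder URI; it must match the one used in the .ttl file.
    #[uri("urn:example:bass-harmonics")]
    struct BassHarmonics;

    impl Plugin for BassHarmonics {
        type Ports = Ports;
        type InitFeatures = ();
        type AudioFeatures = ();

        fn new(_info: &PluginInfo, _features: &mut ()) -> Option<Self> {
            Some(Self)
        }

        // Runs once per audio block; this skeleton only applies the gain control.
        fn run(&mut self, ports: &mut Ports, _features: &mut (), _sample_count: u32) {
            let coef = 10.0_f32.powf(*ports.gain * 0.05); // dB -> linear amplitude
            for (out, sample) in Iterator::zip(ports.output.iter_mut(), ports.input.iter()) {
                *out = *sample * coef;
            }
        }
    }

    lv2_descriptors!(BassHarmonics);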
<knedlik> What should the gain and freq max/min threshold be? 0 to 1 or something else?
knedlik has quit [Remote host closed the connection]
<chadmed> hm
<chadmed> i dont think we'd ever want more than +24dB on the gain
<chadmed> since we're interested only in boosting, the "min" should be 0 dB, at which level the plugin will not alter the amplitude of any part of its input signal
<chadmed> freq wise i think a sensible max is probably 280
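(Side note: the authoritative minimum/maximum values live in the .ttl port metadata rather than in the Rust code, but clamping defensively in the plugin is cheap. A sketch using the 0 dB floor and the +24 dB / ~280 Hz ceilings from the discussion above; the function name is made up:)

    // Illustrative clamp of the two control ports to the ranges discussed above.
    // 0 dB means "leave the signal untouched"; +24 dB is the agreed maximum boost;
    // ~280 Hz is the suggested ceiling for the frequency threshold control.
    fn clamp_controls(gain_db: f32, threshold_hz: f32) -> (f32, f32) {
        (gain_db.clamp(0.0, 24.0), threshold_hz.min(280.0))
    }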
___nick___ has joined #asahi-dev
knedlik has joined #asahi-dev
___nick___ has quit []
<knedlik> Got it. Do I understand the algo right that we leave the bass be and then boost the harmonics?
___nick___ has joined #asahi-dev
<chadmed> yep precisely, we dont want to touch the fundamentals
<knedlik> Sounds good
<knedlik> How do I test it after I'm done though?
<knedlik> Also, the stuff we boost is whatever meets the condition frequency <= frequencyThreshold, and we boost it by the gain input?
<chadmed> yep thats correct
<chadmed> i recommend using Carla to test it, just point it to the directory you've built the plugin in and it should pick it up as an LV2 plugin
<knedlik> Just works, I like it
<jannau> marcan: aided by added verification in the dcp firmware with syslog error prints. after I got the size right it was complaining about a surface with size 0x0 at 1920x1080
<j`ey> chadmed: is pipewire going to load this plugin eventually or what?
<jannau> that check wasn't there in 12.x
<chadmed> j`ey: yup, itll live between the IRs and the real output
<chadmed> actually it cant live there
<chadmed> yeah no its fine living before the IRs thats cool
<knedlik> chadmed: You wrote in the Yaks page that the bass tones need to be cut out - if the fundamental was say 100hz, which frequencies do we alter and in what way?
<chadmed> oh no you dont need to worry about that in the plugin, we do that in hardware
<chadmed> aiui the effect does not rely on the fundamental being subtracted, we just need to do it in our specific case because the fundamentals overload the tiny little woofers
<chadmed> so bc we want this to be a general effect and end up as part of LSP we dont want to do machine-specific stuff
<knedlik> Ah, so I quite literally boost the frequencies under the threshold?
<chadmed> no no leave those alone, we just want to boost the harmonics of the frequencies under the threshold
<chadmed> dont touch the fundamentals below the threshold at all
<chadmed> theres a research paper that explains it a little bit better than that matlab page
<knedlik> So let's return to the 100hz example - which frequencies do I boost if the threshold was for example 230hz?
<chadmed> harmonics of everything below 230hz
<knedlik> Harmonics of for example 100 are 200, 300, 400?
<chadmed> silly question, do you have ipv6 access?
<j`ey> chadmed: >_<
<chadmed> :P
<knedlik> I'm unfamiliar with ipv6 tbh, but I don't think my network has ipv6
<chadmed> okay uhhh let me sort something out
<chadmed> i have a couple of papers here that you might benefit from skimming
<chadmed> but i cant link them because theyre paywalled by ieee
<knedlik> Oof
<chadmed> its ok i have the pdfs, but my webzone is ipv6 only for the time being because i have many better things to spend my day on than being on hold with my isp
<marcan> Knedlik: you don't boost any frequencies, what you do is apply a nonlinearity to the filtered bass signal and then bandpass only the high band out
<marcan> there is no frequency-dependent processing other than filtering
<knedlik> Okay that's quite technical terms I'm not familiar with
<marcan> what you are doing is generating harmonics, removing the fundamental, and adding that to the original signal
<knedlik> Aaahhh
<chadmed> ^
<chadmed> yeah, not "boosting" sorry
<marcan> and that "full wave integrator" from the original link looks a bit dodgy to me, you will want to experiment with different algorithms for the nonlinearity
<marcan> the property you want is that it generates all harmonics (not just odd or even)
<marcan> since that will psychoacoustically make us perceive the original frequency
<chadmed> also keep in mind that we dont want _all_ the harmonics, because this can ruin the effect
<marcan> we might want to apply some basic EQ to the output yes, e.g. to shape away the very high harmonics
<marcan> there are many algorithms for creating harmonics, "saturation" is one keyword to look for
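One possible shape for that nonlinearity, sketched in Rust: plain tanh-style soft clipping is symmetric and only produces odd harmonics, so a rectified term is blended in here to get even harmonics too. This is just one saturation curve to experiment with, not the algorithm anyone has settled on; the names and the asym blend parameter are made up:

    /// One candidate saturation curve for harmonic generation (illustrative only).
    /// `drive` pushes the signal harder into the nonlinearity; `asym` blends in a
    /// rectified term so that even as well as odd harmonics are produced.
    fn saturate(x: f32, drive: f32, asym: f32) -> f32 {
        let odd = (x * drive).tanh();        // symmetric soft clip -> odd harmonics only
        let even = (x * drive).abs().tanh(); // rectified term -> even harmonics (plus DC)
        // The even-harmonic branch adds a DC offset, so a real implementation would
        // follow this with a DC-blocking high-pass filter.
        odd + asym * even
    }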
<knedlik> So what's the configurable gain for again?
<chadmed> the gain of the entire low pass chain as mixed back into the stereo signal
<marcan> depending on the harmonic generation algorithm you use, that will probably also have knobs
<marcan> e.g. that envelope thing described in that pdf
<marcan> you'll want to play around with those
<chadmed> the front end of the plugin should split the incoming stereo signal into high and low, with the high passing straight through to the back end
<chadmed> the "gain" knob will be how loud the processed low-pass signal is mixed back in
<chadmed> with 0 dB obviously being nothing but whatever your chosen algorithm is doing
<knedlik> Okay, I guess I'll go through the pdf and try to grasp the concepts
<knedlik> Low-pass signal? As in the stuff below the freq threshold?
<marcan> also, be careful with the band split. if you are adding the signals back later, you need to make sure to avoid phase cancellation.
<marcan> if you are just splitting once, then I think just calculating a highpass and then the lowpass as original - highpass should be fine (for obvious reasons)
<marcan> but if you try a discrete highpass/lowpass with typical minimum-phase algorithms, you will run into nasty cancellation at the crossover IIRC
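A sketch of the complementary split in Rust: a single one-pole low-pass, with the high band derived as input minus the low band, so the two always sum back to the original and can't cancel at the crossover (the mirror image of the highpass-first approach described above). The coefficient formula is the usual textbook one, nothing project-specific:

    use std::f32::consts::TAU;

    /// One-pole low-pass; the complementary high band is input minus low band,
    /// so low + high reconstructs the input exactly (no crossover cancellation).
    struct OnePoleSplit {
        a: f32, // smoothing coefficient derived from cutoff and sample rate
        z: f32, // filter state (previous low-pass output)
    }

    impl OnePoleSplit {
        fn new(cutoff_hz: f32, sample_rate: f32) -> Self {
            // Standard one-pole coefficient: a = 1 - e^(-2*pi*fc/fs)
            let a = 1.0 - (-TAU * cutoff_hz / sample_rate).exp();
            Self { a, z: 0.0 }
        }

        /// Returns (low, high) for one input sample.
        fn process(&mut self, x: f32) -> (f32, f32) {
            self.z += self.a * (x - self.z); // the "one line" low-pass
            (self.z, x - self.z)
        }
    }

    // Rough per-sample use: run the low band through the harmonic generator, then
    // mix it back in at the gain setting, e.g. out = x + gain_linear * processed_low.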
<knedlik> Okay these terms are going straight over my head
<marcan> audio engineering yay :)
<knedlik> The Mathworks link would work also?
<chadmed> yeah, not sure how "good" it is since i dont have matlab to test it
<chadmed> but its a basic implementation and should do what it says on the tin
<chadmed> there are a ton of tricks and dark evil magic you can do to make it sound much better than a basic implementation but we're not bothering with that
<chadmed> if youre interested, you can google timbre matching ;)
<chadmed> given how much cpu time coreaudio can use i think apple might be doing it, but as i have mentioned before apple are trying way too hard
<marcan> ultimately the right answer to any audio problem is "whatever sounds good"
<marcan> and messing around and experimenting is critical to achieving that ;)
<chadmed> its why i dont really bother with fastidious and meticulous measurement beyond collecting a baseline
<chadmed> i could spend hours crunching numbers and plugging TSPs into various formulae to get that frequency response curve absolutely and totally flat
<chadmed> but thats not fun and not even materially better than just doing it by ear if you know what to listen for
<knedlik> The tweaking here would be the gain and freq, right?
<chadmed> also i dont want to sign TI's NDA to get the simulation software for the tas codecs
<chadmed> those would be the basic knobs that we absolutely need, yeah
<chadmed> you can treat the algorithm you build as a black box if youd like, but exposing any tunable parameters to the plugin host would be useful too
<chadmed> easier to get it to sound right in carla than by continually recompiling it
<chadmed> and to upstream it those knobs would have to be present anyway
<knedlik> Sounds good. Any idea if lv2 already has implementations for stuff like the crossover filter?
<chadmed> plugins dont really talk to each other in that way since thats the job of the plugin host
<chadmed> your plugin is a big 4U box that lives in a rack
<chadmed> and on the back of it you have 4 XLR plugs, L and R in and out
<knedlik> Wdym? I'm basically asking "do I have to implement my own?"
<chadmed> and im giving a long winded "yes" :P
<chadmed> you cant really talk to other plugins outside of the context of the plugin host
<chadmed> as i said conceptually consider your plugin a skeuomorph of a rack mounted effect
<marcan> OTOH it is perfectly fine to prototype all this by chaining together a pile of existing plugins
<marcan> e.g. some LSP EQ in linear phase mode to split the bands, simple mixer to mono, calf saturator or something fancier for harmonics, some gain and mix in again should be doable without writing any code
<marcan> TBH I bet FabFilter Saturn all on its own can do a decent bass enhancer like this with the right settings, since it has multiband processing built in
<marcan> (not free but it might be good as a reference)
<marcan> chadmed: what's the low cut of the speakers roughly?
<knedlik> But this chaining wouldn't work as the final solution, right?
<marcan> right, it's not ideal
<marcan> but like a basic one-pole low pass filter is one line of code
<chadmed> anything below ~280 hz is pretty dead without all the boosting im doing in the IRs
<chadmed> Knedlik: the problem with chaining, the fact that LSP doesn't have a harmonics generator notwithstanding, is that the plugin host (pipewire) has a fixed amount of housekeeping to do with each plugin
<chadmed> and we need this to be as light and fast as possible for obvious reasons
<chadmed> we cant have people burning half a cpu core on DSP just trying to play a cruddy mp3 or youtube video
<marcan> anyway, I need to get some dinner
<marcan> bbl
<knedlik> I'm not quite sure how to implement the filter
roxfan2 has joined #asahi-dev
roxfan has quit [Ping timeout: 480 seconds]
<knedlik> I'm still struggling to be honest - I can't even begin to grasp the high-level concepts
<chadmed> is it the maths?
<knedlik> I guess yeah
<chadmed> fair enough, signal processing is really hard to wrap your head around at a fundamental level
<knedlik> I think I would be better off doing some of that systems stuff tbh
<chadmed> thats cool
<chadmed> you can add VRR to DCP if youre feeling adventurous ;)
<knedlik> I'll still need to look that up, but sounds a lot better than biquad transfer coefficients haha
<chadmed> DCP is our display controller, and VRR is variable refresh rate
<knedlik> Ah
<chadmed> the hardware supports it and i _think_ we know mostly what needs to be added to the driver to make it work
<chadmed> its just that no ones looked at it yet
<chadmed> but its going to be important for power draw on the laptops
<chadmed> its also necessary for high refresh rates full stop
<knedlik> I can give that a try, what file or folder should I look in? Also as you say you have an idea, is that written down somewhere?
<chadmed> jannau might have a better idea of what exactly needs to be done, but the gist of it aiui is that the hardware timestamps frames (expected) but we dont handle it in the driver and dont tell the drm subsystem that we can do VRR
<chadmed> theres probably more that needs to happen though
<jannau> chadmed: have you checked how much of the "dcp" power use is the backlight at minimal brightness? use the gpio for the backlight to turn it off. I hope it's minimal though
<chadmed> the backlight power draw is negligible, the test i ran with the partial boot (dcp shut down, screen on) confirmed that
<chadmed> its not measurable over general system noise at the lowest brightness
<jannau> Knedlik: you have to look at drivers/gpu/drm/apple, especially the iomfb* files. that is the part which communicates with the display coprocessor
<knedlik> Would this be one of the things I need a 2nd puter for? No prob if yes, going home in a few anyway
<jannau> you want to look at struct dcp_swap, that one has 3 timestamps
<jannau> yes, you would need a second computer for it. I have an idea what the timestamps are, but it will certainly be helpful to check what macOS does, using our hypervisor and the dcp tracing in m1n1: proxyclient/hv/trace_dcp.py
<jannau> Knedlik: do you have by any chance a 13" macbook pro? if yes the touchbar daemon might be a good project
<knedlik> I do actually
<knedlik> I'd take that, any general overview of what I should do?
<jannau> great. I think that would also be a good first step for working on the drm driver later
<jannau> ChaosPrincess has implemented the HW support needed for the touchbar
<ChaosPrincess> Knedlik: grab asahi-wip, merge those two, https://github.com/AsahiLinux/linux/pull/102 https://github.com/AsahiLinux/linux/pull/137 - this will turn the touchbar into the second screen. write something that grabs the touchscreen digitizer and the screen, show f-keys, process touches and send them as keyboard events
<jannau> that is a drm driver for the display and touchscreen driver for the touch input
<jannau> if you're comfortable with or motivated to write rust, you want to use https://github.com/Smithay/drm-rs for the display
<knedlik> Which device should I do the touchbar stuff on? MacOS, Asahi, or the second PC?
<ChaosPrincess> asahi
<jannau> and https://crates.io/crates/input for touchscreen events and generated key events
<jannau> Knedlik: if you have a m2 based macbook pro you also need to update m1n1 for the touchscreen
<knedlik> I have an M1
<jannau> ok, then you need just the kernel with those two pull requests merged
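For the display half, a rough drm-rs starting point might look like the following; the device path is a placeholder (the touchbar appears as its own DRM device once those PRs are merged), exact method signatures differ a little between drm-rs releases, and the touch side would be handled separately with the input crate mentioned above:

    use drm::control::Device as ControlDevice;
    use drm::Device;
    use std::fs::{File, OpenOptions};
    use std::os::unix::io::{AsFd, BorrowedFd};

    // drm-rs expects its Device traits to be implemented on something that owns
    // the file descriptor of a DRM node, so wrap a plain File.
    struct Card(File);

    impl AsFd for Card {
        fn as_fd(&self) -> BorrowedFd<'_> {
            self.0.as_fd()
        }
    }
    impl Device for Card {}
    impl ControlDevice for Card {}

    fn main() {
        // Placeholder path: enumerate /dev/dri/card* and pick the touchbar's node.
        let file = OpenOptions::new()
            .read(true)
            .write(true)
            .open("/dev/dri/card1")
            .expect("failed to open DRM node");
        let card = Card(file);

        // List connectors as a sanity check before attempting any mode setting.
        let res = card.resource_handles().expect("resource_handles failed");
        for &conn in res.connectors() {
            let info = card
                .get_connector(conn, false)
                .expect("get_connector failed");
            println!("{:?}: {:?}", info.interface(), info.state());
        }
    }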
knedlik has quit [Remote host closed the connection]
stickytoffee has joined #asahi-dev
kesslerdupont has joined #asahi-dev
<kesslerdupont> knedlik, I was reading the logs and saw your messages. I can try and help you with the audio stuff if you still haven't figured it out
kesslerdupont has quit [Quit: Lost terminal]
kesslerdupont has joined #asahi-dev
bcrumb has joined #asahi-dev
bcrumb has quit []
bcrumb has joined #asahi-dev
bcrumb has quit []
kesslerdupont has quit [Quit: leaving]
knedlik has joined #asahi-dev
<knedlik> I'd rather leave the audio to someone who knows what they're doing, kessler
<knedlik> Back to the touchbar, how would I get the drivers in the PRs going on my machine?
<ChaosPrincess> clone the kernel repo, checkout asahi-wip and then merge the branches i linked
<knedlik> I meant like how do I ensure the drivers are running so I can test my shenanigans
<jannau> easiest would probably be to add the pull requests as patches to the linux-asahi PKGBUILD
<ChaosPrincess> for screen - start kde with wayland and you will see it
<j`ey> Knedlik: have you already got asahi running?
<jannau> add the patches to the source array, run updpkgsums and makepkg
<knedlik> Yep, currently booted in Wayland Plasma
<knedlik> jannau: I'm unsure how to do that, I never really worked with PKGBUILDs more than running them
<knedlik> So far it seems like it's automatically finding stuff?
<jannau> the pkgbuild has support for patches as long as they are named *.patch
<knedlik> Do you have an example?
<jannau> I'm not sure how well the touchbar PRs rebase against the asahi-6.2-11 kernel, so cloning the asahi kernel tree and the two branches from https://github.com/WhatAmISupposedToPutHere/linux would probably be a good first step
<knedlik> The default branch or the one in the latest PR?
<ChaosPrincess> Knedlik: git clone https://github.com/AsahiLinux/linux.git; cd linux; git checkout asahi-wip; git remote add what https://github.com/WhatAmISupposedToPutHere/linux; git fetch what; git merge what/adp what/z2-touchscreen
<knedlik> I'll try that, thanks so much
<jannau> that makes building the PKGBUILD more complicated
<ChaosPrincess> sorry, no idea how arch people do it, i dont run arch :P
<jannau> so I would try to rebase the PRs on asahi-6.2-11
<jannau> Knedlik: https://paste.debian.net/1278203/ with touchbar_display.patch and touchbar_touchscreen.patch being the changes from the pull requests
<knedlik> I'm not sure how to create the patches tbh
<ChaosPrincess> git diff from to >sth.patch
<knedlik> Ah, and the from would be Asahi wip branch and to being the PR?
<ChaosPrincess> yes
<jannau> https://github.com/AsahiLinux/linux/pull/102.patch https://github.com/AsahiLinux/linux/pull/137.patch but I'm not sure if those apply against asahi-6.2-11 used in the PKGBUILD
<knedlik> I'm not sure what you're going for? The .patch links are invalid, or empty at the very least
<knedlik> Huh
<knedlik> Curl doesn't work, mb
<jannau> you have to follow redirects with -L
<knedlik> Yeah
<knedlik> Unexpected argument --size_t-is-usize found
knedlik has quit [Remote host closed the connection]
knedlik has joined #asahi-dev
knedlik_ has joined #asahi-dev
knedlik has quit [Remote host closed the connection]
knedlik_ is now known as knedlik
knedlik is now known as Knedlik
Knedlik has quit [Quit: Konversation terminated!]
cylm has joined #asahi-dev
nepeat has quit [Quit: ZNC - https://znc.in]
nepeat has joined #asahi-dev
___nick___ has quit [Remote host closed the connection]
nepeat has quit [Quit: ZNC - https://znc.in]
<marcan> chadmed: fwiw I played around prototyping this with Saturn and with Calf Saturator (which is pretty crappy but sort of works) with EQs around them for band splitting and it does work, though I need to test it on the real speakers (I was testing on my headphones with a sharp highpass to emulate the bad speaker response)
<marcan> it's definitely going to need a lot of tweaking to get it to sound good on several kinds of material though
<marcan> it's tricky getting the cutoff frequencies right and the right response for the bass boost signal
<marcan> and if you overdo it you get a mess of intermodulation products which doesn't sound any good
cylm has quit [Quit: WeeChat 3.6]
nepeat has joined #asahi-dev
nepeat has quit []
nepeat has joined #asahi-dev
abd has joined #asahi-dev
abd has quit [Ping timeout: 480 seconds]
Dementor has quit [Remote host closed the connection]
Dementor has joined #asahi-dev
pthariensflame has joined #asahi-dev
hxliew has quit []
pthariensflame has quit []
hxliew has joined #asahi-dev
user982492 has joined #asahi-dev
<cy8aer> I wonder what would be better: adjusting the sound at a lower level or in user space, e.g. with EasyEffects. If you pre-adjust, you may have problems with additional adjustments at the user level. IMHO the sound never gets better with more filtering or frequency-dependent compression across many stages.
user982492_ has joined #asahi-dev
yrlf has quit [Quit: The Lounge - https://thelounge.chat]
yrlf has joined #asahi-dev
user982492 has quit [Ping timeout: 480 seconds]
abd has joined #asahi-dev
chadmed_ has joined #asahi-dev
<chadmed_> cy8aer: what do you mean lower level? we cant do any DSP in the kernel
<chadmed_> also youre literally just wrong about the sound never getting better with filtering, these machines being proof right in front of you
<chadmed_> DSP is everywhere and im sorry to tell the audiophools that its never going away
<chadmed_> marcan: i did fear that, like i said im all but certain apple are doing timbre matching and other tricks to get it passable and even then its unconvincing at times :/
<chadmed_> we'll give it a go and if it ends up being more trouble than its worth... well we have a pretty passable solution at the moment, we just have to be careful about how loud we let it go
ChaosPrincess has quit [Quit: WeeChat 3.8]
ChaosPrincess has joined #asahi-dev
<chadmed_> cy8aer: more to your point just FYI, even on machines where you have "direct" access to the hardware, you probably dont and theres probably some DSP happening in the background
<chadmed_> so people with laptops are probably applying their own effects over whatever the OEM has happening in firmware anyway
<chadmed_> likewise people who put effects on their xxxtreme ultra gaming rgb 666 headshot predator pro x headsets
<chadmed_> they actually sound like shit if you disable the oem filtering, hell most of them sound pretty terrible even _with_ it
<chadmed_> what we're doing in pipewire is not fancy or flashy or trying to replace what users expect to be able to do on a desktop machine, what we're doing is abstracting away machine-specific nuances so that the _rest_ of userspace sees a proper stereo output
kesslerdupont has joined #asahi-dev
<kesslerdupont> chadmed: it is reassuring to hear that the goal is to have a flat response and not to make everything bass boosted because I want to be able to hear things as they are supposed to sound
<chadmed_> kesslerdupont: that was always the goal, ive never been happy with how macos handles the speakers
<chadmed_> yeah its fancy and it sounds "good" but an exaggerated harman curve is not really what people buying a "pro" machine are after is it
<chadmed_> plus as has been pointed out, keeping it relatively neutral lets people apply colour to their own personal taste
<chadmed_> obviously we're not going for a _fully_ reference sound because its just not possible but its close, with just a tinge of warmth to make it more natural and comfortable to listen to
abd has quit [Remote host closed the connection]
chadmed_ has quit [Remote host closed the connection]