ChanServ changed the topic of #asahi-gpu to: Asahi Linux: porting Linux to Apple Silicon macs | GPU / 3D graphics stack black-box RE and development (NO binary reversing) | Keep things on topic | GitHub: https://alx.sh/g | Wiki: https://alx.sh/w | Logs: https://alx.sh/l/asahi-gpu
surge9n has joined #asahi-gpu
surge9n has quit [Ping timeout: 480 seconds]
Nspace has quit []
Nspace has joined #asahi-gpu
surge9n has joined #asahi-gpu
uniq has joined #asahi-gpu
surge9n has quit [Ping timeout: 480 seconds]
uniq has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
surge9n has joined #asahi-gpu
uniq has joined #asahi-gpu
surge9n has quit [Ping timeout: 480 seconds]
surge9n has joined #asahi-gpu
surge9n has quit [Ping timeout: 480 seconds]
uniq has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
uniq has joined #asahi-gpu
uniq is now known as sortIuniq
sortIuniq is now known as uniq
skipwich has quit [Quit: DISCONNECT]
skipwich has joined #asahi-gpu
uniq has quit [Ping timeout: 480 seconds]
uniq has joined #asahi-gpu
chadmed has quit [Quit: Konversation terminated!]
chadmed has joined #asahi-gpu
c10l has quit [Quit: Bye o/]
c10l has joined #asahi-gpu
MajorBiscuit has joined #asahi-gpu
Major_Biscuit has joined #asahi-gpu
MajorBiscuit has quit [Ping timeout: 480 seconds]
uniq has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<bluetail> phire is that good or bad?
uniq has joined #asahi-gpu
<phire> Theoretically good. I have no idea why nobody has implemented it. Maybe there is some major roadblock I'm unaware of, or maybe separate processes is good enough.
<phire> Or maybe everyone just abandons python as soon as they need to do multithreading
uniq has quit [Ping timeout: 480 seconds]
<mort_> I've made the mistake of writing something in python, having it be too slow, and then, instead of rewriting it in C++ or something, multiprocessing the python code; you end up with something that's still too slow but now also heats up the room
surge9n has joined #asahi-gpu
uniq has joined #asahi-gpu
<phire> I have a bad habit of writing things in c++ whenever I think performance might be an issue in the future
<phire> usually I would have been better off waiting until performance was actually an issue (which almost never happens), and then rewriting it, or just the hot parts of it
<kode54> bonus points for me, I don't even know Python, so I would never start with that
<phire> do you know other languages? or do you always go for c/c++?
<kode54> I usually go for C/C++
<kode54> though there was one time in high school where I got a random idea while I was supposed to be doing my assignments
<kode54> and just randomly jotted down 6502 assembly on sheets of note paper
<kode54> I believe I wrote code to use bit rotations to rotate a horizontal raster font, to rotate it to vertical raster for a dot matrix printer
<kode54> for no appreciable reason, other than I thought it would be neat
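[A hedged reconstruction of the trick kode54 describes, sketched in Python rather than 6502 assembly; the function name and bit conventions are illustrative, not from the original notes. The core idea is gathering one bit from each horizontal scanline to build each vertical column byte for the print head.]

```python
def rotate_glyph(rows):
    """Convert an 8x8 horizontal-raster glyph to vertical-raster columns.

    rows: 8 ints, one byte per scanline, MSB = leftmost pixel.
    Returns 8 ints, one byte per column, MSB = top pixel.
    """
    cols = []
    for x in range(8):                  # for each pixel column...
        col = 0
        for y, row in enumerate(rows):  # ...gather that column's bit from every row
            bit = (row >> (7 - x)) & 1
            col |= bit << (7 - y)
        cols.append(col)
    return cols

# A glyph whose top scanline is solid becomes eight columns
# that each have only their top pixel set:
print(rotate_glyph([0xFF, 0, 0, 0, 0, 0, 0, 0]))  # [128, 128, ..., 128]
```

On a 6502 this would be done with ROL/ROR: rotate each row byte left to spill its high bit into the carry flag, then rotate the carry into the column byte being assembled.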
<phire> Ha! I'm of the opinion that everyone should know a high-performance language and a low-brainpower language. In my case, C++ for high performance, python or typescript for low brainpower.
<phire> But I guess not everyone agrees
<kode54> I would like to know low brain power languages
<kode54> but something always gets in my way of trying to take in a new language
<kode54> C++ was already learned over two decades of trial and error
<phire> I used to be a huge fan of c, because it was small enough to know the whole language, and most of the standard library
<kode54> should probably take this to offtopic, if you're there
<kode54> ah, you're not
<phire> I am
uniq has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
uniq has joined #asahi-gpu
surge9n has quit [Ping timeout: 480 seconds]
pjakobsson has quit [Ping timeout: 480 seconds]
uniq has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
pjakobsson has joined #asahi-gpu
uniq has joined #asahi-gpu
surge9n has joined #asahi-gpu
uniq has quit [Quit: Textual IRC Client: www.textualapp.com]
pjakobsson has quit [Ping timeout: 480 seconds]
<lina> I ended up learning D to work on Inochi2D and I kind of like it...
<lina> I think it's going to be my go-to language for things that I'd write in Python but I need better performance for
<mort_> I actually found python nice to work with when dealing with giant data sets; it's surprisingly effective to just give yourself a ton of swap space and store all your data in python's built-in datastructures, since then you get pretty powerful disk-backed but RAM-cached datastructures for free
<mort_> the same would work for most other languages but not for JS sadly, since that has a much more limited address space it seems
<lina> Python is actually pretty good at moving bulk data around, but the moment you need an inner loop, it does kind of fall apart...
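[An illustration of lina's point, with made-up data: Python's bulk operations run in C, so moving data wholesale is cheap, while the equivalent hand-written inner loop pays full interpreter overhead on every element.]

```python
data = bytearray(range(256)) * 4

# Bulk: one slice assignment, executed entirely in C.
bulk = bytearray(len(data))
bulk[:] = data

# Inner loop: identical result, but one interpreted iteration per byte.
looped = bytearray(len(data))
for i in range(len(data)):
    looped[i] = data[i]

assert bulk == looped == data
```

Both paths produce the same bytes; only the per-element version scales its interpreter cost with the data size.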
<mort_> yea..
<mort_> it's my go-to tool for natural language processing stuff but it's really bad at that tbh unless all you do is use libraries which have the inner loop in C
<lina> When I was using D it was a bit scary, because I was writing algorithms the same way I'd write them in Python for experiments, even poorly optimized textbook O(n^2) stuff... and then the app kept running at 60 FPS.
<lina> That was a bit of a revelation, because in C that would've taken much more code, and in Python it would've been way too slow...
<mort_> python is surprisingly, extremely slow
<mort_> but it makes sense when you consider that operations which in low-level languages are literally a single register-register instruction are at least a hash table lookup or two in python lol
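[mort_'s point made concrete with a toy class: an ordinary instance attribute really does live in a per-object hash table (`__dict__`), so even `self.value += 1` involves dict lookups where C would emit one add instruction.]

```python
class Counter:
    def __init__(self):
        self.value = 0

c = Counter()
c.value += 1

# The attribute is stored in, and fetched from, a plain dict:
assert c.__dict__ == {"value": 1}
assert c.__dict__["value"] is c.value  # same stored object
```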
<lina> Yeah, hah. Though on the flipside, you can do some really crazy things with the object model. It was perfect for prototyping the GPU driver for that reason.
<mort_> great work on that by the way! I hope the rewrite in rust goes as smoothly as it can
<lina> Spent an hour or so getting fancy color-coded nested GPU structure diff displays, that sort of thing...
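[A toy sketch of the kind of tooling lina describes, not her actual code: recursively diff two nested dicts (standing in for decoded GPU structures) and print each changed leaf with ANSI color highlighting.]

```python
RED, GREEN, RESET = "\x1b[31m", "\x1b[32m", "\x1b[0m"

def diff(old, new, path=""):
    """Yield one colored line per changed leaf field."""
    if isinstance(old, dict) and isinstance(new, dict):
        for key in sorted(set(old) | set(new)):
            yield from diff(old.get(key), new.get(key), f"{path}.{key}")
    elif old != new:
        yield f"{path}: {RED}{old!r}{RESET} -> {GREEN}{new!r}{RESET}"

old = {"ring": {"head": 0, "tail": 4}, "flags": 1}
new = {"ring": {"head": 2, "tail": 4}, "flags": 1}
for line in diff(old, new):
    print(line)  # only .ring.head changed, so only one line prints
```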
<lina> Thanks!
<lina> I hope the DRM folks like the idea ^^;
<mort_> it will be the first major test of Rust in the kernel, right? Everything else seems to mostly have been proof of concept stuff unless I've missed something
<lina> Yeah, the first large driver, though people are working on reasonably nontrivial drivers like NVMe too.
<mort_> for a monolithic kernel with way too much ring 0 code, rust truly seems like a good way to dramatically reduce security bugs
pjakobsson has joined #asahi-gpu
<Sobek[m]> Do you have any pointers at that NVMe effort out of curiosity ?
tdsrts^ has joined #asahi-gpu
<Sobek[m]> Any reason why NVMe is a prime target for rust drivers ?
<sven> nvme is simple enough, you can write a driver in <1000 lines
<sven> and yet it's spread very widely
<sven> the kernel driver is just very complex because it has to support all the random broken hardware out there and because it has to be fast
<sven> that being said, i'm not sure you'd gain much from writing it in rust
<sven> hrm.. thinking about it some more you'd gain some rust bindings for common kernel apis like DMA
<_jannau_> and have a simple enough driver to prove the bindings work
<lina> Yes, the folks writing the NVMe driver are working on PCIe and DMA abstractions too as a side effect
<lina> It's a pretty good PCIe guinea pig driver
<lina> And easy to benchmark, so it can also answer some questions about Rust vs. C performance
<opticron> Dlang is my favorite hobby language and python is a close 2nd
<opticron> mostly for the same reasons you're discovering :D
Chainfire has joined #asahi-gpu
tdsrts^ has quit [Ping timeout: 480 seconds]
<Chainfire> Python is slow, yes, certainly much too slow for domains like drivers. But its expressiveness and resulting development speed (if used correctly - like any dynamic language you can really shoot future-self in the foot) makes it prime for productivity where man-hours are worth much more than CPU cycles
<mort_> I never looked at dlang really, the fact that it was pushed so hard while the compiler was closed source really left a bad taste in my mouth
<mort_> sort of like how C#'s closed source history would make me unlikely to choose it even if there are open-source implementations of it today
<mort_> shows a deep philosophical difference between myself and whatever company or community is behind D I suppose
<lina> By the way, for those who missed it, I'm going to stream starting the actual GPU driver tomorrow ^^
<Chainfire> lina> will this open the gates for compute as well, or is that something again far beyond graphics?
<lina> Compute needs someone to care about it... I could work on adding kernel support, but without someone to first work on the OpenCL side in Mesa, that's not very useful ^^;
<lina> Not even Panfrost has compute yet...
<lina> It's not especially hard or anything as far as I know, but it's something someone needs to explicitly pick up
<Chainfire> I care more about it than graphics :') But I have neither the time nor the relevant experience to work on it, at least short term. Either way, that's good to know, thanks.
<daniels> lina: Panfrost does have compute
<Chainfire> This may be a silly question, but would it be easier to support Metal than OpenCL?
<daniels> Chainfire: Metal on Linux isn't a thing
<daniels> lina: (ah sorry, I didn't spot the subtlety of compute vs. CL - core compute support is there, it's just not hooked up to Clover because Clover ... rusticl ftw!)
Race has quit [Ping timeout: 480 seconds]
<lina> daniels: Huh, last week I was looking at the kernel driver and I think the compute queue was not used? Or is that going through the 3D ones?
<lina> I might've missed something
<daniels> lina: yeah, they still go through the 3D queue iirc
<mort_> compute is just part of the normal vulkan API, right? So if vulkan gets sorted out, won't that kind of take care of compute?
<mort_> ignoring all the software which still uses opencl of course
<lina> If it's going through the 3D queue, the same trick would work on AGX without kernel driver changes I think (Apple introduced compute instantiations from render batches AIUI), but you still need someone to hook up all the userspace plumbing to make it all work...
<lina> But it shouldn't be that hard to support the dedicated compute queue, I just don't have any reason to do it / test code to use with it right now.
<lina> Ideally someone would do what alyssa did for 3D, and make it all work on the macOS kernel driver first, at least to some extent. Then we can hook up the kernel side.
<daniels> mort_: GLES also has compute shaders
<daniels> lina: yep :)
MajorBiscuit has joined #asahi-gpu
Major_Biscuit has quit [Ping timeout: 480 seconds]
MajorBiscuit has quit [Ping timeout: 480 seconds]
bisko has joined #asahi-gpu
c10l has quit [Quit: Bye o/]
c10l has joined #asahi-gpu
Michael[m]123 has joined #asahi-gpu
Votes78 has quit [Killed (NickServ (Too many failed password attempts.))]
Votes78 has joined #asahi-gpu
thinkalex[m] has joined #asahi-gpu
MajorBiscuit has joined #asahi-gpu
alyssa has joined #asahi-gpu
<alyssa> lina: Panfrost definitely has compute support
<alyssa> GLES3.1 conformance entails compute shaders
<alyssa> admittedly that's less featureful than OpenCL compute
<alyssa> (Actually, I have a Rusticlfrost branch which got within a few fails of OpenCL 3.0 conformance. On the backburner for now, though.)
<alyssa> HOWEVER mali hasn't had a dedicated compute queue since mali-t600 over a decade ago
<alyssa> there's just a "fragment" queue and an "everything else" queue (js0 and js1 respectively ... you saw something about js2, which was a funny compute-only thing that hasn't existed since forever ago)
<alyssa> what's missing in panfrost.ko is plumbing to use JS2 for compute-only workloads on ancient Malis
<alyssa> for slightly better perf or something
<alyssa> not relevant to real use cases today :)
<cr1901> rusticle-frost?
<alyssa> sure
clararussell[m] has joined #asahi-gpu
MajorBiscuit has quit [Ping timeout: 480 seconds]
alyssa has quit [Quit: leaving]
carlosstive[m] has joined #asahi-gpu
kazukih[m] has joined #asahi-gpu
surge9n has quit [Ping timeout: 480 seconds]
Race has joined #asahi-gpu
surge9n has joined #asahi-gpu
RoelAlejandroPerezCandanoza[m] has joined #asahi-gpu
surge9n has quit [Ping timeout: 480 seconds]
Donaldbtc[m] has joined #asahi-gpu
hutchinson70[m] has joined #asahi-gpu
surge9n has joined #asahi-gpu
surge9n has quit [Ping timeout: 480 seconds]
surge9n has joined #asahi-gpu
Donaldbtc[m] has quit [autokilled: This host violated network policy. Mail support@oftc.net if you feel this is in error. (2022-08-16 23:51:39)]
surge9n has quit [Ping timeout: 480 seconds]