ChanServ changed the topic of #panfrost to: Panfrost - FLOSS Mali Midgard & Bifrost - Logs https://oftc.irclog.whitequark.org/panfrost - <macc24> i have been here before it was popular
<alyssa>
Nice!
atler is now known as Guest1555
atler has joined #panfrost
Guest1555 has quit [Ping timeout: 480 seconds]
Danct12 has quit [Remote host closed the connection]
Danct12 has joined #panfrost
Danct12 has quit [Remote host closed the connection]
Danct12 has joined #panfrost
camus has joined #panfrost
camus1 has quit [Read error: Connection reset by peer]
<icecream95>
zig cc makes it really* easy to cross-compile for Windows from ARM...
<HdkR>
I've heard that zig is really nice
<icecream95>
* Though trying to find any libraries will involve a lot of online searching and downloading sketchy installers off SourceForge, but that's just Windows development normally, isn't it?
<icecream95>
(zig cc is a frontend to clang, so compiles C/C++)
<HdkR>
Hopefully vcpkg and winget improve that situation soon
<HdkR>
Maybe chocolatey as well
<HdkR>
Downloading sketchy things off of SourceForge is always a nightmare
stano_ has quit [Ping timeout: 480 seconds]
stano has joined #panfrost
cphealy_ has joined #panfrost
cphealy has quit [Ping timeout: 480 seconds]
camus1 has joined #panfrost
warpme_ has quit [Quit: Connection closed for inactivity]
camus has quit [Ping timeout: 480 seconds]
Net147 has quit [Quit: Quit]
Net147 has joined #panfrost
nlhowell has joined #panfrost
rasterman has joined #panfrost
camus has joined #panfrost
camus1 has quit [Ping timeout: 480 seconds]
warpme_ has joined #panfrost
rando258` has joined #panfrost
rando25892 has quit [Ping timeout: 480 seconds]
<robmur01>
"for Windows from ARM" erm, aren't those two things orthogonal? (He types from Windows on Arm...) :P
<macc24>
why would you use windows on arm
<robmur01>
because it's more on-message than Windows on x86, and in several ways more useful
<icecream95>
robmur01: I was compiling for i386-windows-gnu from armv7l-linux-gnueabihf
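A minimal sketch of what that cross-compile looks like, using the same target triple mentioned above; hello.c is just a placeholder example, not anything from this channel, and the exact triple spelling depends on the zig version:

    /* hello.c -- placeholder program for the cross-compile example.
     * Built from the armv7l-linux-gnueabihf host with (roughly):
     *     zig cc -target i386-windows-gnu hello.c -o hello.exe
     * zig cc drives clang under the hood, so plain C/C++ compiles as usual. */
    #include <stdio.h>

    int main(void)
    {
        printf("hello from a cross-compiled Windows binary\n");
        return 0;
    }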
<icecream95>
Now I can actually run some applications with it, time to finally merge Gallium Nine support?
nlhowell has quit [Ping timeout: 480 seconds]
wwilly_ has quit []
camus1 has joined #panfrost
camus has quit [Remote host closed the connection]
<daniels>
HdkR: we use chocolatey for Mesa Windows CI but it's a bit of a non-deterministic nightmare; I've heard good things from VMware people about scoop.sh, but I just assume vcpkg is probably going to kill them all
nlhowell has joined #panfrost
nlhowell has quit [Ping timeout: 480 seconds]
<HdkR>
daniels: Interesting
<alyssa>
robmur01: I run Linux on Arm. Is there really nobody doing that in, you know, ARM OSS? :p
<alyssa>
daniels: nondeterministic and pkg management should not go together
<robmur01>
alyssa: sure, we've been increasingly using Arm-based workstations for several years now (primarily big fat ThunderX2 and eMAG boxes); I'm just the one weirdo in the kernel team who would rather use WSL and help pilot WoA than have a Thinkpad running Ubuntu as a WFH machine ;)
<daniels>
alyssa: tell me about it
<alyssa>
daniels: nondeterministic and pkg management should not go together
<daniels>
...
<HdkR>
I'm curious how non-determinism even gets into package management. File conflicts that aren't resolved and multithreaded installation?
<urja>
alyssa: oh yeah sorry i had attempted to PM you a couple of typos but +g happened
<daniels>
mostly bad handling of partial error states, compounded by partial error states occurring a lot because of SourceForge, and then some weirdness caused by it wanting to install its handlers through your user profile, but your user profile not getting reloaded until you log out and log back in again, which is pretty hard to do in the context of a Docker container
<urja>
in the valhall doc that is, anyways i rechecked and i think the only one i know now is that DISCARD is only valid in a "frgment shader" :P
<HdkR>
Oh jeez
<alyssa>
urja: well you can't use DISCARD in a vrtx shader, you know ;-P
<HdkR>
If you're wanting to make a thread go away in the thread mask in compute, is there a better way than discard? :D
<HdkR>
"return"?
<alyssa>
HdkR: return, yeah
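In shader-source terms the distinction looks roughly like this (plain GLSL for illustration, nothing Panfrost-specific): discard is only defined in fragment shaders, while a compute invocation that has nothing left to do simply returns early:

    #version 310 es
    // Fragment shader: discard is valid here and kills the fragment.
    precision mediump float;
    layout(location = 0) out vec4 color;
    void main()
    {
        if (gl_FragCoord.x < 8.0)
            discard;
        color = vec4(1.0);
    }

    #version 310 es
    // Compute shader: no discard; an invocation that is done just returns.
    layout(local_size_x = 64) in;
    void main()
    {
        if (gl_GlobalInvocationID.x >= 1000u)
            return;
        // ... real work for the remaining invocations ...
    }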
<urja>
ha
<HdkR>
Is there a way to change the active thread mask to bring back idle threads?
<HdkR>
Or does it support some magic for derivative calculations even with idle threads? :D
<alyssa>
urja: fixed now, thanks
<alyssa>
HdkR: Yes and no respectively, AFAIU
<HdkR>
interesting
<alyssa>
er wait, misread that
<alyssa>
No and yes respectively, I mean
<alyssa>
well
<alyssa>
depending on your definitions I can answer either question as yes or no.
<HdkR>
haha :D
cphealy_ has quit []
cphealy has joined #panfrost
camus has joined #panfrost
camus1 has quit [Ping timeout: 480 seconds]
<tomeu>
bbrezillon: btw, how do you think I should key the shaders+rsd for clear attachment?
<tomeu>
guess per-single-format would be too verbose, and there's some way of sharing shaders between formats
<bbrezillon>
tomeu: yep, I think you only need 3 shaders (and their rsd), the blend descs can be emitted on-demand
<tomeu>
so float, int and uint?
<bbrezillon>
right
<tomeu>
hmm, but isn't the BLEND descriptor packed next to the RSD?
<bbrezillon>
it is, but you can keep the rsd as a template (in CPU mem)
<tomeu>
ah, ok, gotcha
<tomeu>
do you think it's a win to keep it in memory?
<tomeu>
seems quite straightforward
<bbrezillon>
maybe not
<bbrezillon>
I mean, we're not at that level of optimization yet, so take the easiest path :)
<tomeu>
ack
<bbrezillon>
tomeu: you actually need more than 3 shaders, because the RT to clear might be between 0 and 7
<bbrezillon>
plus the Z/S targets
stano has quit [Ping timeout: 480 seconds]
<tomeu>
ah yeah, I have been ignoring z/s so far
<bbrezillon>
so it's actually 3*8 + 2 = 26 shaders if I'm correct
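A possible shape for that cache on the CPU side; every name here is an illustrative guess rather than actual panvk code, but it matches the counting above (3 format classes x 8 colour RTs, plus depth and stencil, i.e. 3*8 + 2 = 26 entries):

    #include <stdint.h>

    /* Hypothetical keying/caching scheme for the clear-attachment shaders. */
    enum clear_format_class { CLEAR_FLOAT, CLEAR_INT, CLEAR_UINT };

    struct clear_shader_entry {
        uint64_t shader_gpu_va;   /* GPU address of the compiled shader */
        void *rsd_template;       /* RSD kept in CPU memory, copied and patched per clear */
    };

    struct clear_shader_cache {
        /* one entry per (format class, colour RT) pair ... */
        struct clear_shader_entry color[3][8];
        /* ... plus one each for depth and stencil */
        struct clear_shader_entry z, s;
    };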
<tomeu>
bbrezillon: hmm, where is the RT specified?
<bbrezillon>
it's the index of the blend descriptor
<bbrezillon>
so, say you need to clear RT3, you'll have blend[0-2] disabled, and blend[3] active
<bbrezillon>
and your shader needs to update the correct RT too
<tomeu>
ah, need to set the blend_descriptor[n].color_component_enable to 0 when not clearing n
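Roughly what that looks like when emitting the blend descriptors for a clear of a single RT; the struct and field names below are illustrative, not the real GenXML/pan_pack definitions:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative only: enable writes solely on the RT being cleared. */
    struct blend_desc {
        bool enable;
        uint8_t color_component_enable;   /* RGBA write mask */
    };

    static void
    emit_clear_blend_descs(struct blend_desc *blend, unsigned rt_count,
                           unsigned rt_to_clear)
    {
        for (unsigned i = 0; i < rt_count; i++) {
            blend[i].enable = (i == rt_to_clear);
            /* color_component_enable = 0 for every RT except the one
             * being cleared, as discussed above */
            blend[i].color_component_enable = (i == rt_to_clear) ? 0xf : 0;
        }
    }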
<tomeu>
what I don't understand is why this is going to influence how many shaders we keep around, as the blend descriptor can be emitted for each clearattachments call
<bbrezillon>
tomeu: because you want those clears to happen as part of the main batch (without a new fragment job), and that means you depend on the attachments defined for the current subpass