ChanServ changed the topic of #panfrost to: Panfrost - FLOSS Mali Midgard & Bifrost - Logs https://oftc.irclog.whitequark.org/panfrost - <macc24> i have been here before it was popular
atler is now known as Guest146
atler has joined #panfrost
Guest146 has quit [Ping timeout: 480 seconds]
pendingchaos has quit [Read error: Connection reset by peer]
pendingchaos has joined #panfrost
camus has joined #panfrost
aquijoule__ has joined #panfrost
aquijoule_ has quit [Ping timeout: 480 seconds]
camus1 has joined #panfrost
camus has quit [Remote host closed the connection]
davidlt has joined #panfrost
camus1 has quit [Ping timeout: 480 seconds]
camus has joined #panfrost
camus1 has joined #panfrost
davidlt has quit [Ping timeout: 480 seconds]
camus has quit [Ping timeout: 480 seconds]
Lyude has quit [Quit: WeeChat 3.0.1]
<tomeu> bbrezillon: guess the only option for passing the clear color to the clear attachment shader is nir_var_mem_ubo?
Lyude has joined #panfrost
davidlt has joined #panfrost
<bbrezillon> tomeu: yep
<tomeu> ok, so I need something like what we do for indirect drawing/dispatch
<bbrezillon> it should be lowered to a push constant at compile time
<bbrezillon> tomeu: panvk_meta has a few nir_load_ubo() too
<bbrezillon> tomeu: BTW, adding support for push constants shouldn't be too hard
<tomeu> for the vkCmdPushConstants api call, right? as otherwise everything else is already in place for push constants?
<bbrezillon> (in case you prefer to declare a nir_var_mem_push_const and use nir_intrinsic_load_push_constant)
<bbrezillon> well, you still have to hook up nir_intrinsic_load_push_constant at the compiler level
<bbrezillon> but you have examples on how this is done in the uniform->push-constant optimization pass
<bbrezillon> (bi_opt_push_ubo.c and mir_promote_uniforms.c)
<tomeu> cool, I will look at that later
<tomeu> will first do clear attachments with ubos
<bbrezillon> sure, np
<icecream95> I've been rewriting some of the push constants code recently; maybe I could try implementing Vulkan push constants myself while I'm at it
rasterman has joined #panfrost
nlhowell has joined #panfrost
warpme_ has joined #panfrost
<tomeu> icecream95: nice!
camus has joined #panfrost
camus1 has quit [Read error: Connection reset by peer]
camus has quit [Remote host closed the connection]
camus has joined #panfrost
* icecream95 is finally getting around to wiping Chrome OS off of speedy, which he hasn't booted into for two or three years now
<tomeu> what's a good way of causing a fault from within a shader to make sure it's executing?
<HdkR> Write past the end of an unsized SSBO? :D
<robmur01> I guess there's no GL equivalent of "int x = *(char *)0;"?
<HdkR> Not unless you're Nvidia, which added pointers to GLSL
<icecream95> tomeu: An infinite loop should at least cause the shader to time out
<robmur01> hmm, I see arrays, and a note that computing a massively out-of-bounds index and accessing it would be UB :D
* robmur01 saves the GLSL spec to read later. Maybe it's time I actually learned how this stuff works...
<HdkR> UB or zero depending on if robust is supported
<icecream95> Uh oh...
<icecream95> $ git diff
<icecream95> error: bad signature 0x00000000
<icecream95> fatal: index file corrupt
<urja> ha
<urja> i just got a nice spew of random git errors and some ext4 metadata checksum errors in dmesg... good job microSD card :/
<icecream95> I think it may be speedy's unreliable µSD card reader rather than the card itself causing this corruption
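The corrupt-index errors above are usually recoverable, because git's index is derived state that can be rebuilt from HEAD. A minimal sketch of the recovery, demonstrated in a throwaway repo (the paths and the deliberate corruption are illustrative, not icecream95's actual setup):

```shell
# Sketch: a corrupt git index ("error: bad signature", "fatal: index file
# corrupt") can be rebuilt from HEAD; committed work is not touched.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
printf 'garbage' > .git/index            # simulate the corruption
git status >/dev/null 2>&1 || echo "index corrupt, rebuilding"
rm .git/index
git reset -q                             # rebuild the index from HEAD
git status --short                       # clean again, prints nothing
```

This only helps when the corruption is confined to `.git/index`; checksum errors from the underlying filesystem (as in urja's dmesg) can of course damage other objects too.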
alpernebbi has joined #panfrost
<macc24> turns out when you don't have dependencies libraries don't work
camus1 has joined #panfrost
camus has quit [Ping timeout: 480 seconds]
nlhowell has quit [Ping timeout: 480 seconds]
camus has joined #panfrost
camus1 has quit [Ping timeout: 480 seconds]
<tomeu> bbrezillon: is PANVK_DEBUG=trace expected to work? I get a single job of compute type in my chain
<bbrezillon> tomeu: yep, last I tried it was working properly
<tomeu> hmm, looks like it should be a tiler job (the clear attachment), then a compute job (the copy_img2buf)
<tomeu> so I need to find out why my job isn't being executed as the first one
<bbrezillon> both attached to the same batch?
<tomeu> ah, the scoreboard is a different one for each
<tomeu> so I guess they aren't (but should be)
<bbrezillon> yes, copy helpers create a new batch
<bbrezillon> I don't think they should be part of the same batch
<tomeu> ah, and in addition, the clear attachment is in a secondary cmdbuf
<tomeu> will first test the clear attachments with only primary cmdbufs
hanetzer1 has quit []
<macc24> is there any reason why mesa would not build libGLX.so with this meson config: https://bpa.st/OL3A ?
<macc24> any time i run glxinfo or glxgears it just segfaults, while for example kmscube works fine
<alyssa> macc24: you pass a lot of options
<alyssa> -Dplatforms=wayland,x11 -Dglx=dri -Ddri3=true
<alyssa> are all unnecessary
<alyssa> as is "kmsro" in gallium-drivers
<macc24> alyssa: ok
<alyssa> probably unrelated to your problem but still :)
<macc24> trying 21.1 branch now
hanetzer has joined #panfrost
* macc24 regrets enabling lto
<HdkR> If you use LTO, make sure to use clang+lld for thinlto, where it sucks way less :P
<macc24> ugh finally
<macc24> "Error: couldn't find RGB GLX visual or fbconfig" i'll count this as progress
<tomeu> bbrezillon: I'm facing a problem because in a renderpass with just a clear attachment I don't have a pipeline, and we depend on it to pass the tls_size and wls_size
<tomeu> bbrezillon: should there be an implicit pipeline or something like that?
<alyssa> tomeu: then tls_size = wls_size = 0 ..
<alyssa> (actually wls_size = 0 for any renderpass)
<tomeu> alyssa: but, I do have a shader for the clearattachment
<alyssa> (Unless Vulkan lets shared memory be used in non-compute stages)
<tomeu> hmm, I tried to use 0 values when there's no pipeline, but got invalid 0x58 faults
<tomeu> guess it could have been due to some other issue
<HdkR> alyssa: No shared memory outside of compute
<alyssa> ack
<alyssa> tomeu: If there is no pipeline, just your clear, then those should be zero. 0x58 faults is unrelated.
<alyssa> In fact you should assert() your shader has tls_size == 0, because if your clear shader spills, that's a bug.
<bbrezillon> tomeu: hm, I'm pretty sure the tls/wls size is initialized to 0
<tomeu> bbrezillon: yeah, the problem is that we don't have a pipeline when the batch is closed on endrenderpass
<bbrezillon> oh, right, we do assume that the state will have a pipeline bound
<bbrezillon> I think I specialized the close function in panvk_meta.c exactly for this reason
<bbrezillon> anyway, tls/wls size of 0 should work
<tomeu> ok, decode.c doesn't like that much
alyssa has quit [Quit: leaving]
camus1 has joined #panfrost
camus has quit [Ping timeout: 480 seconds]
camus has joined #panfrost
camus1 has quit [Ping timeout: 480 seconds]
* urja built drm-misc-next
<urja> so far, no MMU wonkiness (y)
<macc24> \o/
<urja> it's "funny" how pretty much everything GL in firefox needs to be force-enabled ... if you run firefox from the console it says that it found no GPUs via PCI (RK3288 goes "excuse me, what is that?") and thus there obviously isnt a GPU ... (huh)
<macc24> we need mali that is on pcie
<urja> Crash Annotation GraphicsCriticalError: |[0][GFX1-]: No GPUs detected via PCI (t=3.09026) |[1][GFX1-]: glxtest: process failed (received signal 11) (t=3.09046) [GFX1-]: glxtest: process failed (received signal 11)
<urja> i dunno how their glxtest shoots itself in the foot tho (that's a segfault)
<HdkR> I love the sigsegv when PCI GPUs aren't found
macc24 has quit [Quit: WeeChat 3.2]
macc24 has joined #panfrost
<macc24> >kill xwayland
<macc24> >it freezes entire machine
<macc24> why???
macc24 has quit [Quit: WeeChat 3.2]
aquijoule__ has quit []
aquijoule__ has joined #panfrost
richbridger has joined #panfrost
aquijoule__ has quit []
richbridger has quit [Remote host closed the connection]
richbridger has joined #panfrost
macc24 has joined #panfrost
richbridger has quit []
richbridger has joined #panfrost
camus1 has joined #panfrost
richbridger has quit []
richbridger has joined #panfrost
camus has quit [Read error: Connection reset by peer]
mixfix41_ has joined #panfrost
mixfix41 has quit [Ping timeout: 481 seconds]
<urja> I opened up openscad and well the failure mode is new... the preview is just rendering all the shapes (this is the thing that's trying to compose geometry on the GPU by (ab)using the z-buffer etc)
davidlt has quit [Ping timeout: 480 seconds]
<urja> hmm, i feel like i should find/create/adapt some simple(ish) opencsg demo/test
Daanct12 has joined #panfrost
Danct12 has quit [Ping timeout: 480 seconds]
camus has joined #panfrost
* icecream95 runs xbps-install openscad
camus1 has quit [Read error: Connection reset by peer]
<urja> https://urja.dev/testscad.scad.txt here's what i use as a simple demo object (cube with a smaller cube cut out from the corner)
<urja> just in case you need to know how to openscad to see the issue, icecream95 ^^
<icecream95> urja: I'm not seeing anything obviously wrong, maybe this only affects Midgard, so I'll have to wait till I get home before debugging
<urja> that's what it's like on the C201 now ... basically, the big cube mostly exists where it should not anymore :P
<urja> (there's a couple pixels of the "cutout" cube drawing lines there but i think that's just the amount I moved it outside the big cube (zf value))
<icecream95> alyssa: 0884fc548e8 ("pan/bi: Add a constant subexpression elimination pass") is the first bad commit
<icecream95> (This is bisecting the Bifrost dEQP failures)
<icecream95> Oh, this affects v7 as well, you could have found that out yourself...
alpernebbi has quit [Quit: alpernebbi]
rasterman has quit [Quit: Gettin' stinky!]
warpme_ has quit [Quit: Connection closed for inactivity]
camus1 has joined #panfrost
camus has quit [Read error: Connection reset by peer]
<macc24> icecream95: is... bifrost broken?