ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
rasterman has quit [Quit: Gettin' stinky!]
angerctl has joined #dri-devel
pcercuei has quit [Quit: dodo]
tzimmermann_ has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
tzimmermann has quit [Ping timeout: 480 seconds]
sdutt has quit [Ping timeout: 480 seconds]
kts_ has quit []
columbarius has joined #dri-devel
co1umbarius has quit [Ping timeout: 480 seconds]
mclasen has quit []
mclasen has joined #dri-devel
ponchik has joined #dri-devel
sdutt has joined #dri-devel
vsyrjala has quit [Remote host closed the connection]
mclasen has quit []
heat has joined #dri-devel
mclasen has joined #dri-devel
ella-0_ has joined #dri-devel
ella-0 has quit [Read error: Connection reset by peer]
mclasen has quit []
mclasen has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
mhenning has quit [Quit: mhenning]
mclasen has quit []
mclasen has joined #dri-devel
heat has quit [Remote host closed the connection]
tlwoerner has quit [Quit: Leaving]
ponchik has left #dri-devel [#dri-devel]
tlwoerner has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
garrison has joined #dri-devel
i-garrison has quit [Read error: Connection reset by peer]
YuGiOhJCJ has joined #dri-devel
i-garrison has joined #dri-devel
garrison has quit [Read error: Connection reset by peer]
Duke`` has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
mclasen has quit [Ping timeout: 480 seconds]
jewins has quit [Ping timeout: 480 seconds]
danvet has joined #dri-devel
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
Surkow|laptop has quit [Quit: 418 I'm a teapot - NOP NOP NOP]
lemonzest has joined #dri-devel
gouchi has joined #dri-devel
Haaninjo has joined #dri-devel
slattann has joined #dri-devel
linkmauve has left #dri-devel [#dri-devel]
gouchi has quit [Remote host closed the connection]
Akari has quit [Ping timeout: 480 seconds]
ced117 has quit [Ping timeout: 480 seconds]
gouchi has joined #dri-devel
ced117 has joined #dri-devel
ced117_ has joined #dri-devel
ced117 has quit [Ping timeout: 480 seconds]
linkmauve has joined #dri-devel
slattann has quit []
gawin has joined #dri-devel
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
libv_ has joined #dri-devel
libv has quit [Ping timeout: 480 seconds]
mclasen has joined #dri-devel
pcercuei has joined #dri-devel
sdutt has quit [Ping timeout: 480 seconds]
lemonzest has quit [Remote host closed the connection]
lemonzest has joined #dri-devel
libv_ has quit []
agd5f has quit [Read error: Connection reset by peer]
libv has joined #dri-devel
gouchi has quit [Remote host closed the connection]
gouchi has joined #dri-devel
gouchi has quit [Remote host closed the connection]
tzimmermann_ has quit []
mclasen has quit []
mclasen has joined #dri-devel
kts has joined #dri-devel
kts has quit [Ping timeout: 480 seconds]
kts has joined #dri-devel
gawin has quit [Ping timeout: 480 seconds]
flto has quit [Remote host closed the connection]
flto has joined #dri-devel
tzimmermann has joined #dri-devel
rasterman has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
Akari has joined #dri-devel
kts has quit [Ping timeout: 480 seconds]
baryluk_ has quit []
<karolherbst> jekstrand: so... I might need some help figuring out what passes to run, but I'll try mimic what we have in clc_spirv_to_dxil as I expect it to be the closest to what's actually working long term
FireBurn has quit [Quit: Konversation terminated!]
gawin has joined #dri-devel
mclasen has quit [Ping timeout: 480 seconds]
heat has joined #dri-devel
vsyrjala has joined #dri-devel
gawin has quit [Ping timeout: 480 seconds]
Surkow|laptop has joined #dri-devel
_whitelogger_ has joined #dri-devel
_whitelogger has quit [Read error: Connection reset by peer]
kchibisov_ has quit [Ping timeout: 480 seconds]
<karolherbst> jekstrand: first kernel running \o/
<karolherbst> airlied: is there a way to get better stacktraces when stuff crashes within llvmpipe's shaders? :D
<airlied> karolherbst: nope
<karolherbst> :(
<karolherbst> I mean, I know why it crashes, but was wondering if there would be a way generally :/
<alyssa> airlied: by the way, I've been using llvmpipe for a lot of stuff at uni
<alyssa> has worked disturbingly well
<alyssa> thank you for your hard work :]
<karolherbst> a world where llvmpipe didn't exist: full Vulkan 1.3 and OpenGL 4.6 support for Apple's M1s
<karolherbst> already there
<alyssa> karolherbst: rude :-p
<airlied> karolherbst: like it might be possible to at least not mangle the stack so you can see it's inside an llvmpipe shader :-P
<karolherbst> mhhh
<airlied> karolherbst: but I've never really dug into it that far, since it involves LLVM
<airlied> inside the shader of course there is no real stack, since mostly it's a single inline function
<karolherbst> alyssa: just stating the obvious :p
<airlied> alyssa: thanks!
<karolherbst> airlied: yeah... at first I was surprised by those weirdo stacks, but
anarsoul|2 has joined #dri-devel
anarsoul has quit [Read error: Connection reset by peer]
gawin has joined #dri-devel
sdutt has joined #dri-devel
<alyssa> malloc_consolidate(): unaligned fastbin chunk detected
<alyssa> sounds... bad
<karolherbst> alyssa: -Db_sanitize=address :P
<alyssa> karolherbst: ==2687==ASan runtime does not come first in initial library list; you should either link runtime to your application or manually preload it with LD_PRELOAD.
<alyssa> *blink*
<karolherbst> yeah... you need to LD_PRELOAD it
<alyssa> ah
<karolherbst> ASAN_OPTIONS detect_leaks=0:abort_on_error=1:alloc_dealloc_mismatch=0' LD_PRELOAD /lib64/libasan.so.6 is what I usually use within gdb
<alyssa> Direct leak of 88 byte(s) in 1 object(s) allocated from:
<alyssa> not helpful asan :-p
<karolherbst> mhh
<karolherbst> ehh.. missing =
<karolherbst> ASAN_OPTIONS='detect_leaks=0:abort_on_error=1:alloc_dealloc_mismatch=0' LD_PRELOAD=/lib64/libasan.so.6
<karolherbst> I hope that is fine now
<karolherbst> I see a future where CPUs are so fast that we will run everything with libasan (or similar) enabled :D
<imirkin> gotta come up with _some_ way to make those fast CPUs slow again
<karolherbst> of course
<karolherbst> I am just taking the "we add sw mitigations for broken hw" approach to its logical end: why not do the same for broken sw
<alyssa> isn't that called javascript
<alyssa> karolherbst: well, issue goes away under asan with no noise
<karolherbst> fun...
<alyssa> how.. delightful
<karolherbst> maybe a rare case of glibc being broken
<karolherbst> could try valgrind and hope things aren't painfully slow
<alyssa> gah
<alyssa> or I could hope the issue goes away naturally and ignore it since it's just X11 ....
<karolherbst> ahh if it happens only under X11 I'd ignore it :p
<alyssa> Might just be screwed up LIBGL_DRIVERS_PATH stuff
<alyssa> running harbinger.sh seems to help
<alyssa> ok not a panfrost bug, I just really screwed up my system
<alyssa> good to know
<alyssa> so GNOME comes up and is just totally screwed up... progress? lol
<alyssa> I can work with that
<alyssa> oh it's faulting okay
<karolherbst> uhhhh... why didn't anybody remind me that binding resources to kernels was _that_ painful
* karolherbst curses set_global_binding
<alyssa> karolherbst: are you trying to use the clover gallium api?
<alyssa> is the mesa/st gallium api so unfit for CL?
<karolherbst> I don't
<karolherbst> or at least I wasn't
rpigott has quit [Remote host closed the connection]
<karolherbst> set_global_binding is indeed an API only used by clover.. mhhh
rpigott has joined #dri-devel
gawin has quit [Ping timeout: 480 seconds]
<karolherbst> but I think we need an API like this
<karolherbst> let's see what st/mesa does for ssbos
rpigott has quit [Remote host closed the connection]
<karolherbst> alyssa: ahh yeah.. the "normal" gallium API won't help here, as st/mesa doesn't have to know anything except "this is ssbo nr 2 and it is that big"
<karolherbst> but for CL we actually have to know the GPU's address of a buffer
<alyssa> grumble
mwalle has joined #dri-devel
<karolherbst> and the driver has to make sure it's valid when the kernel launches and everything
<airlied> yeah unless the state tracker does all the work of lowering all of that somehow, but I don't think it's a great plan when you get to SVM
* airlied isn't sure if thats what the d3d12 frontend essentially does
<airlied> lower globals to ssbo, lower kernel_input to a ubo, handwave magic
<karolherbst> airlied: d3d12 uses ssbos for CL global mem
* airlied did some of it when I was messing with clover/zink
<karolherbst> oh well
<karolherbst> at least I reached my goal for today, let's see if I can make the kernel not crash :D
<karolherbst> but all of that is messy, especially I also have to deal with rust
rpigott has joined #dri-devel
<alyssa> is the plan to delete clover after this?
<karolherbst> well... I don't think we can
<karolherbst> I don't bother with llvm based backends
<alyssa> so?
<karolherbst> r600 wouldn't be supported
<alyssa> llvmpipe and radeonsi both ingest NIR now?
<karolherbst> and radeonsi potentially
heat has quit [Read error: No route to host]
<alyssa> and also don't ship clover ttbomk
heat has joined #dri-devel
<karolherbst> alyssa: r600 is probably the best supported CL driver....
<karolherbst> I think...
<airlied> radeonsi in native mode is probably the best working
<karolherbst> not r600? I always assumed that CL images just don't work with radeonsi
<airlied> not sure they work with r600 all that well :-P
<karolherbst> :D
heat has quit [Remote host closed the connection]
heat has joined #dri-devel
* airlied will at some point rebase the radeonsi nir cl support
<alyssa> i love doing things 'at some point'
<airlied> it could also be a when the mood takes me :-P
<alyssa> true
<karolherbst> wait until I pass all of CL 3.0 CTS on llvmpipe or something :D
OftenTimeConsuming has quit [Remote host closed the connection]
jewins has joined #dri-devel
<alyssa> ...there can be multiple pipe_screens in a process? T_T
<karolherbst> alyssa: well..... yes?
<alyssa> ...why...
<karolherbst> one for each frontend
<karolherbst> or.. are those shared?
* karolherbst is confused
<karolherbst> but yeah, I think that's how it could happen, although maybe not always
<karolherbst> alyssa: why would it matter?
<alyssa> was breaking some code of mine
<alyssa> code is still broken but. hey.
OftenTimeConsuming has joined #dri-devel
<alyssa> meh, let's hope it'll come up on deqp-gles3 naturally
<icecream95> alyssa: I guess we could use two RA nodes for the source, and have a field where each RA node can set another node that must have an adjacent solution
<icecream95> (to fix having five staging registers)
<alyssa> I did think about that, but it seems.. clumsy?
<icecream95> We could also switch to using 23-bit indices and 9-bit linear constraints..
jewins has quit [Ping timeout: 480 seconds]
<alyssa> Ooh, even better, I hit that issue for real on Valhall (even with lower_txd = true)
<icecream95> Still a texture instruction I gather?
<alyssa> Yeah, same test even
<alyssa> (Since Valhall tex uses an extra 2 staging registers compared to Bifrost)
<icecream95> So.. it could have up to seven staging registers?
<alyssa> I mean even on Bifrost the limit is much higher than 5
<alyssa> in "theory" TEXC can read up to 11 registers
<alyssa> though I think when I counted I could only get up to 8... so not sure where 11 comes from
<icecream95> How would you encode that?..
<alyssa> like, from an API perspective?
<icecream95> sr_count can't be more than 7, can it?
<alyssa> which arch are we talking about
<icecream95> Valhall?
<alyssa> right, I didn't count for Valhall
<alyssa> ...I should go do that
<icecream95> I note Bifrost.. does not even encode the staging register count?
<alyssa> Correct
leah has quit [Ping timeout: 480 seconds]
<alyssa> icecream95: on bifrost, I think the actual limit is 7 regs
<alyssa> In principle there are valid texture operation descriptors specifying more, but I think anything higher wouldn't be allowed by APIs
<alyssa> The winner being "textureGradOffset of a 2D shadow array texture with an indirectly indexed texture and a separate indirectly indexed sampler"
<alyssa> which ... might actually be possible in ES3.1 tbh
<alyssa> as for Valhall.. let's see
jewins has joined #dri-devel
leah has joined #dri-devel
<alyssa> rrright, so the texture/sampler indices don't take up staging regs on valhall, sure
<alyssa> but the S/T coordinates do
<alyssa> so it still tops out at 7 staging registers
<alyssa> so yes sr_count is capped at 7
tzimmermann has quit [Quit: Leaving]
mclasen has joined #dri-devel
tanty has quit [Remote host closed the connection]
tanty has joined #dri-devel
anarsoul|2 has quit [Read error: Connection reset by peer]
anarsoul has joined #dri-devel
<karolherbst> Rust doesn't like our compute API
<alyssa> Womp
<karolherbst> list of pointers and stuff...
HerrSpliet has quit [Ping timeout: 480 seconds]
kchibisov_ has joined #dri-devel
RSpliet has joined #dri-devel
<karolherbst> throwing enough Arcs on the problem is probably going to solve it
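The "throw enough Arcs at the problem" idea can be sketched like this — a minimal, hypothetical example (the type names are illustrative, not rusticl's actual structures): wrapping shared state in `std::sync::Arc` lets several CL-level objects co-own it via reference counting, instead of threading borrow lifetimes through every struct.

```rust
use std::sync::Arc;

// Hypothetical stand-ins for objects that several CL-level structures
// need to reference at once (illustrative names, not rusticl code).
struct Context {
    device_name: String,
}

struct Queue {
    ctx: Arc<Context>, // shared ownership instead of a lifetime-bound borrow
}

struct Kernel {
    ctx: Arc<Context>,
}

fn main() {
    let ctx = Arc::new(Context {
        device_name: String::from("llvmpipe"),
    });

    // Cloning an Arc only bumps a refcount; both structs now co-own
    // the context, so neither carries a lifetime parameter.
    let queue = Queue { ctx: Arc::clone(&ctx) };
    let kernel = Kernel { ctx: Arc::clone(&ctx) };

    assert_eq!(queue.ctx.device_name, kernel.ctx.device_name);
    // One count for `ctx` itself plus one each for queue and kernel.
    assert_eq!(Arc::strong_count(&ctx), 3);
}
```

For cross-thread mutation one would additionally need `Mutex`/`RwLock` inside the `Arc`; plain `Arc` only shares immutable access.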
mclasen has quit []
mclasen has joined #dri-devel
* karolherbst searching for "Extending borrow lifetimes in rust"
<karolherbst> rustc messages about as_ref() are just super broken :(
<karolherbst> it seems that the suggested place for as_ref is mostly wrong
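The `as_ref()` pattern under discussion, sketched with a hypothetical function (not rusticl code): `Option::as_ref` turns `&Option<T>` into `Option<&T>`, which lets combinators like `map` borrow the contents instead of trying to move them out of a borrowed `Option` — the move is what rustc complains about, even if its suggested placement for the `as_ref()` call is sometimes off.

```rust
// Hypothetical helper: inspect an optional String without consuming it.
fn name_len(maybe_name: &Option<String>) -> usize {
    // Without as_ref(), `map` would need to move the String out of a
    // borrowed Option, which does not compile. as_ref() yields
    // Option<&String>, so the closure only borrows.
    maybe_name.as_ref().map(|s| s.len()).unwrap_or(0)
}

fn main() {
    let name = Some(String::from("rusticl"));
    assert_eq!(name_len(&name), 7);
    // `name` is still usable here because as_ref() only borrowed it.
    assert!(name.is_some());
    assert_eq!(name_len(&None), 0);
}
```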
jewins has quit [Remote host closed the connection]
jewins has joined #dri-devel
kchibisov_ has quit []
kchibisov_ has joined #dri-devel
dliviu has quit [Quit: Going away]
kchibisov_ has quit []
dliviu has joined #dri-devel
rasterman has quit [Quit: Gettin' stinky!]
Duke`` has quit [Ping timeout: 480 seconds]
kchibisov_ has joined #dri-devel
nsneck has quit [Quit: bye]
nsneck has joined #dri-devel
kchibiso- has joined #dri-devel
kchibisov_ has quit [Read error: Connection reset by peer]
kchibiso- has quit []
<karolherbst> wondering why stuff doesn't work: block = {0, 0, 0}
kchibisov_ has joined #dri-devel
kchibisov_ has quit []
kchibisov_ has joined #dri-devel
kchibisov_ has quit []
kchibisov_ has joined #dri-devel
kchibiso- has joined #dri-devel
kchibisov_ has quit [Read error: Connection reset by peer]
<karolherbst> ahhh
<karolherbst> something is still wrong
<karolherbst> ehh... I think it's fencing
lemonzest has quit [Quit: WeeChat 3.4]
heat_ has joined #dri-devel
heat has quit [Read error: Connection reset by peer]
mclasen has quit []
<karolherbst> ohhhhh
<karolherbst> it works :O
mclasen has joined #dri-devel
<karolherbst> \o/
<karolherbst> just only one thread is executed
anarsoul has quit [Ping timeout: 480 seconds]
<karolherbst> okay, so why is that
<karolherbst> airlied: is there something I am missing? the values in pipe_grid_info are set, but...
<karolherbst> ohhh...
<karolherbst> ehh wait
<karolherbst> no, that's all fine
<karolherbst> weird
anarsoul has joined #dri-devel
<karolherbst> ehh.. nir->info.workgroup_size_variable
<karolherbst> maybe that
Wally has joined #dri-devel
<Wally> [useless question] I was reading the xf86 nouveau driver source code and I came across ".fp" files like: https://gitlab.freedesktop.org/xorg/driver/xf86-video-nouveau/-/blob/master/src/shader/exac8nv110.fp
<Wally> what are they?
<karolherbst> assemblies
<Wally> assemblies for what?
<Wally> the nv gpus?
<karolherbst> for shaders we execute on the GPU
<karolherbst> yes
<Wally> what file type are they?
<karolherbst> depending on the consumer
<karolherbst> #ifndef ENVYAS is a hack used by the build system
<karolherbst> so it invokes cpp to generate either a C source or a file we push into envyas
<karolherbst> check the Makefile for more info
danvet has quit [Ping timeout: 480 seconds]
<Wally> what is envyas, other than a compiler hack?
<karolherbst> jekstrand, airlied: \o/ \o/ https://gist.github.com/karolherbst/3664b2081a1ba3a9fa4f2cde538f9c2f \o/ \o/
<karolherbst> it was the workgroup info in nir :D
<karolherbst> let's see how much passes and what crashes
<karolherbst> Wally: an assembler
<Wally> ah
<Wally> ic
Wally has quit [Quit: Page closed]
<airlied> karolherbst: nice!
<karolherbst> yeah.. finally
<karolherbst> about time
<karolherbst> I still remember when I needed like 30 seconds to run through all tests, now it feels more like 30 minutes
<karolherbst> should cache more nirs