ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
mbrost has quit [Ping timeout: 480 seconds]
khfeng has joined #dri-devel
markyacoub has joined #dri-devel
cengiz_io has joined #dri-devel
mbrost has joined #dri-devel
tursulin has quit [Remote host closed the connection]
mbrost has quit [Ping timeout: 480 seconds]
<imirkin> anholt_: is there a way to get deqp-runner to make xml files for the failures even when they're in the "baseline" file?
<imirkin> otherwise the failures are "lost forever" once they're added to the fails (obviously one can run them individually, but you don't get them out of a deqp-runner run)
<imirkin> [otoh, if you have infinity failures, generating xml files for all of them each time is probably a giant waste, so i could see it both ways]
rasterman has quit [Quit: Gettin' stinky!]
vivijim has joined #dri-devel
pnowack has quit [Quit: pnowack]
ybogdano has quit [Ping timeout: 480 seconds]
mclasen has quit []
mclasen has joined #dri-devel
co1umbarius has joined #dri-devel
tarceri_ has quit [Read error: No route to host]
columbarius has quit [Ping timeout: 480 seconds]
<alyssa> imirkin: not saying it wouldn't be a useful feature, but it's not really what deqp-runner is for?
<alyssa> (namely, for detecting regressions, not fixing bugs.)
<imirkin> why does it generate the xml files when it's not in the fails then?
<alyssa> So if you got a regression you can see what you broke, particularly if it's running in a CI environment where you might not have the hardware to reproduce locally
<imirkin> alyssa: and if you want to work on a previously accepted failure, you're SOL then?
<alyssa> why would you need deqp-runner for that?
<imirkin> that's my point though -- the xml file is either interesting or not, it doesn't depend on whether it's a new regression
<imirkin> either you have the hw and you will track it down locally
<imirkin> or you don't, and you want to see the xml file
<imirkin> or you can't reproduce locally and you want to see what the "remote" machine sees, again irrespective of being a previously accepted failure
<alyssa> fair enough
<imirkin> [in my personal case this is all local, and i'm using deqp-runner to run the tests, and would like to see new regressions and also be able to debug old ones]
<alyssa> right, that's still my question -- why are you interested in deqp-runner for debugging old tests locally?
<alyssa> if a single test, run individually;
<imirkin> individual ones? not in the least.
<imirkin> exactly
<imirkin> but there are a bunch of tests
<imirkin> and i dunno which one i want to focus on
<alyssa> ahh, fair enough
<imirkin> and i'm doing deqp-runner runs anyways
<alyssa> ok, I see the use case
<alyssa> I mean the boring answer is to do a run with a blanked fails file. but shrug
<imirkin> sure
<imirkin> well, the actual boring answer is to feed the fails file to the deqp caselist-file arg ;)
<alyssa> hehehe
<alyssa> yes
<imirkin> it doesn't make xml files for flake failures either, it seems
<imirkin> where i'd think it'd be much more useful to keep a record
<imirkin> (i mean of known flakes)
oneforall2 has quit [Quit: Leaving]
<imirkin> i see stuff like "ERROR - dEQP error: Mesa 22.0.0-devel implementation error: Invalid state in _mesa_program_state_string" -- any way to figure out which test to attribute it to?
oneforall2 has joined #dri-devel
dliviu has quit [Ping timeout: 480 seconds]
tarceri has joined #dri-devel
dliviu has joined #dri-devel
vivijim has quit [Ping timeout: 480 seconds]
<anholt_> imirkin: when I want one I just drop the xfail marking for a run.
tarceri has quit [Remote host closed the connection]
thellstrom has quit [Read error: Connection reset by peer]
<anholt_> for ci we can't upload all the xfails qpas due to artifact space consumption, but maybe for local runs we should have a flag for "do the qpas anyway"
thellstrom has joined #dri-devel
<imirkin> anholt_: yeah, there are a few separate use-cases
<imirkin> which are moderately well-served by the existing functionality. someone (e.g. me) will always want maor ;)
<anholt_> if you drop an issue in the repo it'll remind me later :)
<imirkin> will do!
tarceri has joined #dri-devel
vivijim has joined #dri-devel
SanchayanMaity has joined #dri-devel
vivijim has quit [Ping timeout: 480 seconds]
mbrost has joined #dri-devel
mclasen has quit [Ping timeout: 480 seconds]
mbrost has quit [Read error: Connection reset by peer]
The_Company has quit []
JohnnyonF has joined #dri-devel
JohnnyonFlame has quit [Ping timeout: 480 seconds]
JohnnyonF has quit [Read error: Connection reset by peer]
tzimmermann has joined #dri-devel
<imirkin> gyah. who thought that it was a good idea to generate INVALID_OPERATION instead of INVALID_ENUM for illegal targets in glEGLImageTargetTexStorageEXT... grr!
sdutt has quit [Read error: Connection reset by peer]
itoral has joined #dri-devel
jewins has quit [Ping timeout: 480 seconds]
jernej has quit [Quit: Free ZNC ~ Powered by LunarBNC: https://LunarBNC.net]
alatiera has quit [Ping timeout: 480 seconds]
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
jernej has joined #dri-devel
jernej_ has joined #dri-devel
jernej has quit [Ping timeout: 480 seconds]
mattrope has quit [Read error: Connection reset by peer]
jernej_ has quit [Ping timeout: 480 seconds]
mattrope has joined #dri-devel
danvet has joined #dri-devel
pnowack has joined #dri-devel
jernej has joined #dri-devel
Duke`` has joined #dri-devel
alanc has quit [Remote host closed the connection]
fxkamd has quit []
alanc has joined #dri-devel
mattrope has quit [Ping timeout: 480 seconds]
thellstrom1 has joined #dri-devel
thellstrom has quit [Ping timeout: 480 seconds]
rasterman has joined #dri-devel
itoral has quit [Remote host closed the connection]
itoral has joined #dri-devel
pcercuei has joined #dri-devel
pnowack has quit [Quit: pnowack]
pnowack has joined #dri-devel
jagan_ has joined #dri-devel
pcercuei has quit [Quit: bbl]
thellstrom1 has quit []
thellstrom has joined #dri-devel
itoral has quit [Remote host closed the connection]
itoral has joined #dri-devel
guru_ has joined #dri-devel
pcercuei has joined #dri-devel
oneforall2 has quit [Ping timeout: 480 seconds]
jagan_ has quit [Remote host closed the connection]
tolszak has joined #dri-devel
<tolszak> Hello, can anyone give a hint about who has access to the Mali DDK and/or can fix https://gitlab.freedesktop.org/mesa/mesa/-/issues/5605 ? Who should I contact to get a quote for the fix?
<HdkR> Mali DDK is unrelated to this issue since it is panfrost?
<tolszak> HdkR: Seems like to find the bug we need to look at the cmdstream from the Mali DDK and compare it with what panfrost does
<HdkR> ah. Need blob inspection I see
<tolszak> I have some more input today, I can update the task with it. I know the exact value when the issue happens; it is not related to the texture
<javierm> jani: we came to the same color for the bikeshed :)
<tolszak> it also happens with simple shader
<tolszak> no texturing needed
<tolszak> I've also tested it with panfrost libGL, instead of GLES - the same issue
<tolszak> with X11
pcercuei has quit [Read error: Connection reset by peer]
pcercuei has joined #dri-devel
pcercuei has quit []
pcercuei has joined #dri-devel
mclasen has joined #dri-devel
flacks has quit [Quit: Quitter]
flacks has joined #dri-devel
<daniels> tolszak: the issue isn't money, it's time
itoral has quit [Remote host closed the connection]
<tolszak> daniels: I can imagine, just asking...
adjtm has joined #dri-devel
* alyssa stares at bifrost
<alyssa> Apparently this data flow analysis has to be done per-quad instead of per-thread. How delightful.
Lucretia-backup has joined #dri-devel
Lucretia has quit [Ping timeout: 480 seconds]
nashpa has joined #dri-devel
dliviu has quit [Ping timeout: 480 seconds]
kmn has quit [Quit: Leaving.]
nchery has joined #dri-devel
vivijim has joined #dri-devel
<pinchartl> newbie question: what's the OpenGL (ES) API (or extensions) that are used to obtain a fence fd that can be passed as an IN fence to a KMS atomic commit ?
<pinchartl> I've found EGL_ANDROID_native_fence_sync, with the EGL_SYNC_NATIVE_FENCE_ANDROID fence type that will create (if I understand correctly) an fd-backed EGLSyncKHR. is that it, or is there anything else ?
<pinchartl> (especially for non-android platforms)
<tomeu> enunes: looks like the lima runner ran out of disk space: https://gitlab.freedesktop.org/gallo/mesa/-/jobs/15744661
<daniels> enunes: ^ you might want to use https://gitlab.freedesktop.org/freedesktop/helm-gitlab-config/-/blob/packet-ha/gitlab-runner-provision/docker-free-space.py (with the systemd .service file and the exclusions file alongside it in the tree)
<enunes> tomeu daniels oh, thanks for the heads up, I'll fix that
<enunes> and look at adding that script
<tomeu> thanks!
<pq> pinchartl, you're on the right track at the very least, I think. I don't recall the details.
<daniels> pinchartl: that's right, you can look at either https://gitlab.freedesktop.org/daniels/kms-quads/ or kmscube for an example of using it
<daniels> basically, create an EGLSyncKHR with the right param and tie it to your flush, then call the dup-to-native-fd entrypoint that that ext adds
sdutt has joined #dri-devel
<pq> ..in that exact order or it won't work (right).
<pq> that is, cannot dup before flush
<pinchartl> pq: daniels: thanks
<pinchartl> I wasn't sure if the Android extension was the standard these days, or if there was another extension that has superseded it
<pinchartl> is the dup needed because fence consumers typically close the fd after waiting, or is there another reason ?
<pq> pinchartl, the fd does not exist until you "dup"
<pq> so it's not exactly dup but more like create
<pinchartl> oh ? I thought it was created when the fence was created, and could be queried through the EGL_SYNC_NATIVE_FENCE_FD_ANDROID attribute
<pinchartl> the extension specification says, about eglDupNativeFenceFDANDROID(), "duplicates the file descriptor stored in the EGL_SYNC_NATIVE_FENCE_FD_ANDROID attribute of an EGL native fence sync object and returns the new file descriptor."
<pq> I don't think the fence is an fd to begin with. You have to export it to create the fd.
<pinchartl> ah, the spec also says that EGL_SYNC_NATIVE_FENCE_FD_ANDROID can't be queried to avoid violating the fd ownership rule
<pq> yeah
<daniels> right, you can specify it when you create the fence if you want an in-fence to EGL; if you want an out-fence from EGL, then you have to wait, because you can't create future fences
<daniels> you can only materialise the fence fd when the relevant work has been flushed
<pq> Doesn't really matter when the fd gets created, on flush or dup, from this side of the API :-)
<pinchartl> daniels: you mean flushed from the command queue to the GPU (glFlush()), but not executed yet, otherwise it would be quite pointless, right ?
<daniels> correct
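A minimal C sketch of that order of operations (create the sync, flush, then export the fd), assuming EGL_KHR_fence_sync and EGL_ANDROID_native_fence_sync are available and that the entry points have been fetched with eglGetProcAddress(); the helper name is illustrative, not taken from kms-quads or kmscube:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>

    /* Extension entry points, assumed to be loaded with eglGetProcAddress(). */
    static PFNEGLCREATESYNCKHRPROC create_sync;             /* eglCreateSyncKHR */
    static PFNEGLDUPNATIVEFENCEFDANDROIDPROC dup_fence_fd;  /* eglDupNativeFenceFDANDROID */
    static PFNEGLDESTROYSYNCKHRPROC destroy_sync;           /* eglDestroySyncKHR */

    /* Returns an fd usable as a KMS IN_FENCE_FD, or -1 on failure. */
    int export_render_fence_fd(EGLDisplay dpy)
    {
        /* 1. Create a native-fence sync with no fd attribute, so EGL will
         *    create the fence itself for the commands issued so far. */
        EGLSyncKHR sync = create_sync(dpy, EGL_SYNC_NATIVE_FENCE_ANDROID, NULL);
        if (sync == EGL_NO_SYNC_KHR)
            return -1;

        /* 2. Flush so the fence is tied to the submitted work; exporting the
         *    fd before the flush fails, since future fences can't be created. */
        glFlush();

        /* 3. Materialise the fd (EGL_NO_NATIVE_FENCE_FD_ANDROID is -1). */
        int fd = dup_fence_fd(dpy, sync);

        destroy_sync(dpy, sync); /* the exported fd stays valid on its own */
        return fd;
    }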
<pinchartl> I'm adding in-fence support to libcamera. it's slightly pointless at the moment as V4L2 doesn't support fences, so we have to wait in software, but as the Android camera HAL passes fences in the HAL API, we have to handle the software wait anyway
<daniels> hehe
<daniels> fences in V4L2 sure would be nice, modifiers too :)
<pinchartl> we're adding them to the libcamera API, and we'll plumb them through to the backend APIs when they become available there
mbrost has joined #dri-devel
Company has joined #dri-devel
nsneck has joined #dri-devel
jewins has joined #dri-devel
fxkamd has joined #dri-devel
alatiera has joined #dri-devel
alatiera is now known as Guest5710
Guest5710 is now known as alatiera
mattrope has joined #dri-devel
<alyssa> hm, meson-clang is failing for me... why
<alyssa> works on clang locally
<alyssa> CI logs are unhelpful
ppascher has quit [Ping timeout: 480 seconds]
Duke`` has quit [Ping timeout: 480 seconds]
<daniels> alyssa: logs run with -k0, so you'll need to scroll or ^F error:
<alyssa> ^F?
<imirkin> Ctrl+F
<imirkin> aka "find shortcut"
<alyssa> only error I see is the boring Job failed at the end
<alyssa> 62/99 mesa:panfrost / bifrost_tests FAIL 0.07s (exit status 1)
<alyssa> the test is passing locally with clang, and passing in CI with gcc.......
<daniels> oh, right
<jenatali> Was just about to paste the same link
<daniels> ../src/panfrost/bifrost/test/test-dual-texture.cpp:117: Failure
<daniels> Failed
<daniels> [ FAILED ] DualTexture.DontFuseDualTexWrongStage (2 ms)
<daniels> (also 99 and 109)
<alyssa> Ok, see that now, thank you
<alyssa> those results seem pretty identical to me but uh
<alyssa> what UB am I relying on today
* alyssa spins the wheel of C
<jenatali> Probably some uninitialized memory
ppascher has joined #dri-devel
<alyssa> hm
<alyssa> valgrind is clear..
lemonzest has joined #dri-devel
<daniels> alyssa: -Db_sanitize=address
<daniels> if that's clear locally, fire the same at CI
* alyssa rebuilds all of mesa
<daniels> I mean, there's no magic in it; all the invocations are in the logs, so either it's hardware-specific differences or you're just building with different options
<alyssa> hardware specific is entirely possible if x86_64 + clang has different UB behaviour than arm64 and gcc
<daniels> C? unknowable UB? c'est impossible.
<imirkin> different ABI can cause differences
khfeng has quit [Ping timeout: 480 seconds]
<imirkin> (struct packing, etc)
<alyssa> nod
tolszak has quit [Read error: Connection reset by peer]
tolszak has joined #dri-devel
<alyssa> still works locally, uh
<alyssa> trying against CI
Duke`` has joined #dri-devel
<imirkin> alyssa: do you do anything that's sensitive to uninitialized padding space in structs?
<imirkin> e.g. computing hashes of structs
<alyssa> yes.
<alyssa> but that's supposed to all be zeroed.
<alyssa> well. "supposed"..
<imirkin> ok
<imirkin> you zero by doing memset, right?
<imirkin> not foo = {}?
<alyssa> = {}
<alyssa> is this not how C works
<jenatali> That doesn't initialize padding
<imirkin> i didn't _think_ so, but i haven't kept up with all the language lawyering
<alyssa> oh god it's not
<jenatali> Or if you have unions, it'll only initialize the first member of a union, so if your second/... is larger, that'll be uninitialized too
<alyssa> that's it i quit
<imirkin> jenatali: oh, that's a fun one. did not know that.
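A small C illustration of the pitfall (hypothetical struct, not actual Mesa code): per the C standard, "= {0}" leaves padding bytes indeterminate and a union initializer only covers its first member, so hashing the raw bytes of such a struct can differ between compilers, while memset() avoids both problems:

    #include <string.h>
    #include <stdint.h>

    struct key {
        uint8_t  kind;      /* 3 bytes of padding typically follow here */
        uint32_t flags;
        union {
            uint32_t small; /* "= {0}" only initializes this first member... */
            uint64_t big;   /* ...so the upper bytes here may stay indeterminate */
        } u;
    };

    static void key_init(struct key *k, uint8_t kind, uint32_t flags)
    {
        /* memset() zeroes padding and the whole union, so a byte-wise hash or
         * memcmp() of *k is deterministic; "struct key k = {0};" would not
         * guarantee that. */
        memset(k, 0, sizeof(*k));
        k->kind = kind;
        k->flags = flags;
    }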
<alyssa> daniels: Is Collabora hiring any positions for Rust developers?
<alyssa> :-p
<imirkin> only rusty developers :p
<zmike> if I'm adding a driconf option and I hit an assert like ../src/util/xmlconfig.c:1264: driQueryOptionb: Assertion `cache->info[i].name != NULL' failed.
<zmike> what have I done wrong
<imirkin> you have added a driconf option...
<daniels> alyssa: someone forgot to sign up for the training I see
<imirkin> (sorry, that was just too tempting)
<jenatali> imirkin: Was a huge pain when trying to retrofit our user->kernel structs to support Linux 32-bit. We've got pointers in those structs and I just wanted to union them with 64-bit values... except the 64-bit value needed to be first to avoid uninitialized high bits in the kernel view of the pointer
<alyssa> daniels: a big mistake, up there with thinking bifrost was a good idea
<daniels> alyssa: I used to maintain KDE in Debian which led to update velocity being bound by a) my ISDN line and b) the speed of kdebase builds on m68k (roughly a week). no-one's perfect.
<daniels> jenatali: the Linux-on-Linux approach is just to uint64_t everything which might be a pointer, and cast through uintptr_t to get there. please do not ask what this means for 128-bit architectures.
<imirkin> zmike: can you put up your patch somewhere? i can have a look if you like.
<imirkin> zmike: iirc you have to add things in about 75 places to make driconf work, and you probably only hit 74 of them
<jenatali> daniels: Yeah, I'm aware. Like I said, retrofitting. Essentially we unioned our pointers with uint64_t. We only support little-endian archs, so the union works, except it makes brace-init a pain, since now you're writing a uint64_t instead of an actual pointer
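Roughly what that retrofit looks like in C (names are made up for illustration, little-endian assumed as noted above): putting the uint64_t member first means brace initialization writes all eight bytes the kernel will read, at the cost of casting pointers through uintptr_t:

    #include <stdint.h>

    struct submit_args {
        union {
            uint64_t buffer_u64;  /* first member: brace init covers all 8 bytes */
            void    *buffer;      /* 32-bit userspace only writes the low 4 bytes */
        };
        uint32_t size;
        uint32_t pad;
    };

    /* Brace init has to go through the uint64_t member, hence the double cast. */
    #define SUBMIT_ARGS_INIT(ptr, sz) \
        { .buffer_u64 = (uint64_t)(uintptr_t)(ptr), .size = (sz), .pad = 0 }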
fxkamd has quit []
ppascher has quit [Ping timeout: 480 seconds]
gpuman has joined #dri-devel
gpuman_ has quit [Ping timeout: 480 seconds]
gpuman has quit [Remote host closed the connection]
gpuman has joined #dri-devel
<daniels> well, hopefully you don’t have to do WSL2 on Morello any time soon :)
illwieckz has quit []
mbrost_ has joined #dri-devel
tzimmermann has quit [Quit: Leaving]
mbrost has quit [Ping timeout: 480 seconds]
CasioMarket has joined #dri-devel
<alyssa> that sounds terrifying
nchery has quit [Remote host closed the connection]
nchery has joined #dri-devel
CasioMarket has left #dri-devel [#dri-devel]
luckyxxl has joined #dri-devel
JohnnyonFlame has joined #dri-devel
luckyxxl has quit [Ping timeout: 480 seconds]
<jenatali> Yeah, WSL2 still requires a Windows host, and Windows doesn't support any big-endian architectures these days
<jenatali> So, we've got bigger fish to fry
<danvet> hey I made a "big endian is dead" slide to answer that questions on a linux ioctl struct design talk years ago already :-)
<danvet> and at least for gpus that turned out to be rather accurate
<danvet> only thing is stuff like kvm console on s390 or old ppc
<imirkin> mixed endian defeated me
<imirkin> LE GPU on a BE CPU
<imirkin> (since all GPUs are LE)
<danvet> yeah be gpus died for good
<imirkin> when such a GPU tries to access GART vs VRAM ... brain explodes.
<alyssa> danvet: hold my beer
<danvet> alyssa, you have a be gpu somewhere?
<imirkin> did those exist beyond the SGI era?
<anholt_> anyone to ack https://gitlab.freedesktop.org/mesa/piglit/-/merge_requests/597 ? gets us some useful coverage on freedreno
<imirkin> looking
<anholt_> (looking at doing a ci uprev for vulkan-cts-1.2.8.0)
<anholt_> vkcts has some nice new tests, but also some overhead improvements and I'll take anything we can get there.
sdutt has quit []
sdutt has joined #dri-devel
<anholt_> thanks!
pekkari has joined #dri-devel
<jekstrand> dj-death: This is fun.... I think I found a whole category of bogus tests
<jekstrand> dEQP-VK.synchronization.cross_instance.suballocated.write_copy_buffer_to_image_read_copy_image.image_128x128_r32g32b32a32_sfloat_timeline_semaphore_fence_fd
<bnieuwenhuizen> bogus how?
<jekstrand> Trying to export sync FDs from timeline semaphores
<jekstrand> Need to dig more
<jekstrand> Yup, looks like that's what they're doing. *sigh*
* jekstrand comments out his flashy new assert and throws it at CI again
ngcortes has joined #dri-devel
ngcortes_ has joined #dri-devel
pekkari has quit [Quit: Konversation terminated!]
ngcortes_ has quit []
ngcortes has quit []
* jekstrand preps a gerrit CL
ngcortes has joined #dri-devel
ngcortes has quit []
<robclark> danvet: at least some of the arm SoCs are bi-endian and can be booted in b/e mode.. arm-smmu has a be config bit.. I've seen b/e arm fixes occasionally, although not a thing I encourage.. maybe I should add some kconfig depends on CPU_LITTLE_ENDIAN
<imirkin> robclark: would such a config bit exist on the GPUs inside those SoC's as well?
<robclark> not aware of such a bit.. but I guess that isn't really the sort of thing that would show up in blob cmdstream traces
tolszak has quit [Read error: Connection reset by peer]
<imirkin> heheh
tolszak has joined #dri-devel
gouchi has joined #dri-devel
illwieckz has joined #dri-devel
ybogdano has joined #dri-devel
guru_ has quit [Remote host closed the connection]
oneforall2 has joined #dri-devel
columbarius has joined #dri-devel
co1umbarius has quit [Ping timeout: 480 seconds]
tolszak has quit [Ping timeout: 480 seconds]
<jekstrand> bnieuwenhuizen, dj-death: https://gerrit.khronos.org/c/vk-gl-cts/+/8421
<alyssa> danvet: older Malis tried very hard to be bi-endian
<alyssa> or at least, interop seamlessly with big-endian hosts
<alyssa> every (vertex/texture/render) format had a "big endian?" bit which would do the appropriate byte swaps, etc.
<alyssa> likely the memory LOAD/STORE instructions (for SSBOs / atomic counters / OpenCL) had a .be flag for the same
<alyssa> In effect, everything app-visible could be made big-endian
<imirkin> oooh, nice. that's what you need to make it all work correctly.
<imirkin> (and by "nice" i mean "what a waste of effort")
<alyssa> (What about the endianness of buffers that aren't app/driver visible? Well, that's an implementation detail ... for all I know Mali encodes all its internal bytes upside down and half-word swapped, it wouldn't change a thing.)
<imirkin> the thing that always threw me for a loop was trying to sort out GART vs VRAM access
<imirkin> since VRAM data would be LE, but GART would be BE
<alyssa> imirkin: Heh, yeah. I don't know if any big-endian Mali products ever shipped, but the hardware is all there.
<imirkin> which is esp fun for copying e.g. index buffers, which might be 1- 2- or 4-byte values
<vsyrjala> imirkin: isn't that gart vs. vram difference just because you enabled some magic byte swapper for one but not the other?
<imirkin> vsyrjala: no. it's because the GPU is natively LE
<imirkin> but the CPU is natively BE
<imirkin> and the CPU writes its integer data in BE
<alyssa> clearly the GPU should have been BE.
<imirkin> alyssa: i'll get right on that ;)
<alyssa> actually... did ancient Mali support byte-swapped index buffers?
<imirkin> vsyrjala: and yeah, the nvidia GPUs had a bit to auto-swap some stuff at the boundaries. like mmio writes and pushbuf commands.
<imirkin> and the "copy things" had an option where you could tell it to do byteswaps for either 2- or 4-byte values. but you had to know to tell it.
<alyssa> oh man, it did :o
<alyssa> Yet another fact I have tried to purge from my memory.
<alyssa> You can just... feed it BE index buffers. Don't worry about it.
<imirkin> when you receive an index buffer, it's just data. it's given a size at draw time, and it's much too late by then.
<vsyrjala> imirkin: what i mean is that either you have byte swappers for everything, or imo the only sane way is to not use any byte swappers and just swap everything by hand
<imirkin> vsyrjala: so .... how would you do that? think through the index buffer example
<imirkin> we can declare that GART is LE-land
<imirkin> but ... the CPU wants to write the index buffer. so you give it a temp buffer...
<alyssa> copies, copies everywhere!
<imirkin> and then when you copy it into LE-land ... do you byte-swap it as 4-byte values? 2-byte values? no byteswap since it's byte values in the first place?
<vsyrjala> you obviously byteswap based on the size of the elements
<imirkin> how do you know the size of the elements?
<imirkin> you only find out when doing glDrawElements()
<imirkin> but this is at glBufferData() time
<vsyrjala> oh, well. gl sucks then i guess
<imirkin> hehehe
<imirkin> and yeah ... esp for earlier GL versions, you'd _know_ it was an index buffer
<imirkin> so you could hold the copy until the draw
<imirkin> but mesa isn't really set up for that
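For concreteness, "swap based on the size of the elements" by hand would look something like this hypothetical C helper; the catch imirkin points out is that index_size is only known at glDrawElements() time, long after glBufferData() handed over the bytes:

    #include <stdint.h>
    #include <stddef.h>

    static inline uint16_t bswap16(uint16_t v)
    {
        return (uint16_t)((v >> 8) | (v << 8));
    }

    static inline uint32_t bswap32(uint32_t v)
    {
        return (v >> 24) | ((v >> 8) & 0x0000ff00u) |
               ((v << 8) & 0x00ff0000u) | (v << 24);
    }

    /* Swap an index buffer in place; index_size is 1, 2 or 4 bytes. */
    static void swap_index_buffer(void *data, size_t count, unsigned index_size)
    {
        if (index_size == 2) {
            uint16_t *p = data;
            for (size_t i = 0; i < count; i++)
                p[i] = bswap16(p[i]);
        } else if (index_size == 4) {
            uint32_t *p = data;
            for (size_t i = 0; i < count; i++)
                p[i] = bswap32(p[i]);
        }
        /* index_size == 1: single bytes have no endianness, nothing to do. */
    }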
<alyssa> imirkin: preemptive NAK of any core mesa changes to set it up for that ;-p
<vsyrjala> sure. no one really thinks about mixed endian until it's too late
<imirkin> alyssa: don't worry. the G5's PSU died. not gonna happen.
<alyssa> imirkin: On the other hand, my BE-support Mali is alive and well :p
<alyssa> supporting
<jekstrand> With modern GL, the only way to do BE is if the HW has native support.
<imirkin> jekstrand: yeah, even with infinite cleverness it all breaks down with SSBO
<jekstrand> Maybe GL 1.2 could be done without it but nothing modern
<jekstrand> imirkin: SSBOs are easy. You byte-swap in the shader.
<jekstrand> It's writing as an SSBO and then using as an index buffer where you're truly hosed.
<imirkin> jekstrand: yeah, i guess it depends how you arrange things. but if you declare that VRAM is native-endian and GART is cpu-native-endian, this stuff becomes quite annoying
tobiasjakobi has joined #dri-devel
<imirkin> and for the GPUs in question, you have no choice but to declare VRAM as native-endian
<jekstrand> Oh, and texture buffers. Those'll mess with you.
<imirkin> right. same problem as index buffers.
<imirkin> GL 2.x is largely achievable.
tobiasjakobi has quit [Remote host closed the connection]
<alyssa> jekstrand: /me stares at byte swapped SSBO instructions on mali
<dj-death> jekstrand: thanks
thellstrom has quit [Quit: thellstrom]
thellstrom has joined #dri-devel
lemonzest has quit [Quit: WeeChat 3.3]
alarumbe has joined #dri-devel
illwieckz has quit [Remote host closed the connection]
X-Scale has joined #dri-devel
X-Scale` has quit [Ping timeout: 480 seconds]
boistordu has joined #dri-devel
boistordu has quit []
boistordu has joined #dri-devel
boistordu has quit []
illwieckz has joined #dri-devel
danvet has quit [Ping timeout: 480 seconds]
danvet has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
<dschuermann> is there an easy way to dump the spirv on parsing errors?
<dschuermann> just a single vtn_assert isn't too meaningful in a multi-threaded application
gouchi has quit [Remote host closed the connection]
alyssa has left #dri-devel [#dri-devel]
danvet has quit [Ping timeout: 480 seconds]
boistordu has joined #dri-devel
<zmike> hm not sure if there's an equivalent to MESA_GLSL=dump for it
<zmike> might want to add something?
nchery has quit [Quit: Leaving]
boistordu has quit [Remote host closed the connection]
<dschuermann> :/
<dschuermann> I didn't expect to be the first one running into this kind of problem ;)
<zmike> zink has its own facilities for dumping stuff, otherwise I probably would have
ngcortes has joined #dri-devel
<jekstrand> dschuermann: MESA_SPIRV_FAIL_DUMP_PATH
<jekstrand> dschuermann: You're not. :)
rasterman has quit [Quit: Gettin' stinky!]
<dschuermann> perfect, thx a lot!
<jekstrand> dschuermann: It'll even dump one file per fail
<jekstrand> It's not totally thread-safe; there's a static variable ++ in there that probably should be a p_atomic_inc, but oh, well.
<jekstrand> But, assuming you don't get burned absurdly badly on that race (seems close to impossible), you'll get one file per fail.
pnowack has quit [Quit: pnowack]
thellstrom1 has joined #dri-devel
thellstrom has quit [Read error: Connection reset by peer]
pcercuei has quit [Quit: dodo]