<ezzieyguywuf>
I tried LD_LIBRARY_PATH=${PWD}/build/install/lib/x86_64-linux-gnu/ vulkaninfo but this still just shows my system-wide llvmpipe
<ezzieyguywuf>
I thought about using VK_DRIVER_FILES but I'm not sure which icd file is appropriate
<ezzieyguywuf>
I tried VK_DRIVER_FILES=${PWD}/build/install/share/vulkan/icd.d/lvp_icd.x86_64.json but again this seems to return my system-wide llvmpipe
<airlied>
run meson devenv
<airlied>
that is the right json file for lavapipe
<airlied>
maybe try VK_ICD_FILENAMES
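VK_ICD_FILENAMES is the older spelling of VK_DRIVER_FILES and the Vulkan loader still honors it; a minimal sketch of airlied's suggestion, reusing the manifest path from above:

    VK_ICD_FILENAMES=${PWD}/build/install/share/vulkan/icd.d/lvp_icd.x86_64.json vulkaninfo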
<pendingchaos>
"meson devenv -C builddir command" should work
<pendingchaos>
lvp_icd.x86_64.json might have an absolute path to the shared object in the install directory, which would require running ninja install first
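For reference, the ICD manifest is a small JSON file whose library_path has to resolve to the built shared object, which is why the install step can matter; a sketch with illustrative paths and version numbers (not taken from the log):

    {
        "file_format_version": "1.0.0",
        "ICD": {
            "library_path": "/home/user/mesa/build/install/lib/x86_64-linux-gnu/libvulkan_lvp.so",
            "api_version": "1.3.289"
        }
    }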
<ezzieyguywuf>
though I may be looking at the wrong part of the vulkaninfo output: I thought "GPU id : 0 (llvmpipe (LLVM 19.1.7, 256 bits))" meant "Mesa 19.1.7", but I don't think that's right
<pendingchaos>
no, that's the llvm version
<pendingchaos>
search for "driverVersion" in the output
<ezzieyguywuf>
b/c I also see " driverInfo = Mesa 25.1.0-devel (git-3c0e0c3d04) (LLVM 19.1.7)" elsewhere
<ezzieyguywuf>
that's def newer than my system version
<ezzieyguywuf>
> run meson devenv < do I still need to do this? what does this accomplish?
<pendingchaos>
it runs a command with the VK_DRIVER_FILES environment variable set
<pendingchaos>
it works for GL too, and you might find it more convenient than setting the environment variables by hand: less typing, and no ninja install step needed
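A sketch of the GL case, assuming glxinfo is available and the build directory is named build:

    meson devenv -C build glxinfo | grep 'OpenGL version'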
<ezzieyguywuf>
gotcha
<ezzieyguywuf>
so I'm happy with how things look with this local build. is it possible to install it alongside my system-wide mesa? e.g. maybe specify --prefix as /usr/local or something?
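The question goes unanswered in this log, but meson does accept a prefix at configure time; a minimal sketch, with the directory names being assumptions:

    meson setup build-local --prefix=/usr/local
    ninja -C build-local install

On most setups the Vulkan loader also scans /usr/local/share/vulkan/icd.d, but if it doesn't, VK_DRIVER_FILES can still point at the installed manifest.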
<glehmann>
I could only see that happening if you have invocations for different draws in one subgroup. If all invocations are from one subgroup, dynamically uniform values will never be divergent
<tzimmermann>
jfalempe, thanks for the ast review. The draw functions and the format helpers have common code for converting pixel formats. I have a few patches to share the per-pixel functions; I'll post them today or tomorrow
<glehmann>
of course it's possible that divergence analysis can't prove that a value is not divergent, but for things like push constant loads you would typically solve that by forcing the array index to be uniform in the backend isel
<jfalempe>
tzimmermann: ah thanks, I have this on my todo-list for quite some time ;)
<dj-death>
then Anv should replace idx == MESA_VK_ATTACHMENT_UNUSED by (idx == MESA_VK_ATTACHMENT_UNUSED || idx == MESA_VK_ATTACHMENT_NO_INDEX)
<bbrezillon>
are you sure that's correct? NO_INDEX means the depth/stencil buffer might be read by the shader
<bbrezillon>
so you might have a feedback loop here
<bbrezillon>
the problem is, when no input attachment remapping is provided, the depth/stencil attachment is mapped to the NO_INDEX input attachment
<dj-death>
yeah, I don't know
<bbrezillon>
but it will only be read if the shader reads an input attachment with NO_INDEX
<dj-death>
I'll try that commit to see
<bbrezillon>
I tried various things, and the only way I could get rid of this failure was to drop the test on the idx entirely, but I'm sure it's incorrect :-)
<bbrezillon>
I guess what we'd need is some sort of frag_shader_reads_depth_or_stencil test instead
<dj-death>
I guess something is missing the initialization of the dynamic state
<dj-death>
probably at BeginRendering
<bbrezillon>
what do you mean?
<bbrezillon>
the dynamic ial state will expose NO_INDEX for the depth/stencil buffer if no VkRenderingInputAttachmentIndexInfoKHR is provided (either in the pipeline or dynamically through vkCmdSetXxx)
<bbrezillon>
But this is just a remapping information. If the depth/stencil attachment is not read, and the remapping says NO_INDEX for DS, the attachment is still not read
<dj-death>
yeah we're probably leaning too much on the runtime data
<dj-death>
we should look at the pipeline too
<dj-death>
are the Collabora Intel runners offline?
<daniels>
dj-death: yes atm