ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
heat_ has joined #dri-devel
heat has quit [Read error: No route to host]
jewins has joined #dri-devel
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
Leopold_ has quit [Ping timeout: 480 seconds]
Leopold_ has joined #dri-devel
co1umbarius has joined #dri-devel
columbarius has quit [Ping timeout: 480 seconds]
<robclark>
DemiMarie: I mean pinning all the memory would make things simpler, but it would be quite suboptimal unless you consider infinite RAM a for-free thing
<DemiMarie>
robclark: this is where I wish graphics supported recoverable page faults
<robclark>
hmm, some things can.. but would perhaps not be optimal.. I mean page fault on cpu stalls one thread.. page fault on gpu stalls perhaps 100's of threads
<robclark>
so even if we *could* do it (depending on hw and gen) doesn't mean we *should*
thellstrom has quit [Ping timeout: 480 seconds]
Hmeedo has joined #dri-devel
Hmeedo has quit [Remote host closed the connection]
Danct12 has quit [Remote host closed the connection]
Danct12 has joined #dri-devel
smilessh has joined #dri-devel
smiles_1111 has quit [Read error: Connection reset by peer]
Company has quit [Quit: Leaving]
jewins has quit [Ping timeout: 480 seconds]
heat_ has quit [Read error: No route to host]
heat_ has joined #dri-devel
Danct12 is now known as Guest10484
Danct12 has joined #dri-devel
bmodem has joined #dri-devel
<robclark>
luc: that could be problematic but from a quick look the panfrost shrinker looks like it doesn't wait on fences.. so probably too primitive to run into problems.. also the count_objects impl iterating shrinker_list under a device-global lock is going to mean the system is unusable under sufficient memory pressure before you hit the reclaim deadlock problem
<robclark>
so basically not a problem simply because you have bigger problems ;-)
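(An illustrative sketch of the pattern robclark describes, not the actual panfrost code: a shrinker whose count_objects walks every buffer on a device-global list under a single lock. All names here — my_device, my_bo, shrinker_list — are invented for the example.)

    #include <linux/kernel.h>
    #include <linux/list.h>
    #include <linux/mutex.h>
    #include <linux/shrinker.h>

    struct my_bo {
        struct list_head shrinker_link;
        size_t page_count;
    };

    struct my_device {
        struct mutex shrinker_lock;     /* one lock for the whole device */
        struct list_head shrinker_list; /* every purgeable BO on the device */
        struct shrinker shrinker;
    };

    static unsigned long
    my_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
    {
        struct my_device *dev = container_of(shrinker, struct my_device, shrinker);
        struct my_bo *bo;
        unsigned long count = 0;

        /* O(n) walk under a device-global mutex: every reclaim pass pays this
         * cost, so under heavy memory pressure the system serializes on this
         * lock long before any fence-wait deadlock in scan_objects matters. */
        mutex_lock(&dev->shrinker_lock);
        list_for_each_entry(bo, &dev->shrinker_list, shrinker_link)
            count += bo->page_count;
        mutex_unlock(&dev->shrinker_lock);

        return count;
    }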
Fijxu has joined #dri-devel
Fijxu_ has joined #dri-devel
aravind has joined #dri-devel
epoll has quit [Ping timeout: 480 seconds]
Fijxu has quit [Remote host closed the connection]
epoll has joined #dri-devel
i509vcb has quit [Quit: Connection closed for inactivity]
chipxxx has quit [Remote host closed the connection]
chipxxx has joined #dri-devel
chipxxx has quit [Remote host closed the connection]
i-garrison has joined #dri-devel
danvet has joined #dri-devel
chipxxx has joined #dri-devel
chipxxx has quit [Remote host closed the connection]
chipxxx has joined #dri-devel
<mareko>
cwabbott: I can only see block-level dominance, which is used by nir_opt_cse
itoral has joined #dri-devel
<mareko>
cwabbott: there is some interesting stuff we could do with instruction-level dominance: if an instruction dominates 2 outputs, the instruction result can become a new output, and the dominated instructions computing the 2 outputs can be moved into the next shader; also, if an instruction post-dominates 2 inputs, the instruction result can become a new input and the post-dominated instructions can be
<mareko>
moved into the previous shader
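(A toy illustration of that idea, with plain C standing in for shader code and all names — stage_a, stage_b, t — invented: in stage_a below, t dominates both outputs, so t could become the new cross-stage output and the two multiplies could move into the next stage.)

    /* Before: stage A computes both outputs from a common value. */
    void stage_a(float in0, float in1, float *out0, float *out1)
    {
        float t = in0 + in1; /* dominates both outputs */
        *out0 = t * 2.0f;
        *out1 = t * 3.0f;
    }

    /* After the hypothetical transform: 't' itself becomes stage A's output,
     * the dominated computations move into stage B, and only one value
     * crosses the stage boundary instead of two. */
    void stage_a_new(float in0, float in1, float *out_t)
    {
        *out_t = in0 + in1;
    }

    void stage_b_new(float t, float *out0, float *out1)
    {
        *out0 = t * 2.0f;
        *out1 = t * 3.0f;
    }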
Danct12 is now known as Guest10491
Danct12 has joined #dri-devel
Guest10491 has quit [Remote host closed the connection]
Danct12 is now known as Guest10494
Danct12 has joined #dri-devel
Guest10494 has quit [Ping timeout: 480 seconds]
robobub_ has joined #dri-devel
sathish has joined #dri-devel
Danct12 is now known as Guest10497
Danct12 has joined #dri-devel
Guest10497 has quit [Ping timeout: 480 seconds]
Danct12 is now known as Guest10498
Danct12 has joined #dri-devel
Guest10498 has quit [Ping timeout: 480 seconds]
fab has joined #dri-devel
fab has quit [Read error: Connection reset by peer]
garrison has joined #dri-devel
i-garrison has quit [Ping timeout: 480 seconds]
jaganteki has quit [Remote host closed the connection]
macromorgan has quit [Read error: Connection reset by peer]
macromorgan has joined #dri-devel
Guest10484 has quit [Ping timeout: 480 seconds]
sghuge has joined #dri-devel
pochu has joined #dri-devel
Duke`` has joined #dri-devel
YuGiOhJCJ has joined #dri-devel
<tomeu>
David Heidelberg: not sure, but I would expect to have picked it up from the tree of a freedreno dev?
jaganteki has joined #dri-devel
Haaninjo has joined #dri-devel
godvino has joined #dri-devel
godvino has quit [Quit: WeeChat 3.6]
Lyude has quit [Read error: Connection reset by peer]
Lyude has joined #dri-devel
aravind has quit [Ping timeout: 480 seconds]
aravind has joined #dri-devel
aravind has quit [Remote host closed the connection]
aravind has joined #dri-devel
aravind has quit [Ping timeout: 480 seconds]
aravind has joined #dri-devel
rasterman has joined #dri-devel
heat_ has quit [Remote host closed the connection]
MajorBiscuit has quit [Ping timeout: 480 seconds]
heat has joined #dri-devel
heat has quit [Remote host closed the connection]
heat has joined #dri-devel
jewins has joined #dri-devel
Danct12 has quit [Quit: WeeChat 3.8]
jdavies has joined #dri-devel
jdavies is now known as Guest10523
Guest10523 has quit []
kts has quit [Quit: Konversation terminated!]
heat has quit [Read error: No route to host]
heat has joined #dri-devel
bmodem has quit [Remote host closed the connection]
bmodem has joined #dri-devel
kts has joined #dri-devel
fxkamd has joined #dri-devel
yuq825 has quit []
alarumbe has quit [Quit: ZNC 1.8.2+deb2 - https://znc.in]
alarumbe has joined #dri-devel
kts has quit [Quit: Konversation terminated!]
kzd has joined #dri-devel
Zopolis4_ has quit []
Leopold_ has quit [Ping timeout: 480 seconds]
kts has joined #dri-devel
<mareko>
forking nir_dominance.c and replacing nir_block with nir_instr did the job :)
MajorBiscuit has joined #dri-devel
<zmike>
mareko: any other comments on my glthread MR? would like to get that merged before branch
heat has quit [Remote host closed the connection]
heat has joined #dri-devel
aravind has quit [Ping timeout: 480 seconds]
stuarts has joined #dri-devel
mhenning has joined #dri-devel
cheako has joined #dri-devel
garrison has quit []
<cwabbott>
mareko: uhh, no, that would be a terrible idea
<cwabbott>
don't do that
<cwabbott>
I'm not at a computer right now so I can't tell you the exact name, but we have something to tell you whether an instruction dominates another
krushia has quit [Ping timeout: 480 seconds]
<cwabbott>
replicating all the dominance stuff per instruction when dominance within a block is trivial would be... not smart
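(A minimal sketch of the point cwabbott is making, using hypothetical struct and function names rather than the real NIR API: per-instruction dominance falls out of block dominance plus the instruction's position within its block, so no separate per-instruction dominator tree is needed.)

    #include <stdbool.h>

    struct block {
        unsigned dom_pre_index;  /* preorder number in the dominator tree */
        unsigned dom_post_index; /* postorder number in the dominator tree */
    };

    struct instr {
        struct block *block;
        unsigned index;          /* position within its block */
    };

    /* O(1) block dominance query via pre/post-order numbering of the
     * dominator tree: a dominates b iff a is an ancestor of b. */
    static bool block_dominates(const struct block *a, const struct block *b)
    {
        return a->dom_pre_index <= b->dom_pre_index &&
               a->dom_post_index >= b->dom_post_index;
    }

    static bool instr_dominates(const struct instr *a, const struct instr *b)
    {
        if (a->block == b->block)
            return a->index < b->index; /* within a block, order is dominance */
        return block_dominates(a->block, b->block);
    }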
Leopold_ has joined #dri-devel
mbrost has joined #dri-devel
kts has quit [Quit: Konversation terminated!]
pochu has quit [Quit: leaving]
i509vcb has joined #dri-devel
fxkamd has quit []
Leopold__ has joined #dri-devel
Leopold__ has quit []
mbrost has quit [Remote host closed the connection]
mbrost has joined #dri-devel
mhenning has quit [Quit: mhenning]
Leopold_ has quit [Ping timeout: 480 seconds]
i-garrison has joined #dri-devel
Leopold_ has joined #dri-devel
khfeng has quit [Ping timeout: 480 seconds]
bmodem has quit [Ping timeout: 480 seconds]
garrison has joined #dri-devel
i-garrison has quit [Read error: Connection reset by peer]
<tomeu>
luc: don't know if it has changed since (it may have because of OpenCL), but it used to be that such a job would have timed out in the kernel by then
<DavidHeidelberg[m]>
mareko: could `mesa: Enable NV_texture_barrier in GLES2+` introduce a failure in the SKQP test gles_lcdblendmodes?
<DavidHeidelberg[m]>
more precisely, a flake failure
<DavidHeidelberg[m]>
asking because the flake was first seen in this MR
<tomeu>
I'm a bit lost trying to figure out what is lowering my clz to ufind_msb
<tomeu>
the clz is in the spirv, but something is lowering it to ufind_msb even if I have .lower_uclz = false,
<tomeu>
and I cannot find what code is doing that, even if such an optimization is in nir_opt_algebraic.py
<tomeu>
but I don't see it in nir_opt_algebraic.c
<tomeu>
my GPU does have a CLZ instruction, that's why I would prefer not to have that lowering
<jenatali>
tomeu: Have you tried NIR_DEBUG?
<tomeu>
jenatali: ufind_msb is already there in the first dumped shader
<tomeu>
it is as if it was done by vtn, but I haven't found the code that does it
<jenatali>
Huh weird
<tomeu>
oh crap, I see now what is going on
<tomeu>
nir_clz_u isn't really emitting clz()
<tomeu>
it is as if the lowering pass had been moved up to vtn
<tomeu>
I assumed that nir_clz_u would be emitting nir_op_uclz
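(A small sketch of the relationship behind that lowering, assuming the usual semantics where ufind_msb returns the bit index of the most significant set bit, or -1 for zero: a 32-bit clz is then 31 - ufind_msb(x), which also yields 32 for x == 0. These are plain C stand-ins, not the actual NIR builder calls.)

    #include <assert.h>
    #include <stdint.h>

    /* Reference model of ufind_msb: index of the most significant set bit,
     * or -1 when no bit is set. */
    static int ufind_msb(uint32_t x)
    {
        for (int bit = 31; bit >= 0; bit--)
            if (x & (1u << bit))
                return bit;
        return -1;
    }

    /* clz built from ufind_msb, matching OpenCL's clz(0) == 32. */
    static int clz32(uint32_t x)
    {
        return 31 - ufind_msb(x);
    }

    int main(void)
    {
        assert(clz32(0x80000000u) == 0);
        assert(clz32(1u) == 31);
        assert(clz32(0u) == 32);
        return 0;
    }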
<tomeu>
airlied: that seems to be your doing, do you remember why you went that way?
<zmike>
he's gone on holiday for some weeks
<DavidHeidelberg[m]>
sorry mareko, the ping about `mesa: Enable NV_texture_barrier in GLES2+` was a mistake, it should have been for ajax
iive has joined #dri-devel
djbw has joined #dri-devel
<robclark>
DavidHeidelberg[m]: it does seem highly likely that NV_texture_barrier could be implicated in the gles_lcdblendmodes flake.. was it already marked flaky for the gl_lcdblendmodes version of the test?
<DavidHeidelberg[m]>
cmarcelo: could you stop and postpone run of the jobs radeonsi-raven-* from your pipeline?
<DavidHeidelberg[m]>
we're currently having outages and I need to verify some flakes hitting main mesa
<DavidHeidelberg[m]>
outage (only 2 of 5 machines are available :( )
<cmarcelo>
DavidHeidelberg[m]: sure
<DavidHeidelberg[m]>
Thanks 🙏
kzd_ has joined #dri-devel
<cmarcelo>
DavidHeidelberg[m]: I think I managed to cancel all of them.
JohnnyonFlame has joined #dri-devel
<DavidHeidelberg[m]>
yup, thank you!
kzd has quit [Ping timeout: 480 seconds]
heat has quit [Remote host closed the connection]
heat has joined #dri-devel
kzd_ has quit []
soreau has quit [Ping timeout: 480 seconds]
ngcortes has joined #dri-devel
kzd has joined #dri-devel
junaid has joined #dri-devel
krushia has joined #dri-devel
<cmarcelo>
anyone familiar with the venus-lavapipe CI job? I'm trying to reproduce a failure locally (via vtest), but it's not clear I have the setup right. am I supposed to be able to test it using the vtest bypass as described at https://docs.mesa3d.org/drivers/venus.html#vtest -- is that a correct way to reproduce that setup, or am I missing something?
<cmarcelo>
in particular the vulkan software implementation (I'm assuming this is what is called "lavapipe") doesn't seem to be in the loop, as the virgl_test_server seems to use an underlying GL implementation.
soreau has joined #dri-devel
Haaninjo has joined #dri-devel
junaid has quit [Remote host closed the connection]
avoidr_ has joined #dri-devel
<anholt>
cmarcelo: I've successfully tested with vtest using those instructions. virgl_test_server does probably do some gl work at startup, but if you're running a vk cts test, then I don't see how you'd end up with anything other than vulkan doing real work?
<anholt>
cmarcelo: note that vtest won't be exactly the same, since venus-lavapipe is running an actual VM (crosvm-runner.sh). but most likely any refactor you're doing would be reproducible across just vtest.
avoidr has quit [Ping timeout: 480 seconds]
<cmarcelo>
anholt: I guess I was misled by the gl work at startup. it makes more sense now.
<cmarcelo>
I still don't get the failure itself but the vtest server side seems unhappy when executing it: vtest_resource_create_blob called virgl_renderer_resource_export_blob which failed (-22)
<cmarcelo>
going deeper, it turns out it hits VIRGL_RENDERER_BLOB_FD_TYPE_OPAQUE when handling the export blob case. wondering if I'm hitting a limitation of what vtest can do here.
Duke`` has quit [Ping timeout: 480 seconds]
<cmarcelo>
the fact that it also happens on the main branch makes me think that's the case.
<cmarcelo>
and... test passes if I use vulkan software impl directly. :(
<cmarcelo>
anholt: how experimental is venus? trying to figure out whether this is a case of adding a skip and moving on, or whether I should keep digging?
<DavidHeidelberg[m]>
cmarcelo: one lead can be CI failure rate for venus jobs 😉 but it's getting more stable recently
<cmarcelo>
DavidHeidelberg[m]: how can I see this?
<cmarcelo>
the MR is currently being blocked by this venus job failing, so I was assuming it was stable/passing
<DavidHeidelberg[m]>
cmarcelo: when you open issues, filter by CI tag (usually most recent report is in "open", older in "closed")
<DavidHeidelberg[m]>
s/CI/CI Daily/
<DavidHeidelberg[m]>
Yeah, currently it should be fairly stable, if you repeat the job run and it still fails, it's probably your mistake :D
<cmarcelo>
anholt: in https://docs.mesa3d.org/drivers/venus.html the instructions for using crosvm depends on having a valid image, is there an easy way for me to reproduce locally the image CI uses for this?
<anholt>
cmarcelo: top of the job log should have a fetching of the rootfs, I'd just download that and unpack it to find the image ci is using.
<anholt>
cmarcelo: ah, right. the docker container's what I actually mean (harbor.freedesktop.org/mesa/mesa/debian/x86_test-vk:2023-04-02-piglit-2391a83d--2023-03-27-virglrenderer-crosvm--d5aa3941aa03c2f716595116354fb81eb8012acb). I've been reading too many lava logs.
rasterman has joined #dri-devel
<daniels>
tomeu: looks like nir_op_uclz was added in an MR which was written before SPIR-V CL, but SPIR-V CL got merged first; when the MR to add op_uclz was finally ready it was only ever hooked up for GL and never fixed the vtn path
oneforall2 has quit [Ping timeout: 480 seconds]
luc has quit [Remote host closed the connection]
ngcortes has quit [Ping timeout: 480 seconds]
rasterman has quit [Quit: Gettin' stinky!]
MajorBiscuit has quit [Quit: WeeChat 3.6]
gouchi has quit [Remote host closed the connection]
<daniels>
robclark or MrCooper might be around
Zopolis4_ has joined #dri-devel
<i509vcb>
irccloud doesn't really show who is an op here
<kchibisov>
i509vcb: that's because you have to ask ChanServ to get op or something.
avoidr_ has quit []
<daniels>
yeah, /m chanserv access #dri-devel list
<i509vcb>
yeah, it seems like that. I remember a fun drama event where someone said someone else didn't have op and no one knew until the *!*@* ban was whipped out
<daniels>
robclark: thanks
<robclark>
np
<DavidHeidelberg[m]>
mareko: imho the MR breaks previously working GLES on raven; why should I add a flake entry when it'll stay broken for raven? Some stuff has probably started being flaky/broken
<DavidHeidelberg[m]>
it would make sense to disable it for the affected HW until it's resolved, but adding a flake entry for code which worked just fine before this feature landed seems really wrong
<robclark>
if that skqp test was failing 80% of the time that would probably be problematic IRL (chrome/ium heavily uses skia and skqp is part of android cts)
<DavidHeidelberg[m]>
also LibreOffice uses Skia
<DavidHeidelberg[m]>
though probably not as often on ES2+
ngcortes has joined #dri-devel
Fijxu has joined #dri-devel
mbrost has quit [Remote host closed the connection]
heat has quit [Remote host closed the connection]
heat has joined #dri-devel
<mareko>
re-assigned to marge
Fijxu has quit []
heat has quit [Remote host closed the connection]
heat has joined #dri-devel
Fijxu has joined #dri-devel
Haaninjo has quit [Quit: Ex-Chat]
oneforall2 has joined #dri-devel
ngcortes_ has joined #dri-devel
Zopolis4_ has quit []
iive has quit [Quit: They came for me...]
ngcortes has quit [Ping timeout: 480 seconds]
Haaninjo has joined #dri-devel
<mareko>
cwabbott: ssa_def_dominates() uses block dominance in combination with the instruction index to determine which instruction is first, which is ok for some uses and very fast, but it's not true dominance of the SSA def graph. A true SSA def dominance would have to use nir_foreach_src to walk the graph. Also, it would have to handle the case that an immediate dominator doesn't exist for loads that don't