ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
Thymo has quit [Ping timeout: 480 seconds]
kts has joined #dri-devel
kzd has quit [Quit: kzd]
columbarius has joined #dri-devel
co1umbarius has quit [Ping timeout: 480 seconds]
jewins has quit [Ping timeout: 480 seconds]
benjaminl has quit [Ping timeout: 480 seconds]
heat has quit [Read error: No route to host]
heat has joined #dri-devel
benjaminl has joined #dri-devel
benjaminl has quit [Ping timeout: 480 seconds]
kts has quit [Quit: Konversation terminated!]
kts has joined #dri-devel
benjaminl has joined #dri-devel
sassefa has joined #dri-devel
sassefa has quit [Remote host closed the connection]
sassefa has joined #dri-devel
benjaminl has quit [Ping timeout: 480 seconds]
kzd has joined #dri-devel
sassefa has quit [Ping timeout: 480 seconds]
Thymo has joined #dri-devel
benjaminl has joined #dri-devel
benjaminl has quit [Ping timeout: 480 seconds]
benjaminl has joined #dri-devel
<DemiMarie>
karolherbst: nope, it isn’t purely cosmetic as users can turn off overcommit
<DemiMarie>
karolherbst: using caller-specified GPU VA is necessary for virtGPU native contexts anyway
<DemiMarie>
Is there a reason for not supporting Xe on platforms that support i915, even though Xe offers new features like VM_BIND and (hopefully someday) virtGPU native contexts? Qubes OS may use virtGPU native contexts in the future, and Spectrum OS almost certainly will.
heat has quit [Ping timeout: 480 seconds]
yuq825 has joined #dri-devel
HerrSpliet has joined #dri-devel
alpalcone has quit [Remote host closed the connection]
alpalcone has joined #dri-devel
RSpliet has quit [Ping timeout: 480 seconds]
kts has quit [Quit: Konversation terminated!]
Company has quit [Quit: Leaving]
kts has joined #dri-devel
benjaminl has quit [Ping timeout: 480 seconds]
<HdkR>
t/14
haasn` has joined #dri-devel
rsalvaterra has quit [Quit: No Ping reply in 180 seconds.]
rsalvaterra has joined #dri-devel
aravind has joined #dri-devel
Leopold_ has joined #dri-devel
benjaminl has joined #dri-devel
Leopold__ has quit [Ping timeout: 480 seconds]
benjaminl has quit [Ping timeout: 480 seconds]
bmodem has joined #dri-devel
LaserEyess_ has joined #dri-devel
LaserEyess is now known as Guest1897
LaserEyess_ is now known as LaserEyess
Guest1897 has quit [Ping timeout: 480 seconds]
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
benjaminl has joined #dri-devel
kts has quit [Quit: Konversation terminated!]
benjaminl has quit [Ping timeout: 480 seconds]
Lyude has quit [Quit: Bouncer restarting]
tzimmermann has joined #dri-devel
Lyude has joined #dri-devel
bgs has joined #dri-devel
bgs has quit [Remote host closed the connection]
bgs has joined #dri-devel
bmodem1 has joined #dri-devel
bmodem has quit [Ping timeout: 480 seconds]
benjaminl has joined #dri-devel
ngcortes has quit [Ping timeout: 480 seconds]
i509vcb has quit [Quit: Connection closed for inactivity]
pallavim_ has quit [Read error: Connection reset by peer]
benjaminl has quit [Ping timeout: 480 seconds]
Duke`` has joined #dri-devel
itoral has joined #dri-devel
kzd has quit [Ping timeout: 480 seconds]
bgs has quit [Remote host closed the connection]
sgruszka has joined #dri-devel
YuGiOhJCJ has joined #dri-devel
benjaminl has joined #dri-devel
fab has quit [Quit: fab]
benjaminl has quit [Ping timeout: 480 seconds]
cef has quit [Quit: Zoom!]
rasterman has joined #dri-devel
kts has joined #dri-devel
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
sima has joined #dri-devel
benjaminl has joined #dri-devel
sghuge has quit [Remote host closed the connection]
sghuge has joined #dri-devel
benjaminl has quit [Ping timeout: 480 seconds]
fab has joined #dri-devel
kxkamil2 has quit [Remote host closed the connection]
kxkamil2 has joined #dri-devel
pjakobsson has quit [Ping timeout: 480 seconds]
pjakobsson has joined #dri-devel
kts has quit [Quit: Konversation terminated!]
aravind has quit [Ping timeout: 480 seconds]
benjaminl has joined #dri-devel
pochu has joined #dri-devel
<dj-death>
gfxstrand: I'm trying to get EXT_attachment_feedback_loop working in Anv
<dj-death>
gfxstrand: and I think I'm at the point where some of the changes made to the runtime are not working for us
<dj-death>
gfxstrand: in particular "vulkan,anv,dozen: Use VK_IMAGE_LAYOUT_ATTACHMENT_FEEDBACK_LOOP_OPTIMAL_EXT"
<dj-death>
gfxstrand: the assumption is that feedback loop is equivalent to the old VK_IMAGE_LAYOUT_SUBPASS_SELF_DEPENDENCY_MESA
<dj-death>
gfxstrand: but that breaks a bunch of cases for us, because we have to disable compression for VK_IMAGE_LAYOUT_ATTACHMENT_FEEDBACK_LOOP_OPTIMAL_EXT
<dj-death>
gfxstrand: so with the runtime change, it's disabling compression for a lot more cases than we need to
<dj-death>
gfxstrand: and also there is a disconnect between the layouts inserted by the runtime and what the app is using
<dj-death>
so we get inconsistent layout transitions from the runtime/app
frieder has joined #dri-devel
benjaminl has quit [Ping timeout: 480 seconds]
pcercuei has joined #dri-devel
aswar002 has quit [Remote host closed the connection]
aswar002 has joined #dri-devel
kts has joined #dri-devel
lynxeye has joined #dri-devel
swalker_ has joined #dri-devel
swalker_ is now known as Guest1910
benjaminl has joined #dri-devel
swalker__ has joined #dri-devel
Guest1910 has quit [Ping timeout: 480 seconds]
benjaminl has quit [Ping timeout: 480 seconds]
dtmrzgl1 has quit []
dtmrzgl has joined #dri-devel
rauji___ has quit []
bmodem1 has quit [Ping timeout: 480 seconds]
fab has quit [Quit: fab]
fab has joined #dri-devel
fab has quit [Remote host closed the connection]
benjaminl has joined #dri-devel
cmichael has joined #dri-devel
benjaminl has quit [Ping timeout: 480 seconds]
isoriano has joined #dri-devel
<karolherbst>
DemiMarie: turning off overcommit breaks things, soo....
<karolherbst>
but anyway
<karolherbst>
it's still cosmetic as _nobody_ handles OOM correctly
<karolherbst>
it's almost impossible to do so correctly
<karolherbst>
it's not worth the effort unless your job is to run on embedded platforms with no RAM
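(The overcommit switch being argued about here is the Linux vm.overcommit_memory sysctl; a quick way to inspect it, assuming a Linux box:)

```shell
# 0 = heuristic overcommit (the default), 1 = always overcommit, 2 = never
cat /proc/sys/vm/overcommit_memory
# "turning it off" in the strict sense means mode 2 (as root):
#   sysctl vm.overcommit_memory=2    # see also vm.overcommit_ratio
```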
<javierm>
DemiMarie: for Qubes OS wouldn't something like virtio-wayland be more suitable?
<javierm>
that is, just make whatever is composited in the VM a wayland client ?
<karolherbst>
DemiMarie: developers time?
<karolherbst>
also sometimes it's better to have clear cut inside drivers if hw design changes too much
<karolherbst>
and xe does look like such a point
<qyliss>
javierm: wayland clients still want GPU access sometimes
isoriano has quit [Ping timeout: 480 seconds]
<qyliss>
native contexts are the most promising way to do that
benjaminl has joined #dri-devel
isoriano has joined #dri-devel
isoriano has quit [Remote host closed the connection]
isoriano has joined #dri-devel
<MrCooper>
DemiMarie: if you're talking, your messages aren't getting through to IRC
vliaskov has joined #dri-devel
benjaminl has quit [Ping timeout: 480 seconds]
<dottedmag>
javierm: Qubes in X provides tighter integration than just "show a VM in a box": you can have windows from different VMs overlapping and stacked on top of each other
<qyliss>
dottedmag: that's what virtio-wayland is for
<qyliss>
(although virtio wayland is on the way out, in favour of wayland virtio-gpu contexts, which despite the name do not necessarily involve a GPU)
Haaninjo has joined #dri-devel
<javierm>
qyliss: yeah, it's called virtio-gpu because the QEMU device backend is virtio-gpu, even if you want to use it only for KMS :)
<qyliss>
I think we're still talking about slightly different things
<qyliss>
as with Wayland contexts there's no guest KMS involved
<qyliss>
(and QEMU so far doesn't support it, only crosvm)
<javierm>
qyliss: I know. I was referring to the name of the (not yet existent) virtio-gpu wayland context and the fact that gpu would be in the name
<javierm>
because in that case the virtio-gpu device in the guest would be used as a transport to send the wayland data
isoriano has quit [Remote host closed the connection]
<javierm>
instead of /dev/wl0 or whatever the devnode used in the Android/ChromiumOS guest kernels is called
isoriano has joined #dri-devel
<qyliss>
wdym not yet existent?
<qyliss>
it does exist
<qyliss>
just not existent in QEMU?
<javierm>
qyliss: really? I didn't know that crosvm supported wayland over virtio-gpu contexts
<qyliss>
it has done for a long time
<qyliss>
I don't think it's the default on Chrome OS yet
<javierm>
interesting, so they will be able to drop that downstream chardev driver
<javierm>
qyliss: cool, thanks a lot for the reference
<javierm>
I knew that there were plans to replace the downstream solution using virtio-gpu contexts but didn't know they were implemented yet
<javierm>
qyliss: and yeah, it would be great for qemu to have feature parity with crosvm w.r.t virtio-gpu {native,wayland} contexts
benjaminl has joined #dri-devel
<DavidHeidelberg[m]>
There is a follow-up big MR renaming x86 and amd64 to x86_64, armhf to arm32, and i386 to x86_32: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23282 Please A-b, R-b (generally it gets tested as it builds; the Alpine MR is required for this change)
<dcbaker>
karolherbst: … I’m sorry. I’m being the pain here. First thing when I get into work
<dcbaker>
Sorry
<DemiMarie>
karolherbst: libxml2, libcurl, systemd, ffmpeg, SQLite, wlroots, and BoringSSL *do* handle OOM correctly, and at least libxml2, wlroots, BoringSSL, and SQLite have testing to make sure they do.
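(Handling OOM "correctly" in the minimal sense meant here is checking every allocation and propagating failure to the caller instead of aborting. A hedged sketch of the pattern — a hypothetical helper, not code from any of the projects named above:)

```c
#include <stdlib.h>
#include <string.h>

/* Library-style allocation: on OOM, return NULL and let the caller decide
 * how to recover — no abort(), no logging (which could itself allocate). */
static char *dup_or_null(const char *s)
{
    size_t n = strlen(s) + 1;
    char *p = malloc(n);
    if (p == NULL)
        return NULL;          /* propagate the failure upward */
    memcpy(p, s, n);
    return p;
}
```

As karolherbst notes below, this is the easy part; the hard part is every *caller* up the stack doing something sensible with the NULL.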
benjaminl has quit [Ping timeout: 480 seconds]
<DemiMarie>
karolherbst: my understanding was that Xe supports all Xe-architecture GPUs, which is gen11+ IIUC. Feel free to correct me if I am wrong.
<karolherbst>
yeah, which is like way less than what i915 supports
<DemiMarie>
I know
<karolherbst>
ohh wait, the question was the other way around
<karolherbst>
supporting xe inside i915
<DemiMarie>
No
jfalempe has quit [Ping timeout: 480 seconds]
<DemiMarie>
At least that was not my intent
<karolherbst>
then I think it's hard to understand the question
<DemiMarie>
My intent was to ask why Xe will not replace i915 on the architectures both support.
<karolherbst>
ahh
<karolherbst>
well.. amdgpu has the same situation
<karolherbst>
it's usually done when there is some transition happening
<karolherbst>
but there will be only one default driver for a specific set of hardware
<DemiMarie>
Why not make Xe the default on newer kernels?
<DemiMarie>
It is (presumably) much less code and so less attack surface.
<karolherbst>
that might or might not happen
<karolherbst>
DemiMarie: I am sure it's all broken
<karolherbst>
it might work in some best case OOM situation
<karolherbst>
but it won't work in all OOM situations
<karolherbst>
unless you seriously know what you are doing
<karolherbst>
sure.. some projects might know that, but that's just a handful of things and even then you get cases where it just won't work reliably
<karolherbst>
but for libs it's easier, just return an error and don't log anything, job done
<karolherbst>
and then the application ooms/crashes, because it logs the oom error
<DemiMarie>
karolherbst: my PoV is that crashing on OOM is not a library’s decision to make.
<karolherbst>
yeah and we try our best to return the error
<DemiMarie>
Especially for a lib like Mesa that one cannot avoid using.
<karolherbst>
it's still all cosmetic
<karolherbst>
and just "because we have to do it"
<DemiMarie>
On desktop, probably. On embedded, no.
jfalempe has joined #dri-devel
<karolherbst>
I'm not saying we shouldn't do it, just that it's all best effort and not really worth the effort doing more than just return the error
<pq>
DemiMarie, you mean they handle malloc returning NULL correctly? I suppose one could also fail to find a page to back a growing stack.
<pq>
handling already established mmaps hitting OOM is a whole another... *cough*
<karolherbst>
pq: well that's not a problem if you disable overcommit, no?
<karolherbst>
but yeah...
<daniels>
libwayland-server and libwayland-client don't handle OOM, because we don't expect that every single Wayland call will be bracketed with a check for OOM and safe recovery if you are OOM
<karolherbst>
I'm not convinced any project handles OOM correctly
<pq>
is it? oh yeah, then brk() returns NULL and glibc... does what?
<daniels>
(bearing in mind that safe recovery is basically impossible, because you're not going to be able to call anything useful to help you recover. want to save your file you were working on? good luck.)
<karolherbst>
it crashes?
<karolherbst>
might return an error and sets errno?
<pq>
karolherbst, I mean doesn't glibc call brk() to extend stack? wait, how does that work...
<karolherbst>
I'm sure the error just gets passed to the caller
<pq>
what caller?
<karolherbst>
of glibc
<DemiMarie>
daniels: do you mean that both abort?
<pq>
but I was just calling my own functions and suddenly they need more stack space
<karolherbst>
mhh
<karolherbst>
guess you crash then
<karolherbst>
just disable all stack memory
<pq>
or was there a default size for stack instead of growing it on-the-spot...
<karolherbst>
you still need one on function calls if the function needs stack mem, no?
<karolherbst>
but maybe you have a page of stack by default
<karolherbst>
or something
<karolherbst>
there was something for that
<pq>
every function call more or less needs stack, you need the return address somewhere :-)
<karolherbst>
right.. but you can reuse the same allocation until you need to grow
<karolherbst>
the default is.. 256 bytes?
<pq>
right, so how does the growing part work?
<karolherbst>
or something?
<daniels>
DemiMarie: yes, both abort
<karolherbst>
pq: yeah.. so the compiler inserts stuff to increase it ... :)
benjaminl has joined #dri-devel
<karolherbst>
but if you really care you can also just increase it to something you are sure you won't exceed
<pq>
maybe... or maybe there is just a huge mmap that the kernel populates on demand?
<karolherbst>
well.. how would the kernel know you need more stack mem
<pq>
page fault
<karolherbst>
mhhh
<pq>
the same way all backing pages come into play
<karolherbst>
right...
kts has joined #dri-devel
<pq>
maybe disabling overcommit means that all stack area is populated by real memory from the start?
<karolherbst>
yeah, should be
<karolherbst>
but on an increase you still need to allocate more memory
<pq>
sounds like a big waste :-D
<karolherbst>
it's just 4k
HerrSpliet has quit [Quit: Bye bye man, bye bye]
<karolherbst>
"some" kernel drivers need more for a single function :P
<karolherbst>
but you can disable automatic stack increase and handle it all manually
<karolherbst>
but then figuring out when to increase it is just pure pain
RSpliet has joined #dri-devel
<karolherbst>
and then it crashes if you run out of it
<karolherbst>
so you just don't use stack memory
<pq>
so... if you mmap e.g. 16 MB initially for the stack, populate only the first page, isn't that overcommit? or is that "mmap" completely implicit, so it doesn't count for overcommit but the kernel still populates more pages as needed?
<karolherbst>
and malloc _everything_
<pq>
I'm talking about userspace stack, not kernel stack, btw.
<pq>
they're different I believe
<karolherbst>
if you don't use it, it's not allocated by default, yes
<pq>
userspace can easily eat mega and gigabytes of stack
<karolherbst>
yeah
<karolherbst>
but the point is.. systems that care disable overcommit and handle it manually
<karolherbst>
and then to stay sane they also can't use the stack, because... of those issues
<emersion>
pq, it seems like there's MAP_GROWSDOWN, and the kernel automatically allocates
<pq>
emersion, thanks! That sounds like the logical choice.
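(On Linux the main-thread stack works as described: a mapping the kernel grows on page fault, up to the RLIMIT_STACK soft limit — commonly 8 MiB. A sketch of querying that limit:)

```c
#include <sys/resource.h>

/* Query the soft stack limit the kernel will grow the main thread's
 * stack up to before delivering SIGSEGV (may be RLIM_INFINITY). */
static unsigned long long stack_soft_limit(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0)
        return 0;
    return (unsigned long long)rl.rlim_cur;
}
```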
rasterman has quit [Quit: Gettin' stinky!]
<pq>
so code eats stack om nom nom, page fault, kernel fails to find a free page for more stack, what do you do? SIGSEGV.
benjaminl has quit [Ping timeout: 480 seconds]
<karolherbst>
correct
<karolherbst>
could install a signal handler :)
<karolherbst>
and do the wrong thing
<karolherbst>
could try to free some caches and hope for the best
<pq>
yes, with a preallocated alternate stack, and...
<karolherbst>
yeah....
<pq>
but you can't return from SEGV handler to re-try, can you?
<karolherbst>
fun thing.. how would libraries report their stack use?
<karolherbst>
uhm...
<karolherbst>
I think you can
<karolherbst>
only SIGKILL is fatal
<karolherbst>
though you'd SIGSEGV again if you didn't fix it
<emersion>
pq, you can
<pq>
I'm shocked - and it actually re-tries the faulting instruction?
<karolherbst>
sure
<karolherbst>
and then you end up in a signal handler loop :)
<emersion>
i've only ever used it with longjmp in the handler
<emersion>
to implement a safe{} block for C of course
<emersion>
which is the contrary of Rust's unsafe{} ;)
<karolherbst>
sounds cursed
<emersion>
that's the whole point :P
<hwentlan_>
emersion, melissawen, I don't know if you've had time to look at the color pipeline KMS uAPI... I've been working on a patchset for it, but will probably still take a couple weeks or so before I can post a first iteration (using VKMS) for RFC
<pq>
siglongjmp doesn't work, you don't know where to jump to if a random piece of code in a random lib exceeds your stack allocation? or can you fetch the faulting code address, and...
<karolherbst>
welll...
<karolherbst>
you can parse the stack....
<karolherbst>
(if you have proper stack pointers)
<pq>
hwentlan_, btw. did you get that IRC discussion I pinged you on err... some weeks ago but after the hackfest?
<emersion>
there are also libcs which use a fixed stack size, like musl
<karolherbst>
anyway.. my point was: nobody gets it all correct
<emersion>
hwentlan_: oh, so you're going to implement each color block in VKMS?
<pq>
hwentlan_, about the new generic KMS color pipeline design
<hwentlan_>
pq, I don't remember. Got a link by any chance?
<pq>
lemme see... did emersion bookmark it while on holidays?
<hwentlan_>
emersion, not each color block necessarily but it's a good vehicle to iterate quickly on a qemu VM while working on new KMS API
<emersion>
i read the discussion but don't remember anything about it now
<hwentlan_>
but might implement one or two useful blocks in VKMS if they're easy to implement... like an sRGB EOTF and inv_EOTF, and maybe a matrix... could even do a custom 1D LUT...
<hwentlan_>
reading
<pq>
I don't remember the contents either, but I remember thinking you wanted to see it. :-)
<pq>
*you should see it
<pq>
maybe it's nothing new now
<pq>
hwentlan_, if you do a custom 1D LUT in VKMS with selectable tap distribution, you could implement any enumerated TF with const tables.
<pq>
as an implementation detail of enumerated TFs
<pq>
and implementing inverse would be as easy as swapping the tap distribution array with the LUT value array
<pq>
till tomorrow .o/
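(pq's LUT-with-selectable-tap-distribution idea can be sketched as a piecewise-linear 1D LUT over (tap, value) pairs; the inverse TF is then literally the same evaluator with the two arrays swapped, valid as long as the values are monotonic. Hypothetical helper, not VKMS code:)

```c
#include <stddef.h>

/* Evaluate a 1D LUT given tap positions and values (both ascending),
 * with linear interpolation between taps and clamping outside them. */
static float lut_eval(const float *taps, const float *vals, size_t n, float x)
{
    if (x <= taps[0])
        return vals[0];
    if (x >= taps[n - 1])
        return vals[n - 1];
    for (size_t i = 1; i < n; i++) {
        if (x <= taps[i]) {
            float t = (x - taps[i - 1]) / (taps[i] - taps[i - 1]);
            return vals[i - 1] + t * (vals[i] - vals[i - 1]);
        }
    }
    return vals[n - 1];
}
```

Inverse, per pq: `lut_eval(vals, taps, n, y)` — swap the tap distribution array with the LUT value array.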
bmodem has quit [Ping timeout: 480 seconds]
djbw has joined #dri-devel
benjaminl has joined #dri-devel
pochu has quit [Ping timeout: 480 seconds]
tzimmermann has quit [Quit: Leaving]
benjaminl has quit [Ping timeout: 480 seconds]
benjaminl has joined #dri-devel
<gfxstrand>
dj-death: See also the discussion with cwabbott I've been having the last couple weeks. I think turnip wants something split too.
<gfxstrand>
dj-death: That said, I fear not disabling compression for subpass self-dependencies is also broken, we're just not seeing it in CTS tests. We definitely disable it in those cases on iris and i965.
<dj-death>
gfxstrand: thanks
<dj-death>
gfxstrand: seems to match the testing I did with not disabling compression and having all the feedback loop tests pass :/
<dj-death>
gfxstrand: then what's left is the missing transition I guess
<dj-death>
I'm not sure how we can deal with this
<dj-death>
would need to look at all the barriers, and check if we're in a render pass, then put the source layout back to what the runtime picked
cmichael has quit [Quit: Leaving]
<zmike>
sounds like more cts coverage is needed
<gfxstrand>
dj-death: IDK what you mean.
<gfxstrand>
You should be able to disable compression just based on the layout. The FEEDBACK_LOOP layout disables compression. The only bits you need to flip on and off per-pipeline is the bits about early depth tests.
<dj-death>
but it shows the inconsistent layout transitions
<gfxstrand>
dj-death: Oh, well that's a bug...
<gfxstrand>
I'm surprised we haven't seen that before
<gfxstrand>
I doubt your patch fixes it, though.
<dj-death>
it leaves the initial app layout
<gfxstrand>
Which is to say that we could see that with any combination of initial/final layouts that don't match the subpass layout
<dj-death>
so they're consistent at least in the tests
<dj-death>
I think in this case the app is doing a barrier in the render pass
<dj-death>
maybe we mark image compressed incorrectly
<gfxstrand>
They can't do a layout transition mid-render-pass
<gfxstrand>
Oh, but they are doing a barrier to let them do back-to-back subpass draws.
<gfxstrand>
And that one they're passing GENERAL->GENERAL.
<dj-death>
yeah
<gfxstrand>
Yeah, short of intercepting vkCmdBarrier(), there's no way we can fix that in the runtime.
<dj-death>
yeah
<gfxstrand>
As long as the driver ignores the layout portion of any barrier with identical initial and final layouts, it should be okay.
<dj-death>
I guess I'll get on that tomorrow :)
<dj-death>
it has to be in the runtime I think
<dj-death>
the driver can guess what the runtime did, but if that changes :|
<gfxstrand>
We should at the very least document that corner.
<gfxstrand>
Yeah, that's the limitation of the runtime being a toolbox and not actually hooking everything.
<gfxstrand>
Some days, I question that choice.
<gfxstrand>
We may eventually want to move to a more gallium-like thing where everything goes into the runtime and the runtime calls into drivers instead of the other way around.
<gfxstrand>
Vulkan isn't as thin as it once was.
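(The two driver-side rules gfxstrand states above — compression keyed off the layout alone, and barriers with identical initial/final layouts doing no transition work — could be sketched like this. Hypothetical helper, not actual Anv code; the enum stands in for VkImageLayout:)

```c
#include <stdbool.h>

enum layout {
    LAYOUT_GENERAL,
    LAYOUT_ATTACHMENT_FEEDBACK_LOOP_OPTIMAL,
};

/* Compression is decided purely from the layout: the feedback-loop
 * layout is the one that forces it off. */
static bool compression_allowed(enum layout l)
{
    return l != LAYOUT_ATTACHMENT_FEEDBACK_LOOP_OPTIMAL;
}

/* Ignore the layout portion of a barrier whose initial and final layouts
 * match (e.g. the GENERAL->GENERAL barriers apps issue between subpass
 * draws) — there is nothing to transition. */
static bool barrier_needs_transition(enum layout old_l, enum layout new_l)
{
    return old_l != new_l;
}
```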
jewins has joined #dri-devel
swalker__ has quit [Remote host closed the connection]
<karolherbst>
a new gallium? funky
<karolherbst>
well.. it would help implementing CL in a more modern environment
<karolherbst>
given it has a command buffer extension...
<karolherbst>
so something like gallium, just lower level + explicit command buffer control _might_ be something which _might_ work out
<karolherbst>
but maybe layering it on top of vulkan is possible, but I'm reluctant using zink, because the gallium API is a nightmare for it long term
bgs has joined #dri-devel
frieder has quit [Remote host closed the connection]
<gfxstrand>
I don't think it would be the pluggable mess that gallium is.
<gfxstrand>
Like, it wouldn't be trying to abstract across state trackers.
<gfxstrand>
Just make the layering more clear.
<karolherbst>
yeah.... probably. I mean, we have way more experience with that stuff now anyway :)
jewins has quit [Quit: jewins]
Kayden has quit [Quit: -> JF]
<deathmist1>
hey, trying to bisect AMD RX6600 OGL graphical artifacting where 23.1.x is bad and 23.0.x is good, but 23.0-branchpoint also is bad; what should I try now?
<psykose>
does 23.1 branch from somewhere in 23.x.y? it's possible for a fix to have been picked to 23.0.x but not 23.1 (had that happen before)
<psykose>
so 23.0-branch would be bad but 23.0.something wouldn't be, and 23.1 would also be bad
<deathmist1>
I tested 23.0.0 as good fwiw
<psykose>
hm, not sure then
<eric_engestrom>
deathmist1: if you can bisect on the commits in `main` that's probably going to work better :)
<kisak>
You can do a reverse bisect for what fixed the issue between 23.0-branchpoint and 23.0.0. in a reverse bisect, you're trying to find the commit that fixed the issue, good = bad and bad = good in the git commands
<kisak>
after that, you can cherry pick the commit you find until you're testing newer than that commit in git main.
jhli has quit [Remote host closed the connection]
<eric_engestrom>
deathmist1: kisak's idea is also good, and less work than what I suggested :P
<eric_engestrom>
deathmist1: when you find the commit that fixes the issue for you in the 23.0.x branch, ping me so that I can make sure 23.1.x include the fix as well
<kisak>
otherwise, you could finish the reverse bisect, then follow the (cherry picked from commit...) reference to use the original commit as the known good reference point.
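(kisak's good = bad, bad = good swap is easy to get wrong by hand; git can rename the bisect terms instead. A sketch, assuming git >= 2.7 and the refs mentioned above:)

```shell
# Reverse bisect: hunt for the commit that FIXED the artifacting.
git bisect start --term-new=fixed --term-old=broken
git bisect fixed 23.0.0             # issue gone here
git bisect broken 23.0-branchpoint  # issue present here
# then build + test each checkout git offers, and mark it:
#   git bisect fixed     (artifacting gone)
#   git bisect broken    (artifacting present)
```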
<deathmist1>
kisak: you might be onto something, I see that was in 23.0.0-rc5 and tried 23.0.0-rc1 -> same artifacting I see on 23.1.x, gonna test adding that on top of rc1 now before going further
<Newbyte>
deathmist1: what if you just manually enable/disable glthread?
<Newbyte>
like, without rebuilding. that's possible right?
benjamin1 has quit [Ping timeout: 480 seconds]
<eric_engestrom>
yeah, if I'm not mistaken deathmist1 you can disable it by running `mesa_glthread=false your_app`
ngcortes has joined #dri-devel
<eric_engestrom>
go to any known-bad version and run that to see if it fixes it
<deathmist1>
well I rebuilt anyway and yep looks like that was it, going back to 23.1.1 and trying that as well
<psykose>
wonder why i don't see any issues with the same gpu and versions
<kisak>
At least that tells you it's not really a regression. glthread got enabled by default on radeonsi, but enough feedback came in that it made sense to push that back another 3 months to nail down the rough edges. The issue was there in older mesa versions, but it was hidden away.
sgruszka has quit [Remote host closed the connection]
kts has quit [Quit: Konversation terminated!]
<Newbyte>
psykose: well, are you testing the same thing as deathmist?
<kisak>
The right answer here is to file a bug report so that the radeonsi devs have something to ponder, and disable glthread for that one game until there's a fix to test.
Cyrinux9 has quit []
<psykose>
Newbyte: good point, i keep forgetting i run the compositor in vulkan which isn't the same
<CounterPillow>
I don't see one in drm_crtc_send_vblank_event so this code seems at least wrong in so far as it doesn't balance the put in all cases, unless I'm grossly misunderstanding the code here
<CounterPillow>
Ok apparently the corresponding gets are in a different file somewhere
kts has joined #dri-devel
ngcortes has quit [Read error: Connection reset by peer]
sravn has quit []
sravn has joined #dri-devel
jfalempe has quit [Read error: Connection reset by peer]
<deathmist1>
eric_engestrom: fwiw "mesa_glthread=false epiphany" in my case didn't seem to make a difference with the busted mesa
<deathmist1>
it did print the ATTENTION override messages at least but was ineffective in the end
<jenatali>
anholt: FYI that first patch doesn't compile
<anholt>
fixed
<airlied>
robclark, Kayden : could you quick ack the patches in 23291?
<anholt>
did you audit every driver for needing to report 0 for these?
<airlied>
anholt: yup that's why there's patches in it
<anholt>
I guess since you have svga in there then it's not just driven by CI fails
<anholt>
ack for fd/crocus/iris
<jenatali>
anholt: Looks like I'm seeing a draw with just a VS/FS where the VS outputs all 0s for position
<jenatali>
Seems like this is supposed to be doing a read from a buffer in the FS but since it's not, it's leaving wrong data in the output
<jenatali>
I can keep debugging, just figured I'd throw that out in case it rings a bell
<anholt>
the GS isn't bound?
<jenatali>
Not at the D3D level according to PIX
<anholt>
well, that's certainly information!
<jenatali>
Lemme see if I can find a more scoped-down test for easier debugging, this one loops so much it'd be hard to break at the right spot...
sukrutb has joined #dri-devel
<anholt>
oh, it's not going to be in the GS, because we have failures on non-layered targets. so it's actually the st->pbo.layers being set that's getting us.
orbea has quit [Remote host closed the connection]
orbea has joined #dri-devel
<jenatali>
Oh, I see what's going on
<jenatali>
The VS is overwriting position with 0s
<jenatali>
That nir_store_var with a writemask of (1 << 2) seems to be not behaving appropriately
<jenatali>
Our compiler backend assumes that I/O has been lowered to temps, i.e. each output is only written by one store, where the data is accumulated into a temp first
<jenatali>
For CL/VK we run the appropriate passes, but for GL we just rely on mesa/st doing that for us, but this PBO VS doesn't have all the same lowering done on it that app shaders do
<jenatali>
Of course we could also just implement PIPE_CAP_VS_LAYER_VIEWPORT...
<anholt>
and with that hint, I see the error in v3d as well :)
<jenatali>
Yeah, it's already implemented in the compiler, just a one-liner to add the pipe cap fixes the test for me :P
flto has quit [Remote host closed the connection]
flto has joined #dri-devel
flto has quit [Remote host closed the connection]
flto has joined #dri-devel
<anholt>
I'm going to just make that VS less surprising for backends.
<jenatali>
Yeah, good idea. I'm push that cap anyway, 'cause why not
<jenatali>
pushing*
xantoz has joined #dri-devel
xantoz has quit []
xantoz has joined #dri-devel
sima has quit [Ping timeout: 480 seconds]
djbw has quit [Remote host closed the connection]
iive has joined #dri-devel
vliaskov has quit [Ping timeout: 480 seconds]
sravn has quit [Read error: Connection reset by peer]
sravn has joined #dri-devel
sghuge has quit [Remote host closed the connection]
sghuge has joined #dri-devel
bgs has quit [Remote host closed the connection]
djbw has joined #dri-devel
rasterman has quit [Quit: Gettin' stinky!]
sghuge has quit [Remote host closed the connection]
sghuge has joined #dri-devel
pcercuei has quit [Quit: dodo]
<Kayden>
airlied: iris/crocus patches gets my ack as well
<Kayden>
airlied: you're adding task/mesh in gallium...?
<airlied>
Kayden: just enough bits to bridge lavapipe/llvmpipe