ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
sneil has joined #dri-devel
sneil__ has quit [Ping timeout: 480 seconds]
mlankhorst_ has joined #dri-devel
mhenning has joined #dri-devel
ngcortes has quit [Remote host closed the connection]
mbrost_ has joined #dri-devel
iive has quit []
mbrost has quit [Ping timeout: 480 seconds]
ngcortes has joined #dri-devel
tursulin has quit [Read error: Connection reset by peer]
`join_subline has quit [Ping timeout: 480 seconds]
ybogdano has quit [Ping timeout: 480 seconds]
jewins1 has quit [Remote host closed the connection]
jewins has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
alyssa has joined #dri-devel
<alyssa> zmike: subgroup
* zmike dies
<icecream95> alyssa: You should have forked zmike before saying that, it wouldn't have been so dangerous then
<alyssa> :D
<alyssa> gdb ./zmike
<alyssa> r
* karolherbst dies
* icecream95 passes '-ex run' to avoid typing r
<karolherbst> what I need is an AI setting breakpoints I want without having to actually type those
<icecream95> That's called a segfault
<karolherbst> I guess an AI adding segfaults is more credible
<karolherbst> for v in self.variables_with_mode(nir_variable_mode::nir_var_uniform) :3
co1umbarius has joined #dri-devel
columbarius has quit [Ping timeout: 480 seconds]
sdutt has joined #dri-devel
nchery has quit [Read error: Connection reset by peer]
FireBurn has joined #dri-devel
<FireBurn> It's been a bad week for Horizon Zero Dawn / PRIME gaming. Two kernel bugs, two mesa bugs and a bug in vulkan-loader
<FireBurn> Are there any plans to add a PRIME system into CI?
<karolherbst> with luck there already are some
<karolherbst> would be interesting to know which of the runners have CPUs with iGPUs
`join_subline has joined #dri-devel
<alyssa> cwabbott: This optimization might be too obscure/stupid to matter, but in principle `gl_PointSize = <dynamically uniform>` can be lifted into a preamble shader, saving O(n) memory bandwidth for the write and O(n) again for reading non-constant psiz
<alyssa> and on Bifrost, allows IDVS to be used more often
<alyssa> (Bifrost doesn't allow psiz writes with IDVS due to a hw limitation)
<alyssa> wondering if that would be sane to implement with a backend hook into the preamble infra
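A minimal sketch of the detection half of that idea, assuming NIR's divergence analysis has already run; the preamble-lifting hook itself and any backend glue are the open question above, and the helper name is illustrative:
```c
#include "nir.h"

/* Sketch: is this store_output a gl_PointSize write whose value is
 * dynamically uniform (and thus a candidate for lifting into a preamble
 * shader)?  Assumes nir_divergence_analysis() has been run. */
static bool
psiz_store_is_liftable(nir_intrinsic_instr *intr)
{
   if (intr->intrinsic != nir_intrinsic_store_output)
      return false;

   nir_io_semantics sem = nir_intrinsic_io_semantics(intr);
   if (sem.location != VARYING_SLOT_PSIZ)
      return false;

   /* Dynamically uniform == not divergent across the draw. */
   return !intr->src[0].ssa->divergent;
}
```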
mbrost_ has quit [Ping timeout: 480 seconds]
Daanct12 has joined #dri-devel
thellstrom has quit [Ping timeout: 480 seconds]
ngcortes has quit [Remote host closed the connection]
Emantor has quit [Quit: ZNC - http://znc.in]
Emantor has joined #dri-devel
mhenning has quit [Quit: mhenning]
ella-0 has joined #dri-devel
rgallaispou1 has joined #dri-devel
rgallaispou has quit [Ping timeout: 480 seconds]
ella-0_ has quit [Read error: Connection reset by peer]
kts has joined #dri-devel
linearcannon has quit [Quit: Textual IRC Client: www.textualapp.com]
mclasen has quit [Ping timeout: 480 seconds]
camus has joined #dri-devel
mags has joined #dri-devel
mags has quit []
quiltedstars has joined #dri-devel
Daaanct12 has joined #dri-devel
Daanct12 has quit [Read error: Connection reset by peer]
Daaanct12 is now known as Daanct12
jewins has quit [Ping timeout: 480 seconds]
dinfuehr has joined #dri-devel
dinfuehr_ has quit [Ping timeout: 480 seconds]
YuGiOhJCJ has joined #dri-devel
siqueira has quit []
siqueira has joined #dri-devel
quiltedstars has quit [Remote host closed the connection]
tchar has quit [Read error: Connection reset by peer]
rodrigovivi has quit [Read error: Connection reset by peer]
hwentlan____ has quit [Read error: Connection reset by peer]
eric_engestrom has quit [Read error: Connection reset by peer]
ogabbay has quit [Read error: Connection reset by peer]
ezequielg has quit [Read error: No route to host]
hfink has quit [Read error: Connection reset by peer]
tchar has joined #dri-devel
hwentlan____ has joined #dri-devel
benettig has quit [Read error: No route to host]
krh has quit [Read error: No route to host]
lileo___ has quit [Read error: Connection reset by peer]
lileo___ has joined #dri-devel
ogabbay has joined #dri-devel
ezequielg has joined #dri-devel
siqueira has quit []
robher has quit [Remote host closed the connection]
quantum5 has quit [Remote host closed the connection]
quantum5 has joined #dri-devel
SanchayanMaity has quit [Remote host closed the connection]
melissawen has quit [Quit: ZNC 1.8.2+deb2+b1 - https://znc.in]
LexSfX has quit [Remote host closed the connection]
SanchayanMaity has joined #dri-devel
gpiccoli has quit [Quit: Bears...Beets...Battlestar Galactica]
eric_engestrom has joined #dri-devel
krh has joined #dri-devel
Kayden has quit [Remote host closed the connection]
krushia has quit [Remote host closed the connection]
gpiccoli has joined #dri-devel
hfink has joined #dri-devel
robher has joined #dri-devel
Kayden has joined #dri-devel
benettig has joined #dri-devel
krushia has joined #dri-devel
rodrigovivi has joined #dri-devel
LexSfX has joined #dri-devel
siqueira has joined #dri-devel
melissawen has joined #dri-devel
slattann has joined #dri-devel
slattann has quit []
pnowack has joined #dri-devel
pnowack has quit []
kts has quit [Ping timeout: 480 seconds]
kts has joined #dri-devel
Duke`` has joined #dri-devel
mattrope has quit [Read error: Connection reset by peer]
mclasen has joined #dri-devel
lemonzest has joined #dri-devel
wbrbr has joined #dri-devel
mclasen has quit [Ping timeout: 480 seconds]
thellstrom has joined #dri-devel
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
thellstrom has quit [Ping timeout: 480 seconds]
Daanct12 has quit [Remote host closed the connection]
tanty has quit [Remote host closed the connection]
tanty has joined #dri-devel
tanty has quit []
tanty has joined #dri-devel
ppascher has quit [Quit: Gateway shutdown]
Lucretia has joined #dri-devel
JohnnyonFlame has quit [Ping timeout: 480 seconds]
wbrbr has quit [Remote host closed the connection]
enunes- has joined #dri-devel
enunes has quit [Ping timeout: 480 seconds]
libv_ has joined #dri-devel
libv has quit [Ping timeout: 480 seconds]
enunes- has quit []
enunes has joined #dri-devel
<Newbyte> regarding issue 5510 in Mesa, would hacking around it by ignoring render nodes from Exynos be acceptable upstream?
<Newbyte> until a better solution is in place
<emersion> i'd rather not merge that
<emersion> the proper solution isn't that much work
gouchi has joined #dri-devel
<Newbyte> emersion: do I understand it right that it's this where I'd need to assign something else if Exynos is being used: https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/src/gallium/winsys/kmsro/drm/kmsro_drm_winsys.c#L118
<Newbyte> * it's this place where I'd
<emersion> yeah
<emersion> by checking ro->kms_fd for instance
<Newbyte> thanks
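A sketch of the check emersion suggests, using the usual libdrm way of asking which KMS driver owns ro->kms_fd (where exactly the result feeds into kmsro_drm_winsys.c is the spot Newbyte linked):
```c
#include <stdbool.h>
#include <string.h>
#include <xf86drm.h>

/* Sketch: identify an Exynos KMS fd via drmGetVersion() so kmsro can pick a
 * different renderer for it. */
static bool
kms_fd_is_exynos(int kms_fd)
{
   drmVersionPtr version = drmGetVersion(kms_fd);
   if (!version)
      return false;

   bool is_exynos = strcmp(version->name, "exynos") == 0;
   drmFreeVersion(version);
   return is_exynos;
}
```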
libv_ has quit []
libv has joined #dri-devel
Haaninjo has joined #dri-devel
kts has quit [Ping timeout: 480 seconds]
dinfuehr_ has joined #dri-devel
dinfuehr has quit [Ping timeout: 480 seconds]
<cwabbott> alyssa: there's nothing in there for moving output writes to the preamble
<cwabbott> I guess you'd have to have your own pass to do that
<cwabbott> we already have our own pass to push UBOs in the preamble
JohnnyonFlame has joined #dri-devel
sdutt has quit [Ping timeout: 480 seconds]
macc24 has quit [Ping timeout: 480 seconds]
<jannau> I'm working on the drm driver for the display controller on apple silicon macs. The macbook pro 14"/16" models have rounded corners and a 74-pixel-high cut-out (notch) at the top of the display
<jannau> our initial strategy is to pretend that those top 74 lines do not exist
<jannau> that's easy to implement for simple-framebuffer, just modify the fb pointer and the height
<emersion> jannau: the preferred solution would be to have a hardware DB somewhere
<jannau> I'm not sure how annoying that is to implement in the drm driver. maybe it's enough to modify the height in reported modes (just the native display resolution) and offset the start y position in the actual HW swap calls
<emersion> and just make that info available to compositors
<emersion> there is a thread about this on dri-devel
<emersion> started by a postmarketOS dev
<Newbyte> caleb?
<jannau> yes, passing the information along so compositors can make informed decisions where to display things is of course the preferred solution
<jannau> but based on that thread it is going to take months/years before that is supported in the majority of compositors
<emersion> that's not a good excuse to hack things away IMHO
<emersion> i can type the wlroots part of it if you'd like
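For the simple-framebuffer case jannau describes, the hack really is just pointer arithmetic; a sketch (field and function names illustrative, not the actual driver code):
```c
#include <stdint.h>

#define NOTCH_LINES 74 /* cut-out height on the 14"/16" MacBook Pro panels */

/* Sketch: pretend the top 74 lines do not exist by advancing the framebuffer
 * base past them and shrinking the reported height. */
static void
crop_notch(uint64_t *fb_base, uint32_t *height, uint32_t pitch)
{
   *fb_base += (uint64_t)NOTCH_LINES * pitch;
   *height -= NOTCH_LINES;
}
```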
mclasen has joined #dri-devel
flacks has quit [Quit: Quitter]
flacks has joined #dri-devel
JohnnyonFlame has quit [Ping timeout: 480 seconds]
pcercuei has joined #dri-devel
camus has quit []
camus has joined #dri-devel
camus1 has joined #dri-devel
camus has quit [Remote host closed the connection]
mighty17 has left #dri-devel [#dri-devel]
kts has joined #dri-devel
dinfuehr has joined #dri-devel
dinfuehr_ has quit [Ping timeout: 480 seconds]
maxzor has quit [Ping timeout: 480 seconds]
agx has quit [Remote host closed the connection]
agx has joined #dri-devel
camus1 has quit []
mclasen has quit []
kts has quit [Ping timeout: 480 seconds]
mclasen has joined #dri-devel
kts has joined #dri-devel
kts has quit [Read error: Connection reset by peer]
macc24 has joined #dri-devel
kts has joined #dri-devel
quiltedstars has joined #dri-devel
quiltedstars has quit []
rasterman has joined #dri-devel
camus has joined #dri-devel
JohnnyonFlame has joined #dri-devel
gouchi has quit [Remote host closed the connection]
gouchi has joined #dri-devel
<alyssa> cwabbott: Sure, it'd have to be backend-specific anyway
mclasen has quit []
mclasen has joined #dri-devel
heat has joined #dri-devel
<karolherbst> airlied: one thing I am wondering about is how libclc caching is implemented in clover... how are we making sure that different devices get the same libclc, or is it all the same?
kchibisov_ has joined #dri-devel
<karolherbst> normally I'd expect the cache to be keyed per device anyway.. maybe I should just do that in rusticl and use it for everything :)
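A sketch of keying the cache per device with Mesa's util/disk_cache, as suggested above for rusticl (the parameter names are illustrative):
```c
#include "util/disk_cache.h"

/* Sketch: give each device its own disk-cache namespace, so device-specific
 * compilation results (libclc included) can never collide across devices. */
static struct disk_cache *
create_device_cache(const char *device_name, const char *driver_build_id)
{
   /* Both strings are mixed into every cache key. */
   return disk_cache_create(device_name, driver_build_id, 0);
}
```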
kchibisov_ has quit []
heat has quit [Remote host closed the connection]
kchibisov_ has joined #dri-devel
<alyssa> jannau: there are only like 5 compositors to care about *sweat*
kchibisov_ has quit []
heat has joined #dri-devel
kchibisov_ has joined #dri-devel
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
kchibisov_ has quit []
JohnnyonFlame has quit [Ping timeout: 480 seconds]
kchibisov_ has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
gouchi has quit [Remote host closed the connection]
krushia has quit [Ping timeout: 480 seconds]
mclasen has quit []
mclasen has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
kchibisov_ has quit []
kchibisov_ has joined #dri-devel
kchibisov has quit []
kchibisov_ has quit []
kchibisov has joined #dri-devel
<karolherbst> guess using get_disk_shader_cache is the way to go
<karolherbst> alyssa: just 5?
<alyssa> ...does textureGrad respect sampler lod bias?
<alyssa> (and clamps)
famfo has quit []
famfo has joined #dri-devel
kts_ has joined #dri-devel
famfo has quit []
famfo has joined #dri-devel
kts has quit [Ping timeout: 480 seconds]
famfo has quit []
famfo has joined #dri-devel
heat has quit [Remote host closed the connection]
heat has joined #dri-devel
famfo has quit []
famfo has joined #dri-devel
JohnnyonFlame has joined #dri-devel
<karolherbst> alyssa: mhh, maybe? We do have encoding space for clamps on textureGrad and nir_lower_tex does say "We're ignoring GL state biases for now." when lowering _txd
mclasen has quit []
mclasen has joined #dri-devel
<alyssa> karolherbst: Yeah, that's why I was confused
JohnnyonFlame has quit [Read error: Connection reset by peer]
* alyssa wonders if there are piglits for this
<alyssa> Passed: 17/38 (44.7%)
<alyssa> that's, like, progress :-p
<karolherbst> fun
<karolherbst> I hope all fails are due to hw unsupported _txd ops
<alyssa> currently we do lower_txd and are conformant
<alyssa> trying to wire in the hw support though, since the lower_tex code is... ALU heavy.
<imirkin> alyssa: tex-miplevel-selection piglit has grad tests
<karolherbst> alyssa: I am sure the hw doesn't support all _txd
mclasen has quit []
<imirkin> it doesn't on nvidia. but i think it does on most other chips
<karolherbst> well.. at least not 3D
mclasen has joined #dri-devel
<karolherbst> imirkin: maybe, but there is a lowering pass explicitly for 3d tex grads
<alyssa> imirkin: Ooh, that does seem to test the interaction
<alyssa> thanks
<imirkin> karolherbst: on nvidia? yes.
<karolherbst> no, in nir
<imirkin> oh
<imirkin> i guess someone else had the problem then :)
<karolherbst> somebody has put it there for a reason I think
<karolherbst> :)
<imirkin> i wonder how that works though. on nvidia we implement it by using some pretty low-level concepts...
<karolherbst> check out lower_gradient_cube_map
<imirkin> quadon/off, etc
<karolherbst> the amount of options->lower_txd_* flags
<imirkin> alyssa: that test takes 1000 different args, so try to look at how it's run in the tests/opengl.py file.
<karolherbst> imirkin: tldr: it calculates the lod inside the shader and calls textureLod
<imirkin> oh dear
<karolherbst> yeah..
<imirkin> i guess you gotta do what you gotta do...
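The scheme karolherbst summarizes follows the GL spec's isotropic LOD approximation; a sketch assuming the gradients are already scaled into texel units, with bias/clamp shown as the knobs that would have to be folded in for sampler state to be respected:
```c
#include <math.h>

/* Sketch: the LOD that lowering txd to txl computes inside the shader. */
static float
lod_from_grad_2d(float dudx, float dvdx, float dudy, float dvdy,
                 float bias, float min_lod, float max_lod)
{
   /* rho = max(length(dFdx), length(dFdy)), gradients in texel units */
   float rho_x = sqrtf(dudx * dudx + dvdx * dvdx);
   float rho_y = sqrtf(dudy * dudy + dvdy * dvdy);
   float rho = fmaxf(rho_x, rho_y);

   float lod = log2f(rho) + bias;
   return fminf(fmaxf(lod, min_lod), max_lod);
}
```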
<karolherbst> but not sure if there is a generic way of lowering 3D _txd to 2D _txds without hw specific ops
<imirkin> definitely not
<imirkin> on nvidia we just stuff the "right" fake values into the various lanes and just call texture() :)
<karolherbst> yeah...
<imirkin> "oh look, magically the derivatives work out to what we wanted all along, who could have imagined"
<imirkin> but that's not exactly portable.
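A sketch of that lane-stuffing trick for one coordinate component, assuming the usual 2x2 quad layout and coarse derivatives (lane numbering is illustrative; the real implementation moves the values with quadop/shuffle):
```c
/* Sketch: rewrite each lane's coordinate so the hw's implicit quad
 * derivatives come out as exactly the gradients textureGrad requested:
 * dFdx = lane1 - lane0 and dFdy = lane2 - lane0. */
static float
fake_quad_coord(float u, float dudx, float dudy, unsigned lane_in_quad)
{
   switch (lane_in_quad) {
   case 0:  return u;               /* top-left: the "recentered" lane */
   case 1:  return u + dudx;        /* top-right */
   case 2:  return u + dudy;        /* bottom-left */
   default: return u + dudx + dudy; /* bottom-right */
   }
}
```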
<karolherbst> I wonder...
<karolherbst> maybe with shuffle?
<imirkin> maybe with some of the advanced ops like that yea
<imirkin> i dunno what all has become available with the subgroup exts and whatnot
<karolherbst> I mean.. on supported archs we just shuffle and enable "quad mode" if that's a fitting term
<imirkin> on all arch's...
<imirkin> (for the non-natively-supported variants)
<karolherbst> ehh right.. shuffle was an opt for maxwell
<karolherbst> and kepler2?
<alyssa> Passed: 28/38 (73.7%)
<imirkin> there was always mov's with lanemasks
<alyssa> ...progress?
<imirkin> alyssa: just 10 more...
<imirkin> s/2/3/
<karolherbst> mhh
<imirkin> karolherbst: it's the same since nv50
<karolherbst> yeah.. maybe I have to check what quadop actually does
<imirkin> we use "quadop" which is like a shuffle
<imirkin> it has alu ops built into it
<karolherbst> or my knowledge contradicts what's in the source code
<imirkin> but by being clever, you can get it to do whatever
<alyssa> 2D/3D txd working, cube maps broken
<karolherbst> mhh yeah
<alyssa> apparently, the hw ignores cube maps for txd and treats them like 2D images for face 0
<alyssa> which is... probably not right...
<karolherbst> imirkin: ahh now I know why I am confused.. because the same code works on volta, but we don't have those fancy quadops
<karolherbst> there is just "it's now quad op time" and nothing else
<imirkin> hehe
<imirkin> we've gotten a lot of mileage out of that code. surprising it works across all those gens with very few adjustments
<karolherbst> yeah.. I think we don't correctly understand that quadop stuff :D
<karolherbst> but somehow we got it right
<karolherbst> anyway .. for volta it's really just put into quad op mode and do shuffles more or less, but what that quad op mode is? no idea
<karolherbst> maybe it really is just bundling threads into quads and executing shuffles, or maybe it's something more hw specific
<imirkin> karolherbst: i never understood why recentering it on lane 0 rather than the "natural" lane made things work in vs
<imirkin> but can't fight with reality
<karolherbst> yeah... no clue
<karolherbst> my only guess is that our understanding is wrong
<imirkin> it's what the blob did, and what made tests pass, so ... yea
<alyssa> Ooh my first instruction taking more than 4 registers of inputs, exciting!
<alyssa> (^ of staging inputs, more than 6 registers total)
<karolherbst> nice
<alyssa> 64-bit gradient descriptor + 32-bit array index + 32-bit shadow comparator + 32-bit offsets
<karolherbst> I think our longest instructions take 8 regs
<imirkin> yeah, 2x group of 4
<imirkin> which is also why we don't have native tex grad for > 2d
<imirkin> too many values. need a lot more than 8.
<alyssa> it's split into 2 instr on mali
<karolherbst> although I am wondering if volta changed that
<alyssa> "calculate gradient descriptor from derivatives" + "sample with gradient descriptor"
kts_ has quit [Ping timeout: 480 seconds]
<karolherbst> potentially we have three slots for sources
<imirkin> alyssa: ah, smart
<karolherbst> so I guess the max would be 12 + random stuff
<imirkin> karolherbst: maybe? should trace blob to see what it does with textureGrad(sampler3D)
<karolherbst> it's not supported
<imirkin> ?
<karolherbst> by the hw I mean
<imirkin> you mean it does the lowering?
<imirkin> ah ok
<alyssa> dEQP-GLES3.functional.shaders.texture_functions.texturegradoffset.sampler2darrayshadow_fragment
pochu has quit [Quit: leaving]
* alyssa stares
<imirkin> alyssa: fwiw the _vertex ones fail for everyone for some reason
<imirkin> i've mostly given up on them
<alyssa> ...does the gradient descriptor depend on the array index?
<alyssa> i'm pretty sure no.
<imirkin> no.
<alyssa> ok then that's not my problem
<karolherbst> imirkin: BMMA potentially
<imirkin> ?
<karolherbst> warp bit matrix multiply
<karolherbst> and add
<karolherbst> it does take three vector sources
<karolherbst> yeah.. but only 10 values at most I think
<karolherbst> although no idea
<alyssa> Oh fricking heck
<alyssa> Exposes a RA bug
<alyssa> icecream95: 5 staging registers doesn't work with 8-bit linear*
<alyssa> the obvious fix probably breaks your opts, so punting on it
<alyssa> we do not seem to optimize txd(constant)
mclasen has quit []
Akari has quit [Ping timeout: 480 seconds]
mclasen has joined #dri-devel
jewins has joined #dri-devel
digetx has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
digetx has joined #dri-devel
kts_ has joined #dri-devel
hikiko has quit [Remote host closed the connection]
hikiko has joined #dri-devel
Akari has joined #dri-devel
tzimmermann has joined #dri-devel
sdutt has joined #dri-devel
rasterman has quit [Quit: Gettin' stinky!]
ppascher has joined #dri-devel
kchibisov has quit [Remote host closed the connection]
kchibisov has joined #dri-devel
kchibisov has quit []
kchibisov has joined #dri-devel
kchibisov has quit []
danvet has joined #dri-devel
kchibisov has joined #dri-devel
yogesh_mohan has quit [Ping timeout: 480 seconds]
<karolherbst> the fun can begin...
mhenning has joined #dri-devel
yogesh_mohan has joined #dri-devel
gouchi has joined #dri-devel
<karolherbst> imagine passing 60% of all vulkan/OpenGL tests without running a single shader ...
<imirkin> just have to have enough glGet() tests...
<karolherbst> guess that's easier for GL, as the vulkan CTS has like hundreds of thousands of tests?
kchibisov has quit []
<karolherbst> the fun tests are the ones that pass even though you are supposed to run something
<karolherbst> 1: mad fp32 ................passed 0.00 @ {0x0p+0, 0x0p+0, 0x0p+0} and I have no idea why
jewins has quit [Ping timeout: 480 seconds]
gouchi has quit [Remote host closed the connection]
<airlied> karolherbst: I don't think there is any way to get different libclc's per device
<karolherbst> airlied: not the libclc itself, but the generated nir_shader can be different per device
gouchi has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
<karolherbst> I mean.. we do pass in device specific values into nir_load_libclc_shader
<karolherbst> maybe they don't matter for the result, but...
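If those device-specific values do matter, one conservative option is folding them into the cache key, e.g. via disk_cache_compute_key (the fields hashed here are purely illustrative):
```c
#include "util/disk_cache.h"

/* Sketch: hash the device-specific inputs to nir_load_libclc_shader into the
 * cache key, so a hit implies the generated NIR would have been identical. */
static void
libclc_cache_key(struct disk_cache *cache, uint32_t address_bits,
                 uint32_t subgroup_size, cache_key key)
{
   const uint32_t inputs[2] = { address_bits, subgroup_size };
   disk_cache_compute_key(cache, inputs, sizeof(inputs), key);
}
```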
HerrSpliet has joined #dri-devel
RSpliet has quit [Ping timeout: 480 seconds]
mclasen has quit []
mclasen has joined #dri-devel
danvet has quit [Ping timeout: 480 seconds]
mclasen has quit []
mclasen has joined #dri-devel
rasterman has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
LexSfX has quit [Ping timeout: 480 seconds]
kchibisov has joined #dri-devel
mclasen has quit []
LexSfX has joined #dri-devel
jewins has joined #dri-devel
mclasen has joined #dri-devel
angerctl has quit [Ping timeout: 480 seconds]
gouchi has quit [Remote host closed the connection]
kchibisov has quit []
kchibisov has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
angerctl has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
heat has quit [Remote host closed the connection]
mclasen has quit []
mclasen has joined #dri-devel
lemonzest has quit [Quit: WeeChat 3.4]
Haaninjo has quit [Quit: Ex-Chat]
kts_ has quit [Ping timeout: 480 seconds]
angerctl has quit [Ping timeout: 480 seconds]
<Lynne> airlied: the video decode spec seems to have been updated, VkVideoDecodeCapabilitiesKHR needs to be set in VkVideoCapabilitiesKHR.pNext
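For reference, the updated query Lynne describes looks like this (profile setup omitted; a sketch, not lifted from the spec):
```c
#include <vulkan/vulkan.h>

/* Sketch: chain VkVideoDecodeCapabilitiesKHR into the pNext of
 * VkVideoCapabilitiesKHR before querying. */
VkVideoDecodeCapabilitiesKHR decode_caps = {
    .sType = VK_STRUCTURE_TYPE_VIDEO_DECODE_CAPABILITIES_KHR,
};
VkVideoCapabilitiesKHR caps = {
    .sType = VK_STRUCTURE_TYPE_VIDEO_CAPABILITIES_KHR,
    .pNext = &decode_caps,
};
/* vkGetPhysicalDeviceVideoCapabilitiesKHR(phys_dev, &profile, &caps); */
```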
Duke`` has quit [Ping timeout: 480 seconds]
kts_ has joined #dri-devel