ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
gawin has quit [Ping timeout: 480 seconds]
<imirkin>
ah, at least div-by-imm
sdutt has joined #dri-devel
anujp has quit [Ping timeout: 480 seconds]
Haaninjo has quit [Quit: Ex-Chat]
anujp has joined #dri-devel
iive has quit []
bgs has quit [Ping timeout: 480 seconds]
bgs has joined #dri-devel
<mareko>
zmike: I just inlined it and set both TC flags
<zmike>
mareko: you mean in the si subdata hook or ?
Lucretia has quit []
Viciouss has quit [Ping timeout: 480 seconds]
Lucretia has joined #dri-devel
rasterman has quit [Quit: Gettin' stinky!]
dllud has joined #dri-devel
Viciouss has joined #dri-devel
bgs has quit [Read error: Connection reset by peer]
bgs has joined #dri-devel
tursulin has quit [Ping timeout: 480 seconds]
ella-0_ has joined #dri-devel
<jessica_24>
hey vsyrjala: I saw your comment on my patch (https://patchwork.freedesktop.org/patch/466272/?series=97875&rev=1), but was confused about where it's guaranteed that the nonblocking flip will never be put on the commit queue before the cursor ioctl is called. Is this guaranteed somewhere within the code?
ella-0 has quit [Read error: Connection reset by peer]
Company has quit [Quit: Leaving]
mclasen has quit []
mclasen has joined #dri-devel
fxkamd has quit []
mclasen has quit []
mclasen has joined #dri-devel
co1umbarius has joined #dri-devel
columbarius has quit [Ping timeout: 480 seconds]
bluebugs has quit [Read error: No route to host]
bluebugs has joined #dri-devel
<jenatali>
There we go, DrawAuto works now
<imirkin>
yay, the 0 applications which use this feature will finally work
<jenatali>
Yup
<HdkR>
I'm sure the test cases that abuse the API can be considered applications right? :P
oneforall2 has quit [Read error: Connection reset by peer]
<zmike>
mareko: right, though you do still have one
mbrost_ has joined #dri-devel
shankaru has joined #dri-devel
Mooncairn has quit [Quit: Quitting]
mbrost has quit [Ping timeout: 480 seconds]
cef is now known as Guest727
cef has joined #dri-devel
Guest727 has quit [Ping timeout: 480 seconds]
ngcortes has quit [Ping timeout: 480 seconds]
<zmike>
mareko: in any case, I think it'd still be nice to have a util function for it
<mareko>
only 2 TC flags are needed AFAIK
<zmike>
having to list out the flags everywhere is cumbersome, and it's also not the most intuitive for new contributors
<mareko>
probably
<zmike>
that's why I linked the MR since it has the tc patch for it
ybogdano has quit [Ping timeout: 480 seconds]
Akari` has quit []
Akari has joined #dri-devel
kevinx has joined #dri-devel
<kevinx>
daniels: https://lore.kernel.org/lkml/20220117083820.6893-1-kevin3.tang@gmail.com/ The review has been done, could you help commit it to drm-misc?
Duke`` has joined #dri-devel
ppascher has quit [Ping timeout: 480 seconds]
dllud_ has joined #dri-devel
dllud has quit [Read error: Connection reset by peer]
i-garrison has quit []
i-garrison has joined #dri-devel
danvet has joined #dri-devel
mclasen has quit [Ping timeout: 480 seconds]
itoral has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
mszyprow_ has joined #dri-devel
itoral_ has joined #dri-devel
mlankhorst has joined #dri-devel
mbrost_ has quit []
sdutt has quit [Ping timeout: 480 seconds]
itoral has quit [Ping timeout: 480 seconds]
mattrope has quit [Read error: Connection reset by peer]
Wally has quit [Remote host closed the connection]
tzimmermann has joined #dri-devel
kevinx has joined #dri-devel
<danvet>
javierm, looks good, I think we can bikeshed this more on-list?
<javierm>
danvet: yes, just wanted to check it was more or less what you have in mind before posting to the list
<pq>
javierm, would it be true to say that DRM can be very complex as it supports complex graphics devices, but for simple graphics devices it can be very simple?
<pq>
underlining the latter would be nice - maybe it needs to be conditioned on choosing the right helpers and examples?
<HdkR>
Also a guide about not hecking up the ioctl struct packing would be nice :>
<pq>
HdkR, isn't that already written, or is it still only in danvet's blog? But I also think that is not relevant to the intended audience on this page.
<javierm>
pq: not sure if that's true, since even for simple drivers one needs to get familiar with DRM/KMS/atomic modesetting, GEM, etc
<HdkR>
Ah, I guess this is just getting acquainted rather than implementation details
<pq>
well, I mean, is it not in the DRM docs yet? That's the only place that matters.
<javierm>
which I actually considered adding, but didn't know if it was something introductory
<javierm>
pq, HdkR: posted to the list btw, feel free to bikeshed there
<pq>
HdkR, exactly, targeted at people who prefer to write fb drivers, because DRM is "too hard".
<HdkR>
I see
<pq>
d'oh
<pq>
so presumably the intended audience would never add new ioctls, or even touch any ioctl code
<javierm>
pq: that was my thought too. Was thinking about someone having a fbdev driver in a vendor tree or something and wanting to get familiar with DRM
<javierm>
or porting a fbdev driver already in drivers/{video,staging}
ella-0_ has quit [Read error: Connection reset by peer]
<danvet>
HdkR, pq it's been merged for quite a while because people liked it
<demarchi>
in this one... and others on other emails in this series
<MrCooper>
maybe they have that option enabled in their subscription settings then
<demarchi>
humn... ok
<demarchi>
let me try to find that setting and turn it off for me
<demarchi>
thanks
<demarchi>
easier not to lose emails that way
Lucretia has quit []
ManMower has joined #dri-devel
Lucretia has joined #dri-devel
Haaninjo has joined #dri-devel
frytaped has joined #dri-devel
kevinx has quit [Quit: Connection closed for inactivity]
frytaped has quit [Quit: frytaped]
jfalempe_ has quit []
jfalempe has joined #dri-devel
<jfalempe>
javierm, I was wondering what's the difference between efifb driver and sysfb-simplefb driver ? they are both framebuffer for UEFI ?
flacks has quit [Quit: Quitter]
flacks has joined #dri-devel
<pcercuei>
jfalempe: from my very limited knowledge - the first one is built on top of UEFI, the second one is for using with a pre-initialized video stack, where you basically just write to a frame buffer
<jfalempe>
pcercuei, thanks, so you can use any of them when booting in uefi mode.
<javierm>
jfalempe: do you want the short or the long answer ? :)
<javierm>
but basically what pcercuei said, the efifb driver takes the framebuffer directly from the global struct screen_info that's set by the EFI stub, using the information from the EFI GOP
<javierm>
while the simplefb and simpledrm drivers use a platform device I/O memory resource that's set by the sysfb-simplefb driver
itoral_ has quit [Remote host closed the connection]
itoral_ has joined #dri-devel
<jfalempe>
just knowing why there are two different drivers, and when to use one or the other, would be enough ;)
Net147 has quit [Quit: Quit]
Net147 has joined #dri-devel
<javierm>
jfalempe: it's for historical reasons. There are actually *many* drivers for firmware set-up framebuffers: vga16fb, vesafb, offb, efifb, simplefb to name a few
<javierm>
then there is platform-specific code that fills in information about the framebuffer for the drivers to pick up. In most cases this is in a global struct screen_info
<javierm>
and then there is also platform code that registers a platform device to match a driver
<javierm>
for example for vesa it's a vesa-framebuffer device, for efi an efi-framebuffer device and so on
<jfalempe>
ok
<javierm>
jfalempe: but at some point a simplefb driver was introduced, mostly for platforms using Device Trees but this could also be used for EFI platforms if the "simple-framebuffer" device was used
<javierm>
then you had efifb and simplefb that could do the same. And then tzimmermann wrote a simpledrm driver that matches the "simple-framebuffer" device too
itoral_ has quit [Remote host closed the connection]
itoral has joined #dri-devel
<javierm>
jfalempe: I think one source of confusion is that drivers/firmware/sysfb_simplefb.c is only about registering the "simple-framebuffer" but there's also drivers/video/fbdev/simplefb.c
<javierm>
probably that should be renamed
<jfalempe>
yes I was a bit confused to find framebuffer driver in drivers/firmware/
<tzimmermann>
jfalempe, that's not the driver
<javierm>
jfalempe: yeah, because it's not really a framebuffer driver but platform code to set up the framebuffer memory resource and register the platform device
<tzimmermann>
it only sets up the simple-framebuffer device. the actual driver is simpledrm/simplefb
<jfalempe>
also is efifb going to be deprecated at some points, or the two are required for different platform ?
<javierm>
jfalempe: it shouldn't be needed anymore with simpledrm
<tzimmermann>
efifb is going to be deprecated
<tzimmermann>
and it's not used by most distributions
<tzimmermann>
like the rest of the fbdev drivers
<jfalempe>
ok thanks javierm and tzimmermann , that's all I needed.
<tzimmermann>
the original simple-framebuffer devices came from the respective device tree nodes. someone added support for vesa and efi to the device code, so that simple-framebuffer could provide these as well
<tzimmermann>
we could still have an efidrm or vesadrm driver, if that would provide any benefit
<javierm>
exactly and could be extended to support let's say vga16 if someone really needs it
<tzimmermann>
for vga16, I'd actually want a drm driver so that we could program the palette
<javierm>
tzimmermann: right, but you could also check in simpledrm if screen_info.orig_video_isVGA == VIDEO_TYPE_VGAC or screen_info.orig_video_isVGA == VIDEO_TYPE_EGAC
<javierm>
but that feels like layering violation and feature creep for simpledrm
itoral has quit [Remote host closed the connection]
<javierm>
I noticed this when doing a make allmodconfig to test the nomodeset changes
<tzimmermann>
javierm, a-b: me
<javierm>
tzimmermann: cool, thanks
mclasen has joined #dri-devel
<jani>
daniels: so the fdo gitlab issue board thing we have is pretty limited? I was looking for a way to create a board or a list with basically label = (foo or bar) but even that doesn't seem possible :(
<jani>
daniels: even the issue search does not have basic boolean logic? :o
<javierm>
danvet, tzimmermann: I got a-b for a few of the drivers but probably won't get more at this point, since it was posted more than a month ago
<javierm>
and I don't really want to steward such a big patch-set. The changes are trivial anyway
alyssa has joined #dri-devel
<alyssa>
I saw anv source code in my dreams
<alyssa>
am I doing Mesa hard enough
<alyssa>
(panvk too)
<dj-death>
alyssa: not sure I would call that a dream
<alyssa>
anv had this massive comment block that I was somehow sure was written by Kayden (so maybe it was iris but magically a Vulkan driver)
<alyssa>
with section headers in big ASCII art bubble letters
<zmike>
today in #dri-devel: source code fanfic
<danvet>
javierm, usually we go with asking someone here to do a general ack on all the remaining ones
<danvet>
javierm, if it's just about the s/module_*_driver/drm_module_*_driver patches, then a-b: me and push the lot
Lucretia has quit []
<danvet>
but you can also pester someone else in these cases
<javierm>
danvet: yeah, it's just that. Because the patch-set that added those macros already landed a couple of weeks ago
<danvet>
also rule of thumb is to wait 2 weeks and then just push with a general ack
<danvet>
javierm, yeah go ahead with my acks
<javierm>
danvet: perfect, thanks a lot
Lucretia has joined #dri-devel
ppascher has joined #dri-devel
<alyssa>
08:31
<alyssa>
Daniel Almeida
<alyssa>
do I have to do anything more than compiling in debug mode to enable these?
<alyssa>
Um. Copy paste fail. Sorry.
<alyssa>
Or drag and drop fail or... sorry Daniel and everyone else..
<jani>
alyssa: could be worse ;)
<alyssa>
jani: quite
* alyssa
still isn't sure how that text ended up here, um
shankaru has quit [Quit: Leaving.]
shankaru has joined #dri-devel
MajorBiscuit has quit [Ping timeout: 480 seconds]
sdutt has joined #dri-devel
<ccr>
zmike, might be a waste of time, but if you can give me one pair of those bigger trace files, I can at least try to see if something can be done. but the "problem" is most likely this being Python, so dunno if anything can be done.
<cwabbott>
anyone have any opinions on what a new driver's *_report_fossils.py should look like?
<ccr>
zmike, ah.
<cwabbott>
anv_report_fossils and radv_report_fossils seem to have a bunch of copy-pasted code, but radv_report_fossils has a bunch more extra stuff (?)
<cwabbott>
so I guess the options are to copy+paste even harder, or try to factor out the code in one of them?
<pendingchaos>
I made radv-report-fossils.py so that it should be re-usable for other drivers by just changing/expanding the "statistics" and "executables" globals
xxmitsu has quit [Ping timeout: 480 seconds]
<cwabbott>
how would one actually extend it though? rename it to just "report-fossils.py" (although that exact name is taken) and try to detect the driver?
<cwabbott>
or try to move everything into a module and have it take statistics+executables as arguments?
<pendingchaos>
I think renaming it and adding new statistics and executables would work
<pendingchaos>
unless turnip has a statistic with the same name as radv but needs to be treated differently for some reason
<cwabbott>
I was thinking more like "if driver == radv: statistics = [...] elif driver == turnip: statistics = [...]"
<cwabbott>
but I guess that would work
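(A minimal sketch of the per-driver table idea discussed above; the driver keys and statistic names are illustrative placeholders, not the actual tables from radv-report-fossils.py.)

```python
# Sketch: select the statistics table by driver name instead of
# copy-pasting the whole report script per driver. All entries here
# are hypothetical placeholders.
STATISTICS = {
    "radv": ["SGPRs", "VGPRs", "Code size"],
    "turnip": ["Instruction Count", "Max Waves Per Core"],
}

def get_statistics(driver):
    """Return the statistics list for a driver, or fail loudly."""
    if driver not in STATISTICS:
        raise ValueError(f"unknown driver: {driver}")
    return STATISTICS[driver]
```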
<ccr>
zmike, "time ./pytracediff.py data/cts.xml data/cts.xml -IN" real 0m11.353s on my ye olde Haswell .. of course those are identical, but about the same for getting to the "output" phase if diffing cts.xml and cts.xml.gz .. of course it'll take a long time for the full diff to be outputted .. hmm.
<zmike>
ccr: the diff I was trying was just missing a draw call somewhere
<cwabbott>
also, the executables all have short names, like FS, VS, TCS, etc. already so we probably wouldn't need an "executables" map
mattrope has joined #dri-devel
<ccr>
zmike, I deleted one draw call from cts.xml -> cts2.xml and "time ./pytracediff.py data/cts.xml data/cts2.xml -INM" gives me about 12 seconds total.
<zmike>
ccr: weird, was taking a very long time here
<ccr>
zmike, if possible, can you provide the exact files you used?
<zmike>
that's one of the files
<ccr>
and the other?
<zmike>
added
* ccr
checks
<ccr>
real 0m11.231s
<zmike>
wtf
<ccr>
it might be that output takes a long time if you let it output the full calls. try these options: -INM .. or -INC
<zmike>
huh yea with INC it's much faster
<zmike>
neat
<ccr>
\:D/
<zmike>
will test more with this then
<ccr>
see --help for what those options do
<ccr>
zmike, cool. I'll try to see if I can optimize the full output somehow, though it may be just a throughput issue .. perhaps less OO indirection is in order or something.
<zmike>
:/
<zmike>
optimization is the worst
<zmike>
I still need to figure out a good way to zero out index_bias for non-indexed calls in the trace output
<zmike>
always annoying spurious diffs
jewins has joined #dri-devel
rgallaispou has quit [Read error: Connection reset by peer]
<ccr>
with this custom diff implementation it would be possible to more sanely ignore differences in some attributes, though it introduces the problem of keeping such things in sync with whatever changes in the trace file generation
<zmike>
simpler to just zero it in the output
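(A sketch of the "just zero it in the output" approach: normalize each call before diffing so index_bias on non-indexed draws never shows up as a spurious diff. The dict-based call representation is hypothetical, not pytracediff's actual data model.)

```python
# Sketch: normalize a parsed trace call before diffing. Assumes calls
# are dicts with hypothetical "indexed"/"index_bias" keys.
def normalize_call(call):
    call = dict(call)  # copy so the parsed trace isn't mutated
    if not call.get("indexed", False):
        # index_bias is meaningless for non-indexed draws; zero it so
        # two traces can't differ here spuriously.
        call["index_bias"] = 0
    return call
```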
fxkamd has joined #dri-devel
<alyssa>
cwabbott: out of interest, why is shader-db report.py agnostic to the underlying driver, but fossils aren't?
<cwabbott>
alyssa: iiuc, fossilize spits out the "user-readable name" which a driver author might not want to read
frieder has quit [Remote host closed the connection]
<alyssa>
Hmm?
<cwabbott>
"Subgroups per SIMD" or "Instructions with SY sync bit" might be nice for some game dev looking at renderdoc, but staring at it a million times as a driver dev might get old
<cwabbott>
or at least, I think that's the rationale
<cwabbott>
and "Tessellation Evaluation + Geometry Shaders" might help you understand what's going on but not good for quickly scanning a list with thousands of shaders
<alyssa>
Sure... still not 100% clear why that requires duplicating code, though
<alyssa>
Oh. I see.
* alyssa
mumbles
<alyssa>
Yeah that's annoying
<cwabbott>
fossilize uses VK_KHR_pipeline_executable_properties which is kinda written for both the "user taking a peek thru renderdoc" and driver-dev usecases
<alyssa>
still not 100% on why that needs duplicating code, "Cycle Count" --> "cycles" seems like an everyone transformation
<cwabbott>
I *think* I can get away with just adding my statistics and calling it a day
<alyssa>
As long as no drivers use the same human-readable string for different driver-dev-readable strings (we're not humans, sorry) ... seems like we could do a union of all of the drivers
<alyssa>
and if there *are* collisions that's a strong reason to change one of the drivers in Mesa, since having inconsistent naming across Mesa drivers will confuse the humans
<alyssa>
</unsolicited_uninformed_advice>
<ccr>
spamvice?
<cwabbott>
the annoying thing is something like max_waves
MajorBiscuit has joined #dri-devel
<cwabbott>
it's basically the same idea between radv & turnip
<cwabbott>
but amd and qualcomm have different terminology for the unit that a wave is assigned to
<cwabbott>
AMD calls it a SIMD, qualcomm calls it a "SPTP" apparently (yuck), and until I discovered that, we were calling it a "core"
<alyssa>
I don't even remember what Arm calls it
<alyssa>
I think what you call waves, we call warps
<cwabbott>
so the user-visible name has a good reason to diverge, but the name in the table should probably be the same
<alyssa>
I guess a "core" as well
<cwabbott>
I guess there is a provision for multiple csv names?
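(One way to sketch the split being discussed: a single canonical table key shared across drivers, with per-driver display names for the vendor terminology. All names here are illustrative, not the real statistic tables.)

```python
# Sketch: canonical key for the report/CSV column, vendor wording only
# in the display layer. Names are hypothetical placeholders.
MAX_WAVES = {
    "key": "max_waves",  # canonical column name, same for every driver
    "display": {
        "radv": "Waves per SIMD",
        "turnip": "Max Waves Per Core",
    },
}

def display_name(stat, driver):
    # Fall back to the canonical key for drivers without a vendor term.
    return stat["display"].get(driver, stat["key"])
```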
tango_ has joined #dri-devel
kevinx has quit [Quit: Connection closed for inactivity]
ppascher has quit [Ping timeout: 480 seconds]
jewins has quit [Remote host closed the connection]
jewins has joined #dri-devel
itoral has quit [Remote host closed the connection]
pendingchaos has quit [Quit: No Ping reply in 180 seconds.]
pendingchaos has joined #dri-devel
nchery has quit [Ping timeout: 480 seconds]
nchery has joined #dri-devel
devilhorns has quit [Remote host closed the connection]
devilhorns has joined #dri-devel
Wally has joined #dri-devel
Duke`` has joined #dri-devel
<Wally>
How does the kernel drm send instructions and set registers on amd gpus? I read the ISA and didn't see much on that
lemonzest has quit [Quit: WeeChat 3.4]
Wally has quit [Remote host closed the connection]
mszyprow_ has quit [Ping timeout: 480 seconds]
gouchi has joined #dri-devel
<jekstrand>
I don't know if it's horrible or brilliant that docker is the best solution for cross-build on Fedora...
lemonzest has joined #dri-devel
<demarchi>
jekstrand: I'd say it's "acceptable"... particularly if you think that then this same solution should work in whatever distro you are on
<jekstrand>
Yeah, it's certainly not as terrible as it could be.
mvlad has quit [Remote host closed the connection]
<alyssa>
gawin: I don't understand the purpose of the change.
<anholt>
alyssa: avoiding recursion in your compiler when a for loop would do is good. it would probably end up a tail call, but I'm not good at predicting that.
<gawin>
mainly readability, though should also be a bit nicer for compiler (stack)
<alyssa>
anholt: It's definitely not a coding style I would favour, and it looks like it should be a tail call... but I don't understand the purpose of changing it *now*
<alyssa>
gawin: Stack usage is only relevant if tail call optimization fails.
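(As an aside, the recursion-to-loop rewrite being discussed can be illustrated generically; Python is a case where it matters unconditionally, since CPython never performs tail-call optimization. The linked-list shape here is just an illustration.)

```python
# A recursive walk consumes one stack frame per element...
def count_rec(node):
    if node is None:
        return 0
    return 1 + count_rec(node["next"])  # CPython won't optimize this away

# ...while the equivalent loop uses constant stack space.
def count_loop(node):
    n = 0
    while node is not None:
        n += 1
        node = node["next"]
    return n
```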
ahajda has quit []
<alyssa>
not saying it's a bad patch I just don't understand what prompted it
<alyssa>
and changes to improve readability are still changes and therefore risk regressions, as I've unfortunately learned..
<alyssa>
(My approach has been to do that sort of clean up iff there's a functional/performance change to be made in the area.)
<alyssa>
(I don't know if that's the right approach but it has worked out ok...)
<Kayden>
pepp: hey, thinking I might be able to use the PIPE_BIND_PRIME_BLIT_DST field from !14615...any thoughts when those might land?
<anholt>
alyssa: the r300 compiler backend is pretty strange, making it less so is nice what with people starting to look at it again
* jekstrand
now has a cross-build setup capable of building all of dEQP for aarch64 in 3.5 minutes. \o/
<Kayden>
o.O
<alyssa>
anholt: fair enough
<Kayden>
ah, right, many-core desktop
<jekstrand>
x86 is also 3.5 minutes. :)
<alyssa>
jekstrand: IIRC the M1 is also in that ballpark :-p
<alyssa>
though I haven't built deqp since the summer
<daniels>
jekstrand: \o/
<jekstrand>
Oh, the sins I have committed....
<jekstrand>
It involves both docker AND icecream. :)
<alyssa>
$ git commit -m "sin(Θ), sin(ɑ)"
<anholt>
daniels: so, was talking with krh about ci stuff this morning. having Mesa pipelines get stuck behind NM and gst sucks. I'd like to have some reserved capacity for Mesa, and given that we don't have that from packet I'm thinking of standing up some big gcp instances so we can churn through our swrast testing jobs without having to shard the job to a bunch of 8-thread instances and hoping fd.o has shared runners free.
<anholt>
any concerns with that?
<anholt>
(I'm not talking about doing the build jobs on big instances, because I think build jobs have a bunch of single-thread time like meson or git fetches)
<gawin>
alyssa: +1 to Emma's comment, r300 has really long functions with hardcoded workarounds. (still haven't read all code around textures)
<alyssa>
gawin: You don't have to convince me, I don't have any radeon hardware ;-)
danvet has quit [Ping timeout: 480 seconds]
<airlied>
jekstrand: just get an M1 already :-P
<daniels>
anholt: yeah for sure, that sounds like a really good idea, especially if we can go wider on the testing and get overall completion quicker
* alyssa
is curious how dEQP runtimes compare between softpipe and llvmpipe
ahajda has joined #dri-devel
<anholt>
alyssa: shockingly close, unfortunately. the joys of llvm.
<alyssa>
given how much dEQP is "compile once, run once"
<alyssa>
*nod*
<jekstrand>
dEQP is usually compiler-bound, yes.
<alyssa>
jekstrand: ^on hardware..
<anholt>
jekstrand: it can be interpreter-bound instead if you use softpipe!
<alyssa>
also I'm not sure that's true for us
bluebugs has joined #dri-devel
<alyssa>
at least for baby deqp, surprising amounts of time ends up being stupid stuff like switching round modes to calculate reference images
gouchi has quit [Remote host closed the connection]
LexSfX has quit []
LexSfX has joined #dri-devel
LexSfX has quit []
ngcortes has joined #dri-devel
The_Company has joined #dri-devel
Haaninjo has quit [Quit: Ex-Chat]
Company has quit [Ping timeout: 480 seconds]
<bylaws>
alyssa: I saw in your blog post how you mentioned you can't replace driver without root
<bylaws>
Of course needs downstream kernel driver support though
<ccr>
zmike, did some Python profiling and changed some stuff. might be faster now by default, though big traces may still be problematic in some cases.
<alyssa>
bylaws: That's horrifying, thank you for linking, truly incredible :-D
<zmike>
ccr: yeah I'm not expecting them to be instant or anything
<zmike>
will check it out
<alyssa>
bylaws: In my case, ioctls to /dev/mali0 were blocked by SELinux, I think
<alyssa>
and also no dmesg access makes debugging a herculean task
<jenatali>
Oof... the GLSL linker packs variables from different streams together? Why...
<jekstrand>
Efficiency!
<jekstrand>
There was an MR some time ago to shut that off
<jenatali>
I think I'm going to need that
<jekstrand>
It was wreaking havoc on some low-end mobile GPU (Mali 400, maybe?)
<jenatali>
I'm relying on NIR vars to emit DXIL signature entries, and a single DXIL signature entry can only belong to one stream
<jenatali>
I guess I could split it myself afterwards, but I'd just as soon not pack them in the first place
mszyprow_ has joined #dri-devel
Lucretia has quit []
<jekstrand>
Ugh... Yeah, I think it shuts that off when XFB is in use
<jekstrand>
But you may need it off more than that
<jenatali>
Yeah, I mean if XFB is off I could just ignore anything that's not in stream 0 I guess since it's effectively dead
<jenatali>
But that also sounds more complicated than just not merging things from different streams
pnowack has quit [Quit: pnowack]
Lucretia has joined #dri-devel
<zmike>
just read the variable data you coward
<jenatali>
Hm?
<zmike>
this is for enhanced layouts or just streams in general?
<jenatali>
For ARB_transform_feedback3 multi-stream GS
<jenatali>
Er, the combination of that with gpu_shader5
* airlied
assumes with enhanced layouts you probably have to deal with it anyways
<zmike>
what I remember of enhanced layouts is more packing
<zmike>
not less
<jenatali>
A single varying that has some components in multiple streams?
<zmike>
I'd recommend starting with the piglit tests if you aren't already
<imirkin>
jenatali: the real question is ... why not pack them together
<imirkin>
since each component is essentially independent ... who cares
<imirkin>
(except in cases where it's not actually independent. heh.)
<HdkR>
Oops, piglit took my everything.
<jenatali>
Yeah it'd be fine except, it's just a translation from NIR variables to DXIL signatures, so I'd need to unpack them into separate variables (they can keep their location slots packed)
ahajda has quit []
<graphitemaster>
omg wtf they added transform feedback to vulkan, why xd
<graphitemaster>
is it optional in vulkan is the question
<jekstrand>
Not really
<graphitemaster>
kill it with fire please
<jekstrand>
It's not if you want to support DXVK gaming
<graphitemaster>
did we kill geometry shaders yet
<jekstrand>
We can't kill anything. Ever.
<graphitemaster>
quads were killed
<HdkR>
And everyone wants DXVK gaming. Even in ARM land.
<jekstrand>
Exception that proves the rule?
<bylaws>
alyssa: would be weird for selinux to block something the driver needs to access... Maybe it's blocked in adb but allowed under app context? Should be easy to test with run-as <debuggable app package name>
<mareko>
you can kill anything... with consequences
<jekstrand>
^^
<jekstrand>
The question is how bad are the consequences
<jekstrand>
and are they worth it?
robert_mader has joined #dri-devel
robert_mader has left #dri-devel [#dri-devel]
<graphitemaster>
this is why you kill things sooner rather than later
<graphitemaster>
fewer consequences
<jekstrand>
We also successfully killed HW atomic counters.
<HdkR>
We should have killed GS a long time ago. Waited for mesh life :P
<jekstrand>
And fp64 is optional in Vulkan
<mareko>
there'll be a lot more killing in the future for sure
<graphitemaster>
mesh life still not here, everyone too slow
<graphitemaster>
waiting for the day I can ISSUE a dispatch / draw from a compute shader - without any CPU involvement
<jekstrand>
Nvidia has an extension for that, sort-of.
<mareko>
GS is just a mesh shader with some additional sysvals, so it's kinda killed but not really
<jekstrand>
It's really GS+XFB where everything has truly and fully gone off the rails.
<jenatali>
Agreed
<mareko>
if you know how to emulate XFB, I'm all ears
<HdkR>
How soon until we can do RT and dynamic mesh generation? :P
<mareko>
graphitemaster: isn't it a task shader?
<graphitemaster>
?
pcercuei has quit [Quit: dodo]
<graphitemaster>
I want to put the main loop of the engine on the GPU, have it allocate and prepare data for buffers and everything and dispatch other shaders and draws. With the CPU just being used to service some IO mostly, but even that the GPU has direct access to HDD and can late-latch read mouse input. That's the future
<graphitemaster>
DMA snoop or whatever you have to do.
<graphitemaster>
turn the hierarchy inside out otherwise
<jekstrand>
Why don't we just port Linux to run on GCN?
<jekstrand>
Who needs a CPU?
<zmike>
pretty sure linux has been run on nintendo gamecubes before?
<alyssa>
bylaws: Dunno. I am more than happy to use ChromeOS and ChromeOS android and not have these issues. (and of course, use mainline Linux)
<alyssa>
just needed the device
<imirkin>
jekstrand: yeah, i hear GPUs are a lot faster anyways
<HdkR>
zmike: gc-linux yes. But that's just a PowerPC CPU :P
<HdkR>
iMac G3 shipping the same CPU, paired single ops and everything.
<alyssa>
mareko: Mali has no hw for GS, tess, or XFB
<alyssa>
and yet
<alyssa>
and yet Arm ships drivers supporting all 3
<alyssa>
I, uh, I once made the mistake of looking at the code they generate to emulate GS+xfb
<alyssa>
it's.. it's not pretty
<HdkR>
alyssa: Please add support for all these unfeatures, DXVK on Panfrost is only becoming more and more likely :P
<alyssa>
will do right after we get funding for another 12 full time panfrost devs
lstrano_ has joined #dri-devel
anujp has quit [Ping timeout: 480 seconds]
<HdkR>
Hm, finding twelve devs to work on Panfrost might be difficult
<gawin>
this reminds me about that vk driver for rpi's vc4, I wonder if it could handle dxvk on d3d9 level
ngcortes has quit [Ping timeout: 480 seconds]
anujp has joined #dri-devel
<bylaws>
alyssa: yeah fair enough, I only really made it so we can ship turnip with our switch emu to work around the semi-broken mess that is the Android adreno blob (though their shader compiler stack is kinda cool)
<alyssa>
lolololol
<bylaws>
Very much looking forward to panVk progress, hopefully we can use on Mali oneday
<jekstrand>
bylaws: That's amazing!
<jekstrand>
I'm sure there's someone at Google who would be very put out that you got that to work. :)
<jekstrand>
And probably even more people at Qualcomm....
<jekstrand>
Let 'em squirm! That's what they get for a shit driver update story.
<bylaws>
Heh, thanks :) Writing it was an exercise in writing the least hacky code for what is still an absolute hack... Discovered some interesting things though, like the Android linker supporting LD_PRELOAD via an undocumented elf flag rather than env vars
<jekstrand>
:D
ngcortes has joined #dri-devel
cphealy_ has quit []
cphealy has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
<alyssa>
bylaws: wait wat?
<HdkR>
Turnip has kgsl support. Ship it with your application to avoid the blob :P
mlankhorst has quit [Ping timeout: 480 seconds]
<mareko>
graphitemaster: that might not be a great idea; note that GPUs have much higher memory latency due to a different type of memory and cache architecture, and thus single-threaded code is much slower than a CPU
<mareko>
alyssa: I don't think it's possible to implement GS+XFB without some kind of assistance from the hw
<graphitemaster>
mareko, This brings me to my second ask. I want GPUs to have more than one type of memory that I can explicitly program against.
ngcortes has quit [Read error: Connection reset by peer]