ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
minecrell has quit [Quit: Ping timeout (120 seconds)]
minecrell has joined #dri-devel
elongbug has quit [Read error: Connection reset by peer]
srslypascal is now known as Guest10115
srslypascal has joined #dri-devel
srslypascal has quit [Remote host closed the connection]
srslypascal has joined #dri-devel
Guest10115 has quit [Ping timeout: 480 seconds]
srslypascal has quit [Remote host closed the connection]
srslypascal has joined #dri-devel
srslypascal is now known as Guest10116
srslypascal has joined #dri-devel
Guest10116 has quit [Ping timeout: 480 seconds]
MrCooper has quit [Remote host closed the connection]
MrCooper has joined #dri-devel
mbrost_ has joined #dri-devel
mbrost_ has quit []
MrCooper has quit [Remote host closed the connection]
MrCooper has joined #dri-devel
lemonzest has quit [Quit: WeeChat 3.6]
lemonzest has joined #dri-devel
jaganteki has quit [Remote host closed the connection]
co1umbarius has joined #dri-devel
columbarius has quit [Ping timeout: 480 seconds]
camus1 has quit [Ping timeout: 480 seconds]
jewins has quit [Ping timeout: 480 seconds]
yuq825 has joined #dri-devel
<mareko>
should vote/ballot/elect instructions be intrinsics or ALU instructions? what about scan/reduce?
<mareko>
Venemo: do you remember who mentioned splitting the divergent flag into subgroup-divergent and quad-divergent or even uniform (convergent across subgroups)?
camus has joined #dri-devel
Guest10072 is now known as nchery
camus has quit []
camus has joined #dri-devel
<zmike>
I vote intrinsics so everyone doesn't have to change their compilers
<Lynne>
same prefix sum algorithm, but 100x faster now, yet still 20x slower than opencl... it's a start
tursulin has quit [Ping timeout: 480 seconds]
<Lynne>
the memory barriers between each dispatch really kill performance
<DemiMarie>
?
<DemiMarie>
I take it that GPUs despise barriers.
<Lynne>
they do, but this is far worse than I expected
<jenatali>
+1 to intrinsics. Is there a reason to want it to be ALU?
<alyssa>
aren't they already intrinsics?
khfeng has joined #dri-devel
<alyssa>
jenatali: nominally constant folding and consistency with fddx
<jenatali>
I guess you could constant fold specific constants, but that seems like it belongs in nir_opt_intrinsics instead
<jenatali>
And yeah honestly fddx seems a bit more like an intrinsic than an alu to me 🤷
<alyssa>
valid
<alyssa>
it's definitely a funny one
<alyssa>
The constant folding for fddx is kinda funny though
<alyssa>
jenatali: FWIW hardware vendors don't agree on what fddx is, either
<alyssa>
On AGX, it's regular floating point ALU.
<alyssa>
On Midgard, it's a texture instruction.
<jenatali>
You weren't kidding on constant folding for fddx being funny
<jenatali>
All I know is WARP, where it's also a texture instruction
<alyssa>
On Valhall, it's lowered to cross-lane permutes (I forgot the API name for that) and ALU
<jenatali>
Wait no I'm thinking of calculateLod, it's just an ALU there
<alyssa>
where the permute is on the special function unit along with eg frcp
<alyssa>
jenatali: "What's the derivative of this constant input?" "0" :-D
<jenatali>
Which lanes have true for this constant true value? :P
<alyssa>
:D
<jenatali>
Trick question! Which lanes are active ;)
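The joke above is exactly the constant-folding rule being discussed; a minimal sketch in plain C rather than actual NIR (the function names and the 64-lane exec mask are illustrative assumptions, not Mesa code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Folding subgroup ops whose boolean source is a compile-time constant.
     * "exec_mask" stands in for the set of currently active invocations. */
    static uint64_t fold_ballot_const(bool src, uint64_t exec_mask)
    {
        /* ballot(true) -> "which lanes are active?", ballot(false) -> 0 */
        return src ? exec_mask : 0;
    }

    static bool fold_vote_all_const(bool src)
    {
        /* every active lane sees the same constant, so the vote is that constant */
        return src;
    }

    static bool fold_vote_any_const(bool src)
    {
        return src;
    }

In Mesa terms this is the kind of pattern jenatali suggests belongs in nir_opt_intrinsics rather than in ALU constant folding.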
<alyssa>
TBH, the distinction between ALU and intrinsic has always been entirely artificial
<alyssa>
but, meh, it seems like a useful kind of artificial
<jenatali>
Yeah
<alyssa>
Midgard has a firm {ALU, load/store, texture} split to its instructions
<alyssa>
for goofy VLIW reasons
<alyssa>
and even that doesn't map completely to NIR
<jenatali>
Btw, anybody wanna review a Windows WSI / Vulkan runtime change? The only name I know to ping is gfxstrand and I feel like that's not fair :P
<alyssa>
oh right yes
<alyssa>
barriers were also texture instructions
<alyssa>
for.. Reasons
<alyssa>
:D
<jenatali>
... wat
<alyssa>
jenatali: I mean, the texture pipe already had the logic needed to do barriers because texture2D() implicitly does barriers to calculate derivatives
<jenatali>
Huh
<alyssa>
and when you're trying to implement OpenCL on your GLES2 GPU, oh look we already have barriers nice let's just tweak that
<alyssa>
:p
<alyssa>
(barrier for a workgroup vs a quad, pff details)
<jenatali>
I should go enable subgroups for CL and GL now that I added them for VK
<Lynne>
the correct stages for a compute->compute memory barrier are compute+compute, right?
<Lynne>
why is using top/bottom of pipe instead basically a noop, but a compute/compute kills performance?
smiles_1111 has quit [Read error: Connection reset by peer]
andremorishita has joined #dri-devel
smiles_1111 has joined #dri-devel
<Lynne>
in theory top/bottom should be slower since afaik it immediately inserts a barrier, and if there are multiple, they'd be required to complete in order
<Lynne>
nvm, forgot the order is src=bottom, dst=top, same performance pretty much as compute+compute
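For reference, the compute->compute barrier Lynne is describing boils down to something like this (a minimal sketch: a single global VkMemoryBarrier between two dispatches, with placeholder dispatch sizes):

    #include <vulkan/vulkan.h>

    /* Make the shader writes of the first dispatch visible to the shader
     * reads/writes of the second one. */
    static void barrier_between_dispatches(VkCommandBuffer cmd)
    {
        const VkMemoryBarrier barrier = {
            .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
            .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT,
            .dstAccessMask = VK_ACCESS_SHADER_READ_BIT | VK_ACCESS_SHADER_WRITE_BIT,
        };

        vkCmdDispatch(cmd, 256, 1, 1);
        vkCmdPipelineBarrier(cmd,
                             VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  /* src stage */
                             VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  /* dst stage */
                             0, 1, &barrier, 0, NULL, 0, NULL);
        vkCmdDispatch(cmd, 256, 1, 1);
    }

The top/bottom-of-pipe variant compared above only differs in the two stage-mask arguments.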
<mareko>
jenatali: algebraic with vote/ballot/elect might be interesting
heat has quit [Remote host closed the connection]
bmodem has joined #dri-devel
aravind has joined #dri-devel
<mareko>
on AMD, vote is ALU, quad_swizzle is ALU, fddx is quad_swizzle + fsub
<mareko>
it's really about how we make it easy to write opt_algebraic-style pattern matching for intrinsics
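As a concrete illustration of the fddx = quad_swizzle + fsub decomposition mareko describes, a plain C model over a 2x2 quad (assumed lane layout 0 1 / 2 3; illustrative only, not the actual AMD lowering):

    /* Per-lane x-derivative for a quad: read the value swizzled from the
     * other column of the horizontal pair, subtract the left column's value. */
    static void quad_ddx(const float v[4], float ddx[4])
    {
        for (int lane = 0; lane < 4; lane++) {
            int right = lane | 1;            /* right lane of the horizontal pair */
            int left  = lane & ~1;           /* left lane of the horizontal pair  */
            ddx[lane] = v[right] - v[left];  /* the "quad_swizzle + fsub" */
        }
    }

Which also makes the constant-folding joke above literal: if v is uniform, every ddx[lane] is 0.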
bmodem has quit []
bmodem has joined #dri-devel
andremorishita has quit []
kzd has quit [Ping timeout: 480 seconds]
mbrost has quit [Read error: Connection reset by peer]
bmodem1 has joined #dri-devel
mattst88_ has quit [Read error: Connection reset by peer]
bmodem has quit [Ping timeout: 480 seconds]
bmodem has joined #dri-devel
tzimmermann has joined #dri-devel
mattst88 has joined #dri-devel
bmodem1 has quit [Ping timeout: 480 seconds]
bgs has joined #dri-devel
fab has joined #dri-devel
danvet has joined #dri-devel
JohnnyonFlame has quit [Read error: Connection reset by peer]
<danvet>
mlankhorst, just pushed the deadline fix to drm-misc-next, if you can still do a pr today that would be great to propagate the fix (there are two in total)
OftenTimeConsuming has quit [Remote host closed the connection]
sghuge has quit [Remote host closed the connection]
sghuge has joined #dri-devel
bluetail has joined #dri-devel
<Venemo>
mareko: I mentioned changing divergence analysis to know the difference between wave-uniform and workgroup-uniform, I hadn't thought about quads yet
tursulin has joined #dri-devel
MajorBiscuit has joined #dri-devel
godvino has joined #dri-devel
thellstrom has joined #dri-devel
<danvet>
tzimmermann, ok if I just apply your patch on top of my series and send it all out together?
<danvet>
with the other polish comments from javierm addressed I mean
<tzimmermann>
danvet, i can.
<tzimmermann>
give me a bit
<danvet>
oh that works for me too, I was just about to start rebasing in the polish
<danvet>
I'll go look more at lina's rfc then
<danvet>
well need to sort out laundry first
YuGiOhJCJ has joined #dri-devel
thellstrom has quit [Quit: thellstrom]
<lina>
I still need to do a bunch of rebasing and other stuff for the Rust dependencies so I'll reply piecewise if that's okay ^^
<lina>
danvet: Also is your Mail-Followup-To intentional? That really confused me (see emails, I only realized what it was after that)
<lina>
I should sort out laundry too...
<danvet>
lina, is something wrong with the follow-up-to?
<danvet>
it's just mutt, I have no idea how any of this works
<danvet>
lina, also no worries on being slow with replies, I've been way too slow imo too so that's mostly all on me :-/
<danvet>
I think I have some minor questions on the drm_mm wrappers
<danvet>
and then a big can of worms I have no clear idea on for the core drm_device abstraction and how the traits should work
<lina>
danvet: It has everyone but you, so I was really confused when my thunderbird moved all the CCs to To (since there was no To left) and you weren't on the list any more.
<lina>
For the device removal stuff we'll have to talk to the other Rust folks since that whole problem (hardware removal vs device struct vs driver/UAPI data) is something intended to be solved at the top layer already, and I have to admit I'm not 100% clear on the rules myself. Plus that whole thing is changing anyway on the base kernel abstraction...
<lina>
Also my driver is kind of a terrible example for this because you can't hot-remove the GPU and even if you tried, you can't hot-add it again (there is no way to reset the firmware to startup state) so we can't test reconnect cycles...
pcercuei has joined #dri-devel
<danvet>
lina, yeah the device lifetime fun is I think something that needs serious work
<danvet>
and imo really not a blocker for asahi
<danvet>
maybe need to poke boqun et all whether we should have a separate session for that
<danvet>
lina, yeah I know ... I still think hotremoval (i.e. unbind driver in sysfs while userspace is running) is a good thing to validate whether your lifetime rules are solid or not
<danvet>
lots of fun stuff starts to happen
<lina>
The idea on the generic Rust device side is that device data is split into driver data itself, and Resources, and the Resources are accessed via a revocable container, so when the device goes away all those accesses fail (and there's some locking/RCU/something magic to make sure those Resources don't go away while they are being used, though of course on real hardware they'd break on hot-remove anyway, it
<danvet>
and it's as close to real hotunplug as you can get (it's how the hotunplug igts work too)
<lina>
just means the actual structures exist)
<danvet>
yeah there's always a race
<lina>
Yeah, I mean PCI is going to start returning 0xffffffff on reads or whatever on hot unplug, no way around that.
<danvet>
as long as the revoke happens it's good, since after ->remove or whatever the resources could be reassigned to a different device
<danvet>
e.g. thunderbolt iommu bars
<lina>
We should be able to test the unbind case on my driver but only once per reboot, so it's good for smoke testing but kind of painful to do repeatedly.
<lina>
Yeah
<danvet>
lina, oh the fw dies if you unbind?
<lina>
It doesn't, but there's no way to reinit it...
<lina>
The init operation is one-time as far as I can tell, so you'd have to somehow carry over piles and piles of in-memory data structures
<danvet>
oh so you'd need to keep all the channels/queues around?
<lina>
Yeah, the whole of initdata...
<danvet>
how does that work with vm? or the fw has some magic knowledge to reset at boot and only there?
<lina>
VM?
<danvet>
well I was thinking device pass-through
<danvet>
I guess that also doesn't work?
<lina>
Ah, it doesn't, other than m1n1 of course (but m1n1 is special)
<lina>
Actually true device pass-through is impossible anyway, there's no real IOMMU for that
<danvet>
ah yeah I guess then your driver really isn't a good vehicle to really test all the device lifetime stuff :-/
<lina>
Yeah...
fab has quit [Quit: fab]
<lina>
At least we can do something like spin up a dozen glmarks and pull the plug and make sure nothing blows up.
<lina>
But that's about it...
<danvet>
so I guess test it a bit, cover anything that's amiss in the todo, and then there's going to be big discussion for the long-term plan/design
<danvet>
yup
<lina>
Yeah
<danvet>
I mean the C driver side isn't really good either, we're barely at the "in theory it's possible to do it right" stage
<lina>
This is why I'm always so paranoid about firmware crashes (and why Rust was a good idea to help with that too), if that happens users have to reboot...
<lina>
I fixed another one of those yesterday, I think I'm slowly running out of them.
<danvet>
yeah userspace really doesn't cope well with a terminally dead gpu
<lina>
Thankfully this stuff is rare enough and mostly seems to be corner cases around GPU faults anyway, which happens a lot more often with known-broken tests than things people are running.
<danvet>
usually the compositor just keels over and the entire session dies
<lina>
I think only one or two people (besides Alyssa and I) have seen firmware crashes so far
<lina>
At least I made the crash handling nice now, it used to just hang everything. Now it kills all in-progress jobs and ENODEVs everything subsequent.
<lina>
So hopefully the compositor dies and you at least get a TTY to dump dmesg or whatever
Zopolis4_ has quit []
thellstrom has joined #dri-devel
<jannau>
it might even restart with software rendering
<danvet>
yeah that's the best you can do with what you have
<danvet>
as long as the display keeps displaying
<danvet>
ideally compositors would recreate their render ctx on llvmpipe and not die
<lina>
That's a good point, I need to make sure the get params stuff fails with ENODEV too, right now I think we only fail other stuff.
<danvet>
but I'm not sure whether that'll ever happen
<lina>
That way mesa init will outright fail on that driver and skip it.
<danvet>
emersion, ^^ ?
<lina>
(We also don't have the robustness stuff hooked up in mesa properly yet I think)
<danvet>
lina, so I think the super-clean solution would be to hotunplug the entire thing
<danvet>
unload the driver
<danvet>
then mesa should dtrt cleanly
<jannau>
as long as the framebuffers remain valid while in use by the display the display should be unaffected
<lina>
That's a good question, what happens to the GEM objects if you hotunplug?
<danvet>
or maybe not unbind the driver, because you might need that for introspection stuff
<lina>
The display controller is going to have those imported
<danvet>
but drm_dev_unplug() or something like that at least
<danvet>
which would give you a nice excuse to handle the hotunplug stuff :-)
<danvet>
in case you're bored ...
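For context, the pattern danvet is pointing at looks roughly like this; the drm_dev_* helpers are the real DRM core API, everything prefixed fakedrv_ is a made-up placeholder:

    #include <drm/drm_drv.h>

    static int fakedrv_do_submit(struct drm_device *drm, void *data,
                                 struct drm_file *file); /* hypothetical driver work */

    /* After a terminal GPU/firmware failure the driver calls drm_dev_unplug(),
     * and every ioctl path guards its hardware access with drm_dev_enter()/
     * drm_dev_exit() so userspace gets -ENODEV instead of poking a dead device. */
    static int fakedrv_submit_ioctl(struct drm_device *drm, void *data,
                                    struct drm_file *file)
    {
            int idx, ret;

            if (!drm_dev_enter(drm, &idx))
                    return -ENODEV;   /* device already unplugged / declared dead */

            ret = fakedrv_do_submit(drm, data, file);

            drm_dev_exit(idx);
            return ret;
    }

    /* ...and in the terminal-failure path: drm_dev_unplug(drm); */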
<lina>
Yeah, unless I soon get confident that fw crashes just aren't really a thing any more, I plan to have a coredump thing at some point in debugfs so users can take full GPU memory snapshots (of all the firmware stuff at least, which is conveniently in VA-contiguous heaps)
lynxeye has joined #dri-devel
<lina>
I also have a debug mode that puts allocator tags in-band so that's all handy to make sense of badness after the fact.
<lina>
(Right now I just do all that via the hypervisor but that's not useful for end users)
<danvet>
lina, there's devcoredump or something like that for device state dumps
<danvet>
not yet used by drm drivers since it's fairly new
<tomeu>
doesn't freedreno use it in Mesa CI?
<danvet>
yay I guess the name correctly
<danvet>
tomeu, ah yeah we have a few by now
<HdkR>
devcoredump in freedreno might even give you real crash information if you're lucky :)
<lina>
Thanks! I'll look into that ^^
<danvet>
yeah it's msm, etnaviv, amd, panfrost that seems to have support
<danvet>
just i915 not using it
<danvet>
(because that one predates devcoredump by quite a few years)
<danvet>
pinged the xe folks to make sure they use that too
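A minimal sketch of what hooking devcoredump up can look like; dev_coredumpv() is the real helper, while the snapshot contents and the fakedrv_ name are placeholders:

    #include <linux/devcoredump.h>
    #include <linux/vmalloc.h>
    #include <linux/string.h>

    static void fakedrv_coredump(struct device *dev, const void *fw_state,
                                 size_t fw_state_size)
    {
            void *dump = vmalloc(fw_state_size);

            if (!dump)
                    return;
            memcpy(dump, fw_state, fw_state_size);

            /* The devcoredump core takes ownership of the buffer, exposes it
             * to userspace via sysfs and frees it with vfree() afterwards. */
            dev_coredumpv(dev, dump, fw_state_size, GFP_KERNEL);
    }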
<lina>
There's going to be a lot of refactoring to do on the driver and more abstractions to add... Just yesterday I fixed two firmware-involved deadlocks due to freeing interacting with faults. It's definitely going to need a lot of refactoring once we introduce a shrinker and really can't do allocs in the signaling paths and stuff like that...
<lina>
I hope with your "merge early, refactor aggressively" idea this won't block upstreaming ^^ (otherwise I think we'd spend another half year+ getting all the necessary bits in, stuff like workqueues which I don't use right now...)
jkrzyszt has joined #dri-devel
<lina>
There's definitely places where I do something kind of dumb that could be better done with existing kernel features, it's just that I don't want to introduce even more abstraction dependencies than we already have right off the bat...
fab has joined #dri-devel
sarahwalker has joined #dri-devel
<danvet>
lina, if you feel like a doc patch for drm/sched timed_out hook to point at devcoredump and maybe drm_dev_unplug in case of terminal fail might be good
<danvet>
in general it'd be awesome if you can fix doc gaps because people ramping up notice them a lot easier
* danvet
always be volunteering people to improve docs :-)
swalker_ has joined #dri-devel
<lina>
That's a good point ^^ (though I also don't use that specific path since our firmware handles faults/timeouts for us)
swalker_ is now known as Guest10151
<danvet>
lina, oh you can set a per-job timeout and it just nukes it itself?
bmodem1 has joined #dri-devel
<lina>
I think the timeouts are hardcoded or unknown fields of the initdata or something we haven't found yet. If a job faults or times out the whole firmware goes into a halt cycle, notifies the host along with a list of running jobs, then once told to resume resets the GPU, marks those jobs as complete and picks up from there.
<lina>
I've only seen hangs that didn't result in a notification when something went horribly wrong, but I also am not sure if we can even recover from that because we can't just reset the firmware, so...
<danvet>
yeah I guess for you that all would be terminal failures
bmodem has quit [Ping timeout: 480 seconds]
<lina>
Yeah, right now we just print a message but I could have it go into the "GPU crashed" codepath, it's just that so far I haven't seen any of those that wasn't traceable to something I screwed up so I'm not sure if it's worth plumbing in like that.
<lina>
Those are rarer than outright crashes.
sarahwalker has quit [Ping timeout: 480 seconds]
<lina>
As for the actual faults, I use the fault cycle to mark all jobs the firmware claimed are pending as failed (but not complete), then after resume the natural (instant) notification of completion by the firmware signals the fences (which are now marked error, as well as the feedback BO getting the error data).
<lina>
There's logic to distinguish between victim and culprit jobs etc and we do gather full fault data if available.
bmodem has joined #dri-devel
<danvet>
hm pushing the culprit/victim tracking into drm/sched might be another one, maybe even some glue to provide the usual query ioctl you need for vk/arb_robustness
<lina>
We currently have result buffers that tell userspace what happened to each job asynchronously, so it can find out after waiting on the fences (actually Alyssa pointed out that for async job cleanup we might as well just poll those buffers directly, it's way faster than an ioctl to do a nonblocking fence check so we'll probably do that)
<lina>
I think drm_sched doesn't forward fence error markers right now between the hw fences and the job fences, which bothers me... but I'm not even sure if that normally ends up in userspace when waiting on sync objects either?
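The propagation lina means would be something like the following sketch (dma_fence_set_error()/dma_fence_signal() are the real helpers; whether drm_sched should actually do this is exactly what's debated below, and the function name is a placeholder):

    #include <linux/dma-fence.h>

    /* Copy the error from the completed hardware fence onto the fence that
     * waiters see, before signaling it. */
    static void fakedrv_signal_job_fence(struct dma_fence *hw_fence,
                                         struct dma_fence *job_fence)
    {
            if (hw_fence->error)
                    dma_fence_set_error(job_fence, hw_fence->error);

            dma_fence_signal(job_fence);
    }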
djbw_ has quit [Read error: Connection reset by peer]
<danvet>
lina, oh right I've seen that discussion, but didn't reply
<danvet>
drm_fence error is kinda *meh* to "blows up in your face in interesting ways"
<danvet>
i915-gem tried to go full blast in fence error propagation
<danvet>
it resulted in app crashes taking down the compositor because it's a lot trickier than it looks
<danvet>
due to that I'm very much of the opinion to just dont use it
<danvet>
afaiui it's for android, and even there not really uapi
<lina>
One thing I think I would like to do is try to make sure those errors propagate to the display controller at least, because especially once we introduce compression I'm not sure I want broken framebuffers ending up there...
<danvet>
instead pass all gpu error state out-of-band to userspace in a query
bmodem1 has quit [Ping timeout: 480 seconds]
<danvet>
lina, pls no, been there, cried
<danvet>
gpu crash recovery really is all up to userspace to re-render or better re-allocate everything
<lina>
If we don't do that I hope DCP doesn't fault with arbitrary broken compressed framebuffers, because if it does that's another one we can't reboot... ^^;;
<danvet>
unless your display dies when it scans out malformed compressed stuff?
<danvet>
lina, yeah might be good to check
<danvet>
but also, by design, this is impossible to fix
<lina>
I know it dies when it tries to scan out an unmapped VA at least, and well... if compressed metadata is broken I could see that happening...
<danvet>
userspace is allowed to lie about drm_fb metadata, and it can just cpu-write garbage into memory
<lina>
And yeah, it's impossible to fix but at least we can try to make it not happen accidentally
<danvet>
unless your compressed stuff is some kind of special stolen memory region
<lina>
(Like when any random process faults and takes down a compositor job with it)
<danvet>
we've looked into all of this for somewhat different reasons, with memory encryption revocation for content protection
<danvet>
that one's even more annoying since it can happen to currently live scanout buffers
<lina>
DCP has some content protection stuff but I have no plans to look into that ^^
<danvet>
yeah it's just cros that wants it for netflix
bmodem1 has joined #dri-devel
junaid has joined #dri-devel
<javierm>
tzimmermann: sorry that I couldn't review your patch before and I see now that you posted a v4...
<tzimmermann>
javierm, no problem.
bmodem1 has quit []
<javierm>
tzimmermann: my comments were pretty trivial anyways and you could change before pushing if you agree
<javierm>
tzimmermann: but as mentioned, I think you should keep danvet's original S-o-B tag (even when it doesn't match the author in the patches)
<danvet>
yeah if I spot the checkpatch warning I just add them both
<danvet>
I'm a bit confused on this
bmodem has quit [Ping timeout: 480 seconds]
<tzimmermann>
javierm, let me reply to the mail. I'm going to start the next bike shedding :P
<tzimmermann>
danvet, you sent the patches from @ffwll.ch but the sob tag says @intel.com
bmodem has joined #dri-devel
<javierm>
tzimmermann: feel free to ignore me then :) I really want this series to land since as you said this nvidia+fbcon+vfio thing comes up frequently in the fedora bug reports
<javierm>
danvet: I think checkpatch is just silly, do you know if it doesn't complain if you have your author email S-o-B first and then intel's?
<danvet>
yeah I think if it's both it's ok
<javierm>
danvet: I think that even with both it complains if the first S-o-B doesn't match the author
<javierm>
that is, intel first and then ffwll.ch will still make checkpatch complain
<javierm>
danvet: the reason why we need tzimmermann's patch is IMO just to make the code more readable and easier to understand. But from a functional POV it's the same I believe
<tzimmermann>
javierm, this ^
<danvet>
yeah I put an ack on it
bmodem1 has joined #dri-devel
bmodem has quit [Ping timeout: 480 seconds]
jdavies has joined #dri-devel
jdavies is now known as Guest10155
jaganteki has joined #dri-devel
vliaskov has joined #dri-devel
bmodem has joined #dri-devel
sgruszka has quit [Ping timeout: 480 seconds]
vliaskov has quit [Remote host closed the connection]
<tzimmermann>
it only uses simple_encoder_init, which should be fine
aravind has quit [Ping timeout: 480 seconds]
<javierm>
tzimmermann: ah, Ok. I just noticed that it included <drm/drm_simple_kms_helper.h>
<javierm>
but yeah, no drm_simple_display_pipe_init()
<javierm>
tzimmermann: it seems simple_encoder_init() doesn't really add that much. It's just a wrapper around drm_encoder_init() with a drm_simple_encoder_funcs_cleanup that has .destroy set to drm_encoder_cleanup
<javierm>
wonder if we should just inline that in vkms and get rid of the drm/drm_simple_kms_helper.h inclusion
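Inlining it in vkms would look roughly like this (a sketch; the vkms_ names are mine, while drm_encoder_init()/drm_encoder_cleanup() are the API that drm_simple_encoder_init() wraps):

    #include <drm/drm_encoder.h>

    static const struct drm_encoder_funcs vkms_encoder_funcs = {
            .destroy = drm_encoder_cleanup,
    };

    /* What drm_simple_encoder_init() boils down to for vkms. */
    static int vkms_encoder_init(struct drm_device *dev,
                                 struct drm_encoder *encoder)
    {
            return drm_encoder_init(dev, encoder, &vkms_encoder_funcs,
                                    DRM_MODE_ENCODER_VIRTUAL, NULL);
    }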
<dolphin>
airlied, danvet: see #intel-gfx, there is a request to do a backmerge to pull in some deps to drm-intel-gt-next, so do ping on the mail when you've merged the current PR.
<dolphin>
I'll then do a backmerge to sync with drm-next
RSpliet has quit [Quit: Bye bye man, bye bye]
RSpliet has joined #dri-devel
jfalempe has joined #dri-devel
Sachiel_ has joined #dri-devel
Sachiel has quit [Ping timeout: 480 seconds]
srslypascal has quit [Remote host closed the connection]
<danvet>
dolphin, drm-next is at -rc4, that should be good enough for nirmoy?
Danct12 has joined #dri-devel
srslypascal has joined #dri-devel
<dolphin>
nirmoy: jani: ^^ is -rc4 good enough?
<dolphin>
nirmoy: I have to drop now, if it's not enough then let me know and I'll ask drm-next to be backmerged to more recent version and wait with the backmerge
<dolphin>
nirmoy: is the patch you need in drm-next already?
<dolphin>
no point in doing a backmerge if that doesn't resolve the issue :) have to first backmerge mainline to drm-next and only then drm-next to drm-intel-gt-next
jaganteki has quit [Remote host closed the connection]
<nirmoy>
dolphin: Yes, it is in drm-next
<danvet>
dolphin, dont backmerge pls
<danvet>
there's the broken deadline stuff in drm-next rn
<danvet>
and the bugfixes are only in drm-misc-next
<danvet>
I don't think it has an impact on i915 since that hand-rolls much of atomic, but I'm frankly not sure
<dolphin>
ok, I'll wait then until that is in
<danvet>
maybe nag mlankhorst to get that pr going
<danvet>
dolphin, I'll maybe also pull in your pr first?
<dolphin>
yeah, I was hoping that
<danvet>
ok
<dolphin>
that's why the original ping, maybe I was too verbose :)
<dolphin>
mlankhorst: ^^ you have thus been nagged about preparing drm-misc-next PR so that we can unblock this chain of backmerges
<dolphin>
s/backmerges/{back,}merges/
<nirmoy>
thanks danvet, dolphin for looking into this.
ahajda has joined #dri-devel
Sachiel_ has quit [Ping timeout: 480 seconds]
Sachiel_ has joined #dri-devel
<tzimmermann>
javierm, we should IMHO
<javierm>
tzimmermann: yeah, posted two patches for vkms and Cc'ed you
vliaskov has quit [Remote host closed the connection]
vliaskov has joined #dri-devel
<tzimmermann>
simple_encoder_init was added because many drivers only had this pattern. i kind of regret adding it now, because it is a midlayer with little value.
<javierm>
tzimmermann: exactly
Net147 has quit [Quit: Quit]
Net147 has joined #dri-devel
vliaskov has quit [Remote host closed the connection]
vliaskov has joined #dri-devel
<jani>
dolphin: yeah -rc1 is enough
aravind has joined #dri-devel
heat has joined #dri-devel
heat has quit [Remote host closed the connection]
heat has joined #dri-devel
aravind has quit [Ping timeout: 480 seconds]
Zopolis4_ has joined #dri-devel
godvino has joined #dri-devel
<melissawen>
about the vkms multiple crtcs thing, I don't see why we need to tie the overlay plane initialization to each CRTC. afaik it's not the approach followed by vc4 and amd, where only primary and cursor planes scale with the number of crtcs, I mean the "N planes per each crtc" thing.
<melissawen>
can anyone elaborate it better?
<emersion>
melissawen: hm, i guess it would be nice to have a way to configure this
<emersion>
it's the same for primary/cursor planes fwiw
<emersion>
some drivers advertise that a primary plane is compatible with all CRTCs
<emersion>
maybe it makes vkms simpler to make overlays specific to a single CRTC? so that would be a good first step and can be generalized later?
<emersion>
(i don't know, i haven't looked at the patches)
tdayyyyyyyy^ has quit [Remote host closed the connection]
<melissawen>
emersion, I liked your idea of restricting overlay planes to a single CRTC for now. At least it sounds better to me than creating N overlay planes for each CRTC.. I see i915 does it, but some other drivers keep overlay planes more independent.
<melissawen>
I wasn't sure if it would be a requirement, thanks for extending it to the crtc compatibility thing too
<emersion>
hm, i thought restricting each overlay plane to a single CRTC was what the patchset was already doing
<emersion>
i should probably stop talking before looking at the code :P
Danct12 has quit [Ping timeout: 480 seconds]
<danvet>
jani, dim still managed to parse the drm-misc-next pull at least :-)
godvino has quit [Quit: WeeChat 3.6]
<danvet>
melissawen, yeah there's largely two kinds of hw, those where planes are fixed to a single crtc, and those where they're freely reassignable
<danvet>
since the latter might result in some fun bugs maybe best left to a 2nd step (and with a vkms_config knob perhaps too?)
Danct12 has joined #dri-devel
robobub_ has quit []
Danct12 has quit [Ping timeout: 480 seconds]
<melissawen>
hmm.. mixed feelings
<melissawen>
but I'll write some comments in the thread
<danvet>
melissawen, I think for now I'd keep the num_planes as indicating a per-crtc value, that's probably simplest
<danvet>
maybe even keep that when they're freely assignable, lots of displays have tons of planes
<alyssa>
danvet: lina: I assume our plan for DCP (Digital Content Protection) is yarr mate?
<alyssa>
It's a type of South American tea.
<danvet>
alyssa, I don't care either way
<danvet>
I don't judge what people get high on :-P
<danvet>
dolphin, merged everything to drm-next, up to you whether you want for intel-ci to first approve the result in drm-tip or backmerge right away
<jannau>
is there a use case which requires hdcp but not some form of protected video decoding? I wasn't planning to reverse-engineer hdcp support in dcp (apple display), at least for now. there are so many more important things to do first
vliaskov has quit [Remote host closed the connection]
vliaskov has joined #dri-devel
<alyssa>
jannau: HDCP being High DCP, for when you drink too much yarr mate?
* alyssa
will stop shitposting
rszwicht has joined #dri-devel
<alyssa>
jannau: Ostensibly, no. In practice there are some broken HDMI monitors that want HDCP even when not displaying DRM video.
<alyssa>
It's not something I've personally encountered, nor do I particularly want us to carry those patches.
<alyssa>
But rumour has it that they exist.
<daniels>
so they'll refuse to display YUV streams unless they're accompanied by HDCP, or?
<alyssa>
daniels: I thought I heard this rumour from you, if I'm making stuff up I'd be glad to be wrong
<daniels>
alyssa: it's certainly news to me
<alyssa>
oh
<alyssa>
then i'm definitely making stuff up
* alyssa
awkwardly shuffles away
<daniels>
I mean there's a ton of breakage around HDMI but that one would be novel
<daniels>
ah, maybe this was it: 'It's actually a net gain for freedom lovers. I've encountered A/V gear that only does 48 kHz, 16 bit stereo (or less) PCM if you don't do HDCP. Negotiate HDCP and you can do bitstream audio, or the full 192 kHz 8 channel 24 bit PCM.'
<alyssa>
Yeah. Comment on your blog post from someone I don't know, something you said to me, potato tomato
<alyssa>
:p
<daniels>
by the transitive property, I am the internet
<alyssa>
Yes of course
<tzimmermann>
melissawen, emersion, why would you want to move planes among crtcs? seems over-complex to me. especially for a pure software driver with no hardware constraints
<emersion>
tzimmermann: some hw supports it, so it's good if we can test it
jewins has joined #dri-devel
<daniels>
yeah, RPi is the usual poster child for this, but there's definitely some other hardware around that can as well
<emersion>
sometimes you have 4 planes which can move between CRTCs, they are all on CRTC 1, but your fancy video player is on CRTC 2
<emersion>
so you want to make use of the planes on that CRTC 2
<daniels>
I wouldn't say it would be my first priority for vkms, but it does make sense to do at some point at least
<emersion>
yeah, it's more of an advanced feature tbh
<tzimmermann>
for testing, i see.
<tzimmermann>
i'd start with multiple outputs and static planes TBH.
<tzimmermann>
you can only bind a plane to a single crtc at the same time. and the exact limits depend on the HW. is there really much to test with vkms?
<tzimmermann>
just checked: there's a possible_crtcs mask in struct drm_plane. that's certainly something to test
<tzimmermann>
if there were a max_planes limit for each crtc and the device as a whole, the drm core might be able to validate HW limits automatically
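Restricting an overlay plane to a single CRTC goes through exactly that possible_crtcs mask at plane init time; a sketch for vkms (placeholder names and a single-format list, not the actual patchset code):

    #include <drm/drm_crtc.h>
    #include <drm/drm_plane.h>
    #include <drm/drm_fourcc.h>

    static const uint32_t vkms_overlay_formats[] = { DRM_FORMAT_XRGB8888 };

    /* Bind this overlay plane to one CRTC via the possible_crtcs bitmask
     * (bit N = CRTC with index N). */
    static int vkms_overlay_init_for_crtc(struct drm_device *dev,
                                          struct drm_plane *plane,
                                          struct drm_crtc *crtc,
                                          const struct drm_plane_funcs *funcs)
    {
            return drm_universal_plane_init(dev, plane,
                                            drm_crtc_mask(crtc), funcs,
                                            vkms_overlay_formats,
                                            ARRAY_SIZE(vkms_overlay_formats),
                                            NULL, DRM_PLANE_TYPE_OVERLAY,
                                            NULL);
    }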
vliaskov has quit [Remote host closed the connection]
vliaskov has joined #dri-devel
fxkamd has joined #dri-devel
<alyssa>
I'm feeling really deletionist
rasterman has quit [Quit: Gettin' stinky!]
fxkamd has quit [Remote host closed the connection]
Haaninjo has joined #dri-devel
fxkamd has joined #dri-devel
lemonzest has quit [Quit: WeeChat 3.6]
kzd has joined #dri-devel
yuq825 has left #dri-devel [#dri-devel]
<karolherbst>
alyssa: hey, wanna delete some code?
<danvet>
lina, is IntoGEMObject the base class trait I got confused about?
Sachiel_ is now known as Sachiel
<danvet>
about the lookup_handle discussion
<alyssa>
karolherbst: Yippee!
<karolherbst>
how much work do you want to do before you are able to delete a bunch of code?
<danvet>
dolphin, if you haven't backmerged yet I'm about to merge rodrigo's pull
<danvet>
so perhaps hold off
<alyssa>
karolherbst: Preferably none!
<alyssa>
:p
tobiasjakobi has joined #dri-devel
<karolherbst>
mhhhh
tobiasjakobi has quit []
<alyssa>
what do you want to delete?
<alyssa>
is it gallium/drivers/tegra
<alyssa>
you should def delete it
<karolherbst>
was thinking about clover
<karolherbst>
but that works as well
<karolherbst>
fight with thierry over it
<alyssa>
Ooooh deleting clover sounds like fun for the whole family
<karolherbst>
yeah
<karolherbst>
just needs some stuff to be fixed first
<karolherbst>
like landing HMM support (ugh, but I really just have to test and land it) and getting radeonsi/r600 supported
<alyssa>
:D
<karolherbst>
but
<karolherbst>
it's 16k loc
<karolherbst>
I'll look into radeonsi this weekend and see if it can be enabled already or not
bmodem has joined #dri-devel
<karolherbst>
I have no GPU to check with r600
<daniels>
aiui gerddie is the only one who ever touches r600, and he's away for a few weeks
<tagr>
dcbaker: at this point I don't care when it goes in, I just want it off my plate, to be honest =)
<tagr>
daniels: huh... didn't realize anyone else was running into that, but glad I could help
bmodem has quit [Ping timeout: 480 seconds]
Zopolis4_ has quit []
kts has joined #dri-devel
Duke`` has joined #dri-devel
lemonzest has joined #dri-devel
tzimmermann has quit [Quit: Leaving]
Haaninjo has quit [Quit: Ex-Chat]
Haaninjo has joined #dri-devel
<alyssa>
I wonder how much faster NIR would be if it weren't so darn chunky
<dottedmag>
alyssa: Sounds funny, given that Nir is a pretty common name in Israel.
<alyssa>
dottedmag: I don't speak Hebrew
<tursulin>
robclark: in the latest version setting of deadline is no longer unconditional but depends on non-zero timeout in the caller? Does it fix clvk like that?
<tursulin>
*still
<tursulin>
s/fix/improve/ I guess
<robclark>
tursulin: yes, that is sufficient for clvk while not boosting someone who is just querying if a syncobj is signaled
<tursulin>
hm timeout zero means just query somehow and not infinite wait?
<robclark>
right
<tursulin>
robclark: think I am okay with that, I was more concerned about the state of things in the version which had os_time_get_absolute_timeout(0) passed unconditionally to drm_syncobj_timeline_wait
<tursulin>
I don't have a good view on how clFlush then translates to some VK API in clvk
<tursulin>
clFinish actually
<tursulin>
I guess it is not passing zero but "infinite future" in the implementation?
<alyssa>
But just deleting if_uses doesn't actually look too bad
<alyssa>
and adding a `bool is_if` to nir_src to distinguish
<robclark>
tursulin: I'm not sure _exactly_ what the call path from cl is (mattst88 might know).. but it is at least waiting so it must be using some non-zero timeout ;-)
<alyssa>
actually I typed out a good chunk of that patch before bothering to search for prior art :p
<gfxstrand>
alyssa: Yeah, deleting if_uses would be amazing.
<alyssa>
gfxstrand: So no objection to doing that without the jump_if stuff?
<alyssa>
deleting 16 bytes from nir_ssa_def at the cost of 1 byte in nir_src is still a win
<gfxstrand>
Yeah
<alyssa>
(and could steal a bit from is_ssa)
<gfxstrand>
alyssa: It's not even 1B, put it next to is_ssa and it'll get lost in the padding
<gfxstrand>
Evil thought: Use the bottom bit of the pointer. :)
<gfxstrand>
Same for reg/ssa
<alyssa>
I did think about that yes
<alyssa>
One thing at a time
JohnnyonFlame has joined #dri-devel
thellstrom has quit [Ping timeout: 480 seconds]
<jenatali>
:O How did a dzn test job finish in 1:38... that's insanely fast
<gfxstrand>
alyssa: We could do that for ssa/reg as well with ssa having 0 in the bottom bit, of course.
<gfxstrand>
alyssa: Sounds dangerous... I kinda love it. :evil_grin:
<gfxstrand>
But let's start with the bool. It doesn't bloat anything (gets lost in the padding) and should be relatively safe.
<alyssa>
yep yep working on it
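The low-bit trick being floated is ordinary pointer tagging; in generic C (not NIR code) it looks like this, and only works because the pointee is at least 2-byte aligned so the bottom bit is always free:

    #include <stdbool.h>
    #include <stdint.h>

    /* Stash one flag in the bottom bit of a suitably aligned pointer. */
    static inline void *tag_ptr(void *p, bool flag)
    {
        return (void *)((uintptr_t)p | (uintptr_t)flag);
    }

    static inline bool ptr_tag(const void *p)
    {
        return (uintptr_t)p & 1;
    }

    static inline void *untag_ptr(void *p)
    {
        return (void *)((uintptr_t)p & ~(uintptr_t)1);
    }

The cost is that every dereference has to go through untag_ptr(), which is the "dangerous" part being grinned about.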
vliaskov has quit [Remote host closed the connection]
vliaskov has joined #dri-devel
rasterman has joined #dri-devel
tursulin has quit [Ping timeout: 480 seconds]
vliaskov has quit [Remote host closed the connection]
vliaskov has joined #dri-devel
<alyssa>
ok, typed something out. completely untested but should work right? :P
<alyssa>
it compiles, must be perfect
mbrost has joined #dri-devel
<gfxstrand>
Ship it!
<gfxstrand>
:P
jewins has quit [Quit: jewins]
jewins has joined #dri-devel
djbw_ has joined #dri-devel
Guest10151 has quit [Remote host closed the connection]
kts has quit [Quit: Konversation terminated!]
krushia has joined #dri-devel
lynxeye has quit [Quit: Leaving.]
jaganteki has joined #dri-devel
Farabi has joined #dri-devel
Farabi has quit [autokilled: We suspect this host of participating in a botnet. Mail support@oftc.net if you feel this in error. (2023-04-06 17:21:26)]
vliaskov has quit [Remote host closed the connection]
vliaskov has joined #dri-devel
mbrost has quit [Read error: Connection reset by peer]
pixelcluster_ has joined #dri-devel
<jenatali>
Ugh. "CI is taking too long" by literal seconds
<jenatali>
That timeout needs to be longer
pixelcluster has quit [Ping timeout: 480 seconds]
ahajda has quit [Quit: Going offline, see ya! (www.adiirc.com)]
mbrost has joined #dri-devel
<daniels>
jenatali: it doesn't; stuff needs to get fixed
<jenatali>
That'd work too
<daniels>
60 minutes is already way too long; if we push it to 90 minutes, then people will come to rely as much on 90 as they do on 60, and we'll only be able to merge 18 MRs per day
<anholt>
*attempt to merge :)
<daniels>
ha!
<daniels>
one of the biggest issues atm is that stoney+tgl+jsl have wildly unreliable UART, so using SSH for that is nearly done
<anholt>
daniels: I assume our a618 hangs waiting for serial are also unreliable uart
<jenatali>
Fair
<daniels>
anholt: yeah, presumably
jaganteki has quit [Remote host closed the connection]
<daniels>
anholt: but speaking of papering over breakage and a618, I was just staring at https://gitlab.freedesktop.org/mesa/mesa/-/pipelines/850042 - if you look at the fdno section there, a ton of a618 jobs failed with deqp complaining of empty caselist sets and 'is your caselist out of sync with your deqp binary?' - which all succeeded on retry. has anything changed in deqp-runner lately which would cause that?
<anholt>
dEQP error: FATAL ERROR: Failed to initialize dEQP: Empty test case name
<anholt>
well that's new
<anholt>
2023-04-06 16:03:23.669193: zstd: /*stdout*\: No space left on device
<jenatali>
Oof
<anholt>
this is something where we should probably be grepping for that string and highlighting it in our ci daily reports. or something even more emphatic.
<alyssa>
I thought the CI rootfs was ephemeral?
<anholt>
alyssa: it's on nfs.
<alyssa>
uff
<alyssa>
OK
<anholt>
turns out you can't do much if you load the rootfs in ram. and loading rootfs onto real storage is s l o w
<anholt>
(also, how many rmw cycles do you get on your emmc? wanna find out?)
<alyssa>
Yeah..
<alyssa>
anholt: It looks like shader-cache is enabled in CI
<alyssa>
It should, not be
<anholt>
it needs to be enabled in ci, both for coverage purposes and for perf purposes.
<alyssa>
(at least, grepping .gitlab-ci I see no hits for shader-cache and it's default on i think)
<alyssa>
coverage purposes I could see... perf how?
<anholt>
turns out caching shaders is useful for being able to get through the ctses faster.
DPA- has joined #dri-devel
<anholt>
they do like to repeat themselves a little bit.
<alyssa>
hrm.
<alyssa>
shader cache in a tmpfs then?
<daniels>
anholt: oooooohhhh, ok
<daniels>
let me go chase that up
<anholt>
alyssa: yes, that is what we do.
<alyssa>
got it
<daniels>
anholt: thanks for having better comprehension than I do :)
DPA has quit [Ping timeout: 480 seconds]
<daniels>
koike: ^ 'no space on device' is probably something to add to our infra-error greps
<alyssa>
(Mostly asking because failing to disable the shader cache caused my CTS runs to tank as soon as I filled up.)
<karolherbst>
so uhm.. I think I'm ready to merge radeonsi support for rusticl 🙃
<alyssa>
karolherbst: r-b
<karolherbst>
but now I have to figure out meson stuff...
<karolherbst>
dcbaker: so, I think I know why I'm seeing this `isystem` problem with meson. The compiler args from dependencies are fetched differently for bindgen and C/C++ targets
<karolherbst>
so I end up with different flags
<dcbaker>
are you getting isystem when you don't expect it, or not getting isystem when you do?
<karolherbst>
but I've also seen that atm bindgen assumes it's generating bindings for a C file as well. So I think it might need some C vs C++ detection.
<karolherbst>
isystem when I don't
<karolherbst>
but the issue is really that the flags are different compared to when the same dep is used for a normal c/c++ target
<dcbaker>
yeah, I think when I initially implemented the bindgen wrapper C++ was still in the "don't use this yet" stage so I punted it
<karolherbst>
right
<karolherbst>
well.. by default it autodetects it by the file suffix
<karolherbst>
otherwise you can always explicitly state it as a compiler flag
<karolherbst>
not sure if meson should care anyway
<dcbaker>
I think we want to detect it by suffix, we already have the code to do that.
<karolherbst>
sure, but you can always pass it as a c arg into the target
<karolherbst>
but then it's weird anyway
<karolherbst>
but that's not a problem atm. The problem which needs to be solved first is the isystem one, because that breaks compiling with a custom llvm once I pass the dep in
anujp has quit [Remote host closed the connection]
<dcbaker>
I suspect there are things we're already not handling correctly that we need that information for, namely I think there's now an add_project_dependencies() which takes a language argument and we should probably at the very least be pulling out the compiler flags for those
<karolherbst>
ahh
<dcbaker>
so knowing whether we need C or C++ would matter there
<dcbaker>
I don't think that I can land the b_ndebug thing until after 1.1.0 ships (rc2 just came out)
<karolherbst>
yeah, that's fine
<karolherbst>
that one isn't really critical
<dcbaker>
I've been kinda off the ball on stuff, just a lot of RL stress right now
<karolherbst>
fair
<dcbaker>
but I am personally annoyed that it didn't get in since it's on the milestone, has multiple non-maintainer reviews, and is not complicated
<dcbaker>
The isystem stuff on the other hand is probably a bug fix, so we might be able to get that done sooner
gouchi has joined #dri-devel
ngcortes has joined #dri-devel
<karolherbst>
yeah.. I can try to figure it out, but I think the solution here would be to generate the compiler flags identically to how it's done in C/C++ targets
<karolherbst>
kinda weird that it's handled differently tbh
rasterman has quit [Quit: Gettin' stinky!]
<karolherbst>
anyway.. I need it for shader caching stuff, because... uhhh.. any other way would be even more terrible
Duke`` has quit []
vliaskov has quit [Remote host closed the connection]
Duke`` has joined #dri-devel
vliaskov has joined #dri-devel
Kayden has quit [Quit: to JF]
ngcortes has quit [Ping timeout: 480 seconds]
gio has quit [Quit: WeeChat 3.7.1]
mbrost has quit [Ping timeout: 480 seconds]
gio has joined #dri-devel
<DemiMarie>
lina: my recommendation is to include enough protection in the kernel to make sure that the display controller cannot be crashed
rszwicht has quit [Ping timeout: 480 seconds]
jkrzyszt has quit [Ping timeout: 480 seconds]
mbrost has joined #dri-devel
vliaskov has quit [Ping timeout: 480 seconds]
Haaninjo has quit [Quit: Ex-Chat]
Duke`` has quit [Ping timeout: 480 seconds]
DPA2 has joined #dri-devel
DPA- has quit [Ping timeout: 480 seconds]
DPA has joined #dri-devel
DPA2 has quit [Read error: Connection reset by peer]
DPA has quit []
mbrost has quit []
DPA has joined #dri-devel
ngcortes has joined #dri-devel
<jenatali>
Wee -600 lines from my CI xfails :)
Kayden has joined #dri-devel
a-865 has quit [Ping timeout: 480 seconds]
fab has quit [Quit: fab]
APic has quit [Ping timeout: 480 seconds]
APic has joined #dri-devel
<alyssa>
Woo
pcercuei has quit [Quit: dodo]
danvet has quit [Ping timeout: 480 seconds]
a-865 has joined #dri-devel
ngcortes has quit [Remote host closed the connection]
ngcortes has joined #dri-devel
gouchi has quit [Remote host closed the connection]
K`den has joined #dri-devel
Kayden has quit [Remote host closed the connection]
nchery is now known as Guest10204
nchery has joined #dri-devel
K`den is now known as Kayden
<alyssa>
help i'm trapped in a coccinelle factory
bluetail4 has joined #dri-devel
bluetail has quit [Ping timeout: 480 seconds]
ngcortes has quit [Ping timeout: 480 seconds]
konstantin has joined #dri-devel
konstantin_ has quit [Ping timeout: 480 seconds]
ngcortes has joined #dri-devel
MajorBiscuit has quit [Ping timeout: 480 seconds]
nchery has quit [Quit: Leaving]
<alyssa>
gfxstrand: OK, next deflate-NIR idea
<alyssa>
AFAICT, there is no NIR pass that uses both pass_flags and instr indexing
<alyssa>
Why not merge into a single field?
<alyssa>
that is, the few passes that need instructions indexed use their pass flags for the index (with pass flags expanded to 32-bit to compensate)
<alyssa>
..wait, that doesn't actually save anything because of packing, still have instr type dammit
<anholt>
alyssa: since you're in the area, you know about pahole, right?
<alyssa>
yes that's what I was looking at that made me think about it :)
<alyssa>
and has now alerted me to a much bigger waste, nir_reg_src being embedded inside nir_src directly
<alyssa>
it's in a union with nir_ssa_def*, but one's a pointer (8 bytes) and the other is the whole structure (24 bytes)
<alyssa>
=> we're wasting 16 bytes per nir_src on SSA compilers
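The shape of the problem, simplified from memory rather than copied from the NIR headers: the register variant of the union carries a pointer, an indirect pointer and an offset, so every nir_src pays for that even when it only holds the 8-byte SSA pointer.

    #include <stdio.h>

    struct fake_reg_src {
        void *reg;            /* nir_register *            */
        void *indirect;       /* nir_src * for indirection */
        unsigned base_offset;
    };                        /* 24 bytes on LP64 */

    union fake_src_union {
        struct fake_reg_src reg;  /* 24 bytes */
        void *ssa;                /*  8 bytes */
    };

    int main(void)
    {
        /* prints 24 vs 8 on a typical 64-bit ABI */
        printf("%zu vs %zu\n", sizeof(union fake_src_union), sizeof(void *));
        return 0;
    }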
sarnex has quit [Read error: No route to host]
<alyssa>
which is even more significant than the thing I found today
<jenatali>
:(
sarnex has joined #dri-devel
<alyssa>
The easy fix is to change to a nir_reg_src*, which fits snugly in the union at the cost of a bit more indirection when nir_reg_src is actually used
<alyssa>
The hard fix is to get rid of nir_reg_src but that's unfortunately a "later" problem. We could get most of the benefit if we got rid of register offset+indirect, but I think that might be doing something for a few backends (intel vec4, and maybe ir3)
<alyssa>
and maybe TTN and r600/sfn .. mumble
<anholt>
nir_lower_locals_to_regs says intel, r600, ir3, ntt, etc.
<alyssa>
yeah. definitely not a project for any time soon
<alyssa>
but that easy fix should be doable and get most of the win
<gfxstrand>
alyssa: Does anyone actually use the indirect crap?
<gfxstrand>
maybe ir3?
<alyssa>
gfxstrand: see 2 lines up
<alyssa>
the better question I suppose is whether it's load bearing
<gfxstrand>
alyssa: Maybe this is a good first thing to byte off to try and get rid of nir_reg_src.
<gfxstrand>
Relegate array registers to a special instruction.
<alyssa>
Yeah, that could be a decent approach
<alyssa>
Definitely not an April project. Maybe a May one
<gfxstrand>
That would probably only require touching 2-3 back-ends.
<gfxstrand>
And I'm far less worried about getting copy-prop exactly right for the array case.
<alyssa>
at least 4 backends see above
<gfxstrand>
Right
* gfxstrand
wishes the Intel vec4 back-end would die already
<gfxstrand>
We could probably make that one use load/store_scratch
ngcortes has quit [Remote host closed the connection]
ngcortes has joined #dri-devel
<alyssa>
23:37 < alyssa> the better question I suppose is whether it's load bearing
<alyssa>
I.e. are there real workloads that run on that class of hardware that have their performance materially improved from the indirects
<gfxstrand>
The Intel vec4 back-end is affected. We can't just turn on indirect lowering.
<alyssa>
is this a thing games use?
<gfxstrand>
But like I said, we can go the load/store_scratch path, we just need to wire it up.
<gfxstrand>
Yes
<gfxstrand>
sadly
<alyssa>
Boo
<alyssa>
doesn't load_scratch hit the stack..?
<gfxstrand>
You'd have to dig back through the history. This is an anholt thing from like 10 years ago.
<gfxstrand>
Yes, what the vec4 back-end does today is use load/store_scratch for arrays.
<gfxstrand>
It just does it in the back-end which is dumb.
<alyssa>
Oh. That's just wrong :p
<alyssa>
yes, we can fix that then
<alyssa>
i thought it had actual indirect access to the GPRF
<gfxstrand>
It does but we don't use it because that would be insane.
<alyssa>
Right.. if it's just a question of wiring up 2 intrinsics and calling the lowering pass, all the more reason to wean it off
<alyssa>
i've just discovered brw_clip i am closing the file
<alyssa>
this is too terrible what
<alyssa>
ok. sure. we can fix this for intel vec4 fine
<gfxstrand>
Yeah, stay clear of brw_clip.c
<gfxstrand>
There be dragons and they're hungry!
Zopolis4_ has joined #dri-devel
<alyssa>
Seemingly ir3 really does do GPRF indirect access, but also ir3 is a good backend so I'm not worried about plumbing in load_array/store_array intrinsics for it to get the same effect