ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
Kayden has joined #dri-devel
ybogdano is now known as Guest7217
Guest7126 is now known as ybogdano
krushia has quit [Remote host closed the connection]
<DavidHeidelberg[m]>
anholt_: CI passed, time to merge new mesa-swrast machines?
<gfxstrand>
airlied: RE: Vulkan Beta...
ngcortes has joined #dri-devel
<gfxstrand>
airlied: Yeah, part of the XML parser changes was to unify the skipping of provisional stuff.
<gfxstrand>
I didn't realize we had video in the tree at that time.
<gfxstrand>
(Might be good for CI to build-test that....)
<gfxstrand>
airlied: IDK what the right thing to do is. I kinda want to keep as much beta stuff disabled by default as we can. That way we're guaranteed that we never ship beta stuff accidentally.
<gfxstrand>
But, also, I don't want to make the build system a mess by passing an extra flag into every single python generator.
<airlied>
gfxstrand: not sure we can build stuff without the define at all
<airlied>
so I don't see how we could accidentally ship stuff
<gfxstrand>
Yeah
<gfxstrand>
Let me see what breaks when I drop the check
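For context on why the define already acts as a guard: the Vulkan headers only expose provisional functionality when VK_ENABLE_BETA_EXTENSIONS is defined (vulkan.h pulls in vulkan_beta.h behind that guard), so a default build cannot reference beta video types at all. A minimal sketch; the encode struct below is just one example of a then-provisional type:

```c
#include <vulkan/vulkan.h>  /* includes vulkan_beta.h only if VK_ENABLE_BETA_EXTENSIONS is defined */

#ifdef VK_ENABLE_BETA_EXTENSIONS
/* Only compiles when the build explicitly opts in to beta extensions. */
static const VkVideoEncodeInfoKHR encode_info = {
    .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_INFO_KHR,
};
#endif
```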
nchery has quit [Remote host closed the connection]
nchery has joined #dri-devel
<airlied>
like I'd wire it up if I had a clue how, but my python brain wasn't fully across it
<airlied>
I did consider a ci job for it, but it seemed like overkill until I add driver code to the tree that I care about under beta
<airlied>
thus far I've avoided merging beta encoding for radv
<gfxstrand>
that's okay
<airlied>
since I think the spec is a bit too churny; also, I felt doing it in CI was unfairly pushing the maintenance of beta builds onto others when it might not be trivial
<airlied>
every header update would require radv changes in the beta code which might be very non-trivial
JohnnyonFlame has joined #dri-devel
Kayden has quit [Quit: leave office]
stuarts has quit []
krushia has joined #dri-devel
columbarius has joined #dri-devel
co1umbarius has quit [Ping timeout: 480 seconds]
kzd has joined #dri-devel
nchery has quit [Quit: Leaving]
kzd has quit [Quit: kzd]
kzd has joined #dri-devel
Guest7217 has quit [Ping timeout: 480 seconds]
idr has quit [Ping timeout: 480 seconds]
appusony has joined #dri-devel
ngcortes has quit [Remote host closed the connection]
ngcortes has joined #dri-devel
kzd has quit [Quit: kzd]
windleaves has joined #dri-devel
heat has quit [Read error: No route to host]
heat has joined #dri-devel
wind has quit [Ping timeout: 480 seconds]
khfeng_ has quit [Ping timeout: 480 seconds]
alyssa has joined #dri-devel
<alyssa>
my MR is waiting for sanity
<alyssa>
oddly poetic
khfeng_ has joined #dri-devel
FireBurn has quit [Ping timeout: 480 seconds]
ybogdano is now known as Guest7231
ngcortes has quit [Read error: Connection reset by peer]
khfeng has joined #dri-devel
khfeng_ has quit [Ping timeout: 480 seconds]
khfeng has quit [Remote host closed the connection]
khfeng has joined #dri-devel
kzd has joined #dri-devel
khfeng has quit [Ping timeout: 480 seconds]
rsalvaterra_ has joined #dri-devel
rsalvaterra is now known as Guest7237
rsalvaterra_ is now known as rsalvaterra
Guest7238 has quit [Ping timeout: 480 seconds]
Leopold__ has joined #dri-devel
bmodem has joined #dri-devel
Leopold has quit [Ping timeout: 480 seconds]
Zopolis4 has joined #dri-devel
smiles has quit [Ping timeout: 480 seconds]
<alyssa>
eric_engestrom: oh wow, after enough banging my head on things, I think I got clang-format CI working
<alyssa>
bruteforce works!
heat has quit [Ping timeout: 480 seconds]
alyssa has quit [Quit: leaving]
bmodem1 has joined #dri-devel
bmodem1 has quit []
bmodem has quit [Ping timeout: 480 seconds]
bmodem has joined #dri-devel
Company has quit [Read error: Connection reset by peer]
fxkamd has quit []
Kayden has joined #dri-devel
bgs has joined #dri-devel
khfeng has joined #dri-devel
aravind has joined #dri-devel
kts has joined #dri-devel
fab has joined #dri-devel
kts has quit [Quit: Konversation terminated!]
dn^ has joined #dri-devel
ahajda has joined #dri-devel
JohnnyonFlame has quit [Read error: Connection reset by peer]
danvet has joined #dri-devel
<eric_engestrom>
alyssa: awesome! I'll review it now :)
tzimmermann has joined #dri-devel
fab has quit [Quit: fab]
agd5f_ has joined #dri-devel
kts has joined #dri-devel
agd5f has quit [Ping timeout: 480 seconds]
robobub has quit []
kzd has quit [Quit: kzd]
sghuge has quit [Remote host closed the connection]
sghuge has joined #dri-devel
YuGiOhJCJ has joined #dri-devel
jljusten has quit [Ping timeout: 480 seconds]
hansg has joined #dri-devel
fab has joined #dri-devel
tursulin has joined #dri-devel
<bbrezillon>
danylo, danvet: Hi. I've been looking in more detail at the nouveau VM_BIND stuff, and there's still one thing that's not clear to me. Is drm_sched_backend_ops::run_job() considered to be part of the signaling path? IOW, can we allocate from this callback (most/all? drivers seem to allocate their fence object from there with GFP_KERNEL flags), and if we can allocate these fences with
<bbrezillon>
GFP_KERNEL, what makes things different for page table allocations?
<danvet>
yeah those are all driver bugs unfortunately
<danvet>
I thought amd reworked to fix this, but the rework never got out of amd into other drivers
<bbrezillon>
ok, so it's indeed considered to be in the signaling path
<danvet>
yeah
<bbrezillon>
is this documented?
<danvet>
no
<danvet>
I did a patch to add signalling annotations and broke everything
<danvet>
so I think best to do is to resurrect that, but add a knob to disable it per-driver
vliaskov has joined #dri-devel
<danvet>
and then set that knob for all current drivers except amdgpu
<danvet>
I think I've discussed this with thomas hellstrom or mlankhorst or mbrost in the past too
<danvet>
opt-out might need to be per-callback even since iirc amdgpu still has issues with console_lock in the tdr path
<javierm>
danvet, bbrezillon: I believe that topic was also discussed by Christian and Lina in the thread about the agx driver
<danvet>
yeah I need to catch up on that
<MrCooper>
DemiMarie: "kernels from distros like RHEL are often out of date" do not let the base kernel version fool you, most of the code is pretty close to upstream (in RHEL 8 & 9 ATM, older versions are more or less frozen at this point)
<javierm>
danvet: I read everything but have to admit that I only understand half of it :)
<danvet>
I also need to come up with some idea for the annotations in the display helpers
<danvet>
because they fire in a bunch of common but unfixable cases, for something that hardly anyone is using ...
<danvet>
javierm, yeah that's the usual end result when talking about memory reclaim
<danvet>
utter confusion by the people who got lost halfway through
<danvet>
and terminally sad faces by the leftover few :-/
<javierm>
I see... it's not only me then
<danvet>
someone I pointed at the plumbers discussion with könig, gfxstrand and me from 2 years ago summarized it with "I don't really understand, but your facial expressions really worried me"
<bbrezillon>
Unless I'm missing something, for simple drivers (panfrost, etnaviv, ...), it's mostly a matter of pre-allocating the fence object at submit time, and filling it at run_job() time, so no allocation happens in there
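A minimal sketch of that pattern, assuming a hypothetical panfrost-like driver (the names are made up, this is not actual panfrost code): the fence is embedded in the job and initialised at submit time, so run_job() has nothing left to allocate.

```c
#include <linux/container_of.h>
#include <linux/dma-fence.h>
#include <linux/spinlock.h>
#include <drm/gpu_scheduler.h>

struct my_job {
	struct drm_sched_job base;
	struct dma_fence hw_fence;	/* embedded: nothing to allocate in run_job() */
	spinlock_t fence_lock;
};

extern const struct dma_fence_ops my_fence_ops;	/* hypothetical */

static void my_hw_submit(struct my_job *job)
{
	/* hypothetical: write to the ring, ring the doorbell, etc. */
}

/* Submit/ioctl path: blocking work is still allowed here. */
static void my_job_init_fence(struct my_job *job, u64 ctx, u64 seqno)
{
	spin_lock_init(&job->fence_lock);
	dma_fence_init(&job->hw_fence, &my_fence_ops, &job->fence_lock,
		       ctx, seqno);
}

/* drm_sched_backend_ops::run_job, i.e. the signalling critical section:
 * no GFP_KERNEL allocations, just start the hardware and hand back a
 * reference to the fence prepared at submit time. (The embedded fence
 * does mean the job's lifetime has to cover outstanding fence refs.) */
static struct dma_fence *my_run_job(struct drm_sched_job *sched_job)
{
	struct my_job *job = container_of(sched_job, struct my_job, base);

	my_hw_submit(job);
	return dma_fence_get(&job->hw_fence);
}
```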
jljusten has joined #dri-devel
<bbrezillon>
but no matter how we solve it, it should probably be added to the drm_sched_backend_ops documentation
jkrzyszt has joined #dri-devel
<danvet>
bbrezillon, yup
<danvet>
(on both actually)
<danvet>
hm if amdgpu switched to shadow buffer then the console lock thing is also sorted now, and they should work with the full annotations
<bbrezillon>
are there other callbacks that are supposed to run in the signaling path?
<danvet>
all of them actually :-)
<danvet>
but run_job and timedout_job are the ones where people usually get it wrong
<danvet>
bbrezillon, this is why my first patch annotates the entire scheduler thread
<bbrezillon>
yeah, but I'll probably leave that to you. Just wanted to update the docs, and fix panfrost :P
<danvet>
well with the annotations you get much better testing generally
<bbrezillon>
unless you want to update the docs along with the annotation
<bbrezillon>
sure, I'm not claiming we shouldn't add the annotation, just saying it's worth documenting it too
ice9 has joined #dri-devel
<bbrezillon>
because the current situation is, we don't have the annotation, and people keep making the same mistake
<bbrezillon>
actually, this came up while I was discussing the page table pre-allocation stuff with robmur01 on #panfrost, and he rightfully pointed me to all those drivers allocating stuff in the run_job() path, and this is where I got confused, not knowing if this was part of the signaling critical section and all drivers were buggy, or if it was actually allowed to allocate from there
kts has quit [Quit: Konversation terminated!]
<danvet>
bbrezillon, yeah probably the docs are a good first step, you're volunteering?
<danvet>
maybe in the main struct text a link to the dma-fence signalling doc section for further details, and then a one-liner in each callback that they're signalling critical sections and therefore very limited in what they can do, specifically that they can't allocate memory with GFP_KERNEL
<danvet>
and then commit message with the note that sadly most drivers get this wrong
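Roughly the kind of wording being suggested, sketched against the run_job() entry in include/drm/gpu_scheduler.h (a sketch, not the actual patch):

```c
/**
 * @run_job: Called to execute the job once all of its dependencies have
 *	signalled.
 *
 *	This is called from the dma-fence signalling critical section (see
 *	the dma-fence signalling annotations documentation for details), so
 *	it is very limited in what it may do: in particular it must not
 *	allocate memory with GFP_KERNEL or otherwise block on anything that
 *	may itself wait on a dma_fence.
 */
struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
```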
Lightsword_ has left #dri-devel [#dri-devel]
Lightsword has joined #dri-devel
<bbrezillon>
danvet: yep, I can do that, and fix panfrost along the way
jaganteki has quit [Remote host closed the connection]
ice9 has quit [Ping timeout: 480 seconds]
fab has joined #dri-devel
vliaskov_ has joined #dri-devel
Zopolis4 has quit []
vliaskov has quit [Ping timeout: 480 seconds]
ice99 has joined #dri-devel
fab has quit [Quit: fab]
fab has joined #dri-devel
kts has joined #dri-devel
MajorBiscuit has joined #dri-devel
jaganteki has joined #dri-devel
devilhorns has joined #dri-devel
jaganteki has quit [Remote host closed the connection]
devilhorns has quit []
_xav_ has joined #dri-devel
Company has joined #dri-devel
xroumegue has quit [Ping timeout: 480 seconds]
ice99 has quit [Ping timeout: 480 seconds]
fab has quit [Ping timeout: 480 seconds]
jaganteki has joined #dri-devel
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
srslypascal is now known as Guest7270
srslypascal has joined #dri-devel
Guest7270 has quit [Ping timeout: 480 seconds]
mohamexiety has joined #dri-devel
pochu has quit [Quit: leaving]
kts has quit [Quit: Konversation terminated!]
kts has joined #dri-devel
<daniels>
this is basically like The Purge but for lavapipe/llvmpipe
<zmike>
haha
kts has quit [Quit: Konversation terminated!]
rasterman has quit [Ping timeout: 480 seconds]
bmodem has quit [Ping timeout: 480 seconds]
<daniels>
fwiw it's looking like our remaining showstopper - apart from missing swrast - is a GitLab bug where it just doesn't hand jobs out to runners. if you see anything stuck in pending for wild amounts of time like 30-40min it's that. if the pipeline is still running other jobs, you can cancel & retry the pending-forever one and it'll sail straight through
<daniels>
I've collected enough about it to figure out how to unbreak it, so we should have a monumentally stupid workaround in place later this afternoon
<javierm>
tzimmermann: thanks for your explanations, makes sense. Feel free to add my r-b to patch #1 too if you resend/apply
<tzimmermann>
javierm, thank you. i do expect that some code can be shared at some point. i simply don't want to get ahead of myself
<javierm>
tzimmermann: yes, makes sense
MajorBiscuit has quit [Ping timeout: 480 seconds]
jaganteki has quit [Remote host closed the connection]
heat has joined #dri-devel
zehortigoza has quit [Remote host closed the connection]
oneforall2 has quit [Quit: Leaving]
oneforall2 has joined #dri-devel
<DavidHeidelberg[m]>
where can I grep for wayland-dEQP-EGL.functional.negative_api.create_pixmap_surface? piglit/deqp?
<daniels>
DavidHeidelberg[m]: VK-GL-CTS
<DavidHeidelberg[m]>
thx
<daniels>
though afaict that should be fine as we return EGL_BAD_PARAMETER
<daniels>
is it failing?
<DavidHeidelberg[m]>
daniels: it's fixed by building mesa with wayland; but when mesa is built without wayland, it should be skipped, not failed, I guess?
<daniels>
it should still be passing on x11
<DavidHeidelberg[m]>
oh, then I just worked around the issue by enabling wayland :D
<daniels>
heh yeah, I think that's just broken CTS
rasterman has joined #dri-devel
MajorBiscuit has joined #dri-devel
Dr_Who has joined #dri-devel
fxkamd has joined #dri-devel
zehortigoza has joined #dri-devel
Ahuj has quit [Ping timeout: 480 seconds]
ice9 has joined #dri-devel
aravind has quit [Ping timeout: 480 seconds]
FireBurn has joined #dri-devel
ice9 has quit [Ping timeout: 480 seconds]
tursulin has quit [Ping timeout: 480 seconds]
tobiasjakobi has joined #dri-devel
fab has joined #dri-devel
tobiasjakobi has quit []
<DemiMarie>
bbrezillon: is the whole dma-fence design the actual problem?
<DemiMarie>
Where core MM stuff gets blocked on async GPU work instead of paging the memory out and having the GPU take an IOMMU fault.
<DemiMarie>
Any chance this can be fixed in drivers other than AMD?
tzimmermann has quit [Quit: Leaving]
alyssa has joined #dri-devel
<alyssa>
daniels: i wrote a CI thing and it *works*?!
<alyssa>
send help something must have gone terribly wrong
Duke`` has joined #dri-devel
gouchi has joined #dri-devel
<zmike>
I'm here to help, who needs a spot
krushia has quit [Quit: Konversation terminated!]
<alyssa>
daniels: also, re unassigning, this is what "Needs merge"
<alyssa>
helps with :D
<alyssa>
"backlog of MRs to assign to Marge on a rainy day"
<zmike>
would be nice to have a cron job to sweep that a couple times a day
<bbrezillon>
DemiMarie: not sure I follow. Did I say the dma-fence design was the problem?
<DemiMarie>
bbrezillon: no, but it seems to me that the dma-fence design will keep causing bugs until it is changed
<daniels>
alyssa: hrm?
lynxeye has quit [Quit: Leaving.]
<alyssa>
daniels: which part
<daniels>
alyssa: 'wrote a CI thing'?
<alyssa>
daniels: clang-format lint job
<alyssa>
i thought i would just end up waiting for eric or okias to do it :p
<daniels>
oh, nice
<alyssa>
(..next step is getting panfrost clang-format-clean so I can flip it on there because apparently it isn't right now, whoops)
<bbrezillon>
DemiMarie: I'm no expert on these things, but my understanding is that it's not the dma_fence design itself that's problematic, but more the memory reclaim logic. With mem shrinkers waiting on dma_fence objects to release memory, and waitable allocation happening in the job submission path, you might just deadlock.
<Hazematman>
This MR adds support for the `get_screen_fd` API to all the gallium drivers, so I need some people to look over changes to those gallium drivers
alyssa has left #dri-devel [#dri-devel]
<gfxstrand>
The dma_fence design itself is fine. It's designed that way for very good reasons. There are problems we need to solve but they're more around how things have become tangled up inside drm than they are about dma_fence.
<bbrezillon>
DemiMarie: if you have a way to swap out some memory accessed by in-flight jobs, you might be able to unblock the situation, but I'm sure this 'no allocation in the scheduler path' rule is here to address the problem where a job takes too long to finish and the shrinker decides to reclaim memory anyway.
<bbrezillon>
I think the problem is that drm_sched exposes a fence to the outside world, and it needs a guarantee that this fence will be signaled, otherwise other parties (the shrinker) might wait for an event that's never going to happen
<gfxstrand>
Yup
<bbrezillon>
that comes from the fact that it's not the driver fence that's exposed to the outside world, but an intermediate object that's created later on, when the scheduler calls ->run_job(), and is indirectly signaled by the driver fence
<gfxstrand>
Once a fence has been exposed, even internally within the kernel, it MUST signal in finite time.
stuarts has joined #dri-devel
<gfxstrand>
If you allocate memory, that could kick off reclaim which can then have to wait on the GPU and you're stuck.
<bbrezillon>
so the issue most drivers have is that they allocate this driver fence in the ->run_job() path with GFP_KERNEL (a waitable allocation), which might kick the GPU driver shrinker, which in turn will wait on the fence exposed by the drm_sched, which will never be signaled because the driver is waiting for memory to allocate its driver fence :-)
<bbrezillon>
what gfxstrand said :-)
<robclark>
embedding the hw fence in the job struct is probably easy enough to avoid that.. but then when you start calling into other subsys (iommu, runpm, etc) it starts getting a bit more terrifying
dviola has joined #dri-devel
MajorBiscuit has quit [Quit: WeeChat 3.6]
jaganteki has joined #dri-devel
FireBurn has quit [Quit: Konversation terminated!]
krushia has joined #dri-devel
<bbrezillon>
robclark: yeah, that's actually where the whole discussion started. I was trying to see if we could pass a custom allocator to the pgtable/iommu subsystem for page table allocation, so we can pre-allocate pages for the page table update, and avoid allocation in the run_job() path
<bbrezillon>
didn't really think of the runpm stuff, but if allocations can happen in the rpm_get_sync() path, that will be challenging too...
<bbrezillon>
I mean, blocking allocations, of course
<robclark>
bbrezillon: maybe spiff out iommu_iotlb_gather a bit more.. to also handle allocations
<bbrezillon>
yep, was pestering robmur01 with that yesterday :-)
<vsyrjala>
at least acpi is absolutely terrifying in terms of doing memory allocations in runtime pm paths/etc.
<robclark>
the gather struct is already used to defer freeing pages to optimize tlb flushing
<bbrezillon>
the other option would be to just re-implement the page table logic
kzd has joined #dri-devel
Ahuj has joined #dri-devel
mohamexiety has quit []
<robclark>
that is something I'd prefer avoiding
<bbrezillon>
which we might have to do if we want to use some fancy TTM helpers and get advanced memory reclaim involving reclaims of page-tables, not just memory backing GEM objects (didn't really check what the TTM TT abstraction looks like)
<robclark>
I did have a branch somewhere a while back that plumbed the gather struct more in the map path (since map can also trigger frees and tlb flushes)
<robclark>
we don't need to store pgtables in vram, so I don't think that is useful
<robclark>
(and not really sure how good ttm is in general with reclaim)
<bbrezillon>
don't know how good TTM reclaim is, but I'm sure panfrost reclaim is not great :-)
<robclark>
does panfrost have reclaim other than just madv?
<bbrezillon>
nope
Guest7231 is now known as ybogdano
<bbrezillon>
just ditched the whole reclaim stuff in pancsf, hoping someone could come up with a good reclaim-implementation solution for new drivers :-)
<bbrezillon>
and then I realized TTM had some of that
<robclark>
so I did add common lru and lru iteration
<robclark>
drm_gem_lru_scan()
<bbrezillon>
that's a good start
<robclark>
use that and reclaim is mostly not too bad except random places that might allocate memory
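A rough sketch of hooking a driver shrinker up to that common LRU (names hypothetical, msm-style usage; the exact drm_gem_lru_scan() argument list has varied between kernel versions, so treat the call below as approximate and check include/drm/drm_gem.h):

```c
#include <linux/container_of.h>
#include <linux/shrinker.h>
#include <drm/drm_gem.h>

/* Hypothetical driver private data: a single LRU of purgeable objects,
 * maintained with drm_gem_lru_move_tail() as objects become idle. */
struct my_drm_private {
	struct drm_gem_lru purgeable;
	struct shrinker shrinker;
};

/* Called by drm_gem_lru_scan() for each object it manages to lock; drop
 * the backing pages and return true if the object was actually purged. */
static bool my_gem_purge(struct drm_gem_object *obj)
{
	/* driver specific: unmap, unpin and free obj's backing pages */
	return true;
}

static unsigned long my_shrinker_scan(struct shrinker *s,
				      struct shrink_control *sc)
{
	struct my_drm_private *priv =
		container_of(s, struct my_drm_private, shrinker);
	unsigned long remaining = 0;

	/* Walk the LRU and purge up to sc->nr_to_scan objects; the
	 * "remaining" out-parameter exists on recent kernels only. */
	return drm_gem_lru_scan(&priv->purgeable, sc->nr_to_scan,
				&remaining, my_gem_purge);
}
```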
<bbrezillon>
I'll have a look, thanks for the pointer
<robclark>
_but_ I'm not doing iommu map/unmap from scheduler like a VM_BIND impl would do.. that kinda forces the issue of hoisting allocation out of io-pgtable.. but it is a useful thing to do because we can be more clever about tlb invalidates that way
<bbrezillon>
robclark: just curious, where do the page tables live if they're not in vram?
<robclark>
there is no vram ;-)
<robclark>
it is all just ram
<bbrezillon>
yeah, I mean, it's the same on pancsf
<bbrezillon>
but it's still memory that's accessed by the GPU
<robclark>
not _really_
<bbrezillon>
okay, the MMU in front of the GPU :)
<robclark>
it is memory accessed by the (io)mmu which might happen to be part of the gpu
<robclark>
right
<bbrezillon>
but I think I get where you were going with 'we don't need to reclaim pgtable mem'
<bbrezillon>
tearing down a mapping will automatically release the pgtable memory, since it's all synchronous and dealt with on the CPU side
<robclark>
hmm, that is kinda immaterial.. we don't need the pgtable to be in special memory that isn't system memory.. but you can still get into deadlock, because allocations that don't set GFP_ATOMIC or similar flags that allow the allocation to fail can recurse into shrinker
<bbrezillon>
sure, we still need to pre-allocate
<robclark>
yup
<bbrezillon>
guess the question is, should we count pgtable memory in the shrinker
<robclark>
(or, you do for the VM_BIND case.. if you do iommu map somewhere else where it is safe to allocate that is fine)
<bbrezillon>
because currently we don't
<robclark>
don't count.. but that isn't the problem
<robclark>
the problem is that the fence signaling allows other pages to be reclaimed
<robclark>
but the allocation of pages for the pgtable can trigger the shrinker, which could depend on other gpu pages becoming available to free
<robclark>
so, one idea.. which 90% solves it (and at least reduces the # of pages you need to pre-allocate)
<bbrezillon>
sure, I'm just thinking about why TTM keeps track of page table memory in its reclaim logic. The alloc in the signalling path is completely orthogonal and needs to be addressed for async VM_BIND
<robclark>
hmm.. or nm.. I was going to say do iommu map synchronously but unmap from run_job(). but that doesn't quite work
<robclark>
TTM is probably doing that because if you have vram (dGPU) you have pgtable in vram
<bbrezillon>
unmap might need to allocate too
<robclark>
reading pgtable over pci is not going to work out great
<robclark>
right, both map and unmap can free and alloc... but limited cases so you probably could put an upper bound on # of pages you'd need to pre-allocate
<bbrezillon>
there's an upper bound for map operations too. I mean, in addition to what you'll always need for the map anyway (the more you map, the more levels of page tables you'll have to pre-allocate, but compared to what you'll need for the map operation itself, it should be negligible)
<bbrezillon>
so maybe allocating for the worst case is not such a big deal
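To put a rough number on that, here is a worst-case bound assuming a 4-level format with 4 KiB tables of 512 entries each (an assumption for illustration, not a specific io-pgtable format):

```c
#include <linux/math.h>

/*
 * Upper bound on the number of new table pages needed to map n_pages
 * leaf pages in one contiguous VA range: each level needs at most
 * ceil(span / 512) tables, plus one extra in case the range straddles
 * a table boundary. For n_pages = 1 << 20 (4 GiB of 4 KiB pages) this
 * comes out to ~2059 table pages, i.e. roughly 0.2% of the mapping itself.
 */
static unsigned long max_pgtable_pages(unsigned long n_pages, int levels)
{
	unsigned long total = 0, span = n_pages;
	int i;

	for (i = 0; i < levels; i++) {
		span = DIV_ROUND_UP(span, 512) + 1;
		total += span;
	}
	return total;
}
```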
kts has quit [Quit: Konversation terminated!]
<bbrezillon>
I guess you can run into problems if you have a lot of small async map/unmap operations, because then the number of pages you allocate for the worst case can be much bigger than the number of pages you'd allocate if you knew the state of the VA space
<robclark>
probably don't free pages you didn't happen to use for current map/unmap and keep them avail for next time?
agd5f_ has joined #dri-devel
kts has joined #dri-devel
agd5f has quit [Ping timeout: 480 seconds]
kzd has quit [Quit: kzd]
<bbrezillon>
sure, we can keep a pool of free pages around to speed up allocation
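A sketch of that pool idea, with entirely hypothetical names (upstream io-pgtable has no hook for handing it pre-allocated pages at this point, which is what the discussion is about): reserve in the submit path, consume in the run_job()/signalling path.

```c
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/spinlock.h>

struct pt_page_pool {
	struct list_head pages;		/* pages reserved at submit time */
	spinlock_t lock;
};

/* Submit path: blocking allocation is allowed, fill the pool up to the
 * worst case computed for this map/unmap operation. */
static int pt_pool_reserve(struct pt_page_pool *pool, unsigned long count)
{
	while (count--) {
		struct page *p = alloc_page(GFP_KERNEL | __GFP_ZERO);

		if (!p)
			return -ENOMEM;
		spin_lock(&pool->lock);
		list_add(&p->lru, &pool->pages);
		spin_unlock(&pool->lock);
	}
	return 0;
}

/* run_job()/signalling path: never hits the page allocator, only the pool. */
static struct page *pt_pool_get(struct pt_page_pool *pool)
{
	struct page *p;

	spin_lock(&pool->lock);
	p = list_first_entry_or_null(&pool->pages, struct page, lru);
	if (p)
		list_del(&p->lru);
	spin_unlock(&pool->lock);
	return p;
}
```

Pages left over after the operation can stay in the pool for the next map/unmap, as suggested above, instead of being freed.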
mohamexiety has joined #dri-devel
lkw has joined #dri-devel
kzd has joined #dri-devel
<daniels>
I'm aware our shared runners are completely starved; this is exacerbated by the job distribution being unfair, which I'm typing up a patch for
kts has quit [Quit: Konversation terminated!]
lkw has quit [Quit: leaving]
<daniels>
ok, hopefully that's done now
iive has joined #dri-devel
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
<DemiMarie>
bbrezillon gfxstrand: the idea I had is that the shrinker could call some sort of MMU notifier callback, which is not allowed to block or fail. That callback is responsible for unmapping the needed MMU/IOMMU entries, flushing any necessary TLB, and canceling any not-yet-submitted jobs. Jobs that are already in progress will just fault when they try to access paged-out memory, at which point it is up to the GPU driver to recover.
<DemiMarie>
Of course, this is equivalent to requiring all GPUs to support pageable memory on the host side.
Zopolis4 has joined #dri-devel
<DemiMarie>
More generally, having memory reclaim blocked on userspace-provided shaders seems very unsafe. I’m curious why drivers are not required to pin all memory that a shader might have access to, unless the GPU supports retryable page faults.
ybogdano is now known as Guest7297
ybogdano has joined #dri-devel
<robclark>
gpu faults because the system is overcommitted on memory isn't going to be hugely popular ;-)
<robclark>
I mean, your average 2GB or 4GB chromebook is under constant memory pressure ;-)
<robclark>
demand-paging on the gpu side is something that some gpus could do.. although compared to the CPU, where you are stalling one task, stalling the gpu for a fault is stalling 100s or 1000s of tasks.. so not sure if that is exactly great
<DemiMarie>
robclark: what about requiring memory to be pre-pinned (and marked unavailable for shrinking)? Also, can't even `GFP_KERNEL` fail?
<robclark>
GFP_KERNEL in practice can't fail for small allocations, IIRC
smiles has quit [Ping timeout: 480 seconds]
<robclark>
you can in the shrinker skip over memory that has unsignaled fences.. and this is probably what you want to do for early stages of shrinking. But under more extreme memory pressure you want to be able to wait for things that will become avail to reclaim in the near future
Ahuj has quit [Ping timeout: 480 seconds]
ngcortes has joined #dri-devel
smiles has joined #dri-devel
gio has quit []
smiles has quit [Ping timeout: 480 seconds]
<airlied>
bbrezillon: I think the latest nouveau bits have refactored that or are in the process of refactoring
hansg has quit [Quit: Leaving]
<airlied>
DemiMarie: 'pin all the things' isn't really a winning method; eventually everyone pins all of memory with chrome tabs
<airlied>
dma-fence is pretty much pin all memory until a shader has finished with it, and you know it's finished when it signals a dma-fence
ybogdano is now known as Guest7305
Guest7297 is now known as ybogdano
<airlied>
dakr: ^ probably should read above just to reconfirm
JohnnyonFlame has joined #dri-devel
<dakr>
airlied, bbrezillon: Yes, it's not even entirely fixed in V2.
<DemiMarie>
airlied: So there are a couple problems there.
<DemiMarie>
First is that a shader can loop forever. Timeout detection can handle that, but it runs into another problem, which is that resetting the GPU denies service to other legitimate users of the GPU.
<airlied>
DemiMarie: yes welcome to GPUs
<DemiMarie>
The second is that some workloads (notably compute) have legitimate long-running shaders.
<dakr>
In V2 I still have the page table cleanup in the job_free() callback. The page table cleanup needs to take the same lock as the page table allocation does. Since job_free() can stall the job's run() callback, this is still a potential deadlock.
<dakr>
I'll fix this up in V3.
<airlied>
DemiMarie: for compute to do long-running jobs you have to have page faults
<airlied>
at which point you don't need to wait for dma fences, in theory
<DemiMarie>
airlied: which GPUs support page faults?
<airlied>
so compute jobs that are long running are not meant to use dma-fences
<airlied>
DemiMarie: for compute jobs, I think all current gen, most last gen
<DemiMarie>
the other option would be to use IOMMU tricks to remap the pages behind the GPU’s back
<airlied>
not sure where it falls over
<DemiMarie>
airlied: what about for graphics jobs?
<airlied>
nope, graphics jobs aren't pagefault friendly
<DemiMarie>
airlied: why?
<airlied>
and you don't really want to take a pagefault in your fragment shader
<airlied>
too many fixed function pieces
<DemiMarie>
and those cannot take page faults?
<airlied>
eventually they might get to where it could work, but it will always be horrible
<airlied>
not usually
<airlied>
texture engines and ROPs generally
<DemiMarie>
what about letting the IOMMU remap pages transparently to the GPU?
Guest7305 has quit [Ping timeout: 480 seconds]
<airlied>
don't think we always have an iommu, and I'm not sure if you could do that in any race free way
<DemiMarie>
If the GPU could retry requests one could use break-before-make
<DemiMarie>
airlied: GPUs really need to get better at hostile multi-tenancy
<airlied>
they have been getting better, just not sure when they'll be finished
<DemiMarie>
what progress has been made?
<airlied>
you couldn't even pagefault a few gens ago :-)
<DemiMarie>
If I were designing a GPU I would be very tempted to have each shader core run its own little OS (provided by the driver) with full interrupt and exception handling support.
<DemiMarie>
Or at least have the host be able to migrate pages on the GPU.
Duke`` has quit [Ping timeout: 480 seconds]
ybogdano is now known as Guest7312
ybogdano has joined #dri-devel
<robclark>
then you'd have llvmpipe :-P
<robclark>
but I wouldn't share a gpu between hostile tenants
<dottedmag>
DemiMarie: don't you have
<dottedmag>
_special_ GPUs for hostile multitenancy
<robclark>
there is server class stuff that supports sr-iov
<robclark>
I'm sure there is a whole big class of u-arch info leak issues hiding there ;-)
<airlied>
generally they are often compute only
<DemiMarie>
robclark: personally, I would be fine with using LLVMpipe, but unfortunately for Qubes users, the GUI toolkit and application writers are not.
<DemiMarie>
dottedmag robclark airlied: Intel Gen12+ iGPUs support SR-IOV, though driver support for it is not (yet) upstreamed. That said, Qubes OS needs to work (and have decent performance) even without such hardware.
<DemiMarie>
Just how long does it take to reset a GPU? Because at least Apple M1+ GPUs reset so quickly that one could reset after every frame and still have a usable desktop.
apinheiro has quit [Ping timeout: 480 seconds]
<robclark>
probably on the order of a few ms .. so might be ok for desktop workloads but probably not for games.. I don't have #'s for reset but resuming the gpu is ~1.5-2ms for modern adrenos..
<robclark>
if resetting the gpu was the fastest way to do it then qcom wouldn't have this "zap" shader mechanism to take the gpu out of protected mode, since there are the same information leak concerns there
gouchi has quit [Remote host closed the connection]
vliaskov_ has quit [Remote host closed the connection]
agd5f_ has quit []
agd5f has joined #dri-devel
<agd5f>
DemiMarie, All gfx9 derived GPUs can support recoverable GPU page faults, but that was dropped in gfx10 and newer because it takes a lot of die area and the performance generally makes games unusable. If you want fast games, everything needs to be resident
<agd5f>
you can also preempt to deal with long shaders
<agd5f>
if there is memory pressure, stop the jobs, deal with the pressure, let them run again
fab has quit [Quit: fab]
kzd has quit [Quit: kzd]
kzd has joined #dri-devel
apinheiro has joined #dri-devel
ahajda has quit [Quit: Going offline, see ya! (www.adiirc.com)]
danvet has quit [Ping timeout: 480 seconds]
<DemiMarie>
agd5f robclark: thanks for taking your time to answer my questions!
<DemiMarie>
airlied dottedmag too
<robclark>
np
avocicltb^ has joined #dri-devel
Zopolis4 has quit []
ybogdano is now known as Guest7321
Guest7312 is now known as ybogdano
<anholt_>
daniels: I'm supposed to be off work today, but I've taken down the swrast runners again just now. Going to need someone competent to maintain them if we're going to keep them, so we're asking around, but we'll probably need to move the load back to equinix for a bit. :/