ChanServ changed the topic of #panfrost to: Panfrost - FLOSS Mali Midgard + Bifrost + Valhall - Logs https://oftc.irclog.whitequark.org/panfrost - I don't know anything about WSI. That's my story and I'm sticking to it.
anholt has joined #panfrost
Danct12 has joined #panfrost
Danct12 has quit []
wolfshappen has joined #panfrost
wolfshappen has quit []
stipa is now known as Guest7513
Guest7513 has quit [Read error: Connection reset by peer]
stipa has joined #panfrost
kinkinkijkin has joined #panfrost
kinkinkijkin has quit [Quit: Leaving]
Leopold_ has quit [Remote host closed the connection]
Leopold_ has joined #panfrost
camus has joined #panfrost
camus has quit []
camus has joined #panfrost
Danct12 has joined #panfrost
chip_x has joined #panfrost
paulk has quit [Ping timeout: 480 seconds]
chipxxx has quit [Ping timeout: 480 seconds]
Danct12 has quit [Read error: Connection reset by peer]
Danct12 has joined #panfrost
karolherbst has quit [Read error: Connection reset by peer]
karolherbst has joined #panfrost
guillaume_g has joined #panfrost
Danct12 has quit [Quit: WeeChat 3.8]
Danct12 has joined #panfrost
rasterman has joined #panfrost
macc24 has quit []
macc24 has joined #panfrost
MajorBiscuit has joined #panfrost
Danct12 has quit [Ping timeout: 480 seconds]
<bbrezillon> robmur01: Re: panfrost_job_push() making all BO mappings active => this is already what we do, and I don't think there's any problem in panfrost regarding page table allocations, at least not until we decide we want to implement a VM_BIND-like ioctl()
<bbrezillon> but if we want to add a VM_BIND ioctl to pancsf, mapping/unmapping operations will be deferred to a drm_sched queue, and executed in the run_job() path, where we're not allowed to allocate in blocking mode (the shrinker / sched deadlock thing you were discussing with robclark)
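A minimal sketch of the constraint bbrezillon describes, assuming a hypothetical pancsf VM_BIND job (all pancsf_* names below are invented for illustration): once map/unmap operations are deferred to a drm_sched queue, they execute from the run_job() callback, which must not block on reclaim because of the shrinker/scheduler deadlock mentioned above.

#include <drm/gpu_scheduler.h>
#include <linux/container_of.h>
#include <linux/dma-fence.h>
#include <linux/err.h>
#include <linux/types.h>

struct pancsf_vm;

/* Hypothetical bind-job structure and mapping helper, for illustration only. */
struct pancsf_vm_bind_job {
	struct drm_sched_job base;
	struct pancsf_vm *vm;
	u64 va, size;
	struct dma_fence *done_fence;
};

int pancsf_vm_exec_bind(struct pancsf_vm *vm, u64 va, u64 size);

static struct dma_fence *pancsf_vm_bind_run_job(struct drm_sched_job *sched_job)
{
	struct pancsf_vm_bind_job *job =
		container_of(sched_job, struct pancsf_vm_bind_job, base);
	int ret;

	/*
	 * This runs from the drm_sched job-run path: blocking on reclaim here
	 * can deadlock against the shrinker, so the mapping helper must either
	 * use non-blocking allocations or rely on page tables pre-allocated at
	 * submission (job_push) time.
	 */
	ret = pancsf_vm_exec_bind(job->vm, job->va, job->size);
	if (ret)
		return ERR_PTR(ret);

	return dma_fence_get(job->done_fence);
}

static const struct drm_sched_backend_ops pancsf_vm_bind_sched_ops = {
	.run_job = pancsf_vm_bind_run_job,
	/* .timedout_job and .free_job omitted from this sketch */
};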
<robmur01> that's what I mean - job_push can preallocate page tables by just "faulting in" PTEs matching the granularity the VM_BIND wants, so that the subsequent map() is known not to need to allocate
<robmur01> if necessary, you can also prevent unmap from freeing tables by splitting the unmap into chunks smaller than the next block size up
<bbrezillon> uh, except the SUBMIT ioctl() no longer gets passed a list of BOs
<bbrezillon> (in pancsf, I mean)
<bbrezillon> oh, nevermind, you meant the VM_BIND job
<bbrezillon> if pre-faulting pages is a no-op when something is already mapped in that range, I guess that could work. That assumes we don't free page tables when unmapping a region that a later queued VM_BIND operation will map again, which forces us to keep track of all queued operations.
Leopold_ has quit [Remote host closed the connection]
<bbrezillon> robmur01: just want to make sure I understand correctly. For the pre-allocation to happen, we need to get rid of this https://elixir.bootlin.com/linux/latest/source/drivers/iommu/io-pgtable-arm.c#L484, and call the map function with iommu_prot=0, right?
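For reference, the early-out bbrezillon links to looks like this in io-pgtable-arm's map path (quoted from memory, so take the exact surrounding context as approximate):

	/* If no access, then nothing to do */
	if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE)))
		return 0;

As written, a map call with iommu_prot=0 returns before any table walk, so nothing gets allocated; pre-faulting would require either dropping/bypassing that check (e.g. via a dedicated prealloc entry point) or installing a real mapping, as discussed below.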
Leopold has joined #panfrost
rasterman has quit [Quit: Gettin' stinky!]
<robmur01> or just map *something* and immediately unmap it again
<robmur01> I am assuming this is at a point where any previous users of the given VA range have already been kicked out
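A rough sketch of that map-then-unmap idea, written against the io_pgtable_ops interface Panfrost already uses. The helper itself, and the assumption that nothing is mapped in the range and the GPU cannot touch it yet, are mine; a single map_pages()/unmap_pages() call can stop at a table boundary, so a real implementation would loop.

#include <linux/gfp.h>
#include <linux/io-pgtable.h>
#include <linux/iommu.h>
#include <linux/sizes.h>
#include <linux/types.h>

/*
 * Pre-allocate the page tables covering [iova, iova + size) by installing
 * a throwaway 4k mapping and immediately tearing it down.  4k unmaps never
 * remove a whole block-sized region, so io-pgtable keeps the intermediate
 * tables and a later map() over the same range should not need to allocate.
 * The physical address is irrelevant: the mapping is never accessed.
 */
static int prealloc_pgtables_by_dummy_map(struct io_pgtable_ops *ops,
					  u64 iova, u64 size)
{
	size_t mapped = 0;
	int ret;

	ret = ops->map_pages(ops, iova, 0 /* dummy paddr */, SZ_4K,
			     size / SZ_4K, IOMMU_READ, GFP_KERNEL, &mapped);

	/* Tear the dummy mapping down again; the tables stay allocated. */
	if (mapped)
		ops->unmap_pages(ops, iova, SZ_4K, mapped / SZ_4K, NULL);

	return ret;
}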
<bbrezillon> well, that would mean adding an ordering constraint on VM_BIND operations. vkQueueBindSparse() is asynchronous by nature, and we don't know, at submission time, what the state of the VM will be at job execution time (bind jobs have explicit deps, and the state of the VM might have changed by the time we get to execute the VM_BIND operation)
<bbrezillon> one option to make sure we always have a correct VM state at job submission time is to: 1/ have only one binding queue, and 2/ execute synchronous binds on this queue
<bbrezillon> but ideally, I'd like to relax that constraint :-/
<bbrezillon> I guess exposing just one sparse binding queue is fine, but forcing synchronous binds to wait on async ones is not great
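A sketch of the single-queue option from a few lines up (every pancsf_* name is hypothetical): a synchronous bind is just an operation pushed on the same, single bind queue, followed by a wait on its fence, so it always observes the VM state left by the operations queued before it. The drawback bbrezillon points out falls straight out of this: the sync bind has to wait for every async bind already queued.

#include <linux/dma-fence.h>
#include <linux/err.h>

/* Hypothetical single bind queue and operation descriptor. */
struct pancsf_vm_bind_queue;
struct pancsf_vm_bind_op;

struct dma_fence *
pancsf_vm_bind_queue_push(struct pancsf_vm_bind_queue *queue,
			  const struct pancsf_vm_bind_op *op);

static int pancsf_vm_bind_sync(struct pancsf_vm_bind_queue *queue,
			       const struct pancsf_vm_bind_op *op)
{
	struct dma_fence *fence;
	long ret;

	/* Queue the op behind everything already submitted on the queue. */
	fence = pancsf_vm_bind_queue_push(queue, op);
	if (IS_ERR(fence))
		return PTR_ERR(fence);

	/* "Synchronous" simply means waiting for that op to execute. */
	ret = dma_fence_wait(fence, true);
	dma_fence_put(fence);
	return ret;
}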
<bbrezillon> maybe that's just a non-issue, because if something was mapped already, we can figure it out before doing the dummy map/unmap dance to get things pre-allocated
<bbrezillon> it still feels a bit hack-ish to have to map/unmap in that case, but oh well
<robmur01> ah, OK, that makes things more tricky - say the previous user had a 2MB block where VM_BIND wants 4k pages; a table would definitely need allocating, but can't be "faulted in" until the previous mapping is actually removed
<bbrezillon> robmur01: that still requires some flag to prevent freeing a page table on an unmap, if we know other bind ops targeting the range covered by this page table are queued
<bbrezillon> robmur01: and there's this 2MB vs 4k granularity thing, but I thought you assumed we'd use 4k granularity for everything to avoid splits in the unmap path
soreau has quit [Ping timeout: 480 seconds]
<robmur01> guess I've been assuming that that would only apply to sparse mappings, and we'd still have "normal" BOs simply mapped for their lifetime, which could use blocks
<robmur01> if you never use blocks ever at all, and split unmaps at less than 2MB, then io-pgtable will essentially never free any tables, so any VA which has been used at least once will be reusable without allocation
<robmur01> it's churning VAs back and forth between the two granularities that poses most of the awkwardness
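Spelled out as code, the "never free tables" policy might look like the following. This is a sketch against the io-pgtable ops, assuming a 4k granule, no block mappings in the sparse VA space, and the usual 2MB next-level block size.

#include <linux/io-pgtable.h>
#include <linux/sizes.h>
#include <linux/types.h>

/*
 * Unmap [iova, iova + size) strictly in 4k granules.  No unmap ever covers
 * a full block, so io-pgtable never frees an intermediate table, and the
 * VA range can be mapped again later without allocating.
 */
static void unmap_keep_tables(struct io_pgtable_ops *ops, u64 iova, u64 size)
{
	u64 unmapped = 0;

	/* unmap_pages() may stop at a table boundary, so loop until done. */
	while (unmapped < size) {
		size_t ret = ops->unmap_pages(ops, iova + unmapped, SZ_4K,
					      (size - unmapped) / SZ_4K, NULL);
		if (!ret)
			break;
		unmapped += ret;
	}
}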
<robmur01> TBH I'm starting to think at this point it might just be easier to map with GFP_NOWAIT instead of GFP_KERNEL :)
<robmur01> it would seem horribly wrong if that still had a shrinker dependency somehow...
<robmur01> and if memory really is that low then just letting jobs fail until reclaim has had a chance to run by itself doesn't seem *too* terrible
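That alternative might look roughly like this (only the io_pgtable_ops call is real API, the helper is hypothetical): run the deferred map with GFP_NOWAIT so table allocations never enter reclaim, and simply fail the bind job with -ENOMEM when memory is that tight.

#include <linux/gfp.h>
#include <linux/io-pgtable.h>
#include <linux/iommu.h>
#include <linux/sizes.h>
#include <linux/types.h>

/*
 * Map from the run_job() path with no pre-allocation at all: GFP_NOWAIT is
 * passed down to io-pgtable's table allocations, so this never sleeps in
 * reclaim and cannot recurse into the shrinker from this context.  On
 * failure, roll back whatever was mapped and let the caller fail the job;
 * reclaim can make progress on its own in the meantime.  Partial-map
 * looping is omitted, as in the sketches above.
 */
static int vm_bind_map_nowait(struct io_pgtable_ops *ops, u64 iova,
			      phys_addr_t paddr, u64 size, int prot)
{
	size_t mapped = 0;
	int ret;

	ret = ops->map_pages(ops, iova, paddr, SZ_4K, size / SZ_4K,
			     prot, GFP_NOWAIT, &mapped);
	if (ret && mapped)
		ops->unmap_pages(ops, iova, SZ_4K, mapped / SZ_4K, NULL);

	return ret;
}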
<bbrezillon> robmur01: mind joining #dri-devel, so we can discuss it with people who know what can and can't be done in the run_job() path?
<robmur01> oh alright then... it's already 4PM so I don't think today is a "finish the PMU driver" day anyway :P
Leopold has quit [Remote host closed the connection]
Leopold_ has joined #panfrost
Leopold_ has quit [Remote host closed the connection]
guillaume_g has quit []
Leopold_ has joined #panfrost
soreau has joined #panfrost
soreau has quit [Read error: Connection reset by peer]
hanetzer1 has joined #panfrost
soreau has joined #panfrost
hanetzer has quit [Ping timeout: 480 seconds]
soreau has quit [Ping timeout: 480 seconds]
soreau has joined #panfrost
greenjustin has joined #panfrost
soreau has quit [Ping timeout: 480 seconds]
MajorBiscuit has quit [Ping timeout: 480 seconds]
soreau has joined #panfrost
DPA has quit [Ping timeout: 480 seconds]
Guest7507 has quit []
Daanct12 has joined #panfrost
Daanct12 is now known as Danct12
DPA has joined #panfrost
DPA- has joined #panfrost
DPA has quit [Ping timeout: 480 seconds]
DPA- has quit [Ping timeout: 480 seconds]
DPA has joined #panfrost
DPA has quit [Quit: ZNC 1.8.2+deb2+b1 - https://znc.in]
DPA has joined #panfrost
warpme_____ has joined #panfrost
Leopold_ has quit []
Leopold_ has joined #panfrost
Leopold_ has quit [Remote host closed the connection]
Leopold_ has joined #panfrost
anholt has quit [Quit: Leaving]
anholt has joined #panfrost