ChanServ changed the topic of #panfrost to: Panfrost - FLOSS Mali Midgard + Bifrost + Valhall - Logs https://oftc.irclog.whitequark.org/panfrost
Daanct12 has joined #panfrost
Daanct12 has quit [Quit: WeeChat 4.5.2]
Daanct12 has joined #panfrost
ckeepax has quit [Quit: Smoke me a kipper I will be back for breakfast]
rasterman has joined #panfrost
<bbrezillon> robclark: because otherwise your gpuvm state is ahead of the actual state, meaning you can't easily fast-track synchronous VM binds, and you also can't do things like "give me the BO at this VA"
<bbrezillon> oh, and if something fails in the middle, it's also not trivial to go back to the actual VM state (you have to undo gpuvm modifications, which you can't really do if you didn't keep the old mappings around in case of remapping)
<bbrezillon> for all these reasons, I decided to keep things simple and just do the gpuvm update when the page tables are updated. And yes, that means we can't optimistically plan ahead how many page tables we'll need, but I decided I could live with that at the time
warpme has joined #panfrost
warpme has quit []
warpme has joined #panfrost
darandomcube has joined #panfrost
darandomcube has quit [Remote host closed the connection]
ckeepax has joined #panfrost
warpme has quit []
pbrobinson has joined #panfrost
pbrobinson has quit [Ping timeout: 480 seconds]
warpme has joined #panfrost
Daaanct12 has joined #panfrost
Daanct12 has quit [Read error: Connection reset by peer]
Daaanct12 has quit [Quit: WeeChat 4.5.2]
Daanct12 has joined #panfrost
warpme has quit []
alyssa has joined #panfrost
pbrobinson has joined #panfrost
warpme has joined #panfrost
alyssa has left #panfrost [#panfrost]
pbrobinson has quit [Ping timeout: 480 seconds]
<robclark> bbrezillon: hmm, I was kinda thinking to make sync binds a userspace problem (ie. just wait on syncobj/fence)... but having an accurate at-the-time BO list for gpu devcoredump is something I'll have to think about
sally has quit [Quit: ZNC 1.9.1+deb2+b2 - https://znc.in]
sally has joined #panfrost
<bbrezillon> robclark: sync binds are not really a problem of waiting on a syncobj/fence on the userspace side, the problem is that you have to wait for all previously queued async binds to land, because your update might depend on the state of the page-table tree as seen by GPUVM
<bbrezillon> you can play tricks to figure out if any of the queued sync binds interfere with the async binds, but that complicates things quite a lot
<robclark> bbrezillon: right, but is that actually a problem? Like, I think the common case would be wanting async bind/unbind, so not sure sync is a thing to optimize for. I expect userspace to batch up updates and then push to kernel before a cmd submit
ity has quit [Quit: WeeChat 4.5.1]
ity has joined #panfrost
<bbrezillon> robclark: hm, we don't do that, we just VM_MAP synchronously when the resource is created. async will only be used for sparse bindings going through vkQueueBindSparse, because sparse bindings aren't hooked up in panvk yet
<robclark> if userspace is not using sparse they can still use the old synchronous ioctls... although I guess that is not an option for panvk
warpme has quit []
Daanct12 has quit [Quit: WeeChat 4.5.2]
warpme has joined #panfrost
pbrobinson has joined #panfrost
<bbrezillon> you mean VM_BIND(sync=true)?
<bbrezillon> there's no old VM_MAP ioctl in panthor, we just have VM_BIND, which supports both a sync and async mode
<bbrezillon> and panfrost doesn't even let you control the virtual address space, so...
<robclark> no, I mean the pre-VM_BIND ioctl... I still have to support existing userspace, so those aren't going away
warpme has quit []
warpme has joined #panfrost
warpme has quit []
robmur01 has quit [Remote host closed the connection]
robmur01 has joined #panfrost
warpme has joined #panfrost
warpme has quit []
loki666 has joined #panfrost
<loki666> Hi, I'm currently trying to get panfrost working on the Allwinner H616; this SoC seems to require a specific sequence when enabling/disabling the power domain
<loki666> the GPU is the only device attached to the pd, so when the GPU goes idle, genpd disables it
<loki666> for this to work properly, the clocks need to be disabled and the reset control needs to be asserted before disabling the pd, and the reverse order applies when resuming
<loki666> I have a WIP patch, but I'd like some guidance on this
rasterman has quit [Quit: Gettin' stinky!]
<CounterPillow> I don't have strong opinions on this because I don't work on the panfrost bits, but shouldn't behaviour specific to the power domain for this SoC be handled in the SoC's power domain driver? Otherwise you might have to start implementing similar stuff in every driver that might make use of it
<jernej> the H616 SoC has only a GPU power domain; everything else is in a common domain
<jernej> and yes, similar patterns are already implemented in most Allwinner drivers, but it's needed regardless, since almost all peripherals are in the same power domain
alarumbe has quit [Read error: No route to host]
alarumbe has joined #panfrost
averne has quit [Quit: quit]
averne has joined #panfrost
cphealy has quit []