JohnnyonFlame has quit [Read error: Connection reset by peer]
frieder has joined #etnaviv
pcercuei has joined #etnaviv
lynxeye has joined #etnaviv
chewitt has joined #etnaviv
adjtm has joined #etnaviv
chewitt has quit [Quit: Zzz..]
adjtm is now known as Guest5472
adjtm has joined #etnaviv
Guest5472 has quit [Ping timeout: 480 seconds]
<mwalle>
marex: and did the patches help?
chewitt has joined #etnaviv
chewitt has quit []
chewitt has joined #etnaviv
<marex>
mwalle: which patches, the MMU ones on which I was accidentally not CCed ?
<mwalle>
marex: yes
<marex>
mwalle: Please don't conflate all those MMU fault issues by implying they have the same cause. You wouldn't do this with segfaults on the CPU, would you?
<marex>
mwalle: ... although yes, I haven't seen any more of those MMU faults with the patches applied, but ... ^
<mwalle>
marex: i sense sarcasm :p
JohnnyonFlame has joined #etnaviv
<mwalle>
austriancoder: if anything, I'll drop the move of the two DMA mask assignments in the first patch
<lynxeye>
marex: Allow me to answer in the same style. You did notice that those MMU context fixes are fixing things in the runtime resume path, the very same spot you identified as being an issue on your system, right? You also noticed that the i.MX6QP fault issue you argued hard to take into account was reported in 2017, more than 2 years before the MMU contexts were even introduced in etnaviv.
<lynxeye>
So both things can be true: a) the MMU context fixes are fixing your specific issue, and b) not all MMU faults are the same
chewitt has quit [Quit: Zzz..]
JohnnyonF has joined #etnaviv
JohnnyonFlame has quit [Ping timeout: 480 seconds]
JohnnyonF has quit [Ping timeout: 480 seconds]
<mwalle>
aah, reading that email from lynxeye: why does the IMX8MM have two Vivante blocks?
<mwalle>
GC520 and GC600 (or something like that)
<cphealy>
Yea, one 3D GPU and one 2D GPU.
<marex>
mwalle: one gpu2d for blit acceleration, the other gpu3d for actual 3d rendering ?
<mwalle>
why wouldn't you put them into one?
<mwalle>
or are these features exclusive?
<marex>
mwalle: so you can gate off the one you don't need, so it doesn't consume power ?
<lynxeye>
^ ... at least you could do that if you didn't screw up the power-domain integration. But that's one of the reasons given in the RM.
<lynxeye>
And they don't share a FE if they are in separate GPU devices, so they can work independently without e.g. 3D rendering stalling the shared FE.
<mwalle>
mh, ok
<mwalle>
that power-gating argument isn't that convincing, given you could also gate parts of the GPU in the first place
<marex>
Please don't conflate all those power domain implementation issues by implying they happen on all SoCs. You wouldn't do this with power domains on MX8MP, would you?
<marex>
... sorry, I had to :-)
<lynxeye>
mwalle: No, the GPU can only clock-gate, not power-gate.
<marex>
but that power domain / reset bug is MX8MM specific
* mwalle
is now wondering about the leakage current of an unclocked IP
<lynxeye>
marex: 2 GPUs is also 8MM specific. ;) 8MQ, 8MN and 8MP only have one GPU.
<marex>
mwalle: if you gate the IP completely off, it consumes basically nothing ; you care if it's a battery-operated device
<marex>
if all you need is to blit a buffer, there's no need for the 3D GPU power hog
<mwalle>
marex: sure, I was just wondering how much we are talking about.. to get some grip; surely you want to save as much as possible
<marex>
mwalle: probably just measure it, I think there are some figures in the application notes though
<marex>
not sure how much you can really trust them
<lynxeye>
mwalle: On the order of a few dozen mW IIRC
<mwalle>
mh between unclocked and unpowered?
* mwalle
calls it a day
<mwalle>
need to repair my car :o)
<marex>
well once you conflate in DRAM utilization, it is likely gonna be in the hundreds of mW
JohnnyonFlame has joined #etnaviv
frieder has quit [Remote host closed the connection]
<marex>
lynxeye: oh, btw., you can conflate my TB on those etnaviv mmu patches, on stm32mp1
<lynxeye>
marex: Nice, so no more hangs? How often did they happen for you before?
<cphealy>
lynxeye: I thought the 8MP also had a 2D GPU (GC520L)
<marex>
lynxeye: Please don't conflate hangs and MMU faults
<marex>
lynxeye: there were no hangs, only MMU faults, and those are no longer present it seems
<marex>
with glmark2, it was easy to trigger one rather often; I got a few dumps after a few hours of running
<lynxeye>
marex: :P If the exception bit is set, an MMU fault always causes a GPU hang.
<lynxeye>
The GPU recovery isn't triggered by an MMU fault, but by the resulting GPU hang.
JohnnyonF has joined #etnaviv
JohnnyonFlame has quit [Ping timeout: 480 seconds]
<marex>
lynxeye: well, why don't you write all this siloed knowledge into some documentation for others to learn from ?
<lynxeye>
marex: I didn't think this was special hidden knowledge. It's right there in the kernel driver: the only path leading to GPU recovery is via the job timeout triggered from the DRM scheduler.
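[Aside: a rough C sketch of the recovery path lynxeye describes, loosely following the job-timeout handler in drivers/gpu/drm/etnaviv/etnaviv_sched.c. The helpers marked "hypothetical" and the exact signatures are approximations that vary between kernel versions, so treat this as an illustration of the flow, not the actual driver code.]

    /* Sketch: GPU recovery is reached only through the DRM scheduler's
     * job-timeout callback, never directly from the MMU fault interrupt.
     * An MMU fault with the exception bit set stalls the GPU, the job
     * stops making progress, the scheduler timeout fires, and only then
     * is recovery performed. */
    static enum drm_gpu_sched_stat sketch_timedout_job(struct drm_sched_job *sched_job)
    {
            struct etnaviv_gpu *gpu = job_to_gpu(sched_job);   /* hypothetical helper */

            /* If the frontend DMA address / completed fence still advanced,
             * the GPU is slow but not hung: let the job keep running. */
            if (fe_still_advancing(gpu))                       /* hypothetical helper */
                    return DRM_GPU_SCHED_STAT_NOMINAL;

            drm_sched_stop(&gpu->sched, sched_job);    /* park the scheduler       */
            etnaviv_core_dump(gpu);                    /* capture a devcoredump    */
            etnaviv_gpu_recover_hang(gpu);             /* reset and reinit the GPU */
            drm_sched_resubmit_jobs(&gpu->sched);
            drm_sched_start(&gpu->sched, true);

            return DRM_GPU_SCHED_STAT_NOMINAL;
    }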
<marex>
lynxeye: so is the MMU stuff ?
<lynxeye>
marex: The MMU is a pretty straightforward paging structure. It just gets interesting/complex due to the GPU executing the commands asynchronously to the CPU, and thus to the driver state.
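[Aside: a minimal sketch of the two-level paging structure being described. The 1024-entry master TLB with 4 MB granules over 1024-entry, 4 KB-page slave TLBs matches the Vivante MMUv2 layout handled by etnaviv_iommu_v2.c, but the names, types and flag handling here are simplified for illustration and are not the driver's.]

    #include <stdbool.h>
    #include <stdint.h>

    #define MTLB_ENTRIES 1024u               /* 4 MB granules, 4 GB total    */
    #define STLB_ENTRIES 1024u               /* 4 KB pages per slave TLB     */

    struct stlb    { uint64_t pte[STLB_ENTRIES]; };      /* phys addr | flags */
    struct gpu_mmu { struct stlb *stlb[MTLB_ENTRIES]; };

    /* Translate a 32-bit GPU virtual address; returns false on a fault
     * (an unmapped entry), which on real hardware raises the MMU
     * exception and stalls the GPU until it is handled. */
    static bool gpu_mmu_translate(const struct gpu_mmu *mmu, uint32_t va, uint64_t *pa)
    {
            uint32_t mtlb_idx = va >> 22;                /* top 10 bits       */
            uint32_t stlb_idx = (va >> 12) & 0x3ffu;     /* next 10 bits      */
            const struct stlb *s = mmu->stlb[mtlb_idx];

            if (!s || !s->pte[stlb_idx])
                    return false;                        /* MMU fault         */

            *pa = (s->pte[stlb_idx] & ~0xfffull) | (va & 0xfffu);
            return true;
    }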
lynxeye has quit [Quit: Leaving.]
<marex>
and so the bus factor remains a constant two ...
<marex>
sigh
chewitt has joined #etnaviv
chewitt has quit [Ping timeout: 480 seconds]
karolherbst has quit [Remote host closed the connection]