ChanServ changed the topic of #etnaviv to: #etnaviv - the home of the reverse-engineered Vivante GPU driver - Logs https://oftc.irclog.whitequark.org/etnaviv
mvlad has joined #etnaviv
lynxeye has joined #etnaviv
samuelig has quit [Quit: Bye!]
<bl4ckb0ne> lynxeye: do you know what MCFE is?
<lynxeye> bl4ckb0ne: A new kind of FE that I haven't seen on any actual HW implementation I've had access to. Likely MC stands for multi-channel or something like that.
<bl4ckb0ne> yeah, multi-channel; I've seen references to multi-cluster as well
<bl4ckb0ne> does the PE need a sem/stall if the BLT engine is there? Vivante seems to only stall the BLT if it has one and leaves the PE alone
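(Context: a "sem/stall" on Vivante hardware is a VIVS_GL_SEMAPHORE_TOKEN load state followed by an FE STALL command naming the same from/to engines. A minimal sketch in the style of the etnaviv kernel driver's etnaviv_buffer.c helpers; illustrative, not verbatim upstream code:)

    /* Emit a semaphore/stall pair: the "from" engine (usually the FE)
     * stalls until the "to" engine has processed the semaphore token.
     * CMD_LOAD_STATE and OUT are the etnaviv kernel driver's cmdbuf
     * emit helpers. */
    static void cmd_sem_stall(struct etnaviv_cmdbuf *buffer, u32 from, u32 to)
    {
            CMD_LOAD_STATE(buffer, VIVS_GL_SEMAPHORE_TOKEN,
                           VIVS_GL_SEMAPHORE_TOKEN_FROM(from) |
                           VIVS_GL_SEMAPHORE_TOKEN_TO(to));

            OUT(buffer, VIV_FE_STALL_HEADER_OP_STALL);
            OUT(buffer, VIV_FE_STALL_TOKEN_FROM(from) |
                        VIV_FE_STALL_TOKEN_TO(to));
    }

    /* e.g. wait until the PE is idle:
     * cmd_sem_stall(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_PE); */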
<bl4ckb0ne> I tried forcing `need_flush` to see if that fault issue goes away, but no luck
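(In the etnaviv kernel driver, `need_flush` gates the MMU flush emitted at the head of a command buffer, so forcing it means unconditionally emitting something like the MMUv2 path below. A sketch modeled on etnaviv_buffer_queue(), not verbatim:)

    /* MMUv2 TLB flush, followed by a PE sem/stall so the flush has
     * completed before any new translations are issued. */
    CMD_LOAD_STATE(buffer, VIVS_MMUv2_CONFIGURATION,
                   VIVS_MMUv2_CONFIGURATION_MODE_MASK |
                   VIVS_MMUv2_CONFIGURATION_ADDRESS_MASK |
                   VIVS_MMUv2_CONFIGURATION_FLUSH_FLUSH);
    CMD_SEM(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_PE);
    CMD_STALL(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_PE);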
<bl4ckb0ne> and setting all the flags in VIVS_GL_FLUSH_CACHE faults as well
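(For reference, "all the flags" here would be a single load state setting every VIVS_GL_FLUSH_CACHE bit; bit names per the reverse-engineered state.xml from the envytools rnndb, sketch only:)

    /* Flush every GL cache domain in one LOAD_STATE. */
    CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_CACHE,
                   VIVS_GL_FLUSH_CACHE_DEPTH |
                   VIVS_GL_FLUSH_CACHE_COLOR |
                   VIVS_GL_FLUSH_CACHE_TEXTURE |
                   VIVS_GL_FLUSH_CACHE_PE2D |
                   VIVS_GL_FLUSH_CACHE_TEXTUREVS |
                   VIVS_GL_FLUSH_CACHE_SHADER_L1 |
                   VIVS_GL_FLUSH_CACHE_SHADER_L2);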
<lynxeye> bl4ckb0ne: We do things a little differently than the Vivante driver: we send the event from the PE even if a BLT is present. Vivante sends the event from the BLT, but that had issues with the wrong event ID being signaled on GC7000 r6214. Thus we need the PE stall to synchronize with the BLT before sending the event.
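(A sketch of the sequence lynxeye describes, modeled on etnaviv_buffer_queue() in the kernel driver: the fence event is always signaled from the PE, after FE stalls on the PE, and on the BLT when present. Illustrative, not verbatim:)

    /* Flush render caches, then drain the PE. */
    CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_CACHE,
                   VIVS_GL_FLUSH_CACHE_DEPTH | VIVS_GL_FLUSH_CACHE_COLOR);
    CMD_SEM(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_PE);
    CMD_STALL(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_PE);

    /* With a BLT engine, also drain the BLT; BLT-targeted commands
     * must be bracketed by VIVS_BLT_ENABLE writes. */
    if (has_blt) {
            CMD_LOAD_STATE(buffer, VIVS_BLT_ENABLE, 0x1);
            CMD_SEM(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_BLT);
            CMD_STALL(buffer, SYNC_RECIPIENT_FE, SYNC_RECIPIENT_BLT);
            CMD_LOAD_STATE(buffer, VIVS_BLT_ENABLE, 0x0);
    }

    /* Signal the fence event from the PE rather than the BLT, to
     * avoid the wrong-event-ID issue seen on GC7000 r6214. */
    CMD_LOAD_STATE(buffer, VIVS_GL_EVENT,
                   VIVS_GL_EVENT_EVENT_ID(event) | VIVS_GL_EVENT_FROM_PE);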
<bl4ckb0ne> I tried signaling only from the BLT locally; that handled the fault, but it keeps hanging
<bl4ckb0ne> any more ideas on where to look for that MMU fault, lynxeye?
cmeissl[m] has quit [Quit: Client limit exceeded: 20000]
lynxeye has quit [Quit: Leaving.]
samuelig has joined #etnaviv
mvlad has quit [Remote host closed the connection]