ChanServ changed the topic of #etnaviv to: #etnaviv - the home of the reverse-engineered Vivante GPU driver - Logs https://oftc.irclog.whitequark.org/etnaviv
frieder has joined #etnaviv
mvlad has joined #etnaviv
lynxeye has joined #etnaviv
<tomeu> lynxeye: wonder what is the best way to refuse to create an EGL context on an NPU
<tomeu> it's funny how vivante goes to great lengths to obfuscate the programming of the HW, then has to add comments to help themselves make sense out of the code
<lynxeye> tomeu: I think we'll end up exposing the NN/TP core number as device parameters in the UAPI. Then it should just be a matter of checking those for != 0 in the etnaviv winsys to avoid creating EGL contexts on those cores.
<lynxeye> Yea, I think their obfuscator just ignores comments, so they go through unfiltered.
<tomeu> ok, makes sense
<tomeu> btw, I'm trying to figure out why we don't get interrupts when a job finishes, as that is making running test suites very slow
<tomeu> I'm decoding the ops in gckWLFE_Event as I suspect that is what the blob uses to signal job fences
<lynxeye> tomeu: Did you try sending the event from the FE? Normally we use events from PE, but I can't remember if this works with the NPU cores.
<tomeu> that's what I'm trying to figure out right now
<tomeu> guess that should be done with etnaviv_sync_point_queue
<lynxeye> tomeu: Nope, sync points are only used for the performance profiling. What you want to look at is the event queued in etnaviv_buffer_queue()
<tomeu> ah, I see
<tomeu> so VIVS_GL_EVENT_FROM_PE might not be correct in this case
<lynxeye> tomeu: right. Maybe the NPU can not send the event from PE, as technically there is no PE. Sending from FE should work.
<lynxeye> However, the FE might reach that send-event command before the core is done with whatever it is doing.
<lynxeye> So you might need to insert a FE stall until the NN cores are done.
<tomeu> hmm, interesting, will check that galcore does
<tomeu> damn
<tomeu> so I don't know why, but this doesn't happen with the code that was sent to the ml
<tomeu> only on my 5.17 branch
<tomeu> with the main difference being the power up sequence
<lynxeye> Huh? What are you doing differently in the power up sequence?
<tomeu> lynxeye: this is how I was doing it before cleaning up for upstreaming: https://gitlab.freedesktop.org/tomeu/linux/-/commit/af365186ab305d2fa3e91145ac79d2569b9df2a5
<lynxeye> Ah, I have no idea how the power domains on Amlogic SoCs are working. But PDs are always "fun" to deal with.
<tomeu> yeah, or maybe it has been some side effect of the code reorganization
<tomeu> lynxeye: I think the difference might be something in hwdb