JohnnyonFlame has quit [Read error: Connection reset by peer]
_whitelogger has joined #etnaviv
agx_ has quit [Ping timeout: 246 seconds]
agx_ has joined #etnaviv
<marex>
agx_: mntmn: Hi, did either of you work with the LCDIF->DSI bridge on MX8M?
<marex>
I recall one of you did something there
<marex>
I'm looking at it now on the MX8MM, but I'm not sure whether it's the same bridge as on the MX8MQ
<marex>
ok, different IP, obviously ...
<agx_>
marex: the imx8mq uses an NWL IP core; afaik the imx8mm uses another one
<marex>
agx_: yep, mx8mm seems to use the samsung DSIM
<marex>
oh well
<agx_>
I think they use different DSI PHYs as well.
<mntmn>
marex: yes i did
<mntmn>
ah, different IP.
<marex>
ah ... did NXP write their own, third, MXSFB driver in drivers/gpu/drm/imx/lcdif, in the downstream kernel?
berton has joined #etnaviv
<marex>
ah yes, we had drivers/video/mxsfb.c since 2011, then drivers/gpu/drm/mxs since 2016 ... and of course, NXP ignored both and wrote another one in 2018
<marex>
I wonder why
pcercuei has joined #etnaviv
<gbisson>
to be fair, they do use mxsfb-drm for imx8mq on the downstream kernel ;)
<gbisson>
good news is that there's yet another lcdif(v3) for imx8mp...
JohnnyonFlame has joined #etnaviv
<marex>
gbisson: for mm there is a separate driver for lcdif
<marex>
gbisson: I am currently busy backporting ~450 patches to get the display working at all
<marex>
gbisson: it's better than having to maintain 5500+ patches, but it is still awful
<marex>
so yeah ... for mm: custom mxsfb driver, custom MIPI DSI bridge driver (there is one for exynos already, upstream ... of course they won't use it)
<marex>
gbisson: NXP is doing a particularly bad job at upstreaming; look at ST, they are doing it right
<gbisson>
marex: sure I agree 100%, it was just a teaser as they did use some upstream driver (mxsfb-drm) _once_
<gbisson>
marex: the part about imx8mp using yet another non-upstream driver shows that they're back to not using upstream at all
<marex>
gbisson: there was hope ... and then there wasn't
berton_ has joined #etnaviv
berton has quit [Ping timeout: 240 seconds]
berton_ has quit [Quit: Leaving]
Chewi has quit [Ping timeout: 265 seconds]
Chewi has joined #etnaviv
<cphealy>
Is there a way to report the amount of texture memory (or any other memory) that the GPU is using with Mesa for the Vivante GPUs?
<cphealy>
With the vendor driver, there are tools for exposing GPU memory usage but obviously this is a completely different stack.
<mntmn>
would also be interested in that.
<cphealy>
Thus far what I've found is the following: 1) robclark pointed out that there's $debugfs/dri/1/gem, which has some data in it, but I haven't figured out exactly what it is yet; 2) Mesa "supports" some GL extensions for exposing memory info, but they are not supported by seemingly any of the Gallium drivers. These two extensions are GL_NVX_gpu_memory_info and GL_ATI_meminfo
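A minimal sketch of what querying those two extensions looks like from application code, assuming a legacy GL context created with GLFW (the context setup is my assumption, not something from the discussion); the token values are copied from the NVX_gpu_memory_info and ATI_meminfo extension specs, and on a driver that advertises neither extension (as noted above for the Gallium drivers) the fallback branch is what you will hit:

    /* Hedged sketch: probe the optional GL memory-info extensions.
     * Token values come from the extension specs; both report sizes in KiB.
     * Build (assumption): cc meminfo.c -lglfw -lGL -o meminfo
     */
    #include <stdio.h>
    #include <string.h>
    #include <GLFW/glfw3.h>

    #ifndef GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX
    #define GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX   0x9048
    #define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
    #endif
    #ifndef GL_TEXTURE_FREE_MEMORY_ATI
    #define GL_TEXTURE_FREE_MEMORY_ATI 0x87FC
    #endif

    /* crude substring check against the legacy extension string */
    static int has_ext(const char *name)
    {
        const char *exts = (const char *)glGetString(GL_EXTENSIONS);
        return exts && strstr(exts, name) != NULL;
    }

    int main(void)
    {
        if (!glfwInit())
            return 1;
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);   /* off-screen context */
        GLFWwindow *win = glfwCreateWindow(64, 64, "meminfo", NULL, NULL);
        if (!win)
            return 1;
        glfwMakeContextCurrent(win);

        if (has_ext("GL_NVX_gpu_memory_info")) {
            GLint total_kb = 0, avail_kb = 0;
            glGetIntegerv(GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, &total_kb);
            glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &avail_kb);
            printf("NVX: total %d KiB, available %d KiB\n", total_kb, avail_kb);
        } else if (has_ext("GL_ATI_meminfo")) {
            GLint tex[4] = { 0 };   /* free, largest block, free aux, largest aux */
            glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, tex);
            printf("ATI: texture free %d KiB\n", tex[0]);
        } else {
            printf("no GL memory-info extension advertised by this driver\n");
        }

        glfwTerminate();
        return 0;
    }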
<pcercuei>
GALLIUM_HUD?
<cphealy>
The Vivante blob driver comes with some proprietary tools called "gpuinfo" and "gmem_info". These tools provide total GPU memory usage as well as usage of the different types of buffers, and can do this on a per-process basis. It looks very nice.
<cphealy>
pcercuei: I don't see any GALLIUM_HUD GPU memory info.
<marex>
apitrace ?
<gbisson>
isn't gpuinfo just a script that prints some debugfs entries?
<gbisson>
agreed that it won't work on etnaviv but I guess the kernel could expose some of that info
<cphealy>
apitrace will show all the GL calls being made by an application, to the best of my knowledge. I need to look at the GPU memory usage across all the applications at once, as I'm working with a system that has a compositor, our own applications, and third-party applications all running at the same time.
<cphealy>
gbisson: Hmm, I'm not sure how gpuinfo works yet. I'll take a look to see how it works.
<cphealy>
gbisson: I think yes. I'm reading about it in the "i.MX Graphics User's Guide"
<cphealy>
There's also the equally useful looking "gmem_info" that I'm seeing in the same user's guide.
<cphealy>
Tnx for pointing out the gpuinfo.sh script though.
<cphealy>
Perhaps as a near-term workaround, some of what's in the Vivante kernel driver can be crammed into the etnaviv kernel driver?
<cphealy>
Not for upstream, but just for getting through my near-term issues.
<pcercuei>
ugh
<pcercuei>
no thanks
<cphealy>
pcercuei: if you have a way of getting a clean solution in the mainline driver fast, I'm all ears. I need a solution this week though so I think hacking some stuff in locally is probably the best near-term solution.
<austriancoder>
cphealy: sadly we do not track the usage of a bo ... so we have no idea what it gets used for. So a proper solution needs some time
<cphealy>
Is this true even for things like texture loads/unloads?
<austriancoder>
yes..
<cphealy>
Hmm, that's unfortunate. Today the only ways I have of knowing how much GPU memory is used are when no apps are running (and GPU memory consumed is 0) or when I get GL_OUT_OF_MEMORY ;-)
<cphealy>
austriancoder: I get the last line that says 11640832 bytes. How do I figure out what each of those objects is?
<austriancoder>
cphealy: these objects are all the bo's that are currently living
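For completeness, a minimal sketch of polling that debugfs file from user space and echoing its summary line (the running total mentioned above). The minor number ("dri/1") follows the path quoted earlier in this log and will differ between systems, the exact per-BO output format is driver-defined, and reading debugfs normally requires root:

    /* Dump the GEM object list from debugfs and repeat its last line. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/sys/kernel/debug/dri/1/gem", "r");
        char line[256], last[256] = "";

        if (!f) {
            perror("open /sys/kernel/debug/dri/1/gem");
            return 1;
        }
        while (fgets(line, sizeof(line), f)) {
            fputs(line, stdout);          /* one entry per live BO */
            if (line[0] != '\n')
                strcpy(last, line);       /* remember the summary line */
        }
        fclose(f);
        fprintf(stderr, "total: %s", last);
        return 0;
    }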
<cphealy>
Got it. There's no easy way of knowing what processes those BOs are associated with, correct?
<austriancoder>
correct - would need some changes to the kernel to keep track of the pid of the bo creator
<cphealy>
Would these be etnaviv-specific changes to keep track of the pid of the bo creator? Do other open-source GPU drivers already support this, or more memory-tracking functionality, that you are aware of?
<austriancoder>
cphealy: off the top of my head, drm_gem_object has no pid reference ... so I think it will be etnaviv-specific
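A rough, untested sketch of the kind of etnaviv-specific change being discussed. The struct and function names below are stand-ins (not the real driver symbols); the idea is simply to capture the creating task's pid and comm when a BO is allocated and report them from the debugfs describe path:

    /* Illustrative only: "my_" names are placeholders, not etnaviv code. */
    #include <linux/pid.h>
    #include <linux/sched.h>
    #include <linux/seq_file.h>

    struct my_gem_object {               /* stand-in for the driver's BO struct */
        /* ... existing members ... */
        struct pid *creator_pid;         /* new: who allocated this BO */
        char creator_comm[TASK_COMM_LEN];
    };

    /* At allocation time (e.g. in the GEM_NEW ioctl path): */
    static void my_gem_record_creator(struct my_gem_object *obj)
    {
        obj->creator_pid = get_pid(task_pid(current));
        get_task_comm(obj->creator_comm, current);
    }

    /* In the debugfs "gem" describe callback: */
    static void my_gem_describe_creator(struct my_gem_object *obj,
                                        struct seq_file *m)
    {
        seq_printf(m, " created by %s[%d]", obj->creator_comm,
                   pid_nr(obj->creator_pid));
    }

    /* On BO free: */
    static void my_gem_release_creator(struct my_gem_object *obj)
    {
        put_pid(obj->creator_pid);
    }

Holding a struct pid reference rather than a raw pid_t avoids reporting a recycled pid after the creating process exits; recording the comm as well keeps the debugfs output readable even then.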
<mntmn>
that's cool
<mntmn>
interesting, with a sway desktop running just a terminal it's ~45MB