ChanServ changed the topic of #panfrost to: Panfrost - FLOSS Mali Midgard & Bifrost - Logs - <macc24> i have been here before it was popular
vstehle has quit [Ping timeout: 480 seconds]
JulianGro has joined #panfrost
atler is now known as Guest2374
atler has joined #panfrost
Guest2374 has quit [Ping timeout: 480 seconds]
Daanct12 has joined #panfrost
erlehmann has quit [Ping timeout: 480 seconds]
erlehmann has joined #panfrost
<icecream95> Whoa.. *how* much memory are we trying to allocate for linear constraints in RA? 3525746884 bytes, or 3 GB!
<icecream95> We only really need about 14000 nodes (186 MB), not 60000
<icecream95> Though my RA optimisations make the amount of required space closer to O(N), so this won't really be a concern anymore
<icecream95> I was wondering why allocation failed on armv7...
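[The figures above are consistent with a dense, quadratic constraint matrix: one byte per node pair gives N² bytes, which is what makes the allocation fail on 32-bit armv7. A hypothetical illustration (not Mesa code) of the arithmetic:]

```python
# Hypothetical sketch: memory cost of a dense N x N constraint matrix
# in register allocation, assuming one byte per node pair.

def dense_ra_bytes(nodes: int) -> int:
    """Bytes for a dense N x N constraint matrix (one byte per pair)."""
    return nodes * nodes

# ~60000 nodes, roughly the failing allocation:
print(dense_ra_bytes(60000))   # 3600000000 bytes, ~3.35 GiB
# ~14000 nodes, what is actually needed:
print(dense_ra_bytes(14000))   # 196000000 bytes, ~187 MiB
```

[Dropping to O(N) space, as icecream95 describes, removes the quadratic term entirely rather than just shrinking N.]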
<cphealy> icecream95: is this in the context of GL or Vulkan or both?
<icecream95> cphealy: Bifrost compiler, so not API dependent
erlehmann has quit [Ping timeout: 480 seconds]
erlehmann has joined #panfrost
<cphealy> So, you've found a way to reduce the amount of RAM required for the Bifrost compiler on the device?
erlehmann has quit [Ping timeout: 480 seconds]
vstehle has joined #panfrost
Daanct12 has quit [Remote host closed the connection]
<HdkR> icecream95: I guess that's just VA space?
guillaume_g has joined #panfrost
guillaume_g has quit []
guillaume_g has joined #panfrost
<guillaume_g> Are any of these IDs shared, or is one of them wrong?
soreau has quit [Read error: No route to host]
soreau has joined #panfrost
<icecream95> guillaume_g: That's an old version of the code, Mesa has changed its branch name to "main"
<icecream95> The current supported models are listed here:
<icecream95> HdkR: Yup.. and it only actually uses about 400 MB of the space, which is still *way* too much
<guillaume_g> icecream95: oh, ok. Thanks a lot! :)
<HdkR> icecream95: Don't look at FEX then, you'll cry how much VA space we require :>
<icecream95> HdkR: I still haven't managed to download a rootfs for it...
<HdkR> icecream95: Make sure to use FEXRootFSFetcher to assist with that
<icecream95> HdkR: The problem is that the connection keeps getting reset halfway through the download
<HdkR> huh, it uses curl with the continue feature, figured it would work
<icecream95> That needs server-side support to work...
<HdkR> My CDN supports it
<HdkR> Has the raw links if you wanted to try it directly
<icecream95> That is not my experience
<HdkR> :/
<macc24> i kinda got fex to somewhat work with x86_64 voidlinux rootfs
<icecream95> HdkR: Maybe it's related to caching.. I see "CF-Cache-Status: MISS" in the HTTP headers
<HdkR> oh you might be right that it doesn't support byte ranges actually. Just tried and got a `HTTP server doesn't seem to support byte ranges. Cannot resume.`
<HdkR> Well that's sucky, I thought it was working
<HdkR> the cache miss is expected. Cloudflare can't cache >500MB
Daanct12 has joined #panfrost
<HdkR> Well shoot, I don't have a recommendation for internet connection quirks like this
<HdkR> Telling you to build the image yourself isn't nice
<macc24> dd if=http://[...]/fex-rootfs.tar.gz of=./out.tgz
<HdkR> lol
<guillaume_g> icecream95: G71 seems to be supported in code, but the doc states it is not supported yet.
<macc24> guillaume_g: G71 is a funny gpu and afaik it's in varying states of brokenness
<bbrezillon> jekstrand: I think I updated the MR
<bbrezillon> but I had other issues when porting the solution to dozen
<guillaume_g> macc24: Ok
MajorBiscuit has joined #panfrost
MajorBiscuit has quit []
MajorBiscuit has joined #panfrost
<icecream95> Wait.. have we been calling it by the wrong name all along? Is it actually Bifröst?
<macc24> it is never Bifrost unless it's from the Bifrost region of France, otherwise it's just a sparkling llvmpipe
anholt has quit [Ping timeout: 480 seconds]
anholt has joined #panfrost
Daanct12 has quit [Remote host closed the connection]
floof58 has quit [Ping timeout: 480 seconds]
<robmur01> G71 is probably best described as "unsupported, but usable at your own risk"
<macc24> G71 works best if you set MESA_LOADER_DRIVER_OVERRIDE to llvmpipe
rkanwal has joined #panfrost
floof58 has joined #panfrost
jambalaya has quit [Remote host closed the connection]
markusbauer has joined #panfrost
jambalaya has joined #panfrost
MajorBiscuit has quit [Ping timeout: 480 seconds]
nlhowell has joined #panfrost
vstehle has quit [Quit: WeeChat 3.3]
alyssa has quit [Quit: So long, and thanks for all the fish]
nlhowell is now known as Guest2412
nlhowell has joined #panfrost
Guest2412 has quit [Ping timeout: 480 seconds]
rkanwal has quit [Ping timeout: 480 seconds]
markusbauer has quit []
<jekstrand> bbrezillon: Hrm... Why are those not getting generated?
<jekstrand> bbrezillon: Oh, I see now. Those are the ones that take strides.
<jekstrand> bbrezillon: I think that's easy to fix. Did you add that patch to the MR?
digetx has joined #panfrost
MajorBiscuit has joined #panfrost
<jekstrand> Ok, I see you did.
<bbrezillon> jekstrand: didn't know if we wanted those to be auto-generated (which is fairly easy to do after all) or hand-written
<jekstrand> bbrezillon: idk
<jekstrand> bbrezillon: If you wanted to auto-generate, you could, instead of having a separate MANUAL_COMMANDS, base it on return_type != 'void'
<bbrezillon> ok, I was considering a separate MANUAL_COMMANDS for those actually :)
<jekstrand> Or we could have a second MANUAL_COMMANDS. That'd be a bit more flexible.
<jekstrand> Or maybe not so much MANUAL_COMMANDS as NO_ENQUEUE_UNLESS_PRIMARY because I don't think we ever actually want manual vk_enqueue_unless_primaryCmd*
<jekstrand> Actually.... Maybe what we really want is MANUAL_COMMANDS and NO_ENQUEUE_COMMANDS
<jekstrand> where MANUAL_COMMANDS is for things where we have a manual implementation and NO_ENQUEUE_COMMANDS is for things that we're entirely ignoring.
<jekstrand> The vk_enqueue_Cmd* emit would take both into account and the vk_enqueue_unless_primary_Cmd* emit would only consider NO_ENQUEUE_COMMANDS. I think I like that the best.
<bbrezillon> jekstrand: I think I just nerd-sniped you ;)
<jekstrand> hehe. A bit, maybe.
<bbrezillon> feel free to drop my patch and replace it with your auto-generated version
<jekstrand> bbrezillon: Do you like that plan? Do you want to type it or review it?
<jekstrand> Ok, I'll type it quick.
<bbrezillon> and if you're happy with the rest of the MR, merge it ;-)
<jekstrand> Cool! I didn't see you review the last patch.
<jekstrand> I'll rework to auto-gen those 4 and then kick off a run with the result so we can figure out what to enable in CI.
<bbrezillon> oh, that's an omission
<bbrezillon> I mean, I fixed it, so I definitely reviewed it
<jekstrand> :D
<jekstrand> bbrezillon: Updated the MR. All that's left is for you to read my autogen patch. I'm going to kick off a full CTS run now.
guillaume_g has left #panfrost [Konversation terminated!]
JulianGro has quit [Remote host closed the connection]
<bbrezillon> jekstrand: looks good to me. You can add my R-b if CI is happy
<jekstrand> bbrezillon: Cool. I've got a full CTS run going now. I'll merge tomorrow morning if it's happy.
<bbrezillon> jekstrand: thanks for your help!
<jekstrand> bbrezillon: Happy to!
<jekstrand> bbrezillon: It's also been fun learning about Mali
MajorBiscuit has quit [Ping timeout: 480 seconds]
nlhowell has quit [Ping timeout: 480 seconds]
vstehle has joined #panfrost
robmur01 has quit [Read error: Connection reset by peer]
robmur01 has joined #panfrost
JulianGro has joined #panfrost
JulianGro has quit [Remote host closed the connection]
JulianGro has joined #panfrost
JulianGro has quit [Remote host closed the connection]
JulianGro has joined #panfrost
JulianGro has quit [Remote host closed the connection]
JulianGro has joined #panfrost
Danct12 has quit [Remote host closed the connection]
rasterman has joined #panfrost
rkanwal has joined #panfrost
* jekstrand is debating deleting renderpasses from panvk
jelly has quit []
jelly has joined #panfrost
<jekstrand> Ugh... panvk needs better BO tracking. As far as I can tell, images, textures, and buffers are all not getting tracked. I guess they're working by luck?
<jekstrand> Or does the kernel only need to know about some BOs?
<bbrezillon> it needs to know about all the BOs, and yes, things are working by luck, as you already noticed for a few other things :P
rkanwal has quit []
rkanwal has joined #panfrost
<bbrezillon> I mean, apps are not supposed to release resources while the cmdbuf is pending/in-flight, right?
<bbrezillon> that doesn't work for implicit fences though
<bbrezillon> wait, we do pass the FB attachments to the kernel
<bbrezillon> and blit src/dst BOs, we probably lack copy src/dst BOs though
<bbrezillon> nm, those are passed as well
<bbrezillon> we just re-use the batch.blit.{src,dst} fields, which is confusing
<bbrezillon> jekstrand: remember that we don't have any way to tell when the BO is RO vs RW, so we just build a list of all BOs used by a batch, and the driver currently does one meta operation (copy/blit/resolve) per batch
<jekstrand> bbrezillon: What is the BO list passed to the kernel used for?
<jekstrand> bbrezillon: Does it control residency or just implicit sync?
<bbrezillon> that's sub-optimal, and we should definitely try harder to merge things, but I don't think it's fundamentally broken
<bbrezillon> jekstrand: almost all BOs are resident
<bbrezillon> the only exception being the tiler heap
<bbrezillon> which can grow and be mapped on-demand
<bbrezillon> pinned+mapped on-demand
<bbrezillon> so the list of BOs is mostly here to make sure the buffers don't disappear while the job is in-flight and enforce implicit-sync
<jekstrand> Ok, if all BOs are resident for as long as they exist, then the Vulkan requirements take care of the rest and all we need is implicit sync.
nlhowell has joined #panfrost
<anholt> bbrezillon: so no paging out buffers that have been mapped to an iova?
camus1 has joined #panfrost
rkanwal has quit [Ping timeout: 480 seconds]
camus has quit [Ping timeout: 480 seconds]
rasterman has quit [Quit: Gettin' stinky!]
rasterman has joined #panfrost
nlhowell has quit [Ping timeout: 480 seconds]
nlhowell has joined #panfrost
nlhowell has quit [Ping timeout: 480 seconds]
<jekstrand> Ugh... panfrost kernel driver is stuck failing to reset the GPU. :(
<jekstrand> again
<icecream95> jekstrand: What kernel are you using? Newer releases have gotten better at this
<jekstrand> 5.14
<jekstrand> 5.16 hangs more in my experience
<icecream95> I'm using 5.14 with some patches to make recovery work better
<icecream95> jekstrand: I think the patches I'm using are
rasterman has quit [Quit: Gettin' stinky!]