ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
ybogdano has quit [Ping timeout: 480 seconds]
Ryback_ has quit [Ping timeout: 480 seconds]
lstrano has quit [Ping timeout: 480 seconds]
co1umbarius has joined #dri-devel
columbarius has quit [Ping timeout: 480 seconds]
bluepenquin has joined #dri-devel
Akari` has quit [Ping timeout: 480 seconds]
mbrost has joined #dri-devel
sravn has quit [Remote host closed the connection]
sravn has joined #dri-devel
jewins has quit [Ping timeout: 480 seconds]
dakr has quit [Ping timeout: 480 seconds]
Daanct12 has joined #dri-devel
Company has quit [Quit: Leaving]
Daanct12 has quit [Remote host closed the connection]
Daanct12 has joined #dri-devel
<lina> karolherbst: I don't think it's going to get any simpler, the crazy firmware interface is what it is ^^;
<lina> On the other hand, if Rust gets real placement new support some day I can probably drop this 270-line monster of a macro and simplify some codepaths...
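A minimal sketch of what "placement new" would simplify here: large firmware structs need to be constructed directly in their final (GPU-shared) allocation rather than built on the stack and moved, which in today's Rust means an unsafe MaybeUninit dance. The type and fields below are invented for illustration, not taken from the actual driver.

```rust
use std::mem::MaybeUninit;
use std::ptr::addr_of_mut;

#[repr(C)]
struct FwJob {
    // Stand-ins for the real firmware layout.
    magic: u32,
    payload: [u8; 4096],
}

/// Initialize a FwJob directly inside caller-provided storage.
/// With real placement-new support this could be a plain constructor
/// expression; today every field is written through raw pointers.
fn init_in_place(slot: &mut MaybeUninit<FwJob>) -> &mut FwJob {
    let p = slot.as_mut_ptr();
    // SAFETY: every field is written exactly once before assume_init_mut().
    unsafe {
        addr_of_mut!((*p).magic).write(0x4a4f42);
        addr_of_mut!((*p).payload).write([0u8; 4096]);
        slot.assume_init_mut()
    }
}
```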
heat_ has quit [Ping timeout: 480 seconds]
ngcortes has quit [Ping timeout: 480 seconds]
Daaanct12 has joined #dri-devel
ppascher has joined #dri-devel
aravind has joined #dri-devel
Daanct12 has quit [Ping timeout: 480 seconds]
lemonzest has joined #dri-devel
Daanct12 has joined #dri-devel
JohnnyonFlame has quit [Ping timeout: 480 seconds]
Daaanct12 has quit [Ping timeout: 480 seconds]
pa- has joined #dri-devel
TMM has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM has joined #dri-devel
pa has quit [Ping timeout: 480 seconds]
camus1 has joined #dri-devel
camus has quit [Read error: Connection reset by peer]
MatrixTravelerbot[m]1 has quit []
dafna33[m] has quit []
mairacanal[m] has quit []
colemickens has quit []
LaughingMan[m] has quit []
cleverca22[m] has quit []
tintou has quit []
arisu has quit []
Duke`` has joined #dri-devel
srslypascal has joined #dri-devel
fab has joined #dri-devel
sdutt_ has joined #dri-devel
sdutt has quit [Read error: Connection reset by peer]
tzimmermann has joined #dri-devel
jani has quit [Quit: No Ping reply in 180 seconds.]
cengiz_io has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
jani has joined #dri-devel
Akari has joined #dri-devel
Jeremy_Rand_Talos has quit [Write error: connection closed]
Jeremy_Rand_Talos_ has joined #dri-devel
fab has quit [Quit: fab]
DrNick1 has quit []
Daaanct12 has joined #dri-devel
Daanct12 has quit [Ping timeout: 480 seconds]
kts has joined #dri-devel
itoral has joined #dri-devel
mbrost has quit [Read error: Connection reset by peer]
kts has quit [Ping timeout: 480 seconds]
fab has joined #dri-devel
Daanct12 has joined #dri-devel
Daanct12 has quit [Remote host closed the connection]
Daanct12 has joined #dri-devel
frieder has joined #dri-devel
Daanct12 has quit [Remote host closed the connection]
Daanct12 has joined #dri-devel
Daaanct12 has quit [Ping timeout: 480 seconds]
Daaanct12 has joined #dri-devel
Daanct12 has quit [Ping timeout: 480 seconds]
nchery has quit [Ping timeout: 480 seconds]
srslypascal has quit [Quit: Leaving]
srslypascal has joined #dri-devel
kts has joined #dri-devel
bmodem has joined #dri-devel
bmodem has quit []
chema has quit []
mvlad has joined #dri-devel
gio has joined #dri-devel
MajorBiscuit has joined #dri-devel
pjakobsson has joined #dri-devel
i-garrison has quit []
nchery has joined #dri-devel
tursulin has joined #dri-devel
OftenTimeConsuming is now known as Guest483
OftenTimeConsuming has joined #dri-devel
Guest483 has quit [Ping timeout: 480 seconds]
jkrzyszt has joined #dri-devel
fahien has joined #dri-devel
fab has quit [Read error: No route to host]
lynxeye has joined #dri-devel
fab has joined #dri-devel
pepp has joined #dri-devel
MajorBiscuit has quit [Quit: WeeChat 3.5]
swalker_ has joined #dri-devel
swalker_ is now known as sarahwalker
MajorBiscuit has joined #dri-devel
Daaanct12 has quit [Ping timeout: 480 seconds]
pepp has quit [Quit: WeeChat 2.3]
JohnnyonFlame has joined #dri-devel
pcercuei has joined #dri-devel
warpme___ has joined #dri-devel
Namarrgon has quit [Ping timeout: 480 seconds]
saurabhg has joined #dri-devel
mszyprow has joined #dri-devel
bmodem has joined #dri-devel
<tjaalton> dcbaker: hey, how about a new rc of 22.2? or final release
fab has quit [Quit: fab]
fab has joined #dri-devel
kts has quit [Quit: Konversation terminated!]
kts has joined #dri-devel
kts has quit []
unrelentingtech has quit [Write error: connection closed]
Soroush has quit [Write error: connection closed]
Ella[m] has quit [Write error: connection closed]
Newbyte has quit [Write error: connection closed]
znullptr[m] has quit [Write error: connection closed]
robertmader[m] has quit [Write error: connection closed]
masush5[m] has quit [Write error: connection closed]
GeorgesStavracasfeaneron[m] has quit [Write error: connection closed]
x512[m] has quit [Write error: connection closed]
undvasistas[m] has quit [Write error: connection closed]
eyearesee has quit [Write error: connection closed]
onox[m] has quit [Write error: connection closed]
sigmoidfunc[m] has quit [Write error: connection closed]
kunal10710[m] has quit [Write error: connection closed]
JosExpsito[m] has quit [Write error: connection closed]
halfline[m] has quit [Write error: connection closed]
viciouss[m] has quit [Write error: connection closed]
jenatali has quit [Write error: connection closed]
Tooniis[m] has quit [Write error: connection closed]
xerpi[m] has quit [Write error: connection closed]
r[m] has quit [Write error: connection closed]
DemiMarieObenour[m] has quit [Write error: connection closed]
gagallo7[m] has quit [Write error: connection closed]
Sumera[m] has quit [Write error: connection closed]
zamundaaa[m] has quit [Write error: connection closed]
naheemsays[m] has quit [Write error: connection closed]
Guest414 has quit [Write error: connection closed]
Mershl[m] has quit [Write error: connection closed]
gnustomp[m] has quit [Write error: connection closed]
KunalAgarwal[m][m] has quit [Write error: connection closed]
RAOF has quit [Write error: connection closed]
gdevi has quit [Write error: connection closed]
ambasta[m] has quit [Write error: connection closed]
testing has quit [Write error: connection closed]
Andy[m] has quit [Write error: connection closed]
frytaped[m] has quit [Write error: connection closed]
ella-0[m] has quit [Write error: connection closed]
doras has quit [Write error: connection closed]
michael5050[m] has quit [Write error: connection closed]
hasebastian[m] has quit [Write error: connection closed]
Dylanger has quit [Write error: connection closed]
nielsdg has quit [Write error: connection closed]
DavidHeidelberg[m] has quit [Write error: connection closed]
martijnbraam has quit [Write error: connection closed]
heftig has quit [Write error: connection closed]
kallisti5[m] has quit [Write error: connection closed]
bluepenquin has quit [Write error: connection closed]
sjfricke[m] has quit [Write error: connection closed]
knr has quit [Write error: connection closed]
jekstrand[m] has quit [Write error: connection closed]
pac85[m] has quit [Write error: connection closed]
reactormonk[m] has quit [Write error: connection closed]
Anson[m] has quit [Write error: connection closed]
AlexisHernndezGuzmn[m] has quit [Write error: connection closed]
egalli has quit [Write error: connection closed]
hch12907 has quit [Write error: connection closed]
zzoon[m] has quit [Write error: connection closed]
tomba has quit [Write error: connection closed]
KunalAgarwal[m] has quit [Write error: connection closed]
kusma has quit [Write error: connection closed]
cmeissl[m] has quit [Write error: connection closed]
neobrain[m] has quit [Write error: connection closed]
ralf1307[theythem][m] has quit [Write error: connection closed]
YaLTeR[m] has quit [Write error: connection closed]
pushqrdx[m] has quit [Write error: connection closed]
PiGLDN[m] has quit [Write error: connection closed]
yshui` has quit [Write error: connection closed]
danylo has quit [Write error: connection closed]
tleydxdy has quit [Write error: connection closed]
T_UNIX has quit [Write error: connection closed]
nyorain[m] has quit [Write error: connection closed]
Vin[m] has quit [Write error: connection closed]
robertfoss[m] has quit [Write error: connection closed]
moben[m] has quit [Write error: connection closed]
Strit[m] has quit [Write error: connection closed]
kunal_10185[m] has quit [Write error: connection closed]
jasuarez has quit [Write error: connection closed]
cwfitzgerald[m] has quit [Write error: connection closed]
kunal_1072002[m] has quit [Write error: connection closed]
Mis012[m] has quit [Write error: connection closed]
ramacassis[m] has quit [Write error: connection closed]
mripard has quit [Write error: connection closed]
dcbaker has quit [Write error: connection closed]
bylaws has quit [Write error: connection closed]
i-garrison has joined #dri-devel
i-garrison has quit []
i-garrison has joined #dri-devel
bmodem has quit []
arisu has joined #dri-devel
saurabhg has quit [Ping timeout: 480 seconds]
rasterman has joined #dri-devel
kchibisov has quit [Remote host closed the connection]
rpigott has quit [Remote host closed the connection]
sumoon_ has quit [Remote host closed the connection]
Rayyan_ has quit [Remote host closed the connection]
ifreund has quit [Remote host closed the connection]
Rayyan has joined #dri-devel
ifreund has joined #dri-devel
sumoon has joined #dri-devel
kchibisov has joined #dri-devel
rpigott has joined #dri-devel
h0tc0d3 has quit [Quit: Leaving]
saurabhg has joined #dri-devel
Shibe has joined #dri-devel
bmodem has joined #dri-devel
kts has joined #dri-devel
saurabhg has quit [Ping timeout: 480 seconds]
bmodem has quit []
vliaskov has joined #dri-devel
bmodem has joined #dri-devel
YuGiOhJCJ has joined #dri-devel
sravn has quit [Remote host closed the connection]
sravn has joined #dri-devel
cengiz_io has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
sravn_ has joined #dri-devel
sdutt_ has quit [Ping timeout: 480 seconds]
sravn has quit [Ping timeout: 480 seconds]
bmodem has quit []
saurabhg has joined #dri-devel
fab has quit [Read error: Connection reset by peer]
vliaskov has quit []
digetx has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
saurabhg has quit [Remote host closed the connection]
Namarrgon has joined #dri-devel
<airlied> jani: why does dim still pin audio at 5.13?
digetx has joined #dri-devel
<dolphin> airlied: probably just forgetting to move forward, it kept breaking the build so often that automatic tracking was seen as a nuisance
kts has quit [Quit: Konversation terminated!]
chipxxx has joined #dri-devel
Vanfanel has joined #dri-devel
<Vanfanel> pq: How can I copy the MR message to the commit message as you asked me? I can do "git commit --amend" but pasting the entire text there, line breaks and all, makes no sense in my head. Something I am missing for sure.
<Vanfanel> Well, I will do that for now, if that's wrong then tell me
<emersion> you can re-format the MR message to be suitable for a commit message
itoral_ has joined #dri-devel
<pq> Vanfanel, of course I don't mean copy like a photograph, but the same content formatted as a commit message.
<pq> Vanfanel, this is also the wrong channel, and let's keep Gitlab discussions in Gitlab.
<Vanfanel> ok, sorry
<Vanfanel> what would be the right channel?
<pq> Weston stuff is discussed on #wayland, but I still would not want to divide a discussion between IRC and Gitlab.
itoral has quit [Ping timeout: 480 seconds]
<Vanfanel> pq: yes, of course. I will avoid that.
<pq> thanks!
devilhorns has joined #dri-devel
ella-0 has joined #dri-devel
ambasta[m] has joined #dri-devel
Andy[m] has joined #dri-devel
Guest493 has joined #dri-devel
bylaws has joined #dri-devel
chema has joined #dri-devel
RAOF has joined #dri-devel
cleverca22[m] has joined #dri-devel
cmeissl[m] has joined #dri-devel
colemickens has joined #dri-devel
cwfitzgerald[m] has joined #dri-devel
dafna33[m] has joined #dri-devel
dcbaker has joined #dri-devel
DemiMarie has joined #dri-devel
Anson[m] has joined #dri-devel
Guest495 has joined #dri-devel
doras has joined #dri-devel
danylo has joined #dri-devel
Dylanger has joined #dri-devel
egalli has joined #dri-devel
ella-0[m] has joined #dri-devel
Ella[m] has joined #dri-devel
AlexisHernndezGuzmn[m] has joined #dri-devel
GeorgesStavracasfeaneron[m] has joined #dri-devel
frytaped[m] has joined #dri-devel
gagallo7[m] has joined #dri-devel
gdevi has joined #dri-devel
gnustomp[m] has joined #dri-devel
testing has joined #dri-devel
halfline[m] has joined #dri-devel
hasebastian[m] has joined #dri-devel
hch12907 has joined #dri-devel
heftig has joined #dri-devel
zzoon[m] has joined #dri-devel
jasuarez has joined #dri-devel
jekstrand[m] has joined #dri-devel
jenatali has joined #dri-devel
JosExpsito[m] has joined #dri-devel
kallisti5[m] has joined #dri-devel
kunal10710[m] has joined #dri-devel
kunal_10185[m] has joined #dri-devel
kunal_1072002[m] has joined #dri-devel
KunalAgarwal[m] has joined #dri-devel
KunalAgarwal[m][m] has joined #dri-devel
kusma has joined #dri-devel
LaughingMan[m] has joined #dri-devel
mairacanal[m] has joined #dri-devel
martijnbraam has joined #dri-devel
masush5[m] has joined #dri-devel
Mershl[m] has joined #dri-devel
michael5050[m] has joined #dri-devel
Mis012[m] has joined #dri-devel
moben[m] has joined #dri-devel
mripard has joined #dri-devel
Vin[m] has joined #dri-devel
naheemsays[m] has joined #dri-devel
neobrain[m] has joined #dri-devel
Newbyte has joined #dri-devel
eyearesee has joined #dri-devel
nielsdg has joined #dri-devel
nyorain[m] has joined #dri-devel
DavidHeidelberg[m] has joined #dri-devel
onox[m] has joined #dri-devel
pac85[m] has joined #dri-devel
PiGLDN[m] has joined #dri-devel
pmoreau has joined #dri-devel
pushqrdx[m] has joined #dri-devel
r[m] has joined #dri-devel
ralf1307[theythem][m] has joined #dri-devel
ramacassis[m] has joined #dri-devel
reactormonk[m] has joined #dri-devel
robertmader[m] has joined #dri-devel
robertfoss[m] has joined #dri-devel
sigmoidfunc[m] has joined #dri-devel
sjfricke[m] has joined #dri-devel
Strit[m] has joined #dri-devel
Sumera[m] has joined #dri-devel
knr has joined #dri-devel
T_UNIX has joined #dri-devel
tintou has joined #dri-devel
tleydxdy has joined #dri-devel
tomba has joined #dri-devel
Tooniis[m] has joined #dri-devel
undvasistas[m] has joined #dri-devel
Soroush has joined #dri-devel
unrelentingtech has joined #dri-devel
viciouss[m] has joined #dri-devel
MatrixTravelerbot[m]1 has joined #dri-devel
x512[m] has joined #dri-devel
xerpi[m] has joined #dri-devel
YaLTeR[m] has joined #dri-devel
yshui` has joined #dri-devel
zamundaaa[m] has joined #dri-devel
znullptr[m] has joined #dri-devel
fahien has quit [Ping timeout: 480 seconds]
pmoreau is now known as Guest523
ella-0_ has quit [Read error: Connection reset by peer]
Guest493 is now known as bluepenquin
kts has joined #dri-devel
dakr has joined #dri-devel
<emersion> ajax: do you have any plans to update this MR? https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/6919
<emersion> if you don't have time, i can take it over
pepp has joined #dri-devel
foul_owl has quit [Ping timeout: 480 seconds]
<lina> So, I think I'm not too far off from getting this driver to render things... it's probably a good time to start talking about UAPI design and general driver architecture.
<lina> I have a fairly good idea of how the firmware works (though I'm still missing some bits), but I'm less familiar with how DRI drivers are normally architected, or how this driver fits into that.
<lina> Things like MMU context binding, command queues, scheduling, how we should handle the tile buffers...
<lina> Is there anyone who wants to have a chat about this? ^^
<karolherbst> lina: probably jekstrand given the knowledge about vulkan and how to design the UAPI so it's not useless for vulkan
<karolherbst> but airlied should also be able to chime in
<karolherbst> we're currently having same-ish discussions around nouveau and what needs to change to support vulkan
<lina> Yeah, in particular I know very little about how the fence stuff should work (and I also need to look into how macOS does it, I did notice that Metal fills in some fields when you start using events/fences so I think there's more to discover there)
<karolherbst> it's critical to get that part right, so you can use the same interfaces in GL and Vk, so you won't have to change the entire UAPI later
<lina> For kmscube-style demos I can probably get away with what my Python prototype was doing (fully synchronous command submission, one context/queue or at least one per dri device open, no extra scheduling layers or anything) but obviously that will have to change
Vanfanel has quit [Read error: No route to host]
<lina> The good thing, of course, is this is probably a good way away from being upstreamed, and even if we start shipping it to users at some point, I don't mind breaking the UAPI since we can sync together updates
foul_owl has joined #dri-devel
<karolherbst> you need a VMA_BIND thing and this new sync framework we have
<lina> So we have plenty of time to fix problems, but I want to get the general idea right from the start if I can ^^
<karolherbst> yep, and that's important if you want to give future you less work :P
<lina> Yup!
<karolherbst> I think SYNCOBJ is the thing you want
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
<karolherbst> and if you have that, fencing is already implemented for you in vulkan
<jani> airlied: dolphin: not just breaking the build, but rebasing mid-merge-window on random commits in linus' master instead of tags, pulling in some random state to drm-tip
<lina> At least I'll abstract out the "what the firmware does" from "how DRI deals with it", so having a really dumb demo UAPI for the initial tests won't be much of a waste of time. The equivalent Python code is only a couple hundred lines, plus ~1000 for render parameter setup but a lot of that will evolve and be agnostic to the other decisions around the command submission
<karolherbst> yeah.. I think you really want to have a good abstraction around your firmware
<karolherbst> getting GL to do something is relatively easy
<jani> airlied: realized you haven't pulled this https://lore.kernel.org/r/87k06rfaku.fsf@intel.com yet
<karolherbst> just need a command submission ioctl, bo_new/destroy and bo_wait
<karolherbst> :D
<lina> Not even bo_wait, in the Python version I just made submission synchronous ^^
<lina> (Horrible, I know)
<karolherbst> well.. that's not enough for GL really
<lina> It's enough for kmscube, glmark, and inochi2d!
<karolherbst> that's true
<karolherbst> having synced command submission is an important debugging tool though
<karolherbst> so whatever UAPI you have later, you want to have a bit to enable that
<karolherbst> in nvk I used it for dumping the command submission causing the context to fail
<jani> airlied: okay if I wait with the final drm-intel-next pull request until after you've pulled the first one?
<lina> That should be easy, I'll probably give the event notification objects responsibility over that so it would just be something like "event.wait()" in the submission path
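A sketch of that debug bit with stand-in types: gate an event wait in the submission path behind a flag, so a faulting job is pinpointed before anything later runs behind it. None of these names are real UAPI.

```rust
// All names here are hypothetical.
struct Event;
impl Event {
    fn wait(&self) { /* block until the firmware signals completion */ }
}

struct Job;
struct Queue;
impl Queue {
    fn enqueue(&mut self, _job: Job) -> Event {
        // Hand the job to the firmware; return its completion event.
        Event
    }
}

const SUBMIT_SYNC: u32 = 1 << 0; // hypothetical "synchronous debug" flag

fn submit(queue: &mut Queue, job: Job, flags: u32) -> Event {
    let event = queue.enqueue(job);
    if flags & SUBMIT_SYNC != 0 {
        // Debug aid: block here so a faulting job is identified before
        // any later submission can run behind it.
        event.wait();
    }
    event
}
```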
<jani> airlied: although it's soon rc6
pallavim has joined #dri-devel
<lina> And yeah, this GPU is... bad at failure recovery... it drops all ongoing work unconditionally.
<karolherbst> lina: yep, in nouveau we simply wait on the kernel side fence
<karolherbst> lina: that's normal
<lina> We can tell which context caused a fault and at what address, and *usually* that pinpoints one or two active commands in that context, but it's not reliable.
<karolherbst> if the context faults, you have to reap it and create a new one or report it to the application that they have to create a new API context and recreate state
<lina> And everything else going on just gets screwed
<lina> Oh, it breaks other contexts too...
<karolherbst> I suspect you can keep the VM and all memory?
<lina> That's the bad part.
<karolherbst> uhhh
<karolherbst> so all contexts are lost?
<lina> No contexts are lost, but ongoing work is aborted - the firmware halts, lets you inspect state and what was currently being processed, then when you tell it to resume it just fires completion events for everything without them actually completing.
<karolherbst> I suspect this depends on what you need to recover.. is there no way to just remove the "broken" context, short of rebooting the entire accel engine?
<karolherbst> ohh
<karolherbst> that's horrible
<lina> This causes problems even in macOS, I think alyssa was seeing Safari glitch up when she was causing GPU faults.
<lina> It's a firmware problem, I hope Apple improve it in the future...
<karolherbst> I suspect you want to recover the queued jobs on other contexts and resubmit them before resuming
<karolherbst> or resume and resubmit
<karolherbst> not sure how hard that would be
<lina> Yeah, re-submission is probably the least bad solution here. The problem is if there was other work queued after the broken commands, it will run first... I don't know if we can somehow get the firmware to roll back, or to skip everything ahead.
<lina> It's going to be complicated and flimsy and I just hope Apple fix the firmware before I get around to trying to fix this...
<karolherbst> mhhh
<lina> Anyway, I think we can probably leave this particular problem for another time...
<karolherbst> but if the queued stuff still runs, where is the problem?
<lina> The current commands are skipped, and future commands run, which could depend on it.
<karolherbst> okay...
<karolherbst> can't you destroy the context and make all jobs from that specific context be skipped?
<lina> Good question! There is "some" op used when a context is destroyed, but I'm not actually sure if it dequeues queued commands. It didn't seem to abort ongoing commands at least. This all needs more investigation...
<karolherbst> could also probably just mark the context as dead on the kernel side and report this to userspace
<lina> There are also tons of ops that exist that we haven't seen macOS use that could do useful things
<karolherbst> and just process the following commands, but userspace will deal with the error anyway
<karolherbst> so what we did in nouveau is that following command submissions on a dead context return an error
<karolherbst> and that's how userspace knows the context is dead and it shouldn't do anything with it anymore
MajorBiscuit has quit [Ping timeout: 480 seconds]
<karolherbst> and then things figure themselves out (meaning the application crashes :) )
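The nouveau-style policy karolherbst describes, sketched with illustrative names: once a context faults it is marked dead, and every later submission on it fails so userspace knows to recreate its API context.

```rust
#[derive(Debug)]
enum SubmitError {
    // Would map to an errno like ENODEV in a real driver.
    ContextDead,
}

struct Context {
    dead: bool,
}

impl Context {
    /// Called from fault handling: reap the context. Userspace must
    /// create a fresh API context and rebuild its state.
    fn mark_dead(&mut self) {
        self.dead = true;
    }

    fn submit(&mut self, _cmds: &[u8]) -> Result<(), SubmitError> {
        if self.dead {
            return Err(SubmitError::ContextDead);
        }
        // ... hand the commands to the firmware ...
        Ok(())
    }
}
```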
<lina> I don't think we'd want to drop the whole context (VM), but we might want to drop all command queues that are part of it, possibly.
<karolherbst> the VM is part of the context?!?
<karolherbst> that's... horrible
<lina> No no, I'm just calling the VM the context
<karolherbst> okay
<karolherbst> yeah.. that's fine then
<lina> I'm not sure what you mean by context ^^
<karolherbst> mhh I guess "queue" as you call it
<karolherbst> though queues could also share a hw context
<lina> Ah, then yes, you can have multiple of those (the UABI probably needs stuff added to it to handle that, right now the demo just has one)
<karolherbst> depends on if there is such a concept as context on the apple GPU
<karolherbst> so on nvidia hw we have VMs, but also real hardware context with real state commands can set and read
<karolherbst> and then you submit commands running within a specific context
<karolherbst> and the kernel/firmware context switches all those
<lina> There are 3 limited resources: VM context IDs (63), event notification stamps (128, but you need 2 for a render job), and tile buffer manager slots (127). Those can be dynamically allocated/deallocated to work/contexts as needed.
<karolherbst> okay
<lina> Nothing else is limited, you can have as many queues as you want, as many commands queued at once as you want, etc.
<lina> The driver allocates memory for the firmware to manage each of those, so it's all dynamic
<karolherbst> yeah.. seems like your context is more like the entire VM
<karolherbst> is there persistent state across command submissions or do you have to emit all state each time you submit?
<lina> That's what I call context, yeah, the VM bit
<lina> But you can have as many contexts as you want, just only up to 63 bound in the MMU with pending work submitted
<karolherbst> or is the state just inside VRAM?
<karolherbst> by state I mean you can for example set a clear color value and it persists across submissions
<lina> Oh, you mean draw state! I don't think any of that is persisted, that's all set up as part of the draw pipeline which is mesa's job.
<karolherbst> but it's a bit weird to have limited VM context ids... well such a low limit actually, because that means you can have at most ~60 applications doing rendering
<karolherbst> lina: okay
<lina> It does do preemption though, the firmware handles that, and then it dumps state out into some buffers that userspace prepares.
<karolherbst> so you have to set the parameters of commands each time they get submitted, okay, that's fine I guess
<lina> Well, you don't have to keep contexts mapped while no work is pending for those contexts.
<karolherbst> mhhh
kts has quit [Ping timeout: 480 seconds]
<karolherbst> okay
<lina> My idea was to have some allocators that let e.g. a work queue grab an MMU context ID, which it owns until it drops that value, and it could pass the previous context ID in as a hint so the uncontended case does not thrash the TLBs
<lina> Some kind of LRU thing to evict entries
<lina> If all slots are busy I don't know what to do, that's where driver-side scheduling comes in I guess? I suspect if I just block that'd get me most of the way there in practice though...
<karolherbst> yeah.. sometihng.. swapping VM contexts sounds like something you really want to have here
kts has joined #dri-devel
<lina> Each render job submission specifies the context ID, so it's perfectly fine to swap them around as long as you only do it when all work for a context has completed and no new work was submitted.
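A rough sketch of the slot allocator lina describes: 63 hardware VM context IDs handed out with a previous-slot hint (so the uncontended case keeps its TLB entries), falling back to LRU eviction. In a real driver a slot could only be evicted once all work bound to it has completed; everything here is hypothetical.

```rust
const NUM_VM_SLOTS: usize = 63; // hardware MMU context IDs

struct SlotAllocator {
    owner: [Option<u64>; NUM_VM_SLOTS], // which context holds each slot
    last_used: [u64; NUM_VM_SLOTS],     // stamps for LRU eviction
    clock: u64,
}

impl SlotAllocator {
    /// Bind `ctx` to a slot, preferring `hint` (its previous slot) so
    /// the uncontended case does not thrash the TLBs.
    fn bind(&mut self, ctx: u64, hint: Option<usize>) -> Option<usize> {
        self.clock += 1;
        // Fast path: the hinted slot is free or still ours.
        if let Some(i) = hint.filter(|&i| i < NUM_VM_SLOTS) {
            if self.owner[i].is_none() || self.owner[i] == Some(ctx) {
                self.owner[i] = Some(ctx);
                self.last_used[i] = self.clock;
                return Some(i);
            }
        }
        // Otherwise take any free slot...
        if let Some(i) = self.owner.iter().position(|o| o.is_none()) {
            self.owner[i] = Some(ctx);
            self.last_used[i] = self.clock;
            return Some(i);
        }
        // ...or evict the least-recently-used slot. A real driver may
        // only do this once all work on that slot has completed; if
        // every slot still has pending work, block (or schedule).
        let (i, _) = self.last_used.iter().enumerate().min_by_key(|&(_, &t)| t)?;
        self.owner[i] = Some(ctx);
        self.last_used[i] = self.clock;
        Some(i)
    }
}
```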
<clever> on something like the bcm2711 v3d, i dont think the hardware even supports multiple VM's on the 3d core
<clever> and the page tables for its virtual memory are rather large
<clever> so linux just shoves every render task into the same virtual space
<lina> Panfrost only supports 3 VMs I think, one per queue type, and just swaps them around for every submission.
<lina> But it also doesn't actually have any GPU-side scheduling so the driver just uses drm_sched
<lina> I'm not sure if we really want to throw drm_sched on top of the firmware scheduling in my case, though...
<karolherbst> probably not
<clever> the v3d also uses a kind of draw list, where you can queue up jobs, but it has no pre-emption or context switching of any kind
<clever> but i did recently discover that you can chain jobs together fairly painlessly
<lina> The only case where we might need some kind of kernel scheduling is when one of those 3 resources is exhausted/blocks... but I don't know how likely that is? Maybe it's okay to just block in those cases. I don't know how macOS does it...
<karolherbst> yeah.. just block
<karolherbst> I suspect it won't happen in real world cases
MajorBiscuit has joined #dri-devel
<lina> Yeah, I think so too
<karolherbst> and if it does, you can rework things in 2030 when people start to complain 🙃
fahien has joined #dri-devel
<lina> For buffer managers, I know you can share them within a context between different queues, and we probably want to do that since it saves memory. It just means I need a bit of careful tracking to manage the size and make sure that there's a minimum capacity per queue that is using it.
<karolherbst> we might need a "high prio" flag for GPU using applications anyway
<lina> That's another question, whether that should be driven from userspace or kernelspace. Apple does it in the kernel.
<karolherbst> and then you can block all the low prio ones
<karolherbst> something something
<lina> The GPU scheduler has 4 priority levels, and we could reserve some slots for high-prio jobs
<karolherbst> yeah
<lina> Then as long as not too many high prio clients are running, they'd always get priority without blocking
<karolherbst> you never want to block the compositor for example
<lina> Yeah
<lina> And if I'm streaming I might want inochi2d to have priority over everything but the compositor too ^^
<karolherbst> I could imagine it even affects increasing power states quickly
<karolherbst> or something
<lina> There's a ton of magic constants driving the power management, but from what I've seen it's very, very fast anyway. Let me check...
<karolherbst> compositors are the kind of application that does nothing for 5 minutes, but needs 100% GPU power for 0.1 seconds and goes back to sleep
<karolherbst> intels driver is very broken there :(
<clever> the rpi also has its own dedicated 2d only compositor core, so it cant do fancy effects, but it could still do 95% of what window managers need
itoral_ has quit []
<clever> its exposed via DRM and has a writeback port that could be used
sravn_ has quit [Remote host closed the connection]
<karolherbst> clever: that's not the problem. The problem is ramping up clocks to the max quickly
<clever> but X doesnt really want to share that with the WM, so it already fails there
sravn_ has joined #dri-devel
<karolherbst> i915 is really doing a terrible job there
<clever> ah, thats where some code i saw from the rpi camera subsystem could be handy
<lina> Apple also has basic compositing in the display engine (a few layers, tons of formats/scaling)
<karolherbst> if I force high clocks all the time, my desktop is suddenly super smooth even for effects
<clever> when enabling the camera port, it fires off a request to force the clocks to a higher level
<karolherbst> if I don't. it's laggy
<clever> what you need, is to dynamically force a higher clock, while jobs from a certain client are running
<clever> and dont rely on the "its been 100% for 0.2 seconds" limit to dynamically boost the clock
<karolherbst> yeah.. we need a solution there, and the solution can be "increase clocks a little if you hit 100% util"
<karolherbst> or check the avg util over the last 0.5 seconds or something
<clever> i was thinking pre-boost, when you know its a high priority job
<karolherbst> no, the compositor is doing something -> 100% clocks
<clever> dont wait for it to walk a bit
<clever> exactly
<karolherbst> and then just drop to 0% once it's idling again
<karolherbst> spikey loads are really different from normal scheduling
<karolherbst> maybe it needs to be part of command submission or something.. I don't know
<karolherbst> maybe needs GL+VK extensions..
<karolherbst> anyway.. it's broken as it is today
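What karolherbst and clever are arguing for, as a toy governor: boost straight to max clocks the moment a latency-sensitive client submits work, and drop straight back down on idle, instead of ramping on sustained utilization. This is a hypothetical interface, not any driver's actual API.

```rust
enum PState {
    Idle,
    Max,
}

struct ClockGovernor {
    state: PState,
}

impl ClockGovernor {
    fn on_submit(&mut self, latency_sensitive: bool) {
        if latency_sensitive {
            // e.g. the compositor: go straight to 100% clocks rather
            // than waiting for a "100% util for 0.2s" style trigger.
            self.state = PState::Max;
        }
    }

    fn on_idle(&mut self) {
        // Race to idle: all queues drained, drop clocks immediately.
        self.state = PState::Idle;
    }
}
```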
<clever> oh, that reminds me of cpu freq bugs i had on an old laptop
<clever> just doing `ls -lh` would make the cpu usage spike, and the freq to climb
<lina> So I think it takes the GPU about 180us to turn on and go into pstate 3 after work submission from idle-off? And then after that it does wait 20ms before ramping up to max, but I bet there is some way to control that.
<clever> right as i type the response, "oh its idle" and the freq drops again
<clever> karolherbst: but the worst part, is that when the freq changes, the cpu stops responding for 0.3 seconds, and the ps2 fifo overflows!
<clever> so it loses key-up events, and then key repeat takes offfffffffffffffffff
<karolherbst> lina: 20ms sounds good enough
<karolherbst> you lose one frame :)
<lina> Yeah (if it's too slow) ^^
<karolherbst> I am sure this 20ms has nothing to do with 1000/60 being ~16.66
<lina> 10ms to pstate 4, 20ms to 6
<karolherbst> ahh yeah
<lina> But I'm pretty sure that interval is configurable somewhere in the giant parameter structs
<karolherbst> that's probably quick enough
devilhorns has quit []
<lina> The power management task seems to run every 10ms
<karolherbst> there is one huge issue though
<karolherbst> what if the CPU and GPU starve each other
<lina> Actually 8ms? So maybe it is 8/16 and that's deliberately <16.66 ^^
<lina> Starve each other?
<karolherbst> so the CPU isn't fast enough to submit enough jobs in time, but then the GPU isn't fast enough to keep the CPU busy submitting more jobs, because they only increase each clocks by a small percentage
<karolherbst> so they race to max clocks, but slowly, because they are not able to actually put enough load
<lina> The CPU scales super fast on M1s, few microseconds too, and Linux is very eager to ramp it up when a core is busy.
<lina> So I don't think that will be a problem either
<karolherbst> yeah
<lina> (With schedutil)
<karolherbst> the only sane way of doing ultra responsive GUI: race to idle
<lina> Yup!
<karolherbst> wake up, push cores to 100%, do _all_ the things, and then sleep for a long time
<karolherbst> we even have a sleep_range thing in linux for that
<karolherbst> but hardly anybody makes use of it
<lina> Yeah...
<clever> lina: oh, on the subject of the time it takes to change cpu freq, knowing how the freq is generated can let you speed that up
<lina> (By the way, in case you're curious, this is how I watch over the power management and generally what the GPU is doing: https://twitter.com/LinaAsahi/status/1539618595537162242)
<clever> for example, on the rpi, there are 2 stages, first a PLL that multiples a crystal by some variable
<clever> then a divider, that divides it back down
<lina> It does have timestamped stats messages that tell you when things like pstate transitions happen, but that memory address poller is super cute for really seeing in detail what happens when and in what order
<clever> changing the multiplier is slow, because the PLL has to re-lock
<clever> but changing the divider is instant
<karolherbst> lina: I actually saw that :)
<lina> ^^
<airlied> jani: will process it now
<lina> clever: On Apple CPUs it's all handled by the hardware, you just poke the pstate into a register
<karolherbst> same on intel, but it's broken on intel 🙃
<clever> so i might configure the PLL to run at 2ghz, and then configure the arm core to run at 2ghz/5 == 400mhz
<lina> I think marcan measured all the latencies and they looked reasonable (basically limited by voltage ramp-up on the way up, essentially instant on the way down)
<karolherbst> it works perfectly well for high load applications, but not for GUI, wonders where the focus is :)
<clever> lina: but is that truly hardware, or is it a firmware blob behind the curtains? or multiple PLLs pre-locked to each freq?
<karolherbst> turns out, if you focus too much on HPC, your desktop side suffers
<lina> I think it's hardware, some sort of hardware sequencer
<lina> There's no known firmware block that would take care of CPU scaling
<clever> ah, and yeah, you also touched on something i have yet to play with
<clever> seperately from the clock, you also need to ramp up voltages
<lina> karolherbst: Same with audio, Intel is terrible at realtime latencies...
* karolherbst had too much fun with this on nvidia hardware
<clever> or it will malfunction at that higher clock, without enough voltage to go with it
<lina> I hope Apple does a lot better there ^^
<karolherbst> lina: that's not even the worst part, the on board audio chips are noisy as hell
<lina> Of course, but even assuming you use an external interface... on Intel, you need to basically disable all C-states if you want things not to drop out like crazy, and even then you still need to use larger than ideal buffer sizes
<karolherbst> so if you want to do pro audio stuff, you want to use an external interface regardless
<lina> Apple's jack codecs are actually kind of nice enough for pro audio work using headphones!
<karolherbst> lina: ohh.. what I noticed is, that it's really only a problem if you connect the interface on Thunderbolt docks 🙃
<lina> The most recent ones even support high-z headphones
<airlied> lina: so you want a bo alloc/free, syncobj wait, vmbind, and exec
<airlied> you want to do the va address space mgmt in userspace
<karolherbst> as soon as I connected my interface to the laptop directly, all audio issues just vanished
<airlied> then the kernel does the page table mgmt
<karolherbst> lina: yeah well... atm I just use pipewire and that works good enough on linux, but my use case is also just having solid audio on online meetings :D
<clever> ive been using pulseaudio for years
<karolherbst> so my demands are quite low there
<lina> vmbind?
<karolherbst> clever: it sucks :P
<karolherbst> use pipewire, your life will improve
<clever> the only major problem i had, is that originally PA was configured to run with realtime scheduling
<clever> and if the system hung for even a split second, linux would panic, and silently -9 every realtime process :P
<clever> PA would restart, but chrome would then claim it has no capture channel available
<airlied> lina: an ioctl to map/unmap va space to/from bo
<karolherbst> pipewire is nice, as it merges PA with Jack, so you can painlessly use both type of applications
<clever> bye bye meetings :P
<clever> karolherbst: i'll need to investigate that
<karolherbst> yeah.. chrome is buggy there
<karolherbst> but pipewire really is a game changer
<clever> what does pipewire do in terms of special audio codec passthru?
<airlied> both exec and vmbind ioctls need to take two syncobj arrays
<clever> like atmos?
<airlied> one for waits and one for signals
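A hedged sketch, as #[repr(C)] Rust, of the ioctl shape airlied describes: exec and vmbind each carry two syncobj handle arrays, one waited on before the operation and one signaled after. Field names are invented for illustration.

```rust
#[repr(C)]
struct SyncobjArray {
    handles: u64, // userspace pointer to an array of u32 syncobj handles
    count: u32,
    pad: u32,
}

#[repr(C)]
struct ExecArgs {
    cmdbuf: u64,             // pointer to the command buffer description
    in_syncs: SyncobjArray,  // waited on before execution starts
    out_syncs: SyncobjArray, // signaled when execution completes
}

#[repr(C)]
struct VmBindArgs {
    bo_handle: u32,
    flags: u32,
    addr: u64,               // GPU VA chosen by userspace
    range: u64,
    in_syncs: SyncobjArray,  // waited on before the (un)bind happens
    out_syncs: SyncobjArray, // signaled once the mapping is in place
}
```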
<karolherbst> use pavucontrol for normal stuff, but if you want to do fancy channel wiring, just use qjackctl instead
<lina> airlied: Oh, splitting bo creation from mapping to va you mean? (gpu va I assume)
<karolherbst> it also affects the PA side of things
<airlied> lina: yes
<lina> The kernel would handle the address space allocation I think, since it also needs to put stuff in user VA space (especially for the buffer managers)
<karolherbst> so you can rewire channels of PA applications through jack tools
<airlied> lina: bo creation just allocates physical ram
<karolherbst> it's awesome
<airlied> lina: no we want userspace to own the address space
<airlied> with a possible cut-out for kernel allocs
frytaped is now known as Guest543
testing is now known as frytaped
<karolherbst> and pipewire seems to come with lower latencies than jack ever did, but also providing all the PA features
<karolherbst> it's basically the first competent audio system linux ever had :P
<karolherbst> :D
<airlied> lina: there shouldn't be much things the kernel needs to put in the vma
<lina> airlied: So the big question here is the buffer managers
<airlied> lina: what buffers are they managing?
frytaped is now known as go4godvin
<lina> The GPU needs buffer space to store tiled vertex data, and those are (mandatorily) managed by the kernel, but the buffers themselves are in user VA space. Apple does 100% of the work in their kernel driver, userspace doesn't see a thing. I don't know how we should do it.
<airlied> what is the lifetime of those allocations?
<lina> Doing it in the kernel absolves userspace and the UABI of that responsibility... doing it in userspace makes more sense, but then we need to figure out a UAPI for this, since ultimately the kernel needs to be told about things like buffer sizes increasing.
<lina> The buffer managers can be allocated at any time, are bound to hardware slots during rendering, and can be shared between multiple queues/render jobs within a context (with some requirements for minimum number of buffer pages per potential active job).
<lina> I was thinking we probably just want to allocate one per context and share it for everything, since that saves memory.
<airlied> what makes the kernel allocate them?
<airlied> the fw asks?
<lina> The FW manages the buffers via shared memory in kernel VA space, so the kernel needs to deal with this one way or another.
<lina> So the question is whether we put the logic in the kernel or make a whole API to have userspace manage this together with the kernel.
<airlied> probably just allocate it from a cutout vma space
Guest543 has quit [Remote host closed the connection]
frytaped has joined #dri-devel
<airlied> those would be annoying for vulkan to manage from userspace
<lina> They also have to be dynamically expanded if they overflow during rendering (for performance, since that causes rendering spills). E.g. apple bumps them up after a few frames of overflows.
<airlied> if the lifetimes were not known
kts has quit [Quit: Konversation terminated!]
<karolherbst> make the cutout big enough, like 1<<40 or something :P
<lina> I'm not sure what we should do for shrinking, if anything. Might be okay to just have them never shrink for any given context.
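The grow-on-overflow heuristic described above, sketched with invented thresholds: the firmware reports how many pages a frame spilled by, and after a few consecutive overflowing frames the tile buffer is grown; shrinking is simply never attempted.

```rust
struct TileBufferMgr {
    pages: u32,           // current buffer size, in GPU pages
    overflow_frames: u32, // consecutive frames that spilled
}

impl TileBufferMgr {
    /// Called once per completed frame with the firmware's report of
    /// how many pages that frame overflowed by (0 = no spill).
    fn frame_done(&mut self, overflow_pages: u32) {
        if overflow_pages == 0 {
            self.overflow_frames = 0;
            return;
        }
        self.overflow_frames += 1;
        // After a few overflowing frames in a row, grow by the
        // reported spill (a real driver would cap this against
        // available memory). Never shrink.
        if self.overflow_frames >= 3 {
            self.pages += overflow_pages;
            self.overflow_frames = 0;
        }
    }
}
```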
<clever> lina: v3d lets you configure the addr/size of the overspill space separately from the main buffer for a single job, and there is an irq to signal that it has run out
<lina> karolherbst: We have 39 bits of VA space, so that would be a bit of an issue...
<karolherbst> uhhh
<clever> but i'm not sure how to know when that used spillover is free again
<karolherbst> 39bits?
<lina> Yup.
<karolherbst> seriously?
<lina> Yup.
<karolherbst> that's.... not much
<lina> I know...
<karolherbst> why....
<clever> thats still more then the bcm2711 v3d, which only has 32bits
<karolherbst> well... this is no silly embedded sytstem GPU though :P
<clever> and the vc4 v3d, which only has 30bits, but there is only 30bits of ram
<lina> clever: The firmware manages the spills, and it continues rendering on its own splitting the job, but it does tell you how many times the buffer overflowed and by how many pages. So the kernel has to check that at some point and increase the buffer size to avoid future spills.
<karolherbst> that's like intended for pro audio/video production use cases with tons of memory
<karolherbst> I don't even...
<clever> yeah, if you have >4gig of gpu ram, the v3d design wont work anymore
<karolherbst> it makes no sense
<karolherbst> lina: are you sure it's always 39 bits? is there no way to get a bigger space if needed?
<karolherbst> because 39 just sounds like it's not enough
kts has joined #dri-devel
<clever> 39bits is 512gig, that seems plenty?
<lina> It's always 39, and some GPU command structures have 5 bytes per address, and some tile buffer structures do too.
<karolherbst> I mean.. it's 512 GiB, but... come on
<lina> It's baked into the design.
<lina> (Probably PowerVR legacy...)
<karolherbst> clever: we are talking virtual memory here
<lina> karolherbst: It's not like it does demand paging either, so...
<clever> karolherbst: ah, but how are page faults handled in the gpu, can swap really work?
<lina> Nope, it can't, not with current firmware.
<karolherbst> well.. doesn't matter really
<karolherbst> you don't want to be restricted by limited VA space
<karolherbst> that's all
<clever> so your basically limited by how much physical ram there is
<karolherbst> sure, but you want to be able to sanely place things
<lina> The funny thing is, apparently the design was intended to share page tables with the CPU? Even though that would force you to use 40-bit AS on the CPU side too, which is kind of ridiculous.
<clever> yeah, the same problem exists on the bcm2711, the 3d core was designed around 32bit addressing, and they didnt really fix that
<lina> You know how you do GPU TLB flushes?
<karolherbst> well.. you can manage with 39 bits I am sure, but this puts more pressure on doing proper placement of e.g. cut-out spaces
<lina> Using a *CPU* tlbi instruction. In the Outer Sharable domain.
<lina> On the GPU VA, directly.
<clever> when the soc got 8gig of ram, they just bolted an mmu betwee the 3d core and the ram
<clever> so the 32bit virtual space, can map to the 33bit physical space
<lina> I don't know what happens if GPU VAs overlap CPU VAs, I guess you needlessly shootdown CPU TLBs?
<karolherbst> maybe?
<karolherbst> well..
<karolherbst> you can also just always have the same VA on both sides :D
<karolherbst> oh well...
<lina> Yes, I think that was the intent of the hardware designers...
<lina> Except, you know, they ended up with 40 bits only (39 per kernel/user)
<karolherbst> always doing mmu_notifiers :D
<lina> And then no actual demand paging support in the firmware
<karolherbst> but yeah.. might actually make sense on that hardware
<karolherbst> so GPU and CPU pointers are valid on each other side
jewins has joined #dri-devel
<karolherbst> and being it's shared memory there is not even a perf overhead
<lina> Yes, that is used for hybrid memory type stuff on some platforms, right?
<lina> And I think that was the intent here
<lina> GPU page tables are just ARM page tables after all
<karolherbst> well.. the intent of mmu_notifiers was HMM for nvidia
<karolherbst> so you can share the VM, but you do page faults and do on-demand migration
<lina> I think here you could literally share page tables if you did it right.
<karolherbst> with a way of hinting that the kernel can do migrations before it faults
<lina> But it'd be a bit tricky with the permission bit mapping.
<clever> lina: that sounds like the perfect reason to just copy the TTBR1 into the gpu and share the whole table
<karolherbst> yeah...
<karolherbst> just embrace it and make sure the VA is the same on both sides
<lina> clever: except Linux ARM64 builds usually want >40 bits of VA space...
<lina> So that won't work for generic kernels
<clever> ah, yeah
<lina> And as I said it doesn't do demand paging anyway, so you still need to pin things, which kind of defeats the purpose.
<karolherbst> mhh
<lina> Plus the permission bit issue - you need to use Apple's SPRR mechanism to get the mapping to match I think?
<lina> And then you lose W|X systemwide
<lina> Let's not go there ^^
<karolherbst> but but...
<lina> I'm going to have enough trouble upstreaming this GPU driver without also trying to upend the entire Linux VM subsystem!!!
<lina> Rust isn't even in Linux yet...
<clever> what if you just had a private set of paging tables for the gpu, and when you import a dma_buf into the gpu, you map it there and pin the pages?
<clever> allowing the buffer to be fragmented in physical space
<karolherbst> lina: just have fun while doing it :P
nuh^ has joined #dri-devel
<lina> That's already what I'm doing!
<karolherbst> good!
frytaped has quit [Remote host closed the connection]
frytaped has joined #dri-devel
<lina> (I guess that answer works for both ^^)
frytaped has quit [Remote host closed the connection]
<lina> The only other tricky thing here is the GPU always uses 16K pages, while the kernel could be built for 4K... but I guess we'll cross that bridge when we get there.
<lina> That already causes problems with IOMMUs and those patches aren't upstream yet.
<karolherbst> ahh yeah... those silly 4k pages
<clever> yeah, thats a bigger issue
<lina> clever: I already have GEM implemented with the shmem helpers, dma_buf import, etc, and a map to GPU space function (though right now I'm doing VA management in kernel space, partially because the kernel needs to have a ton of firmware buffers and I reuse the same GEM infrastructure for that)
<lina> I could easily limit the VA space for the kernel for user contexts and say that's a kernel carveout though, I already do that for kernel space due to the interaction with the firmware page tables
<karolherbst> yeah..
<karolherbst> probably the best idea
<lina> What's the motivation for managing VAs from userspace, by the way? I didn't know that was a thing. I think panfrost doesn't do it like that?
<clever> ive been reading thru the vchi code in linux, to better understand some of the rpi protocols, and found a neat thing, where you can export a dma_buf to the firmware, and it maintains ref-counting across that RPC wall
<karolherbst> lina: vulkan mainly
<lina> Oh, vulkan needs that?
<karolherbst> yes
<lina> Huh...
<clever> and when the firmware is done with it, it sends a release msg back to linux, which decrements the refcnt, and may free it
<karolherbst> it makes also things easier
<karolherbst> if you can put buffers at certain VA addresses, you can swap buffers with bigger ones on the fly
<lina> Ahh...
<clever> for that, i feel like you would want to waste some VA space, and assign addresses as-if every object was some defined max-size
<clever> but only map it up to its actual size
<lina> If the kernel just deals with the tile buffers, I guess then I can probably just steal 4 GiB for the kernel and call it a day. If your tile buffer ends up that large, I'm pretty sure spills to the framebuffer have a ~zero performance hit at that point (I guess we should benchmark this).
<clever> like, just base + (object_id * maxSize)
<airlied> lina: for vulkan sparse bindings it just makes more sense
<karolherbst> lina: it also helps, because in vulkan you record command buffers
<karolherbst> so you might already added VM addresses for stuff, but what if you need to replace that stuff by something bigger/smaller whatever
<airlied> lina: you need to be able to allocate a va range with bo sparsely bound to it
<karolherbst> we have this issue in nouveau for example where we have to maintain a shader heap.. and resizing that one isn't trivial if the shader addresses are already baked in :)
<airlied> there isn't a 1:1 between bo and va anymore
<lina> Also, userspace needs to maintain a 32-bit carveout for pipeline structures and shaders, because I guess those have 32-bit addresses internally. I'm not sure if that has to be at a fixed VA. macOS does it like that, and the firmware globals do have a pointer to the base of that, but it's also specified in every command submission so I'm not sure if that pointer is ever used for anything?
<lina> Though it might be best to play it safe and let the kernel pick that VA, if we discover it matters one day changing the UABI would suck...
<karolherbst> lina: can't you manage the shader heap in userspace?
<airlied> there isn't a 1:1 mapping
<lina> shader heap?
<karolherbst> or well.. the thing you put shaders in
<lina> I mean userspace can manage those VAs just fine, I just think it might be best to have the kernel decide what its base VA is just in case.
<karolherbst> ohh okay
<karolherbst> well
<karolherbst> you can have a query info UAPI which lets userspace query all that information
<lina> Yeah
<karolherbst> that's totally fine
<lina> macOS puts the shader/pipeline stuff at 0x11_00000000 and general GPU objects starting at 0x14_00000000.
<lina> There was also a reference to the 4G block at 0x13_00000000 in a global in prior GPU firmware, but they got rid of it in the latest version, so it was probably legacy.
<karolherbst> probably just addresses coming from dice rolls :P
<karolherbst> someone came up with a fixed VA scheme and they just rolled with that
<lina> Yeah, quite possible, though they do exist in GPU init globals.
<lina> So I do wonder if they are used for anything.
<airlied> I assume metal has sparse bindings
<lina> Possibly not though, there's way too many globals for all of them to matter.
<airlied> ah yes it does
<lina> The driver also does "something" with the last 2 pages of user VA space. Possibly some kind of spill/dummy page?
<lina> I'll probably just reserve the top 4G of user VAs for the kernel or something
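Illustrative constants for the VA layout under discussion: a 39-bit GPU VA space, a 4 GiB kernel carveout at the top, and the bases lina quotes from macOS. The carveout split is a guess, not a decided UABI.

```rust
const GPU_VA_BITS: u32 = 39;
const GPU_VA_SIZE: u64 = 1 << GPU_VA_BITS; // 512 GiB total

// Guessed split: reserve the top 4 GiB of user VA for the kernel
// (buffer managers and other firmware-visible allocations).
const KERNEL_CARVEOUT_SIZE: u64 = 4 << 30;
const USER_VA_TOP: u64 = GPU_VA_SIZE - KERNEL_CARVEOUT_SIZE;

// Bases observed under macOS, as quoted above:
const MACOS_SHADER_BASE: u64 = 0x11_0000_0000; // 32-bit shader/pipeline region
const MACOS_OBJECT_BASE: u64 = 0x14_0000_0000; // general GPU objects
```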
<clever> oh, that reminds me of a trick OSX does on any 64bit machine
<clever> the entire lower 4gig of the VA space is just banned
<clever> so if you accidentally put a pointer into a 32bit var, it will pagefault, always
<lina> The entire lower 4G of *physical* addresses also doesn't exist on these machines
<clever> heh, doubly safe
<clever> you cant make that mistake in kernel either!
<lina> And on the M1 Pro/Max, RAM starts at 0x100_00000000. People have broken booting on those more than once with kernel MM changes that break when you have zero RAM contained in "some large" block at 0.
<lina> On M1/M2 it's at 0x8_00000000 though
<karolherbst> oh wow...
* airlied is sitting in a rust/liunux talk
<karolherbst> airlied: have fun! :D
<lina> :D
<lina> airlied: Enjoy and please report back! ^^
<airlied> kernel maintainers are arguing over random things :-P
<karolherbst> of course they do
<karolherbst> question is.. are those important bits or just bikeshedding? :P
<karolherbst> "but but.. no I need to learn rust in order to review patches! I don't wanna!"
<airlied> lina: well I've informed linus he'll be testing a gpu rust driver on his laptop
<airlied> mostly bikesheds with a small bit of actual issues
<karolherbst> yeah.. guess that's understandable. I just wish people would be more open-minded about such things
<karolherbst> I am sure there were tons of bikeshedding around moving from assembly to C for writing kernels :P
<clever> karolherbst: some parts like the initial mmu setup kind of have to be asm, because PIC mode in gcc isnt PIC!
<karolherbst> yeah...
<clever> 32bit arm had a bit of an ugly chunk of code, where asm would apply relocation patching to some C code, so you could run bunzip without the mmu
<clever> 64bit arm then said no, the image cant be compressed
<clever> if you want compression, have the bootloader undo it first!
<clever> so 64bit arm is only mmu setup and nothing more
<clever> it gets into virtual mode and c far more easily
<clever> but, even in virtual mode, i have ran into headaches
<lina> airlied: Nice! ^^
<lina> (We haven't looked at M2 yet though... Hopefully it won't be *too* different...)
<clever> karolherbst: my pi2, was faulting almost immediately after entering virtual mode in linux, it never printed while in virtual mode, and it trashed its own stack when faulting
<karolherbst> :(
<clever> karolherbst: the printk routine grabs a mutex before it does anything, and i didnt enable SMP support in the arm core
<clever> so the mutex opcode caused it to jump to the undefined opcode exception
<clever> and linux hadnt setup the exception table yet
<clever> the fun part, was debugging that over jtag+gdb
<clever> i cant set breakpoints to a VA while the mmu is off
<clever> and i think i was entirely using software breakpoints, so i cant set a breakpoint before i load a binary
<clever> so i had to juggle turning breakpoints on&off, just before it switches between bootloader->kernel, and before the kernel turns on the mmu
<clever> karolherbst: but once i figured the above out, all i had to do was set an SMP enable bit, and it magically sprang to life!
Vanfanel has joined #dri-devel
<lina> There's so much junk in these GPU init globals... I just found the physical address of the GPU MMU context table, I bet that one matters...
Jeremy_Rand_Talos__ has joined #dri-devel
<lina> One constant keeps showing up too, 19551, in both integer and float, and I have no idea what it means but it's in the structs 10 separate times...
kts has quit [Ping timeout: 480 seconds]
Jeremy_Rand_Talos_ has quit [Remote host closed the connection]
kts has joined #dri-devel
fahien1 has joined #dri-devel
fahien has quit [Ping timeout: 480 seconds]
nuh^ has quit [Remote host closed the connection]
fahien1 is now known as fahien
<jani> airlied: thanks. fired up a thread about setting up pr-tracker-bot for drm subsystem :)
Haaninjo has joined #dri-devel
<airlied> oh yeah I considered it before but wasn't sure where to host it
sdutt has joined #dri-devel
Vanfanel has quit [Remote host closed the connection]
fab has joined #dri-devel
morphis has quit []
morphis has joined #dri-devel
mbrost has joined #dri-devel
camus1 has quit []
zehortigoza has quit [Remote host closed the connection]
danvet has joined #dri-devel
chipxxx has quit [Read error: Connection reset by peer]
Vanfanel has joined #dri-devel
Duke`` has joined #dri-devel
shadeslayer has joined #dri-devel
shadeslayer is now known as Guest556
Guest556 is now known as shadeslayer
pa has joined #dri-devel
pa- has quit [Ping timeout: 482 seconds]
iive has joined #dri-devel
heat_ has joined #dri-devel
zehortigoza has joined #dri-devel
aravind has quit [Ping timeout: 480 seconds]
sarahwalker has quit [Remote host closed the connection]
fahien has quit [Quit: fahien]
rasterman has quit [Quit: Gettin' stinky!]
ybogdano has joined #dri-devel
Vanfanel has quit []
Lucretia has quit [Remote host closed the connection]
Lucretia has joined #dri-devel
mbrost has quit [Ping timeout: 480 seconds]
sdutt has quit []
sdutt has joined #dri-devel
MajorBiscuit has quit [Ping timeout: 480 seconds]
mszyprow has quit [Ping timeout: 480 seconds]
lynxeye has quit [Quit: Leaving.]
fahien has joined #dri-devel
lemonzest has quit [Quit: WeeChat 3.5]
frieder has quit [Remote host closed the connection]
ybogdano has quit [Ping timeout: 480 seconds]
fahien has quit []
lemonzest has joined #dri-devel
slattann has joined #dri-devel
jkrzyszt has quit [Ping timeout: 480 seconds]
tursulin has quit [Ping timeout: 480 seconds]
ybogdano has joined #dri-devel
sravn_ has quit [Remote host closed the connection]
sravn_ has joined #dri-devel
ybogdano has quit [Ping timeout: 480 seconds]
tobiasjakobi has joined #dri-devel
tobiasjakobi has quit [Remote host closed the connection]
arisu has joined #dri-devel
tzimmermann has quit [Quit: Leaving]
heat_ has quit [Remote host closed the connection]
heat has joined #dri-devel
ybogdano has joined #dri-devel
heat has quit [Remote host closed the connection]
heat has joined #dri-devel
stuart has joined #dri-devel
tagr has quit [Remote host closed the connection]
tagr has joined #dri-devel
pallavim has quit [Ping timeout: 480 seconds]
slattann has quit [Quit: Leaving.]
<mdnavare> vsyrjala: hwentlan_: Need some input on negative testing for VRR. What happens if userspace requests VRR on a CRTC through the vrr_enabled property for a connector that is not VRR capable? Currently we don't fail that modeset in the driver. Can we just return an error in drm_atomic_set_crtc_property if it tries to set vrr_enabled to true on a connector for which vrr_capable is false?
<mdnavare> Or can this just be asserted in the driver?
ybogdano has quit [Ping timeout: 480 seconds]
<emersion> mdnavare: vrr_capable is on connectors, vrr_enabled on CRTCs, iirc
<emersion> i don't think it makes sense to accept vrr_enabled=1 if all connectors attached to the CRTC have vrr_capable=0
<vsyrjala> if it doesn't error out atm, and userspace already relies on that, then we're probably stuck with the current behaviour
<emersion> yea, but i wouldn't be as pessimistic
<vsyrjala> there's also the nasty thing that if you swap out the monitor then a modeset with the same state will no longer be accepted
<emersion> wouldn't that be the same for many other props?
<vsyrjala> do we have such checks for other things?
<vsyrjala> can't remember tbh
<vsyrjala> i have a feeling we don't generally depend on the current monitor's capabilities for validating property values
<emersion> hmmm
<emersion> i was thinking about stuff like max_bpc or HDCP level
jacobcrypusa[m] has joined #dri-devel
mbrost has joined #dri-devel
<vsyrjala> max_bpc you are certainly free to set to whatever value you want. hdcp I'm not sure, but I can't immediately spot any checks like that at least
<Ristovski> I got a second monitor, now I can run into a whole new domain of bugs :D
<emersion> vsyrjala: hm right
<emersion> and the range of a prop is never changed after init
<emersion> what if you set HDR_STATIC_METADATA on a connector which doesn't support HDR?
mvlad has quit [Remote host closed the connection]
<vsyrjala> i think we might just send the infoframe anyway
<vsyrjala> unless we've detected a dvi sink connected at which point we send no infoframes
pixelcluster has quit [Quit: ZNC 1.8.2+deb2 - https://znc.in]
<mdnavare> Well, in the driver we check whether vrr_capable is set to true and only then accept vrr_enabled for that crtc; otherwise we continue without enabling VRR, but we don't fail the modeset
<mdnavare> vsyrjala: emersion: Do you think we should add a drm warning when vrr_enabled = true while vrr_capable = false, and fail in the driver's atomic check?
<mdnavare> vsyrjala: Currently we don't; userspace needs to look at the drm documentation and know that it needs to first get the vrr_capable prop and then set vrr_enabled = true
<mdnavare> emersion: I agree with "i don't think it makes sense to accept vrr_enabled=1 if all connectors attached to the CRTC have vrr_capable=0", but how can we check this in the drm set property function? We can enforce this check in userspace and possibly fail the modeset in the kernel?
<mdnavare> what do you think is the best way to handle this, emersion, vsyrjala?
<emersion> given what vsyrjala said, not sure it's a good idea to reject the atomic commit
<emersion> if userspace sees a vrr_capable monitor, then tries to set vrr_enabled=1 in an atomic commit, but the connector got disconnected in-between, it's not clear we want to make user-space fail
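To make the debated check concrete, here is a rough sketch of how it could look inside a driver's atomic check. driver_connector_is_vrr_capable() is a made-up stand-in for whatever per-connector capability lookup a driver has, not a real DRM helper; and as vsyrjala points out, returning -EINVAL here changes behaviour userspace may already rely on, so the log-and-ignore variant may be the safer choice.

```c
#include <drm/drm_atomic.h>
#include <drm/drm_connector.h>
#include <drm/drm_crtc.h>

/* Hypothetical stand-in for a driver-specific capability lookup. */
static bool driver_connector_is_vrr_capable(struct drm_connector *connector);

/* Sketch of the debated validation, as a fragment of a driver's
 * atomic_check: reject (or merely warn about) vrr_enabled=1 on a CRTC
 * none of whose connectors report VRR capability. */
static int check_vrr_enabled(struct drm_atomic_state *state,
			     struct drm_crtc *crtc,
			     struct drm_crtc_state *new_crtc_state)
{
	struct drm_connector *connector;
	struct drm_connector_state *conn_state;
	int i;

	if (!new_crtc_state->vrr_enabled)
		return 0;

	for_each_new_connector_in_state(state, connector, conn_state, i) {
		if (conn_state->crtc != crtc)
			continue;
		if (!driver_connector_is_vrr_capable(connector))
			return -EINVAL; /* or: log via drm_dbg_kms() and ignore */
	}

	return 0;
}
```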
Haaninjo has quit [Quit: Ex-Chat]
gouchi has joined #dri-devel
* karolherbst wants to kill off TGSI for good...
<karolherbst> there is no way to figure out how much shared mem a compute shader needs with TGSI, correct?
<karolherbst> Or did I miss something?
<karolherbst> where was the information fetched from before relying on ntt?
<anholt> the compute shader state that you're trying to remove.
ybogdano has joined #dri-devel
<anholt> it was just outside of tgsi
<jekstrand> lina: Yeah, we should chat. I'm on my way back from Dublin at the moment, so not today, but I definitely have opinions about the "right" way to do drm drivers these days.
<karolherbst> anholt: for drivers yes, but where did st/mesa get that info from?
<karolherbst> was that extracted when doing glsl -> tgsi?
<anholt> grab it straight from the gl program, I'd assume.
<karolherbst> I am wondering why nobody bothered to add this info to the TGSI directly... oh well
<karolherbst> I saw that ttn doesn't do shared mem at all, so any leftover TGSI shaders have 0 shared mem anyway
mbrost has quit [Read error: Connection reset by peer]
<karolherbst> and I'd keep that field around only for drivers still consuming TGSI
<karolherbst> the only problem I currently have is that clover also doesn't have that info when calling "create_compute_state"; it only knows what it needs on top of the kernel-declared amount :/
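A hedged sketch of where the number could come from on the NIR path once the TGSI-side field is gone: shader_info::shared_size carries the statically declared shared memory, and the launch-time extra that clover learns from __local kernel arguments would be added on top. The field name is from Mesa's shader_info; the helper itself is made up for illustration.

```c
#include "compiler/shader_info.h"

/* Hypothetical helper: total shared (local) memory for a compute
 * launch. info->shared_size is the statically declared amount from
 * the compiled shader; launch_time_local is the extra that clover
 * only learns when the kernel's __local arguments are bound. */
static unsigned
total_shared_mem(const struct shader_info *info, unsigned launch_time_local)
{
   return info->shared_size + launch_time_local;
}
```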
* jekstrand should really write a "how to design a DRM driver in 2022" blog post...
* karolherbst really wishes load_uniform were always byte-based
<karolherbst> the only issue I have with my current approach is range and range_base, which I guess we could calculate and set...
<anholt> you do have to calculate and set them, drivers like freedreno need it.
<karolherbst> ahh, okay
<karolherbst> I think we actually have that info in the shader_info, but I'd have to check
<karolherbst> I didn't think of that when writing this
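And a sketch of the "calculate and set" step itself, using NIR's generated intrinsic-index setters (which do exist for range and range_base). How the bounds are derived is elided; a real pass would pull them from shader_info or the uniform storage layout, so the arguments here are placeholders.

```c
#include <assert.h>
#include "nir.h"

/* Sketch: stamping range_base/range (byte-based bounds) onto a
 * load_uniform intrinsic, as drivers like freedreno expect. The
 * caller is assumed to have computed the bounds elsewhere. */
static void
set_uniform_range(nir_intrinsic_instr *intr,
                  unsigned base_bytes, unsigned size_bytes)
{
   assert(intr->intrinsic == nir_intrinsic_load_uniform);
   nir_intrinsic_set_range_base(intr, base_bytes);
   nir_intrinsic_set_range(intr, size_bytes);
}
```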
<ajax> emersion: it's on my todo list but somewhat far down, i've got !17604 and some buffer age fixes before it. if you wanted to finish it off i'd be happy to review.
<ajax> (i am, in general, entirely happy to see people pick up what i've neglected)
fab has quit [Quit: fab]
alanc has quit [Remote host closed the connection]
lstrano has joined #dri-devel
alanc has joined #dri-devel
gio has quit [Ping timeout: 480 seconds]
gouchi has quit [Remote host closed the connection]
Simonx22 has quit []
danvet has quit [Ping timeout: 480 seconds]
Ryback_ has joined #dri-devel
kts has quit [Ping timeout: 480 seconds]
ybogdano has quit [Ping timeout: 480 seconds]
Duke`` has quit [Ping timeout: 480 seconds]
ybogdano has joined #dri-devel
frankbinns has quit [Remote host closed the connection]
calebccff_ has joined #dri-devel
italove9 has joined #dri-devel
HerrSpliet has joined #dri-devel
opotin39 has joined #dri-devel
leandrohrb8 has joined #dri-devel
lcn_ has joined #dri-devel
opotin3 has quit [Read error: Connection reset by peer]
calebccff has quit [Read error: Connection reset by peer]
italove has quit [Write error: connection closed]
leandrohrb has quit [Write error: connection closed]
RSpliet has quit [Read error: Connection reset by peer]
lcn has quit [Remote host closed the connection]
HerrSpliet is now known as RSpliet
lcn_ is now known as lcn
Simonx22 has joined #dri-devel
macromorgan has quit [Quit: Leaving]
warpme___ has quit []
bluebugs has quit [Quit: Leaving]
LexSfX has quit []
pcercuei has quit [Quit: dodo]
TMM has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM has joined #dri-devel
stuart has quit [Remote host closed the connection]
stuart has joined #dri-devel
Jeremy_Rand_Talos__ has quit [Remote host closed the connection]
Jeremy_Rand_Talos__ has joined #dri-devel
Jeremy_Rand_Talos__ has quit [Remote host closed the connection]
Jeremy_Rand_Talos__ has joined #dri-devel
LexSfX has joined #dri-devel
LexSfX has quit []
iive has quit [Quit: They came for me...]