ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
jernej has quit [Ping timeout: 480 seconds]
tursulin has quit [Read error: Connection reset by peer]
<agd5f> Peste_Bubonica, what CPU? IIRC, sbios only enables it with certain CPUs by default.
<Peste_Bubonica> agd5f, 5900x, with Agesa
co1umbarius has joined #dri-devel
columbarius has quit [Ping timeout: 480 seconds]
<karolherbst> mhh
<HdkR> Hmmm. What's the way to go about DRI failing to authenticate and falling back to llvmpipe?
<karolherbst> HdkR: LIBGL_ALWAYS_SOFTWARE?
<HdkR> er, go about debugging DRI failing*
<airlied> HdkR: debugging it?
<airlied> LIBGL_DEBUG=verbose
<karolherbst> ahh
<karolherbst> yeah LIBGL_DEBUG might help
<HdkR> "screen 0 does not appear to be DRI2 capable"
<HdkR> wha?
<karolherbst> HdkR: check X logs then?
<karolherbst> could happen if accel fails to initialize I think
<airlied> HdkR: can I bet on some aarch64->x86 layer in the way? :-P
<airlied> it wouldn't be HdkR if there wasn't some translation layer involved
<HdkR> nah, this is without any layers since I'm just trying to get this board running
<HdkR> Looks like glamor is failing to initialize with eglInitialize failing?
<karolherbst> HdkR: well.. it needs to do accel somehow
<karolherbst> HdkR: you could check if you can do GL from a tty
<HdkR> I've never done that to know how
<karolherbst> just run the apps? :D
<karolherbst> I think eglinfo supports it actually
<karolherbst> and SDL2 too if you don't do anything fancy
<airlied> wflinfo --platform=gbm --api=gl
<karolherbst> yeah.. eglinfo works, at least the gbm and wayland bits
<karolherbst> HdkR: fun
<karolherbst> what the heck is 0��U?
<HdkR> what does that even come from?
<airlied> usually the kernel, is this some kind of board with split display/rendering?
<karolherbst> airlied: fun.. wflinfo just uses nouveau here :D
<karolherbst> ohhhh
<HdkR> It's a snapdragon, so... idunno
<airlied> HdkR: should be all msm then
<airlied> assuming you have msm loaded :-P
shfil has quit [Ping timeout: 480 seconds]
<HdkR> msm should be loaded otherwise I wouldn't be getting display I guess
<karolherbst> HdkR: is the lib name always the same?
<karolherbst> it... kind of looks like a weirdly aligned ptr in hex
<airlied> probably down to using strace,
<karolherbst> probably
<HdkR> lib name is always
<karolherbst> HdkR: huh? I thought you are getting this weirdo thing
<HdkR> oh, I thought you meant the real lib
<karolherbst> I am more curious if the value changes or if it's the same.. but anyway, I guess checking with strace actually helps here
<HdkR> yes, value changes
<karolherbst> so.. random garbage memory or a pointer
<karolherbst> HdkR: I guess it always starts with a 0?
<karolherbst> anyway.. doesn't matter
<karolherbst> random values are random
<HdkR> Is it supposed to pull the name from DRM_IOCTL_VERSION?
<airlied> one place it pulls it from I think
<airlied> yeah on non-pci platforms it should come from there
<airlied> loader_get_kernel_driver_name
<HdkR> Curious, my little test application that pulls the ioctl_version struct gets the name correctly
<karolherbst> HdkR: mhh I guess it's time to open gdb then?
<karolherbst> HdkR: ohh.. you check if forcing a driver at least works :D
<karolherbst> mhh is that even possible?
<HdkR> That doesn't seem to change behaviour
<karolherbst> HdkR: you might want to debug loader_get_driver_for_fd
<HdkR> huh. Walking the ioctls, it looks like msm_dri gets run for a bit
<HdkR> Last ioctl coming from fd_has_syncobj, then one more dri2_init_screen and then error messages
<HdkR> oh uh. hm
<HdkR> Rebuilt mesa with clang instead of gcc and it changed wflinfo behaviour
<airlied> HdkR: lto?
<HdkR> Does mesa enable that by default?
<airlied> not that I know of
<HdkR> Then it shouldn't be enabled since I don't have that in my config
<robclark> HdkR: generic recommendation (before reading all the scrollback): try kmscube?
<HdkR> I'll give that a whirl
libv_ has joined #dri-devel
<HdkR> Looks like kmscube works...Once I reinstalled a built mesa with clang again. Seems like I'm getting partial filesystem corruption
<karolherbst> HdkR: that would be annoying
<HdkR> That it would be
<robclark> so if building on random dev board.. check `date` (ie. they tend to not have battery-backed rtc which can cause all sorts of build lolz)
<karolherbst> HdkR: I hope you are not using ccache :D
libv has quit [Ping timeout: 480 seconds]
<HdkR> Lemme just blow away that cache real quick :P
<HdkR> robclark: Luckily this one definitely updated with ntp before I changed anything
<karolherbst> but every time I blame ccache there is this one person telling me that it's not ccache's fault, with the result that it was indeed ccache's fault
<karolherbst> :P
<robclark> IME it is *always* ccache's fault ;-)
<karolherbst> bonus points if you are able to reproduce the problem :D
<robclark> (not in the sense that it causes problems often, but in the sense that they are really hard to figure out, reproduce, etc)
<karolherbst> yeah..
<karolherbst> the last one we hit was a user switching git commits
<robclark> ccache is basically a fail multiplier ;-)
<karolherbst> so we were lucky as we could reliably reproduce it
<karolherbst> the problem is just that you forget you are using it and always blame something else :D
<robclark> true.. but you blame about five something-else's before you figure it out :-P
Peste_Bubonica has quit [Quit: Leaving]
<karolherbst> well not me as I am not using ccache
<HdkR> TIL about debsums though. That was nice to verify the rest of my packages aren't corrupt at least
<DrNick> ccache is the ultimate in reproducible builds, it reproduces them even when you don't want it to
<robclark> :-P
<karolherbst> HdkR: lol
<karolherbst> you think that
<karolherbst> although I hope that debsums works correctly :D
<HdkR> :D
<karolherbst> the last ccache bug was just pure evil
<robclark> ohh, hash collision.. fun
<HdkR> oh jeez
jernej has joined #dri-devel
jernej has quit [Remote host closed the connection]
jernej has joined #dri-devel
<karolherbst> "I can reproduce the ccache problem at will now. Clean ccache store. switching between these 3 commits causes it to crash: commit c9d1569689b5dc636daba941dc44f8a573e37309 commit 98934e6aa19795072a353dae6020dafadc76a1e3 commit f9b29c4a5862d7caeab89e8a3bd23f3275dbd9a6"
<karolherbst> soo.. yeah..
<karolherbst> it's annoying :D
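The failure mode karolherbst quotes (clean cache store, three specific commits, reproducible breakage) is the hash-collision class robclark flags above. A deliberately broken toy cache, purely illustrative and unrelated to ccache's real keying, shows why a collision surfaces as "the wrong object keeps coming back":

```python
import hashlib

class ToyCompileCache:
    """Deliberately buggy content-addressed cache: the key ignores the
    compiler flags, so two different build configurations collide."""

    def __init__(self):
        self._store = {}
        self.real_compiles = 0

    def _key(self, source, flags):
        # BUG on purpose: `flags` never reaches the hash, so "-O0" and
        # "-O2" builds of the same source map to one cache slot.
        return hashlib.sha256(source.encode()).hexdigest()

    def compile(self, source, flags):
        key = self._key(source, flags)
        if key not in self._store:
            self.real_compiles += 1  # only the first build really runs
            self._store[key] = f"obj({source!r}, {flags!r})"
        return self._store[key]

cache = ToyCompileCache()
a = cache.compile("int x;", "-O0")
b = cache.compile("int x;", "-O2")  # collides: the -O0 object comes back
```

This is the "fail multiplier" effect: one bad cache entry gets replayed on every subsequent build until the store is wiped, which is why switching between particular commits could reproduce the crash at will.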
<HdkR> Okay, kmscube works and wflinfo works
<HdkR> Now to figure out why X hates
<robclark> HdkR: did X manage to use glamor? If X somehow falls back on sw then mesa on client side isn't going to go well..
<HdkR> lol, "glGetString() returned NULL, your GL is broken"
<robclark> *sad trombone*
<karolherbst> glGetString is not allowed to return NULL?
<HdkR> Only on error
<karolherbst> well..
<karolherbst> where error means wrong enum used? :P
<karolherbst> but yeah.. a little weird
<HdkR> Looks like this might be a weird interaction between libopengl and libgl
nchery has quit [Ping timeout: 480 seconds]
boistordu has joined #dri-devel
cphealy has quit [Remote host closed the connection]
boistordu_ex has quit [Ping timeout: 480 seconds]
camus has joined #dri-devel
gpoo has quit [Remote host closed the connection]
<HdkR> ...Deleted and things worked
<HdkR> I'll just assume libglvnd is broken on Ubuntu AArch64 and continue on with my life
<robclark> HdkR: IME libglvnd works on fedora aarch64 so I think libglvnd itself is sound.. can't speak for the distro specifics in your case
<HdkR> It was also working a couple of days ago
Company has quit [Quit: Leaving]
<HdkR> There we go, Steam on Snapdragon 888
<Kayden> are d3d12-windows tests failing for folks?
<Kayden> or am I just getting unlucky
Company has joined #dri-devel
nchery has joined #dri-devel
Company has quit []
<robclark> HdkR: \o/
nchery has quit [Remote host closed the connection]
<HdkR> Cortex-X1 plus no longer constantly running out of ram just means that everything is quite a bit faster
idr has quit [Quit: Leaving]
cphealy has joined #dri-devel
thellstrom1 has joined #dri-devel
thellstrom has quit [Read error: Connection reset by peer]
mbrost_ has joined #dri-devel
mbrost has quit [Ping timeout: 480 seconds]
<cwfitzgerald> does anyone know the magic package to get llvmpipe on ubuntu
mbrost_ has quit [Ping timeout: 480 seconds]
<airlied> should just be in the default mesa drivers package
<airlied> LIBGL_ALWAYS_SOFTWARE=1 should enable it
<airlied> libgl1-mesa-dri is possibly the name
mbrost_ has joined #dri-devel
libv has joined #dri-devel
libv_ has quit [Ping timeout: 480 seconds]
Duke`` has joined #dri-devel
<cwfitzgerald> ah! that did the trick
slattann has joined #dri-devel
slattann has quit []
<jenatali> Kayden: Got a link?
<Kayden> this is from one of Marek's builds, but it looks similar:
<Kayden> one of my branches failed 3 times in a row with dlist-fbo3129-02 mentioned in the log
<Kayden> but, enough retries and it seemed to succeed
<Kayden> maybe that test is flaky?
<Kayden> (mine did merge eventually on the 4th retry)
<jenatali> Interesting... seems it is flaky. I saw that fail for the first time as part of
<Kayden> hmm :(
<jenatali> Agreed. Guess I'll add it to the skip list
slattann has joined #dri-devel
<Kayden> I do get some valgrind errors from that test
<Kayden> though they don't look related to !11776
<jenatali> I wonder if something slipped into main already
<jenatali> Maybe for whatever reason the Windows heap allocator's just more sensitive to it?
<Kayden> there was a recent rework to vbo dlist handling...
<Kayden> I'll try an older sha and see if the valgrind errors are new
<jenatali> Cool. If it is getting noisy and we're not able to find a fix, we can skip it for the Windows CI to keep it from blocking people, but this does seem indicative of a problem that should be fixed and not swept under the rug
<Kayden> yep, valgrind errors are new!
<Kayden> something snuck in :)
jewins has quit [Remote host closed the connection]
<jenatali> Not sure if I should be excited or sad :P
<Kayden> *shrug*: yes :)
<jenatali> Cool. That's as much as I can help for tonight, but ping me if there's something else I can do
<Kayden> thank you!
heat has quit [Ping timeout: 480 seconds]
<anholt_> I saw a couple of flakes with that dlist-fbo test on windows and v3d recently, too
itoral has joined #dri-devel
pnowack has joined #dri-devel
lemonzest has joined #dri-devel
mbrost_ has quit [Ping timeout: 480 seconds]
<Kayden> anholt_: the valgrind stuff is definitely bisecting to pepp's dlist/vbo rework
<Kayden> going to file an issue once I've got a precise commit
<Kayden> presumably that just landed in close proximity to your series, and yours was fine
thellstrom has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
mlankhorst has joined #dri-devel
thellstrom1 has quit [Ping timeout: 480 seconds]
Hi-Angel has joined #dri-devel
anickname has joined #dri-devel
danvet has joined #dri-devel
frieder has joined #dri-devel
sdutt has quit [Remote host closed the connection]
anickname has quit []
<pepp> Kayden: I'll take a look today
rasterman has joined #dri-devel
ramaling has quit []
Ahuj has joined #dri-devel
slattann has quit []
tarceri has quit [Read error: Connection reset by peer]
tarceri has joined #dri-devel
rgallaispou has joined #dri-devel
<Kayden> cool - thank you :)
tursulin has joined #dri-devel
<Kayden> thanks for speeding all of that code up!
slattann has joined #dri-devel
ramaling has joined #dri-devel
cef has quit [Remote host closed the connection]
cef has joined #dri-devel
shankaru has quit [Remote host closed the connection]
shfil has joined #dri-devel
pcercuei has joined #dri-devel
shankaru has joined #dri-devel
NiksDev has quit [Ping timeout: 480 seconds]
hch12907_ has joined #dri-devel
hch12907 has quit [Ping timeout: 480 seconds]
<mlankhorst> karolherbst: you wait for drm to update and pull from it?
mrsiam has joined #dri-devel
<mlankhorst> karolherbst: can also bug the drm maintainers if you need it updated. :)
mrsiam has quit []
jhli has quit [Read error: Connection reset by peer]
<airlied> mlankhorst: should all be on 5.15-rc1 now
boistordu has quit [Remote host closed the connection]
<mlankhorst> do I send one more pull req or backmerge rc1?
itoral has quit [Remote host closed the connection]
<pinchartl> danvet: in the context of the discussion about memory-to-memory processing and in which subsystem it belongs, I have a question for you
<pinchartl> traditionally, for devices that take a live input and write it to memory, we use V4L2, and for devices that read from memory and output, we use DRM/KMS
<pinchartl> for memory-to-memory devices both subsystems are used, depending on where you come from
<pinchartl> but I was wondering about devices that take a live input and have a live output, without any memory involved
<pinchartl> where would you address that ?
<pinchartl> for instance a pipeline with cameras at the input and an ethernet connection at the output
<pinchartl> all in hardware
alyssa has left #dri-devel [#dri-devel]
boistordu_ex has joined #dri-devel
<mlankhorst> dma-buf usually
<mlankhorst> oh misunderstood the memory part
<danvet> pinchartl, maybe special v4l thing?
<mlankhorst> I would say it's a v4l2 device in this case, since the important part here is the input not output. :)
<danvet> pinchartl, if it's scanout direct to ethernet it might also be something really funny in kms
<pinchartl> depends who you ask, there are people who consider the output more important than the input :-)
<mlankhorst> Yeah but camera needs configuring, setting framerate etc, it's not an ethernet device, hence v4l2. :)
<pinchartl> another case is camera-to-display without any memory in the middle
<pinchartl> display needs configuring, setting mode etc... ;-)
<mlankhorst> I'd say v4l2 here, if the display is not used by the kernel for other things
<mlankhorst> otherwise v4l2 device + overlay plane
<pinchartl> there are hybrid use cases of course, with a camera capture device and a display device, both with memory interfaces, but also a direct link
<pinchartl> hardware designers are very imaginative
<pinchartl> the case I need to figure out now is camera-to-ethernet, and I'll likely go the V4L2 way
<pinchartl> a V4L2 control to configure an IP address will be intersting :-)
<mlankhorst> Yeah but you can short-circuit that with dma-buf, just make the ops struct be recognised by both drivers, then it could do magic
<pinchartl> that's nasty. but interesting :-)
<mlankhorst> It's sort of allowed for that reason, of course exporting to gpu for 3d processing still needs to work like normal
<pq> I kinda feel there should be a different object than dmabuf for connecting the dots where stuff is live rather than stored in memory, it's kinda fundamentally a different concept for userspace
<mlankhorst> We just made dma-buf the easy way to share things between devices, if uapi knows the difference for some driver-specific dma-buf, you won't need a new uapi
<pinchartl> pq: I agree
<pinchartl> it could still be an fd
<pinchartl> but not a dmabuf fd
<pinchartl> something for later...
<mlankhorst> It's meant for that actually
<mlankhorst> Some gpu memory may not be accessible over the pci bus, but you can still make a dma-buf from it
<pinchartl> can a dmabuf omit the map/unmap operations ?
<mlankhorst> From mmap: This callback is optional.
<karolherbst> mlankhorst: no, I meant drm-misc specifically :) Want to apply some fixes, but they depend on 5.14-rc7
<mlankhorst> karolherbst: I'll backmerge rc1, just a sec.
<mlankhorst> What are the fixes called roughly so I can apply it as rationale for the backmerge?
<karolherbst> not applied yet, but the patches are: I am still unsure about patch 1 going into fixes, but patch 2 should definitely get applied
<karolherbst> fixes a commit which went into rc7
<karolherbst> 1 just conflicts
<mlankhorst> danvet/airlied: Can you forward drm-next to a new base for this? ^
<mlankhorst> or drm-fixes I suppose
<danvet> on it
<airlied> can someone fixup misc fixes?
<airlied> the kmb patch
<airlied> maybe rebase
<danvet> airlied, just drop it and let tzimmermann/mlankhorst figure it out?
<danvet> well reply to the pull that you won't take it
<danvet> airlied, rolling -fixes to -rc1 right now
<danvet> so folks can backmerge
<airlied> danvet: just checked, i thought i pushed it
<danvet> airlied, nope
<danvet> fast-forward didn't complain, nor did dim push
<danvet> so pretty sure I didn't overwrite your push
<airlied> i assume my machine is sitting at some prompt :-p
<danvet> yeah, probably the "are you sure you want to push this much" prompt
<danvet> or a random Kconfig
NiksDev has joined #dri-devel
Peste_Bubonica has joined #dri-devel
<pq> mlankhorst, access, fencing, and content expectations are totally different for live vs. buffers though. You can't sync a live stream to anything, or it needs to carry some kind of (time)stamping and then you need to know what that means.
<pq> no way to pause a live stream either, you can only discard if you can't handle
X-Scale` has joined #dri-devel
X-Scale has quit [Ping timeout: 480 seconds]
slattann has quit []
Surkow|laptop has quit [Remote host closed the connection]
Surkow|laptop has joined #dri-devel
cef has quit [Quit: Zoom!]
cef has joined #dri-devel
<mlankhorst> ah right
<mlankhorst> I'd say v4l2 then
<mlankhorst> v4l2 planes on a drm device would be funny
vivijim has joined #dri-devel
<pq> EGLStreams funny?
<pq> I'll show myself out...
Company has joined #dri-devel
xexaxo has quit [Ping timeout: 480 seconds]
sdutt has joined #dri-devel
sdutt has quit []
sdutt has joined #dri-devel
The_Company has joined #dri-devel
camus has quit []
camus has joined #dri-devel
The_Company has quit [Read error: No route to host]
The_Company has joined #dri-devel
Company has quit [Ping timeout: 480 seconds]
dllud_ has joined #dri-devel
dllud has quit [Read error: Connection reset by peer]
dllud has joined #dri-devel
dllud_ has quit [Read error: Connection reset by peer]
xexaxo has joined #dri-devel
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
Ben has joined #dri-devel
Ben is now known as Guest7224
xexaxo has quit [Ping timeout: 480 seconds]
iive has joined #dri-devel
xexaxo has joined #dri-devel
<zmike> ccr: I've been testing your trace stuff a lot more lately and it seems pretty good
<zmike> if you wanna put it up in a MR I'll merge it
zackr has joined #dri-devel
frieder has quit [Remote host closed the connection]
xexaxo has quit [Ping timeout: 480 seconds]
mattrope has joined #dri-devel
Duke`` has joined #dri-devel
nchery has joined #dri-devel
<mripard> sravn: thanks for your reviews :)
dllud_ has joined #dri-devel
dllud has quit [Read error: Connection reset by peer]
mbrost has joined #dri-devel
padovan has joined #dri-devel
shfil has quit [Ping timeout: 480 seconds]
NiksDev has quit [Ping timeout: 480 seconds]
kallisti5[m] has joined #dri-devel
Peste_Bubonica has quit [Remote host closed the connection]
<mdnavare> airlied: Are there any GL apps that I can launch through Gnome GUI (right click and select launch with dedicated Graphics card) ?
vbelgaum has joined #dri-devel
JoshuaAshton_ has quit []
JoshuaAshton has joined #dri-devel
vbelgaum has quit []
idr has joined #dri-devel
vbelgaum has joined #dri-devel
<ccr> zmike, if you mean the "print strings instead of plain numbers for enums in dumps" stuff, sure .. I had the intention of recoding the Perl script I made for the purpose into Python, to conform with what is mostly used in Mesa, but all this kinda went into background as I didn't hear more from you. :)
<zmike> ccr: I didn't use trace stuff for a while
<zmike> but I've been using it a lot lately
<ccr> no worries, suspected as much, but wasn't sure if it was worth pursuing
<ccr> zmike, in any case, can do. might take a while, I have some other things to do at the moment and I want to do the Perl -> Python recode of the script. tbh it's a simple piece of code, but at least it may help remove some manual work in future if/when new enum values are added.
<zmike> no rush
<zmike> more curious about that naming mismatch in the dump I posted
<ccr> what where? :)
<zmike> on the trace ticket
* ccr looks
<vbelgaum> @Lyude, did you have questions regarding SLPC? Had issues connecting to IRC yesterday.
<ccr> zmike, without taking a look at the dump I'd _guess_ it may be the situation I dreaded earlier e.g. memory (re-)allocation results in exact same pointer which has been already labeled previously. but I'll take a look at that when I can. maybe some kind of glue can be added to retire freed pointers or something. not sure how feasible it is tho.
<zmike> hm
<Venemo> idr: about MR !12802 I wanted to ask one more thing before giving my r-b rubber stamp
<zmike> yea makes sense
<ccr> of course it could be something else, but discounting actual bugs I'd say that is probably the case there
<Venemo> idr: I see you already gave yourself my r-b and assigned to marge...
slattann has joined #dri-devel
<Venemo> what I wanted to ask is, is the foreach necessary? AFAIK the end block always has only one predecessor
<zmike> ccr: re: pointer matching, can't you just tag pointers based on their name to avoid this?
<zmike> i.e., if you see a pointer with name "XYZ" then it uses XYZ as its name until you see it with a new name
<zmike> and then it uses the new name
<zmike> that should be enough to avoid issues with lifetimes
Daanct12 has quit [Remote host closed the connection]
gouchi has joined #dri-devel
Daanct12 has joined #dri-devel
<ccr> I think my original modus of thinking was that there are cases where we can't infer a name for a pointer, so I chose to retain the first we can and apply it everywhere, including what came before. to put it other way: because of how the dumps are, it may not be possible to know "name" of the pointer until at later point in the dump.
<ccr> this of course makes it problematic to do what you are suggesting
<ccr> the only way I can see to resolve this completely would be to introduce the naming logic into the aux/trace dump logic itself, tagging the pointers there because there we certainly would know what each thing is at whatever point in time
<ccr> as currently the naming is done post-facto in the dump processing scripts, so we have to do this kind of "magic" as we haven't got all the info
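zmike's suggestion above (a pointer wears the name it was last seen with, and a new name retires the old label) can be sketched as a single forward pass over parsed dump records. The `(ptr, name, op)` event shape here is invented for illustration and is not the actual aux/trace dump format:

```python
def label_pointers(events):
    """events: iterable of (ptr, name, op) tuples, a made-up stand-in
    for parsed trace-dump records; name is None when a record carries
    no naming information. A pointer keeps the last name it was seen
    with, so a reallocation that reuses the same address simply
    retires the old label instead of mislabeling new objects."""
    current = {}   # ptr -> most recently seen name
    labelled = []
    for ptr, name, op in events:
        if name is not None:
            current[ptr] = name  # new name retires the old label
        # records before the first naming record fall back to raw hex
        labelled.append((current.get(ptr, hex(ptr)), op))
    return labelled
```

The trade-off ccr describes remains visible: records seen before the first naming record keep a raw hex label, because post-facto processing cannot know the name yet; tagging pointers in the dump writer itself would remove that blind spot.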
NiksDev has joined #dri-devel
<idr> Venemo: Oh. :( I took "Sounds good!" as Rb.
<idr> CI is still running, so it can be canceled.
<ccr> zmike, anyway, it's also possible that my thinking is flawed :)
<Venemo> idr: no need to cancel, I was gonna r-b it, just wanted to clarify whether the foreach is needed
<idr> Venemo: I was basically copying the structure of append_set_vertex_and_primitive_count.
<idr> I think if either thing needs the foreach, then they both do.
<Venemo> last time I was talking about this with jekstrand he told me that there is only 1 predecessor to the end block, I think. but maybe i misunderstood?
<idr> It's possible... We'd have to ask Kayden if he remembers a reason for doing that way in the first place.
<Venemo> Kayden: pingy?
<idr> But it was years ago, so it's likely that any such memory may have been moved to offline storage. :)
slattann has quit []
mlankhorst has quit [Ping timeout: 480 seconds]
<Kayden> I don't remember, unfortunately.
<Kayden> Looking in the code, almost everywhere does set_foreach
<Kayden> Jason's new lower calls pass does assert that there's only 1, but only in some cases
<Kayden> I feel like if that were true, nir_validate should validate it
<Kayden> but return and halt both make their block have end_block as a successor
rsalvaterra_ has joined #dri-devel
<Kayden> so I would think that end_block would have any blocks containing return or halt as predecessors, as well as the natural end of the program
<Kayden> so I think end_block can have many predecessors.
<Kayden> we may lower that all away eventually..
rsalvaterra has quit [Ping timeout: 480 seconds]
<jekstrand> Venemo: If returns have been lowered AND there are no halt, then yes, there is one predecessor to the end block.
Ahuj has quit [Ping timeout: 480 seconds]
jhli has joined #dri-devel
<idr> Okay... I think it's safer for nir_lower_gs_intrinsics to not depend on return lowering having happened.
<sravn> pcercuei: Can I get your a-b on "abt-y030xx067a yellow tint fix"? I will add Fixes: while applying. Or maybe you take it?
<sravn> mripard: You are welcome. I have at least one series from you pending that I hope to find time to look at
moa has joined #dri-devel
tobiasjakobi has joined #dri-devel
<pcercuei> sravn: sure. Please also tweak the commit message with proper case and no space before the colon
bluebugs has quit [Ping timeout: 480 seconds]
tobiasjakobi has quit [Remote host closed the connection]
<Venemo> jekstrand: awesome, that explains it
moa is now known as bluebugs
pnowack has quit [Quit: pnowack]
<Lyude> vbelgaum: I had mainly been wondering if the GuC was capable of doing upclocking/downclocking with less latency then trying to do it from the CPU
<Lyude> since I found another issue with RPS waitboosting not boosting the GPU when it should on gnome-shell (will try to file a bug report for it today)
<vbelgaum> Lyude: good question. We have not measured latency of GuC freq management vs. host as yet. Waitboosting is not yet enabled with SLPC, that is WIP. Did you see the issue with legacy Turbo or SLPC?
<Lyude> oh - uh, I'm not sure what SLPC is? I thought SLPC might have been some waitboost thing I didn't know the name of
<Lyude> but yes - waitboosts don't work very well right now, and it's definitely not the first time this has happened. and interestingly enough, running `stress -c 1` in the background is enough to fix it because it raises the CPU frequency
<vbelgaum> SLPC is Single Loop Power Control - fancy name for GuC doing the freq management (as well as RC6) instead of host
<Lyude> ahhh - yeah, the reason I figured it might help is because nvidia has a very similar scheme for doing power management (a dedicated PMU on the GPU that handles things due to the lower latency)
<imirkin> Lyude: also due to it being difficult to control stuff from the CPU when you disconnect the VRAM :)
<Lyude> imirkin: likely another reason yeah :P
<vbelgaum> GuC based freq management has been enabled on tip for Gen12+. What platform are you seeing the waitboost issues?
<Lyude> vbelgaum: WHL, but in the past when I had to debug this I also ran into waitboost issues with KBL and SKL
<Lyude> we fixed it once before but iirc ickle later came up with different changes because I think those fixes may have introduced higher power consumption
<Lyude> (also sorry I didn't catch this earlier ickle , didn't even notice this until I switched from my KBL machine to this WHL one)
<Lyude> I -do- wonder if trying to use a different waitqueue with a higher priority for scheduling rps work could help
anusha has joined #dri-devel
libv has quit [Read error: Connection reset by peer]
libv has joined #dri-devel
<vbelgaum> yeah, WHL is still using legacy Turbo. Would like to see more details on this issue, but does the boost not happen at all or just after a delay? What is the primary WL that is being run? Waitboost typically happens when a WL has not gotten a chance to run yet
pnowack has joined #dri-devel
<Kayden> vbelgaum: oh? SLPC is enabled by default on drm-tip even on tigerlake?
<vbelgaum> Kayden: nope, only gen12+ where guc submission is ON by default
<Kayden> tigerlake is gen12...
<Kayden> ah, but guc is off by default there, I see
<vbelgaum> yup
nchery has quit [Remote host closed the connection]
<vbelgaum> ADL is the only Gen12+ with guc submission ON so far, afaics
<Lyude> vbelgaum: if you want me to do any comparisons I should have access to ADL
<airlied> zackr: is DRM_VMWGFX_MKSSTATS something distros would want to enable? just wondering why it's optional
<vbelgaum> Lyude: would be interesting to see how ADL does, however, we don't have waitboost there yet, so we can't have an apples to apples comparison. This is assuming you are running 2 WLs and hoping the blocked one will trigger waitboost?
<sravn> pcercuei: fixing subject goes without saying. This is -fixes material and it seems drm-misc-fixes is not yet up to the game. So I have it saved away for that
<Lyude> vbelgaum: "WL"?
<vbelgaum> WL = workload
<Lyude> vbelgaum: well I've been testing this with just gnome-shell, if that's what you meant
<Lyude> unless you meant the stress -c 1 thing - which I think likely mostly just works because it results in raising the CPU frequency, which in theory could be giving it more time to actually perform an RPS waitboost on the GPU
rasterman has quit [Quit: Gettin' stinky!]
* Lyude will bbl, gotta go vote!
rasterman has joined #dri-devel
shfil has joined #dri-devel
anusha has quit []
gouchi has quit [Remote host closed the connection]
bluebugs has quit [Remote host closed the connection]
nchery has joined #dri-devel
rasterman has quit [Quit: Gettin' stinky!]
gouchi has joined #dri-devel
cedric has joined #dri-devel
alyssa has joined #dri-devel
<alyssa> hrm. monitor takes >500ms to modeset causing wait_for_vblanks to timeout.
<alyssa> smells like I'm doing something wrong
<alyssa> (but... I don't want swaps to come in during that time!)
<imirkin> alyssa: ancient CRT?
<alyssa> no...?
<imirkin> i feel like some of those old monitors could take a very long time to change modes
<imirkin> DP can actually also take a while esp if link training goes poorly
mbrost has quit [Read error: Connection reset by peer]
mbrost has joined #dri-devel
lemonzest has quit [Quit: WeeChat 3.2]
cedric is now known as bluebugs
<Lyude> yeah - if the screen comes up at all that's a start, there's a lot of stuff with things like DP that can be done to further optimize things
libv has quit [Read error: Connection reset by peer]
<Lyude> alyssa: how far have you gotten btw? I'm quite curious seeing as I know far more about display then I do the render side of things :)
<airlied> alyssa: does the monitor take >500ms or does the DCP take it :-)
libv has joined #dri-devel
<alyssa> airlied: Unclear. It's internally DP but with an HDMI cable and an active DP->HDMI chip in the M1
<alyssa> so lots of places for things to be slow
<Lyude> ahh, yeah, you've potentially got LTTPR to deal with as well in that case
<alyssa> Lyude: as for progress -- swaps work, i'm working through mode set hell, that's it
<Lyude> nice!
<Lyude> I've considered getting an M1, but am hesistant until I know I actually would have the time to do something with it
<alyssa> do not recommend have no time anymore ;-p
<alyssa> anyway, I figure it's normal for a monitor to take upwards of 1s to do a full modeset
<Lyude> alyssa: what do you mean by the first statement?
<Lyude> alyssa: yeah - btw, if you would find access to the DP spec helpful for any of this let me know. if you're a member (if not, you can just sign up, it takes very little time) you can get access to it through VESA
<alyssa> Lyude: right so then my question is, how do I avoid the vblank timeout?
<alyssa> unless I'm supposed to be putting through vblanks when mode setting? but I can't take new swaps while modesetting
<Lyude> alyssa: iirc you usually disable vblank interrupts when a pipe is going down or coming up, up until a certain point. in which case the drm helpers are supposed to approximate the missed timestamps
<alyssa> hum ok
<alyssa> that.. makes sense
<alyssa> Also, where is the modeset supposed to actually happen?
<alyssa> and likewise the swap?
<alyssa> I'm doing it all in atomic_flush and sending the vblank when it's all done
<alyssa> but I suspect that's wrong
<Lyude> alyssa: I think that might be right? one sec
<Lyude> alyssa: yeah - atomic flush is where you're supposed to do things, there's other callbacks for types of setup specific to certain DRM objects (like drm_plane_helper_funcs, drm_crtc_helper_funcs) so that you can just leave ordering certain things up to DRM. I'd definitely take a look at some of the kdocs for the helper structs btw, they go into detail on what they're all for
<alyssa> ok, thought so
anusha has joined #dri-devel
libv_ has joined #dri-devel
<alyssa> made the timeout 3s and stuff is still broken so guess this is just a regular bug and not a race. phew
libv has quit [Ping timeout: 480 seconds]
gouchi has quit [Remote host closed the connection]
shfil has quit [Ping timeout: 480 seconds]
danvet has quit [Ping timeout: 480 seconds]
shfil has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
<vbelgaum> Lyude: From what I understand, waitboost happens when one workload is running and the second one (maybe lower priority or dependent workload) does not get a chance to run, so we request RP0 so that the first one completes quickly. I am not sure if this is the scenario you are trying to test?
<Lyude> vbelgaum: no it's not, the issue I'm seeing in particular is that gnome-shell animations go from 60fps, to stuttering, back to 60fps - which appears to be the result of the GPU downclocking or not upclocking fast enough despite being busy
<vbelgaum> Lyude: ah, in that case, it might be interesting to run this on ADL and see if it does any better with GuC based freq management
thellstrom1 has joined #dri-devel
thellstrom has quit [Remote host closed the connection]
rasterman has joined #dri-devel
Guest7224 has quit [Remote host closed the connection]
shfil has quit [Ping timeout: 480 seconds]
Hi-Angel has quit [Ping timeout: 480 seconds]
ddavenport has joined #dri-devel
iive has quit []
rasterman has quit [Quit: Gettin' stinky!]
libv has joined #dri-devel
<pcercuei> sravn: don't see it as a fix, but as an improvement ;)
pcercuei has quit [Quit: dodo]
libv_ has quit [Ping timeout: 480 seconds]
<airlied> jekstrand, bnieuwenhuizen : is that missing a barrier?
ddavenport has quit [Remote host closed the connection]
<bnieuwenhuizen> depends, what ordering guarantees do you expect?
<airlied> bnieuwenhuizen: the test result seems to depend on the first atomic exchange completing
<bnieuwenhuizen> pretty sure that if you want to preserve order across differnet lanes that is going to require a barrier
<bnieuwenhuizen> remember that a subgroup size of 1 is also still valid so these threads could be completely independent
<airlied> bnieuwenhuizen: yeah I'm hitting on lavapipe, where I run one invocation per row
<airlied> I've filed an issue on CTS so hopefully it'll get fixed up
<alyssa> oh it's a race. how.. vibrant.
<alyssa> er. a race we always lose.
<alyssa> message handler calls drm_kms_helper_hotplug_event which calls drm_client_modeset_commit which calls drm_atomic_helper_wait_for_vblanks
tursulin has quit [Read error: Connection reset by peer]
<alyssa> but the vblank event comes in from a message handler
<alyssa> and the rtkit architecture only handles one message at a time
<alyssa> so it's /not safe/ to call drm_kms_helper_hotplug_event from a message handler at all
<alyssa> sven: ^ woof.
<alyssa> i guess in the meantime i can use polling
<alyssa> sven: not sure if this is an inherent limitation of the mailbox iface
<Lyude> alyssa: could you be racing with fbcon?