ChanServ changed the topic of #wayland to: https://wayland.freedesktop.org | Discussion about the Wayland protocol and its implementations, plus libinput | register your nick to speak
dcz_ has quit [Ping timeout: 480 seconds]
<wlb> wayland Issue #266 opened by () Clarifications around wl_surface.frame and opaque subsurfaces https://gitlab.freedesktop.org/wayland/wayland/-/issues/266
chenshijhie has joined #wayland
rasterman has quit [Quit: Gettin' stinky!]
columbarius has joined #wayland
co1umbarius has quit [Ping timeout: 480 seconds]
fmuellner has quit [Ping timeout: 480 seconds]
remanifest has quit [Remote host closed the connection]
remanifest has joined #wayland
<riverdc> emersion: thanks for the "Writing a Wayland Rendering Loop" blog post and example code, it's quite helpful for getting started
<riverdc> there's one thing I'm a bit confused about: it seems that clients are recommended to wait for either frame callbacks or presentation callbacks to do their drawing. but neither of these seems satisfactory for something like a game, where drawing can be expensive and you also care about input latency.
<riverdc> if you wait on presentation callbacks, then you don't get enough time to hit the next deadline. but if you wait for frame callbacks, there will likely be some input latency.
<riverdc> am I understanding correctly? this is based on reading this discussion https://lists.freedesktop.org/archives/wayland-devel/2016-March/027465.html
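A minimal sketch of the frame-callback-driven redraw loop riverdc is asking about, in C against libwayland; draw() is a hypothetical helper standing in for rendering, attaching and damaging a buffer:

#include <wayland-client.h>

static void draw(struct wl_surface *surface)
{
    /* hypothetical: render into a wl_buffer, then wl_surface_attach()
     * and wl_surface_damage_buffer() */
}

static void frame_done(void *data, struct wl_callback *cb, uint32_t time_ms);

static const struct wl_callback_listener frame_listener = {
    .done = frame_done,
};

static void frame_done(void *data, struct wl_callback *cb, uint32_t time_ms)
{
    struct wl_surface *surface = data;
    wl_callback_destroy(cb);

    /* Request the next tick before committing, so the compositor
     * keeps driving this loop. */
    struct wl_callback *next = wl_surface_frame(surface);
    wl_callback_add_listener(next, &frame_listener, surface);

    draw(surface);
    wl_surface_commit(surface);
}

/* To start: request the first callback the same way, draw once, commit. */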
remanifest has quit []
remanifest has joined #wayland
zebrag has quit [Quit: Konversation terminated!]
boistordu has joined #wayland
cvmn has joined #wayland
mvlad has joined #wayland
hardening has joined #wayland
<wlb> wayland/main: Mikhail Gusarov * doc: Clarify that null terminator is included in string length https://gitlab.freedesktop.org/wayland/wayland/commit/eca836add596 doc/publican/sources/Protocol.xml
<wlb> wayland Merge request !197 merged \o/ (doc: Clarify that null terminator is included in string length https://gitlab.freedesktop.org/wayland/wayland/-/merge_requests/197)
dcz_ has joined #wayland
maxzor_ has quit [Ping timeout: 480 seconds]
cvmn has quit [Ping timeout: 480 seconds]
<emersion> riverdc: with presentation-time you can design a system to draw as close to the deadline as possible
<dottedmag> emersion: "possible" being the jitter introduced by non-RT scheduling, right?
<emersion> well, also need to account for the game logic cycles not necessarily taking the same time each frame
cvmn has joined #wayland
chenshijhie has quit [Remote host closed the connection]
GoGi has quit [Quit: GoGi]
GoGi has joined #wayland
ecloud has quit [Ping timeout: 480 seconds]
ecloud has joined #wayland
soreau has quit [Read error: Connection reset by peer]
soreau has joined #wayland
apramod has joined #wayland
cvmn has quit [Ping timeout: 480 seconds]
<riverdc> emersion: you mean trying to predict when the deadlines will be by measuring past presentation times and the display refresh rate? or something along those lines
Erandir has joined #wayland
<kennylevinsen> Yes, but the prediction is a combination of future render timing *and* future scanout. The latter is the easier of the two to predict if you have a few timestamps already (assuming it's not VRR)
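For the fixed-refresh case, that scanout prediction can be as simple as extrapolating whole refresh periods from the last presented timestamp. A sketch, assuming both inputs come from wp_presentation_feedback.presented and that now_ns is read from the clock wp_presentation advertises via clock_id:

#include <stdint.h>

/* Predict the next scanout by extrapolating from the last observed
 * presentation. Only meaningful at a fixed refresh rate. */
static uint64_t predict_next_scanout(uint64_t last_presented_ns,
                                     uint64_t refresh_ns, uint64_t now_ns)
{
    if (refresh_ns == 0)
        return now_ns; /* refresh unknown (e.g. VRR): no prediction */
    uint64_t periods = (now_ns - last_presented_ns) / refresh_ns + 1;
    return last_presented_ns + periods * refresh_ns;
}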
fmuellner has joined #wayland
flacks has quit [Quit: Quitter]
flacks has joined #wayland
metrd has joined #wayland
metrd has quit [Remote host closed the connection]
eroux has joined #wayland
fmuellner has quit [Ping timeout: 480 seconds]
eroux has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
eroux has joined #wayland
apramod has quit [Ping timeout: 480 seconds]
Shimmy has quit []
Shimmy has joined #wayland
Shimmy has quit []
Shimmy has joined #wayland
<swick> emersion, kennylevinsen, riverdc the presentation-time protocol is entirely unsuitable for scheduling your drawing; it can only be used for determining at what point your content will show up on the display
<dottedmag> What's the proper way to discover deadline for drawing then?
rasterman has joined #wayland
<kennylevinsen> swick: we've discussed this before - it's suboptimal but it's what you should use for now. It still tells you when things were presented, and it informs you that you missed the deadline by way of a much-delayed presentation. Less convenient than being told exactly when latching began, of course.
<kennylevinsen> I forget if you wrote an MR for improving this...
<swick> it's not suboptimal it is straight up not possible now that compositors schedule frame callbacks dynamically with their own heuristics
psykose has quit [Remote host closed the connection]
eroux has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<daniels> your position that it is literally impossible to present any content ever and no-one should bother is a) flatly contradicted by reality, and b) extremely tiring
zebrag has joined #wayland
psykose has joined #wayland
ecloud has quit [Ping timeout: 480 seconds]
ecloud has joined #wayland
bodiccea_ has joined #wayland
bodiccea has quit [Ping timeout: 480 seconds]
<swick> daniels: >with presentation-time you can design a system to draw as close to the deadline as possible
<swick> I'm not saying it's not useful but that's not what presentation-time is for and trying to abuse it like that is futile
<swick> and if that contradicts reality I would love to see a single example
maxzor_ has joined #wayland
___nick___ has joined #wayland
<emersion> it's not an abuse, it's correct usage
<emersion> because there aren't a lot of real world users, it's very difficult to design something which would *extend* it
kernelsandals has joined #wayland
___nick___ has quit []
<emersion> especially something which tries to solve all of the problems at once
<wlb> wayland Issue #267 opened by () Clarify wl_surface.damage[_buffer] behavior when no buffer is attached https://gitlab.freedesktop.org/wayland/wayland/-/issues/267
___nick___ has joined #wayland
___nick___ has quit []
___nick___ has joined #wayland
<swick> sorry but it's not correct usage
<swick> you literally can't use it like that
<swick> there is no information whatsoever about any deadline
<swick> the correct usage is to schedule your drawing with the frame callback, estimate when your commit will show up using previous feedback, and then draw the scene as of that estimated time
<swick> presentation-time is for reducing stutter by presenting the right content at the right time, and *not* about when you start generating the content or when you should be finished generating it
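The flow swick describes might look roughly like this. predict_next_scanout() is the extrapolation sketched earlier; now_ns() and render_scene_at() are hypothetical helpers for reading the presentation clock and rendering game state as of a given instant:

#include <stdint.h>
#include <wayland-client.h>

static uint64_t predict_next_scanout(uint64_t last, uint64_t refresh,
                                     uint64_t now); /* earlier sketch */
static uint64_t now_ns(void);   /* hypothetical: clock_gettime() on the
                                 * clock wp_presentation advertises */
static void render_scene_at(struct wl_surface *s, uint64_t t_ns); /* hypothetical */

struct frame_state {
    struct wl_surface *surface;
    uint64_t last_presented_ns; /* from wp_presentation_feedback.presented */
    uint64_t refresh_ns;        /* likewise */
};

static void frame_done(void *data, struct wl_callback *cb, uint32_t time_ms)
{
    struct frame_state *st = data;
    wl_callback_destroy(cb);

    /* Estimate when this commit will actually reach the screen... */
    uint64_t target = predict_next_scanout(st->last_presented_ns,
                                           st->refresh_ns, now_ns());

    /* ...and draw the scene as it should look at that instant, rather
     * than as it looks right now. (Next frame callback requested as in
     * the earlier sketch.) */
    render_scene_at(st->surface, target);
    wl_surface_commit(st->surface);
}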
<emersion> i don't see this written anywhere in the protocol
<emersion> the protocol says when the last presentation happened, which is enough to drive a feedback loop
<swick> presentation and the presentation deadline are two distinct events which generally have no static relation
<emersion> sure
<emersion> that's fine
soreau has quit [Read error: Connection reset by peer]
soreau has joined #wayland
<swick> yes, you can drive a feedback loop, but the information you have is about past presentations and not about the presentation deadline, so your feedback loop just doesn't help you get closer to the presentation deadline
<zamundaaa[m]> Yeah presentation time isn't really useful for that. We need presentation timing for that, or extend the presentation time protocol
cphealy has joined #wayland
burden has joined #wayland
burden has quit []
c7s has joined #wayland
maxzor has joined #wayland
maxzor has quit [Remote host closed the connection]
maxzor has joined #wayland
maxzor_ has quit [Ping timeout: 480 seconds]
<dottedmag> Is there a way to define a presentation deadline on non-RT schedulers anyway?
<kennylevinsen> No, you would only ever be able to report the previous value and would need to account for scheduling jitter, compositor rendering variance and your own render variance. But it's useful to know the correct target.
c7s has quit [Ping timeout: 480 seconds]
<kennylevinsen> Currently you need to deduce it - which you undeniably can, as you know it happened not long before the presentation time, and you will be informed via a doubled presentation delay if you flew too close to the sun. This is enough information for arbitrarily accurate measurement.
<kennylevinsen> however, swick rightfully suggests that it would be far more convenient to just be instructed when the last deadline was
<dottedmag> As long as the compositor does not perform the same kind of feedback-based deadline adjustment.
<dottedmag> And applications and compositor sharing the GPU and adjusting their deadlines might cause any number of... interesting... feedback loops.
<kennylevinsen> It would not dictate the next deadline, but it makes it easier to implement "avg time to deadline from previous submit - slack"
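One way the "avg time to deadline from previous submit - slack" loop could be implemented, as a hedged sketch; all names and constants here are illustrative:

#include <stdint.h>

struct sched_state {
    double lead_ns;  /* moving average of presented_ns - commit_ns */
    double slack_ns; /* safety margin, grown when a frame slips */
};

/* Feed in each wp_presentation_feedback.presented result. */
static void on_presented(struct sched_state *s, uint64_t commit_ns,
                         uint64_t presented_ns, uint64_t refresh_ns)
{
    double lead = (double)(presented_ns - commit_ns);
    s->lead_ns = s->lead_ns == 0.0 ? lead : 0.9 * s->lead_ns + 0.1 * lead;

    /* A lead well over one refresh means we missed the latch and were
     * delayed a whole extra period: flew too close to the sun. */
    if (lead > 1.5 * (double)refresh_ns)
        s->slack_ns += 0.1 * (double)refresh_ns;
}

/* How long before the predicted scanout to commit: tighten a little each
 * frame; misses push the slack (and thus the lead) back up. */
static uint64_t next_commit_lead_ns(const struct sched_state *s,
                                    uint64_t refresh_ns)
{
    double tighten = 0.05 * (double)refresh_ns; /* illustrative step */
    return (uint64_t)(s->lead_ns - tighten + s->slack_ns);
}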
<swick> or it's vrr or mode change or missed compositing deadline or whatever else you can come up with
<kennylevinsen> yeah vrr puts a stick in the wheel of most smart scheduling solutions...
<swick> you just don't know why your commit didn't end up at the time you would expect it to
<dottedmag> If one needs smart scheduling, one probably needs different algorithms for static and variable refresh rates anyway.
<kennylevinsen> dottedmag: composition time should be fairly short, so even if it's smart it should be minor adjustments. Otherwise you'd definitely end up in a tug of war.
<swick> that's why I say any kind of heuristic with only information from core and presentation-time will only work on some compositors sometimes
<swick> i.e. it's futile
<kennylevinsen> sway allows a per-output fixed composition time adjustment. Weston has its 7ms advance IIRC. Don't know of any dynamically adjusting compositors
<swick> mutter
<kennylevinsen> Fair, it's been a while since I looked at that
<swick> and if you have to know the internal scheduling logic of all compositors you're screwed anyway
<dottedmag> I haven't looked at it at all: is there a way to reserve some GPU resources for compositor? E.g. as a slice of time, or higher-priority task flag that can be used to make sure compositor's simple task is not delayed due to fancy stuff being rendered by apps?
<swick> yes, there is high priority queues and even CU reservation
<kennylevinsen> I disagree that you wouldn't be able to deduce a useful composition deadline (even if it's slightly annoying). I strongly disagree that compositor adjustments are a problem - they should be minor in general and zero for a fullscreen game.
<kennylevinsen> I do agree that VRR is annoying depending on how you want to utilize it
<swick> dottedmag: but FF stuff is a problem with preemption so you basically have to do all compositing on compute queues if you want the most predictable GPU timing
<dottedmag> ff?
<swick> fixed function
<dottedmag> I see
<kennylevinsen> The people working on wlroots vulkan renderer had a disagreement over compute queues, and I think the conclusion was that behavior differed a lot between GPU vendors with respect to priority and performance... But I'm outside my field of expertise here.
<swick> kennylevinsen: even in fullscreen direct scanout compositors might have an internal presentation deadline which is not the beginning of vblank
<kennylevinsen> Of course but the work of composition will be static
<swick> you're basically saying that you can deduce a presentation deadline if the display is FRR and the compositor is behaving exactly like you think it does
<swick> in basically every other situation you're going to make things worse
<kennylevinsen> no I am saying that I can deduce it for any compositor regardless of implementation, and that it is absolutely trivial for fullscreen scanout. This holds as long as the compositor doesn't have a super buggy dynamic delay mechanism, doesn't swap at random between 0 and 1 vblank delay, and isn't intentionally trying to mess up the logic in clients.
<kennylevinsen> And yes, I am saying this for FRR, as I have not really thought through the VRR consequences. That would require filtering the measurements in a VRR-aware way and I'm not sure how well it would work...
<kennylevinsen> If the compositor behavior is more aggressive it will cause the error margin and safety buffer in the client to increase
<kennylevinsen> And there will be a few missed frames as it discovers this
<kennylevinsen> If compositor behavior changes it causes additional missed frames
<kennylevinsen> And yes your suggestion of a dedicated latch time would absolutely make it easier
<kennylevinsen> (but that information will not really be able to get you closer to the deadline without missing frames, it would just shorten calibration)
<kennylevinsen> I think one could have a lib handle this: do a few calibration frames and then have the lib handle surface submission, timing measurement and next frame scheduling (with hints from game about upcoming render loads)...
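The library kennylevinsen imagines could expose something like the following; this is an entirely hypothetical API surface, written out only to make the idea concrete:

#include <stdint.h>
#include <wayland-client.h>
#include "presentation-time-client-protocol.h"

struct frame_sched; /* opaque: owns feedback collection and calibration */

struct frame_sched *frame_sched_create(struct wl_surface *surface,
                                       struct wp_presentation *presentation);

/* Hint from the game about the expected cost of the upcoming frame. */
void frame_sched_hint_render_cost(struct frame_sched *s, uint64_t cost_ns);

/* Presentation-clock time at which the game should start rendering. */
uint64_t frame_sched_next_render_time(struct frame_sched *s);

/* Call right after wl_surface_commit so feedback can be matched up. */
void frame_sched_commit(struct frame_sched *s);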
<daniels> no-one is saying that presentation-timing is perfect and solves every problem which exists in a way which is impossible to misuse. what people are saying is that it is a useful tool which is strictly better than not having it.
<swick> that I agree with and I've never said anything else
<swick> kennylevinsen: you say "fullscreen scanout" but I think you really mean when the presentation deadline is the presentation. you absolutely can have a valid compositor doing fullscreen direct scanout with a different presentation deadline.
<swick> and yes, in that case it is trivial because you have all the information about the presentation deadline from the presentation-time protocol
<swick> the problem is knowing when that's the case and when it's not
<swick> and you generally just can't
<kennylevinsen> No, I am aware of the different deadline - I am assuming that, as composition is trivial in this case, a "smart" compositor will never have any reason to move the composition deadline noticeably, and non-"smart" ones (like sway, Weston) just have a static offset (plus some scheduling jitter of course).
LaserEyess_ has quit []
<kennylevinsen> (the deadline can be any point between two presentation times)
LaserEyess has joined #wayland
<kennylevinsen> I am very tempted to make a proof of concept when I get to the end of my inbox, but no guarantees :P
<LaserEyess> what knowledge does the kernel have from the hardware (GPU and display) in this respect, what information is knowable but currently not passed to clients or even compositors?
<LaserEyess> for the previously sent frame, I'm sure a lot
<LaserEyess> but what about for the next frame?
<LaserEyess> is there anything?
<swick> kennylevinsen: you talk about "smart" compositors again. IOW it depends on the implementation not on the spec.
<swick> and there are lots of other reasons your commit can end up at a different presentation than "scheduled frame too late"
<kennylevinsen> swick: "smart compositor" was just shorter than "compositors implementing some form of dynamically adjusted composition deadline to automatically compensate for varying workloads". This is opposed to ones that never vary their deadline but may still have static offset from presentation deadline. I thought it was implied from the context of the discussion - apologies if not.
<swick> so even if you get lucky with the compositor you have unnecessary bad data
<kennylevinsen> A little bad data isn't a problem - it's easy to filter. If presentations were missed for other reasons all the time then that would be a problem, but... that would be a problem regardless, making things unplayable. Not sure if we need to worry here.
<swick> yeah, okay, if the time between two presentations is fixed and the time between presentation deadline and the presentation is fixed you probably can deduce a noisy presentation deadline
<swick> with the presentation-time protocol alone
<kennylevinsen> I suspect the worst case would be a too large calculated safety margin. If the game in question is barely capable of rendering between two deadlines then the margin could become too big. I guess it would be trivial to just disable scheduling if processing time is too close to the available time, and just render immediately after submission, frame callback or presentation.
<kennylevinsen> High refresh rates and VRR are also sure to mess things up - imagine a display jumping from hundreds of Hz to tens...
<kennylevinsen> Such a library would need a ¯\_(ツ)_/¯ mode
<dottedmag> LaserEyess: kernel probably does not know much, but compositors do know their internal logic for when the deadline is, and that's not communicated to the clients.
<LaserEyess> so basically, even at the hardware level, you're limited to feedback for things that have already happened, with little control over/knowledge of the future?
<dottedmag> Of course display may catch fire on the next frame
<dottedmag> I wish GPUs had knowledge of the future. Probably this could be manipulated to profitably trade securities.
<LaserEyess> well, what I mean by that is, for the trivial case of FRR, let's say you have a 100 Hz monitor, at time t=0ms, you know you presented
<LaserEyess> you can reasonably assume at the hardware level the next present is t=10ms
<LaserEyess> but I mean, the kernel is the one sending these frames, so is there any way you can guarantee that?
<LaserEyess> sure the display could catch fire, or the GPU could blow up, or even someone cuts the power, but assuming none of that was likely
<soreau> I think the biggest mistake clients make is they assume there will always be frame callbacks. Sometimes the compositor stops sending them for an undetermined amount of time
<swick> oh yeah, that too
<swick> I would love to see you try LaserEyess :>
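Guarding against the missing-frame-callback case soreau raises mostly means never blocking indefinitely on the callback. A rough sketch that waits on the display fd with a timeout, glossing over the proper wl_display_prepare_read() dance; *frame_pending is assumed to be cleared by the frame-callback handler:

#include <poll.h>
#include <stdbool.h>
#include <wayland-client.h>

/* Returns false if the compositor stopped sending callbacks (surface
 * hidden, output off, ...), in which case the client should go idle
 * instead of drawing. */
static bool wait_frame_or_timeout(struct wl_display *display,
                                  const bool *frame_pending, int timeout_ms)
{
    while (*frame_pending) {
        wl_display_flush(display);
        struct pollfd pfd = {
            .fd = wl_display_get_fd(display),
            .events = POLLIN,
        };
        if (poll(&pfd, 1, timeout_ms) <= 0)
            return false;
        wl_display_dispatch(display);
    }
    return true;
}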
<swick> dottedmag: actually a good question. display and scanout hardware share the same clock but I'm not sure whose clock that is and if/how the refresh rate is correlated to the clock signal
<swick> but in general for a FRR display you know at least roughly when the next vblank happens
<LaserEyess> I guess ultimately what I'm asking is: is there anything in the kernel that's some timestamp of "this is when I'm going to send the next frame"
<LaserEyess> or is it just solely: "this is the display rate and this is the last time I sent a frame, good luck!"
<swick> not sure if I understand the difference
remanifest has quit []
<LaserEyess> maybe there's no difference practically, but I feel like any feedback statistics would be much more valuable if you had a promised vblank time rather than a projected one
<soreau> I think the point is, you don't want to draw too soon, or else the previous frame will be discarded, resulting in wasted frames rendered
<LaserEyess> right but you need to subtract off your own render time, as well as anything the compositor is doing, and the max (median?) jitter you see in the feedback, etc.
<LaserEyess> that all seems rather fragile so I would assume more data would be strictly better
<LaserEyess> but I guess there's no difference if it's just FRR so it is impossible (?) for the display to do anything else
<LaserEyess> sorry if I sound ignorant, but I'm pretty interested in how this works, I only know what mpv does, and how I hope it will work in the future when feedback for VRR is implemented
<LaserEyess> that's a much easier problem however, because on mpv's side it also has the added benefit of having its own deadline for a frame, since most videos are fixed framerate
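The subtraction LaserEyess sketches above, written out as a minimal helper; every input here is an estimate the client has to maintain itself, not protocol data:

#include <stdint.h>

/* When to start rendering so the frame is done just before the
 * predicted vblank. */
static uint64_t render_start_ns(uint64_t predicted_vblank_ns,
                                uint64_t own_render_ns,  /* measured CPU+GPU cost */
                                uint64_t compositor_ns,  /* deduced compositor margin */
                                uint64_t jitter_ns)      /* observed scheduling jitter */
{
    return predicted_vblank_ns - own_render_ns - compositor_ns - jitter_ns;
}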
remanifest has joined #wayland
___nick___ has quit [Ping timeout: 480 seconds]
maxzor has quit [Remote host closed the connection]
maxzor has joined #wayland
pac85 has joined #wayland
<pac85> Does presentation time require explicit support from games?
<pac85> In any case modern games aren't that sequential; often game logic runs in parallel with the rendering, so it takes one more frame for the response to get to the screen. In the end people just run games unthrottled if they care about latency.
<pac85> Has any progress been made on the issue regarding tearing updates?
mvlad has quit [Remote host closed the connection]
pac85 is now known as Guest9662
pac85 has joined #wayland
vanadiae[m] has left #wayland [#wayland]
<kennylevinsen> presentation time is provided by the presentation feedback protocol, and all protocols require explicit support
<kennylevinsen> And game logic running in parallel is fine. All that matters is when the game decides to sample game state to render a scene to a buffer that is sent to the compositor, when the compositor samples surface buffers to compose a new output buffer, and when the GPU ends up sampling the output buffers.
<kennylevinsen> even with tearing updates, unthrottled will never give the *best* or most consistent input latency - it's just a brute-force workaround...
<kennylevinsen> (tearing can be used with frame scheduling if one wanted to)
<kennylevinsen> as for progress, see the MR for a tearing control protocol: https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/65
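Requesting that per-commit feedback looks like this in C, using the client header wayland-scanner generates from presentation-time.xml:

#include <stdint.h>
#include <wayland-client.h>
#include "presentation-time-client-protocol.h"

static void presented(void *data, struct wp_presentation_feedback *fb,
                      uint32_t tv_sec_hi, uint32_t tv_sec_lo, uint32_t tv_nsec,
                      uint32_t refresh_ns, uint32_t seq_hi, uint32_t seq_lo,
                      uint32_t flags)
{
    uint64_t t_ns = ((((uint64_t)tv_sec_hi << 32) | tv_sec_lo) * 1000000000ULL)
                  + tv_nsec;
    /* feed t_ns and refresh_ns (0 without a fixed refresh rate) into the
     * scheduling estimator */
    wp_presentation_feedback_destroy(fb);
}

static void discarded(void *data, struct wp_presentation_feedback *fb)
{
    wp_presentation_feedback_destroy(fb); /* this commit was never shown */
}

static void sync_output(void *data, struct wp_presentation_feedback *fb,
                        struct wl_output *output)
{
}

static const struct wp_presentation_feedback_listener feedback_listener = {
    .sync_output = sync_output,
    .presented = presented,
    .discarded = discarded,
};

/* Call once per wl_surface_commit whose presentation you want to track. */
static void track_commit(struct wp_presentation *pres,
                         struct wl_surface *surface)
{
    struct wp_presentation_feedback *fb = wp_presentation_feedback(pres, surface);
    wp_presentation_feedback_add_listener(fb, &feedback_listener, NULL);
}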
<pac85> if a game takes, say, 2 frames to process the input how can you do better than running it as fast as possible?
<pac85> THX for the link. Seems like the discussion has pretty much died
<kennylevinsen> by rendering at approximately the right time instead of rendering all over the place, including at a bunch of "bad" times and only sometimes hitting a "good" time. It's a little more complicated with tearing. I think you could still be smart with tearing, but don't quote me on it - I am *not* a fan of tearing anyway. :P
<kennylevinsen> it won't be bad with very high fps games, but if you're rendering at, say, 80fps on a 60Hz monitor, then the first frame is 4.1ms too old, the next frame is 8.2ms too old, then 10.5ms, 0.15ms (very good!), 4.25ms, 8.35ms, etc. - basically just all over the place with a pretty bad average. Much better to time it to always be ~1ms too old.
<kennylevinsen> if one must have tearing, it would probably also be nicer for it to be consistent rather than dancing all over the place...
<pac85> well but in order to sync the game such that you get a consistent 1ms delay you would have to somehow insert some kind of wait into the game loop, effectively locking it at the refresh rate or some multiple of it right?
<pac85> with tearing I think it's better to have the tear jumping around; it would be much more noticeable if it were to stay in place. also, the higher the framerate the more similar the two frames are, so it gets even less noticeable. At some point it looks more like a rolling shutter. (ofc that takes thousands of fps).
<pac85> In any case I think it's good to have that presentation time protocol.
<kennylevinsen> you'd start the rendering work at deadline minus time needed, yes. Rendering earlier is a waste - even with tearing, you don't want to render a full frame just for it to only affect a few pixel rows at the bottom of the screen. Skipping useless rendering (and thus letting your GPU run cooler!), or making a useless frame become a useful one with a slight timing adjustment is much better.
<kennylevinsen> If you want tearing and render fast enough, you could also render an extra N times back to back to add N-1 tearlines at pretty consistent locations instead of having random tearing
<kennylevinsen> you could also move it however much you want
<pac85> Yeah it's good to have presentation time after all.
<pac85> Is this the right place to ask questions about Wayland servers APIs?
<kennylevinsen> yes
luc4 has joined #wayland
<pac85> Alright. I've started a little project, nothing serious, basically like xwayland but backwards (my idea is to use it to sandbox apps under X by only allowing access to the Wayland socket). I got stuck when I tried to get clients to resize their windows. I found that wl_shell_surface_send_configure should do it but I haven't managed to understand on which object that method should be called. Specifically I wonder how I would get
fmuellner has joined #wayland
<kennylevinsen> You should pretend wl_shell doesn't exist - xdg-shell is what's used and supported as the normal desktop shell
<pac85> Mmm so I've been looking for the wrong thing the whole time. Thanks for pointing me in the right direction.
<kennylevinsen> what you should do is to send an xdg_toplevel::configure event with the width/height you suggest. Note that it is only a suggestion, as wayland clients are authoritative on surface sizes, *not* the wayland server. Main exception is fullscreen. Clients usually abide though, and tiling window managers like sway will chop off excess if they don't...
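On the server side that comes out to roughly this pair of events, using the server headers wayland-scanner generates from xdg-shell.xml; the resource names here are illustrative:

#include <wayland-server.h>
#include "xdg-shell-server-protocol.h"

/* Suggest a new size; the client answers with xdg_surface.ack_configure
 * and commits a buffer of whatever size it settled on. */
static void suggest_size(struct wl_display *display,
                         struct wl_resource *xdg_toplevel_res,
                         struct wl_resource *xdg_surface_res,
                         int32_t width, int32_t height)
{
    struct wl_array states;
    wl_array_init(&states); /* empty: no maximized/fullscreen/etc. states */

    xdg_toplevel_send_configure(xdg_toplevel_res, width, height, &states);
    wl_array_release(&states);

    /* The configure sequence ends with the xdg_surface event carrying
     * the serial the client must ack. */
    xdg_surface_send_configure(xdg_surface_res,
                               wl_display_next_serial(display));
}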
luc4 has quit []
<pac85> thx
<pac85> wow that did the trick, thanks so much, I've spent hours fiddling around
<kennylevinsen> you're welcome :)
mikolayek has joined #wayland
mikolayek has quit [Remote host closed the connection]
pac85 has quit [Remote host closed the connection]
pac85 has joined #wayland
pac85 has quit [Remote host closed the connection]
pac85 has joined #wayland
remanifest has quit [Ping timeout: 480 seconds]
remanifest has joined #wayland
remanifest has quit [Ping timeout: 480 seconds]
remanifest has joined #wayland
remanifest has quit []
<pac85> here is Weston running under xmonad https://telegra.ph/file/1bc58cd5d359e022ada49.jpg
<pac85> I meant weston-terminal
<kennylevinsen> Neat, looks like the size is slightly off but neat regardless :)
Guest9662 has quit []
<pac85> THX. Yeah I still need to get a few things right. Qt apps don't seem to be resizing and I get this message: qt.qpa.wayland: Creating a fake screen in order for Qt not to crash - don't think it is related though
pac85 has quit [Remote host closed the connection]
pac85 has joined #wayland
kenny has joined #wayland
maxzor has quit [Ping timeout: 480 seconds]
pac85 has quit [Remote host closed the connection]