ChanServ changed the topic of #wayland to: | Discussion about the Wayland protocol and its implementations, plus libinput | register your nick to speak
fmuellner has quit [Ping timeout: 480 seconds]
DragoonAethis has quit [Quit: hej-hej!]
DragoonAethis has joined #wayland
columbarius has joined #wayland
co1umbarius has quit [Ping timeout: 480 seconds]
Brainium has quit [Quit: Konversation terminated!]
tjaden has quit [Quit: leaving]
julio7359 has quit [Ping timeout: 480 seconds]
julio7359 has joined #wayland
julio7359 has quit [Remote host closed the connection]
julio7359 has joined #wayland
floof58 is now known as Guest11738
floof58 has joined #wayland
Guest11738 has quit [Read error: No route to host]
tzimmermann has joined #wayland
danvet has joined #wayland
dcz has joined #wayland
Leopold has joined #wayland
bim9262 has joined #wayland
Leopold___ has joined #wayland
Leopold has quit [Ping timeout: 480 seconds]
Dami_Lu has quit [Ping timeout: 480 seconds]
Dami_Lu has joined #wayland
mvlad has joined #wayland
<pq> drakulix[m], yeah, but I believe you'd definitely want to offer end users a way to override that, or even not use that by default. The sRGB assumption when nothing else is said may give end users what they are used to seeing, for better or worse.
<pq> LUTs also have precision problems, especially for curves that are used as an inverse EOTF.
<JEEB> yup
<pq> emersion, swick[m], Weston has code to infer the linearizing curve from an arbitrary output ICC profile, thanks to vitaly. For matrix-shaper profiles, you can dissect the profile to get/drop the right curve directly.
<pq> swick[m], I think blending in scRGB is just fine, particularly when using floating-point for the value storage. The only caveat is that blend-to-output conversion is more complicated.
<pq> no matter what space you blend in, you should always take into account the actual useful part of the incoming and other color volumes
<JEEB> yea I guess the positive of linear scRGB is that [0,1] is SDR, so if you want SDR output you can just clamp that range and encode with the output transfer, while for HDR output you encode with the output HDR transfer, with [0,1] mapping up to HDR graphics white (~203 nits) and values beyond that extending further into the HDR range :)
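The two paths JEEB describes can be sketched as follows (a simplified illustration assuming ~203 cd/m² HDR graphics white; the actual output transfer encoding is omitted):

```python
def scrgb_to_sdr(rgb):
    # SDR output: [0, 1] is the SDR range, so clamp to it and then
    # encode with the output transfer (encoding omitted in this sketch).
    return [min(max(c, 0.0), 1.0) for c in rgb]

def scrgb_to_hdr_nits(rgb, graphics_white=203.0):
    # HDR output: map scRGB 1.0 to HDR graphics white (~203 cd/m^2);
    # values above 1.0 extend linearly into the HDR range.
    return [c * graphics_white for c in rgb]
```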
<pq> emersion, light-linear = tristimulus = optical; and the other one: perceptually linear might be close to: ignoring quantization, light-non-linear = electrical = encoded.
<pq> drakulix[m], indeed, Wayland has never guaranteed any specific blending result for semi-transparent sub-surfaces. I will try hard to keep it that way.
<pq> emersion, re: [0, 1]; in the end, the values to feed into a display panel are ratios: percentage of how bright to make each sub-pixel light source. Naturally you cannot go negative, because negative light does not exist, and you cannot go above 100% because you'd "fry the hardware" so it won't let you. From this point of view, greater than 1.0 values are too wild and negative values are just incomprehensible.
<pq> All this changes when you start doing math on those "lamp power" percentages.
<pq> negative values are hard, because traditionally color has been represented as unsigned integers. Use of floating point removes that problem. However, LUT processing elements are also popular, and a LUT simply cannot be applied outside of its defined input range, which is implicitly [0, 1].
<pq> So the problems with outside [0, 1] range values are practical. Math has no problem with them.
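The [0, 1] limitation pq mentions is easy to see in a sketch: a 1D LUT lookup has to clamp its input before indexing, silently discarding extended-range values (a hypothetical helper, not any particular driver's code):

```python
def apply_lut1d(lut, x):
    # A LUT is only defined on its input domain, implicitly [0, 1]:
    # out-of-range values must be clamped before indexing.
    x = min(max(x, 0.0), 1.0)
    pos = x * (len(lut) - 1)
    i = int(pos)
    if i == len(lut) - 1:
        return lut[-1]
    frac = pos - i
    # Linear interpolation between adjacent LUT entries.
    return lut[i] * (1.0 - frac) + lut[i + 1] * frac
```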
<pq> JEEB, well, clamping may not be a very good way of gamut mapping, but it's certainly easy - it can even happen accidentally. :-)
rasterman has joined #wayland
<JEEB> ayup :)
<JEEB> just looking at the current tone and gamut mapping rework merge request at libplacebo you can hone that dagger a *lot*
julio7359 has quit [Ping timeout: 480 seconds]
<JEEB> pq: for the record if the results I got earlier still hold true, Windows on the desktop just does an HDR graphics white remap and then clamps when doing HDR to SDR :)
<JEEB> definitely imperfect but at least a constant
<pq> hehe
<drakulix[m]> pq: So the issue with an EDID-based ICC profile instead of the standard sRGB as an output profile is that users expect “wrong” visuals?
<drakulix[m]> What exactly is the problem here? Is it that using a correct profile might make sRGB content less vibrant on outputs with wider color spaces?
<dottedmag> pq: Changing the blending algorithm every other frame sounds like a great way to annoy every application developer out there while staying technically correct.
<dottedmag> I wonder what the semi-transparent subsurfaces are good for if the blending algorithm is not specified. Any application that cares about the result will have to avoid them.
<emersion> any application which cares about the result needs to avoid a lot of things tbh
<emersion> YUV buffers for instance
<pq> drakulix[m], EDID is also a) too often more or less a lie, and b) usually does not reflect monitor adjustments at all.
<pq> at least historically
<pq> if you have a monitor certification system and the monitor is certified, then maybe they also check EDID, who knows
<pq> drakulix[m], yes, making traditional content less "vibrant" is something I would expect people to complain about.
<drakulix[m]> So is there any value in using that over the built-in sRGB profile?
<drakulix[m]> Or should I expect every user who cares about color accuracy to just supply their own ICC file anyway?
MajorBiscuit has joined #wayland
<pq> drakulix[m], I'd offer "assume sRGB" and "trust EDID" as options, and end users can pick which they like better. Providing your own ICC file would be a third option.
<drakulix[m]> Great. Thanks for all the feedback! Still trying to wrap my head around all that stuff, and it's great to be able to check what I think I have understood so far. 👍
<pq> another problem with EDID is that it describes, roughly, only one set of color parameters, but a monitor can be driven in many different "modes" based on the metadata sent to it. Which mode does the EDID apply to? I don't know.
<pq> there is metadata aside from the HDR metadata as well, which may change how the monitor works
<pq> maybe that is specified in HDMI and DP specs, maybe vendors implement that...
<pq> who knows
<pq> dottedmag, I have never implied changing the algorithm every frame. Compositors can have different implementations.
MrCooper has quit [Remote host closed the connection]
MrCooper has joined #wayland
<pq> drakulix[m], also ICC is pretty much limited to SDR so far, so for HDR displays one may need to offer a more complicated UI for anyone who wants to fine-tune. With HDR, the sRGB option is replaced by assumptions based on the video signalling mode in use, like BT.2100 PQ or HLG.
<pq> ICC is working towards HDR, but it might be a while before there are usable implementations for it available
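For reference, the BT.2100 PQ EOTF that such a signalling mode implies can be sketched like this (constants straight from BT.2100; a sketch, not a vetted implementation):

```python
def pq_eotf(e):
    # BT.2100 PQ EOTF: non-linear signal e in [0, 1] -> luminance in cd/m^2.
    m1 = 2610 / 16384        # 0.1593017578125
    m2 = 2523 / 4096 * 128   # 78.84375
    c1 = 3424 / 4096         # 0.8359375
    c2 = 2413 / 4096 * 32    # 18.8515625
    c3 = 2392 / 4096 * 32    # 18.6875
    ep = e ** (1 / m2)
    y = (max(ep - c1, 0.0) / (c2 - c3 * ep)) ** (1 / m1)
    return 10000.0 * y
```

Code value 1.0 maps to the 10000 cd/m² peak of PQ, and 0.0 to zero light.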
<drakulix[m]> Right. Do the color characteristics from the EDID play any role here or would I just take the luminance values from the HDR metadata block?
<drakulix[m]> I am still having trouble imagining how I would build an output profile for an HDR monitor with LittleCMS.
<pq> I don't really know either. :-)
<drakulix[m]> Oh perfect. Well maybe we can figure something out next week then. ^^
<pq> I know they got three monitors for the hackfest, but will anyone bring a spectrophotometer so we can actually check?
<pq> ...spectroradiometer?
<pq> oh, spectrocolorimeter
<drakulix[m]> probably a better question for the hackfest channel.
<pq> there is a channel?
<drakulix[m]> yeah. I think it was linked in the last mail?
<pq> oh, matrix
eroux has quit [Read error: No route to host]
devilhorns has joined #wayland
eroux has joined #wayland
<dottedmag> pq: I was kidding
<wlb> weston Issue #712 closed \o/ (client: weston-simple-dmabuf-v4l2 does not translate planar video formats correctly from V4L2 to DRM)
Cyrinux9 has quit []
Cyrinux9 has joined #wayland
jmdaemon has quit [Ping timeout: 480 seconds]
<pq> drakulix[m], would you happen to have a pointer on how to create a Wayland client in Rust that uses compositor-specific surface role extensions? What crates to combine to not need to reinvent a whole UI drawing toolkit?
<drakulix[m]> Do you have a toolkit in mind that you want to be using?
<pq> no, but I'd probably prefer something Rust-native
<pq> I've been eyeing iced, but haven't stared at it long enough to understand how to bypass the platform abstractions to get my hands on Wayland objects.
<drakulix[m]> The big problem is that most of them are based on winit, which handles xdg-surfaces exclusively.
<drakulix[m]> However most (if not all) GL abstractions these days go through the raw-window-handle crate, which would allow you to use any wl_surface.
<pq> right
<drakulix[m]> That leaves input as a problem, of course. Few toolkits can be driven completely externally. egui and iced are definitely options; I have successfully integrated both into my compositor.
<pq> Would you have a link to some sources I could study in the long run?
<drakulix[m]> Sure. gimme a sec
shankaru has quit [Ping timeout: 480 seconds]
<drakulix[m]> First I would suggest going with sctk (or smithay-client-toolkit) to avoid doing everything yourself: (full message at <>)
<pq> cool, thanks!
MrCooper has quit [Quit: Leaving]
MrCooper has joined #wayland
<pq> For anyone who simply wants to build an HDR compositor without the hassle of ICC color management, please have a look at .
<JEEB> i like how cta shop actually needs no registration, you just input a@b.tld and trollgatan 13, sverige as address info and bob's your uncle
<pq> heh, well, how'd they even check that
<JEEB> it just redirects at checkout so no checks
<JEEB> still annoying of course so I'm happy people then archive the docs on
<JEEB> anyways, more on topic: nice text, I would also mention the RGB case as well, unless it was already mentioned there
<JEEB> since many apps deal with hdr10 (pq, bt2020) but in rgb
<pq> that's what I didn't quite understand: can it be HDR10 if it is RGB?
<JEEB> at least that's what the marketing departments seem to call it :D
<pq> you can't sub-sample RGB...
<JEEB> it seems to be pq plus bt2020 and the static metadata
<JEEB> for video media it's 4:2:0 or so o' course
<pq> anyway, it's not a big difference: RGB means you need to implement less of color-representation and nothing else - that's even mentioned there.
<JEEB> yup
<JEEB> i should attempt to add the opengl extension into libplacebo, although iirc the only things showing the bt2020 and pq extension were android devices
<pq> not even Windows? Is Windows, where you tried(?) it, only scRGB then?
<JEEB> on windows it's the d3d11 api usually
<JEEB> i haven't tried the gl side stuff :)
<pq> ah, of course
<JEEB> both scrgb and hdr10 capabilities are in the d3d11 api so if driver vendor maps to those they should work
<JEEB> in libplacebo I made it so that HDR content -> HDR10, non-HDR wide gamut -> scRGB
<pq> How does scRGB work on Windows? Is it simply hard-clipped to sRGB for traditional monitors and BT.2020 for WCG/HDR monitors, and then let the monitor do whatever? Any metadata?
<JEEB> yea, SDR output from compositor is clipped/clamped, and if the output is wide gamut or HDR, it gets possibly adjusted to HDR graphics white and then clipped/clamped according to output (I *think*)
fmuellner has joined #wayland
<pq> so they basically expect the app to know to use only the usable portion of the scRGB space to begin with?
<pq> I mean usable portion on the specific monitor
<JEEB> not really. you can do your own tone or gamut mapping according to your screen's capabilities in either output mode (in order to f.ex. minimize the monitor's own remap)
<JEEB> scRGB is just simpler if you are thinking in terms of SDR content, since you have the common [0,1] range, which you can then extend
<pq> is it also "you must do" that?
<JEEB> no
<pq> I'm looking for a better color gamut mapping than clipping, and that requires knowing the boundaries of the color volume, but scRGB in itself is infinite.
<JEEB> you end up having to actually look up the actually utilized area
<pq> How does Windows know what the boundaries on app content are with scRGB? Does Windows simply not do anything better than clipping?
<JEEB> in my experience the most it does is handle the difference between SDR graphics white and HDR graphics white, and then clips
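The Windows-side behavior described here, as observed rather than documented, amounts to something like this sketch (nothing official; the ~203 cd/m² graphics white reference is an assumption from the earlier discussion):

```python
def hdr_to_sdr_windows_style(rgb_nits, hdr_graphics_white=203.0):
    # Remap so HDR graphics white lands on SDR 1.0, then hard-clip:
    # anything brighter than graphics white simply clips to white.
    return [min(max(c / hdr_graphics_white, 0.0), 1.0) for c in rgb_nits]
```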
<pq> ok
<swick[m]> mh, that explains why they get away with scRGB
<swick[m]> without extra metadata
<pq> so if you want any "proper" color gamut mapping, the app needs to do it itself.
<JEEB> yea, which is why libplacebo now has a whole rework MR going on :D
<pq> cool
<pq> This means we can make assumptions about Windows apps that use scRGB: that they target the color volume the OS tells them to... right?
<swick[m]> or they just use an arbitrary color volume in there and accept the clipping
<JEEB> as an scRGB user I target whatever I target. it's a simple extension for game developers, and for libplacebo I utilize it if the content is not HDR but wider gamut. that way at least the compositor gets the wide gamut content, and if in theory they support Display P3 or whatever in the future, that can get shown better than on an sRGB device.
<JEEB> so yes, in my use cases I accept the clip
<JEEB> I hope this makes sense
<pq> I kind of understand that, but I wonder what it means to a Wayland compositor.
<pq> Do we need to leave scRGB unmanaged, then?
<swick[m]> For compatibility with windows?
<pq> yeah, where else would we get a precedent of how scRGB should be handled?
<swick[m]> wine could set the target color volume to the display capabilities and then the behavior should be identical to windows, no?
<pq> it would help porting games to Linux I suppose, and help wine-wayland
<pq> I don't think that would result in identical display, because we'd change clipping to proper gamut mapping, but I also don't know if that makes a significant difference.
<pq> proper... as proper as a compositor bothers to implement
<swick[m]> we'd clip the out of target color volume colors and would not have to gamut map colors inside the target color volume because it is the same as the output color volume
<pq> oh right
<pq> yeah, that should work
Net147 has quit [Quit: Quit]
Net147 has joined #wayland
Net147 has quit []
Net147 has joined #wayland
nerdopolis has quit [Remote host closed the connection]
eroux_ has joined #wayland
eroux has quit [Ping timeout: 480 seconds]
nerdopolis has joined #wayland
<pq> Talking about EDID, this is what my few-years-old monitor says: Desired content max luminance: 115 (603.666 cd/m^2), Desired content max frame-average luminance: 90 (351.250 cd/m^2)
<pq> that's quite a bright MaxFALL, leaving the HDR headroom at less than 1x (or 2x, depending on how you want to think of it).
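The parenthesised numbers follow from the CTA-861-G encoding used in the EDID HDR static metadata block, where code value CV maps to luminance as L = 50 · 2^(CV/32) cd/m²:

```python
def cta861_desired_luminance(code):
    # CTA-861-G: desired content max / max frame-average luminance
    # code value -> cd/m^2, L = 50 * 2^(CV/32).
    return 50.0 * 2.0 ** (code / 32.0)
```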
<pq> If you rendered content aiming at those values, I don't think it would be much HDR.
<pq> emersion, btw. did you get your 3D LUT fully working?
<emersion> pq, yup!
jmdaemon has joined #wayland
devilhorns has quit [Quit: Leaving]
jmdaemon has quit [Ping timeout: 480 seconds]
<JoshuaAshton> pq: For scRGB. What we do is just translate from 709 -> 2020 with a CTM. We then just clip off the negatives at that point going into the next parts of the color pipeline. 2020 is so wide and at the range of perception that you don't really need to do anything better for mapping scRGB content.
<JoshuaAshton> You can then do whatever 2020->Native Gamut Mapping you do for typical HDR10/PQ content
<JoshuaAshton> We have a Shaper + 3D LUT to handle all of our gamut remapping stuff (we actually have the shaper TF go from scRGB -> PQ encoding at that point to go into that part, which is funny)
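A sketch of that CTM step, using the BT.2087 matrix for converting linear BT.709 RGB to linear BT.2020 RGB (coefficients from BT.2087; the clip-off-negatives behavior is the one described above, not a recommendation):

```python
# BT.2087 matrix: linear BT.709 RGB -> linear BT.2020 RGB.
M_709_TO_2020 = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def bt709_to_bt2020_clipped(rgb):
    out = [sum(m * c for m, c in zip(row, rgb)) for row in M_709_TO_2020]
    # scRGB colors outside the 709 gamut come out negative after the CTM;
    # clip those off before the rest of the pipeline.
    return [max(c, 0.0) for c in out]
```

Since each matrix row sums to 1, reference white is preserved by the conversion.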
jmdaemon has joined #wayland
<JoshuaAshton> ofc you can also use the HDR metadata to potentially aid in any mapping, but just be aware that you probably won't get it :-)
andyrtr has quit [Quit: ZNC 1.8.2 -]
andyrtr has joined #wayland
jmdaemon has quit [Ping timeout: 480 seconds]
kts has joined #wayland
eroux_ has quit [Read error: Connection reset by peer]
eroux has joined #wayland
<swick[m]> negative values in the CTM make me nervous. KMS color stuff is so underspecified...
<JoshuaAshton> swick[m]: It's completely fine on AMD at least. I believe this is actually why they still have a CTM in hardware despite exposing Shaper + 3D LUT
<swick[m]> most CTMs don't consume or produce negative values so I would not bet on all vendors supporting that
<swick[m]> but if it works for you that's great
<jadahl> only so great if anyone can use it, and anyone can only use it if it's known to work universally :P
<swick[m]> personally I'm very hesitant to use any of the KMS API for color transformations right now
<JoshuaAshton> melissawen, Harry Wentland and I have been doing a lot of work in the AMDGPU space to get stuff in shape there
<JoshuaAshton> Lots of back and forth so we can ensure we get HDR scanout for SteamOS/Gamescope on AMD
<swick[m]> doesn't help if all other vendors do something else
<swick[m]> at least for us
<JoshuaAshton> Sure, that is why all our properties are like AMD_PLANE_DEGAMMA_TF, AMD_PLANE_LUT3D etc right now
<JoshuaAshton> If I am honest... there are lots of things that I have learnt specifically about AMD hardware, with how certain LUTs work (the shaper is fixed-point etc) and how the segmentation stuff works for them, that make me think that doing a generic color mgmt system is going to be a lot harder than just throwing LUTs at the wall
<JoshuaAshton> AMD DC code internally mitigates stuff like this by having LUTs and then a TF that goes either before or after that affects the weighting/segmentation
<JoshuaAshton> Obviously other vendors might not have that luxury or have other situations here
<JoshuaAshton> Another thing that I found recently is that we really need to use the fixed function ROM block for degamma of PQ, etc on AMD. At least the current code that does de-PQ using a LUT produces visible banding.
<swick[m]> yeah, all of those are known issues. segmented LUTs exist in different forms on different hardware, fixed TF implementations also exist. shaper curves for 3d LUTs are basically a requirement.
<swick[m]> and precision in the pipeline and of each operation is also relevant
<swick[m]> and KMS just doesn't handle any of that
<JoshuaAshton> Harry Wentland is probably sick of me emailing them about precision of the shaper LUT haha :-)
<JoshuaAshton> Took a long time for us to get stuff worked out there in a way that works for us
<swick[m]> I can imagine. We really need this to work generically though eventually and that's the big issue...
<JoshuaAshton> Eventually being a key word. I think in the short term (I know people are going to groan) vendored properties are probably a decent call.
<swick[m]> sure, I'm fine with that as long as the HDR signalling works generically
<JoshuaAshton> I was speaking to zamundaaa ( zamundaaa[m] ), and KDE also had some interest in trying out our vendored AMD property implementation stuff
<JoshuaAshton> We are probably going to look at attempting to upstream that soon-ish
<JoshuaAshton> Like, even if it isn't what people end up going for once we have something generic -- there is always probably going to be some vendor-specific thing some compositor vendor (*cough* gamescope) wants to care about. It's also probably great for doing bringup of stuff regardless
julio7359 has joined #wayland
<JoshuaAshton> I see it as no different to Vulkan vendor-specific exts paving the way for a generic EXT really
bodiccea has joined #wayland
<swick[m]> I'm certainly not complaining about you testing all of the hardware features ;)
<swick[m]> just too much other stuff to do first before I can focus on that
<JoshuaAshton> While I am here I assume you also agree that tetrahedral for 3D LUT >>>>> linear
<JoshuaAshton> :b
<JoshuaAshton> I was super pleased when I found AMDGPU had that in hw
<swick[m]> oh do they? that's neat.
<JoshuaAshton> ya
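For the curious, per-channel tetrahedral interpolation inside one 3D-LUT cell looks roughly like this (the classic six-tetrahedra decomposition; a sketch, not any particular hardware's arithmetic):

```python
def tetrahedral(c, f):
    # c: the 8 cell-corner values keyed by (i, j, k) in {0, 1}^3,
    # f: fractional (fx, fy, fz) position inside the cell.
    # The ordering of fx, fy, fz selects one of six tetrahedra.
    fx, fy, fz = f
    if fx >= fy:
        if fy >= fz:    # fx >= fy >= fz
            return (c[0,0,0] + fx*(c[1,0,0]-c[0,0,0])
                    + fy*(c[1,1,0]-c[1,0,0]) + fz*(c[1,1,1]-c[1,1,0]))
        elif fx >= fz:  # fx >= fz > fy
            return (c[0,0,0] + fx*(c[1,0,0]-c[0,0,0])
                    + fy*(c[1,1,1]-c[1,0,1]) + fz*(c[1,0,1]-c[1,0,0]))
        else:           # fz > fx >= fy
            return (c[0,0,0] + fx*(c[1,0,1]-c[0,0,1])
                    + fy*(c[1,1,1]-c[1,0,1]) + fz*(c[0,0,1]-c[0,0,0]))
    else:
        if fz >= fy:    # fz >= fy > fx
            return (c[0,0,0] + fx*(c[1,1,1]-c[0,1,1])
                    + fy*(c[0,1,1]-c[0,0,1]) + fz*(c[0,0,1]-c[0,0,0]))
        elif fz >= fx:  # fy > fz >= fx
            return (c[0,0,0] + fx*(c[1,1,1]-c[0,1,1])
                    + fy*(c[0,1,0]-c[0,0,0]) + fz*(c[0,1,1]-c[0,1,0]))
        else:           # fy > fx > fz
            return (c[0,0,0] + fx*(c[1,1,0]-c[0,1,0])
                    + fy*(c[0,1,0]-c[0,0,0]) + fz*(c[1,1,1]-c[1,1,0]))
```

Compared to trilinear it blends only 4 corners per lookup, which tends to track the grey diagonal of the LUT more faithfully.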
<swick[m]> AMD hardware really has a much better color pipeline than nvidia...
<swick[m]> you're going to have fun when you're looking at that at some point
<JoshuaAshton> I've already looked and was disappointed
<JoshuaAshton> When we do a generic SteamOS I will probably have to composite in a bunch more cases
<JoshuaAshton> on NV at least
<swick[m]> yeah, I doubt we'll ever be able to use the current NV pipeline at all
<JoshuaAshton> no 3D LUT at all
<JoshuaAshton> super disappointing
<JoshuaAshton> at least when I was grepping their open-gpu-whatever registers
<swick[m]> that's also my understanding
<swick[m]> no flexibility in the order of operations as well
<JoshuaAshton> I mean... AMD also has no flexibility there :S
<JoshuaAshton> but there's enough stuff
<JoshuaAshton> that you can kinda do what you want
<JoshuaAshton> per-plane 3D LUT, per-crtc 3D LUT, hdr_mul, blend_lut, per-plane ctm, etc etc
<jadahl> vendored methods for offloading compositing have always been on the table, haven't they? just that it was intended to be hidden behind "drivers" in libliftoff
tzimmermann has quit [Quit: Leaving]
<JoshuaAshton> I have not heard of that but that sounds like a great idea to me :-)
<swick[m]> it's one of the ideas discussed and it's not horrible
<jadahl> the hard part is then to figure out a libliftoff api that makes everyone happy :P
<swick[m]> yeah, and making sure everything is consistent. that's my bigger worry...
<jadahl> we need conformance tests, perhaps it'll be as universally used as the wayland conformance tests
<swick[m]> heh
<swick[m]> I still prefer enumerating possible pipeline configurations and describing each element in it sufficiently. that still requires a user space library that can map an arbitrary color pipeline to the hardware pipeline but it already removes a lot of vendor specificness.
MajorBiscuit has quit [Quit: WeeChat 3.6]
<JoshuaAshton> I am waiting for display HW to evolve to the point where it's just a shader :-)
i509vcb has quit [Quit: Connection closed for inactivity]
<drakulix[m]> Please don’t. I don’t have any problems with drivers and some common interface to link against, but I don’t want to rely on libliftoff, given we just built our own plane-offloading code in smithay and libliftoff doesn’t integrate with our APIs that well.. :/
<swick[m]> I also don't like libliftoff very much and prefer a design like drm_hwcomposer
<swick[m]> but that doesn't integrate well with mutter, so... eh
<LaserEyess> I read some of this backlog, and the first question I have is "how does Windows do this?" do they really just leave it to hw manufacturers to just write their own drivers to map to a higher level API?
<LaserEyess> or, do they just... not, and handle everything in userspace
ngor has joined #wayland
jmdaemon has joined #wayland
___nick___ has joined #wayland
<wlb> weston Merge request !1226 opened by Leandro Ribeiro (leandrohrb) Add ICC VCGT tests
jmdaemon has quit [Ping timeout: 480 seconds]
gspbirel56840084987909192 has quit []
gspbirel56840084987909192 has joined #wayland
___nick___ has quit [Ping timeout: 480 seconds]
<JEEB> LaserEyess: if you're talking about gamut/tone mapping, as far as I can tell the windows compositor tries to be as hands-off as possible
<JEEB> it handles any possible SDR/HDR graphics white difference and then clamps/clips to output space
___nick___ has joined #wayland
neonking has joined #wayland
mvlad has quit [Remote host closed the connection]
dcz has quit [Ping timeout: 480 seconds]
ManMower has quit [Ping timeout: 480 seconds]
jmdaemon has joined #wayland
kts has quit [Quit: Konversation terminated!]
bim9262 has quit [Quit: ZNC -]
bim9262 has joined #wayland
ManMower has joined #wayland
julio7359 has quit [Remote host closed the connection]
danvet has quit [Ping timeout: 480 seconds]
julio7359 has joined #wayland
___nick___ has quit [Ping timeout: 480 seconds]
<emersion> drakulix[m]: why not? it's designed to be very much like libdrm and as unintrusive as possible
<JoshuaAshton> Even so, if you wanted to write your own equivalent thing in Rust or whatever, I am sure that'd be fine.
<JoshuaAshton> The real question is who does the "generifying" part.
<JoshuaAshton> I would prefer some userspace component, as we in Gamescope would always like to have an AMD-specific path and generic everywhere else, because we want to get the absolute most out of that hardware, as we ship a device with it and have very high color mgmt demands and also need scanout
ngor has left #wayland [#wayland]
ngortheone has joined #wayland
<ngortheone> Hello. I am playing with the wayland protocol and I am hand-encoding messages to send over the wire. There is something I don't understand and the spec does not really explain. When I construct a `get_registry` request I need to provide an arg of type new_id
<ngortheone> and it seems that the only value that works is 2
<ngortheone> Why is the argument needed if a registry always has obj_id == 2?
<ngortheone> if I specify any other id the server responds with an error message "invalid arguments for wl_display@1.get_registry" (I tried 0, 3, 42)
<ngortheone> what is "new_id" ? How can I know what new_id value is expected?
<ngortheone> After reading the spec I thought new_id is a way to tell the server what id I want to bind the new object to
<ngortheone> But it seems that I am wrong
<ngortheone> There is nothing in the documentation that explains this.
Brainium has joined #wayland
<kennylevinsen> I'd suggest looking at Wayland logging on the server side, as it might tell you more
<kennylevinsen> It is just the ID selected by the client, nothing else. With libwayland the id ends up in this handler:
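For reference, the wire message being discussed can be sketched like this (the header layout is the libwayland convention: sender object id, then message size and opcode packed into one 32-bit word; only 2 works as the first new_id because libwayland expects client ids to be allocated compactly, and with wl_display holding id 1, the next free id is 2):

```python
import struct

def wl_display_get_registry(new_id):
    # Wayland wire format, native endianness:
    #   word 0: sender object id (wl_display is always id 1)
    #   word 1: message size in bytes << 16 | opcode
    #   then the arguments; get_registry (opcode 1) has one new_id arg.
    size = 12    # 8-byte header + one 32-bit argument
    opcode = 1   # wl_display.get_registry
    return struct.pack("=III", 1, (size << 16) | opcode, new_id)
```

`wl_display_get_registry(2)` yields the 12 bytes a fresh client would send; picking 3 or 42 skips over an unallocated id, which is what triggers the invalid-arguments error seen above.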
rasterman has quit [Quit: Gettin' stinky!]