ChanServ changed the topic of #wayland to: https://wayland.freedesktop.org | Discussion about the Wayland protocol and its implementations, plus libinput | register your nick to speak
<d3x0r>
I'm playing with putting wayland with the rdp backend in a docker container... I can't find a good example of the 'cursor-theme=' setting; it looks like it takes a word, but 'Paper' didn't work. It continuously says "could not load cursor 'dnd-move'" (and -none, -copy). I ran with WAYLAND_DEBUG=server, but there was no hint beyond the "could not load"; I ran weston under strace, but strace didn't show it trying to access any files related to that
dcz has joined #wayland
<ManMower>
try strace -f? it's definitely trying to open those cursors
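(For reference, a minimal invocation following that hint; the backend argument is just illustrative of the rdp setup mentioned above:)

```shell
# -f follows forked children (weston-desktop-shell is forked from weston),
# -e trace=%file limits output to file-access syscalls
strace -f -e trace=%file weston --backend=rdp-backend.so 2>&1 | grep -i cursor
```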
dcz_ has quit [Ping timeout: 480 seconds]
<ManMower>
weston-desktop-shell would be the process doing that though, forked from weston
<ManMower>
if you specified Paper, I think it'd search in /usr/share/icons/Paper (and a few other places)
<mvlad>
might want to check the XCURSOR_PATH env variable. Export that to /usr/share/icons, then set the default theme to the proper icon theme.
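(A sketch of the two knobs being discussed; the theme name 'Paper' is just the example from above and must exist under one of the XCURSOR_PATH directories, e.g. /usr/share/icons/Paper/cursors/:)

```ini
# weston.ini
[shell]
cursor-theme=Paper
cursor-size=24
```

and in the environment weston is started from:

```shell
export XCURSOR_PATH=/usr/share/icons
```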
dcz_ has joined #wayland
<d3x0r>
thanks for the -f hint
dcz has quit [Ping timeout: 480 seconds]
fmuellner has joined #wayland
cabal704 has joined #wayland
cabal704 has quit []
vyivel_ is now known as vyivel
<vyivel>
what is the expected behavior of `weston-simple-damage --rotating-transform`?
<ManMower>
to be a total mess
<ManMower>
I think there have been a few efforts to "fix" it in the past, but I'm not sure there's ever been a consensus on what it should actually do.
* ManMower
wonders if we should just remove the option
<emersion>
yeah there are a few MRs with discussion
<emersion>
maybe time to start working on the actual feature i wanted to implement now :P
hergertme has quit []
d3x0r has quit [Remote host closed the connection]
zebrag has joined #wayland
dblsaiko has quit [Remote host closed the connection]
dblsaiko has joined #wayland
hergertme has joined #wayland
txtsd has quit [Quit: WeeChat 3.5]
txtsd has joined #wayland
slattann has joined #wayland
fmuellner has quit [Ping timeout: 480 seconds]
creich has joined #wayland
mokee has joined #wayland
slattann has quit [Read error: Connection reset by peer]
ybogdano has joined #wayland
mokee has quit []
mokee has joined #wayland
mokee has quit []
mokee has joined #wayland
<LaserEyess>
a while ago I was asking questions about wlshm and how most compositors implement it, and I was told it's usually done through a GL texture(?) which is then composited
<LaserEyess>
but assuming you have compatible formats, couldn't you just upload it via libdrm directly and then use linux-dmabuf?
<danieldg>
the client chooses which interface to use, not the server. And yes, client using dmabuf are simpler for the server
<danieldg>
*clients
<LaserEyess>
well, I mean, I'm assuming the server could do this themselves, once it gets the fd from wlshm
<LaserEyess>
but yes I'm talking about the client side, what the client chooses to do
<danieldg>
well, if the client doesn't use the gpu for rendering, then there's no benefit from it making the dmabuf
<LaserEyess>
the usecase here is a subtitle overlay which comes from the CPU
<LaserEyess>
right now the thought is to just use wlshm to upload it
<LaserEyess>
but wlshm is less efficient, so the question is: what can the client do to make it more efficient? can the client use libdrm to upload a buffer in a dmabuf compatible format?
<LaserEyess>
video is already handled with a dmabuf, directly with vaapi and dmabuf, so there is no "rendering" done here, just decoding and moving a buffer
<danieldg>
I think so
<LaserEyess>
ideally, wlshm would just upload the buffer in a compatible format, but I'm told that in most compositors it explicitly does not do that, it goes through openGL and requires some copies
<danieldg>
well, the compositor has to render the subtitle surface on top of the video before sending it to the screen, so that involves a copy
<danieldg>
you can either have the compositor do it or you can do it before giving it to the compositor
<LaserEyess>
something like libliftoff could remove that copy
<danieldg>
but someone has to produce the final frame
<LaserEyess>
but yes I do see your point
<danieldg>
if the compositor is able to use a hardware plane for each of your dmabufs, then yes
<danieldg>
but often that's not done
<LaserEyess>
yes
<LaserEyess>
still though, in the future if that becomes more common it is an optimization I think mpv would like to support
yoslin has quit [Quit: WeeChat 3.5]
yoslin has joined #wayland
<danieldg>
I think passing two dmabufs would let a compositor use two planes for this
<danieldg>
and presentation feedback can tell you if that's happening
<danieldg>
you might want someone who knows the limitations of planes to agree with me here; I'm guessing
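(The presentation-time protocol reports this via the flags argument of the presented event; a sketch, with the flag value copied from presentation-time.xml — the listener wiring itself is elided since it needs the generated protocol headers:)

```c
#include <stdbool.h>
#include <stdint.h>

/* From the presentation-time protocol (wp_presentation_feedback.kind).
 * In real code this constant comes from the generated protocol header. */
#define WP_PRESENTATION_FEEDBACK_KIND_ZERO_COPY (1 << 3)

/* Returns true if the compositor presented our buffer without copying it,
 * i.e. it was scanned out directly (e.g. on a KMS plane). */
static bool presented_zero_copy(uint32_t flags)
{
    return (flags & WP_PRESENTATION_FEEDBACK_KIND_ZERO_COPY) != 0;
}

/* In the wp_presentation_feedback_listener.presented callback you would
 * call presented_zero_copy(flags) and fall back to a composited path
 * if it keeps returning false. */
```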
<LaserEyess>
well, we're talking about hypotheticals of hypotheticals here
<LaserEyess>
I think the first pass at mpv's overlay will just be wlshm, since it's the most portable
<ManMower>
is this an efficiency worth worrying about? most of the stuff I watch that has subtitles, they are a tiny part of the screen and updated every few seconds.
<LaserEyess>
a better thing to do would be to implement, on the compositor side, direct uploading of wlshm buffers
<LaserEyess>
but that is also a hypothetical...
<danieldg>
'direct uploading' is by definition a copy
<LaserEyess>
there will always be one copy from CPU->GPU, I understand this, I am talking about everything that happens after the data is on the GPU
<LaserEyess>
again this is what I was told: but on most compositors wlshm buffers are uploaded in a format that is not compatible with kms planes, so they must be converted
<LaserEyess>
therefore that is a copy (CPU->GPU) and another copy (gl->kms buffer)
<LaserEyess>
I'm also probably confusing some terminology here, I am not very knowledgeable
<ManMower>
just render the subs with GL ;)
<LaserEyess>
that is, funny enough, another thing I wondered if possible
<LaserEyess>
and the answer is: yes, it is! but also, no...
<LaserEyess>
rewriting libass sounds like a fun project for someone who is not me to do
<ManMower>
I don't think wl_shm will ever be handled in a more efficient way than it currently is. a compositor can't turn that into dmabuf after the fact.
<ManMower>
(without a copy)
<emersion>
LaserEyess: wl_shm buffers can't be used by the GPU directly, they need to be copied first
<LaserEyess>
yes that's what you told me
<LaserEyess>
but I thought you said that was a compositor implementation detail, too
<emersion>
"upload it via libdrm directly" you mean GBM maybe?
<emersion>
libdrm doesn't have an upload API
<LaserEyess>
I thought it did, but I guess I don't know what I'm talking about
<LaserEyess>
gbm then
<emersion>
libdrm has APIs to import DMA-BUFs, but that's about it
<LaserEyess>
ah, so you'd have to implement the upload yourself, ok
<emersion>
yea
<emersion>
basically gbm_bo_map + memcpy
<emersion>
it'd work a bit like GL import, except it could be used for scanout
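(Roughly what that gbm_bo_map + memcpy path looks like; the GBM calls need a real DRM device, so they are shown in comments, and the part sketched in code is the stride-aware row copy the mapping requires — gbm_bo_map returns its own stride, which is usually larger than width * 4:)

```c
#include <stdint.h>
#include <string.h>

/* Copy a tightly packed width*height*4 source image into a mapping whose
 * rows are dst_stride bytes apart, as returned by gbm_bo_map(). */
static void upload_argb8888(uint8_t *dst, uint32_t dst_stride,
                            const uint8_t *src, uint32_t width, uint32_t height)
{
    for (uint32_t y = 0; y < height; y++)
        memcpy(dst + (size_t)y * dst_stride,
               src + (size_t)y * width * 4,
               (size_t)width * 4);
}

/* Sketch of the surrounding GBM calls (needs a DRM device, hence comments):
 *
 *   struct gbm_bo *bo = gbm_bo_create(gbm, width, height,
 *           GBM_FORMAT_ARGB8888, GBM_BO_USE_LINEAR | GBM_BO_USE_SCANOUT);
 *   uint32_t stride; void *map_data;
 *   void *map = gbm_bo_map(bo, 0, 0, width, height,
 *                          GBM_BO_TRANSFER_WRITE, &stride, &map_data);
 *   upload_argb8888(map, stride, pixels, width, height);
 *   gbm_bo_unmap(bo, map_data);
 *   int dmabuf_fd = gbm_bo_get_fd(bo);   // hand this to linux-dmabuf
 */
```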
<LaserEyess>
so can I ask again why compositors go through GL instead of that?
<LaserEyess>
I would have figured the latter is easier to do
<emersion>
mostly because they always need to have a fallback path via GL in case KMS cannot scanout the buffer
<emersion>
and doing wl_shm → GL is easier than wl_shm → GBM → GL
<LaserEyess>
ok, that makes sense
<emersion>
also some support(ed) non-GBM APIs
<LaserEyess>
right...
<LaserEyess>
I forgot about that
<emersion>
and also because some drivers didn't support DMA-BUFs
<emersion>
old drivers
<emersion>
mostly irrelevant today i guess
<emersion>
(wlroots hard-requires DMA-BUF support when hw accel is enabled)
lsd|2 has joined #wayland
<LaserEyess>
I guess on mpv's side another idea that I was thinking about was vaapi subpictures, but those are not necessarily portable and there would need to be a wlshm fallback
<LaserEyess>
I also don't know if those work as separate overlay planes, or if they require vavpp scaling/CSC first
<emersion>
probably not
<emersion>
i mean, anything involving vaapi subpictures won't use KMS planes
<emersion>
the easiest/most standard thing to do for video players would be to upload the subtitles to an EGLSurface
<emersion>
"most standard" as in "people who want to do it do this today"
<LaserEyess>
well, yes, mpv does that for most usecases
<LaserEyess>
or, no, it overlays it with openGL/vulkan and then passes a dmabuf
<emersion>
a separate EGLSurface just for the subtitles, displayed as a wl_subsurface
<LaserEyess>
I guess such a thing would be possible with libplacebo, but I don't think the person implementing this wants to do anything with opengl/vulkan at all
<LaserEyess>
I think the OSD will just be wlshm as a first pass, no big loss because as pointed out before, subtitles don't change that much
<LaserEyess>
I'm simply not knowledgeable enough to know what is "most efficient" when it comes to this stuff, besides "fewer copies = better", but I don't know when the copies happen, so
<any1>
What about rendering the subtitles on top of the video buffer?
<LaserEyess>
in this case the video buffer is an unscaled buffer with no colorspace conversion (CSC) applied
<LaserEyess>
subtitle buffers are BGR and vaapi buffers are usually nv12
<LaserEyess>
so there would need to be CSC and scaling in vaapi first, then they could be overlaid
<LaserEyess>
as it is currently implemented none of that is done, it's just sent to kms hardware as-is
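(For reference, the CSC step being discussed is the usual fixed-point YCbCr→RGB conversion; a sketch assuming BT.601 limited range — the actual matrix and range depend on the video:)

```c
#include <stdint.h>

static uint8_t clamp_u8(int v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v);
}

/* BT.601 limited-range YCbCr -> RGB, 8.8 fixed point. */
static void ycbcr601_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                            uint8_t *r, uint8_t *g, uint8_t *b)
{
    int c = y - 16, d = cb - 128, e = cr - 128;
    *r = clamp_u8((298 * c + 409 * e + 128) >> 8);
    *g = clamp_u8((298 * c - 100 * d - 208 * e + 128) >> 8);
    *b = clamp_u8((298 * c + 516 * d + 128) >> 8);
}
```

In NV12 the Y plane is full resolution while Cb/Cr are interleaved at half resolution, so each chroma pair is shared by a 2×2 block of luma samples; vaapi's vpp (vavpp) would normally do this conversion plus the scaling in hardware.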
mokee has quit []
jmdaemon has joined #wayland
<any1>
It's probably faster to render it without the conversion.
<any1>
It might be interesting to see if it's possible to use an nv12 buffer as a render buffer attachment though. :)