ChanServ changed the topic of #wayland to: https://wayland.freedesktop.org | Discussion about the Wayland protocol and its implementations, plus libinput | register your nick to speak
<hardening>
hi guys, if I want to implement a custom remoting protocol in weston that will use real hardware (S3 instances), is my best choice to use the drm backend on a render node, create virtual outputs, and wire up my own remoting plugin?
mvlad has joined #wayland
floof58 has joined #wayland
floof58_ has quit [Ping timeout: 480 seconds]
flacks has quit [Quit: Quitter]
flacks has joined #wayland
<pq>
hardening, if you can run drm-backend, that's an option, yeah. You can't use a render node though, it needs a proper KMS node.
<pq>
hardening, the alternative being a whole new backend a la rdp-backend.
<hardening>
pq: just curious, what makes it required to have a proper KMS node, what is missing in a render node ?
<pq>
the drm-backend has a check that it wants a real KMS node, nothing else
<pq>
because presumably running drm-backend on a non-KMS node makes no sense... until now?
<pq>
also on a system which *does* have a real KMS node and a KMS-incapable primary node (separate KMS and render devices), you really don't want to accidentally pick the non-KMS node.
<pq>
hence it searches through all DRM devices (primary nodes) on the seat and accepts only one that does KMS for real, i.e. has the resources needed to light up an output.
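In libdrm terms, the check pq describes comes down to whether drmModeGetResources() on the node reports any CRTCs, encoders and connectors. A pure-logic sketch of that predicate, with hypothetical names (Weston's real drm-backend check is more involved):

```c
#include <stdbool.h>

/* Counts as reported by drmModeGetResources() for a DRM primary node.
 * Hypothetical struct for illustration; real code reads a drmModeRes *. */
struct kms_resources {
	int count_crtcs;
	int count_encoders;
	int count_connectors;
};

/* A node "does KMS for real" only if it has everything needed to
 * light up an output: at least one CRTC, encoder and connector. */
static bool node_is_kms_capable(const struct kms_resources *res)
{
	return res->count_crtcs > 0 &&
	       res->count_encoders > 0 &&
	       res->count_connectors > 0;
}
```

A render node, or a KMS-incapable primary node, reports zero of each and fails this test, which is why drm-backend skips it.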
<hardening>
ah, that's why it keeps bothering me about a seat
<pq>
Not really? A seat is the way to tie a set of input and output devices together.
<pq>
A physical seat, that is. ID_SEAT in udev.
<hardening>
hum, so if I relax these checks I could create a custom seat with a render node, and that should work?
sychill has joined #wayland
<pq>
not with a render node, but a KMS-incapable primary node
<pq>
if you want render node only, you'd use headless-backend, but then you don't get the virtual output API.
<pq>
If you want to use render node with drm-backend, you need to hack more, like stop trying to set/drop DRM master.
<emersion>
oh snap, apparently the weston beta was yesterday
<emersion>
for some reason my calendar didn't notify me
<pq>
yeah, I think it was supposed to be just before I came back today :-)
<emersion>
let's look at the pending patches
<emersion>
hrm, gitlab doesn't send an email when someone approves an MR
<pq>
hardening, to recap, there are two kinds of DRM device nodes: primary and render. Primary nodes may or may not have KMS resources. Currently DRM-backend wants a primary node that has proper KMS resources, IIRC.
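The split pq describes is also visible in the device nodes themselves: /dev/dri/cardN are primary nodes, /dev/dri/renderDN (minor 128 and up) are render nodes. A sketch of the kernel's minor-number convention, roughly what libdrm's drmGetMinorType() does internally (the enum names here are made up):

```c
/* The kernel carves the DRM char-device minor space into fixed ranges. */
enum drm_node_kind {
	NODE_PRIMARY,  /* /dev/dri/cardN,     minors   0..63  - may or may not do KMS */
	NODE_CONTROL,  /* /dev/dri/controlDN, minors  64..127 - legacy, unused */
	NODE_RENDER,   /* /dev/dri/renderDN,  minors 128..191 - render/compute only */
	NODE_UNKNOWN,
};

static enum drm_node_kind classify_drm_minor(int minor)
{
	if (minor >= 0 && minor < 64)
		return NODE_PRIMARY;
	if (minor >= 64 && minor < 128)
		return NODE_CONTROL;
	if (minor >= 128 && minor < 192)
		return NODE_RENDER;
	return NODE_UNKNOWN;
}
```

Note the kind only tells you primary vs render; whether a primary node actually has KMS resources is a separate question.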
<hardening>
pq: ok, but with a render node we don't really need KMS? So if I added a need_kms flag in the drm backend to skip those tests and that configuration, could I get the drm-backend working on a render node?
<hardening>
and so as a first test I could have the remoting working on a render node ?
<pq>
hardening, if you also disable all input, then you have pretty much converted drm-backend into headless-backend that also exposes the virtual output API.
<hardening>
pq: hum, ok, but then why can't headless do remoting?
<pq>
...and skip session control (launchers)
<pq>
because no-one bothered to move the code from drm-backend to core
<emersion>
surfaceless EGLSurface can't easily be exported to DMA-BUFs
<pq>
I think there is also... yes, that
<emersion>
gbm can
<emersion>
in wlroots we just use gbm for headless
<hardening>
yeah the remoting looks very dmabuf oriented
<pq>
something needs to allocate and export the dmabuf, which means headless would pretty much need to depend on gbm
<emersion>
ie, the backends don't allocate memory, they just take buffers and display them
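Since the discussion keeps coming back to dmabufs: each dmabuf plane is described by a file descriptor plus stride, offset, a format modifier, and a DRM fourcc pixel-format code. The fourcc encoding is the little-endian packing used by <drm_fourcc.h>'s fourcc_code() macro:

```c
#include <stdint.h>

/* DRM pixel formats are four ASCII characters packed little-endian,
 * mirroring the fourcc_code() macro from <drm_fourcc.h>. */
static uint32_t fourcc_code(char a, char b, char c, char d)
{
	return (uint32_t)a | ((uint32_t)b << 8) |
	       ((uint32_t)c << 16) | ((uint32_t)d << 24);
}
```

For example, DRM_FORMAT_XRGB8888 is fourcc 'XR24', i.e. the value 0x34325258.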
<hardening>
well I'm not against a headless running on a GPU
<pq>
hardening, oh, do you not need dmabuf?
<emersion>
weston headless already runs on the GPU
<pq>
headless-backend already runs on a GPU, just not with GBM.
<emersion>
if you just need to glReadPixels it should be fine
<pq>
if you are happy to use glReadPixels instead of exporting dmabuf, then you don't even need the virtual output API.
<emersion>
damn
<pq>
the whole point of the virtual output API is to get a dmabuf, because of zero-copy and further GPU processing.
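To put numbers on the zero-copy argument: a glReadPixels path copies every frame from GPU to CPU memory before the encoder can touch it. A toy estimate (hypothetical helper, not Weston code):

```c
#include <stdint.h>

/* Back-of-the-envelope cost of a glReadPixels()-style readback:
 * every frame is copied in full, so bandwidth scales with
 * resolution, pixel size and frame rate. */
static uint64_t readback_bytes_per_second(uint32_t width, uint32_t height,
					  uint32_t bytes_per_pixel,
					  uint32_t fps)
{
	return (uint64_t)width * height * bytes_per_pixel * fps;
}
```

At 1920x1080, 4 bytes per pixel and 60 fps, that is roughly 0.5 GB/s of copying that a dmabuf handed directly to the encoder avoids.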
<hardening>
hum so I'd better start from the headless backend
<emersion>
i swear i'm not cheating by reading your mind :P
<pq>
hahaha
<hardening>
well, what we have in mind is to pass the generated content to nvenc afterwards for h264 encoding
<hardening>
perhaps a first version that uses glReadPixels would be a good starting PoC
<pq>
If you want to make hardware encoding efficient, then you want dmabuf.
<hardening>
yeah sure
<pq>
If you just want to try something PoC that is not efficient, then headless-backend and the screenshooting hooks might do.
<hardening>
and so if I want headless to have dmabuf I must make it use GBM ?
<pq>
probably, yeah
<pq>
I think the screen-share plugin does something like that PoC... it uses renderer->read_pixels()
<pq>
It presents to another Weston instance, acting as a Wayland client.
gryffus has quit [Remote host closed the connection]