ChanServ changed the topic of #wayland to: https://wayland.freedesktop.org | Discussion about the Wayland protocol and its implementations, plus libinput | register your nick to speak
co1umbarius has joined #wayland
columbarius has quit [Ping timeout: 480 seconds]
rasterman has quit [Quit: Gettin' stinky!]
shashank1202_ has quit [Quit: Connection closed for inactivity]
leon-p has quit [Ping timeout: 480 seconds]
leon-p has joined #wayland
Net147 has quit [Quit: Quit]
Net147 has joined #wayland
yar has quit [Quit: yar]
yar has joined #wayland
leon-p_ has joined #wayland
leon-p_ has quit []
dcz_ has joined #wayland
leon-p has quit [Ping timeout: 480 seconds]
<DemiMarieObenour[m]> The main reason for handling audio separately, IMO, is its much stricter latency requirement.
<DemiMarieObenour[m]> That said, one could definitely tunnel PipeWire in Wayland arrays. However, this would require a compositor and client that did all of their rendering asynchronously, so that slow rendering could not cause audio glitches. It could also interfere with the use of realtime audio threads.
<DemiMarieObenour[m]> <qyliss> "Demi Marie Obenour: will you..." <- I have not done so yet, but I can start! Some of them are security vulnerabilities under embargo, but the rest I can make public on GitHub.