ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
<ElementW> Mesa and many of its drivers support GL_{EXT,ARB,KHR}_robustness, but is there a way to trigger a context loss to test it out? I haven't seen anything in either Mesa or the amdgpu/radeonsi (si_pipe)/drm kernel modules that would allow that
<bnieuwenhuizen> ElementW: put an infinite loop in a shader
<MrCooper> ElementW: accessing /sys/kernel/debug/dri/0/amdgpu_gpu_recover should trigger a GPU reset, though beware this will currently result in spectacular fireworks most likely
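A minimal sketch of poking that debugfs hook from C (requires root and CONFIG_DEBUG_FS; the "0" in the path is whichever DRM minor your card is):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[16];
            int fd = open("/sys/kernel/debug/dri/0/amdgpu_gpu_recover", O_RDONLY);
            if (fd < 0)
                    return 1;
            /* the read itself is what kicks off the GPU recovery */
            read(fd, buf, sizeof(buf));
            close(fd);
            return 0;
    }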
<ElementW> bnieuwenhuizen: Don't modern GPUs have timeouts on the execution times of shaders instead of locking up the DRM job stream/GPU ring?
<bnieuwenhuizen> which will cause a context loss
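For reference, bnieuwenhuizen's suggestion boils down to compiling something like the shader below (a sketch; the uniform keeps the compiler from proving the loop terminates and deleting it):

    /* GLSL source embedded as a C string; set u_zero to 0.0 at draw time */
    static const char *hang_fs =
            "#version 330\n"
            "uniform float u_zero;\n"
            "out vec4 color;\n"
            "void main() {\n"
            "    float x = 1.0;\n"
            "    while (x > u_zero)\n"   /* never false for u_zero == 0.0 */
            "        x = max(x, 1.0);\n"
            "    color = vec4(x);\n"
            "}\n";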
<ElementW> Seems like amdkfd causes a GPU reset when a queue refuses to unmap, but again I'm skeptical about whether an infinite-loop shader would cause that; I'll try
<ElementW> MrCooper: Thanks for the hint, I hadn't seen that param when exploring the chain, silly me because it directly calls `amdgpu_device_gpu_recover`
<ElementW> I'm surprised, Xorg, plasma-desktop, Konsole, Kate, Quassel, and Yakuake all survive a GPU reset. kwin_x11 doesn't though
<ElementW> bnieuwenhuizen: It takes a while for the driver to take notice but eventually an infinite loop does trigger a GPU reset
<ElementW> Are shaders really uninterruptible?
<bnieuwenhuizen> ElementW: pretty much
<bnieuwenhuizen> the driver tries to reset only some parts of the GPU if possible (which it might be with the infinite loop case), which hopefully means you don't lose VRAM contents
<bnieuwenhuizen> (if VRAM contents are lost, good luck with your desktop)
<ElementW> bnieuwenhuizen: I've noticed that; amdgpu_gpu_recover only triggers the reset part, to lose VRAM contents it's amdgpu_evict_vram, and amdgpu_evict_gtt if GTT is present (APUs)
<ElementW> at least if I understand "evict" properly
<X512> bnieuwenhuizen: Doesn't the desktop recreate GPU buffers on device loss?
<bnieuwenhuizen> X512: most stuff doesn't
<X512> So an infinite loop in a shader can globally disrupt all GPU clients?
<bnieuwenhuizen> yes
<bnieuwenhuizen> don't you love GPUs? :)
<X512> A malicious WebGL application in a browser could run infinite-loop shaders and restart on every device loss.
<X512> So the desktop would become unusable.
<ElementW> Welp, that's a whole world of disaster I've just discovered. Just because I came across those GL robustness extensions
<X512> I feel that some GPUs support shader preemption.
<ElementW> You'd think desktop environments would focus on being able to handle those cases too, but that's not a given... KDE, be it Wayland or X11, utterly fails recovery at all levels. kwin_x11 doesn't reset its X11 connection, triggering repeated BadAccess:Composite:RedirectSubwindows errors, kwin_wayland just freezes (probably doesn't recreate some DRM stuff), plasma locks up half the time because the compositor is gone...
<ElementW> And somehow glxgears/eglgears survives, but vkcube doesn't (just freezes frame)
<FLHerne> KWin/Plasma do try to handle context loss through GL_KHR_robustness
<FLHerne> it probably isn't well tested though
<ElementW> "isn't well tested" would be an understatement
<FLHerne> despite DEdmundson's wording, I'm pretty sure it's not specific to the nvidia blob
<FLHerne> that driver just implemented KHR_robustness earlier and apparently relies on apps to use it more often
<ElementW> Doesn't detract from the fact that nvidia's driver being unable to keep buffer object contents across suspend/VT switch is unacceptable
<ElementW> Not that I would expect any better from them really
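For context, the app-side recovery path FLHerne describes below looks roughly like this (a sketch; it assumes a context created via a *_create_context_robustness extension with the LOSE_CONTEXT_ON_RESET strategy, and recreate_everything() is a placeholder for app-specific re-init):

    /* poll once per frame */
    GLenum status = glGetGraphicsResetStatusARB();
    if (status != GL_NO_ERROR) {
            /* GUILTY/INNOCENT/UNKNOWN_CONTEXT_RESET: the context is dead,
             * along with every buffer, texture and shader in it */
            destroy_context_and_resources();
            recreate_everything();
    }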
<X512> I remember when I made a small game, full recreation of resources was needed on many events, like resolution changes.
<X512> Maybe it was Direct3D 9.
<ElementW> Isn't it D3D9Ex or some other variant where you have to restart everything whenever a change to the context or an error happens?
<ElementW> I wanted to see how the kernel keeps track of which buffer objects are owned by whom, by starting from a glBufferStorage call... I didn't expect the call chain to be THAT long
<ElementW> glBufferStorage -> _mesa_BufferStorage -> inlined_buffer_storage -> buffer_storage -> _mesa_bufferobj_data -> bufferobj_data -> screen->resource_create -> si_resource_create -> si_resource_create -> si_alloc_buffer_struct -> si_alloc_resource -> sscreen->ws->buffer_create -> amdgpu_buffer_create -> amdgpu_bo_create -> amdgpu_create_bo ->
<ElementW> LIBDRM amdgpu/amdgpu_bo.c: amdgpu_bo_alloc -> ioctl(DRM_AMDGPU_GEM_CREATE) -> amdgpu_gem_create_ioctl -> amdgpu_gem_object_create -> drm_gem_handle_create -> drm_gem_handle_create_tail -> idr_alloc
<ElementW> And even through all that, since GLX is a thing, all handles to the device are opened as Xorg... I wanted to list each process' graphics resource usage, and that means /sys/kernel/debug/dri/0/amdgpu_gem_info is useless for that purpose
<X512> libdrm_amdgpu.so implements device and buffer object management.
<ElementW> X512: I'm specifically looking for kernel-side management, namely GEM/TTM, because those are what know which PID owns an fd and its resources
<X512> If I understand correctly there are 2 mechanisms of buffer owning: device fd + GEM handle and dma_buf fd.
<ElementW> Most resources are created as GEM handles though, the only major exception being wl_buffers that are shared over dma_buf instead of wl_shm
<ElementW> And even then I'm not sure many Wayland clients use linux_dmabuf_unstable_v1 yet
<ElementW> Ah wait, wlroots has wl_shm objects allocated as dma_buf (except for the shm backend, used with pixman I guess)
<robclark> danvet: `fs_reclaim_acquire(GFP_KERNEL); might_lock(&some_lock); fs_reclaim_release(GFP_KERNEL);` should be enough to make lockdep scream if some_lock is held anywhere there is an allocation that can do direct reclaim, even if some_lock is not directly acquired in shrinker?? Situation is some_lock can be acquired in submit retire path, which shrinker could indirectly depend on.. trying to figure out how to get lockdep to tell
<robclark> me about any remaining allocations with __GFP_DIRECT_RECLAIM while holding that lock, but that fs_reclaim dance doesn't seem to be doing that..
<danvet> yup
<danvet> robclark, dma_resv_lockdep() for an example
<danvet> robclark, I can also recommend the recently added might_alloc()
<danvet> liberally sprinkle that in all the places where allocations can happen, but rarely do
<danvet> to complement your annotations for any paths leading towards allocations/reclaim
<danvet> lockdep can then connect the dots for you
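Spelled out, the priming dance under discussion looks like this (a sketch patterned on dma_resv_lockdep(); some_lock stands in for whatever lock the shrinker-reachable path takes):

    /* once at init: teach lockdep that some_lock nests inside fs_reclaim,
     * so any GFP_KERNEL allocation under some_lock later produces a splat */
    fs_reclaim_acquire(GFP_KERNEL);
    might_lock(&some_lock);
    fs_reclaim_release(GFP_KERNEL);

    /* and, per danvet, in paths that can allocate but rarely do: */
    might_alloc(GFP_KERNEL);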
<robclark> hmm, in this case, there are defn allocations, it is just a matter of tracking down which things indirectly alloc with GFP_KERNEL, where we should instead be propagating down gfp_t so we can tell 'em to -ENOMEM if necessary
<danvet> ah if it's always allocating then you're fine, since all alloc entry functions should have might_alloc annotations
<danvet> so even if they don't go into any slowpath, much less reclaim
<danvet> might_alloc is more if you have a local cache or something and stuff like that
<robclark> danvet: hmm, fun.. kasan does GFP_KERNEL allocations in code paths that disallow reclaim !?!
<danvet> robclark, why do you not allow allocations in there?
<danvet> or I'm confused
<danvet> should probably upgrade my tree :-)
<danvet> robclark, no drm_gem_get_pages_gfp in my tree?
<robclark> danvet: it is some WIP stuff I'm working on.. I want the allocations after I start acquiring locks in the submit path to fail, so I can unwind locks and call shrinker myself.. since otherwise shrinker can't stall waiting for in-flight submits to finish
<robclark> yeah, that is part of my WIP ;-)
<danvet> it does sound like a botched idea though, if you do get_pages with anything other than GFP_KERNEL then that sounds very wrong
<danvet> robclark, why?
<danvet> or differently, we have/had this all over the place in i915, it's kinda terrible
<robclark> scenario is total # of GEM buffers > RAM, but working set needed for any one submit is less.. so need things to stall on reclaim if userspace queues up too much stuff
<robclark> yeah, i915 does similar, but without using the gem helper
<robclark> danvet: the rabbit hole started with writing an igt test similar to i915/gem_shrink .. basically because I want some test coverage of shrinker
<danvet> robclark, yeah but why can your shrinker not just trylock?
<danvet> you can still call your shrinker directly if that's not enough
<danvet> trylock on the per-bo locks (of which most shouldn't be held), ofc the list management locks can't be trylock
<danvet> ideally they're all spinlocks so you cannot get it wrong (since allocations under spinlock is never allowed)
<danvet> robclark, ofc if the per-bo trylock isn't enough I guess you can still do the direct reclaim thing
<robclark> we do already trylock.. the issue is that retire path needs to acquire locks, so retire can block on held locks meanwhile shrinker is waiting on retire
<danvet> robclark, why does retire need to hold these locks?
<danvet> that sounds pretty wrong and should call for more dma_fence_begin/end_signalling annotations
<danvet> to figure out where things are wrong
<danvet> robclark, the classic reason is drm_gem_object_put, and that might need to be offloaded to a worker
<danvet> i.e. trylock on the per-bo lock, if that fails, push the final cleanup to a worker
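A sketch of that trylock-or-defer pattern (names are illustrative, not any driver's actual code):

    /* shrinker side: only touch objects whose lock is uncontended */
    if (!dma_resv_trylock(obj->resv))
            return 0;                        /* skip, move on to the next bo */
    freed = purge_pages(obj);                /* hypothetical helper */
    dma_resv_unlock(obj->resv);

    /* final-unref side: never block here; punt to a worker instead */
    if (!dma_resv_trylock(obj->resv))
            queue_work(wq, &obj->free_work); /* worker takes the lock and frees */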
<robclark> well the initial issue is queue->lock.. although that can be moved, but we still have the problem with bo locks.. I can't use dma-fence/resv because I don't know *what* fence to block on, and also because we can observe the fence signalled before the retire path has a chance to do its bookkeeping and cleanup
<danvet> uh
<danvet> I think you have more work here :-)
<danvet> you pretty much need to be able to unbind a bo completely while holding just dma_resv_lock, and all other locks and book-keeping need to be shrinker safe + a bunch of dma_resv_wait
<danvet> do not look at i915 for how to do this, you will be disastrously misguided
<robclark> dma_resv_lock == obj lock
<danvet> yup
<danvet> since otherwise every driver has their own scheme and it becomes a huge mess
<robclark> which can deadlock between submit direct reclaim and retire ;-)
<danvet> at least that's kinda my goal with trying to standardize everyone who tries to do a shrinker onto
<danvet> yeah but why does your retire have to take dma_resv_lock?
<robclark> disallowing direct reclaim isn't a bad approach, and it seems workable other than a few surprises like kasan
<danvet> the problem is that then every driver does their own fun games
<danvet> plus you fall on your face if it's not your own shrinker that's holding all the memory
<danvet> so you need a get_pages path somewhere that does GFP_KERNEL
<danvet> or you're just busted by design
<robclark> because obj lock needed to serialize things like unpinning vma.. some of that could perhaps be split into different lock, but anything that tries to alloc asking for no direct reclaim ending up in direct reclaim is a bug
<danvet> yeah the kasan thing is a bug
<robclark> most drivers don't do very much fun games.. I looked at shmem helpers and ttm, and they both have very limited shrinking
<danvet> yeah I know, I'm trying to fix that
<danvet> and it's _really_ hard
<robclark> i915 is pretty complex, but it mostly gets this right afaict
<danvet> nah
<danvet> like not even close imo
<robclark> or at least it doesn't instantly burst into flames when you try to have more GEM than RAM
<danvet> where I think i915 goes wrong is with a really heavy weight retire worker
<danvet> and sounds like msm is also doing a lot in there
<danvet> yeah it doesn't blow up
<danvet> doesn't make the design a good idea :-)
<danvet> (mostly doesn't blow up)
<danvet> robclark, so why do you need to pin your vma?
<danvet> without pinning the underlying object
<robclark> well, most of what we need the lock for is accounting of how many pages can be shrunk (to make shrinker->count() not have to lock and iterate things)
<robclark> vma thing could possibly be dropped.. it is just for error checking
<danvet> robclark, if it's just for checking, walk the dma_resv fence list and filter for fences pertaining your vm?
<danvet> robclark, and why does the retire work need to account memory?
<robclark> to track active vs idle pages
<danvet> the shrinker should be able to just call dma_fence_wait and eat into any still-active memory
<danvet> robclark, why that?
<robclark> to make shrinker->count fast ;-)
<danvet> yeah I get why you want to count this
<danvet> but why do you need to count active and idle separately?
<danvet> shrinker is supposed to report the overall working set
<danvet> and if you have an especially big working set, the next task pulling more back in should balance stuff
<danvet> or if your shrinker can't evict it all
<robclark> because depending on path into shrinker you may or may not want to advertise that you can release active pages and would prefer vmscan to look elsewhere
<danvet> robclark, you can just fail to shrink those if dma_fence_wait would block?
<danvet> which yes is a good idea
<danvet> some random heuristics like only block in direct reclaim after a few rounds or so
<danvet> at least that's my cargo-culted understanding of this
<danvet> shrink less: sure
<danvet> report less: uh why
<robclark> mostly because how much you report determines how much you are asked to release
<robclark> anyways, I'm still playing with ideas, but letting allocations return -ENOMEM and then unwinding locks and shrinking doesn't seem unreasonable
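Sketched out, the idea robclark is floating (helper names hypothetical):

    retry:
            /* with submit locks held, refuse direct reclaim: GFP_KERNEL
             * minus __GFP_DIRECT_RECLAIM fails fast with -ENOMEM instead */
            ret = get_pages(obj, GFP_KERNEL & ~__GFP_DIRECT_RECLAIM);
            if (ret == -ENOMEM) {
                    submit_unlock_all(submit);   /* unwind our locks */
                    run_shrinker_directly();     /* safe: nothing held now */
                    submit_relock_all(submit);
                    goto retry;
            }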
<danvet> well yeah it does proportional shrinking
<robclark> I could re-work some of the obj locking, but at *some* point I need to acquire the queue lock to serialize generating userspace fence.. and I do need it to remove from idr in the retire path
<danvet> robclark, imo the only case this was legit for i915 was due to the big lock with dev->struct_mutex
<robclark> so figured re-working obj locking doesn't solve the root issue
<danvet> robclark, why do you need queue->lock in all this?
<robclark> *do need queue->lock in retire path
<robclark> danvet: __msm_gem_submit_destroy()
<robclark> is main thing.. that could be pushed off to unbound wq or so..
<danvet> uh list_lru.h
<danvet> robclark, more reasons to always count the entire lru in shrinkers I guess
<danvet> maybe should very strongly encourage folks to use that thing
<danvet> I didn't know it existed ...
<danvet> daniels, list_lru.h for dmitry's shrinker work I guess
<danvet> robclark, split the fence_idr into its own spinlock?
<danvet> so you can put it easily
<danvet> practically falls out by converting over to xarray I think
<danvet> using the entire queue lock there is indeed not great
<danvet> and I should really try to figure out a way to upstream my dma_fence signalling annotations for drm/sched
<robclark> yeah, that is a possibility
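Concretely, something like this (field names hypothetical; an xarray carries its own internal spinlock, so the retire path no longer needs queue->lock just for the fence id):

    /* submit path: hand out a userspace-visible fence id */
    u32 id;
    ret = xa_alloc(&queue->fence_xa, &id, fence, xa_limit_32b, GFP_KERNEL);

    /* retire path: drop it without touching queue->lock */
    xa_erase(&queue->fence_xa, id);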
<danvet> otherwise things in that function look all fairly safe and standard
<danvet> daniels, I replied on-list to dmitry so it's not lost in the long w/e here
<robclark> the other thing the obj lock is protecting in the retire path is the transition into inactive_dontneed (purgeable), because userspace can madv(dontneed) without waiting for submit to retire (and we track purgeable separately because we only consider purgeable if there is no swap)
<robclark> I guess some of that could move under mm_lock instead
<danvet> robclark, imo that should just be handled by "should I eat into active objects or not" heuristics in the shrinker
<danvet> you don't even have to really update this in the retire work, it's just spending cpu cycles for the hopefully not so common case
<danvet> in generally looking at i915 maintaining explicit active lists turned out to be a mistake
<robclark> hmm, not sure about that, because not having swap isn't that uncommon
<danvet> unified lru + active heuristics
<danvet> robclark, oh the separate purgeable list makes sense
<danvet> but splitting your purgeable list into active/inactive parts not so much
<danvet> also I really think all these higher level heuristics should be moved into shared code
<robclark> only single purgeable list.. but on retire obj can move from active list to purgeable list
<danvet> hence why I really want a shmem helpers shrinker
<danvet> robclark, but why is it on that separate list to begin with?
<danvet> just toss the active list entirely
<danvet> as long as it's roughly lru it should be fine
<danvet> since if you're in "shrink but don't stall mode" you will only rarely hit an object which is still active and where the 0 timeout wait bails out
<danvet> or whatever we have as a primitive
<robclark> I'll look at list_lru (although the existing lists are lru order already)
<danvet> and once you eat into the active bo set, you're kinda in the world of pain anyway
<danvet> so burning a bunch more cpu time on not so efficient list walking is "meh"
<danvet> plus when you don't split off your active purgeable objects, your shrinker has more stuff to nuke on swap-less systems
<robclark> hmm, no I think you do still need to care about that with the way chromebooks use zram swap.. you want shrinker to be fast when you are just starting to get into memory pressure
<robclark> because memory pressure is a constant thing
<danvet> yeah I know
<danvet> that's also why I think we should lift the various "is it already ok to call dma_fence_wait or have we not shrunk enough other crap" heuristics into helpers
<danvet> and not duplicate in each driver in slightly different ways
<danvet> because in normal case you do indeed want to avoid stalling on dma_fence_wait like the plague
<robclark> well, only two drivers doing it now.. msm and i915 ;-)
<danvet> it's kinda like direct writeback stall
<danvet> robclark, yeah and that already freaks me out :-)
<robclark> and not entirely sure how i915 can use shmem helpers.. with its nutty stuff like multiple cpu maps
<danvet> we're nuking the nutty stuff
<danvet> mostly
<danvet> for existing platforms the idea is to use ttm (and maybe shmem helpers eventually) for the standard case
<danvet> and then kinda keep the multi map stuff as a side wagon of some kind
<danvet> or just not use the helpers for cpu mmap at all and keep the i915-gem stuff
<danvet> for dgpu it's pure ttm single cpu mmap or go away already
<airlied> emersion, jekstrand, danvet : came up with a workaround for the dma-buf advertising
<emersion> oh?
<airlied> just add a config option to jekstrand's patch
<airlied> then add a separate patch to drm select that config option and report a new cap to userspace
<airlied> it should avoid the backport trap
<airlied> since if someone forgets to backport the dma-buf piece they won't advertise the new cap
<emersion> do we really need a config option though?
<emersion> if we're going for a DRM cap we could just include the uapi header and #ifdef on the IOCTL?
<danvet> airlied, gregkh hasn't replied yet, I guess we wait until next week
<danvet> kernel recipes was this week
<danvet> emersion, yeah that's probably best
<danvet> wrap it in some #ifdef or so
<danvet> but I do hope gregkh figures out something that's less silly
<airlied> emersion: yes a config option avoids the backport trap
<airlied> but yeah we could just include the uapi header
<airlied> but this makes it less likely to be missed in a backport
<airlied> since the dep is explicit in Kconfig
<emersion> i see
<bnieuwenhuizen> problem with a config option is someone is going to forget to enable it?
<airlied> no the drm will select it
<airlied> I'd suggest default y for it, but that brings the eye of Linus
<airlied> we already select DMA_SHARED_BUFFER, just need to add another line
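The shape of what airlied is proposing, roughly (the cap and Kconfig symbol names here are placeholders, not the actual patch):

    /* in the drm_getcap() switch: the cap only reads true when the
     * dma-buf Kconfig symbol (selected by DRM) is present in the tree,
     * so a missed backport of the dma-buf piece cannot advertise it */
    case DRM_CAP_DMABUF_SOMETHING:
            req->value = IS_ENABLED(CONFIG_DMABUF_SOMETHING);
            break;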
<robclark> danvet: btw, list_lru looks like it needs to grow a LRU_STOP_TRYING if you want to keep things on the LRU that cannot be immediately shrunk (it has no status enum value to bail out of list iteration)
<danvet> hm ...
<danvet> robclark, maybe it's not that good an idea then really
<danvet> there's also a few other things in there wrt node awareness that I guess don't make a lot of sense
<robclark> adding a way to bail out of list iteration isn't too hard, I'm unsure about the node awareness thing but it appears to all no-op if you build w/out numa support
<robclark> (although I suppose that is a thing that maybe is enabled in distro kernels)
<danvet> yeah distro tend to enable this
<robclark> but I guess one option is drm_lru ;-)
<robclark> or drm_gem_lru or so
<danvet> yeah
<danvet> which gets us back to shmem helpers or soemthing like that I guess
<danvet> robclark, since I didn't look careful enough, can you pls reply to dmitry's series that maybe not so good idea to use list_lru?
<robclark> I'd consider making drm_gem_lru separate from (although used by) shmem helpers..
<robclark> it is probably a thing msm could use sooner than shmem helpers
<robclark> k
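For reference, a list_lru walk callback looks like this; the status enum only offers remove/rotate/skip/retry-style results, with nothing that aborts the whole walk, which is the gap robclark points out above (sketch with made-up driver types):

    static enum lru_status scan_one(struct list_head *item,
                                    struct list_lru_one *lru,
                                    spinlock_t *lock, void *cb_arg)
    {
            struct my_bo *bo = container_of(item, struct my_bo, lru_node);

            if (bo_is_active(bo))            /* hypothetical check */
                    return LRU_SKIP;         /* no LRU_STOP_TRYING exists */
            list_lru_isolate(lru, item);
            free_bo_pages(bo);               /* hypothetical helper */
            return LRU_REMOVED;
    }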
<danvet> zackr, I guess the case that's broken is when the compositor uses the cursor plane for something which isn't a cursor
<danvet> which with the vm moving the cursor is not great at all
<danvet> and I think that's the one you're missing
<danvet> and which emersion is trying to fix
<danvet> I wonder whether we should have a new PLANE_TYPE_VIRTUAL_CURSOR for these
<danvet> and then the opt-in setcap enables that for userspace which understands how that kind of cursor is a different beast entirely
<danvet> and then if you want to do a minimal backport, you backport just that type
<danvet> and hiding it from userspace
<danvet> with nothing else enabled
<danvet> so that the legacy cursor ioctl -> atomic cursor plane routing still works
<emersion> yeah, please make the VM case an opt-in
<emersion> it really doesn't work like real hw
<emersion> and i'm still pretty sure CRTC_X/CRTC_Y are getting mangled by the VM software
<mareko> how does this new cap name sound to you? PIPE_CAP_MAX_TEXEL_BUFFER_KILO_ELEMENTS
<zmike> what about MILE_ELEMENTS
<mareko> it's the standard SI kilo/mega/giga etc.
<zmike> mine was a joke
<zmike> is this for thousands of texels?
<mareko> yes
<zmike> seems fine
<FLHerne> is it actually thousands, or 1024s?
<FLHerne> latter maybe _KIBI_ except it sounds silly
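On the consuming side the intent would be something like this (a sketch; the final cap name is whatever mareko lands, and per his comment "kilo" is SI, i.e. x1000 rather than x1024):

    /* gallium frontend: driver reports thousands of texels, scale back up */
    uint64_t max_texels = 1000ull *
            screen->get_param(screen, PIPE_CAP_MAX_TEXEL_BUFFER_KILO_ELEMENTS);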