ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
nchery has quit [Remote host closed the connection]
nchery has joined #dri-devel
pyromancy has left #dri-devel [#dri-devel]
pyromancy has joined #dri-devel
pyromancy has left #dri-devel [#dri-devel]
Zopolis4 has quit [Quit: Connection closed for inactivity]
djbw has quit [Read error: Connection reset by peer]
nchery has quit [Ping timeout: 480 seconds]
nchery has joined #dri-devel
djbw has joined #dri-devel
mbrost has joined #dri-devel
mbrost_ has joined #dri-devel
mbrost has quit [Read error: Connection reset by peer]
mbrost__ has joined #dri-devel
co1umbarius has joined #dri-devel
nchery has quit [Ping timeout: 480 seconds]
columbarius has quit [Ping timeout: 480 seconds]
mbrost_ has quit [Ping timeout: 480 seconds]
Jeremy_Rand_Talos has quit [Ping timeout: 480 seconds]
mbrost_ has joined #dri-devel
mbrost has joined #dri-devel
mbrost__ has quit [Ping timeout: 480 seconds]
mbrost_ has quit [Ping timeout: 480 seconds]
mbrost has quit []
<DemiMarie> Why does Xe use GuC submission instead of execlists? Also, what is the right place to ask questions like this?
fxkamd has joined #dri-devel
ngcortes has quit [Read error: Connection reset by peer]
yuq825 has joined #dri-devel
Jeremy_Rand_Talos has joined #dri-devel
smiles_1111 has joined #dri-devel
Jeremy_Rand_Talos has quit [Remote host closed the connection]
Jeremy_Rand_Talos has joined #dri-devel
fxkamd has quit []
thenemesis has joined #dri-devel
MrCooper has quit [Remote host closed the connection]
MrCooper has joined #dri-devel
bmodem has joined #dri-devel
aravind has joined #dri-devel
rauji__ has joined #dri-devel
<kode54> aren't execlists slower?
Danct12 has joined #dri-devel
thenemesis has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
psykose has joined #dri-devel
Company has quit [Quit: Leaving]
thenemesis has joined #dri-devel
kzd has quit [Ping timeout: 480 seconds]
Danct12 is now known as Guest1189
Danct12 has joined #dri-devel
Guest1189 has quit [Ping timeout: 480 seconds]
Guest1158 is now known as nchery
thenemesis has quit [Quit: My MacBook Air has gone to sleep. ZZZzzz…]
thenemesis has joined #dri-devel
bmodem1 has joined #dri-devel
bmodem has quit [Ping timeout: 480 seconds]
thenemesis has quit [Remote host closed the connection]
Zopolis4 has joined #dri-devel
thenemesis has joined #dri-devel
dcz_ has joined #dri-devel
bgs has joined #dri-devel
thenemesis has quit [Quit: Textual IRC Client: www.textualapp.com]
nicolejadeyee has quit [Read error: No route to host]
kode54 has quit [Read error: No route to host]
robobub_ has quit [Read error: No route to host]
naseer__ has quit [Read error: No route to host]
naseer__ has joined #dri-devel
robobub_ has joined #dri-devel
nicolejadeyee has joined #dri-devel
kode54 has joined #dri-devel
ernstp_____ has quit [Read error: No route to host]
ernstp_____ has joined #dri-devel
sima has joined #dri-devel
YuGiOhJCJ has joined #dri-devel
CATS has quit []
CME_ has joined #dri-devel
CME has quit [Ping timeout: 480 seconds]
CME_ has quit [Ping timeout: 480 seconds]
CME has joined #dri-devel
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
bgs has quit [Remote host closed the connection]
TMM has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM has joined #dri-devel
tzimmermann has joined #dri-devel
sghuge has quit [Remote host closed the connection]
sghuge has joined #dri-devel
tursulin has joined #dri-devel
darkbasic4 has quit [Quit: Page closed]
Zopolis4 has quit [Quit: Connection closed for inactivity]
pcercuei has joined #dri-devel
rauji__ has quit []
jkrzyszt_ has joined #dri-devel
aravind has quit [Ping timeout: 480 seconds]
aravind has joined #dri-devel
kts has joined #dri-devel
lynxeye has joined #dri-devel
FireBurn has joined #dri-devel
FireBurn has quit []
kts has quit [Quit: Konversation terminated!]
swalker__ has joined #dri-devel
swalker_ has joined #dri-devel
swalker_ is now known as Guest1205
rasterman has joined #dri-devel
swalker__ has quit [Ping timeout: 480 seconds]
<dolphin> DemiMarie: Do you mean the Xe platform or driver?
<dolphin> Short answer is that firmware based scheduling is the direction hardware has taken.
kts has joined #dri-devel
Danct12 is now known as Guest1208
Danct12 has joined #dri-devel
Danct12 has quit []
Guest1208 has quit [Remote host closed the connection]
pochu has joined #dri-devel
vjaquez has joined #dri-devel
cmichael has joined #dri-devel
thellstrom has joined #dri-devel
kts has quit [Quit: Konversation terminated!]
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
<karolherbst> gfxstrand: okay.. I think I came up with a plan for the first part of the transition towards multi entry point nirs: we have to split `struct shader_info` or rather.. attach it to an entry_point nir_function instead so we can have multiple of those
<karolherbst> from a quick scan almost all of the fields are only relevant for a given entry point
<karolherbst> I think I also want the `create_library` thing to go away. Users of spirv_to_nir should be able to just select an entrypoint in a nir_shader (after shader_info was moved to nir_function) and then it could look like the way it was before. Or we go all in and have to change nir->info to nir_shader_get_entrypoint(nir)->info
<karolherbst> but nir_shader_get_entrypoint also needs rework I think and we should have a nir->entry_point to mark the selected one, because otherwise that loop inside nir_shader_get_entrypoint will be quite expensive
<karolherbst> not sure
<ishitatsuyuki> why is __vk_append_struct named with two underscores in front?
<bnieuwenhuizen_> to make it invalid C?
<karolherbst> it's valid C
<karolherbst> two underscores just means: you shouldn't use it
<karolherbst> but stronger than just one underscore
<karolherbst> one means: you might use it if you think you know what you are doing, two means, even if you think you know, you don't
<bnieuwenhuizen_> the compiler is completely free to break it, as identifiers starting with two underscores are reserved for the implementation
<karolherbst> sure
<karolherbst> but yeah.. probably should have been one underscore
<karolherbst> :P
Lucretia has joined #dri-devel
jkrzyszt_ has quit [Remote host closed the connection]
MrCooper has quit [Remote host closed the connection]
MrCooper has joined #dri-devel
i-garrison has quit []
i-garrison has joined #dri-devel
apinheiro has joined #dri-devel
i-garrison has quit []
i-garrison has joined #dri-devel
Company has joined #dri-devel
camus has quit [Ping timeout: 480 seconds]
<melissawen> hey, I'm adding more than 15 KMS driver-specific properties, but although we have DRM hooks for driver-specific properties, the attachment function only considers `#define DRM_OBJECT_MAX_PROPERTY` here https://cgit.freedesktop.org/drm/drm-misc/tree/drivers/gpu/drm/drm_mode_object.c#n248
<melissawen> but as these are driver-specific properties, increasing it in the DRM interface seems weird to me
<emersion> i think it makes sense to increase
<emersion> is 47 arbitrary?
<melissawen> oh, it should be 41 :) 24 + 17 new KMS properties
<emersion> i'd recommend picking something larger, so that this doesn't need to be bumped everytime a new prop is introduced
<emersion> like, 64 or 128
camus has joined #dri-devel
<melissawen> oh, so we don't need to specify exactly the number of properties we have enabled... okay, I will increase it to a higher number
<melissawen> thanks!
alpalcone has quit [Quit: WeeChat 3.8]
lplc has joined #dri-devel
CATS has joined #dri-devel
lplc has quit []
lplc has joined #dri-devel
<emersion> yeah it's just a max, for the array capacity presumably
jkrzyszt has joined #dri-devel
agd5f has quit [Ping timeout: 480 seconds]
heat has quit [Read error: Connection reset by peer]
heat has joined #dri-devel
Guest1205 has quit [Remote host closed the connection]
jkrzyszt has quit [Remote host closed the connection]
jkrzyszt has joined #dri-devel
JohnnyonFlame has quit [Ping timeout: 480 seconds]
camus has quit []
sarahwalker has joined #dri-devel
aravind has quit [Ping timeout: 480 seconds]
apinheiro has quit [Quit: Leaving]
vliaskov has joined #dri-devel
agd5f has joined #dri-devel
<DemiMarie> dolphin: why is hardware moving in the direction of firmware-based scheduling?
fxkamd has joined #dri-devel
smiles_1111 has quit [Ping timeout: 480 seconds]
thellstrom has quit [Remote host closed the connection]
<karolherbst> DemiMarie: because it offers lower latencies
<karolherbst> for desktop usage it might not matter, but some compute folks are very sensitive about that
<agd5f> DemiMarie, leads to user mode submission as well. E.g., user mode drivers can manage their own hardware queues
<karolherbst> well, there are kernelspace drivers doing user mode submission as well
<karolherbst> or rather, allowing for it
<karolherbst> but yeah.. doing scheduling on the hardware removes the need to interrupt the host when the hardware wants a context switch or something.
<karolherbst> we see similar things happening for CPUs as well, no?
swalker_ has joined #dri-devel
swalker_ is now known as Guest1225
sarahwalker has quit [Remote host closed the connection]
mbrost has joined #dri-devel
mbrost_ has joined #dri-devel
<tjaalton> dcbaker: hi, 7e68cf91 is not in 23.0.x for some reason, though it is in 22.3.x?
<robclark> tursulin, sima: any thoughts about moving forward with https://patchwork.freedesktop.org/series/117008/ and https://patchwork.freedesktop.org/series/116217/ ? I think the discussion has settled down on both
cmichael has quit [Quit: Leaving]
cmichael has joined #dri-devel
<tursulin> robclark: syncobj deadline sounded fine to me, I believe you have explained that it is not any random wait that gets "deadline NOW" set but a smaller subset and AFAIR I was satisfied with that explanation. It was on Mesa to ack or not the userspace changes.
mbrost has quit [Ping timeout: 480 seconds]
<tjaalton> dcbaker: okay, found the note on .pick_status.json
<tursulin> robclark: fdinfo I planned to revisit this week but ran out of time, promise to do it next week. But I think that too looked acceptable to me.
bmodem has joined #dri-devel
<tursulin> robclark: ah that u32.. I really think it needs to be u64
<tursulin> that was possibly my last open but as said, I need to re-read it all one more time
<tursulin> u32 IMO I don't see how that works. With i915 I could overflow it in two GEM_CREATE ioctls due to delayed allocation.
<tursulin> and I don't know what we are going to do with gputop
yuq825 has left #dri-devel [#dri-devel]
bmodem1 has quit [Ping timeout: 480 seconds]
mbrost_ has quit [Ping timeout: 480 seconds]
camus has joined #dri-devel
<robclark> tursulin: oh, yeah, that was meant to be u64
<robclark> I think I've read (and rebased) enough of the rest of the gputop series to be happy with it.. I'll reply w/ r-b on list
kzd has joined #dri-devel
<tursulin> robclark: thanks! I'll see if it needs yet another rebase and merge next week. Will aim to r-b your fdinfo series next week too.
<robclark> thx, I'll try and find a few min to re-send w/ s/u32/u64/.. that was just a typo when I addressed your suggestion to not use size_t
nora has joined #dri-devel
heat has quit [Remote host closed the connection]
heat has joined #dri-devel
Duke`` has joined #dri-devel
crabbedhaloablut has joined #dri-devel
loki_val has quit [Read error: Connection reset by peer]
<mattst88> karolherbst: I'm enabling rusticl in gentoo. can you remind me what hardware it supports vs what hardware clover supports?
<mattst88> and also what opencl version each supports?
bmodem has quit [Ping timeout: 480 seconds]
<karolherbst> mattst88: llvmpipe, nouveau, panfrost, iris, radeonsi and up to 3.0
<kisak> the joys of OpenCL 3.0 is that the base requirements are much lower than 2.x, so every new driver starts there.
<mattst88> karolherbst: awesome, thank you. I think r600 is the only driver that was supported by clover that is not currently supported by rusticl?
<karolherbst> correct
<karolherbst> though it might just work, dunno
<karolherbst> I don't have the hardware to try
<mattst88> thanks
Guest1225 has quit [Remote host closed the connection]
cmichael has quit [Quit: Leaving]
JohnnyonFlame has joined #dri-devel
dcz_ has quit [Remote host closed the connection]
dcz_ has joined #dri-devel
<jenatali> Anybody want to review/ack a build fix? https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/22994
<jenatali> Not sure why none of the Linux release build pipelines caught this, but the MSVC release pipeline we have did
<mattst88> done :)
djbw has quit [Read error: Connection reset by peer]
<jenatali> Thanks :)
Haaninjo has joined #dri-devel
djbw has joined #dri-devel
loki_val has joined #dri-devel
crabbedhaloablut has quit [Ping timeout: 480 seconds]
alyssa has joined #dri-devel
<alyssa> jenatali: yo i heard you like reviewing nir patches
<alyssa> so i put nir patches in my nir patches
<jenatali> I'm reviewing
<alyssa> I'm shitposting
<alyssa> We're all busy this afternoon it seems
<alyssa> ;-D
<sima> robclark, if tursulin finds it acceptable I don't think it needs me on top
<sima> unless you want me to add an ack
camus has quit [Remote host closed the connection]
camus has joined #dri-devel
<karolherbst> alyssa: you are busy with the wrong stuff
<alyssa> karolherbst: gasp
Mangix has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
<robclark> sima: t'was mostly just to keep you in the loop
Mangix has joined #dri-devel
<robclark> (since uabi things)
Duke`` has quit [Ping timeout: 480 seconds]
Duke`` has joined #dri-devel
<alyssa> anholt_: I'm seeing dEQP-GLES2.functional.shaders.algorithm.rgb_to_hsl_vertex fail on asahi with mediump lowering enabled, IIRC you hit something similar with angle?
<alyssa> CTS bug maybe?
kts has joined #dri-devel
Daanct12 has quit [Quit: How would you be so reckless with someone's heart!?]
Danct12 has joined #dri-devel
tursulin has quit [Ping timeout: 480 seconds]
vliaskov has quit [Remote host closed the connection]
<DemiMarie> karolherbst: why does firmware submission have lower latencies? And I am not sure what you mean regarding CPUs, unless you are talking about coprocessors used just for power management.
Leopold__ has quit [Remote host closed the connection]
<karolherbst> DemiMarie: yeah, not quite sure what the CPU thing is; I think I've read something somewhere about moving more of it into CPU firmware instead of the kernel, but it might just be power management so far
<karolherbst> anyway, for the GPU you don't want to involve the kernel all too much getting your jobs scheduled by the hardware
<karolherbst> usermode command submission already is a big chunk of it
<karolherbst> but if scheduling involves the kernel, you still have those round trips
<karolherbst> which you kinda don't want
kzd has quit [Quit: kzd]
Duke`` has quit [Ping timeout: 480 seconds]
<robclark> also microcontrollers are much better at interrupt latency than full on desktop CPUs
<karolherbst> context switching is already done in firmware on some GPUs at least, usermode command submission is mostly a kernel feature giving userspace permission to do it (by mapping GPU memory)
Duke`` has joined #dri-devel
<karolherbst> on nvidia, command submission doesn't require the kernel at all, and from what I know it's performance-critical enough to be worth it
<DemiMarie> karolherbst: is the syscall overhead noticeable?
<jenatali> Yes
<DemiMarie> robclark: does the firmware ever just busy spin? I imagine that it could afford to.
<karolherbst> DemiMarie: yes
<robclark> implementation detail I suppose
<karolherbst> for Vk/GL it doesn't matter much
<karolherbst> for compute it does
Leopold_ has joined #dri-devel
kzd has joined #dri-devel
<karolherbst> usermode submission is also kinda incompatible with graphics anyway
<karolherbst> not sure it's feasible at all unless you move more bits out of the kernel
<DemiMarie> robclark: my thought was that the microcontroller uses so little power that it could busy spin without anyone caring. Is the lower interrupt latency due to having shallower pipelines and fewer registers?
<DemiMarie> karolherbst: why is compute so latency sensitive? Also, my concern with userspace submission is obviously security.
<karolherbst> there is no pcie bus in between
<robclark> and a lot less state to save/restore, etc/etc.. some u-controllers don't even have to save/restore regs on irq. Also they tend not to be running a full blown operating system ;-)
<karolherbst> DemiMarie: ehh.. it's not a security thing really. It's just that you fill your command buffer, enqueue it in a command buffer submission ring buffer and ring a doorbell telling the GPU there is more work
<karolherbst> the kernel still sets up all the stuff and permission thing
<robclark> uc's range from things like full blown cortex-m or riscv to things that are much smaller and very special purpose
<karolherbst> so userspace just gets some memory mapped
<karolherbst> and uses that
<DemiMarie> karolherbst: I meant that with userspace submit, the firmware is parsing untrusted input, so those parsers had better be secure.
<karolherbst> the situation is different on hardware where you could enqueue commands which could be security relevant
<karolherbst> I mean.. it's done in hardware anyway
<karolherbst> well you can submit broken stuff and the GPU hangs
<karolherbst> or something
Cyrinux9 has quit []
<karolherbst> but.. the kernel can't fully protect you from those things anyway
<DemiMarie> karolherbst: oh, I thought there was some C code actually parsing messages from the user.
<karolherbst> depends on the driver/hardware
<karolherbst> on nvidia there is not much
Cyrinux9 has joined #dri-devel
<DemiMarie> What about AMD?
<karolherbst> good question
<karolherbst> I know that we have some GPUs (*cough* broadcom *cough*) without an MMU and yeah... the kernel is absolutely required here
<DemiMarie> Also my understanding is that at least for AGX full isolation should be possible, even if not yet implemented.
<karolherbst> yeah, modern GPUs are usually like that
<karolherbst> you really do all that stuff in userspace, and the kernel just submits it to the hardware + fencing
<karolherbst> which matters for presentation
<karolherbst> that's the biggest reason it's more of a compute-only thing
<karolherbst> one thing which I think is still happening on the kernel side even with nvidia usermode submission is all that VM stuff and binding buffers, because managing physical addresses in usermode would be... insecure :)
Leopold_ has quit [Remote host closed the connection]
<robclark> actual cmdstream "parsing" is some combo of hw and fw.. whether kernel is involved in submit doesn't really change that.. kernel's involvement is more about fencing and residency
<DemiMarie> Makes sense
gouchi has joined #dri-devel
<karolherbst> unless you have to do relocations and stuff :')
gouchi has quit [Remote host closed the connection]
Leopold_ has joined #dri-devel
<DemiMarie> Why can a malicious userspace program interfere with other users of the GPU?
<robclark> it is still a shared, non-infinite, resource
<karolherbst> also.. drivers don't clear VRAM :')
<robclark> but different processes should have their own gpu virtual address space, etc (oh, and vram.. but that is one of the 99 problems I don't have :-P)
<DemiMarie> karolherbst: report that to oss-security and get a CVE assigned and I suspect that would change
<karolherbst> it's a known issue for like 10 years, but I guess
<DemiMarie> Also virtio-GPU native contexts needs this to change because cross-VM leaks are an obvious non-starter.
jfalempe has quit [Quit: Leaving]
<karolherbst> yeah, that's why hypervisors/browsers clear all GPU memory before anybody can read it out
<karolherbst> normally
<DemiMarie> With native contexts the kernel is the hypervisor 😆
<DemiMarie> My understanding is that native contexts expose the standard uAPI to the guest
tzimmermann has quit [Quit: Leaving]
<DemiMarie> robclark: could something like GPU cgroups be made?
<DemiMarie> I’m more concerned about e.g. GPU hangs and faults causing other (innocent) jobs to be crashed.
<DemiMarie> On the CPU such processes would just be preempted by the kernel and nobody else would care.
<karolherbst> depends on the hardware/firmware really
<karolherbst> on AMD it's a lost cause
<DemiMarie> And my (perhaps naïve) expectation is that the GPU should provide the same level of isolation.
<karolherbst> (but apparently it's changing with new gens)
<karolherbst> on Nvidia you can kill contexts and move on
<karolherbst> yeah.. Nvidia is quite far on that front actually
heat has quit [Remote host closed the connection]
<karolherbst> newest hardware also has native support for partitioning
<DemiMarie> karolherbst: why is it a lost cause on AMD, what changes are fixing it, and why is Nvidia better?
<karolherbst> so you can assign a certain amount of SMs to each context
<karolherbst> or partition VRAM even
heat has joined #dri-devel
<karolherbst> on AMD it's either full GPU reset or nothing
Duke`` has quit [Ping timeout: 480 seconds]
<karolherbst> and I mean full GPU reset literally
<karolherbst> I think they can kinda preserve VRAM content though
<DemiMarie> karolherbst: if it is not present on the hardware Qubes users actually have, then for my purposes it does not exist.
<karolherbst> yeah.. I don't think sharing the same AMD GPU across VMs is a working use case
<DemiMarie> What is changing on newer AMD HW? Dropping the 1-entry implicit TLB that makes TLB invalidation take a good chunk of a second?
jvesely has joined #dri-devel
<karolherbst> I think they started to implement proper GPU recovery
<karolherbst> not sure
<robclark> DemiMarie: yeah, using cgroups or some sort of "protection domain" to trigger extra state clearing is a thing I've been thinking of
<DemiMarie> robclark: protection domain = process?
<robclark> it could be a more sensitive process.. or you could setup different domains for different vm's vs host.. maybe cgroups is the right way to do that, idk.. just more of an idea at this stage than patchset ;-)
<robclark> so far I've mostly cared about iGPUs but I think we'd want something like that for dGPUs with vram..
<DemiMarie> robclark: what about using a ptrace check? If one process cannot ptrace another, they are in different protection domains.
<robclark> DemiMarie: btw, semi related, I guess qubes would perhaps be interested in a host<->guest wayland proxy, i.e. you could have a vm guest app forward its window surface to the host compositor
<DemiMarie> That said, most people will assume protection domain = process, so I recommend that as the default.
<robclark> ptrace could work.. for vram where you have to clear 16GB of vram, that might still be too much
<DemiMarie> robclark: why do you need to clear all 16GiB?
<DemiMarie> Just clear the buffers userspace actually requests.
<robclark> well, that is probably worst case
<DemiMarie> And yes, such a proxy would be awesome. I’m aware of two of them.
<robclark> but it could still be a lot to clear.. which is (I assume) why it isn't done already
Duke`` has joined #dri-devel
<DemiMarie> robclark: have a shader do the clearing? Linux already needs to handle zap shaders for some mobile GPUs.
<robclark> crosvm has such a wl proxy.. that is how vm apps work on CrOS.. but I kinda would like to see an upstream soln based on new virtgpu (drm/virtio) context type so we can drop downstream guest driver
<robclark> zap only needs to clear ~256kb to 4Mb of gmem (plus shader regs, etc) so that isn't quite as bad
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
<javierm> robclark: we would also need something like sommelier but with proper releases so that distros could package it, right ?
<javierm> robclark: since mutter already supports to be nested, I wonder if there could be a mutter variant that would do the same than sommelier
<robclark> could be sommelier.. which appears to already have some support for cross-domain (which _could_ be the wl proxy virtgpu context type.. but also carries some extra baggage due to how it is used for minigbm/gralloc)
<javierm> robclark: yeah, that's the part I'm not sure about: how CrOS-specific sommelier is, or if it could be used in general Linux distros
<qyliss> javierm, robclark: have you both seen https://github.com/talex5/wayland-proxy-virtwl?
<qyliss> sommelier is not too bad for distros
<qyliss> we have a sommelier package in Nixpkgs
<qyliss> although I switched to the above when Sommelier didn't work on Linux 5.19 for a long time
<javierm> qyliss: ah, interesting
<qyliss> sommelier being useful of course means packaging crosvm, which is less easy
<qyliss> although has been getting better
<robclark> I guess in theory qemu support could be added.. it would be kind of convenient if it could just live in virglrenderer but then it wouldn't be rust
iive has joined #dri-devel
heat_ has joined #dri-devel
heat has quit [Read error: No route to host]
jkrzyszt has quit [Remote host closed the connection]
MajorBiscuit has joined #dri-devel
jkrzyszt has joined #dri-devel
gio has quit [Quit: WeeChat 3.0]
lynxeye has quit [Quit: Leaving.]
gio has joined #dri-devel
JohnnyonFlame has quit [Ping timeout: 480 seconds]
alyssa has left #dri-devel [#dri-devel]
MajorBiscuit has quit [Ping timeout: 480 seconds]
sima has quit [Ping timeout: 480 seconds]
nora has quit [Remote host closed the connection]
mbrost has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
mbrost_ has joined #dri-devel
mbrost has quit [Ping timeout: 480 seconds]
MrCooper has quit [Remote host closed the connection]
MrCooper has joined #dri-devel
mbrost_ has quit [Ping timeout: 480 seconds]
jkrzyszt has quit [Remote host closed the connection]
<qyliss> yeah, I've thought about that a bit
<qyliss> could Rust be (optionally) added to virglrenderer, like in mesa?
puck_ has joined #dri-devel
dcz_ has quit [Ping timeout: 480 seconds]
<puck_> robclark: hrmm, doesn't modern chromeos already use virtio-gpu cross-domain?
<puck_> the issue i had with virglrenderer/qemu is the fact that qemu doesn't support cross-domain yet
sukrutb has quit [Remote host closed the connection]
<robclark> puck_: we do use cross-domain for some things but still have downstream virtio-wl driver.. looks like sommelier has support to _some_ degree for cross-domain but looks like it is missing some things like fence support (not that the virtio-wl fence support is actually correct)
<robclark> we use cross-domain for minigbm/gralloc for android vm, for ex.. but I'm not sure how well tested the wayland part of it is
<puck_> robclark: yeah, the fence support was something i was having trouble with - i had some fun bugs with that, and i stiill don't entirely know how the kernel drm fences work :D
<puck_> robclark: at one point i got out-of-process crosvm virtio-gpu working with cloud-hypervisor, combined with cross-domain wayland + virgl (with a hacky patch to fix the stride, because virgl doesn't pass in the host stride for buffers, and amdgpu uses a non-standard one afaict)
<robclark> well, every gpu uses a non-standard stride ;-)
<robclark> but if virgl goes away, so does that problem ;-)
<puck_> yeah exactly :p
<puck_> i was thinking about how it's funny that they first invented "just run OpenGL commands over a pipe" before going with the seemingly simpler solution of "just pass the kernel API through" -- but then i remembered the latter requires IOMMUs that didn't exist back then
<robclark> it doesn't _strictly_ require iommu.. but does require context isolation.. ie. different guest and host processes should have separate gpu address space
<robclark> but that is pretty much a given these days
<puck_> right, yeah
Haaninjo has quit [Quit: Ex-Chat]
alyssa has joined #dri-devel
<alyssa> dj-death: Kayden: btw, unified atomics just landed.. looking forward for the Intel code deletion :~)
<jenatali> alyssa: ~1hr for me unless you want to rebase + land, CI was clean on my last push I believe
<alyssa> jenatali: ?
<alyssa> oh
<jenatali> You pinged me to rebase+merge my atomics change, just gonna take a few lol
<alyssa> oh yes I see
<alyssa> sorry I'm context switching too much right now ;p
<jenatali> Oh actually it's going to conflict with another change in the queue. I'll wait til that lands first (assuming it does)
<alyssa> choo choo
a-865 has quit [Quit: ChatZilla 0.16 [SeaMonkey 2.53.16/20230320105641]]
rasterman has quit [Quit: Gettin' stinky!]
Jeremy_Rand_Talos_ has joined #dri-devel
TMM has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM has joined #dri-devel
Jeremy_Rand_Talos has quit [Remote host closed the connection]
JohnnyonFlame has joined #dri-devel
tobiasjakobi has joined #dri-devel
tobiasjakobi has quit []
kts has quit [Quit: Konversation terminated!]
<jenatali> alyssa: Assigned :)
sarnex has quit [Read error: Connection reset by peer]
sarnex has joined #dri-devel
<jenatali> Side note, I wish Marge was able to move on once there's failing jobs in a pipeline (i.e. the whole pipeline won't succeed). I feel like that'd save a lot of time, where now if a job irreparably fails and needs new changes it'll just sit there waiting for the rest of the jobs to finish
psykose has quit [Remote host closed the connection]
pcercuei has quit [Quit: dodo]
<zmike> jenatali: try filing ci ticket?
<jenatali> zmike: Just, in the Mesa project with the CI label?
<zmike> ci label
<zmike> maybe someone will see it and have ideas
iive has quit [Quit: They came for me...]
ngcortes has joined #dri-devel
ngcortes has quit [Read error: Connection reset by peer]
ngcortes has joined #dri-devel
<jenatali> Nice, that's a good negative line count
a-865 has joined #dri-devel