ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
<DemiMarie> Run an on-GPU ML algorithm to reconstruct it?
<DemiMarie> Then perform on-GPU OCR?
<Kayden> mdnavare: I believe so. We should probably test with GNOME 43 prereleases
<Kayden> mdnavare: it sounds like with the additional work we could get KMS modifiers turned on to make mutter itself faster and improve performance further. but that should already be the bulk of it
<karolherbst> DemiMarie: basically that
<karolherbst> though reconstructing might be easier without ML
<karolherbst> just have to detile and find proper pictures.. though that can be hard to do in the end
<airlied> Kayden: an f37 nightly might be good to test
<karolherbst> the problem with that attack is that it looks nice on a webpage, but doing it automated and actually extracting something is another matter
cuolin^ has quit [Ping timeout: 480 seconds]
<karolherbst> DemiMarie: but anyway.. having a proper exploit and a fun presentation at some hacker conference would give anybody enough street cred or something :P
<karolherbst> it's just I suspect you need to spend a lot of time until you actually get something feasible
<mdnavare> Kayden: And additional work means immediately trying with TEST_ONLY commits or adding the heuristic check or some design policy in Mutter
<Kayden> mdnavare: TEST_ONLY commits, yeah.
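The TEST_ONLY probing Kayden and mdnavare discuss can be sketched in a few lines. This is a minimal Python model of the idea only, not real KMS code: `FakeKmsDevice`, `pick_modifier`, and the string modifier names are all illustrative stand-ins for the real `drmModeAtomicCommit` interface and DRM format modifiers.

```python
# Hypothetical sketch of TEST_ONLY probing: try a configuration with the
# test-only flag first, and only commit for real once the kernel accepts it.
TEST_ONLY = 0x1  # mirrors the DRM_MODE_ATOMIC_TEST_ONLY semantics

class FakeKmsDevice:
    """Stand-in for a KMS device: accepts only a known-good set of modifiers."""
    def __init__(self, supported_modifiers):
        self.supported = set(supported_modifiers)
        self.committed = None

    def atomic_commit(self, modifier, flags=0):
        if modifier not in self.supported:
            raise ValueError("EINVAL: unsupported framebuffer modifier")
        if not (flags & TEST_ONLY):
            self.committed = modifier  # only a real commit mutates state

def pick_modifier(dev, candidates):
    """Probe candidates with TEST_ONLY; really commit the first that passes."""
    for mod in candidates:
        try:
            dev.atomic_commit(mod, flags=TEST_ONLY)  # no state change on test
        except ValueError:
            continue
        dev.atomic_commit(mod)  # accepted by the test: do the real commit
        return mod
    return None
```

The point of the flag is that a rejected probe has no side effects, so a compositor can walk a preference-ordered modifier list without ever flashing a bad configuration on screen.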
<mdnavare> Okay cool, I will have a sync with our MW team tomorrow regarding this
<anholt> wow, fossil-db reports are so nice.
<Frogging101> If something is GPU bound and performs better on other drivers, what are likely culprits?
<anholt> Frogging101: use perfetto and your driver's perf debug env var, rather than guessing randomly.
<bnieuwenhuizen> rather RGP instead of perfetto since AMD
<Frogging101> perfetto?
co1umbarius has joined #dri-devel
<mareko> anholt: glvnd doesn't have an interface for rewriting its dispatch right now, the exported glapi symbols shouldn't be exported
columbarius has quit [Ping timeout: 480 seconds]
<heat> also wow mesa supports perfetto now
mbrost has joined #dri-devel
stuart has quit []
mbrost_ has joined #dri-devel
ella-0_ has joined #dri-devel
mbrost has quit [Ping timeout: 480 seconds]
ella-0 has quit [Read error: Connection reset by peer]
mbrost_ has quit [Ping timeout: 480 seconds]
ybogdano has quit [Ping timeout: 480 seconds]
agx has quit [Read error: Connection reset by peer]
agx has joined #dri-devel
nchery is now known as Guest1545
nchery has joined #dri-devel
Guest1545 has quit [Ping timeout: 480 seconds]
ybogdano has joined #dri-devel
mbrost has joined #dri-devel
ngcortes has quit [Remote host closed the connection]
YuGiOhJCJ has joined #dri-devel
heat has quit [Ping timeout: 480 seconds]
cuolin^ has joined #dri-devel
heat has joined #dri-devel
khfeng has joined #dri-devel
pendingchaos_ has joined #dri-devel
pendingchaos has quit [Read error: No route to host]
ybogdano has quit [Ping timeout: 480 seconds]
<Frogging101> My question would actually be more accurately phrased: "What sorts of things can impact GPU performance that are driver specific?"
<Frogging101> Shader compilation is one
<airlied> Frogging101: if you are 100% sure it's GPU limited, then shader compilation is probably nearly all of them
<airlied> though image/memory compression things can affect it
<Frogging101> I see 99% GPU usage. When GPU usage drops, the framerate increases. Is that indicative?
<airlied> so memory bandwidth could be some of it
<Frogging101> (and CPU usage is not maxed)
<Frogging101> let's also assume a framerate limiter is in use because otherwise it would just render as fast as it can and less than 99% GPU usage would mean something else
<airlied> but yeah usually trying to identify what shader is causing the bottleneck
<airlied> RGP type things
pendingchaos_ has quit [Remote host closed the connection]
pendingchaos has joined #dri-devel
heat has quit [Ping timeout: 480 seconds]
nchery has quit [Ping timeout: 480 seconds]
saurabhg has joined #dri-devel
<robclark> DemiMarie: re: https://hsmr.cc/palinopsia/ .. information leakage (rather than omg I accessed all your ram) is I think most of the issues I've seen recently (note that specific issue is, or at least should be, unique to GPUs with vram).. but 99% sure I could exploit that with "api level" virtualization as well, I don't think hiding behind a userspace gl or vk driver really helps you there..
Daanct12 has joined #dri-devel
<karolherbst> robclark: the idea behind that security issue is that once it's fixed, Qubes OS could think about enabling hw acceleration :P
<karolherbst> and honestly.. no idea how we can say that our systems are safe as long as arbitrary processes can read out other processes' old VRAM
<karolherbst> though I suspect that some drivers might have fixed it, but... I don't ask because I don't want to be disappointed
<robclark> you need to think about what your threat is.. if nation state types are after you, maybe just don't use the gpu at all (and disable hyperthreading and basically all modern conveniences ;-))
<karolherbst> robclark: "free to play" games scanning for credit card infos :P
<karolherbst> anyway
<karolherbst> security bug is a bug
<karolherbst> and should be fixed regardless
<robclark> yes
<karolherbst> sure.. it's super impractical, but AI _could_ make attacks over this vector actually feasible
<robclark> and userspace gl or vk driver should not be considered a hardened security boundary ;-)
<karolherbst> yeah, the kernel should clear VRAM :P
<karolherbst> obviously
<karolherbst> an information leak via RAM would immediately get you an 8.0+ scored CVE
<karolherbst> why does the same not happen via VRAM?
<airlied> but RAM contains interesting things :)
<karolherbst> VRAM as well
<airlied> not as interesting as RAM though
<airlied> like it rarely contains passwords or secret keys
<karolherbst> I suspect your credit card info could be very interesting to some
<robclark> karolherbst: is someone looking over your shoulder a CVE?
<karolherbst> we rely on web browsers to actually clear VRAM content so it's not a super huge issue
<robclark> maybe there should be a way to mark contexts as sensitive, ie. "I'm ok with the overhead of clearing vram"?
<karolherbst> yes
<airlied> I think we have flags on amdgpu to clear this ram when I'm finished with it
<airlied> which is the saner thing to do
<karolherbst> and there are hundreds of ideas for making it less impactful
<karolherbst> only clear VRAM from other processes
<karolherbst> do it async
<karolherbst> manage a bucket of "clean VRAM" etc...
<karolherbst> but that's an implementation detail
<karolherbst> system RAM is also cleared and nobody screams "perf overhead"
<airlied> AMDGPU_GEM_CREATE_VRAM_WIPE_ON_RELEASE
<karolherbst> and we can do fast zero bandwidth clears on VRAM anyway
<airlied> how do you get 0 bw clear
<airlied> ?
<Frogging101> I have actually wondered why/how Windows manages to never show old vram contents
<karolherbst> stuff like that actually belongs into ttm though
<airlied> like making sure that memory is actually clear takes bandwidth
<karolherbst> airlied: by hw supporting it?
<airlied> karolherbst: but how does it support it?
<karolherbst> ask hw engineers, but on nvidia we do have "zero bandwidth clears" or at least that's how it's called
<airlied> it's not like you can set some color compression bits saying the ram is clear, you have to actually clear it
<karolherbst> and I would be surprised if that's something else
<airlied> most of those things are for when you have color compression
<airlied> like HiZ clears
<Frogging101> It was actually funny when I used to dual boot, once I launched a game on Linux and saw a frame from a game I was playing on Windows, pre-reboot
<karolherbst> airlied: sure, but you won't have to push data over some bus
<karolherbst> the hw does it "somehow"
<karolherbst> but you won't have to push GB/s data over PCIe in order to clear random VRAM
<Daanct12> Frogging101: same here, i remember the days when i was dualbooting
<airlied> karolherbst: if you are clearing from the CPU it's not going to be useful
<Daanct12> for me it shows the start menu live tiles
<airlied> I think intel i915 does lmem clears with fallbacks in case the clear fails
<karolherbst> airlied: well, then clear on the GPU?!?
<karolherbst> we can talk about issues if we find hw which doesn't support it
<karolherbst> or warn the user of something
<karolherbst> since all the CPU sec issues, I think people have become more pragmatic about it
<Frogging101> so how does windows do it? do they just eat the cost of clearing all the time?
<karolherbst> I suspect they do
<karolherbst> but they probably are also very smart about it
<karolherbst> I seriously don't think it has a meaningful perf impact if done right
<karolherbst> if processes reuse their old VRAM you can just skip clearing
<airlied> karolherbst: I think nouveau might be the only driver that doesn't have the capability to do it :-P
<airlied> at least i915 does it , and amdgpu can turn it on
<karolherbst> and then you already eliminated 99% of the overhead
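The skip-clearing-on-reuse and "bucket of clean VRAM" ideas karolherbst lists above can be sketched as a toy allocator. This is pure illustration under assumptions, not real TTM or kernel code; `VramPool`, the list-of-zeros "buffers", and all method names are made up for the sketch.

```python
# Toy model: only clear VRAM when a buffer changes owner, and keep a bucket
# of pre-cleared buffers so clearing can happen off the allocation path.
from collections import deque

class VramPool:
    def __init__(self):
        self.free = deque()    # (owner_pid, buf) pairs, contents possibly dirty
        self.clean = deque()   # pre-cleared buffers, safe to hand to anyone
        self.clears = 0        # count of actual clear operations performed

    def _clear(self, buf):
        self.clears += 1
        return [0] * len(buf)

    def alloc(self, pid, size):
        # Fast path: reuse this process's own dirty memory, no clear needed.
        for i, (owner, buf) in enumerate(self.free):
            if owner == pid and len(buf) == size:
                del self.free[i]
                return buf
        # Next: hand out a pre-cleared buffer from the clean bucket.
        for i, buf in enumerate(self.clean):
            if len(buf) == size:
                del self.clean[i]
                return buf
        # Fall back: clear another process's old memory before reuse.
        for i, (owner, buf) in enumerate(self.free):
            if len(buf) == size:
                del self.free[i]
                return self._clear(buf)
        return [0] * size  # fresh allocation arrives zeroed

    def release(self, pid, buf):
        self.free.append((pid, buf))

    def background_scrub(self):
        # Async clearing: scrub dirty buffers into the clean bucket off-path.
        while self.free:
            _, buf = self.free.popleft()
            self.clean.append(self._clear(buf))
```

With same-process reuse never counted as a clear and scrubbing done in the background, the only synchronous clears left are cross-process handovers that race ahead of the scrubber.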
<karolherbst> airlied: it's a stupid idea to let userspace control it :P
<karolherbst> i915 does because it's using RAM
<karolherbst> it has to be on the kernel side, _always_ on
<karolherbst> also we live in a world where effectively everything is hw accelerated
<airlied> karolherbst: no i915 does it for vram now
<karolherbst> yeah.. okay
<mareko> clearing is fast, VRAM has lots of bandwidth
<Frogging101> RADV_DEBUG=zerovram :D
<karolherbst> Frogging101: that's something else :P
<airlied> like it would be one liner to flip on in amdgpu, then someone would have to fix nouveau :-P
<karolherbst> that's for when you don't want to see others' dirty secrets
<karolherbst> sure
<karolherbst> and I am sure the fix isn't terribly hard
<karolherbst> but
<karolherbst> it shouldn't be fixed in drivers
<karolherbst> it should be fixed in ttm
<mareko> I wonder what secrets others see on your screen when you boot...
<Frogging101> It would be nice if stale VRAM contents weren't a thing anymore, indeed
<karolherbst> mareko: heh... my full disk encryption password is displayed with bullets, sooo
<karolherbst> anyway
<karolherbst> the obvious idea behind any attack is to extract whatever you do through your web browser
<mareko> hopefully we won't clear with SDMA, that would be a disaster
<Frogging101> Well, there's the elephant in the room that literally any program running as my user has access to my home directory
<karolherbst> yeah.. but that doesn't end up in VRAM
<karolherbst> well..
<karolherbst> file names maybe
<karolherbst> image thumbnails containing.. random shit
<Frogging101> I'm just saying I personally don't care that much that my VRAM is full of secrets, because all of the most dangerous programs I run on my desktop are running as my user anyway
<mareko> we would have to bring compute shaders to the kernel to clear memory on AMD
<Frogging101> But regardless, like I said, if we could make stale VRAM go away, that'd be cool.
<karolherbst> mareko: why though?
<karolherbst> why do you need shaders for that?
<mareko> it's the only way to clear fast
<karolherbst> ....
<robclark> to clear shader reg mem, etc
<karolherbst> ?
<karolherbst> seriously?
<robclark> yes
<mareko> yes, no DMA can use all the bandwidth
<Frogging101> It's a limited application of shaders, at least? Doesn't have to do a whole lot
<karolherbst> why do you need DMA to clear VRAM?
<mareko> you need to clear it somehow
<Frogging101> if you don't use DMA then you need to use a shader, no?
<karolherbst> yeah.. via hw features
<karolherbst> on nvidia you can do it via commands
<Frogging101> Or is there a register to flip "clear this space"
<karolherbst> no shaders involved
<karolherbst> or DMA
<karolherbst> you specify the region and just clear it
<mareko> SDMA is like 70 GB/s, compute shader clears run at the memory bandwidth, so 512 GB/s or more with our mega cache
<Frogging101> mareko: Are there clear commands on GCN?
<karolherbst> you actually want to _copy_ when clearing?!?
<karolherbst> I meant clear, as in _clear_
<karolherbst> not copy
<mareko> me too
<karolherbst> ....
<karolherbst> you might want to add such a feature to hw then
<mareko> DMA doesn't mean copy
<karolherbst> okay
<karolherbst> so the problem is just that "clears" via SDMA are slow, because the SDMA engine doesn't get all the bandwidth
<mareko> it means you have an engine on the side that can do simple stuff and the rest of the GPU can be powered down
<karolherbst> it's still "zero bandwidth" as you submit like 100b of commands to the GPU to clear a region, it's just not utilizing all of the VRAM bandwidth, right?
<mareko> the engine has to send write requests for every cache line
<karolherbst> okay sure, but that's still done on the GPU
<karolherbst> it just feels strange to have a way of doing fast clears without shaders, but then making it slow
<karolherbst> though not sure how fast that is on nvidia tbh...
<mareko> the general idea is: do you clear less than 32KB? use the tiny DMA engine, else use shader buffer stores
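mareko's dispatch heuristic above can be written down as a toy dispatcher. The 32 KB cutoff and the engine split come from the chat, and the bandwidth figures are the ones quoted earlier in the discussion; nothing here is real driver code, and both function names are invented for the sketch.

```python
# Toy model of the clear-dispatch heuristic: small clears go to the little
# SDMA engine (low setup cost), large clears to compute-shader buffer stores
# (which can run at full memory bandwidth).
SDMA_THRESHOLD = 32 * 1024  # bytes, per the heuristic quoted above

def pick_clear_engine(size_bytes):
    """Choose an engine for a VRAM clear based on the region size."""
    if size_bytes < SDMA_THRESHOLD:
        return "sdma"     # tiny side engine, ~70 GB/s per the chat
    return "compute"      # shader stores, ~512 GB/s per the chat

def estimated_clear_us(size_bytes):
    """Very rough time estimate using the bandwidth numbers quoted above."""
    bw = 70e9 if pick_clear_engine(size_bytes) == "sdma" else 512e9
    return size_bytes / bw * 1e6
```

The crossover logic is the whole point: below the threshold the fixed cost of launching a shader dominates, above it the SDMA engine's limited bandwidth does.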
<karolherbst> ... "fun"
<HdkR> karolherbst: Saturates memory
<karolherbst> HdkR: okay.. so a fast clear via 3d engine has full power?
<karolherbst> though I think other engines can also do clears
<HdkR> I believe so
<karolherbst> I used the 2d engine to clear buffers by drawing 2 lines and a rect :3
<karolherbst> not sure if that's really good or not :D
<HdkR> haha
<karolherbst> using that for vkCmdFillBuffer
<mareko> DCC fast clears are different, but they only change the perception of data and don't really clear memory
<karolherbst> ahhh
<karolherbst> anyway.. at least nvidia's 2d engine is fun when it comes to that stuff
<karolherbst> and it sounds like it doesn't come with the bandwidth limitation SDMA does on AMD
<airlied> karolherbst: have we any idea about that?
<karolherbst> mareko: does AMD still have any kind of 2D engine stuff?
<mareko> no
<karolherbst> airlied: HdkR said so
<karolherbst> sad :(
<karolherbst> though nvidia's 2d engine hasn't changed since fermi
<karolherbst> but it does have tons of useful features
<mareko> adding a bigger SDMA wouldn't be economical
<karolherbst> could be a fake thing which is doing hardcoded shader pipelines under the hood
<airlied> "Don’t use async copies requiring GPU bandwidth saturation, unless you have sufficient time to cover the cycles. The async copy engine is generally built to saturate PCIE bandwidth, but it can be used as a generic copy engine if sufficient time is given to cover the cost of the transfer at those speeds."
<airlied> suggests it can't saturate vram
<karolherbst> airlied: that's ce though
<airlied> though I think for this stuff it's not really a problem, just having async clears in the background is better than tying up the engines
<karolherbst> and on turing we have like 8 of those
<HdkR> Volta has like 12 of those dumb things :D
<karolherbst> :D
<karolherbst> just split the copy and load balance :P
<karolherbst> but it's not a copy
<karolherbst> and I didn't talk about CE
<karolherbst> copy is really for literal data copies
<airlied> ah is 2D built on the 3D engine then?
<airlied> but not async
<karolherbst> maybe?
<airlied> so you'd be tying up actual resources
<karolherbst> airlied: we have "ZERO_BANDWIDTH_CLEAR" in the 3d headers, but not sure how much load they put onto the GPU in total
Daanct12 has quit [Remote host closed the connection]
<jenatali> Frogging101: I'd commented earlier, yeah Windows zeros all memory coming from another process
<karolherbst> I am sure zero bandwidth means in terms of "PCIe", not sure if that also means in terms of VRAM
<karolherbst> HdkR might know :D
<airlied> karolherbst: that really seems like "fake" clears
<karolherbst> though I suspect it still utilizes VRAM bandwidth
<airlied> like HiZ clears
<HdkR> karolherbst: Obviously "zero" :P
<airlied> not actually clearing the ram
Daanct12 has joined #dri-devel
<airlied> but setting some magic bits in the color compression or Z tiling
<karolherbst> mhhh
<airlied> what mareko referred to earlier as DCC clears
<HdkR> But yea, zero bandwidth clear just clears the color compression data. like a DCC clear
<karolherbst> okay
<karolherbst> then we are left with drawing insanely huge rects with 2d
<karolherbst> which are still zero bandwidth when it comes to PCIe
<HdkR> Still uses a bit of BW but it's a fraction of a percent compared to a real clear
<mareko> I wonder if NV can bypass the compression and read the raw uncleared data
<karolherbst> HdkR: I suspect 2D "clears" need real VRAM bandwidth?
<karolherbst> that RENDER_SOLID_PRIM stuff I mean
<HdkR> mareko: You can if you heck up the representation of data. I've caused it to happen with EGLStreams
<karolherbst> anyway
saurabhg has quit [Read error: Connection reset by peer]
<HdkR> karolherbst: Probably, 2D Engine is old AF
<karolherbst> you actually only need to clear VRAM if you hand over VRAM from old processes
<karolherbst> you also get your own dirty RAM if you reuse something :P
<karolherbst> just a matter of keeping track and only doing it if necessary
<karolherbst> HdkR: yeah.. well.. didn't change since fermi
<mareko> I wonder why NV is keeping the 2D engine around
<karolherbst> but honestly, what's there to change
<karolherbst> because it's useful
<mareko> "useful" isn't a good answer for redundant hw
<HdkR> NV doesn't really remove many things
<karolherbst> it might just launch hardcoded shaders though :D at least that's what I expect nvidia to do
<HdkR> Stuff just kind of sticks around
<karolherbst> I also can't imagine that 2d takes up much space
<airlied> it could be done in microcode now
<karolherbst> yeah
bmodem has joined #dri-devel
<HdkR> karolherbst: Does Nouveau support Orin yet btw?
<karolherbst> nope
<HdkR> Dang
<karolherbst> nvidia doesn't care
<karolherbst> :P
<karolherbst> we don't even support xavier
<HdkR> It's not for real consumers anyway, so I guess eh
<karolherbst> I asked, but it's not important enough to release the firmware, soo...
<HdkR> womp womp
<karolherbst> HdkR: if nvidia doesn't release the firmware for xavier, just because, why would they for orin
<karolherbst> I could ask again I guess
kurufu_ has quit [Remote host closed the connection]
<HdkR> Orin is definitely more interesting than Xavier for consumers I would guess
<karolherbst> anyway.. nouveau on the kernel side uses the 2d engine for a lot of stuff
<karolherbst> so if that thing would go away... it's uhm... "annoying"
<karolherbst> maybe
<karolherbst> not sure why it would be more interesting than xavier
<HdkR> Consistent CPU performance for one
<karolherbst> just do more stuff on the GPU then :P
<karolherbst> anyway, the next switch will be orin based, or so I've heard
<mareko> Mesa CI is stuck and not merging anything
<mareko> for 5 hours
<HdkR> karolherbst: Nah, just upgrade to Samsung+RDNA
<karolherbst> mareko: huh?
<karolherbst> marge currently handles 18000
<mareko> all MRs fail to merge
<karolherbst> HdkR: ... why?
<karolherbst> mareko: what job fails?
<mareko> "CI taking too long"
<karolherbst> huh...
<karolherbst> it does indeed seem to take quite a long time
kurufu has joined #dri-devel
<karolherbst> freedreno stuff as it seems
<Frogging101> does nvidia help nouveau out at all?
<karolherbst> Frogging101: they do
<airlied> robclark: did you turn off your workstation again?
<airlied> anholt: ^
<karolherbst> :D
<airlied> Frogging101: it's like they help, and don't at the same time :-P
<karolherbst> can we have a CI bot which checks and pings people if that ever happens again? (if this is such a common problem)
<karolherbst> asking a question to nvidia has mixed results
<airlied> it happened once before
<karolherbst> sometimes you get the answer in 2 days, sometimes you wait 3 years
<airlied> so not sure if common enough to warrant writing a bot for
<mareko> anholt: most freedreno jobs are broken on a radeonsi MR: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/18195
<robclark> hmm, I disabled suspend but the admin panel indicates that fd-farm is not phoning home.. re-cherry-pick bc2e1a3ed67fb4cca88229e547f6b95be05c4d5e and assign that to marge
<Frogging101> nvidia likes secrets, restrictions, and NDAs
<karolherbst> Frogging101: nvidia does release documentation though
<karolherbst> like the vulkan driver we write only uses nvidia provided headers for mostly everything
<karolherbst> so we probably won't have to use reverse engineered headers anymore
<robclark> mareko: I'll check in the morning when back in office but for now a-b to push a re-apply of bc2e1a3ed67fb4cca88229e547f6b95be05c4d5e to disable fd-farm
<karolherbst> anyway.. there are good people at nvidia and some really want to help us out
<robclark> the a618 runners are a different ci farm, fwiw
<Frogging101> karolherbst: that's good to hear
<karolherbst> yeah.. took quite some time to get there
<karolherbst> working for RH I also have access to some NDA material, which I am allowed to use, but not to share
<HdkR> karolherbst: Mostly because I'm greedy and I want newer CPU cores than A78AE and an open GPU :P
<karolherbst> :D
<Frogging101> I am glad that at least one of the two GPU makers sees the value of openness
<mareko> it turns out that direct git push still works
<robclark> karolherbst, daniels: actually a bot that notifies if ci runner doesn't contact gitlab for >30min would be useful.. I'd have noticed that the fd farm dropped off the grid while I was still in the office
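The watchdog robclark describes can be sketched as a small staleness check. GitLab's runner API does expose a last-contact timestamp, but (as daniels notes below) reading it for shared runners needs a privileged token, so `fetch_runners` here is a hedged stand-in for that API call rather than a real client; the field names mimic the API but are assumptions of this sketch.

```python
# Hypothetical sketch: flag any CI runner whose last contact with GitLab is
# older than 30 minutes. fetch_runners stands in for the (privileged) API
# call and must yield dicts with "description" and "contacted_at" keys.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(minutes=30)

def stale_runners(fetch_runners, now=None):
    """Return descriptions of runners that haven't phoned home in time."""
    now = now or datetime.now(timezone.utc)
    stale = []
    for runner in fetch_runners():
        if now - runner["contacted_at"] > STALE_AFTER:
            stale.append(runner["description"])
    return stale
```

A bot built around this would only need to run the check periodically and ping the farm owner when the returned list is non-empty.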
<Frogging101> It's good that nvidia is helping the FOSS drivers out. But if they could ditch that proprietary mindset altogether, that'd be even better
<mareko> also why does direct pushing show: "remote: This repository moved. Please use the new location: git@github.com:Mesa3D/mesa.git"
<Frogging101> lol
bmodem has quit []
mbrost has quit [Ping timeout: 480 seconds]
<robclark> mareko: I guess you are the first one to bypass marge for a while.. ;-)
<DemiMarie> karolherbst: do you mean that you cannot share the NDA’d information itself, but there are no restrictions on the code you write based on that information?
mbrost has joined #dri-devel
<airlied> DemiMarie: pretty much, though we often ask for stuff to get released publicly when we find a use for it
<DemiMarie> Ever have problems with code review?
saurabhg has joined #dri-devel
<DemiMarie> Where someone asks “why did you do X?” and you are in an awkward position?
<DemiMarie> Frogging101: Why is nvidia not releasing signed firmware for all GPUs?
TMM has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM has joined #dri-devel
<airlied> DemiMarie: not usually
<DemiMarie> airlied: not even once? is there always enough public documentation to justify the correctness of the code? (Serious question, not trying to put you in a bad position, feel free to let me know if you are not comfortable with this conversation.)
<mareko> DemiMarie: that happens to us too, we respectfully decline to explain
<DemiMarie> mareko: ouch, I could easily see contributions being NACKd because of that
<mareko> not in Mesa
<javierm> DemiMarie: I don't have access to NDA information from nvidia so I can't answer for that case in particular, but I had access to NDA documents from other vendors when working on ARM hardware bringup
<mareko> I think it would be against the code of conduct to NAK based on that
<HdkR> When you have trusted contributors pushing code and all you can say is "The hardware needs to be programmed this way". It still works out :)
<javierm> DemiMarie: and this has never been an issue for me when contributing to Linux. You try to explain at a technical level why the code is correct in your opinion
rsalvaterra has joined #dri-devel
<javierm> I've seen engineers from silicon vendors even quoting verbatim parts of their non-public technical reference manual or datasheet
<DemiMarie> mareko: why against CoC? I thought that NAK based on “I cannot understand why this is correct” would be valid
<javierm> I guess they contact their lawyers first though to do that :)
<javierm> Sigh, I can't even type without coffee
<karolherbst> DemiMarie: correct
<mareko> DemiMarie: we generally don't care what other driver teams do in their own subdirectory
rsalvaterra_ has quit [Ping timeout: 480 seconds]
<DemiMarie> mareko: I guess you can always say, “this is what the hardware requires”.
<mareko> that's usually true
<Frogging101> I like that AMDVLK is full of helpful comments about hardware quirks
<Frogging101> It's a gold mine
<DemiMarie> mareko HdkR: when I do code review, my usual standard is “Do I understand this change, what it does, and why it is correct?”. That is why this was so shocking.
<DemiMarie> I really wish silicon vendors would just open up their hardware documentation.
Lyude has quit [Ping timeout: 480 seconds]
<HdkR> DemiMarie: Sadly when you're poking hardware, especially reverse engineered hardware, sometimes the only documentation is "This is how we saw it done by black box RE"
<DemiMarie> HdkR: yeah, that is why closed hardware sucks
<DemiMarie> karolherbst: the lack of good public GPU docs is also one reason Qubes OS uses software rendering
rsalvaterra_ has joined #dri-devel
rsalvaterra is now known as Guest1557
rsalvaterra_ is now known as rsalvaterra
<DemiMarie> When it comes to security, the standard is not “does it seem to work?”, but rather “is an attacker unable to break it?”
<javierm> DemiMarie: even if the documentation is not public, the person writing the patch can make an effort to explain the rationale of the change and add proper code comments, commit description, etc
<DemiMarie> javierm: true
<javierm> DemiMarie: it's just that you will have to trust the person writing the patch that their assumptions are true, since you don't have a way to verify them
Guest1558 has quit [Ping timeout: 480 seconds]
<DemiMarie> javierm: I guess that is the main diffference
<DemiMarie> When reviewing drive-by contributions, one must remember that the code comes from an unknown source and thus might even be malicious
<javierm> DemiMarie: yeah, agree. At the very least a wrong assumption could cause a denial of service (i.e. panic the kernel due to a driver bug or whatever)
<DemiMarie> javierm: or worse, it could introduce a backdoor
<airlied> people introduce backdoors everyday, without even trying
<DemiMarie> Obviously this is exceedingly unlikely, but it is something that has to be guarded against
<airlied> not sure why you'd try and sneak one in :-P
<mareko> DemiMarie: sometimes there are even no internal docs
<DemiMarie> mareko: not even the VHDL/Verilog/etc hardware source code?
<javierm> mareko: reserved registers or fields that people are using but you don't know why :)
<mareko> DemiMarie: that can be access-restricted to internal people too
<javierm> DemiMarie: it may exist but that doesn't mean that a particular team could get access to it
guru_ has quit []
<DemiMarie> mareko javierm: I forgot about corporate siloing.
oneforall2 has joined #dri-devel
<DemiMarie> airlied: someone tried to sneak one into Linux circa 2003
<Frogging101> Graphics drivers are so full of "denial of service" class issues that I think it's just considered normal
<Frogging101> Aren't GPU hangs in vulkan drivers not necessarily a bug?
<airlied> DemiMarie: but in that time we've probably added hundreds
<DemiMarie> The standard I usually use is, “You are a cloud provider. Would you be willing to trust this to isolate your tenants from each other?”
Duke`` has joined #dri-devel
<DemiMarie> Assume that the financial consequences (fines, lost customer trust, etc) of a breach are severe
<airlied> hehe sgx :-P
<mareko> so far only some CPUs ended up in trouble
<mareko> and people keep buying them anyway
<DemiMarie> For some Qubes users, the consequences of a VM escape can literally be threatening to life or liberty.
<DemiMarie> I want Qubes OS to have GPU acceleration, but so far SR-IOV is the only means for doing so that I would be willing to trust in this kind of threat model, unless someone can convince me otherwise.
<DemiMarie> Would you trust Linux’s GPU drivers in that environment?
<DemiMarie> Serious question
<HdkR> I wonder if that SR-IOV hang issue with AMD GPUs ever got resolved.
<DemiMarie> No idea
<DemiMarie> All I know is that no recent AMD GPU has public SR-IOV support.
<HdkR> The Pro stuff does
<DemiMarie> Public = with open source drivers
<DemiMarie> Preferably that are upstream
<HdkR> I mean Radeon Pro, the hardware. Should just work with upstream stuff
<HdkR> At least that's what I've been told and I'll be really frustrated if I buy one and it doesn't
<DemiMarie> HdkR: That would very much be news to me
<qyliss> Me too
<airlied> I wonder if the intel stuff will have sriov on the consumer cards
<DemiMarie> airlied: yes
<DemiMarie> How much do Radeon Pro’s cost?
<HdkR> DemiMarie: Cheapest one is $250
<HdkR> Radeon Pro W6400
<qyliss> the only ones that I know of that do SR-IOV are the V340 (which when I last looked last year could only do SR-IOV with VMware), and the V520 which you literally can't buy because it's "only available as a public cloud offering"
<HdkR> Should support 16VMs, but the rest of the hardware I want to stick it in isn't shipping yet :|
<HdkR> So I haven't tested
<qyliss> HdkR: do you know where you heard that they do SR-IOV?
<HdkR> It was in some press material and a contact
saurabhg has quit [Ping timeout: 480 seconds]
<DemiMarie> qyliss: thoughts on what I wrote regarding security?
<qyliss> what specifically? I'm catching up on 2 days of IRC :)
<DemiMarie> that given the Qubes OS threat model (highly skilled attackers who have already breached e.g. a web browser), just “it is a security boundary” isn’t enough for Qubes to trust it
<airlied> HdkR: I'd keep the returns form handy just in case it's not true :-P
Lyude has joined #dri-devel
<HdkR> airlied: I'll complain very loudly if it doesn't
<qyliss> I've been searching for the last 5 mins or so and haven't found anything to suggest it does…
* airlied can't see it on the specs page
<HdkR> Well now I just need to nab one early and see if my planned setup is a failure
<daniels> robclark, karolherbst: we did look at a bot for monitoring runner status, yeah. the bad news is that doing so requires a token for an account with admin privs and the ability to make API calls from there. I really don't like that. discussed some alternatives and didn't come up with anything good.
<DemiMarie> daniels: file a ticket with GitLab upstream?
<daniels> DemiMarie: it was already discussed, there are no particularly great answers on the whole
<daniels> there is another way we can pull the data, but, time
saurabhg has joined #dri-devel
fab has joined #dri-devel
cuolin^ has quit [Remote host closed the connection]
Duke`` has quit [Ping timeout: 480 seconds]
itoral has joined #dri-devel
nchery has joined #dri-devel
mvlad has joined #dri-devel
sdutt has quit [Remote host closed the connection]
fab has quit [Quit: fab]
danvet has joined #dri-devel
frieder has joined #dri-devel
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
fab has joined #dri-devel
fxkamd1 has joined #dri-devel
andrey-konovalov has quit [Read error: Connection reset by peer]
andrey-konovalov has joined #dri-devel
fxkamd has quit [Ping timeout: 480 seconds]
lemonzest has joined #dri-devel
rasterman has joined #dri-devel
mbrost_ has joined #dri-devel
Daanct12 has quit [Quit: Leaving]
mbrost has quit [Ping timeout: 480 seconds]
ahajda has joined #dri-devel
jkrzyszt has joined #dri-devel
mauld has joined #dri-devel
rgallaispou has joined #dri-devel
mbrost_ has quit [Remote host closed the connection]
mbrost_ has joined #dri-devel
mbrost__ has joined #dri-devel
mbrost_ has quit [Ping timeout: 480 seconds]
rgallaispou has quit [Read error: Connection reset by peer]
<DemiMarie> Why will it take so long for Intel SR-IOV support to be upstreamed?
pcercuei has joined #dri-devel
mbrost__ has quit [Ping timeout: 480 seconds]
<kusma> Hmm... I'm seeing crashes in threaded-context init (debug_get_option_trace_format returns a NULL pointer) on main...
<kusma> Started happening recently
<emersion> vsyrjala: hm when you say "fallback to blit", who does the blit? kernel or xorg?
<emersion> xorg right?
frankbinns has joined #dri-devel
kts has joined #dri-devel
<vsyrjala> yeah xorg
kts has quit [Quit: Konversation terminated!]
<emersion> vsyrjala: so it seems pretty important for user-space to know whether it's possible to do an async flip?
<emersion> ie, xorg wants the kernel to reject the update if async flip isn't possible
rkanwal has joined #dri-devel
gio has joined #dri-devel
fahien has joined #dri-devel
<vsyrjala> seems somewhat important for benchmarky stuff that you can actually turn off vsync, whether or not the kms driver supports async flips in the current state of the system
<emersion> my question is mainly about the uAPI
<emersion> if the kernel silently falls back to vsync, then it's not possible for xorg to fall back to blit
<emersion> or am i missing something here?
<linkmauve> “22:56:52 karolherbst> but until today, where is this awesome microkernel?”, Nintendo used a microkernel-based system in both the 3DS and the Switch, and aside from bugs which let userland take over important privileged processes it was a nice architecture I think.
<linkmauve> DemiMarie, ↑
<vsyrjala> emersion: i don't think you're missing anything. silent sync flip fallback would prevent the blit fallback behaviour
<vsyrjala> i guess you could in theory check whether the flips are really happening async after submitting them, and then stop issuing async flips and switch to blits. but that seems overly complicated
<emersion> okay, thanks, so I guess it really makes sense to fail in the new uAPI then
srslypascal is now known as Guest1573
srslypascal has joined #dri-devel
Guest1573 has quit [Ping timeout: 480 seconds]
<emersion> vsyrjala: btw, is amdgpu wrong to silently fallback to sync flip for the legacy uAPI?
kts has joined #dri-devel
<vsyrjala> i think so. but the uapi is not properly specified so i guess one can argue either way
<emersion> right, but it would be better to let xorg do its blitting
<vsyrjala> in which cases does it do the fallback? just wondering why no one is complaining that their 'vblank_mode=0 <benchmark>' is capped at 60fps...
<hakzsam> jekstrand: replied on the dyn MR, found all the regressions, I guess CTS would pass now on Navi2x (at least) :)
<emersion> it has a concept of "fast update"
<emersion> hm, let's see…
<emersion> so, it cannot do async flip when:
<emersion> - planes/CRTCs are enabled/disabled
<emersion> - modeset is needed
<emersion> - scaling changes
<emersion> there may be more
<emersion> ah yeah there are more
<emersion> format change, color space change, alpha prop change…
<emersion> some modifiers changes (DCC <-> non-DCC, some DCC layout changes)
<emersion> pitch change
<emersion> oh hm
<emersion> check_boundary_crossing_for_windowed_mpo_with_odm() has some pretty involved logic
<emersion> it seems like some plane position changes may prevent fast updates as well
<emersion> like cursor over the overlay or something
<emersion> weirdly enough idle_optimizations_allowed seems to force all updates to be full instead of fast
lynxeye has joined #dri-devel
sven has quit [Quit: ZNC 1.8.2 - https://znc.in]
sven has joined #dri-devel
saurabhg has quit [Remote host closed the connection]
saurabhg has joined #dri-devel
Vanfanel has joined #dri-devel
kts has quit [Ping timeout: 480 seconds]
saurabh_1 has joined #dri-devel
<MrCooper> mareko: "flatpak ships its own Mesa in the sandbox" is incorrect; Mesa is shipped in flatpak runtimes / extensions, which can be and are updated separately from flatpak and the apps themselves
<MrCooper> mareko: please don't bypass Marge
<MrCooper> airlied Kayden mdnavare: mutter already does TEST_ONLY for direct scanout of client buffers, so that's not an issue
devilhorns has joined #dri-devel
fahien has quit [Quit: fahien]
saurabhg has quit [Ping timeout: 480 seconds]
saurabh_1 has quit [Ping timeout: 480 seconds]
warpme___ has joined #dri-devel
dakr has joined #dri-devel
itoral has quit []
tursulin has joined #dri-devel
JohnnyonFlame has quit [Read error: Connection reset by peer]
<Vanfanel> Should WLRoots-based compositors work on the DRM backend without calling wlr_output_set_mode()?
<Vanfanel> I don't mean if they *do* (they don't) but if they *should*
pac85 has joined #dri-devel
<pac85> MrCooper: iirc flatpak associates an integer to indicate which abi a runtime exposes and which abi something needs. Now if one wants to run a very old game it might have a dependency on an old runtime with a low abi number and I guess flatpak could fail finding a recent enough driver if the package is, say, from 15 years ago, right?
<MrCooper> right, though it might still be possible to update Mesa in a runtime extension (honestly not sure though)
<pac85> If that's the case I wouldn't want to ship my game with it. There are examples of well-thought-out Linux game binary releases that work after 20 years
<pac85> MrCooper: I think it would be a problem in itself to backport mesa to a potentially decades old runtime
<pac85> But really it seems to me like they didn't have that usecase in mind
tursulin has quit [Quit: Konversation terminated!]
tursulin has joined #dri-devel
<MrCooper> flatpak runtimes don't seem that different from the Steam runtime conceptually though
saurabhg has joined #dri-devel
<pac85> MrCooper: looks totally different to me. Steam runtimes are really just a fallback for when libraries are not present on the system. The way it works is it loads the newest version between what's on the system and what's on the runtime (plus some library specific logic). This is because they wanted to load system drivers which implies system libc which implies some other system libraries
<pac85> Then they added pressure vessel which just hides libraries that aren't present on the runtime
<pac85> I'm simplifying things but that's the basic idea
<pac85> A Flatpak runtime is a full blown container with its own libc and everything and just can't load anything from the host unless it is totally static (which iirc is true for the proprietary nvidia driver).
mattst88 has quit [Quit: leaving]
mattst88 has joined #dri-devel
Thymo has quit [Quit: ZNC - http://znc.in]
<swick> we really should have a static mesa build that we can put in any runtime if the runtime has a new enough libc
YuGiOhJCJ has joined #dri-devel
saurabhg has quit [Ping timeout: 480 seconds]
devilhorns has quit [Remote host closed the connection]
devilhorns has joined #dri-devel
mi6x3m has joined #dri-devel
<mi6x3m> hey, just wanted to report something is wrong with the 22.1.7 build process
<mi6x3m> it requires DRI3 functions although not compiled with -Ddri3
<mi6x3m> and even when compiled with -Ddri3, -lxcb-dri3 is not in the dependencies of pkg-config
saurabhg has joined #dri-devel
pac85 has quit [Remote host closed the connection]
pac85 has joined #dri-devel
<mi6x3m> also getting undefined references for libexpat
saurabhg has quit [Ping timeout: 480 seconds]
heat has joined #dri-devel
saurabhg has joined #dri-devel
pac85 has quit [Remote host closed the connection]
Vanfanel has quit [Quit: leaving]
pac85 has joined #dri-devel
pac85 has quit [Remote host closed the connection]
pac85 has joined #dri-devel
JoniSt has joined #dri-devel
rgallaispou has joined #dri-devel
<JoniSt> jenatali: Would you mind taking a look at the VS2019 pipeline failure in Mesa MR 17338 when you have time? I think I need some help from a Windows expert - p_atomic_cmpxchg doesn't compile when used with pointer variables... :)
<jenatali> JoniSt: Yeah, it's 5am here but I'll take a look later. Unless I forget so feel free to ping me later if you don't hear from me :P
<JoniSt> Thanks a lot! :P
fahien has joined #dri-devel
pac85 has quit [Remote host closed the connection]
pac85 has joined #dri-devel
<MrCooper> pac85: you're describing the current state of the Steam runtime; in the beginning, it didn't pull in host libraries, which resulted in issues when the host Mesa drivers were linked against newer e.g. libc. Since flatpak runtimes are fundamentally similar, something like what Steam does now might be possible with flatpak as well
<MrCooper> (hmm, or am I mixing that up with games shipping outdated libstdc++? It's been a while, memory's getting fuzzy)
RSpliet has quit [Quit: Bye bye man, bye bye]
sdutt has joined #dri-devel
RSpliet has joined #dri-devel
sdutt has quit []
sdutt has joined #dri-devel
<kisak> sounds right, Steam's LD_* scout runtime, before the library pinning logic was added.
<kisak> pressure-vessel does a painfully complex setup to use the host libraries in Steam Linux Runtime container. Technically, it breaks any ABI promises which Flatpak can offer.
<MrCooper> doesn't the Steam runtime make similar ABI promises to games?
<kisak> as far as I know, it's best effort, not a promise. If a game is built against the runtime and there's an issue, then the runtime maintainers will take a look and try to figure it out, but that doesn't change it being broken in the wild in the mean time.
<pac85> I think the libraries that are loaded from the system are those with stable APIs
<kisak> pac85: stable APIs does not mean a stable ABI interface for prebuilt binaries.
Thymo has joined #dri-devel
<pac85> kisak: I should have said "those with stable ABIs"
lkw has joined #dri-devel
<pac85> Like glibc
<pac85> Or sdl
<kisak> glibc doesn't have a great ABI track record
<pac85> Afaik they keep it stable and consider ABI breaks bugs
<pac85> Not loading glibc is not an option if you want to load system drivers right?
ishitatsuyuki has left #dri-devel [https://quassel-irc.org - Chat comfortably. Anywhere.]
<kisak> the quick example that comes to mind, is SSE becoming more common and breaking 32 bit games unless 32 bit glibc was built with -mstackrealign.
mi6x3m has quit [Quit: Leaving]
<pac85> I see but then again there is no choice there, not loading system glibc makes everything super hard
<kisak> I wasn't suggesting otherwise, the key detail there is best effort versus promised ABI stability
<pac85> kisak: So I guess flatpak won't ever switch to steam's approach?
<kisak> I'm not able to answer that.
<pac85> Though tbh for a problem like the one you mentioned to happen it would take years since release and it seems more likely to me for flatpak's approach to break in the meantime
rgallaispou has quit [Read error: Connection reset by peer]
<pac85> The concept of a container is based on the fact that linux has stable ABIs, that's not true when you throw graphics into the mix though (not in the long term at least). The way drivers are managed by flatpak looks like a hack to me, especially how it does it for the nvidia driver
Company has joined #dri-devel
kts has joined #dri-devel
pac85 has quit [Remote host closed the connection]
Jeremy_Rand_Talos__ has joined #dri-devel
Jeremy_Rand_Talos_ has quit [Remote host closed the connection]
RSpliet has quit [Quit: Bye bye man, bye bye]
RSpliet has joined #dri-devel
tlwoerner has quit [Quit: Leaving]
tlwoerner has joined #dri-devel
fab has quit [Quit: fab]
<emersion> eric_engestrom: can you add your S-o-b to the libdrm commit as well?
<emersion> and also update the commit message
fahien has quit [Ping timeout: 480 seconds]
khfeng has quit [Ping timeout: 480 seconds]
<jenatali> JoniSt: Pushed a pair of patches that build for me locally
<JoniSt> jenatali: Neat, thanks a lot! I'll probably reorder the commits a bit (u_atomic first so people don't get bitten when bisecting) and then it should all be ready :)
<jenatali> JoniSt: Yeah makes sense, that's what I'd have suggested anyway, and then you can squash my fixup into your main patch
<JoniSt> Yup, that's what I intended to do! :)
kts has quit [Read error: Connection reset by peer]
kts has joined #dri-devel
TMM has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM has joined #dri-devel
<jenatali> jekstrand: If you get a chance, would you be able to take a look at https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17339/diffs?commit_id=795c0619be8da98a1d22981bd9b678c9cf459700? Or else maybe nominate someone to? :P
<JoniSt> jenatali: Did you mean to write "p_atomic_cmpxchg_ptr" in the Solaris portion of the patch? (Lines 267, 269)
<jenatali> JoniSt: Doh, yep, good catch
<jenatali> Too bad there's no Solaris in CI...
<JoniSt> I'll fix it up while I squash the stuff :)
fab has joined #dri-devel
kts has quit [Ping timeout: 480 seconds]
rgallaispou has joined #dri-devel
tursulin has quit [Read error: Connection reset by peer]
tobiasjakobi has joined #dri-devel
tobiasjakobi has quit [Remote host closed the connection]
pcercuei has quit [Read error: Connection reset by peer]
pcercuei has joined #dri-devel
tursulin has joined #dri-devel
<JoniSt> jenatali: Fixed, squashed and r-b added. I don't have the necessary permissions to assign to Marge, so unless you feel like this needs another review, it'd be great if you could assign to Marge for me! :)
<emersion> vsyrjala: do you happen to know where the Xorg logic to fallback to blit when async flip fails is implemented?
<emersion> i've tried looking in drivers but couldn't find it, is it in common code?
<emersion> trying to see if my amdgpu changes would regress Xorg or not here
<JoniSt> Thanks, fingers crossed!
<eric_engestrom> emersion: good point, updating :)
<MrCooper> emersion: the Xorg driver flip hook should return FALSE if the ioctl returns an error, which results in present_execute falling back to present_execute_copy
<emersion> oh, now i see it
<emersion> thanks!
<MrCooper> no worries
Duke`` has joined #dri-devel
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
<eric_engestrom> emersion: one down, three to go (omap, exynos, tegra); I'm not interested in those though (and I don't know where they're used or not) so I won't be doing them
<emersion> ack!
<jenatali> David Heidelberg: I've seen that when one of the processes crashes, the rest of the tests that were supposed to be run in that instance are marked missing I think
ybogdano has joined #dri-devel
<jekstrand> jenatali: Commented. It was already on my list for today. :D Mostly looks ok. Just a couple nits and a bit of ? on the interface.
<jenatali> Thanks!
<jenatali> Hopefully I'll be able to address your comments before I disappear on Thursday
<jekstrand> Should be able to. None of it was earth-shattering. :)
<JoniSt> Hmm... https://gitlab.freedesktop.org/mesa/mesa/-/jobs/27578656 seems to have some "fun" with Machine Check Errors on the hardware... That's slightly concerning :D
<JoniSt> (Not my MR, just saw it while browsing around)
frieder has quit [Remote host closed the connection]
rkanwal has quit [Ping timeout: 480 seconds]
jkrzyszt has quit [Ping timeout: 480 seconds]
pac85 has joined #dri-devel
pac85 has quit [Read error: Connection reset by peer]
pac85 has joined #dri-devel
devilhorns has quit [Quit: Leaving]
cengiz_io has joined #dri-devel
srslypascal has quit [Ping timeout: 480 seconds]
srslypascal has joined #dri-devel
lkw has quit [Ping timeout: 480 seconds]
mbrost has joined #dri-devel
fahien has joined #dri-devel
mbrost_ has joined #dri-devel
tursulin has quit [Ping timeout: 480 seconds]
mbrost has quit [Ping timeout: 480 seconds]
tales-aparecida0 is now known as tales-aparecida
lynxeye has quit [Quit: Leaving.]
pac85 has quit [Remote host closed the connection]
pac85 has joined #dri-devel
saurabhg has quit [Ping timeout: 480 seconds]
pac85 has quit [Remote host closed the connection]
pac85 has joined #dri-devel
fahien has quit [Quit: fahien]
mbrost_ has quit [Remote host closed the connection]
mbrost_ has joined #dri-devel
pac85 has quit [Remote host closed the connection]
pac85 has joined #dri-devel
mbrost_ has quit [Ping timeout: 480 seconds]
heat has quit [Remote host closed the connection]
heat has joined #dri-devel
bluetail21 has quit [Read error: Connection reset by peer]
bluetail21 has joined #dri-devel
mbrost has joined #dri-devel
<karolherbst> so uhm... I kind of want to merge Rusticl, but I also haven't seen enough discussion on the mailing list to judge whether people are generally okay with it and whether major concerns are already addressed. Also would be good if one could take another look at the CI changes for a final review: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/15439
warpme___ has quit []
nchery is now known as Guest1602
nchery has joined #dri-devel
bluetail21 has quit [Read error: Connection reset by peer]
bluetail21 has joined #dri-devel
<airlied> karolherbst: I can offer no other advice than: DO IT!
<jekstrand> Worst case, someone gets pissed and it gets reverted.
ybogdano has quit [Ping timeout: 480 seconds]
rkanwal has joined #dri-devel
Guest1602 has quit [Ping timeout: 480 seconds]
warpme___ has joined #dri-devel
dakr has quit [Remote host closed the connection]
bluetail215 has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
bluetail21 has quit [Read error: Connection reset by peer]
Duke`` has joined #dri-devel
lkw has joined #dri-devel
Haaninjo has joined #dri-devel
ybogdano has joined #dri-devel
srslypascal is now known as Guest1603
srslypascal has joined #dri-devel
Guest1603 has quit [Ping timeout: 480 seconds]
srslypascal is now known as Guest1604
srslypascal has joined #dri-devel
<karolherbst> airlied, jekstrand: k... :D though I still want to have review on the CI bits. The meson ones already are.
lemonzest has quit [Quit: WeeChat 3.5]
Guest1604 has quit [Ping timeout: 480 seconds]
<anholt> karolherbst: I gave you ci reviews :)
<karolherbst> yeah, already working on it :)
mbrost_ has joined #dri-devel
srslypascal is now known as Guest1605
srslypascal has joined #dri-devel
heat_ has joined #dri-devel
mbrost has quit [Ping timeout: 480 seconds]
heat has quit [Read error: No route to host]
Guest1605 has quit [Ping timeout: 480 seconds]
mvlad has quit [Remote host closed the connection]
pac85 has quit [Ping timeout: 480 seconds]
mbrost_ has quit [Ping timeout: 480 seconds]
cengiz_io_ has joined #dri-devel
pcercuei has quit [Read error: Connection reset by peer]
pcercuei has joined #dri-devel
pa has joined #dri-devel
<pa> hi folks, anyone familiar with libdrm ?
<pa> meson builddir gives me ERROR: lexer on libdrm_kw = {}
cengiz_io has quit [Ping timeout: 480 seconds]
srslypascal is now known as Guest1606
srslypascal has joined #dri-devel
Guest1606 has quit [Ping timeout: 480 seconds]
<emersion> pa: is it due to one of the latest commits?
<pa> emersion: tbh i have no idea.. i was trying to follow a guide to build a software dependent on libdrm
<pa> so i simply checked it out and tried to build.. The guide was actually referring to autogen.sh for building
<emersion> which meson version?
<pa> emersion: 0.45.1
ybogdano has quit [Ping timeout: 480 seconds]
<pa> oh i see thanks
<pa> emersion: do you know what is the last version of libdrm using autogen ?
<emersion> no, and it's not supported
paulk has quit [Quit: WeeChat 3.0]
<pa> i mean i'm on a ubuntu 18.04 with 5.4.0 kernel, not even sure latest version will work for me
<emersion> ie, please don't report bugs for it
<jekstrand> airlied: Running it through CI now but ANV seems to work when I ran crucible through it.
<jekstrand> airlied: This goes a bit further than your MR. I ditched the "framework" concept for command buffers and went with a vfunc table which drivers populate and then everything lives inside vk_command_pool.c.
mbrost has joined #dri-devel
bluetail215 has quit [Read error: Connection reset by peer]
bluetail215 has joined #dri-devel
bluetail215 has quit []
bluetail215 has joined #dri-devel
ybogdano has joined #dri-devel
lkw has quit [Ping timeout: 480 seconds]
dakr has joined #dri-devel
<airlied> jekstrand: blows up on lvp somehow :-)
gbelgurr has quit [Quit: IRCNow and Forever!]
<airlied> but I'll take a closer look later
<airlied> jekstrand: does it suffer the same problem mine did with the object being cleared on init and losing stuff?
gio has quit [Ping timeout: 480 seconds]
<jekstrand> airlied: No, I fixed that
<jekstrand> airlied: I'll fix LVP in a few minutes. Need to run to the pharmacy quick.
<jekstrand> airlied: Looks like vk_cmd_list_reset isn't begin called somewhere
nchery has quit [Read error: Connection reset by peer]
nchery has joined #dri-devel
<jekstrand> Or maybe it's leaking whole command buffers?
<jekstrand> Anyway, something's leaking. I'll fix it.
<airlied> yeah anv is toast as well, so should be simple to spot
<jekstrand> drpp...
<jekstrand> no ResetCommandBuffers
gbelgurr has joined #dri-devel
<jekstrand> Or, rather, I named it wrong
lkw has joined #dri-devel
<jekstrand> CI is running again
bluetail215 has quit []
bluetail215 has joined #dri-devel
JohnnyonFlame has joined #dri-devel
bluetail215 has quit []
bluetail215 has joined #dri-devel
gbelgurr has quit []
paulk has joined #dri-devel
ybogdano has quit [Ping timeout: 480 seconds]
<jenatali> I'd like to request developer permissions on behalf of sivileri (Sil Vilerino) on my team who's been doing the video work. I believe his contributions meet the requirements but I'm failing to find the docs that outline the actual process
bluetail215 has quit []
bluetail215 has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
mbrost_ has joined #dri-devel
rasterman has quit [Quit: Gettin' stinky!]
ybogdano has joined #dri-devel
bluetail215 has quit [Ping timeout: 480 seconds]
mbrost has quit [Ping timeout: 480 seconds]
<jenatali> javierm: "At the maintainer's discretion, they may give you access to directly push your own code to the project."
<daniels> yeah, there is no doc for Mesa
<daniels> the process is, ask around in any of the three available fora, see if anyone objects, if not then do it
lkw has quit [Quit: leaving]
<jenatali> daniels: Right, except in my case "do it" needs to be on someone else AFAIK since I just have the developer and not maintainer role
<airlied> yeah it's normally 25 commits or so
<jenatali> Right, he's at 39 so far looks like
<jekstrand> jenatali: Should be fine.
bluetail215 has joined #dri-devel
pa has quit [Quit: quit]
<glehmann> btw, there's an imgtec project access request on the issue tracker that has been ignored for over 4 weeks
pa has joined #dri-devel
<mattst88> eric_engestrom, emersion: want to tag a new libdrm release?
<Frogging101> With merging rusticl, is there any heads-up needed to devs about build breakage?
<Frogging101> Since presumably it will need additional build tooling
alanc has quit [Remote host closed the connection]
i-garrison has quit [Ping timeout: 480 seconds]
alanc has joined #dri-devel
fab has quit [Quit: fab]
gbelgurr has joined #dri-devel
<airlied> Frogging101: not if you don't want to build rusticl
<airlied> which I assume most people won't initially, not many people build clover
<Frogging101> Ah, so it won't be a default driver
<DemiMarie> Has anyone tried fuzzing Mesa’s compilers?
<jekstrand> Yes, people have
<DemiMarie> Nice
<airlied> Frogging101: it's not a driver at all, it's a gallium frontend
<airlied> I think the glsl frontend has seen a lot of fuzzing, not sure how much the spir-v has
<DemiMarie> What about the backends?
<DemiMarie> Those that go from NIR to GPU machine code
<Frogging101> Oh right but I meant like a default -Dgallium_drivers or whatever the build opt is
<jekstrand> airlied: SPIR-V has seen a decent bit
<DemiMarie> Has any of the fuzzing checked for wrong generated code?
<jekstrand> Most of the fuzzing has been end-to-end. Plug in different bits of GLSL or SPIR-V and make sure it renders the right thing.
<DemiMarie> That is nice
danvet has quit [Ping timeout: 480 seconds]
kmn has quit [Ping timeout: 480 seconds]
gouchi has joined #dri-devel
kmn has joined #dri-devel
gouchi has quit []
<airlied> karolherbst, jekstrand : where did generic pointers get to? something for rusticl post merge?
<jekstrand> airlied: They've been landed for like 2 years. :P
i-garrison has joined #dri-devel
<jekstrand> airlied: There might be a couple rusticl bits in a branch somewhere. But the core stuff has been there for a long time.
<karolherbst> airlied: don't need it
<karolherbst> it compiles kernels with generic pointers just fine as long as we are able to track them down to the original type
<karolherbst> there was a bug where we had a generic pointer == NULL and didn't optimize it, so that's where things fell apart
<karolherbst> but that's fixed
<karolherbst> generic pointers will become more relevant once we do proper function calling
<karolherbst> jekstrand: nope. It's all in the MR
<karolherbst> for CL 3.0 conformance only iris fixes are needed on top of the rusticl MR
<karolherbst> I do have some WIP branch for gl_sharing and other bits
rsripada has joined #dri-devel
bluetail215 has quit []
bluetail215 has joined #dri-devel
warpme___ has quit []
ybogdano has quit [Ping timeout: 480 seconds]
sarnex has quit [Read error: Connection reset by peer]
sarnex has joined #dri-devel
nchery is now known as Guest1621
nchery has joined #dri-devel
Guest1621 has quit [Ping timeout: 480 seconds]
ybogdano has joined #dri-devel
ahajda has quit [Quit: Going offline, see ya! (www.adiirc.com)]
rkanwal has quit [Quit: rkanwal]
<jenatali> jekstrand: daniels: Regarding developer permissions, are there more maintainers that need to come to a consensus? Or would one of you be willing to flip the switch?
<Sachiel> I'd argue that it's your switch that needs to be flipped, as the maintainer of all the windows stuff
<jenatali> I'd be fine with that, sure
dakr has quit [Ping timeout: 480 seconds]
<jekstrand> jenatali: Not really. They need a couple dozen commits and someone in good standing who can vouch for them. That's pretty much the bar.
<Sachiel> jekstrand: yeah, that's been cleared, he just can't do it himself because he lacks the appropriate permissions
<jenatali> Yup
<jekstrand> jenatali: Did you open an issue?
<jenatali> Uh... no. Was I supposed to?
<jekstrand> Yup. Open an issue and tag it with "developer access request" or whatever it's called. "access" is in the name.
mangix has quit [Read error: Connection reset by peer]
<jekstrand> jenatali: Yeah, that should probably be documented somewhere
<jenatali> jekstrand: Yeah, that should be added to https://docs.mesa3d.org/repository.html#developer-git-access
mangix has joined #dri-devel
<jenatali> There we go, #7160
<jekstrand> done
<jenatali> Thanks :)
Haaninjo has quit [Quit: Ex-Chat]
pcercuei has quit [Quit: dodo]
khfeng has joined #dri-devel
nchery has quit [Ping timeout: 480 seconds]