ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
<graphitemaster> Would be nice to build radeonsi for Windows some day.
tursulin has quit [Read error: Connection reset by peer]
<graphitemaster> Probably quite a lot of work there I imagine.
<airlied> graphitemaster: building it isn't the problem, making it useful is
* alyssa shivers at macOS mesa driver memories
vivek has quit [Ping timeout: 480 seconds]
<graphitemaster> Seems weird to me AMD has all these different driver offerings. I don't actually know which ones they actively support but like how many of them are there, like three or something? amdgpu, amdgpu-pro, whatever their Windows thing is
<FLHerne> graphitemaster: James Park put a lot of effort into making RADV build on Windows https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/6162
<FLHerne> I have a vague memory that he did the same for radeonsi, but I haven't found it
<airlied> his use case isn't to actually run the driver, but to use the compiler, so no use for radeonsi there
<FLHerne> anyway, it's only useful if you implement the kernel interface and/or patch Mesa to use Windows' very different one, and he was quite secretive about the end goal
<FLHerne> airlied: Oh, he actually said? I must have missed that
<alyssa> airlied: Why does that need all of radv?
<alyssa> I'm running the Bifrost/Valhall compiler on macOS... panfrost/panvk very much do not build, of course.
<airlied> FLHerne: no he didn't, I talked to him in private about it :-)
<graphitemaster> AMD seems pretty open about openness, it's weird they don't just have one unified driver with mesa as the GL offering on all the platforms, much like NV does with their proprietary driver. I get there's some qualms with D3D, but Gallium has a D3D implementation too. Anything newer can probably be delegated to like dxvk :P
<airlied> alyssa: it's a bit more complicated, he wants the vulkan api in parts
pnowack has quit [Quit: pnowack]
<alyssa> airlied: because AMD bakes pipeline state into shaders?
<airlied> alyssa: yes you need more than just the compiler
<alyssa> got it
<airlied> graphitemaster: most companies will have one or two driver implementations
<airlied> sw design reflects the org chart
<jenatali> Can confirm
<airlied> conway's law
<graphitemaster> What even is tech debt anyways
<airlied> also in large companies the mgmt prestige is in budget/team size; saving money by consolidating teams can be seen as losing by some VPs
<alyssa> Capitalism: the most efficient system, thanks to market forces 🤔
<jenatali> At the end of the day, you're not going to get a non-WDDM-based kernel driver on Windows, and I haven't seen any open-source usermode stack that can talk to a Windows WDDM driver, for whatever reason
<HdkR> Is WDDM documented enough for someone to write one these days?
Hi-Angel has quit [Ping timeout: 480 seconds]
<airlied> jenatali: I think virtualbox might have something close
<airlied> HdkR: lols
<jenatali> HdkR: It should be
<jenatali> airlied: I believe VirtualBox does a WDDM driver, yeah
<alyssa> jenatali: With my Mesa/Linux/free-sw developer hat on and my Collabora hat off -- I'd be ok with WDDM stuff in Mesa if Intel/AMD were serious about using Mesa for their OpenGL/Vulkan drivers on Windows.
<alyssa> (Provided Microsoft blessed it so there weren't licensing issues etc.)
<HdkR> Neat. Last I knew it wasn't documented. Good to know :D
<FLHerne> Then you just need a set of DX12 drivers
nchery has quit [Quit: Leaving]
<alyssa> DX12 in Mesa is admittedly a harder sell.
<airlied> alyssa: the big problem is getting stable kernel ABIs :-P
<jenatali> alyssa: https://github.com/microsoft/libdxg - the headers are now free of licensing concerns
<airlied> you'd have to start by writing WDDM drivers
<graphitemaster> Anyways mesa (radeonsi) on amdgpu consistently beats AMD's Windows driver in GL buffer performance in every single category in every single version of GL I tested, including ES. It's a worthless bench but I just wanted a good baseline since I've been comparing with an equivalently spec'd NV which is not fair. Whole thing makes me just want to use mesa on Windows if I can because like several things run faster, including draw calls
<graphitemaster> (no thanks to the absolute monstrosity of that compute culling work in mesa)
<HdkR> Just use the various D3D to Vulkan translation layers on Windows
<alyssa> jenatali: Great. Your move, Intel/AMD ;-)
<jenatali> graphitemaster: Maybe eventually GLOn12 (or zink) would end up being a better solution, both of which work on Windows
<graphitemaster> It looks like Microsoft is serious about building their own GL drivers now on DX12
<graphitemaster> To the point that I doubt AMD or Intel is going to bother with GL anymore there :|
<alyssa> OpenGL is dead, long live OpenGL!
<mattst88> what would you be missing out on if Intel stopped providing a Windows GL driver? it's not like it's some masterpiece :)
<airlied> yeah GL on Windows is definitely an afterthought in a lot of places
<airlied> so is Vulkan I expect
<graphitemaster> You'll never kill my beautiful procedural friendly, state machine graphics layer with a declarative, worse one (*cough cough Vulkan*)
<mattst88> I mean, Intel's GL driver on Windows was pretty terrible long before Vulkan came around as well :)
<graphitemaster> Who let the functional programmers near graphics anyways /s
<alyssa> (awkward (kinda))
<graphitemaster> (I see (what you (did there)))
<airlied> mattst88: yeah it's always been a horror show, tick-box, not like anyone ever ran CAD apps
<graphitemaster> Intel's Windows GL drivers ironically haven't given me nearly as many issues as AMD's ones, despite being like Mobile Mali - ES bog basic and standard stuff.
<graphitemaster> I think they just have better test coverage.
<graphitemaster> How far along is Zink these days, does it do GL 3.x or ES 3.x yet?
<jenatali> Huh, looks like Intel does have a WDDM driver now open-source, specifically their compute runtime targeting WDDM-on-Linux (i.e. WSL)
<graphitemaster> Oh wow Zink looks quite a bit further along since I last checked up on it, whoever is working on this should slow down and take a break lol, that's wild.
<jenatali> zmike: ^^
<zmike> HOW CAN I SLOW DOWN WHEN YOU KEEP PINGING ME AHALFA32H23A32UF
* zmike falls over frothing at the mouth and takes a nap
<HdkR> zmike: No breaks, back to the Vulkan mines
<jenatali> That exclamation really looks like an image format
<kisak> gotta love those 87 bit encodings
<graphitemaster> Surely pinging someone is how you slow them down, haven't you ever been in a meeting?
<HdkR> "Is this x87?"
<Sachiel> I know for a fact that zmike was playing path of exile during the weekend, so it looks to me like he already took enough of a break
<graphitemaster> So where is the precompiled zink libGL.dll for Windows that dlopen's libvulkan and just works (tm), or do I have to build that myself XD
<bnieuwenhuizen> Sachiel: playtesting? ;)
<HdkR> pfft
<graphitemaster> Damn this looks dope, going to try and see how well this runs.
<graphitemaster> Might ship with Zink for AMD/Intel if it works better.
flto_ has quit []
flto has joined #dri-devel
Lightkey has quit [Ping timeout: 480 seconds]
<jenatali> graphitemaster: You should try GLOn12 and compare. And let me know the results :)
<graphitemaster> jenatali, Then I gotta install Windows 10 on this machine :(
<jenatali> Ah, Win7. Got it
<graphitemaster> Alternatively I just have to plug it into the internet and leave it for 10 minutes so it's not like I have to do any work.
Lightkey has joined #dri-devel
<airlied> iris-apl-egl seems to be hitting the skids a bit more than normal
<graphitemaster> jenatali, How far along is GLOn12?
<jenatali> GL3.3
<graphitemaster> ES 3 then too I suppose, basically the same thing.
<jenatali> Yeah, 3.0 I think
<graphitemaster> WebGL 2.0 then
<graphitemaster> Maybe we can kill ANGLE.
<jenatali> I'd be surprised, but who knows
<airlied> jenatali: tess or fp64 stopping gl4?
<jenatali> airlied: Competing projects usurping time from developers who could do more ;)
<jenatali> I think you start hitting some serious impedance mismatches around 4.2
<airlied> just hire zmike in his spare time :P
<HdkR> They have spare time?!
<zmike> yes, we have spare time
<zmike> well, three of us do tonight, the other five are still working
mbrost_ has quit []
mbrost has quit []
mbrost has joined #dri-devel
<graphitemaster> I've thought of getting into driver development but I honestly dunno where to start. Spent years doing graphics development for games and engines, not sure how to pivot from that. Plus all those years of blaming the driver and seeing how many other devs blame the driver make me think it's hostile.
<zmike> well we blame the drivers too, so you'll fit right in
<alyssa> ^^
<jenatali> Yep
<icecream95> The more bugs a driver has, the easier it is to get into development
<sarnold> :)
<icecream95> alyssa: I've noticed you don't write as many bugs as you used to, you'd better get started on fixing that
<graphitemaster> Rejecting PRs because they don't contain bugs is the new meta.
<kisak> icecream95: sounds like the bugs are still there, they're just getting more evil and hidden
<imirkin> graphitemaster: everyone's code is perfect, and the bugs are always in someone else's code :)
<alyssa> icecream95: i've been hazed into writing tests for my code.
<imirkin> app developers blame drivers ("but it works on nvidia")
<imirkin> drivers blame apps for not conforming with specs
<graphitemaster> I also complain the specs are buggy
<imirkin> ;)
<jenatali> Runtime devs get the joy of blaming both :D
<graphitemaster> Or have made stupid decisions.
<imirkin> jenatali: and everyone blames you, so ... everyone's happy :)
<alyssa> 🎵 National Brotherhood Day - Tom Lehrer 🎵
<jenatali> Pretty much
<alyssa> *Week
<graphitemaster> I actually had a bit of a mental breakdown when I found out that GL ES requires the draw buffers be in iota order, that is bufs[n] must be GL_COLOR_ATTACHMENTn.
<graphitemaster> An absolutely dumb restriction in my opinion, and working around it really crapped all over our engine code.
<graphitemaster> Worked fine on NV though XD
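For reference, a minimal sketch of the restriction being described, assuming an FBO with textures already on attachments 0 and 1 (the function name is invented):

```c
#include <GLES3/gl3.h>

static void select_attachment1_only(void)
{
    /* Fine on desktop GL (and NV): attachment 1 routed through slot 0... */
    static const GLenum reordered[1] = { GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(1, reordered);   /* ...but GL_INVALID_OPERATION on ES */

    /* The ES-conformant spelling: bufs[n] must be GL_NONE or
     * GL_COLOR_ATTACHMENTn, so slot 0 has to be disabled instead. */
    static const GLenum iota_order[2] = { GL_NONE, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, iota_order);
}
```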
<imirkin> yeah, ES has a *ton* of funny restrictions
<imirkin> mesa doesn't enforce all of them
<graphitemaster> It kind of sucks because one of the fast routes on NV is to ping-pong the read/draw buf of an FBO rather than actual FBOs, because FBO binds require full validation and changing draw buffers is actually (provided the textures were already attached to the FBO previously) a free operation. So if you have n postfx passes in an engine, a good way to make that fast is to ping-pong the draw bufs, and that saves a solid 2msps in our case
<graphitemaster> compared to binding FBOs.
<graphitemaster> But ES is like "nope, fuck you"
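A hedged sketch of that ping-pong: both textures stay attached to the one FBO and only the draw-buffer selection flips per pass; run_postfx_pass and the setup are hypothetical.

```c
/* Desktop GL; assumes a loader provides glDrawBuffers etc., and that
 * texA/texB are already attached to fbo as attachments 0 and 1. */
extern void run_postfx_pass(int pass);   /* hypothetical fullscreen draw */

static void postfx_chain(GLuint fbo, GLuint texA, GLuint texB, int num_passes)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);   /* one bind, one full validation */
    for (int pass = 0; pass < num_passes; pass++) {
        /* ES would reject this whenever dst != GL_COLOR_ATTACHMENT0 (see above) */
        GLenum dst = (pass & 1) ? GL_COLOR_ATTACHMENT0 : GL_COLOR_ATTACHMENT1;
        glDrawBuffers(1, &dst);               /* cheap on NV: no FBO revalidation */
        glBindTexture(GL_TEXTURE_2D, (pass & 1) ? texB : texA); /* last pass's output */
        run_postfx_pass(pass);
    }
}
```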
<imirkin> nah, when you change out buffers in an fbo, it still revalidates
<imirkin> (it = mesa)
<graphitemaster> Yeah I'm speaking NV proprietary here. I also happen to know it's a smaller hardware state change (NVN on the Switch natively supports this operation).
* imirkin is nouveau maintainer
* jenatali still doesn't understand GL at all
* imirkin neither
<imirkin> i still don't understand depth
<imirkin> maybe some day.
<imirkin> at least i understand textures.
<graphitemaster> There's literally a register for glDrawBuffers on NV
<graphitemaster> RT_CONTROL, 0x121c
<alyssa> jenatali: My understanding caps at ES3.0
<graphitemaster> Yet I think nouveau and mesa still do glDrawBuffer changes with RT, 0x0800
<graphitemaster> Which is a _massive_ FBO state change
ZeZu has joined #dri-devel
<jenatali> alyssa: I get the hardware functionality, but the API and terminology continually mess with my brain
<jenatali> Mainly just not used to it
<alyssa> jenatali: I feel that. In an average week, I write test programs in GLSL, OpenCL C, and Metal. It hurts.
<alyssa> ("Is it vec4(x) or (vec4)(x)? No wait it's (float4)(x), right. Or was it float4(x)?")
<milek7> I'm confused with WebGPU. why did they have to make yet another api?
<graphitemaster> WebGL isn't enough and Vulkan is too unsafe to sandbox.
<graphitemaster> And web devs don't want to have to worry about synchronization :P
<graphitemaster> It's okay, we'll soon have another API.
<graphitemaster> The idea is by making more and more APIs, we'll always have work to do. It's basically job security. We could've just stopped at GL and kept adding extensions but then only like five people would have work.
<imirkin> graphitemaster: yes. it's easier that way :) detecting that condition would be a lot of work
<imirkin> esp to pipe it through the generic framework
<imirkin> i don't think most hw can swizzle RTs like that
<graphitemaster> imirkin, One of those things that would be a nice optimization to have, considering it's something NV suggested in their AZDO talks at GDC and suggests as well for Switch development in the NVN documentation.
<graphitemaster> So many small low-hanging-fruit optimizations here and there.
<graphitemaster> Another big reason I've thought of getting into driver development.
<graphitemaster> Want to keep pushing the driver :P
mbrost has quit [Remote host closed the connection]
mbrost has joined #dri-devel
sdutt_ has joined #dri-devel
<imirkin> graphitemaster: maybe getting clocks to more than 5-10% of their capabilities would be a bigger boost?
<imirkin> graphitemaster: anyways, happy to provide some direction over in #nouveau if you're interested
sdutt has quit [Ping timeout: 480 seconds]
boistordu_old has joined #dri-devel
<graphitemaster> What's preventing re-clocking btw?
<imirkin> signed firmware
<graphitemaster> Can't use the proprietary firmware with nouveau?
<graphitemaster> Ideally you'd want to not have to use it, but it seems like some proprietary crap is better than only 5-10% of the performance
<imirkin> graphitemaster: there are some problems with that
<imirkin> long story short, no.
boistordu has quit [Ping timeout: 480 seconds]
<graphitemaster> I'm still failing to understand what makes it complicated to use the provided firmware
<graphitemaster> The provided firmware would already be signed.
<graphitemaster> So using it with the card wouldn't be much of a problem.
<imirkin> the firmware doesn't do reclocking :)
<graphitemaster> So it's the interfacing of the firmware that is the problem?
<graphitemaster> And presumably because the firmware is signed, it's hard to reverse engineer / decompile it
<graphitemaster> Since it's like encrypted or something
<imirkin> just signed
<imirkin> not encrypted
<imirkin> the problem is that you have to pass the proper instructions to the firmware
<airlied> also there are a bunch of firmwares and they have a very unique initialisation order and pattern
<imirkin> in order to be actually successful in changing memory clocks
<imirkin> which is going to be dependent on a ton of stuff in the vbios
<imirkin> one technique we've used to sort this out is to fuzz the vbios and see what the blob does with all the bits
<imirkin> but now it verifies the signature
<imirkin> so ... fail.
<imirkin> getting the firmware up and running is also no small feat, but that's been taken care of
<graphitemaster> But presumably you could record those initialization patterns with external tools, and just try a bunch of the popular IHV cards and replay the correct stuff on the correct cards
<imirkin> (in the appropriate secure mode)
<imirkin> graphitemaster: ... for a particular board, sure
<imirkin> but v1.1 of that board, which uses a diff memory chip, will have a diff sequence
<graphitemaster> So then you just gotta try every single board :P
<imirkin> there are thousands of boards, with multiple revisions of each one
<imirkin> anyways, it's not an approach we've investigated seriously
<graphitemaster> You could create a system for users to do that. Have them install and use the proprietary drivers, do an initialisation sequence dump, give 'em a way to load it into nouveau all fancy (being sure to immediately invalidate it if any hardware markers change so you don't damage hot swapped devices) and have it upload somewhere too so people can reuse it if they happen to have the same exact board.
<imirkin> there was also a time when extracting the firmware was surprisingly tricky
<imirkin> i dunno if that's still the case or not
<graphitemaster> I dunno how many NV Linux users there are but I can't imagine it would take long to collect enough coverage here.
<imirkin> (it was loaded by the GPU via dma, so it didn't show up in the traces. not sure they still do that.)
<graphitemaster> Really rude of NV not to just provide documentation for it though.
<imirkin> the things you mention make sense in principle, but it's a lot of effort
<imirkin> and i think we're all just tired of it.
<imirkin> hard enough to keep nouveau going with all the "refactors" going on breaking stuff all over the place.
<graphitemaster> Do you suspect NV is intentionally sabotaging nouveau's efforts?
<imirkin> seems extremely unlikely they'd go to the effort.
<imirkin> the signing stuff was at least billed as addressing some of their supply line issues
<imirkin> with fake boards/etc
<HdkR> Also VPS security concerns
<graphitemaster> I always suspected it had to do with enforcing the sale of Quadros for virtualization and what not.
<imirkin> yeah, that too
<graphitemaster> Really pisses me off I can't GPU pass-through an RTX officially.
<imirkin> i strongly doubt nouveau was a strong consideration
<imirkin> clock-for-clock, we're still half the blob perf
<graphitemaster> Seems like if you don't got a target on your back, some sort of framework to let users dump and configure their cards isn't out of the question.
<graphitemaster> I of course speak with extreme ignorance on what that would involve technically.
<imirkin> we used to get people to collect mmiotraces back in the day
<imirkin> for the initial reclocking efforts, etc. until someone (ben?) came up with the vbios fuzzing idea
<graphitemaster> Aren't there concerns that fuzzing could entail bricking devices though
<imirkin> concerns? probably. hasn't happened in the history of nouveau though
<graphitemaster> You haven't bricked a device yet, wow
<imirkin> not through software
<graphitemaster> Fuck. I've bricked an NV GPU through OpenGL before.
<imirkin> good job? :)
<graphitemaster> Back in the day XD
<imirkin> i guess nouveau just doesn't drive them as hard
<imirkin> they tend to have overheat protection/etc
<imirkin> they'll shut down
<graphitemaster> Guess you haven't heard of the New World fiasco recently then XD
<imirkin> i did see something about it
<imirkin> but that just sounds like someone skimped on power regulators
<imirkin> like i said, nouveau doesn't drive the boards that hard :)
<graphitemaster> That really stinks :(
<graphitemaster> Last time I used nouveau is when I tried a musl-based distribution because the NV drivers expect glibc crap, wasn't that happy with the performance. That was I think the first Titan (Kepler, basically a beefier 780 Ti, GK110 iirc)
<imirkin> we have reclocking on kepler
<imirkin> dunno if we did when you tried it
<HdkR> graphitemaster: GPU passthrough is official now. Error 43 is dead
cphealy has joined #dri-devel
<HdkR> SR-IOV/vGPU is still Tesla only
<HdkR> Tesla/GRID
<graphitemaster> Maybe when NV releases new GPUs and sunsets these ones they'll release firmware and/or docs
<graphitemaster> Or hell, a driver would be nice.
<graphitemaster> Preferably one they don't first run through gcc -E before open sourcing.
gpoo has quit [Ping timeout: 480 seconds]
cwfitzgerald[m] has quit []
cwfitzgerald[m] has joined #dri-devel
gpoo has joined #dri-devel
<alyssa> imirkin: the combinatoric explosion sounds awful, my condolences 😢
<alyssa> i guess one nice part of apple gpu right now is that there is... 1 SoC. 2 possible implementations exist.
<alyssa> (and they're almost certainly bit-for-bit compatible.)
gpoo has quit [Ping timeout: 480 seconds]
<alyssa> won't be true a year from now, and wouldn't be true if we backported to iphones, but yeah
<airlied> it goes off once you give OEM options on memory configuration and power management :-P
<airlied> gotta have that market differentiation
Company has quit [Quit: Leaving]
<alyssa> @kernel people -- I assume GenXML / envytools / etc in kernel space is a big no-no?
<alyssa> OTOH given what AMD's register definitions look like, anything we do will seem mild right?
<imirkin> alyssa: you don't want to introduce new build requirements into the kernel
<imirkin> i dunno if python has made it in
<airlied> yeah adding a new build req usually ends in pain
<Sachiel> just write an xml parser and code generator in the makefile
<jekstrand> :P
<jekstrand> If someone wrote a GenXML parser/generator in C, it'd probably be usable in the kernel.
<jekstrand> Not sure if expat is an acceptable dep, though.
<jekstrand> And no one wants to roll their own XML parser.
<airlied> just upcall to userspace :-P
vivek has joined #dri-devel
<airlied> though it sounds like the IOKit interface might be bad enough to try and get into the kernel
<jekstrand> Why do you need IOKit?
<jekstrand> As far as GenXML goes, I've wondered about using that in i915 for registers. I'm not actually convinced doing so would be a good idea but I've thought about it.
<airlied> jekstrand: apple put the display controller and possibly the gpu controller on a second cpu
<airlied> and split the driver in the middle along some iokit interfaces between the two
<jekstrand> I suppose if someone wanted to, they could have both the XML and headers in tree along with the script and just re-run the script every time someone checks in new XML.
<jekstrand> airlied: Lovely.
<airlied> so linux would have to provide the correctly packed structs to even talk to display controller
<jekstrand> airlied: But how is that different from any other firmware?
<airlied> jekstrand: the interface seems to be json
<jekstrand> And do we know how to compile for that CPU or is it super-guarded with lots of crypto signing required?
<jekstrand> airlied: Oh, that's just awesome!
<ccr> :P
<jekstrand> Display as a web service....
<airlied> jekstrand: no idea if it would be possible to build fw for the secondary cpu, I think it's just a lower form of arm device
<airlied> but not sure how hackable it is
<jekstrand> Talking JSON to it certainly seems the easiest route in the short term
<jekstrand> But oh, my....
<ccr> surely the next step for Apple will be to go the NVidia route with all that stuff and lock it down for non-macOS
<jekstrand> I guess this is what happens when you pull a bunch of people off the WebKit team to design GPU firmware....
<jekstrand> ccr: Nah, they'll standardize it as part of WebGPU. :P
<ccr> hmm. browsers all the way down ..
<airlied> like the nvidia display controller is already well hidden behind a protocol, but I don't think it requires json yet
<airlied> though not 100% sure iokit is json or just json-like
<jekstrand> You know you want to write kernel modules in javascript. After all, it has more security engineers looking at it than any other programming language on earth. That means there'll be zero security bugs, right?
<Sachiel> don't know about zero, but maybe NaN
<airlied> " marcan: there's at least two *different* binary-json-like serializations involved"
<ccr> hooray
<jekstrand> Oh, even better!
<jekstrand> I told you they were trying to make it a standard. Step 1 is two competing differently buggy implementations. They're half-way there!
<marcan> it's not json
<marcan> there's actually 3, possibly 4 different things going on
<ccr> serializations intensify
<jekstrand> Please tell me there are strings involved somewhere. :)
<marcan> 1. the DCP RPC interface is based on a custom marshaling of C++ method calls, with nesting and at least two asynchronous contexts (i.e. two threads of execution)
<ccr> yes, parsing from strings! gotta have parsing from strings.
<marcan> 2. some objects are serialized using one fairly simple serialization of e.g. arrays and maps as part of this
<marcan> 3. then there's a *different* serialization format used for big blobs, even though it does the same thing (that's the "json" part, it's not json, just think of the data model as json-adjacent, but the format is binary)
<marcan> 4. this is for the main DPC interface; there are other endpoints that seem to use a completely different RPC system called "EPIC" that I haven't looked at yet, but I *hope* it's simpler
<marcan> and this is all for the display controller, no idea what the GPU interface looks like yet, that comes next (though I *think* it'll be simpler)
<jekstrand> Someone must have gotten promoted for designing this...
<marcan> also 1. is an unstable ABI and changes with every new macOS version / firmware blob pairing
<marcan> we're going to have to support a subset of those version personalities in Linux
<airlied> jekstrand: it definitely seems like staff engineer type design :-P
<ccr> sounds more like evolution than .. ahem .. intelligent design
<marcan> thankfully we get to decouple that from whatever macOS is dual-booted on the same system
<marcan> so we can afford to only support certain "golden" versions without inconveniencing users
<airlied> I'm going to guess DP mst or link training drove them to it
<marcan> or power consumption
mbrost has quit [Read error: Connection reset by peer]
<marcan> on iPhones this thing apparently does stuff like OLED burn-in reduction
<marcan> and then there's the Apple Watch stuff which *has* to have some magic to keep the main CPU off even as it's displaying ticking clocks or whatever
<marcan> otherwise the battery would die in hours
<marcan> but here we are, and it's in laptops now
<marcan> so I guess my main questions right now are 1) can I stick python scripts in kbuild, or do I need to submit generator output (there's no way I'm open coding all this in C, given the versioning/etc that'd be insane and error-prone)
<marcan> and 2) about that json-ish structure mess... does it make sense to implement such a heap-based data structure in the kernel, or instead try to build some kind of parser that maps it to fixed structs at parse time (ignoring unknown keys/etc)
<jekstrand> If you only care about a subset of it, maybe you can choose a few fixed layouts instead of a generic heap?
<jekstrand> Or you can take an approach similar to what we've done in some of our compilers in Mesa and make a "builder".
<marcan> yeah, these things are mostly used for properties (which usually have values that are either ints, strings, bools, or small key/value dicts), so we could try to keep a static structure of those and just map, validating that the types are what we expect
<marcan> but then there's a massive set of dicts ("DCPAV properties") which encode things like all the info about a particular display output, including data from EDID
<jekstrand> agx_dsp_begin("MyFoo"); agx_dsp_int_field("x", 72); agx_dsp_int_field("y", 128); agx_dsp_end();
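For what it's worth, a sketch of the builder shape being gestured at; every name here is hypothetical, lifted from the one-liner above, and the wire layout is invented:

```c
#include <stdint.h>
#include <string.h>

/* Toy serializer: appends NUL-terminated keys and raw values to a flat
 * buffer. A real one would emit whatever framing the firmware expects. */
struct agx_dsp_builder { uint8_t buf[4096]; size_t len; };

static void agx_dsp_emit(struct agx_dsp_builder *b, const void *p, size_t n)
{
    if (b->len + n <= sizeof(b->buf)) {
        memcpy(b->buf + b->len, p, n);
        b->len += n;
    }
}

static void agx_dsp_begin(struct agx_dsp_builder *b, const char *name)
{
    b->len = 0;
    agx_dsp_emit(b, name, strlen(name) + 1);
}

static void agx_dsp_int_field(struct agx_dsp_builder *b, const char *key, int64_t v)
{
    agx_dsp_emit(b, key, strlen(key) + 1);
    agx_dsp_emit(b, &v, sizeof(v));
}

static void agx_dsp_end(struct agx_dsp_builder *b)
{
    (void)b;   /* a real builder would seal the dict / patch up lengths here */
}
```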
<marcan> and those are way more complex at first glance (haven't done a full decode yet but I know the format)
<marcan> so trying to map those to fixed-layout structs might be trickier
<marcan> jekstrand: you mean for the parser or generator?
<jekstrand> marcan: Generator.
<jekstrand> marcan: You also have to parse, don't you? That's gonna be annoying....
<marcan> yeah, exactly
<marcan> in fact it's mostly parsing
<jekstrand> Oof
<jekstrand> Text parsing in the kernel. Awesome!
<marcan> I'm not even sure if I saw anywhere where we need to generate the json-like thing
<marcan> not text, binary :p
<marcan> again it's not actually json
<jekstrand> Does it matter? :P
<marcan> think of it like msgpack
<jekstrand> It's an untrusted data stream. It's annoying either way.
<marcan> yes
<marcan> though at least you can parse them sanely in one pass, unlike text stuff :p
<jekstrand> Binary just means you don't have to look for "\n"
<jekstrand> Well, sure. And you don't have to worry about comma placement. (-:
<marcan> the serializations are basically two variants on the same TLV concept
<marcan> dicts are sets of <key, value> tuples etc
<marcan> it's very simple
<marcan> but the data model is still "anything json can represent" basically
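Since marcan compares it to msgpack, here is a hedged sketch of one-pass TLV walking; the <tag, length> header layout is an assumption, not the real format:

```c
#include <stdint.h>
#include <stddef.h>

struct tlv { uint32_t tag; uint32_t len; const uint8_t *value; };

static uint32_t get_le32(const uint8_t *p)
{
    return p[0] | p[1] << 8 | p[2] << 16 | (uint32_t)p[3] << 24;
}

/* Returns bytes consumed, or 0 if the (untrusted!) input is truncated. */
static size_t tlv_next(const uint8_t *buf, size_t avail, struct tlv *out)
{
    if (avail < 8)
        return 0;
    out->tag = get_le32(buf);
    out->len = get_le32(buf + 4);
    if (out->len > avail - 8)
        return 0;
    out->value = buf + 8;
    return 8 + (size_t)out->len;
}
```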
<jekstrand> Yeah
<jekstrand> You basically have two potential models:
<jekstrand> 1. The expat stream model where you make a thing that parses and has callbacks to handle different things.
<marcan> yeah
<jekstrand> 2. The DOM model (common with JSON) where you parse it all up-front into a data structure.
<marcan> yup
<jekstrand> They're both gonna suck
<marcan> indeed
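A minimal sketch of option 1 feeding option-2-style results: stream callbacks fill a fixed C struct and silently skip unknown keys (all names hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* The "real" struct the kernel side would consume. */
struct dcp_display_props { uint32_t width, height; bool hdr; };

/* Invoked by the stream parser for each <key, int> pair it decodes. */
static void on_int_prop(void *ctx, const char *key, int64_t v)
{
    struct dcp_display_props *p = ctx;
    if (!strcmp(key, "Width"))
        p->width = (uint32_t)v;
    else if (!strcmp(key, "Height"))
        p->height = (uint32_t)v;
    /* unknown keys: deliberately ignored, as discussed below */
}
```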
<marcan> for the RPCs, I'm describing them in python like this right now, and I have a thing that generates the marshalling structure definitions, which I was then intending to add a C struct/thunk generator on top
<marcan> A439 = Call(uint32_t, "set_parameter_dcp", param=IOMFBParameterName, value=SizedArray(4, "count", ulong), count=uint)
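To make that concrete, the A439 line might lower to C along these lines; the packing and field order are pure guesses, not the real DCP ABI:

```c
#include <stdint.h>

/* Hypothetical generator output for:
 *   A439 = Call(uint32_t, "set_parameter_dcp", param=IOMFBParameterName,
 *               value=SizedArray(4, "count", ulong), count=uint) */
struct dcp_set_parameter_dcp_in {
    uint32_t param;      /* IOMFBParameterName */
    uint64_t value[4];   /* SizedArray(4, "count", ulong) */
    uint32_t count;      /* valid entries in value[] */
} __attribute__((packed));

struct dcp_set_parameter_dcp_out {
    uint32_t ret;        /* the uint32_t return value */
} __attribute__((packed));
```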
<jekstrand> If things were pretty stable, I'd be tempted to say to do the stream model for the parser itself and have the callbacks fill out the actual structs you care about and have "real" C structs the kernel can use be your DOM.
<jekstrand> But the moment they re-arrange anything in their API, that's gonna be a real pain.
<marcan> well, if they add fields we don't care about we can ignore them
<jekstrand> Sure
<marcan> and we can have defaults for fields that are missing
<jekstrand> But if they change a parenting relationship, you're toast.
<marcan> yeah
<marcan> I'm hoping they don't
<jekstrand> How deep do things usually nest?
<marcan> not very for the main properties, but the big blobs are scary. I'll get back to you on those, haven't parsed them yet because they're so big they get broken up over different calls, so I can't even parse them in a call context
<marcan> like they have
<marcan> D122 = Callback(bool_, "setDCPAVPropStart", length=uint)
<marcan> D123 = Callback(bool_, "setDCPAVPropChunk", data=HexDump(SizedBytes(0x1000, "length")), offset=uint, length=uint)
<marcan> D124 = Callback(bool_, "setDCPAVPropEnd", key=string(0x40))
<marcan> and right now my code doesn't do custom callbacks to perform custom parser actions per call, just those data types :-)
<marcan> I'll do that a bit later today so I can finally get a dump of those structs
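The Start/Chunk/End triple suggests a straightforward reassembly loop; a sketch assuming in-order chunks, with the buffer management invented:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct prop_accum { uint8_t *buf; size_t total, filled; };

static bool av_prop_start(struct prop_accum *a, size_t length)
{
    a->buf = malloc(length);   /* kvmalloc() in an actual kernel driver */
    a->total = length;
    a->filled = 0;
    return a->buf != NULL;
}

static bool av_prop_chunk(struct prop_accum *a, const uint8_t *data,
                          size_t offset, size_t length)
{
    /* offsets come from the firmware, i.e. untrusted: reject overflow/overrun */
    if (!a->buf || offset + length < offset || offset + length > a->total)
        return false;
    memcpy(a->buf + offset, data, length);
    a->filled += length;
    return true;
}

static bool av_prop_end(struct prop_accum *a, const char *key)
{
    bool complete = (a->filled == a->total);
    (void)key;   /* here: parse a->buf as the serialized dict named by key */
    free(a->buf);
    a->buf = NULL;
    return complete;
}
```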
<jekstrand> That's fair.
<jekstrand> Well, best of luck to you! And you have my heartfelt condolences. :D
* jekstrand loves it whenever some other IHV makes his hardware look well-designed.
<jekstrand> Even if you do go with a DOM model, building the DOM on top of a stream parser isn't a terrible idea so maybe start with the stream parser?
<marcan> https://mrcn.st/p/LaQHOT50 <- this is what the call stream looks like (the swap requests at the end are broken because I just updated macOS and only added/fixed the easy calls, those structs also changed layout of course)
<jekstrand> All that "StandardType"... Are they seriously using strings to denote the different types?
<marcan> those would be dictionary keys
<marcan> so not the types, just the property/field names
<jekstrand> Look on the upside. At least with all those strings in there, you don't need docs for the register numbers/names. :)
<marcan> I assume by StandardType they mean like NTSC/PAL or something
<marcan> hah, yeah :)
<marcan> though I did have to dump the IPC call prototypes from Apple's driver, that much would've been hell to figure out by trial and error
<marcan> and I don't think the copyright police are going to go after me for dumping C++ method names, at that point the damn C++ interface *is* a statement of fact about the firmware interface
<marcan> for better or worse
<marcan> though I'll change the method names to a saner naming scheme as I make a client for this (it's not like Apple's names are even consistent themselves)
<jekstrand> Sure
<jekstrand> If you can make your own firmware, you can make whatever sensible interface you want.
<marcan> unfortunately, we can't
<marcan> it's locked and loaded by the bootloader
<jekstrand> Oh, lovely.
<marcan> however we *may* be able to just turn it off and drive the hardware directly
<marcan> ... but as bad as this is, I think implementing DP training and bandwidth calculations and all that stuff would be worse
<jekstrand> That'd be nice
<marcan> and I don't even have a good way to trace register accesses from this thing like I do from macOS itself
<jekstrand> RE'd, yeah, quite possibly.
<marcan> yeah
<marcan> there's like 7MB of DCP firmware
<marcan> there is a *lot* of functionality in there
<marcan> and then there's *another* smaller core it spins up, a cortex-m3 apparently
<marcan> no idea what it does
<marcan> because clearly we must go deeper
<jekstrand> Or just a lot of templates and C++. :)
<marcan> well that too :)
<marcan> though they don't use templates much
<marcan> they're actually banned in IOKit I believe... except they used them for the RPC marshaling in violation of their own coding guidelines :p
<jekstrand> Of course. :)
<jekstrand> 7MB you say? That's about the size of a Vulkan driver and we've got a whole compiler in there.
<marcan> yup.
<marcan> -rw-r--r-- 1 marcan marcan 7704576 May 17 19:02 DCP.bin
<marcan> for comparison
<marcan> -rw-r--r-- 1 marcan marcan 2378304 May 17 19:02 GFX.bin
<jekstrand> (RADV is 7.7M, ANV is 9.9M, and lavapipe is 6.8M)
<marcan> and I think GFX embeds 3 different versions of the same firmware, DCP does not
<marcan> meanwhile,
<marcan> -rw-r--r-- 1 marcan marcan 12564288 May 17 19:02 ISP.bin
<marcan> but we all know mobile ISP firmware is *insane* these days
<marcan> so that one does not surprise me
<marcan> OTOH,
<marcan> -rw-r--r-- 1 marcan marcan 380928 May 17 19:02 PMP.bin
<marcan> power management is easy, apparently
<jekstrand> *sigh*
<marcan> -rw-r--r-- 1 marcan marcan 1123552 May 17 19:02 AVE.bin
<marcan> so is video encoding
<jekstrand> Anyway... as delightful as it is to laugh at Apple firmwares, I need to go to bed.
<jekstrand> ttyl
<Kayden> goodnight!
<marcan> night!
<marcan> I need to get lunch and then have an appointment :)
cwfitzgerald has joined #dri-devel
<cwfitzgerald> test, are my messages going through?
<airlied> cwfitzgerald: seem to be
<cwfitzgerald> great, I was using the matrix bridge before and I was sending messages but not a soul was seeing them :(
<graphitemaster> I legit think matrix is broken, none of the [m] users' messages are getting through; had someone else independently confirm that, cwfitzgerald
Guest2344 is now known as alatiera
<graphitemaster> So you ain't the only one. They can see our messages at least.
<cwfitzgerald> glad to know it wasn't just me, super annoying when it just silently drops messages
tzimmermann has joined #dri-devel
<jenatali> I've been using Matrix, and at least from my POV it seems like I've been having conversations with people?
<imirkin> jenatali: it's all just an illusion
<cwfitzgerald> my greatest fear in life :D
<HdkR> Likely a +M issue instead?
<HdkR> `M - client may speak only when registered and identified to NickServ`
<imirkin> what good is an irc client ... if you can't speak
<HdkR> You'll get a log in your...status window? when blocked
<jenatali> Yeah, auto-identify doesn't seem to work on oftc like it did on freenode
<jenatali> But also, oftc doesn't disconnect nearly as often... freenode was pretty much daily, whereas with oftc I've had to re-auth... three times?
<Sumera> No clue if this will go through but it's probably something with NickServ
* jenatali shrugs
<cwfitzgerald> now that you mention it, I got the nickserv message on irccloud but not on matrix
<imirkin> jenatali: at least hexchat supports the register-on-connect nickserv thing
<cwfitzgerald> sumera: yeah it went through
<imirkin> jenatali: oftc also supports ssl client certs for direct auth.
<jenatali> imirkin: Yeah, but does Matrix? I really like having a single sign-on across multiple devices...
<imirkin> i don't even know what matrix is :)
<imirkin> i know lots of people use it
<imirkin> and it generates really annoying irc messages sometimes
<imirkin> but i don't precisely know what it is
<Sumera> Yayy. So I figured out yesterday I'd been talking to the wrong NickServ all this time? Like it was working a couple of weeks ago and then just stopped
<cwfitzgerald> idk how to even start a conversation with nickserv on matrix
<jenatali> It's a protocol, and it has rooms and such, and also supports IRC bridges? I don't entirely know either, but I have an account and I can log into it on multiple devices and chat history syncs, and that's all I really need
<HdkR> /msg nickserv <3 u
<graphitemaster> No one making a Matrix joke about being in an alternative universe where your messages don't get through, or how you took the wrong pill so you don't get to send messages here, smh
<graphitemaster> Trinity is to blame either way
<imirkin> graphitemaster: i tried...
<imirkin> jenatali: ok, so it's a service where you give it all your credit card numbers, and it tells you if one of them's lucky? (aka you put your credentials into it?)
<jenatali> imirkin: Yeah exactly
<imirkin> jenatali: how do they make money?
<jenatali> It's apparently a non-profit
<imirkin> ah ok
<imirkin> so people use it as a free bnc or whatever?
<cwfitzgerald[m]> test from matrix 123
<graphitemaster> Matrix 123, how many instances you running?
<cwfitzgerald[m]> thank goodness, it is working
<jenatali> imirkin: Had to look up bnc, but I guess so?
<cwfitzgerald[m]> Sumera: thank you, that's exactly what I needed
<imirkin> jenatali: i didn't mean the 50ohm connectors :)
<imirkin> wow. the stuff that comes up when you search for "bnc"... not expected.
<graphitemaster> Just say bouncer
<graphitemaster> bouncer is not like kubernetes, it doesn't need to be shortened to like b5r
<graphitemaster> or bnc
<imirkin> huh. apparently there are also 75ohm bnc connectors.
jewins has quit [Ping timeout: 480 seconds]
<imirkin> graphitemaster: i dunno. always called it 'bnc', didn't think about what it stood for
<jenatali> Searching 'irc bnc' got me the right thing
<imirkin> yeah
<Sumera> cwfitzgerald: np
Duke`` has joined #dri-devel
<graphitemaster> IRC is old, which means its implementation details leak through to the user interface.
<graphitemaster> It's also very Linux, which means being wrong on the internet about it is the best way to get help.
<cwfitzgerald[m]> XD
<cwfitzgerald[m]> what I fruitlessly tried to say 3 hours ago at this point was:
<cwfitzgerald[m]> milek7: (webgpu is a thing) because there's no higher level command list api that isn't metal, and the web has some different concerns than desktop, low-level apis (disclaimer, I work on wgpu (rust's/Mozilla's implementation))
<cwfitzgerald[m]> Thankfully wgpu is turning out really well
<graphitemaster> I've had some success with the native header with wgpu too.
<graphitemaster> I got it rendering a triangle, which was about as far as I got.
<graphitemaster> Because the shader situation bit me again :(
<cwfitzgerald[m]> yeah both us and dawn (chrome's impl) have a common header you can use
<cwfitzgerald[m]> hopefully naga is mature enough now that you should be able to do a good chunk of things, even if you're using spirv
<graphitemaster> Ideally I just want to feed unaltered GLSL into it and have it work (tm)
<cwfitzgerald[m]> in all likelihood you'd only be able to pass in wgsl or spirv, though maybe an option could be had to enable GLSL, haven't really thought about that in the C interface
<graphitemaster> I think I can repurpose my glsl-parser to output wgsl pretty easily.
mlankhorst has joined #dri-devel
itoral has joined #dri-devel
lemonzest has joined #dri-devel
yoslin_ has joined #dri-devel
yoslin has quit [Ping timeout: 480 seconds]
mattrope has quit [Ping timeout: 480 seconds]
Duke`` has quit [Ping timeout: 480 seconds]
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
pnowack has joined #dri-devel
frieder has joined #dri-devel
danvet has joined #dri-devel
sdutt__ has joined #dri-devel
Lucretia has joined #dri-devel
sdutt_ has quit [Ping timeout: 480 seconds]
<pq> karolherbst, gdb: set print pretty on; I have that in my ~/.gdbinit
<pq> jekstrand, ^
jkrzyszt has joined #dri-devel
ppascher has quit [Ping timeout: 480 seconds]
sdutt__ has quit [Remote host closed the connection]
flto has quit [Ping timeout: 480 seconds]
ppascher has joined #dri-devel
flto has joined #dri-devel
rasterman has joined #dri-devel
adjtm has quit [Ping timeout: 480 seconds]
Hi-Angel has joined #dri-devel
itoral has quit [Remote host closed the connection]
itoral has joined #dri-devel
pcercuei has joined #dri-devel
go4godvin is now known as frytaped
frieder_ has joined #dri-devel
frieder has quit [Read error: Connection reset by peer]
frytaped is now known as go4godvin
<icecream95> Is Gitlab loading very slowly for anyone else, or is it just broken for me?
<ccr> seems very slow or not working for me as well
<pq> it was very slow a moment ago for me, then it was ok again, and now it seems to take indefinitely once more
vivek has quit [Ping timeout: 480 seconds]
<MrCooper> icecream95 ccr pq: this sometimes happens while a GitLab backup is in progress
<pq> so it's not just once in a weekend?
<pq> running backups like that
<emersion> just got a 502
<MrCooper> backups run every day AFAIK
<MrCooper> FWIW, #freedesktop is better for this kind of discussion
tursulin has joined #dri-devel
<pq> this is the first time I'm seeing the slowdown myself, usually they run when I'm off-hours
<pq> huh, I mistook this for #freedesktop, but no need to mention there, bentiss already knows :-)
flto has quit [Ping timeout: 480 seconds]
<MrCooper> AFAIK the backup normally runs earlier, but sometimes it fails, and the retry runs into the European morning
flto has joined #dri-devel
tchar has joined #dri-devel
<danvet> emersion, it works
<danvet> e.g. see `Frame Buffer Abstraction`_) at the beginning of drm-kms.rst
<emersion> it works if the destination is on the same "page"
<danvet> either you typoed or your sphinx is funny
<danvet> hm
<emersion> this is not the case here
<danvet> disappointing
<danvet> then you need to make an explicit mark
<emersion> why can't we have simple things…
<emersion> ok, will try
<javierm> tzimmermann: hi, your answer to Randy's patch made me realize that I also forgot to add a Fixes tag in my fix, posted a v2 now: https://lore.kernel.org/patchwork/patch/1468389/
<javierm> tzimmermann: and sorry for missing that before, I wrongly assumed that all arches would declare a struct screen_info
<danvet> sravn, thx for volunteering for the loongson driver
<danvet> airlied, Subject: Re: linux-next: build failure due to the drm tree <- I guess we should apply the fix to drm-next?
<danvet> also why does sfr build-test s390
<danvet> javierm, oh we're talking about the same thing
<danvet> javierm, what's the patch I should add to drm-next?
<javierm> danvet: yes, there are two fixes that need to be applied: https://lore.kernel.org/patchwork/patch/1468389/ (mentioned above) and https://lkml.org/lkml/2021/7/27/36
<javierm> danvet: sorry about those... it's hard to build test with all possible configs
<danvet> airlied, I'm applying them both
gpoo has joined #dri-devel
tzimmermann_ has joined #dri-devel
tzimmermann has quit [Ping timeout: 480 seconds]
tzimmermann__ has joined #dri-devel
ppascher has quit [Quit: Gateway shutdown]
tzimmermann_ has quit [Ping timeout: 480 seconds]
ppascher has joined #dri-devel
<danvet> tzimmermann__, mlankhorst mripard: I think would be good to backmerge drm-next into drm-misc-next
<danvet> both for the sysfb compile fixes, but also to get the nouveau fix from -rc3
adjtm has joined #dri-devel
<mlankhorst> yeah true, will do so
tzimmermann__ has quit []
tzimmermann has joined #dri-devel
<tzimmermann> danvet, do you also need -rc3 in drm-misc-next? IIRC you mentioned something like that
<mlankhorst> Needs to bump to v5.14 anyway
<tzimmermann> drm-misc-fixes ^
<danvet> tzimmermann, yeah if you can't fast-forward I think a backmerge would be good
<danvet> tzimmermann, drm-fixes is already at -rc3
Kayden has quit [Remote host closed the connection]
Kayden has joined #dri-devel
<danvet> mlankhorst, I just pushed another patch to drm-misc-next btw
<danvet> in case you're prepping the backmerge right now
tzimmermann_ has joined #dri-devel
<mlankhorst> Yeah, can test recompiling again
flacks has quit [Quit: Quitter]
tzimmermann has quit [Ping timeout: 480 seconds]
flacks has joined #dri-devel
<danvet> bbrezillon, ping for bikesheds/review/testing on [PATCH v4 00/18] drm/sched dependency tracking and dma-resv fixes ?
Anorelsan has joined #dri-devel
<bbrezillon> danvet: as I said yesterday, you can add my R-b on patches touching the panfrost driver, but you already have Steven's R-b, so I'm not sure that's really useful
<danvet> bbrezillon, oh missed that
<danvet> bbrezillon, well specifically looking for review/testing/bikesheds on the common parts too
<danvet> atm the only thing I got is some detailed discussions about barriers (which imo were just wrong suggestions)
<danvet> and some naming bikesheds, which I'm happy to rename if there's consensus
<danvet> but not really anything else
<danvet> bbrezillon, as-is the series is going nowhere, until there's some review/testing from panfrost/v3d/etnaviv folks on it (as the 3 current users I've converted)
<danvet> melissawen, ^^ maybe needs more review if you want to build on top of it
<bbrezillon> danvet: testing is a bit complicated right now (only have a board on which updating the kernel is not super convenient). I reviewed the common bits and they look good to me, but as you've probably noticed, I'm also not super familiar with the dma-resv API/rules, so I'd rather let others (lynxeye?) comment on that part
<danvet> bbrezillon, there's also the scheduler common parts which is (well, should be at least) just plain refactoring
<bbrezillon> danvet: sure, you can stick my R-b on patches 1, 3, 4 and 5
<danvet> bbrezillon, can you pls reply on-list with that?
<danvet> bbrezillon, oh and 2 is actually not really relevant on x86, but much more on arm
<danvet> because x86 is TSO, so the only barrier it ever needs is the full smp_mb();
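As a concrete illustration of why that patch matters on arm but not x86, the classic publish/consume pattern (the barrier macros are the real kernel primitives; the variables and functions are hypothetical):

```c
/* Kernel-style fragment, not standalone userspace code. */
static int shared_data;
static int data_ready;

static void publish(void)
{
    WRITE_ONCE(shared_data, 42);
    smp_wmb();                  /* order data store before flag store; no-op on x86 (TSO) */
    WRITE_ONCE(data_ready, 1);
}

static int consume(void)
{
    if (!READ_ONCE(data_ready))
        return -1;              /* not published yet */
    smp_rmb();                  /* pairs with the smp_wmb() in publish() */
    return READ_ONCE(shared_data);
}
```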
<bbrezillon> danvet: right, but I fear you'll need someone with a deeper understanding of drm_sched for that one :-/
camus has joined #dri-devel
camus1 has joined #dri-devel
Anorelsan has quit [Quit: Leaving]
camus has quit [Ping timeout: 480 seconds]
Company has joined #dri-devel
<danvet> bbrezillon, yeah barriers are extremely tricky
<danvet> bbrezillon, thx for the r-b
tzimmermann_ has quit []
tzimmermann has joined #dri-devel
<tzimmermann> danvet, working on it
itoral has quit []
Peste_Bubonica has joined #dri-devel
camus1 has quit [Ping timeout: 480 seconds]
<melissawen> danvet, as I tested it well for v3d, you can include my ack on the common parts too. also, I did not have a better idea for those names
<melissawen> btw, v3d parts need to be rebased now
camus has joined #dri-devel
<danvet> melissawen, yeah I also need to rebase when the msm patches from robclark land
<danvet> melissawen, can you drop that on dri-devel too pls so I don't forget?
<melissawen> sure
<danvet> thx
<danvet> robclark, maybe good if you drop this into drm-next earlier than later
<alyssa> jekstrand: Meanwhile, I've been threatening to write a DRM driver bitbanging private[1] display registers to skip the firmware :-p
<alyssa> (threatening marcan I mean)
<alyssa> [1] They're CPU accessible but not mapped in the Apple device tree, and if you write them when the DCP is on, it'll crash.
sdutt has joined #dri-devel
sdutt has quit []
sdutt has joined #dri-devel
camus has quit [Remote host closed the connection]
lemonzest has quit [Quit: Quitting]
camus has joined #dri-devel
xexaxo_ has quit [Read error: No route to host]
xexaxo_ has joined #dri-devel
<emersion> danvet: the custom mark doesn't work either it seems
<emersion> `Modeset Base Object Abstraction <kms_base_object_abstraction>`_
<emersion> ^ the link
<emersion> .. _kms_base_object_abstraction_:
<emersion> ^ right before the heading
<emersion> hm, maybe there's an extra _ in the mark def…
* emersion waits another 30min for the docs to rebuild
lemonzest has joined #dri-devel
<emersion> nope, doesn't help
camus has quit [Remote host closed the connection]
<tzimmermann> danvet, backmerged into drm-misc-fixes
camus has joined #dri-devel
camus has quit [Ping timeout: 480 seconds]
iive has joined #dri-devel
Company has quit [Read error: Connection reset by peer]
<jekstrand> alyssa: Sounds like a plan. :D
<robclark> danvet: msm sched conversion? I suppose I could send an early pull req with that (and a couple other patchsets it is on top of).. need to send a v3 this morning to fix an issue I found in testing..
Company has joined #dri-devel
mbrost has joined #dri-devel
mbrost has quit [Remote host closed the connection]
mbrost has joined #dri-devel
vivijim has joined #dri-devel
<zmike> pepp: any other things for pbobench or should I marge?
mattrope has joined #dri-devel
<pepp> zmike: nope, I guess you can merge it
<zmike> cool, hope we can get some good pbo improvements with it!
Hi-Angel has quit [Quit: Konversation terminated!]
Hi-Angel has joined #dri-devel
<pepp> zmike: yup. I've fixed a few things in my branch and added numbers from pbobench to the commit msg: https://gitlab.freedesktop.org/pepp/mesa/-/commit/8c4edcf051d4b
<zmike> somehow really hard to parse
* zmike squints
<jekstrand> zmike: The Khronos issue around input attachments arrays has been resolved. Lavapipe is not going to like the result:
<jekstrand> An code:OpTypeImage with a "`Dim`" operand of code:SubpassData must:
<jekstrand> have an "`Arrayed`" operand of 0 (non-arrayed) and a "`Sampled`" operand
<jekstrand> of 2 (storage image)
<zmike> huh
<zmike> I'm out of context on all that now, so I guess I'll have to see if/how it affects things
<zmike> I don't recall there ever being an issue with input attachments though?
<zmike> jekstrand: that wasn't for input attachments specifically though, that's a nir problem
<zmike> it just happened that there was a case of it there
<jekstrand> zmike: Yes, but with the Vulkan spec update, the answer is that lavapipe needs to be able to deal with is_array mismatches, at least for that case.
<jekstrand> And I suspect for other cases as well. D3D is pretty loose about arrayness and we've considered loosening things in Vulkan.
<zmike> probably, but I don't think that's a lavapipe issue
<zmike> that's llvmpipe
<jekstrand> Uh, what? How are they different in this regard?
<zmike> because the problem is in llvmpipe's handling of is_array vs variable type
<zmike> lavapipe is unrelated
<jekstrand> Ok....
<zmike> I don't really remember the exact details at this point
<zmike> it's something I've been punting because it's a pretty minor issue and is in gallivm
<jekstrand> Ok. As long as it's GL-only.
<zmike> no, it's a general gallivm issue, so it'd affect anything that goes through llvmpipe
<zmike> though it doesn't affect anything in cts afaik
<jekstrand> Does lavapipe use gallivm? Sorry. I really don't know.
<zmike> yeah gallivm is the llvmpipe compiler
<zmike> so shaders go lavapipe -> llvmpipe -> gallivm -> llvm
camus has joined #dri-devel
<jekstrand> So it is a problem with Vulkan?
<zmike> tough to say? there's input attachment cts cases, right?
<zmike> and other array image cases
<jekstrand> There should be layered input attachment CTS cases.
<jekstrand> Then again, dEQP *should* test a lot of things....
<zmike> daniels: have you ever seen this one before? https://gitlab.freedesktop.org/zmike/piglit/-/jobs/12206063
Erandir has quit []
Duke`` has joined #dri-devel
nchery has joined #dri-devel
Erandir has joined #dri-devel
Peste_Bubonica has quit [Remote host closed the connection]
<jekstrand> anholt_, daniels: Seeing some strange CI failures on APL: https://gitlab.freedesktop.org/mesa/mesa/-/jobs/12206325
<jekstrand> In the 2nd one, it's been trying to reboot for about 15m now
<cwfitzgerald[m]> what's the difference between i915 and the iris driver?
<jekstrand> cwfitzgerald[m]: i915 may refer to the kernel driver for all Intel hardware in the last 15 years or so. Or it may refer to the Mesa driver for really old (> 14 years or so) Intel hardware.
<jekstrand> iris is the modern Mesa userspace driver for reasonably modern (last 7 years or so) Intel hardware.
Kayden has quit []
Kayden has joined #dri-devel
<cwfitzgerald[m]> oh interesting, perhaps that explains something, do haswell/broadwell and *lake use different drivers (as a corollary to that, is there a place where I can look this kind of thing up without bugging y'all)
<imirkin_> cwfitzgerald[m]: gen8+ uses iris (except the gen8 mobile parts)
<imirkin_> gen8 = broadwell
<cwfitzgerald[m]> ah, that explains a lot, thank you!
<imirkin_> earlier parts (gen4 - gen7.5, i.e. GM965 - haswell) use the classic i965 dri driver, although there's a gallium driver in-development which will cover these generations
<imirkin_> much earlier parts (gen2/gen3) use the i915 or i915g driver. neither is particularly great, although anholt_ has adopted i915g and has improved it greatly.
<imirkin_> in the kernel, there is a single driver called 'i915'. not confusing at all.
<jekstrand> And all of the above mentioned hardware uses the i915 kernel driver
<imirkin_> for MUCH older hardware, there's a i830 driver, +/- a little
<Sachiel> i740 anyone?
<imirkin_> (we're talking like 440BX old)
<imirkin_> Sachiel: never existed. prove me wrong.
<glennk> i think utah-glx had a driver for i740?
<imirkin_> good luck tracking down an i740 board.
<imirkin_> i think even vsyrjala is missing one, and he has the ultimate intel collection...
<Sachiel> I should have saved the one I had
<glennk> i've seen a few on ebay lately
<HdkR> Apparently there are three on Ebay right now :D
<imirkin_> and now the million dollar question ... do you have an AGP slot to plug it into? :)
<glennk> i do actually, an old k6-2 in a corner somewhere
<jenatali> I'm pretty sure we have one on our GPU wall
<glennk> but it feels insulting to plug an i740 into something that currently has a nv11 in it
<imirkin_> such a downgrade
vivek has joined #dri-devel
<cwfitzgerald[m]> imirkin_: yeah I saw a i915 header with all generations of intel in it, and thought it was a single unified driver everywhere :)
alyssa has left #dri-devel [#dri-devel]
<imirkin_> cwfitzgerald[m]: there have been some changes through the generations, esp around address space availability. each "break" in driver support corresponds to a fairly big shift in strategy
<imirkin_> gen2/gen3 didn't have vertex shaders. gen8 has more address space which allows doing away with relocations.
<imirkin_> (gen2/gen3 also didn't have integers ... you name it, they didn't have it)
<cwfitzgerald[m]> heh
<cwfitzgerald[m]> yeah that makes sense
<imirkin_> gen4 was the first DX10 part from intel
<jenatali> Gen7 was the first DX12 part from Intel, though it really shouldn't have been, due to the address space differences you mentioned
<cwfitzgerald[m]> the context for this was that I just implemented a workaround for a mesa bug regarding fastclears (which I realize now only manifested on an older iris driver), I thought all intel was using i965 so i thought it wasn't fixed
<imirkin_> really? i thought you needed gen9 for DX12. but i guess i dunno. or they worked out a way to deal with it. i thought DX12 needed bindless...
<cwfitzgerald[m]> but this clears all that up, so I'm going to tell this person with the bug to update their mesa :)
<jenatali> imirkin_: No, the D3D12 binding model supports non-bindless hardware. The only real requirement is WDDM2, i.e. GPU virtual addressing support
<jenatali> Which gen7 technically has, but a 2GB limit of address space...
<imirkin_> ah
<imirkin_> somehow i associated D3D12 with bindless in my head. will try to remember.
<imirkin_> jenatali: so did Fermi get a D3D12 driver?
<jenatali> It did
<jenatali> https://en.wikipedia.org/wiki/Feature_levels_in_Direct3D#Support_matrix if you wanted to see - binding tier indicates level of bindless support
<imirkin_> wha... kepler doesn't have the 12.0 feature level?
<jenatali> I think it required the typed UAV load support, which I guess Kepler didn't have
<imirkin_> right
<imirkin_> have to bake the load format into the shader
<jekstrand> Is it just me or did we totally forget to test textureQueryLod in piglit?
<imirkin_> maxwell1 has typed though.
<imirkin_> jekstrand: forget is a strong word ... actively decided not to? also check for textureQueryLOD
<jekstrand> jenatali: I don't think anyone did DX12 on IVB, just HSW.
<jenatali> jekstrand: Is HSW not gen7?
<imirkin_> jekstrand: check tests/spec/arb_texture_query_lod/execution
<jekstrand> jenatali: 7.5, really
<jenatali> Ah, ok
<jekstrand> imirkin_: Ah. Of course, we have the same function with two different cases. Of course...
<imirkin_> jekstrand: the ext had it as LOD, but when it became core, it was Lod.
<imirkin_> the nice thing about standards ...
<imirkin_> is that there are so many to choose from
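(The spelling split in one place, as piglit-style C with embedded GLSL; these shader strings are illustrative, not lifted from the actual tests:)

    /* ARB extension spelling: textureQueryLOD */
    static const char *lod_ext_fs =
        "#version 130\n"
        "#extension GL_ARB_texture_query_lod : require\n"
        "uniform sampler2D tex;\n"
        "in vec2 uv;\n"
        "out vec4 color;\n"
        "void main() { color = vec4(textureQueryLOD(tex, uv), 0.0, 1.0); }\n";

    /* Core GLSL 4.00 spelling: textureQueryLod */
    static const char *lod_core_fs =
        "#version 400\n"
        "uniform sampler2D tex;\n"
        "in vec2 uv;\n"
        "out vec4 color;\n"
        "void main() { color = vec4(textureQueryLod(tex, uv), 0.0, 1.0); }\n";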
<jekstrand> jenatali: In theory, you CAN do D3D12/Vulkan on IVB but oh, my....
<jenatali> Yeah, HSW is kinda the same :)
<jekstrand> Even Haswell is a real pain and our Vulkan isn't 100% conformant. (I think the latest pass rates are like 99.9%)
<jekstrand> jenatali: At least Haswell has texture swizzle in hardware. :)
<jenatali> Ah, I remember when we added that during D3D12 design, yeah that was a magical feature to discover all of our target hardware supported
<glennk> jekstrand, s/Haswell/Hasalmost/ and call it a day?
vivek has quit [Ping timeout: 480 seconds]
<jekstrand> jenatali: D3D11 doesn't have swizzle?
<jenatali> jekstrand: Nope
<imirkin_> i sorta assumed D3D10 did
<imirkin_> all nvidia hw has had it since then
<imirkin_> and unless something's super-esoteric, my assumption is that nvidia == d3d10.
<imirkin_> probably not a perfect approach :)
<imirkin_> easier than checking the docs though
<jenatali> No, not until D3D12. D3D9 exposed a bunch of RGB/BGR variations on the same format, D3D10 picked RGB (except we got the channel order wrong and had to add BGR for 8bpp)
<jekstrand> jenatali: Interesting.
<jekstrand> jenatali: Fair warning: Watch out for border color. Swizzle + border color gets "fun" on AMD and Nvidia. :)
<jenatali> jekstrand: Yes, I'm aware of that one... we've got a spec gap there
<imirkin_> at least integer border colors started to work in fermi
<imirkin_> not so much before then
<jekstrand> imirkin_: Our HW didn't even have integer border colors until Haswell and there they're a joke.
<jekstrand> BDW+ is solid for it, though.
<jenatali> In 9on12, we emulate the BGR10A2 format with a swizzle, and I believe we've seen the swizzle + border issue come up in our conformance testing there
<jenatali> I'm pretty sure we'll need to add a vendor ID workaround at some point... we've got one vendor ID workaround so far for something else
<jekstrand> jenatali: Not surprising. We do a similar emulation for a couple Vulkan formats.
<jekstrand> And the Vulkan border color extension has an explicit list of formats that are known to not work. :-/
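(The extension being referred to is presumably VK_EXT_custom_border_color; a minimal sketch of how it chains into sampler creation, with device setup and error handling omitted:)

    #include <vulkan/vulkan.h>
    #include <stddef.h>

    VkSampler make_custom_border_sampler(VkDevice device)
    {
        VkSamplerCustomBorderColorCreateInfoEXT border = {
            .sType = VK_STRUCTURE_TYPE_SAMPLER_CUSTOM_BORDER_COLOR_CREATE_INFO_EXT,
            .customBorderColor = { .float32 = { 1.0f, 0.0f, 1.0f, 1.0f } }, /* pink, of course */
            .format = VK_FORMAT_R8G8B8A8_UNORM, /* pick one not on the known-broken list */
        };
        VkSamplerCreateInfo info = {
            .sType = VK_STRUCTURE_TYPE_SAMPLER_CREATE_INFO,
            .pNext = &border,
            .addressModeU = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER,
            .addressModeV = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER,
            .addressModeW = VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER,
            .borderColor = VK_BORDER_COLOR_FLOAT_CUSTOM_EXT,
        };
        VkSampler sampler = VK_NULL_HANDLE;
        vkCreateSampler(device, &info, NULL, &sampler);
        return sampler;
    }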
<jenatali> Oof...
<glennk> i think there are more conformance test cases for border colors than there are actual applications using them
<imirkin_> jenatali: oh, 12_0 requires sparse textures?
<jekstrand> glennk: Many games use border colors. It's just always pink. :D
<jenatali> imirkin_: Huh, I didn't remember that, but I'd believe it
tobiasjakobi has joined #dri-devel
frieder_ has quit [Remote host closed the connection]
<imirkin_> jenatali: not sure i'm reading it right. but i see why maxwell2 is required for 12_0, it requires a weird sampling parameter clamp in shader.
<imirkin_> which was not supported until then i guess
<jenatali> imirkin_: Yeah, our tier 1 of tiled/sparse resources produces UB on reading from unmapped tiles, tier 2 requires 0s
<imirkin_> which implicitly requires support for sparse resources? :)
<jenatali> No, FL12.0 requires tier 2, for whatever reason
<glennk> jekstrand, i think most of those meant to use clamp_to_edge but typo'd
<jenatali> IIRC the reason we split it into two tiers was because of the unmapped read/write behavior, but it just so happened that hardware with the tier 2 behavior also supported that feature
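(The same D3D12 options struct reports this tier; a short C++ sketch:)

    #include <d3d12.h>

    bool zero_on_unmapped_read(ID3D12Device *device)
    {
        D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
        device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                    &opts, sizeof(opts));

        // Tier 1: reads from unmapped tiles are undefined behavior.
        // Tier 2: unmapped reads return zero and writes are discarded,
        // which is the behavior FL12.0 insists on.
        return opts.TiledResourcesTier >= D3D12_TILED_RESOURCES_TIER_2;
    }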
dviola has quit [Ping timeout: 480 seconds]
dviola has joined #dri-devel
anujp has quit [Ping timeout: 480 seconds]
anujp has joined #dri-devel
<daniels> zmike: nah I haven't run into the pull-rate limit yet, probably because I don't use the Docker Hub images, partly for that reason :P
<daniels> ci-templates is probably a better idea
<zmike> so...I should retry
<daniels> jekstrand: first one is known and being actively worked on (machine physically just dies, we don't detect it, and the error message is also confusing); for the second one it did succeed, but yeah we probably need to decrease the allowed time for that stage
<jekstrand> daniels: Ok, glad it's known
<daniels> zmike: yeah, if you're not going to move it away from using Docker Hub then just spam retry until you land on a runner which hasn't hit the Docker Hub pull limit. but don't spam retry so many times that you push all the others beyond that limit!
tobiasjakobi has quit [Remote host closed the connection]
<zmike> I'm just trying to merge something...
ngcortes has joined #dri-devel
<daniels> I don't have a better answer for you
<daniels> Docker Hub enforces a rate limit on pulls
<daniels> Piglit uses Docker Hub images rather than ci-templates, which is news to me as well
<daniels> the runners are smashing up against the rate limits
<HdkR> Time for a self-hosted registry?
<daniels> we do have one
<daniels> everything else uses it
mlankhorst has quit [Ping timeout: 480 seconds]
<HdkR> hah
vivek has joined #dri-devel
hch12907 has quit [Read error: Connection reset by peer]
<sravn> danvet: I just browsed the loongson drm driver code - found something on lore. There are a few things to work on... I look forward to providing feedback on the next patch revision
thellstrom has joined #dri-devel
thellstrom1 has joined #dri-devel
thellstrom has quit [Read error: Connection reset by peer]
camus1 has joined #dri-devel
camus has quit [Ping timeout: 480 seconds]
jkrzyszt has quit [Ping timeout: 480 seconds]
gpoo has quit [Ping timeout: 480 seconds]
gpoo has joined #dri-devel
camus has joined #dri-devel
camus1 has quit [Ping timeout: 480 seconds]
<jekstrand> daniels: So we just need to make piglit not use dockerhub? That sounds solvable.
mlankhorst has joined #dri-devel
<airlied> danvet: he does work for IBM :-)
rsalvaterra_ has joined #dri-devel
<danvet> airlied, I know
<danvet> still
<danvet> airlied, I mean you don't and you also work for ibm :-P
rsalvaterra has quit [Ping timeout: 480 seconds]
<airlied> danvet: i dont work for ibm, they just take my profits :-p
rsalvaterra_ has quit []
rsalvaterra has joined #dri-devel
<daniels> airlied: you love Java, really
<sravn> tzimmermann: any reason why the irq de-midlayer does not use devm_request_irq in all conversions?
<daniels> jekstrand: yep, and just for the two Tox jobs which use python:$VER images rather than ci-templates
<airlied> java on s390 is where it's at
<daniels> jekstrand: so either get it into ci-templates using a distro image, or just mirror the images in
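(Roughly what that fix looks like in a .gitlab-ci.yml; the mirror path and job name here are invented, the point is just not pulling straight from Docker Hub:)

    # before: image pulled from Docker Hub, subject to its rate limit
    # tox:
    #   image: python:3.9

    # after: same image, served from the project's own registry mirror
    tox:
      image: registry.freedesktop.org/mirrors/python:3.9
      script:
        - tox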
mbrost has quit [Ping timeout: 480 seconds]
<sravn> tzimmermann: I have asked on mail, so others can see the response
thellstrom1 has quit []
tzimmermann has quit [Quit: Leaving]
yoslin_ has quit []
Guest1346 is now known as gruetzkopf
mbrost has joined #dri-devel
yoslin has joined #dri-devel
alyssa has joined #dri-devel
<alyssa> CI seems a lot more useful now that I'm writing unit tests.
<HdkR> It becomes depressing when you break 700 of them in one CI run though :P
<alyssa> meson test --suite=panfrost though
<alyssa> I have some bifrost unit tests not integrated with meson, I should fix that.
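(Wiring a standalone test binary into meson is roughly a one-liner per test; a sketch with invented file and target names:)

    # meson.build: register the binary with the 'panfrost' suite so
    # `meson test --suite=panfrost` picks it up
    bifrost_liveness_test = executable(
      'bifrost_liveness_test',
      'test/test-liveness.c',
    )
    test('bifrost liveness', bifrost_liveness_test, suite : ['panfrost'])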
Kayden has quit [Quit: reboot]
<alyssa> admittedly if you broke those tests CI would fail in 17 different ways.
Kayden has joined #dri-devel
mbrost has quit [Ping timeout: 480 seconds]
Kayden has quit []
Kayden has joined #dri-devel
dv_ has quit [Ping timeout: 480 seconds]
yoslin has quit [Quit: WeeChat 3.2]
mbrost has joined #dri-devel
dv_ has joined #dri-devel
padovan4 has joined #dri-devel
padovan has quit [Ping timeout: 480 seconds]
aissen has quit [Ping timeout: 480 seconds]
Duke`` has quit [Ping timeout: 480 seconds]
Duke`` has joined #dri-devel
pcercuei has quit [Ping timeout: 480 seconds]
aissen has joined #dri-devel
pcercuei has joined #dri-devel
idr has quit [Remote host closed the connection]
adjtm has quit [Remote host closed the connection]
adjtm has joined #dri-devel
lemonzest has quit [Quit: Quitting]
Duke`` has quit [Ping timeout: 480 seconds]
yoslin has joined #dri-devel
danvet has quit [Ping timeout: 480 seconds]
agx has quit [Read error: Connection reset by peer]
agx has joined #dri-devel
xexaxo has joined #dri-devel
mlankhorst has quit [Ping timeout: 480 seconds]
xexaxo_ has quit [Ping timeout: 480 seconds]
ngcortes has quit [Remote host closed the connection]
pcercuei has quit [Quit: dodo]
<anholt_> jekstrand: yeah, that failure is why I haven't worked on filling out the testsuites and getting VK CI going for intel yet. Collabora's working on the workarounds.
<alyssa> Yikes!
vivijim has quit [Ping timeout: 480 seconds]
<alyssa> Investigating a shader-db regression from nir_opt_shrink_vectors led me to write a liveness validation pass for bifrost, which in turn uncovered an entire class of bugs X_X
<imirkin_> alyssa: ignorance is bliss
<alyssa> imirkin_: seriously!
<karolherbst> alyssa: the fun of having a proper backend compiler
<alyssa> karolherbst: twitter is peer pressuring me to be a grown up software dev
<karolherbst> alyssa: don't let them pressure you :D
<imirkin_> i believe those are called 'systems analysts'
<alyssa> Oh, gosh. The failed invariant causes suboptimal regalloc on mesa main branch.
<alyssa> So not just an imaginary problem
* karolherbst continues procrastinating his promotion to senior developer...
<urja> so a complex problem? :P
<alyssa> urja: exactly
<imirkin_> karolherbst: if you move to spain, you could be señor developer?
<alyssa> urja: Problem ∈ ℂ
<karolherbst> imirkin_: mhhh, the question is if moving to spain is easier than writing a few pages of text :D
<karolherbst> I might start to think it is
* airlied has procrastinated for 3 years on promotion writing
<imirkin_> any place you go, these reviews are the worst
<karolherbst> airlied: :D I think I am doing the same
<imirkin_> esp the peer reviews, which are a zero-sum game (since for every review given, someone receives them), but somehow you always end up writing more than you receive...
<karolherbst> yeah.. dunno :D
<imirkin_> so there's just a black hole of peer reviews somewhere
<karolherbst> my plan was simply to get relocated to germany first and then get the promotion, but relocating already took 2 years
<karolherbst> so I am quite close to airlieds 3 years :D
<airlied> my plan was to sit on my arse and hope someone promoted me with no effort
<karolherbst> airlied: same...
<imirkin_> airlied: step 1 complete, i presume? :)
<ccr> peer reviews should become beer reviews, e.g. you review and receive a beer
<airlied> imirkin_: my chair is pretty comfy
<karolherbst> airlied: when are you planning your next though? :D
<karolherbst> I can't imagine the hassle you have to go through though
<imirkin_> ccr: wait, is there any other way of writing them?
<airlied> karolherbst: for the extreme boss level promos it takes about a year of planning just to work out wtf they want from you :-P
<karolherbst> I heard if you make CL a success that might qualify you? Or what's the idea there? :P
<airlied> karolherbst: nah it's more about proving if they did things right we could make compute a success, rather than pulling off the impossible :-P
<karolherbst> :D
<airlied> everyone who works on CL should watch mean girls: "stop trying to make CL happen"
<alyssa> how do i deal with this uhhhh
<karolherbst> but if you would write that CL stack everybody starts using that would qualify you for the next levels :p
<alyssa> i wonder how the DDK deals
* airlied can't even compile clang because my big RAM PC broke :-P
<airlied> systemd-oom also sucks, I should get on complaining more about it
<karolherbst> airlied: ... there is a trick: disable parallel linking :D
<karolherbst> I had this case where 12 linker processes started to have 12 linker threads each
<karolherbst> RAM didn't like that
<airlied> karolherbst: I can die on my machine with just one link :-P
<airlied> on my laptop
<airlied> probably need to nuke LTO
<karolherbst> I guess so
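(Both knobs exist in a meson build like mesa's; a sketch, with the job count being whatever your RAM tolerates:)

    # cap concurrent link jobs without capping compile jobs
    meson configure builddir -Dbackend_max_links=2

    # and/or drop LTO, which multiplies link-time memory use
    meson configure builddir -Db_lto=false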
<karolherbst> I was wise and requested 32 GB RAM
<karolherbst> and 10GB is in use for... like.. having a desktop ¯\_(ツ)_/¯
<airlied> just have to go and fix my 64GB machine someday soon
<ccr> imirkin_, :D
<karolherbst> airlied: sounds like a good idea
<imirkin_> hrmph ... i'm starting to feel inadequate with 6GB on my main desktop...
<alyssa> Hm.
<karolherbst> imirkin_: how can you even manage
<imirkin_> same way i did 10y ago when the comp was new?
<karolherbst> granted my chromium has like 20 tabs open
<imirkin_> and same way i did when i had 512MB of ram in prev comp?
<imirkin_> nothing's changed in my usage
<karolherbst> sometimes I hit 50 tabs and think I should close some, but I've heard about folks with like 500 open tabs
<karolherbst> don't ask me how they organize that even ¯\_(ツ)_/¯
<imirkin_> usually i open enough tabs for them to all be little icons
<Sachiel> I don't think you get to 500 tabs by being organized
<imirkin_> and then i get angry and close all of them :)
<alyssa> karolherbst: I've been using 4GB of RAM since... maybe since always, i'm young
<karolherbst> Sachiel: apparently there are plugins to group them and stuff
<alyssa> But on the brink of making M1 my main machine which will be my first real upgrade in ages (8GB)
<karolherbst> alyssa: how do you even manage :D
<alyssa> manage.. what?
<zmike> > I don't think you get to 500 tabs by being organized
<zmike> can confirm
<karolherbst> :D
<Sachiel> karolherbst: yeah, I use one of those on firefox... I still have 41 tabs open in total and I think that's too much
<karolherbst> alyssa: living with 8GB of RAM
<karolherbst> Sachiel: at some point you start closing tabs and get from 50 to 40, because all of the 40 ones are actually notes to you of things you have to do
<karolherbst> :D
<Sachiel> I keep all my notes in my head
* karolherbst doubts
<Sachiel> that way I keep my TODO list at just one item
<karolherbst> :D
<karolherbst> forgetting is part of the process, I see
<Sachiel> if it's important, it'll show up again
<karolherbst> I guess
<alyssa> [rainbow infinity.jpg]
* airlied declares tab bankruptcy at least once every week or two
<karolherbst> airlied: I just go with the flow
* Sachiel puts pound cake and tea at the top of the TODO list
<ccr> ahhh, coffee.
<airlied> building llvm and parallel CTS runs are where I like to use the RAM
<imirkin_> pound of cake. yum.
<bnieuwenhuizen> airlied: I just do it when I am forced to restart my browser to get those newfangled security updates
<karolherbst> bnieuwenhuizen: are you this kind of person who also reboots for kernel updates? :O
<bnieuwenhuizen> karolherbst: if people are requiring me to actually run the new kernel, yes
* karolherbst wishes we had a desktop on linux that would restore all windows to how they were before the reboot
iive has quit []
<bnieuwenhuizen> which I can actually somewhat see the reasoning for wrt security
<karolherbst> yeah....
<bnieuwenhuizen> but for home desktop it mostly happens involuntarily in combination with a GPU hang
<karolherbst> :D
<karolherbst> I am still waiting on chromium to become usable with wayland
<karolherbst> but that still didn't happen
<karolherbst> so that's why I am at least a bit better about browser updates
idr has joined #dri-devel
<idr> Anyone else have the experience where turning off the external (to the laptop) monitor causes the session to crash?
<i-garrison> idr: kde plasma on wayland has it
<alyssa> idr: is having sessions crash at random times unexpected? asking as a mali user
<idr> Hm... This is gnome on Wayland... perhaps Wayland is the common bit?
<airlied> idr: journalctl should have the backtrace
<airlied> there are no common wayland bits :-P
Hi-Angel has quit [Ping timeout: 480 seconds]
<zmike> you're forgetting all the code copied from weston
<idr> hrm...
<idr> Looks like gnome-shell died in g_mutex_lock.
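(For reference, assuming systemd-coredump is in use:)

    # backtrace from the previous boot's journal
    journalctl -b -1 _COMM=gnome-shell

    # or load the captured core straight into gdb
    coredumpctl gdb gnome-shell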
pnowack has quit [Quit: pnowack]
rasterman has quit [Ping timeout: 480 seconds]
<airlied> idr: if you file a bug wherever jadahl says with the backtrace it might be useful
alatiera0 has joined #dri-devel
<idr> jadahl: I await your command. :)
vivek has quit [Ping timeout: 480 seconds]
alatiera has quit [Ping timeout: 480 seconds]