ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
gpiccoli has quit [Quit: Bears...Beets...Battlestar Galactica]
stuarts has quit []
zf has joined #dri-devel
rosefromthedead has quit [Ping timeout: 480 seconds]
<DavidHeidelberg[m]> zmike: btw. adding flakes for HL2 engine traces, all flakes regulary on zink :(
<DavidHeidelberg[m]> not sure if new or it was always like that, but these days it's pretty common:
<DavidHeidelberg[m]> s/regulary/often/
<zmike> DavidHeidelberg[m]: those weren't flakes though, those were real bugs hitting my MRs until I fixed them
<DavidHeidelberg[m]> oopsie. Thanks for the feedback, there is likely a bug inside the flake reporting logic to fix.
<DavidHeidelberg[m]> zmike: on the other hand, does that mean traces caught something or it was multiple jobs failing?
<zmike> traces catch bugs
<DavidHeidelberg[m]> (heartwarming to hear that)
<clever> does anybody here know amdgpu well? ive written a prometheus exporter based on radeontop, and can now clearly see my issues, "vram" fills up instantly, but there is still a gig of "gtt" available
<clever> if i could adjust the balance between those 2 pools, i could get more out of this hw?
YuGiOhJCJ has joined #dri-devel
<robclark> zmike: yeah, for glthread
nchery has quit [Quit: Leaving]
<robclark> binhani: was hoping that tlwoerner would respond.. I think he is still involved w/ gsoc/evoc.. I've been out of the loop on that for a few years
<robclark> zmike: I am not sure how much we want to use glthread.. but making the frontend part of shader compile async is useful for some games, it seems.. (OTOH just getting disk_cache async store working for android might be enough)
<zmike> robclark: I'm not sure that's required for tc? this is basically a stream uploader that remains mapped async for copies between glthread + driver thread
<zmike> yeah glthread usage is def questionable on adreno, but HdkR said it was still good with source games, so I assume the driconf entries are still worthwhile
<robclark> I might worry about glthread + !tc case
<zmike> I filed a fd ticket about that since it crashes
<robclark> because in that case we do some of the same buffer replacement that tc does
<zmike> on zink at least I'd rather have tc than glthread
<robclark> yeah, I saw that.. but also didn't reproduce it.. but didn't try too hard
<zmike> not a big deal
<HdkR> I think the most significant part of the improvement is just getting work off the primary thread. Because games do dumb things like render and IO on the same thread :P
<robclark> yeah, same.. I think only reason glthread might be interesting is games that tend to invent new shaders at bad times
<robclark> well, that too
<zmike> in source iirc it's the number of draw calls
<HdkR> IO is a personal peeve of mine because I run games off of NFS and this device's networking perf is abysmal
<HdkR> Might as well be 100mbit ethernet
mohamexiety has quit [Quit: Konversation terminated!]
<clever> HdkR: i once tried running WoW over samba, i then discovered WoW does certain writes one byte at a time, and the windows samba client blocks for a full round trip on each byte
fab has quit [Ping timeout: 480 seconds]
<clever> as a result, the game locks up solid for about an hour, when you disconnect/exit
<clever> and it does so before rendering the "you have been randomly disconnected" dialog
<HdkR> Nice
<clever> so it will just randomly hang, and only if you wait an hour will you know why
<HdkR> Nah, just strace it on Linux and see it doing a wackload of blocking IO one byte at a time :P
<clever> i forget the name, but there is a windows util similar to strace, which is how i discovered this
<clever> winmon or something, from the sysinternals package
fab has joined #dri-devel
<jenatali> Huh cool
<clever> that windows tool is more like dtrace on a mac, it gets syscalls for every process on the entire system, and you then have to filter it down to something useful
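[Editorial aside: the failure mode clever describes, where each one-byte write() pays a full network round trip, can be sketched with a toy latency model. The RTT and payload numbers below are made up purely for illustration and are not measurements of WoW or Samba:]

```python
# Toy model: a network filesystem that costs one round trip per write()
# syscall. Per-byte writes multiply total blocking time by the payload
# size; a single buffered write pays the round trip once.
RTT_MS = 0.5          # assumed round-trip time per syscall (hypothetical)
PAYLOAD = 64 * 1024   # assumed 64 KiB of settings data (hypothetical)

def flush_time_ms(write_sizes, rtt_ms=RTT_MS):
    """Model total blocking time as one round trip per write call."""
    return len(write_sizes) * rtt_ms

byte_at_a_time = [1] * PAYLOAD   # what the game does
buffered = [PAYLOAD]             # one large write via a userspace buffer

print(flush_time_ms(byte_at_a_time))  # 32768.0 ms of blocking
print(flush_time_ms(buffered))        # 0.5 ms
```

With a realistic WAN round trip the per-byte variant scales into minutes or hours, which matches the hang clever observed at disconnect time.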
yuq825 has joined #dri-devel
binhani has quit [Ping timeout: 480 seconds]
heat_ has quit [Read error: No route to host]
heat has joined #dri-devel
columbarius has joined #dri-devel
aravind has joined #dri-devel
co1umbarius has quit [Ping timeout: 480 seconds]
Danct12 is now known as Guest8376
Danct12 has joined #dri-devel
binhani has joined #dri-devel
<mareko> robclark: the latter needs to handle a new PIPE_MAP flag
ybogdano has quit [Ping timeout: 480 seconds]
<mareko> it's totally different from TC if you use a slab allocator for pipe_transfer structures
bmodem has joined #dri-devel
<robclark> mareko: hmm, we do use slab allocator.. I didn't notice any reference to PIPE_MAP_x flag in docs about the caps
ybogdano has joined #dri-devel
binhani has quit [Ping timeout: 480 seconds]
ybogdano has quit [Ping timeout: 480 seconds]
Guest8366 is now known as nchery
aravind has quit [Ping timeout: 480 seconds]
agd5f has joined #dri-devel
agd5f_ has quit [Ping timeout: 480 seconds]
mbrost has joined #dri-devel
rsalvaterra has quit [Remote host closed the connection]
rsalvaterra has joined #dri-devel
robmur01 has quit [Ping timeout: 480 seconds]
mbrost_ has joined #dri-devel
mbrost has quit [Ping timeout: 480 seconds]
robmur01 has joined #dri-devel
Zopolis4 has joined #dri-devel
mbrost_ has quit [Ping timeout: 480 seconds]
mbrost_ has joined #dri-devel
aravind has joined #dri-devel
bmodem has quit []
mbrost_ has quit [Ping timeout: 480 seconds]
heat has quit [Remote host closed the connection]
kzd has quit [Ping timeout: 480 seconds]
mbrost_ has joined #dri-devel
ioldoortileotm^ has quit [Remote host closed the connection]
bmodem has joined #dri-devel
Leopold_ has quit [Ping timeout: 480 seconds]
Leopold has joined #dri-devel
mbrost_ has quit [Read error: Connection reset by peer]
godvino has joined #dri-devel
ngcortes has quit [Read error: Connection reset by peer]
godvino has quit [Quit: WeeChat 3.6]
Zopolis4 has quit []
itoral has joined #dri-devel
junaid has joined #dri-devel
junaid has quit [Remote host closed the connection]
danvet has joined #dri-devel
fab has quit [Quit: fab]
fab has joined #dri-devel
danvet has quit [Read error: Connection reset by peer]
Ahuj has joined #dri-devel
sghuge has quit [Remote host closed the connection]
sghuge has joined #dri-devel
danvet has joined #dri-devel
Zopolis4 has joined #dri-devel
alanc has quit [Remote host closed the connection]
lemonzest has quit [Quit: WeeChat 3.6]
alanc has joined #dri-devel
frieder has joined #dri-devel
lemonzest has joined #dri-devel
fab has quit [Quit: fab]
kts has joined #dri-devel
ice9 has joined #dri-devel
pochu has joined #dri-devel
bmodem1 has joined #dri-devel
tzimmermann has joined #dri-devel
bmodem has quit [Ping timeout: 480 seconds]
fab has joined #dri-devel
tursulin has joined #dri-devel
pcercuei has joined #dri-devel
rasterman has joined #dri-devel
lynxeye has joined #dri-devel
apinheiro has joined #dri-devel
devilhorns has joined #dri-devel
<MrCooper> clever: VRAM generally performs much better than GTT for GPU operations (and the latter takes away from system memory, it's not a separate pool), so the former filling up before the latter is expected
perr has joined #dri-devel
vliaskov has joined #dri-devel
<clever> MrCooper: the issue is that chrome is using a decent chunk of VRAM, and i think the gpu is exhausting all VRAM when i launch a game
<clever> if i have chrome running when i launch a game, the game only gets 1fps, but if i close chrome and restart the game, it runs fine
<clever> ah, but i went into the chrome task manager, and killed things based on gpu mem usage, and now its just low enough usage that they can co-exist
<clever> until i enter a room with too much detail, then it slows to a crawl once more
rosefromthedead has joined #dri-devel
aravind has quit [Ping timeout: 480 seconds]
jdavies has joined #dri-devel
jdavies is now known as Guest8409
<MrCooper> clever: unless Chrome keeps drawing as well in the background, its BOs in VRAM should get evicted out of VRAM in favour of the game's in the long run
<clever> i could try SIGSTOP, that would halt all usage of the BO's
Guest8409 has quit [Ping timeout: 480 seconds]
djbw has quit [Read error: Connection reset by peer]
<clever> MrCooper: oh, is it possible to list all BO's and where they currently live?
<clever> along with size
<MrCooper> AFAICT /sys/kernel/debug/dri/0/amdgpu_gem_info is pretty much that
<clever> ah, perfect
<clever> its even listing the metric exporter i tied into prometheus, which has zero BO's
<clever> and if i parse that dump, i could graph how much vram and gtt each pid is using
<clever> X is listed multiple times..., because it has multiple drm handles open
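[Editorial aside: a minimal parser for the graphing idea clever mentions might look like the sketch below. The exact line layout of /sys/kernel/debug/dri/0/amdgpu_gem_info is an assumption modeled on typical amdgpu debugfs output, so the regexes and the sample dump are illustrative, not authoritative:]

```python
import re
from collections import defaultdict

# Assumed format (hypothetical, check your kernel's actual output):
#   pid     1234 command Xorg:
#       0x00000001:      2097152 byte VRAM NO_CPU_ACCESS
#       0x00000002:       655360 byte GTT CPU_GTT_USWC
PID_RE = re.compile(r"pid\s+(\d+)\s+command\s+(\S+):")
BO_RE = re.compile(r"0x[0-9a-fA-F]+:\s+(\d+)\s+byte\s+(VRAM|GTT|CPU)")

def usage_by_pid(text):
    """Return {(pid, command): {placement: total_bytes}} from a gem_info dump."""
    totals = defaultdict(lambda: defaultdict(int))
    key = None
    for line in text.splitlines():
        m = PID_RE.search(line)
        if m:
            key = (int(m.group(1)), m.group(2))
            continue
        b = BO_RE.search(line)
        if b and key is not None:
            totals[key][b.group(2)] += int(b.group(1))
    return totals

sample = """\
pid     1234 command Xorg:
\t\t0x00000001:      2097152 byte VRAM NO_CPU_ACCESS
\t\t0x00000002:       655360 byte GTT CPU_GTT_USWC
"""
print(dict(usage_by_pid(sample)[(1234, "Xorg")]))
# {'VRAM': 2097152, 'GTT': 655360}
```

The per-(pid, placement) totals could then be exported as Prometheus gauges, giving exactly the per-process VRAM/GTT graphs discussed above.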
perr has quit [Quit: Leaving]
<MrCooper> that is because the DRM file descriptor for DRI3 clients is opened by X, there are pending patches which will name the DRI3 clients instead
<clever> unix socket fd passing?
<MrCooper> yep
yuq825 has quit [Ping timeout: 480 seconds]
<clever> MrCooper: oh, that reminds me, while copying code from radeontop, i noticed getting the drm magic# from the render node, fails with a permission error
jfalempe has quit [Ping timeout: 480 seconds]
<clever> drmGetMagic()'s ioctl
<clever> which kind of makes the whole point of that moot
<MrCooper> the magic stuff isn't needed in the first place with render nodes
<MrCooper> rendering ioctls work by default with them
<clever> ah
<clever> so its only the display nodes, that need magic, to mediate control over output ports
<clever> and render nodes, anybody can use it, but they just get a BO result, and cant directly display it
<MrCooper> more or less, right
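[Editorial aside: the split MrCooper and clever arrive at can be summarized with a small classifier over DRM device node names. The name prefixes and the minor-number convention (render nodes start at minor 128) are standard Linux DRM behavior; the helper itself is just an illustrative sketch:]

```python
# Primary ("card") nodes carry modesetting rights and use DRM-auth magic
# (drmGetMagic/drmAuthMagic) for legacy clients; render ("renderD") nodes
# accept rendering ioctls from anyone with file permissions, no magic
# needed -- which is why drmGetMagic() on a render node is pointless.
def classify_drm_node(name):
    if name.startswith("renderD"):
        return "render"   # unprivileged rendering, no auth required
    if name.startswith("card"):
        return "primary"  # display/modesetting, mediated via DRM master
    if name.startswith("controlD"):
        return "control"  # deprecated control nodes
    return "unknown"

for n in ("card0", "renderD128", "controlD64"):
    print(n, "->", classify_drm_node(n))
```

So a compositor holds DRM master on a card node to mediate outputs, while a headless client can render on renderD128 and only gets BOs back, just as described above.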
jfalempe has joined #dri-devel
bmodem has joined #dri-devel
bmodem1 has quit []
<clever> definitely looks like ctrl+z helps
<clever> vram dropped when i did that, and went back up upon fg
anholt_ has joined #dri-devel
anholt has quit [Ping timeout: 480 seconds]
<MrCooper> clever: is this in a Wayland or X session?
<clever> X session
heat has joined #dri-devel
yuq825 has joined #dri-devel
pac85 has joined #dri-devel
<pac85> I recently opened an MR with some changes to Gallium which triggered a ton of CI stages and I got a lot of failures. Now because those seemed unrelated I ran the CI on the main branch (plus a commit that adds some comments to trigger the ci stages)
<pac85> There are several failures across various drivers: crocus-hsw has 2 failures, freedreno has 44 on one device, and anv has some too
<pac85> Now if I understand correctly the CI runs every time something is merged so this shouldn't be possible right?
rosefromthedead has quit [Ping timeout: 480 seconds]
Leopold has quit [Remote host closed the connection]
_xav_ has quit [Ping timeout: 480 seconds]
Leopold has joined #dri-devel
_xav_ has joined #dri-devel
<zmike> you ran the manual jobs that don't run on merges
<zmike> you aren't supposed to run those
<pac85> Oh I didn't know that. Thank you!
<danvet> javierm, for the nvidia think I did some series but didn't get around to respinning it yet :-(
kts has quit [Quit: Konversation terminated!]
Danct12 has quit [Quit: WeeChat 3.8]
heat has quit [Remote host closed the connection]
pochu has quit [Quit: leaving]
heat has joined #dri-devel
aravind has joined #dri-devel
agd5f_ has joined #dri-devel
<javierm> danvet: yeah, we remembered that with tzimmermann and were discussing it yesterday
<danvet> I need to get around to that :-(
<javierm> danvet: since it only affects nvidia in practice, I guess it's hard to give it a high prio
<javierm> there are some patches from your series that I think could land though, like the ones for ast and mgag200
<javierm> tzimmermann: ^
agd5f has quit [Ping timeout: 480 seconds]
Zopolis4 has quit []
fxkamd has joined #dri-devel
itoral has quit [Remote host closed the connection]
pa- has quit [Ping timeout: 480 seconds]
gpiccoli has joined #dri-devel
fxkamd has quit []
<zmike> mareko: I still need details on the exact failure you're seeing with KHR-GL46.gpu_shader_fp64.fp64.state_query
Haaninjo has joined #dri-devel
elongbug has joined #dri-devel
heat has quit [Read error: No route to host]
heat has joined #dri-devel
devilhorns has quit []
elongbug has quit [Remote host closed the connection]
elongbug has joined #dri-devel
mohamexiety has joined #dri-devel
rosefromthedead has joined #dri-devel
yuq825 has left #dri-devel [#dri-devel]
kts has joined #dri-devel
srslypascal has quit [Remote host closed the connection]
srslypascal has joined #dri-devel
pa has joined #dri-devel
fab has quit [Quit: fab]
bmodem has quit [Ping timeout: 480 seconds]
stuarts has joined #dri-devel
aravind has quit [Remote host closed the connection]
<MrCooper> robclark: sysfs is generally writable for root only, so not usable for general purpose display servers
ice9 has quit [Read error: Connection reset by peer]
ice9 has joined #dri-devel
<dj-death> gfxstrand: random question, it seems global memory load/store to implement ubo/ssbo load/stores does not deal with null descriptors, does that sound correct?
<gfxstrand> dj-death: It handles them when robustness is enabled because the buffer size is zero and so everything is OOB
kzd has joined #dri-devel
<dj-death> I see thanks
<dj-death> looks like it's going to be my problem with descriptor buffers :/
<mareko> robclark: PIPE_MAP_THREAD_SAFE
heat_ has joined #dri-devel
heat has quit [Read error: No route to host]
<anholt_> I believe so
<anholt_> (when receiving a bug report from folks that cared about the cts, that's the root of the url I got)
binhani has joined #dri-devel
Ahuj has quit [Read error: Connection reset by peer]
fab has joined #dri-devel
<jenatali> daniels: What kind of stress test were you thinking for !22034?
<daniels> jenatali: .gitlab-ci/bin/ --target 'jobnameregex' --stress
mbrost has joined #dri-devel
<daniels> add --sha REV (or --pipeline ID) if it's not HEAD you want to test
godvino has joined #dri-devel
<jenatali> Ah I see
* gfxstrand so wants to rewrite the RADV image code but I'm too afraid to
<jenatali> I've not used that yet because using the UI to click play on the Windows build jobs automatically limits to the drivers I care about :P
<zmike> surely there's nothing more urgent demanding your time
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
rosefromthedead has quit []
<zmike> anholt_: is it intentional that deqp-runner doesn't work with --deqp-log-images=disable / --deqp-log-shader-sources=disable ?
Duke`` has joined #dri-devel
<anholt_> I haven't considered ever wanting to do that.
<anholt_> "be able to make some sense of rare, flaky results" has always been a priority.
<gfxstrand> dj-death: Why do descriptor buffers make it harder?
<zmike> mm
<zmike> in a run where you know there will be lots of failures, writing all the outputs will end up increasing the test time exponentially
<anholt_> I think you'd need to back up and explain what problem you're really trying to solve here.
<zmike> I'm trying to solve the problem of running cts CLs that are broken and not being able to because writing all the outputs takes literal hours and consumes my entire disk
<anholt_> I'm asking why you need the status of running the a large subset of the CTS if it's broken?
<zmike> because I'm involved with CTS development and running tests is part of the process?
<anholt_> I guess you could make the cts allow repeated arguments that override each other. or special-case in deqp-runner to drop the defaults if you have an override present. but I still don't understand why you need to run some large fraction of some massive set of tests that are all failing.
<anholt_> usually people dealing with some big set of failing stuff will carve off a specific subset to test, or use a --fraction, or something.
<anholt_> like, are you planning on tracking thousands of xfails as you develop?
<anholt_> I really don't get the usecase.
<zmike> well currently I can't even establish a baseline
frieder has quit [Remote host closed the connection]
<zmike> the goal is to be able to determine the patterns of tests that are failing so that they can be fixed, but I can't actually run all the tests (in deqp-runner) because of the previously-mentioned issues
<anholt_> ok, well, I've given a whole bunch of ideas here. go for it.
<zmike> alrighty
pcercuei has quit [Quit: brb]
kts has quit [Quit: Konversation terminated!]
bmodem has joined #dri-devel
pcercuei has joined #dri-devel
bmodem has quit [Ping timeout: 480 seconds]
djbw has joined #dri-devel
<zmike> anholt_: as an alternative, is there a reason why deqp-runner couldn't just add the user's — options after the deqp-runner internal opts? I think that would solve this case neatly
<zmike> (though obviously it would also allow users to footgun)
tursulin has quit [Ping timeout: 480 seconds]
<anholt_> actually, looking at the code, deqp-runner doesn't add any image or shader logging args
<Sachiel> they are on by default
<anholt_> yeah
<zmike> it seems to add those args at
<zmike> err 313
<zmike> and 322
<zmike> will test if moving the user args is enough to resolve it
<anholt_> I'm looking at 313 and that's "deqp-log-filename"
<anholt_> 322 is "deqp-shadercache-filename"
<zmike> yea I'm guessing specifying those overrides the user options
<zmike> or something
<anholt_> the user options you asked about were " --deqp-log-images=disable / --deqp-log-shader-sources=disable"
<anholt_> which are not those.
<zmike> I dunno, I'm just speculating why the user opts wouldn't be working based on the code there
<anholt_> ok, I'm going to stop engaging with this conversation until you do some actual investigation instead of speculating.
<zmike> sounds good
<dj-death> gfxstrand: have to decode the RENDER_SURFACE_STATE from the shader
<dj-death> gfxstrand: not impossible, just added the additional "is this the null surface" check
<dj-death> gfxstrand: maybe we never want to use A64 with descriptor buffers?
<gfxstrand> dj-death: Oh, right...
<gfxstrand> dj-death: I think we have to for things like 64-bit atomics
<gfxstrand> Unless those got surface messages when I wasn't looking
<heftig> does anyone know whether might get into 23.0.1?
<zmike> anholt_: okay, deeper investigating reveals this might be a cts runner issue and not deqp-runner at all; sorry for the noise
sghuge has quit [Remote host closed the connection]
sghuge has joined #dri-devel
ice99 has joined #dri-devel
smiles_1111 has quit [Ping timeout: 480 seconds]
ice9 has quit [Read error: Connection reset by peer]
<binhani> tlwoerner: I was informed that you might be involved with GSoC related projects. If that's true, would you guide me to what projects are currently available for this GSoC
alyssa has joined #dri-devel
<alyssa> embarrassing how quickly my mesa ci appreciation report is filling up
<alyssa> everyone else is welcome to add to it too, right now it looks like it's just me who keeps trying to merge broken code and getting told off by Marge
* alyssa sweats
<robclark> anholt_: re: deqp-runner, a --give-up-after-this-many-failures=N type thing might be useful.. since I've seen CI runtimes get really long when an MR is more broken than expected
<anholt_> robclark: hard to use for CI, though. Think about when we uprev the cts, and lots of new fails show up and you need to categorize them.
<anholt_> taking really long should usually be limited by the job timeouts, which should be set appropriately already (but may not be in all cases)
<robclark> hmm, that is kinda a special case, and I guess you could just push a hack to disable that option when shaking things out
<anholt_> and job timeouts are much better at getting at the thing you're concerned about, though they mean that you don't get the results artifacts.
<robclark> I guess timeouts need to be a bit conservative because they can indicate unrelated issues
<robclark> anyways, just an idea..
<daniels> yeah I was thinking about --give-up-after-n-fails and I think it is good - you're right to say that it's painful when you're doing uprevs, but you can just enable it for marge jobs and not user/full jobs
<robclark> yeah, marge vs !marge would work
<daniels> there's a bit of a tension with gitlab-side job timeouts, as we do want to allow those to be longer so in case of machine issues (won't boot, randomly died, network stopped networking, etc) we can retry the test runs without losing our DUT reservation
<daniels> but just giving up early on deqp if things are really really broken and saying 'errr idk try it yourself perhaps' is definitely helpful
<anholt_> daniels: yeah, that's why for bare-metal I've got the TEST_PHASE_TIMEOUT
<daniels> hmm, I'm sure I've seen a630 blow through like 45 minutes on jobs which just crash every single test
<anholt_> though it ends up being real embarrassing when the reason you keep rebooting to retry your job is that there was a minute of testing left and you decided that the board must be hosed.
<daniels> we do have that on LAVA as well, but I've had to keep creeping it up because runtimes keep creeping up and then you start introducing false fails
<daniels> yeah
<anholt_> we've been getting pretty sloppy on keeping actual test phase runtimes down
<alyssa> there's an interesting chicken/egg question here ... should you only assign an MR to Marge if you're really really sure it's going to pass (probably yes), and if so, should you do a manual pipeline before assigning something to MR (unsure)?
<alyssa> (re: more broken than expect)
<anholt_> my opinion is yes, you shouldn't hand something to marge unless you've got a recent green pipeline.
<alyssa> OK
<daniels> anholt_: tbf part of that is our measurements being totally shot due to rubbish servo UART, but gallo is working on SSH execution; we've got a working PoC
<alyssa> Of the entire pipeline or just the relevant subset?
ice99 has quit [Ping timeout: 480 seconds]
<daniels> I don't mind passing stuff to Marge that definitely looks like it should be OK and not super risky
<anholt_> daniels: what's the plan for getting kernel messages while also doing ssh execution?
<daniels> anholt_: still snoop UART for kmsg, but use SSH to drive the actual tests
<anholt_> interesting
<anholt_> sounds like something that might have sharp corners, but good luck! would be lovely to have the boards more reliable
<robclark> alyssa: I've defn done the misplaced ! or similar where I expected a green pipeline but instead broke the world
* anholt_ wonders if this could include a heartbeat in kmsg so we know when uart dies
<robclark> fwiw console-ram-oops is useful for getting dmesg.. but after the DUT does warm reboot so not sure how to usefully slot that into CI as uart replacement (for kernel msgs)
<daniels> anholt_: hmm right, you mean just so we can emit 'btw uart died so you might be missing any oops'?
<anholt_> that's what I was thinking
<daniels> robclark: the problem here is that we're relying on UART to actually drive the testing machinery, so if UART goes off a cliff (which all servo-v4 seems to do), then we no longer know what deqp's doing, so we just time out and kill it
<robclark> yeah
srslypascal is now known as Guest8446
srslypascal has joined #dri-devel
<dj-death> gfxstrand: thanks, I always forget about 64bit atomics
Guest8446 has quit [Read error: Connection reset by peer]
<alyssa> robclark: yeah.. I think there's a social issue around the expectations on CI and on developers using CI, i'm unsure how we want to resolve it... When you don't have any CI you're (ostensibly) more likely to do your own deqp runs ahead-of-time before `git push`ing crap... When you do have CI it's really easy to say "well, if CI is happy so am I" and assign to marge plausibly looking code.
<alyssa> this isn't CI's fault, but I think there might be some mismatched expectations
<gfxstrand> dj-death: Sorry
<gfxstrand> dj-death: The good news is that it's not a common case so if the code is a bit horrible it's not the end of the world.
<robclark> alyssa: CI is useful in that it can run CTS across more devices than I could manually in a reasonable amount of time.. I just try to (a) if I expect some trial/error, do it at a time when marge isn't busy (or at least has MRs in the queue that look like they won't be competing for the same runners, and (b) if I realize I broke the world, cancel the job to free up runners
<alyssa> robclark: sure, the "across more devices" is big for me, I don't even *have* all the panfrost hardware we run in CI lol
<alyssa> fwiw - i'm not trying to be argumentative here. it's just that I don't think we've communicated/documented a clear expectation for what Marge workflows should look like for developers.
<alyssa> (case in point: I only learned about the "run manual jobs satisfying regex" script, like, last week)
<daniels> we tried to make it as well documented as the rest of Mesa :P
<MrCooper> alyssa: I think your Marge appreciation issue helps counter-act the human mind's tendency to focus on the negative, thanks for that
<daniels> (more seriously, the docs do need updating, yeah)
srslypascal has quit [Ping timeout: 480 seconds]
<alyssa> MrCooper: that's the idea :)
mbrost has quit [Remote host closed the connection]
mbrost has joined #dri-devel
<alyssa> MrCooper: also humbling as hell, because these days I try not to assign anything to Marge that hasn't been appropriately reviewed and that I'm not reasonable sure is correct
<alyssa> and yet, still manage to fill up that thread pretty quickly o_o
<robclark> alyssa: I think it is pretty normal to miss/overlook things.. think of CI as `Reviewed-by: GPU` ;-)
pac85 has quit [Quit: Konversation terminated!]
mbrost has quit [Remote host closed the connection]
<MrCooper> that's what we have the CI for :)
gouchi has joined #dri-devel
mbrost has joined #dri-devel
<robclark> yup
tobiasjakobi has joined #dri-devel
mbrost has quit [Ping timeout: 480 seconds]
tobiasjakobi has quit [Ping timeout: 480 seconds]
tonyk92 is now known as tonyk
bluebugs has quit [Read error: Connection reset by peer]
Guest8355 is now known as ybogdano
Guest8451 has joined #dri-devel
godvino has quit [Quit: WeeChat 3.6]
lynxeye has quit [Quit: Leaving.]
<alyssa> + * Copyright 208 Alyssa Rosenzweig
<alyssa> damn i must be old
<airlied> or parallel universe you
<FLHerne> or the code travelled back from the future, where they've reset the year numbering
<kisak> Should I ask where the other 207 Alyssas went?
<kisak> nevermind, I don't want to know the answer
<ccr> alyssa, sounds like seriously legacy code :P and you must've invented copyright!
<alyssa> I've messed with time before and there haven't been any noticeable consequences!
<ccr> or so it would seem ... * glances around *
<alyssa> lina: asahi spdx conversion MR up
<alyssa> we'll see what happens I guess
<qyliss> . o O ( does this mean the code is accidentally public domain )
<alyssa> qyliss: seems leigt
<alyssa> legit
<alyssa> the fact there's still code in Mesa that I wrote in high school amuses me greatly
<psykose> it just means it's really good
<alyssa> think
<alyssa> if i could go back in time i'd tell my high school self to use genmxl
<alyssa> genxml
<psykose> i can think of far better things to tell my high school self
<alyssa> oh, i mean. same.
tzimmermann has quit [Quit: Leaving]
rmckeever has joined #dri-devel
bluebugs has joined #dri-devel
<alyssa> admittedly the initial checkin of asahi was pretty bad and that was genxml
<alyssa> granted that was a driver merged after barely more than 4 months since getting my hands on the hardware, with no hw docs or reference code, while in school and doing panfrost
<alyssa> so I guess I can forgive myself for hardcoding some things :p
Zopolis4 has joined #dri-devel
<dj-death> gfxstrand: ah, chasing what I thought was a compiler bug for a while
<dj-death> gfxstrand: but apparently you can set nullDescriptor=true and robustBufferAccess=false
<dj-death> looks like we need the internal NIR robust handling to implement null descriptors support with global loads
<alyssa> womp womp
<gfxstrand> dj-death: Oh, that's entertaining. :-/
alyssa has quit [Quit: leaving]
Guest8451 has quit [Ping timeout: 480 seconds]
ngcortes has joined #dri-devel
mohamexiety has quit []
yogesh_m1 has quit [Ping timeout: 480 seconds]
icmroortideotm^ has joined #dri-devel
ybogdano is now known as Guest8468
ybogdano has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
pcercuei has quit [Quit: dodo]
heat has joined #dri-devel
heat_ has quit [Remote host closed the connection]
flto has quit [Ping timeout: 480 seconds]
danvet has quit [Ping timeout: 480 seconds]
yogesh_m1 has joined #dri-devel
Haaninjo has quit [Quit: Ex-Chat]
mbrost has joined #dri-devel
vliaskov has quit [Remote host closed the connection]
vliaskov has joined #dri-devel
bgs has quit [Remote host closed the connection]
gouchi has quit [Remote host closed the connection]
rasterman has quit [Quit: Gettin' stinky!]
fab has quit [Quit: fab]
rsalvaterra has quit []
mbrost has quit [Remote host closed the connection]
mbrost has joined #dri-devel
rsalvaterra has joined #dri-devel
fxkamd has joined #dri-devel
ice9 has joined #dri-devel
ice9 has quit [Remote host closed the connection]
<mareko> zmike: the shader of the glcts test is: ; The problem is that gl_Position is written by VS but not read by TCS, which causes the linker to eliminate the gl_Position write, which makes all uniforms inactive, but the test expects all of them to be active, which is incorrect
<mareko> *all VS uniforms inactive
<zmike> mareko: ok, should be an easy fix then
Zopolis4 has quit []
flto has joined #dri-devel
smiles_1111 has joined #dri-devel
<mareko> zmike: In theory, what the test is doing is setting an output value based on a bunch of uniforms. There is an optional optimization in my plans ("uniform expression propagation") to eliminate that output by moving the whole uniform expression into the next shader (if the expression doesn't source any phis, though in theory we could move whole branches into the next shader). That will ruin the test even
<mareko> if the dead gl_Position write is fixed.
Company has quit [Quit: Leaving]
<mareko> tarceri: do you think it's feasible to do this at the end of gl_nir_link_varyings, and it's about outputs storing a value from load_ubo: move a UBO load from one shader to another as an optimization, i.e. copying the UBO declaration to the next shader stage such that st/mesa will correctly bind the UBO in both shader stages automatically?