<MrCooper>
if you have no other way to test, manually triggering just the needed jobs seems fair
<alyssa>
MrCooper: every gl & vk job is needed
sassefa has quit []
<zmike>
I've been complaining about this for months because there is no way
<zmike>
you just have to mash on the play buttons until it starts
sassefa has joined #dri-devel
<alyssa>
and.. people do that...?
<zmike>
no other option
<zmike>
pressing play on the top of the build job columns is usually enough
<zmike>
but you have to do it in stages as it completes
<zmike>
which is supremely annoying
<alyssa>
ummmmm... ok..
<alyssa>
i mean the other option is to stop working on common code, which i did for a while, but i'd hoped that 7 months later things would be working again
<alyssa>
i see.
feaneron has joined #dri-devel
kts has joined #dri-devel
heat has joined #dri-devel
kzd has joined #dri-devel
<mattst88>
is "start CI jobs" not a thing you can do from the gitlab REST API?
Hazematman has joined #dri-devel
<Hazematman>
Can you not use the `./bin/ci/ci_run_n_monitor.py` script with a regex to match all the gl & vk tests?
<Hazematman>
That's what i've been doing to trigger specific jobs, without fiddling with the gitlab UI
<alyssa>
Hazematman: it would be .* regex since it's touching all the drivers
<alyssa>
but apparently this is considered "abusive"
<zmike>
.* doesn't work
<zmike>
or at least it has never worked any of the times I've tried it
nerdopolis has quit [Ping timeout: 480 seconds]
<alyssa>
that too (:
<Hazematman>
zmike: It does, if you do the `--force-manual` or whatever the option is. But the use case of testing against all of CI for exploratory changes seems like a good one. It would be nice if there were a way to do that without being "abusive" for hard-to-test changes
<zmike>
why would I need --force-manual to trigger jobs that are not manual?
<zmike>
yeah there have been a lot of tickets about it
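For reference, a hedged sketch of the invocation being discussed; the --target flag name is from memory of Mesa's bin/ci/ci_run_n_monitor.py (check its --help), and the script also speaks to mattst88's question, since under the hood it plays jobs through the GitLab REST API with a personal access token rather than the web UI:

    ./bin/ci/ci_run_n_monitor.py --target ".*" --force-manual

Whether a bare ".*" target actually works (or should be allowed) is exactly the disagreement above.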
diolara^ has quit [Remote host closed the connection]
krushia has joined #dri-devel
illwieckz has quit [Ping timeout: 480 seconds]
illwieckz has joined #dri-devel
Calandracas_ has joined #dri-devel
<zmike>
alyssa: I assume panfrost passes dEQP-EGL.functional.partial_update.render_outside_damage_region ?
nerdopolis has joined #dri-devel
Calandracas_ has quit []
Calandracas_ has joined #dri-devel
Calandracas has quit [Ping timeout: 480 seconds]
<alyssa>
zmike: dunno, I haven't booted a Mali device in a while
<zmike>
oh I figured you'd just know
feaneron has quit [Quit: feaneron]
nerdopolis has quit [Ping timeout: 480 seconds]
sassefa has quit []
feaneron has joined #dri-devel
bolson has quit [Remote host closed the connection]
bolson has joined #dri-devel
tobiasjakobi has joined #dri-devel
tobiasjakobi has quit []
LeviYun has quit [Ping timeout: 480 seconds]
<alyssa>
dEQP-EGL was passing back in the day
<alyssa>
I don't know if those are new tests or if something's regressed since
<zmike>
I only asked because there are not many drivers in mesa that support KHR_partial_update and panfrost is one of them
<alyssa>
ah
hansg has joined #dri-devel
<alyssa>
i don't think that ext does anything on panfrost on hw newer than mali-t860
Duke`` has joined #dri-devel
<MrCooper>
alyssa: only running jobs which aren't actually needed is abusive
<zmike>
MrCooper: you probably know this - does EGL_KHR_partial_update include clear operations in the damage region?
<zmike>
or only draw commands
<zmike>
the spec refers only to "client api rendering" which can be ambiguous
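For readers following along, a minimal sketch of the entry point in question, assuming an initialized display/surface and that eglSetDamageRegionKHR has been looked up via eglGetProcAddress; the helper name here is made up, and whether clears count as "client api rendering" inside the region is exactly the ambiguity zmike is pointing at:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    /* Hypothetical helper: EGL_KHR_partial_update lets the app declare the
     * region this frame will touch, before any rendering to the surface. */
    static void
    begin_partial_frame(EGLDisplay dpy, EGLSurface surf,
                        PFNEGLSETDAMAGEREGIONKHRPROC set_damage_region_khr,
                        EGLint x, EGLint y, EGLint w, EGLint h)
    {
       EGLint rect[4] = { x, y, w, h };
       set_damage_region_khr(dpy, surf, rect, 1);
       /* ... draws (and clears?) limited to the rect, then eglSwapBuffers ... */
    }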
<mattst88>
do we have anything that cleans ~/.cache/mesa_shader_cache? I just realized mine was 16G.
<alyssa>
MrCooper: sorry, I'm struggling to parse that, could you rephrase? thanks
<zmike>
mattst88: rm -r ?
<MrCooper>
alyssa: it's not abusive if you actually need all those jobs
<alyssa>
ah, yeah.
<MrCooper>
what's abusive is blindly running all jobs when only some subset is needed
<alyssa>
the issue as always is that I don't actually need the -full jobs, just the set Marge would run, but there's not a way to separate them
<MrCooper>
zmike: not sure offhand
<alyssa>
mattst88: this doesn't directly answer your question, but if you aren't already - consider setting MESA_SHADER_CACHE_DIR=/dev/shm for CTS runs
<mattst88>
zmike: yeah, that's what I did -- but it'd be nice if we had a program that could run occasionally and clean files that are older than $date or something
<alyssa>
should be faster, avoids polluting the cache, and saves your SSD some write cycles
<alyssa>
I think CI does something similar
<mattst88>
ccache for example allows setting a limit on the cache size and then when it's reached, deletes files to keep the cache size-limited
<mattst88>
alyssa: that's a good idea. thanks
<MrCooper>
mattst88: the cache is supposed to be pruned to 1G by default IIRC, so that sounds like something might have gone wrong there
<mattst88>
MrCooper: hm, okay. thanks
<alyssa>
(Previously I had disabled the shader cache for CTS, but it's faster to enable it and back it with RAM. At least for GL.)
<mattst88>
alyssa: presumably because you actually get lots of cache hits running CTS?
<alyssa>
Yeah
<mattst88>
It'd be cool to have some cache stats like `ccache -s` gives you
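For anyone wanting to act on the above, a hedged sketch; the variable names are as I recall them from Mesa's environment-variable docs, and the find invocation is only illustrative, not an official pruning tool:

    # back the shader cache with RAM for CTS runs (alyssa's suggestion)
    export MESA_SHADER_CACHE_DIR=/dev/shm
    # the on-disk cache is supposed to respect a size cap (1G by default)
    export MESA_SHADER_CACHE_MAX_SIZE=1G
    # crude manual prune of entries not accessed for 30+ days
    find ~/.cache/mesa_shader_cache -type f -atime +30 -delete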
bolson_ has joined #dri-devel
tzimmermann has quit [Quit: Leaving]
bolson has quit [Ping timeout: 480 seconds]
LeviYun has joined #dri-devel
feaneron has quit [Read error: Connection reset by peer]
illwieckz has quit [Ping timeout: 480 seconds]
LeviYun has quit [Ping timeout: 480 seconds]
warpme has quit []
LeviYun has joined #dri-devel
<jenatali>
alyssa: Thanks for tackling the DXIL change in that MR. I expected you were going to ping me to do it :P
leizhou has quit [Remote host closed the connection]
leizhou has joined #dri-devel
LeviYun has quit [Ping timeout: 480 seconds]
ManMower has quit [Ping timeout: 480 seconds]
testaccount has joined #dri-devel
f_ is now known as F_
F_ is now known as FunDerScore
FunDerScore is now known as funderscore
funderscore is now known as f_
testaccount has quit []
u-amarsh04 has quit [Read error: Connection reset by peer]
ManMower has joined #dri-devel
jsa has quit [Ping timeout: 480 seconds]
abhinav__ has joined #dri-devel
vedranm_ is now known as vedranm
warpme has joined #dri-devel
<gfxstrand>
Lynne: Typing it this week
warpme has quit []
sassefa has joined #dri-devel
warpme has joined #dri-devel
Duke`` has quit []
warpme has quit []
LeviYun has joined #dri-devel
<alyssa>
jenatali: I expect you to debug it ;P
<ccr>
"no, mr. bond .. I expect you to debug."
LeviYun has quit [Max SendQ exceeded]
illwieckz has joined #dri-devel
feaneron has joined #dri-devel
testaccount has joined #dri-devel
u-amarsh04 has joined #dri-devel
tobiasjakobi has joined #dri-devel
testaccount has quit []
tobiasjakobi has quit []
glennk has quit [Remote host closed the connection]
glennk has joined #dri-devel
monkey12345[m] has joined #dri-devel
bmodem has quit [Ping timeout: 480 seconds]
coldfeet has joined #dri-devel
lynxeye has quit [Quit: Leaving.]
monkey12345[m] has left #dri-devel [#dri-devel]
kts has quit [Ping timeout: 480 seconds]
LeviYun has joined #dri-devel
<Lynne>
gfxstrand: amazing
<Lynne>
I remember you were saying it would take an enormous amount of hacks to implement, did you find a way around it?
<alyssa>
Lynne: she is embracing the hacks ;)
feaneron has quit []
feaneron has joined #dri-devel
feaneron has quit [Read error: Connection reset by peer]
<Lynne>
for sane descriptor handling, as me and a certain chair would say, it's all worth it
feaneron has joined #dri-devel
feaneron has quit [Read error: Connection reset by peer]
feaneron has joined #dri-devel
feaneron has quit [Read error: Connection reset by peer]
feaneron_ has joined #dri-devel
gouchi has joined #dri-devel
sassefa has quit []
sassefa has joined #dri-devel
nerdopolis has joined #dri-devel
Haaninjo has joined #dri-devel
LeviYun has quit [Ping timeout: 480 seconds]
Calandracas has joined #dri-devel
coldfeet has quit [Remote host closed the connection]
LeviYun has joined #dri-devel
feaneron_ has quit [Read error: Connection reset by peer]
feaneron has joined #dri-devel
nerdopolis has quit [Ping timeout: 480 seconds]
Calandracas__ has joined #dri-devel
Calandracas_ has quit [Ping timeout: 480 seconds]
LeviYun has quit [Ping timeout: 480 seconds]
Calandracas has quit [Ping timeout: 480 seconds]
<alyssa>
Lynne: I am not sure I agree that edb is sane descriptor handling..
nerdopolis has joined #dri-devel
<Lynne>
do you subscribe to the religion of d3d12-style descriptor heaps?
<austriancoder>
alyssa: done
<alyssa>
Lynne: I mean. At least I *understand* heaps ;)
<alyssa>
austriancoder: thanks!
<Lynne>
I don't think heaps are simpler at all...
<Lynne>
what's simpler than a buffer which you map and just set descriptors into?
<jenatali>
It's just an opaque version of the same?
<Sachiel>
it's not so simple when the hw doesn't work that way
<alyssa>
admittedly i haven't taken the time to understand EDB
<alyssa>
but the idea of having multiple descriptor buffers instead of just 1 heap is a sticking point for me
<alyssa>
heaps are just easier to reason about for me
<jenatali>
Yeah. D3D's got one heap because some hardware can only have one
<alyssa>
honeykrisp is 100% heaps internally, even though the hardware is sometimes more flexible than that
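To make the comparison concrete, a minimal sketch of the VK_EXT_descriptor_buffer flow Lynne is describing (not any driver's internals): it assumes the extension is enabled, vkGetDescriptorEXT has been loaded with vkGetDeviceProcAddr, and a host-mapped VkBuffer with descriptor-buffer usage is already in hand; the helper name is made up and binding via vkCmdBindDescriptorBuffersEXT is omitted:

    #include <vulkan/vulkan.h>

    /* Hypothetical helper: write one uniform-buffer descriptor straight into a
     * mapped descriptor buffer; this is the "just map and set" part. */
    static void
    write_ubo_descriptor(VkDevice dev,
                         const VkPhysicalDeviceDescriptorBufferPropertiesEXT *props,
                         PFN_vkGetDescriptorEXT get_descriptor,
                         char *mapped_desc_buf, VkDeviceSize offset,
                         VkDeviceAddress ubo_addr, VkDeviceSize ubo_range)
    {
       VkDescriptorAddressInfoEXT addr = {
          .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_ADDRESS_INFO_EXT,
          .address = ubo_addr,
          .range = ubo_range,
       };
       VkDescriptorGetInfoEXT info = {
          .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_GET_INFO_EXT,
          .type = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
          .data.pUniformBuffer = &addr,
       };
       /* The driver emits an opaque, implementation-sized blob. */
       get_descriptor(dev, &info, props->uniformBufferDescriptorSize,
                      mapped_desc_buf + offset);
    }

The D3D12 equivalent would be ID3D12Device::CreateConstantBufferView into a slot of a shader-visible descriptor heap, i.e. the same write hidden behind an opaque handle, which is jenatali's point.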
LeviYun has joined #dri-devel
<dj-death>
alyssa: how big's your heap?
<gfxstrand>
EDB is sane if you're AMD
<jenatali>
Right
<alyssa>
dj-death: which one
<dj-death>
alyssa: okay :)
<dj-death>
alyssa: so what heaps do you have and what sizes? ;)
<alyssa>
tiny sampler heap, hardware
<alyssa>
massive texture heap, arbitrary size (well up to 4GiB i guess), these are closer to buffers in hardware but we just use one as a global heap
<dj-death>
okay
<alyssa>
there's no structured buffer hardware so that's whatever software wants
<alyssa>
Huh
<alyssa>
I guess I do subscribe to the heap religion
<dj-death>
so all buffer accesses are done with global load/store?
<DavidHeidelberg>
karolherbst: HW clear_buffer is giving a small speedup, 130 -> 120ms per token on Intel TGL. Not as cool as on freedreno, though still nice (+ accounting for the fact I have an i7-1185G7 @ 3GHz). also CPU usage decreases, but it's 1 core 50% -> 25-30%...
<alyssa>
dj-death: """interesting"""
blaztinn has quit [Remote host closed the connection]
<alyssa>
i think you mean i've set a new record for running dxvk on garbage hardware ;p
warpme has quit []
blaztinn has joined #dri-devel
<karolherbst>
DavidHeidelberg: yeah... anything which does stuff on the CPU will tank perf, and I'm already thinking of ways to mitigate all of this, even considering not using those callbacks at all... (e.g. going through a temporary buffer first or something)
warpme has joined #dri-devel
davispuh has joined #dri-devel
<dj-death>
alyssa: I don't know
<karolherbst>
I was also considering adding some interface so that drivers can tell what's blocking and what's not
<karolherbst>
but I think threaded context is also doing things like that? not quite sure
<dj-death>
alyssa: define "garbage" ;)
<dj-death>
alyssa: I think I prefer less capable than slightly broken in 536 different ways
<karolherbst>
nvidia also doesn't have bounds checks on buffers :P though they do exist for other things generally :D
<alyssa>
dj-death: well.. this is a gles3.1 part that i'm trying to run dx12 on....
<alyssa>
and it's the pipelineist hw in the industry and I did ESO on it....
warpme has quit [Ping timeout: 480 seconds]
frieder has quit [Remote host closed the connection]
nerdopolis has quit [Ping timeout: 480 seconds]
leizhou has quit [Remote host closed the connection]
leizhou has joined #dri-devel
warpme has joined #dri-devel
hansg has quit [Quit: Leaving]
mvlad has quit [Remote host closed the connection]
Karyon has quit [Remote host closed the connection]
nerdopolis has joined #dri-devel
feaneron has quit [Quit: feaneron]
<zmike>
karolherbst: do you mean is_resource_busy ?
feaneron has joined #dri-devel
sassefa has quit []
<karolherbst>
no, like.. some operations are more or less blocking by default on some drivers, e.g. texture_subdata. you can e.g. have a temporary resource you do subdata on and then do resource_copy to the actual resource instead, so you don't risk waiting on the resource on the CPU side for the subdata
<karolherbst>
or rather, that's what I'm considering doing
<karolherbst>
but I'd rather not
<karolherbst>
maybe I should write shaders for all those ops :D
warpme has quit [Ping timeout: 480 seconds]
<karolherbst>
but yeah.. maybe is_resource_busy would help to only do something weird if it's really needed
<zmike>
yeah you're talking about the staging buffer with unmap sync dance
<zmike>
if you mean doing it truly async, i.e., using a separate context and threads, tc doesn't do that for textures
<karolherbst>
yeah... I'll have to check whether threaded_context even makes sense for what I'm doing, or if it's better to do it all manually
nerdopolis has quit [Ping timeout: 480 seconds]
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
gouchi has quit [Remote host closed the connection]
<zmike>
tc mostly just does stuff for buffer invalidation
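A rough sketch of the staging dance being described above (illustrative only, not karolherbst's actual code, and the helper name is made up): write into a fresh staging resource, then let the GPU copy into the possibly-busy destination, so the CPU never waits on it:

    #include "pipe/p_context.h"
    #include "pipe/p_screen.h"
    #include "pipe/p_state.h"
    #include "util/u_box.h"
    #include "util/u_inlines.h"

    /* Hypothetical helper for the buffer case. */
    static void
    upload_without_stall(struct pipe_context *pipe, struct pipe_resource *dst,
                         unsigned offset, unsigned size, const void *data)
    {
       /* Staging buffer the size of the upload, same target/format as dst. */
       struct pipe_resource templ = *dst;
       templ.usage = PIPE_USAGE_STAGING;
       templ.width0 = size;
       struct pipe_resource *staging =
          pipe->screen->resource_create(pipe->screen, &templ);

       /* subdata on an idle, CPU-visible resource should not block */
       pipe->buffer_subdata(pipe, staging, PIPE_MAP_WRITE, 0, size, data);

       /* GPU-timeline copy into the (possibly busy) destination */
       struct pipe_box box;
       u_box_1d(0, size, &box);
       pipe->resource_copy_region(pipe, dst, 0, offset, 0, 0, staging, 0, &box);

       pipe_resource_reference(&staging, NULL);
    }

The texture case is the same shape with texture_subdata plus a 3D box; is_resource_busy, as mentioned above, could gate this so the extra copy only happens when the destination is actually in flight.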
nerdopolis has joined #dri-devel
LeviYun has joined #dri-devel
warpme has joined #dri-devel
Jeremy_Rand_Talos_ has quit [Remote host closed the connection]
Jeremy_Rand_Talos_ has joined #dri-devel
guludo has joined #dri-devel
warpme has quit [Ping timeout: 480 seconds]
nerdopolis has quit [Ping timeout: 480 seconds]
LeviYun has quit [Ping timeout: 480 seconds]
leizhou has quit [Remote host closed the connection]
leizhou has joined #dri-devel
LeviYun has joined #dri-devel
nerdopolis has joined #dri-devel
feaneron has joined #dri-devel
LeviYun has quit [Ping timeout: 480 seconds]
karenw has joined #dri-devel
feaneron has quit [Read error: Connection reset by peer]
feaneron has joined #dri-devel
karenw is now known as karenthedorf
karenthedorf is now known as karenw
feaneron has quit [Read error: Connection reset by peer]
feaneron has joined #dri-devel
<Company>
llvmpipe git seems to be very flaky atm
<Company>
are there bigger refactorings going on?
LeviYun has joined #dri-devel
illwieckz has quit [Remote host closed the connection]
LeviYun has quit [Ping timeout: 480 seconds]
LeviYun has joined #dri-devel
leizhou has quit [Remote host closed the connection]
<airlied>
Company: zmike is burning the whole place down
<Company>
so I can just wait for a while until it settles
<Company>
before filing tons of bugs
<Company>
I'm doing a bunch of perf optimizations atm
<Company>
well, trying to
<Company>
and testing them on my rpi and llvmpipe from time to time to see if fps goes up there
<airlied>
probably worth filing one or two; if CI doesn't cover it, I'm not sure he'll catch things
<airlied>
is it just failing to load?
<airlied>
since there isn't much llvmpipe development going on, it's all around the glx/egl/dri bits
LeviYun has quit [Ping timeout: 480 seconds]
<Company>
no, I've had a few random crashes
<Company>
that weren't reproducible
<Company>
and it's complaining about syncs being invalid from time to time
<airlied>
oh those might be worth trying to file
<Company>
those are probably the most evil ones, because if it's syncs it's like texture handoff from gstreamer
karenw has quit [Remote host closed the connection]
karenw has joined #dri-devel
illwieckz has joined #dri-devel
sassefa has joined #dri-devel
<Company>
after having a look, those might be my fault and llvmpipe is the only one finding a race with make_current() (because it's too slow)