<airlied>
zmike: could we rename zink-lvp job to zink-lavapipe, so I can write a regex for the ci run script?
<DavidHeidelberg>
Rather unify all jobs from lvp to lavapipe or otherway :)
<airlied>
DavidHeidelberg: I want all to match on .*l.*pipe :)
<airlied>
because I'm crap at writing regexps that involve two completely different patterns
<DavidHeidelberg>
airlied: ".*(lvp|lavapipe|llvmpipe).*", but I agree that at first sight, lvp doesn't resemble lavapipe that much
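For reference, the alternation pattern above can be checked quickly against some job names (the names below are made up for illustration; the leading/trailing `.*` only matter if the tool anchors the match against the whole name):

```shell
# Hypothetical job names; only the lvp/lavapipe/llvmpipe ones should match.
printf '%s\n' zink-lvp zink-anv-tgl lavapipe-traces llvmpipe-piglit radv-vangogh \
  | grep -E '(lvp|lavapipe|llvmpipe)'
```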
<DavidHeidelberg>
I think the original (and ongoing :/ ) reason for this is the character-limited GitLab UI presentation of job names
<MrCooper>
immibis: core Wayland has always been CSD only, since long before GNOME had any Wayland support
<MrCooper>
SSD is an optional extension
<MrCooper>
which would be difficult to support in mutter
<emersion>
no need to start that discussion again…
<MrCooper>
just setting the record straight, no intention of discussing it further
<tomeu>
anholt: what testing framework would you recommend to use with deqp-runner if I was to write new tests from scratch?
<zmike>
DavidHeidelberg: what is the ci script that starts the xserver instance in ci?
<zmike>
I guess it's the aptly named "init-stage2.sh"
<kisak>
eric_engestrom: morning, https://cgit.freedesktop.org/mesa/mesa/commit/?id=e42c5b86d0f7fccf3c3866b1452309ad65833b4b caught my eye. Around the branchpoint there's a window of time where merge requests are getting merged, but the submitter of the merge request hasn't mentally acknowledged yet that the new release branch exists and should be noted. A nice-to-have extra for Backport-to: would be to also nominate
<kisak>
the commits marked for the N-1 branch onto the new release branch, between the new branchpoint and maybe XX.Y.0 ... but that's probably an annoying timeframe to turn into code.
<DavidHeidelberg>
zmike: or .gitlab-ci/common/start-x.sh
<kisak>
Hypothetical: maybe Backport-to: could accept a + marker meaning everything newer than the given branch, to cover that scenario? Backport-to: 19.2+
<kisak>
(the intent is to cover the common usage of Cc: mesa-stable so that it can be removed without a functional loss)
<eric_engestrom>
kisak: the `+` is a good idea, but actually I don't think there's ever a case where you *don't* want the + behaviour, so I think I'll make it always work like this
<eric_engestrom>
(I'm on holiday today, I'll do that when I'm back)
<kisak>
yeah, no hurry
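The trailer scheme discussed above is easy to prototype with plain text tools; the commit message and version below are invented for illustration, and a real pick script might prefer `git interpret-trailers --parse` over sed:

```shell
# Hypothetical commit message carrying the proposed trailer; extract the
# target release series from it.
msg='vulkan: fix a hypothetical zero-size copy

Backport-to: 23.2'
printf '%s\n' "$msg" | sed -n 's/^Backport-to:[[:space:]]*//p'
```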
<DavidHeidelberg>
XDC reminder: we're organising a small hack-weekend in Barcelona, so far only focused on CI, but we welcome any Mesa3D folks to join :) if anyone wants to join, one room is still available in the accommodation :)
<zmike>
DavidHeidelberg: is there a way to get the xorg log off ci? I'm trying to add some startup logging but none of the prints show up anywhere
<DavidHeidelberg>
what do you mean? Xorg usually logs its loading, if I'm not mistaken
<zmike>
the log isn't preserved in artifacts
<DavidHeidelberg>
yup
<DavidHeidelberg>
mv /Xorg.0.log /results/ or something like that before the job ends should do it, I guess
<DavidHeidelberg>
or just change the path in .gitlab-ci/common/start-x.sh and stage2 to results/Xorg.0.log
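A minimal sketch of that copy step, using a scratch directory to stand in for the real CI paths (the actual log location and artifacts directory in .gitlab-ci are assumptions):

```shell
# Stand-ins for /Xorg.0.log and the job's results/ artifacts dir.
workdir=$(mktemp -d)
touch "$workdir/Xorg.0.log"
mkdir -p "$workdir/results"
# Copy rather than move, so a still-running server can keep appending.
cp "$workdir/Xorg.0.log" "$workdir/results/"
ls "$workdir/results"
```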
<gfxstrand>
I don't know how to manually kick off CI anymore...
<gfxstrand>
How many manual jobs do I have to run?!?
<gfxstrand>
IDK what motivated the recent re-structuring of CI jobs but it's made CI utterly useless for developers trying to run CI on MRs.
<gfxstrand>
To be clear, before all I had to do was kick off x86_64-build_base, arm-build_base, and x86_64-test_base, and I'd get CI. Now I have no clue how to run CI. I keep starting misc jobs but it's not at all clear how to actually kick it off.
<dj-death>
similar feeling here
<zmike>
ci_run_n_monitor.py with a glob ?
<gfxstrand>
Uh, that's a thing?
<dj-death>
that script doesn't work for me
<dj-death>
it creates a pipeline but doesn't start anything
<zmike>
works fine for me and has been since forever
<zmike>
the only caveat is you can't start it before the job is extant
<dj-death>
it has undocumented dependencies too, you have to install packages but you don't know what versions you need
<gfxstrand>
And a LOT of those deps aren't in Fedora
<zmike>
I'm using fedora 🤔
<robclark>
gfxstrand: ci_run_n_monitor.sh does pip stuff to deal with the dependencies
<gfxstrand>
robclark: Oh, okay. That helps
* robclark
was in same boat
<gfxstrand>
Still not a fan of personal access tokens but I guess there's not much to be done about that.
<anholt>
gfxstrand: yeah, I really dislike how CI has been recently changed to remove the ability to just click run on container jobs. I use ci_run_n_monitor all the time, but I don't want to have to pull it out and construct a glob every time when I just want to do a pre-review CI run on someone's MR.
<gfxstrand>
and IDK what I'm even globbing
<gfxstrand>
like --target "zink*" doesn't do anything
<gfxstrand>
--target anv-tgl works
<anholt>
gfxstrand: sorry, regex not glob
<zmike>
zink.* ?
<gfxstrand>
Yeah, --target "zink.*" doesn't work, either
<anholt>
it gives you a link to the pipeline, are there zink jobs in that pipeline?
<gfxstrand>
Wait, what?!? Now everything is cancelled?
<gfxstrand>
Okay, I think I have it all running now
<gfxstrand>
IDK why it sets everything not in the glob to cancelled. That seems like an antifeature
alyssa has joined #dri-devel
<alyssa>
how do I trigger a manual CI pipeline running whatever marge will, but not e.g. nightlies?
<alyssa>
for an open mr
<daniels>
anholt: if anyone uses ci_run_n_monitor on stable branches, the post-container jobs are all on_success, so you need to cancel the others so you don’t cascade job starts down
<gfxstrand>
alyssa: ci_run_n_monitor.sh (not .py, the .sh one does python magic for you)
<gfxstrand>
alyssa: I just learned about this a few hours ago
<alyssa>
....sh?
<alyssa>
i don't see what that fixes
<gfxstrand>
It invokes pythonenv and pip and stuff to make sure you have the dependencies
<alyssa>
that's not the problem
<alyssa>
it's what to pass to it to run the premerge
<gfxstrand>
IDK. I did --target 'anv.*|zink.*|radv.*|a.*_vk' and got a decent selection.
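The glob-vs-regex confusion above is easy to demonstrate: assuming the script matches --target against the whole job name, a shell-style glob like `zink*` (which as a regex means `zin` plus zero-or-more `k`) fails where the regex `zink.*` succeeds. grep's `-x` (whole-line match) mimics that anchoring:

```shell
# -x makes grep require the regex to match the entire line.
echo 'zink-anv-tgl' | grep -qxE 'zink*'  || echo 'zink*  : no match'
echo 'zink-anv-tgl' | grep -qxE 'zink.*' && echo 'zink.* : match'
```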
<dcbaker>
gfxstrand: I'm trying to accelerate the cargo patches so we can have them in 1.3.0. Whether that will branch in time is another question. I personally don't hate the idea of having the wraps in tree, at least until we decide that enough people have a new enough Meson?
<anholt>
gfxstrand: the problem is that that also kicks off the nightly jobs that take forever.
<gfxstrand>
dcbaker: Once it's in a meson version, I'm happy to hard-require that version for NVK.
<dcbaker>
gfxstrand: I think we'll want to have Meson add the cargo dependencies into our artifact tarballs anyway (which it can do), so that no one has to have an active internet connection to build a Meson tarball
<gfxstrand>
That would be neat
<dcbaker>
I reviewed the first part of Xavier's work today, the only thing that was major in it is that we've abstracted the rust crate information a bit since he reworked my patches, so that we can correctly handle building a static lib and a dynamic lib at the same time (we've gone to a rust_abi flag, and made proc-macro its own thing so that we can enforce that you're cross compiling proc-macros the right way)
<dcbaker>
I don't think that will take too long for him to fix
<gfxstrand>
Cool. Yeah, I added notifications to both of his MRs so I saw your comments fly by.
<gfxstrand>
dcbaker: I also need the features PR for proc_macro2
<dcbaker>
Yeah, I have his second series on my todo-list to look at. I'm just sorta neck deep in teaching llvm's build system about pkg-config and meson about said pkg-config...
<gfxstrand>
Oh my...
<gfxstrand>
Good luck! (You're gonna need it...)
<dcbaker>
I've got it working correctly in about 33% of cases I think (although that's not to say that it's in a shape that it could land...)
<dcbaker>
they apparently want to drop llvm-config, and that's a bit of a problem for anyone who wants to consume llvm and isn't using cmake...
<airlied>
the whole linux distro world?
<dcbaker>
lol, yeah
<dcbaker>
among such notable projects: Mesa and PostgreSQL
<gfxstrand>
Isn't that kind-of on them to sort out?
<gfxstrand>
But anyway, I can't use meson's crate support until I have features because proc_macro2 and friends have quite a few of them, some of which I need to be able to turn on for stuff I'm using in NAK.
<gfxstrand>
So it looks like we'll be using wraps for a bit.
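For illustration, a crate pinned as a wrap might look roughly like the sketch below. The directory, URL, filename, and hash are placeholders, not real values, and whether `method = cargo` is available depends on having the Meson cargo support being discussed here:

```ini
# subprojects/proc-macro2.wrap -- placeholder values throughout
[wrap-file]
directory = proc-macro2-<version>
source_url = <crates.io tarball URL>
source_filename = proc-macro2-<version>.tar.gz
source_hash = <sha256 of the tarball>
method = cargo
```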
<dcbaker>
gfxstrand: should they sort that out? yes. Will they sort that out before they break the entire ecosystem and leave me trying to figure out why meson's cmake dependency system doesn't work right in some strange corner cases and pull my hair out for months before writing pkg-config files and then still pulling my hair out for months because there's at least one major version of LLVM that is really hard to use without using cmake? probably
<gfxstrand>
dcbaker: lmao, fair
* dcbaker
remembers when LLVM dropped autotools support and then everyone found out that basically all of Linux was using exclusively autotools and things like symbol versioning didn't work with cmake...
<ccr>
\:D\
<airlied>
yeah cmake is not well tested with multi-LLVM-version + cross-compile setups at all
<alyssa>
I want to preface this saying that I'm good at chaos
<alyssa>
So if I were being paid by a billion dollar company to disrupt an upstream project, an effective strategy would be burning out the top developers until there's nobody left to improve things
<alyssa>
Slow people down, frustrate people, argue back when they protest, until finally one by one they leave "on their own terms" because they realize that there's no point to staying
<alyssa>
But of course, doing that would meet resistance. The way to succeed would be coating it in the name of progress. Instead of "think of the children", be able to push back any protest with a "think of our users"
<alyssa>
Since the project presumably values correctness & their users, an effective way is to target testing.
<alyssa>
Nobody is allowed to say no to more testing, right? think of the poor users, lest bugs happen
<alyssa>
So with the financial backing, I would target testing. Make it terrible, make the testing so terrible that nobody can get their work done. Make it bad enough to give stomach aches to the top devs.
<alyssa>
And I'd make it mandatory, so that anybody who dares bypass the shibboleth is threatened.
<alyssa>
There would be no consequences for me breaking the developers. but there would be consequences for breaking the code
<alyssa>
I would have testing that doesn't work and that I know doesn't work, but that looks plausible. and if anyone protested, I would argue back until I win by default, because I'm being paid to fight this and they're being paid to do productive work and so I can shout louder and longer than they can.
<alyssa>
But here's the kicker.
<alyssa>
You don't need a bad actor.
<alyssa>
You don't need malice.
<alyssa>
You don't need to be trying to disrupt a project
<alyssa>
You don't need to be trying to burn people out.
<alyssa>
You can be well-intentioned but as long as you disregard the externalities -- disregard the harms you're doing to developers that are only incidental to your ostensibly good goal -- what you're doing is hard to distinguish from the bad actor
<alyssa>
I'm told that things are getting better. The reality is that every time I come back to upstream mesa, CI is somehow in worse shape than it was last time.
<alyssa>
to the point where I can't do my job
<alyssa>
to the point where I'm forced to fork or switch to working on other projects instead
<alyssa>
and I'm not the only one
<alyssa>
I have no real power here
<alyssa>
I can't stop what's happened to mesa
<alyssa>
I can't get the project back
<alyssa>
I know that -- my health being what it is these days -- when you come angrily replying to me, that you'll be able to type a response longer and larger than anything I can, and that I will be too exhausted to reply in kind, and you'll win by default
<alyssa>
and you'll win
<alyssa>
and mesa will lose.
<anholt>
you do, in fact, have real power here. you can write MRs and review MRs related to CI. I agree that there's a problem, and in my view most of the problem comes from having a group of CI developers who are not driver developers. Back in my day we had Mesa testing being driven by Mesa developers, but driver developers quit doing that work because it was hard and no fun. But we needed testing. So people got hired to do that work, except
<anholt>
that they don't see the problems it causes to developers because they're just trying to do their jobs which is not driver dev.
<alyssa>
It's frustrating to see so many talks at XDC this year talking about how great mesa ci is, and if only we would expand coverage further
<alyssa>
but the reality is that the current state is worse than it was last year
<alyssa>
and I'm out
<alyssa>
I'm sorry but I can't do this anymore
<anholt>
I am also really grumpy at the state of CI. I'm on calls weekly complaining about the situation. I participate in MRs and poke holes in how it's going to break driver dev. But I wish I didn't feel alone in that.
<anholt>
s/weekly/biweekly/
alyssa has quit [Quit: Good luck and good night]
<zmike>
ci is definitely better now than it was a month or two ago when every job was failing
<Company>
if you wanted to force things, you could just agree to work on a mesa-next or mesa-staging branch where all the code goes that doesn't pass (enough of) CI yet
<Company>
that's kinda what happens when stuff gets too big - like, Linux and Mozilla have those release branches that feed from whatever the -next branch is
alyssa has joined #dri-devel
<alyssa>
Company: that's effectively where i'm at
<Company>
the tricky part is that you need people who actually do the release engineering and merging things from -next into -release
<anholt>
Company: we don't have releng, though. we can barely get releases out the door as is, where releasing is theoretically just wait a while to catch any remaining regressions and then make a tarball.
<dcbaker>
and CI actually is a big contributor to slow release process
<dcbaker>
I don't get tagged to pull patches that turn off known dead machines
<dcbaker>
I don't get tagged to turn them back on
<dcbaker>
Some tests don't run and it's not clear if they're being disabled by design or if there's something wrong
<dcbaker>
patches get tagged that apply cleanly but cause regressions, and then the maintainer has to figure out who to ask, or try to figure out if there's something else (say in the original series) that is needed
<dcbaker>
I can't speak for eric_engestrom, but CI turnaround is long and I often pull a bunch of patches say first thing in the morning, do a local build test, and send them to CI, then get into something more interesting/pressing and don't get back to looking at those CI results for 4 hours
<anholt>
dcbaker: I agree, current "CI is on fire" issue is hour-long pipelines. we were supposed to be holding ourselves to "10ish minute turnaround on HW jobs for the whole capacity of a farm", but everyone's slipped on how long 10ish is, plus automatic retries were added instead of bottoming out instabilities, then people added automatic retries on the automatic retries, and that plus higher overall load on the farms from more users (more mesa
<anholt>
devs, plus DRM CI, plus the --stress tool etc.) means that we need to crank down our usage.
<alyssa>
i recall being told recently that 20min is acceptably close to 10min and, no.
<anholt>
alyssa: /o\
<alyssa>
it's the externalities that get me though
<dj-death>
and cts grows fast too :(
<alyssa>
and i guess me working on common code is what burned me fastest.
<alyssa>
because i got to run thru everyone's ci and wow
<anholt>
alyssa: common code also was awful pre-CI, because you instead got to wait for intel and amd and etc. to manually run your code for you (or have a room full of machines you ran it on yourself), then also remote-debug with someone when you landed regressions anyway and the release was blocked on you.
<alyssa>
yeah, fair. no winning there.
<dcbaker>
and I'll be fair, it is nice that we have a lot less regressions in stable branches, but if I could ask for one thing from CI it would be to bring down the runtime, and to have a better way to tag CI stuff that needs to be pulled back to stable branches
<Company>
alyssa: I remember watching a talk a while ago of some driver guys and loving the fact that they got new features in common stuff enabled - so I would guess it's better than before?
<anholt>
dcbaker: are you watching the CI label in gitlab?
<anholt>
(seems like for farm enable/disables and stuff that would be the way)
<alyssa>
tangent, who is responsible for microsoft with jesse OoO
<dcbaker>
No, but I should. Maybe I can update the pick script to look for things labled for CI
<alyssa>
spirv2dxil job failing and I don't even have a windows machine
<zmike>
alatiera in #freedesktop I think
<alyssa>
zmike: it's a real fail from my patch I just can't really debug myself
<zmike>
relatable
<alyssa>
probably preexisting bug
<zmike>
surely
<Company>
that reminds me: is Mesa generating better shader code from GLSL than from spirv?
<Company>
because I have the same shader code essentially, and when I have benchmarks where complex fragment shaders are the bottleneck, Vulkan is the one with lower fps
<Company>
where "Vulkan" means my Vulkan stuff and my GL-over-zink
<anholt>
Company: I would expect equivalent shader code on radv and freedreno. I'm a bit suspicious of the intel compiler for vulkan but don't have any hard evidence.
<anholt>
(radv and turnip vs radeonsi and freedreno)
<Company>
I'm on radv
<Company>
I'm suspicious because I used glslc with -O and that spirv resulted in massively worse shader code
<anholt>
we've got a lot of zink-on-radv vs radeonsi perf hits in our traces collection, but I haven't gone digging into them.
<anholt>
by "worse shader code" you mean shader-db reports from the driver, or something else?
<alyssa>
zmike: i mean it works on everything else and the diff looks right
<bnieuwenhuizen>
anholt: on AMD we have the ACO vs. LLVM thing going on
<alyssa>
d3d12 job passes
<alyssa>
just not spirv2dxil units
<bnieuwenhuizen>
which might actually matter for shader perf
<Company>
I mean I had some dumb ubershader test in my code that I benchmarked that took 2s on radeonsi and 6s on radv
<Company>
and after I removed the -O (for optimize) from glslc it took 3s
<Company>
zink took 3s, too
<pendingchaos>
if no one can look at the spirv2dxil fail soonish, then maybe you could just disable the job
<pendingchaos>
I assume spirv2dxil is a command line tool, anyways?
<pendingchaos>
the comment at the top of spirv2dxil.c says it's for testing
<alyssa>
tempting
<pendingchaos>
maybe it's possible to compile the tool on linux and reproduce the assertion failure
<anholt>
or update the xfails and file an issue?
<pendingchaos>
that's probably a better idea
<airlied>
alyssa: maybe you just didn't realise how bad things were before we had CI
<airlied>
that you think this is worse
<airlied>
like it's bad, but it's in no way worse than the mesa pre-CI
<airlied>
"let me merge my regressions faster because I'm an experienced developer" isn't the argument that will move the needle on this
<airlied>
you probably had a nice time living in drivers which weren't central to the world, but dealing with the core and not regressing one of the major drivers is hard
<airlied>
you either wait for CI or you have to wait for as anholt said approvals and testing from amd, intel, zink, llvmpipe etc
<airlied>
like the run-n-monitor changes are there because people complained CI was overloaded, so instead of letting everyone just click go on all the pipelines, slowing down merges, you do some targeted pre-merge testing