dviola has quit [Read error: Connection reset by peer]
xroumegue has quit [Ping timeout: 480 seconds]
sgruszka has joined #dri-devel
xroumegue has joined #dri-devel
Omax has joined #dri-devel
dviola has joined #dri-devel
kts has quit [Ping timeout: 480 seconds]
kts has joined #dri-devel
flto_ has joined #dri-devel
flto has quit [Ping timeout: 480 seconds]
Aura has quit [Ping timeout: 480 seconds]
cmichael has quit [Quit: Leaving]
cmichael has joined #dri-devel
DemiMarie has joined #dri-devel
<DemiMarie>
Is hardware color management support expected to be conformant to a particular spec, or will those needing hardware-independent behavior need to fall back to shaders?
itoral has quit [Remote host closed the connection]
kts has joined #dri-devel
danylo has joined #dri-devel
bmodem has quit [Ping timeout: 480 seconds]
cmichael has quit [Quit: Leaving]
cmichael has joined #dri-devel
luben has joined #dri-devel
dliviu has joined #dri-devel
alyssa has joined #dri-devel
<alyssa>
how do I run ci?
<alyssa>
the script doesn't work and neither does clicking
<alyssa>
oh, '.*' is the magic
<alyssa>
not ".*" or ".\*" or .\* or '.\*'.
<alyssa>
k.
<alyssa>
nvm
<alyssa>
thx
alyssa has left #dri-devel [#dri-devel]
fab has quit [Quit: fab]
yyds has joined #dri-devel
florida has joined #dri-devel
alyssa has joined #dri-devel
<alyssa>
nope that's not working..
<eric_engestrom>
kusma: for your question on Friday: I would vote for "give people permissions in the group, not the repo", because if we trust them in one project there's no reason to think we can't trust them in the others in that group
<eric_engestrom>
alyssa: what are you trying to do?
<alyssa>
eric_engestrom: run ci
<eric_engestrom>
which jobs?
<alyssa>
all
<alyssa>
whatever marge runs
chloekek has joined #dri-devel
<eric_engestrom>
those are not the same things :P
<alyssa>
i just need to know if the mr will merge or not
<kusma>
eric_engestrom: Yeah, that sounds reasonable to me...
florida has quit []
<eric_engestrom>
alyssa: for "all" -> `--target '.*'`, but for anyone else reading, please ask yourself if you really need everything before consuming everyone's resources :)
<alyssa>
i'm touching common code
<alyssa>
i need, minimally, radv and zink
<eric_engestrom>
yeah I know for you it's valid
<eric_engestrom>
I assume it's that MR you just posted
<alyssa>
yes
<eric_engestrom>
if you want radv & zink -> `--target '.*(radv|zink).*'`
<eric_engestrom>
I mean to be clear, that gives you all the radv and all the zink jobs, not the radv zink jobs
<eric_engestrom>
`--target 'zink-radv-.*'` gives you the radv zink jobs
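For illustration, a minimal sketch of the two filtering behaviours described above, using made-up job names and assuming full-match regex semantics (the script's actual matching rules may differ):

```python
import re

# Hypothetical job names, for illustration only.
jobs = ["radv-navi21-vkcts", "zink-anv-tgl", "zink-radv-navi10", "llvmpipe-piglit"]

def filter_jobs(jobs, target):
    """Keep the jobs whose full name matches the target regex."""
    pattern = re.compile(target)
    return [job for job in jobs if pattern.fullmatch(job)]

print(filter_jobs(jobs, r".*(radv|zink).*"))  # all radv jobs plus all zink jobs
print(filter_jobs(jobs, r"zink-radv-.*"))     # only the zink-on-radv jobs
```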
<alyssa>
that's what I guess i would've wanted
<alyssa>
but I don't want surprises when marge goes
<alyssa>
given how many hours it takes to get results from marge
<eric_engestrom>
yeah for nir changes it makes sense to run everything
* alyssa
regrets touching common code, as usual.
<eric_engestrom>
hehe
<alyssa>
should've just used my nih fix.
<alyssa>
pipeline is stuck
<alyssa>
sanity is success but everything else is just created
<lumag>
v1 had an issue pointed out by Toni, v2 fixed it. We can merge it, but I'd like to have a formal ack from drm maintainers
Net147 has joined #dri-devel
<gfxstrand>
eric_engestrom: We really need a --auto option or something that just emulates Marge.
<gfxstrand>
Or make it do that by default when no --target is provided.
frieder has quit [Remote host closed the connection]
<daniels>
eric_engestrom: my first thought was that --force-manual should be replaced by gating the -full jobs and others that shouldn't be run unless people are trying really hard to run them, but it turns out you can't get job variables through the API :\
<daniels>
so short of just not running '.*-full$' jobs unless explicitly forced, I'm not sure what the best option would be
<eric_engestrom>
gfxstrand: I really don't like the idea of running everything if the user doesn't pick something; I've been working hard enough as it is to reduce our CI resource consumption without giving users an easy (and worse: by default) "run everything, even if I only care about a single one of these"
<eric_engestrom>
as for an option that emulates marge, that's not something that we can realistically do outside of gitlab (we'd have to duplicate all the CI code and it would be broken all the time)
<eric_engestrom>
I have a WIP change that adds `--exclude` with `.*-full` as the default value
<eric_engestrom>
daniels: ^
<gfxstrand>
eric_engestrom: Yeah, I get that. However, that's made the CI substantially harder to use for some of us. I don't actually want ".*" because I don't necessarily want to run the dailies. (Also, last I checked .* doesn't actually work but maybe that's changed. IDK)
<eric_engestrom>
once that lands, I'll delete `--force-manual`, hopefully there won't be any objection anymore
<gfxstrand>
Yeah, if you're just working on one driver, run `--target=.*driver.*` and you're golden. If you're working on common code, there's no obvious way to get coverage and it's a lot of trial-and-error to try and figure out the magic --target to use.
<eric_engestrom>
gfxstrand: re- '.*' not working, that's because of that `--force-manual`; if you always add it you'll get what you expect
<daniels>
eric_engestrom: oh, that's kinda neat
<eric_engestrom>
re- "not the daily jobs", the `--exclude` I'm working on should fix that
<gfxstrand>
Wait, so then .* won't run everything anymore?
<eric_engestrom>
re- "hard to figure out what --target you actually need", I don't see any solution, but if you can think of one I'm happy to try implementing it
<gfxstrand>
--auto
<gfxstrand>
Or --target=.* --exclude=.*-full, I guess
<eric_engestrom>
yeah, once I land `--exclude` with `.*-full` as the default value, you can use `--target '.*'` and not get these jobs anymore
<eric_engestrom>
and `--exclude ''` if you want to exclude nothing
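A sketch of how the proposed `--exclude` could compose with `--target`, again assuming full-match semantics, under which an empty exclude pattern naturally excludes nothing:

```python
import re

def select_jobs(jobs, target, exclude=".*-full"):
    """Jobs whose name fully matches `target` but not `exclude`."""
    keep = re.compile(target)
    drop = re.compile(exclude)
    # With full-match semantics, exclude="" only matches an empty job
    # name, so `--exclude ''` disables the exclusion as described above.
    return [job for job in jobs if keep.fullmatch(job) and not drop.fullmatch(job)]

print(select_jobs(["zink-anv-tgl", "radv-vkcts-full"], ".*"))
# -> ['zink-anv-tgl']: the -full job is skipped by the default exclude
```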
<jenatali>
Aren't the driver jobs still auto? I.e. once the containers are done the driver jobs will run?
fxkamd has joined #dri-devel
<eric_engestrom>
jenatali: in which pipelines? merge, pre-merge, fork, scheduled?
<eric_engestrom>
(and also, I have another MR to make fork pipelines always have every job, but always manual instead of sometimes running, sometimes not)
<jenatali>
Uh... Dunno, honestly
<eric_engestrom>
(that eliminates the problem of gitlab only considering what's changed since the last push in fork pipelines)
<jenatali>
I haven't needed to run a driver's CI other than mine, and the Windows jobs have always been a little different than the rest, but our jobs still auto-run once the containers are out of the way
<jenatali>
I personally still prefer to just click the container start buttons. Dropping back to the CLI after using the web UI just feels painful/wrong to me, especially when (as I've said) the script support for Windows is pretty painful
<eric_engestrom>
yeah the windows jobs have different rules
<eric_engestrom>
maybe some day I'll look into unifying that behaviour as well, but -ENOTIME
<jenatali>
Yeah, I feel that
<eric_engestrom>
as for having to use the ci_run_n_monitor script, that's because gitlab's ui only allows thinking about things in one direction, which happens to be the opposite of the direction we mesa devs think in
<eric_engestrom>
we want to run a job, and whatever is needed for that
<eric_engestrom>
but gitlab does things in the opposite direction: it lets you run jobs, and whatever depends on those will also run
<daniels>
jenatali: what's the issue with scripts vs. windows - venv/pip?
<eric_engestrom>
if I had infinite time (and the assurance gitlab would accept it) I would add a button that's "run this job and everything that needs it"
<jenatali>
Token storage mainly
<eric_engestrom>
"add" -> to the gitlab web ui, I mean
<jenatali>
That would be an amazing addition
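The "run this job and whatever is needed for it" direction amounts to a transitive closure over the `needs:` graph; a sketch with a made-up dependency graph (not the script's actual code):

```python
def jobs_to_trigger(job, needs):
    """Return `job` plus everything it transitively depends on."""
    seen = set()
    stack = [job]
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        stack.extend(needs.get(current, []))
    return seen

# Hypothetical GitLab-style `needs:` graph, for illustration.
needs = {
    "zink-radv-navi10": ["debian-build-x86_64"],
    "debian-build-x86_64": ["debian-container-x86_64"],
}
print(jobs_to_trigger("zink-radv-navi10", needs))
# the test job plus its build and container dependencies
```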
<eric_engestrom>
for token storage, we currently have two options: in a file, or as an argument to the script
<eric_engestrom>
not sure which one you use, but in case the other one helps we have that
<jenatali>
But for me at least, the UI actually has the flow I want (mostly) - start the Windows jobs to build and test my driver
<jenatali>
eric_engestrom: yeah the file doesn't work on Windows
<eric_engestrom>
yeah, because there are very few windows jobs, but that's not a workflow that can work for us linux peeps :P
<jenatali>
Which means I always have to generate a new token anytime I want to use the script, because I don't use it often enough
<eric_engestrom>
the file doesn't work? I'm sure I could fix that
fxkamd has quit []
<jenatali>
It assumes a Linux filesystem at least
Duke`` has joined #dri-devel
<eric_engestrom>
oh, I see: `~/.config/gitlab-token`
<gfxstrand>
The web UI was actually pretty good for me. The arm/x86 split, combined with file change filtering was enough granularity for most things.
<eric_engestrom>
I'm guessing we should use some python things that give us the home folder instead of tilde
<gfxstrand>
The script is a giant PITA if you work on more than one driver.
<gfxstrand>
I mean, it's manageable, so maybe not "giant", but it's a pain
<eric_engestrom>
jenatali: hmm, no, docs say `os.path.expanduser('~')` should do the right thing on windows too
<eric_engestrom>
can you tell me what doesn't work?
<jenatali>
Oh, guess I just didn't try. I read the code to see how it was supposed to work (because I don't remember seeing docs), saw that, and just assumed it'd be broken
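For reference, a minimal portable version of the token-file lookup being discussed (the script's actual logic may differ); `Path.expanduser()` follows the same rules as `os.path.expanduser('~')`, resolving `$HOME` on Unix and the user profile directory on Windows:

```python
from pathlib import Path

def read_gitlab_token():
    """Read the token from ~/.config/gitlab-token, if present."""
    # expanduser() resolves '~' portably, so the same path works on
    # both Unix and Windows.
    token_file = Path("~/.config/gitlab-token").expanduser()
    if token_file.is_file():
        return token_file.read_text().strip()
    return None
```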
<daniels>
gfxstrand: can you elaborate on 'giant PITA'?
<eric_engestrom>
gfxstrand: can you tell me more specifically what's a pain, if there's more than the issues already discussed above?
<gfxstrand>
Before, my workflow was "click 4 buttons" and I got everything Marge would do. If I knew my change only affected desktop or only affected mobile drivers, I would just run x86 or just run Arm. If it only affected one driver, 95% of the time, the autofiltering based on files changed would just run the one driver and I just had to remember x86 vs. arm.
<gfxstrand>
Now, though, I have to dig through the web UI to figure out what tests are named (because they aren't consistent and I have a bad memory) and try to concoct a regex that targets those jobs without running other jobs. I'll inevitably fail at that at least once or twice.
<gfxstrand>
Also, I basically have to leave that branch checked out because if the script ever dies or I need to start/stop it for some reason (happens often enough), it references the checked out branch. This means I can't kick-and-forget.
kzd has joined #dri-devel
<jenatali>
I'll also add the barrier to entry for new folks. Instead of saying that they can intuitively click some UI buttons to start jobs, the "correct" way is to navigate to a script, install some dependencies, create a token, etc
<gfxstrand>
That's on top of needing to maintain http tokens because it can't use SSH and fighting with python when pyenv fails (which I had to do fairly recently).
<eric_engestrom>
gfxstrand: assuming you're talking about fork pipelines, I see several parts of the problem here:
<eric_engestrom>
1) gitlab doesn't have the same behaviour for what "change" means in fork pipelines vs MR pipelines; they introduced a new feature to fix that, and forgot about people who don't keep their fork's `main` in constant sync with upstream `main`... so we can't use it until they fix that -> tech issue, waiting on external fix
<eric_engestrom>
2) figuring out what jobs do what is hard because we haven't agreed mesa-wide on how to name our jobs (and enforce that) -> social issue, we can fix that with discussions (I'll open an issue on gitlab for that)
<eric_engestrom>
3) your workflow is based on a single folder having all the branches and switching between them; the solution I would offer is git's `worktree` feature, which I use and which helps with having multiple in-progress things in parallel without any interaction/interference between them
<gfxstrand>
1) I only care about MR pipelines
<gfxstrand>
2) Consistency won't fix the problem. It still significantly increases resident brain memory requirements. I have ADHD. I'm hosed even if we're consistent.
<eric_engestrom>
jenatali: I know it's a minor part of your point, but on unix systems the dependencies part is solved; look at `bin/python-venv.sh` if you want to make a powershell version of it :)
<gfxstrand>
3) I already use worktrees. I work on a LOT of things all at the same time, including sometimes doing runs for other people. It's still a pain point.
<eric_engestrom>
gfxstrand: +1 for point 2), we still need to figure out what jobs we want, even if we give them proper descriptive names. I don't know how else you could pick the jobs though; would maintaining lists of "this job tests these parts of mesa", so you can pick whatever job touches the part you care about, help? that would be more work to keep these lists up to date, but maybe it's worth it?
<gfxstrand>
WRT 2, consistency will help a bit but it'll only cut it by 1/2 to 1/3 at most.
<eric_engestrom>
for point 1) then I think the `--exclude` solution should cover what you need
<jenatali>
eric_engestrom: I don't really know powershell very well... :(
<gfxstrand>
eric_engestrom: The file-based filtering actually works pretty well.
<eric_engestrom>
in MR pipelines, yeah (so pre-merge and merge pipelines)
<eric_engestrom>
but gitlab's bug means in fork pipelines the file-based solution doesn't work
<gfxstrand>
Which, again, are the pipelines I care about. I'm fine with CI being a pain if you're running in some detached branch that hasn't been rebased in a month.
<gfxstrand>
I'm very happy to make a Draft MR if I want to run CI.
<gfxstrand>
Or provide some sort of base commit
<eric_engestrom>
then you get the gitlab behaviour where the jobs that exist are the ones affected by the changes you made
<eric_engestrom>
so in that case, "run everything in that MR pipeline" would be enough
<eric_engestrom>
right?
<gfxstrand>
Yeah, that's what I really want for most common stuff.
<gfxstrand>
A "do what Marge would do on this MR" button.
<gfxstrand>
(Or it can be a script but a button would be nicer.)
<gfxstrand>
The point is one thing that's the same every time so I only have to remember one thing and not N things and how to combine them.
<eric_engestrom>
so for that use case, I think you're missing my WIP `--exclude` work (that you wouldn't actually use, you'd keep its default value) and mesa-wide agreement on how to name non-Marge jobs
<eric_engestrom>
with these done, you would just do `bin/ci/ci_run_n_monitor.sh --target '.*'` and be done
<gfxstrand>
I want to be able to do `ci_run_n_monitor.sh --emulate-marge --mr=13341`
<gfxstrand>
That would be the ideal
<eric_engestrom>
oh, do you know you can pass `--pipeline https://...` if you want to manage a pipeline other than the one for the current HEAD commit?
<gfxstrand>
It could also be --target '.*'
<gfxstrand>
What is that https://? A pipeline? A merge request?
<gfxstrand>
Does that auto-kick jobs or does it just monitor?
<eric_engestrom>
a pipeline url
<eric_engestrom>
it only changes what pipeline it works on, using that one instead of finding the pipeline for the HEAD commit
<eric_engestrom>
everything else is the same
glennk has quit [Ping timeout: 480 seconds]
<gfxstrand>
Okay, that's not quite --mr= but it's close(ish).
glennk has joined #dri-devel
tzimmermann has quit [Quit: Leaving]
<eric_engestrom>
we could add `--mr 123` as well, it's relatively trivial
<gfxstrand>
That would be nice. That way I don't have to go digging through the web UI for a pipeline URL and the script can do it for me.
<eric_engestrom>
yeah, makes sense
<eric_engestrom>
added to my todo list
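A rough sketch of how such a `--mr` option could resolve a merge request to its newest pipeline, via GitLab's list-MR-pipelines REST endpoint (the instance URL and project path are assumptions, and the newest-first ordering is assumed rather than guaranteed here):

```python
import requests

GITLAB_API = "https://gitlab.freedesktop.org/api/v4"

def latest_mr_pipeline(project, mr_iid, token):
    """Return the web URL of the most recent pipeline for an MR."""
    project_id = requests.utils.quote(project, safe="")  # "mesa/mesa" -> "mesa%2Fmesa"
    response = requests.get(
        f"{GITLAB_API}/projects/{project_id}/merge_requests/{mr_iid}/pipelines",
        headers={"PRIVATE-TOKEN": token},
    )
    response.raise_for_status()
    pipelines = response.json()  # assumed newest first
    return pipelines[0]["web_url"] if pipelines else None

# e.g. latest_mr_pipeline("mesa/mesa", 25039, token)
```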
<gfxstrand>
Yeah, if we can get down to `ci_run_n_monitor.sh --target '.*' --mr 25039`, or `ci_run_n_monitor.sh --target '.*'` to run the current checkout, I think that's fine.
<eric_engestrom>
:)
<gfxstrand>
Tokens are still a pain but I don't see a way around that unless we can add a button.
<gfxstrand>
We could add a ci-bot, I suppose, which is basically Marge except it doesn't rebase or merge.
<eric_engestrom>
something that we trigger by posting a comment like "@build-bot run '.*'"? that's an idea, yeah
<gfxstrand>
Or tag build-bot as the reviewer
<gfxstrand>
IDK... that sounds like it'll get abused.
<gfxstrand>
The other thing is that I don't care too much about the n_monitor part.
<gfxstrand>
It's actually kind-of a pain to have to keep a terminal open just for that MR.
<jenatali>
+1
<jenatali>
I suspect there's not a way to queue a dependent manual job without the script still running though
tshikaboom has quit []
<eric_engestrom>
yeah we need to work on the output, it's very spammy (printing a ton of new lines on every update instead of printing the lines once and updating them)
<gfxstrand>
It's less that and more that I want to kick it off and forget about it until I check back in a couple hours later.
cmichael has quit [Quit: Leaving]
<eric_engestrom>
we can't just do everything at the start though, as gitlab doesn't let you start manual jobs before all their dependencies have finished
<eric_engestrom>
so once all the manual jobs have been started, you can ^C the script and everything will continue, but if you do that too soon gitlab will not have let the job start
<jenatali>
Makes sense. Would be nice to have less verbose output at least
<daniels>
jenatali: what would that be, just print the pipeline URL and then only the final status? or each job's status as it completes? or only failed jobs?
<gfxstrand>
Each job's status would be fine, I think.
<jenatali>
Yeah, agreed, each job's status would be fine, as long as it's once and then updated, or just a final status
<daniels>
'once and then updated'?
MrCooper has quit [Remote host closed the connection]
<jenatali>
daniels: Updated in-line instead of spamming new statuses all the time
MrCooper has joined #dri-devel
<daniels>
ah yeah, that one's hard
<jenatali>
Yeah
<jenatali>
Seems more like it belongs in a GUI
<daniels>
not hard in terms of 'omg computer science research', but hard in terms of 'hard to keep the will to live after you've gone deep enough into terminal handling and what do you mean it breaks when I resize or run over SSH'
<gfxstrand>
Yeah...
<gfxstrand>
If all you want is a progress bar, there are packages for that. They still break a bit but they're okay.
<eric_engestrom>
I think we can just do the simple `\033[${N}A` and not care if it breaks in weird cases
<gfxstrand>
If you want a table a la top? Uh, have fun?
<gfxstrand>
It probably involves curses
<eric_engestrom>
and cursing
<gfxstrand>
Yes
<gfxstrand>
It's a well named library. :)
<eric_engestrom>
no but if we know we have N jobs, printing N lines and then going back up by N lines is fine as the normal case, and I think we can ignore weird cases where this breaks
<eric_engestrom>
ie. a simple `printf '\033[%dA' $number_of_jobs`
<eric_engestrom>
I've been wanting to do exactly this for a while because the current output is way too spammy
<gfxstrand>
NGL, '\033[%dA' looks a lot like swearing at your terminal.
<eric_engestrom>
almost, it's a magic incantation
mvlad has quit [Ping timeout: 480 seconds]
<eric_engestrom>
(it's literally just "go back up N lines" in ansi code)
<eric_engestrom>
then K to reset the line, and print whatever new thing you want there, and loop
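Put together, the in-place redraw being described is only a few lines; a sketch assuming a fixed number of status lines and an ANSI-capable terminal:

```python
import sys

def render(statuses):
    """Print one line per job, then move the cursor back up so the next
    call overwrites the same block instead of appending new lines."""
    for name, state in statuses:
        # \033[K clears the rest of the line before writing the new status
        sys.stdout.write(f"\033[K{name}: {state}\n")
    # \033[<N>A moves the cursor up N lines
    sys.stdout.write(f"\033[{len(statuses)}A")
    sys.stdout.flush()

# Call render() with fresh statuses on every poll; on exit, skip the
# final cursor-up so the last block stays on screen.
render([("zink-anv-tgl", "running"), ("radv-navi21-vkcts", "created")])
```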
<jenatali>
A silent mode would be fine too. Or "silent except for errors/failures/final completion" would be nice. You can just open a browser tab to watch the pipeline if you want to monitor more than that
<eric_engestrom>
indeed
gouchi has joined #dri-devel
<eric_engestrom>
I think I wrote down everything, but I don't know when I'll be able to do these improvements; I'll post them here though :)
gouchi has quit [Remote host closed the connection]