ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
<imirkin_> and i sorta figured they knew what they were doing
<mwk> if we're still talking Tesla, that'd be because the 0*x = 0 setting is global for the shader, not per-instruction
<imirkin_> mwk: yeah
<imirkin_> no clue where i looked
<imirkin_> but the code i changed was in the nvc0-specific lowering
<imirkin_> where it's per-op, as you know well
<mwk> right
<imirkin_> ah no, i changed both nv50 and nvc0
<imirkin_> changed in c1e4a6bfbf015801c6a8b0ae694482421a22c2d9
<imirkin_> can't say my memory of this day 5y ago is extremely crisp :)
<imirkin_> "Secondly the current approach (x * rsq(x)) behaves poorly for x = inf - a NaN is produced instead of inf."
<mwk> hmm
<imirkin_> so i guess switching it to x * rsq(x) would work for nvc0, flipping the dnz flag on
<mwk> doesn't 0*x = 0 still break it?
<mwk> like inf * 0 is also 0
<imirkin_> i just hadn't thought to flip it?
<imirkin_> mwk: what's the problematic case?
<mwk> x= Inf
<imirkin_> oh right.
<imirkin_> yes.
<imirkin_> we could make it go NaN -> 0
<imirkin_> but that's not what we want :)
<mwk> yep
<imirkin_> so we want 0*x = 0, but x*0 = x ;)
<mwk> that might be a problem
<alyssa> imirkin_: I have a .left mode
<alyssa> with the semantic "(a == 0 || b == 0) ? a : (a * b)"
<imirkin_> fun
<alyssa> so i think I can do mul_left(x, rsq(x))
<mwk> ... huh
<imirkin_> so then it should work?
<alyssa> for x = 0, that is mul_left(0, inf) = 0
<alyssa> for x = inf, that is mul_left(inf, 0) = inf
<alyssa> ...right?
<imirkin_> seems so!
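The case analysis above can be sketched in Python. `mul_left` and `rsq` here are hypothetical stand-ins for the Bifrost-style `.left` multiply and a hardware reciprocal square root, modelled only to the extent described in the discussion:

```python
import math

def mul_left(a, b):
    # Bifrost-style .left multiply: (a == 0 || b == 0) ? a : (a * b)
    return a if (a == 0.0 or b == 0.0) else a * b

def rsq(x):
    # Reciprocal square root with the usual special cases:
    # rsq(0) = +inf, rsq(inf) = 0, rsq(NaN) = NaN.
    if math.isnan(x):
        return math.nan
    if x == 0.0:
        return math.inf
    if math.isinf(x):
        return 0.0
    return 1.0 / math.sqrt(x)

def sqrt_lowered(x):
    # The lowering discussed above: sqrt(x) = mul_left(x, rsq(x)).
    # x = 0:   mul_left(0, inf) = 0
    # x = inf: mul_left(inf, 0) = inf
    # x = NaN: mul_left(NaN, NaN) = NaN
    return mul_left(x, rsq(x))
```

An ordinary multiply would give 0 * inf = NaN for x = 0 and inf * 0 = NaN (or 0, with the dnz flag) for x = inf, which is exactly the problem the `.left` mode sidesteps.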
<alyssa> the Arm driver uses that mode but actually it does something... even more funny
<alyssa> first it does a range reduction with a special sqrt mode of frexpm/frexpe
<alyssa> then it does sqrt(m) with mul_left(X, rsq(x))
<mwk> .. I seem to recall nvidia also did some exponent dance
<alyssa> and then it biases the exponent of that with the result of frexpe / 2
<mwk> but I don't recall if it was sqrt or div
<alyssa> I guess it might have better precision but idk
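The range-reduction sequence alyssa describes can be modelled roughly as follows. This is a sketch of the idea (split off the exponent, take the square root of the reduced mantissa, re-bias by half the exponent), not the actual Arm instruction sequence:

```python
import math

def sqrt_range_reduced(x):
    # Rough model of the frexpm/frexpe range reduction described above.
    m, e = math.frexp(x)      # x = m * 2^e, with m in [0.5, 1)
    if e % 2:                 # make e even so the final exponent e // 2 is exact
        m *= 2.0              # m now in [0.5, 2), still a narrow range
        e -= 1
    # In the hardware sequence this sqrt(m) would itself be mul_left(m, rsq(m)).
    return math.ldexp(math.sqrt(m), e // 2)
```

The point of reducing to a narrow mantissa range is that the rsq approximation only has to be accurate over [0.5, 2) rather than the whole float range, which is plausibly where the precision win comes from.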
<alyssa> oh also, mul_left(NaN, rsq(NaN)) = mul_left(NaN, NaN) = NaN which is still right
<alyssa> oh even more fun, that's only for 32-bit
<alyssa> for 16-bit on valhall, they do (x == 0)? 0.0 : (x * rsq(x)), with an ordinary multiply
<alyssa> (and an explicit csel)
<alyssa> uhmmm
<alyssa> ...but on Bifrost fp16 they just do mul_left(x, rsq(x))
<alyssa> so I guess valhall removed fp16 mul_left
<mwk> mul_left is such a cursed concept
<mwk> I love it
<alyssa> mwk: it's a massive FMA_RSCALE instruction that does (a*b + c) * 2^d with 4 saturation modes and 4 special case handling options and not all combinations are valid.
<alyssa> real fun
<mwk> mmmm
<alyssa> and the rounding/special case handling is "wrong" so it's only used to lower transcendentals
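Ignoring the saturation and special-case modes, the arithmetic of such an instruction is just a multiply-add scaled by a power of two. A minimal model (not the real instruction; Python has no fused multiply-add here, so rounding differs):

```python
import math

def fma_rscale(a, b, c, d):
    # Minimal model of an FMA_RSCALE-style op: (a * b + c) * 2^d.
    # The real instruction's saturation and special-case modes are omitted.
    return math.ldexp(a * b + c, d)
```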
<mwk> "here, we give you all the control signals of the ALU, have fun"
<mwk> lol
<alyssa> yep yep :p
<HdkR> mwk: Nvidia exponent dance should be removed on latest generations if I remember correctly :D
<mwk> yeah, probably
<mwk> I did look at it a *long* time ago
<HdkR> Should be removed in Volta or Turing, I forget which. So quite new
<imirkin_> is it expected that virgl-traces and zink-piglit-timelines sometimes fail?
<imirkin_> i re-ran, and they were fine
<alyssa> "is it expected that [....] sometimes fail?"
<alyssa> the answer is no for any value of [...]
<alyssa> daniels: ^
<imirkin_> alyssa: not always. sometimes it's a known issue.
<imirkin_> hence expected
<alyssa> imirkin_: maybe this is an unpopular opinion, but if an issue is known the correct answer is to remove the job from CI on sight.
<imirkin_> i'm such a rare contributor these days that i don't always know the current state of things
<jenatali> I saw flakes in llvmpipe and stony traces the other day too
<jenatali> I think if we applied the "if it flakes, remove it" philosophy we'd have basically 0 coverage :)
<alyssa> jenatali: That seems pessimistic... CI should be rock solid. I suppose we've already had this discussion ... I can't remember the GLon12 CI flaking on me so you're good :0
<jenatali> alyssa: Yeah we've already ratholed on it :P
<jenatali> Our public CI is way better than our internal test automation :)
<alyssa> (and correct me if i'm wrong but panfrost has been solid all summer?)
<jenatali> I haven't been trying to merge enough changes to comment definitively, but I haven't seen it flake at least
<alyssa> (I did a lot of work on the driver side to get things robust for CI, and daniels does /so/ much behind the scenes to keep the infrastructure happy)
<alyssa> anyway errr back to hacking at PCIe
<airlied> I hope you mean hacking a PCIe card with a hacksaw to fit into a 1x slot or something
* airlied has forgotten how many conversion tests CL CTS has, at 391 after 29 hrs
<jenatali> airlied: I think you've still got a ways to go...
<jenatali> I want to say it's ~800 or so
<airlied> jenatali: yeah my brain is saying ~900
<airlied> I wonder have I got an old log file sitting around :-P
<airlied> I think it took 44 hrs last time
<jenatali> I'm sure I do
<jenatali> airlied: 765 is what I have in my log
<jenatali> Oh, that's how many I ran... we skipped doubles. Total test count is 900
<airlied> jenatali: ah cool, I'm skipping doubles as well in this run
<airlied> so over half way there :-P
<jenatali> Yeah...
<airlied> or not actually, but close to it
<jenatali> I mean props for making an exhaustive test, but damn do you really need to test every permutation of a vec16 with every possible integer?
<airlied> yeah it's a pretty messed up test strategy
<alyssa> WTF
<airlied> woah spending 30% of my time in a mutex unlock path
<jenatali> :O
<airlied> I assume the test is launching a kernel per integer
<jenatali> It's not that bad
<airlied> my threadpool is just going brrrrr
<jenatali> They do a kernel for an array of vectors
* airlied should possibly consider optimising this pathological corner case :-P
<DrNick> isn't there a CTS test that blends every color with every other color?
<karolherbst> do we have something useful for users in order to debug dpms related issues?
<MrCooper> karolherbst: what kind of issues?
<karolherbst> MrCooper: display stays black
<MrCooper> anything in the display server log / terminal output?
<karolherbst> nope
<MrCooper> then I'd probably try enabling DRM debugging output while trying to wake it up
<karolherbst> right... but it's a mask and what values are making sense and so on
<MrCooper> can always try 0xff if in doubt :)
<karolherbst> well
<karolherbst> there are more bits :)
<karolherbst> the issue is just drm message spam
<karolherbst> so you really don't want to enable everything
<daniels> imirkin_: yeah, as alyssa says, those absolutely should not fail at all - it's mildly unsurprising because virgl has shifted from qemu to crosvm for its underlying execution, and zink-timelines is testing a totally new codepath that zmike only just recently landed, so they've had less stress than what came before them
<daniels> imirkin_: just keep poking the responsible people (gerddie/tomeu/etc for VirGL, zmike/airlied/kusma for Zink) and make sure they can see those test failures and get them either worked around or fixed
<zmike> I saw a zink flake from sroland's MR last night, so that's what I'm looking at today
<daniels> 🥇 for you
<ajax> holy crap
<ajax> that, uh... that's a number nine chip
<daniels> ajax: dream come true!
<ajax> anyone feel like writing a driver for some dx6 hardware
<zmike> hm I've now run this test over 10,000 times locally without a single failure 🤔
<kusma> Computers are magic!
<tomeu> zmike: nor valgrind nor asan are being helpful?
<zmike> if only this power could somehow be harnessed for good
<zmike> nope
<imirkin> zmike: if you scroll a bit further up, you'll see a link to one of my pipelines with a zink failure too
<imirkin> daniels: noted, will keep doing that
<imirkin> daniels: fwiw, i don't always know the responsible party. e.g. i have no clue for virgl (except you listed some people, so thanks for that)
<zmike> oh that's a new one
<imirkin> zmike: (but you probably found it already)
<zmike> I guess I'll just let that run until it crashes? already gone a few hundred now without seeing it though 😠
<idr> imirkin: I also had an issue with virgl-traces.
<idr> ajax: At least it supports paletted textures!
<idr> Any idea which #9 chip it is?
<idr> Cripes... it's the Revolution IV. Interesting because it's one of... 3? cards that will drive an SGI 1600SW monitor.
<idr> mattst88 ^^^
<imirkin> idr: btw, if you feel like righting a 10-year-old wrong:
<alyssa> "Screen seems not DRI3 capabale" "DRI2: failed to authenticate"
<alyssa> uh oh
<daniels> imirkin: yeah, fair enough :)
<alyssa> oh derp VNC in the way
<idr> imirkin: I saw that pop up in my email last night. It's on my list for today.
<imirkin> idr: cool. no rush, obviously :)
<imirkin> broken for 10y, can handle another day. or 100. :)
<alyssa> imirkin: I like deleting code
<alyssa> why is dEQP-EGL.functional.color_clears.* so slow?
<alyssa> airlied: I guess I don't get to complain given it's not the CL CTS :p
<alyssa> still really frustrating to have such poorly written CPU bound tests that only use a single thread.
<tomeu> airlied: still need to further massage the code a bit more, but at least it will be smaller:
<tomeu> 6 files changed, 1160 insertions(+), 2118 deletions(-)
<ajax> idr: it's such a #9 that the docs have the same fonts and section formatting as the actual #9 docs
<ajax> no idea if the verilog there has the 1600sw support
<idr> ajax: Well... I have one of those cards.
<alyssa> I went out and it's still executing dEQP_EGL.functional.* tests :|
<alyssa> sysprof says it's entirely bound by dEQP's software rasterizer..
<alyssa> lowering the resolution would probably help..
<ajax> i have one of those monitors
<alyssa> Weeee
<MrCooper> gonna be a while until !123456 :)
<ajax> that MR is kind of hilarious
<daniels> ajax: makes testing a loooot easier
<ajax> totally worthwhile, don't get me wrong
<idr> ajax: Me too. It's weird... sometimes it has a 1-pixel vertical stripe of stuck green pixels a couple centimeters from the left, but the rest of the time it's fine.
<idr> I have the #9 card that can drive it and the 3dlabs card that can drive it.
<idr> And a passthrough DVI-to-LVDS PCI card that can drive it from any graphics card.
<idr> I want to get the adapter board to drive it from my O2.
<idr> I guess there was also an off-brand Geforce card that was only released in Japan that can drive it.
<ajax> i have the multilink gadget
<ajax> i don't know if i've ever powered it up though
<ajax> monitors, moproblems
<MrCooper> idr: FWIW, lists 4 cards with an OpenLDI connector
<idr> Right... I think the Formac card is the same as the 3D Labs card, but it has OpenFirmware for PPC Macs.
<daniels> 'OpenLDI is rarely used today, as all popular home and office LCD panel monitors use a DVI or VGA standard connector.'
<idr> Lol.
<idr> Can you even buy a card today with VGA out?
<idr> s/ with VGA out// heh...
<ajax> that "siemens-nixdorf" card is a c&t 65554, lols
<alyssa> ...Does anyone know how long CTS runs are expected to take?
<alyssa> Admittedly going to scale linearly with the single-core CPU perf but..
<karolherbst> alyssa: depends on how many tests you run
<karolherbst> or did you mean the proper CTS with everything?
<alyssa> karolherbst: The proper actual ./cts-runner valid for Khronos CTS
<karolherbst> alyssa: a looooong time
<alyssa> you know. the thing that runs all the stupid tests, and only single threaded.
<imirkin> alyssa: many hours on a desktop
<karolherbst> let it run over night or so
<alyssa> crikey.
<imirkin> alyssa: not just all the tests. several times, with different configs.
<karolherbst> the issue is that the CTS does multiple runs
<karolherbst> with different configs
<alyssa> why do they hate multithreading so much
<karolherbst> alyssa: because multithreading is a issue with GL
<karolherbst> you mean using parallel runners or so :p
<karolherbst> but I guess nobody wrote the code for it... you can ask them
<alyssa> Ughhh
<karolherbst> the OpenCL CTS uses multithreading
<alyssa> karolherbst: will this run finish before I have to unplug the board to relocate ?
<idr> The card not listed in the Wikipedia is I-O Data GA-NF30
<karolherbst> alyssa: how long do you have?
<alyssa> karolherbst: If you have to ask....
<alyssa> current status - dEQP-EGL.functional.render.multi_thread.gles2_gles3.rgb888_window
<karolherbst> alyssa: is that with cts-runner?
<alyssa> yes
<karolherbst> ohh, strange
<karolherbst> but okay
<alyssa> strange?
<karolherbst> yeah.. I didn't know they added dEQP tests
<alyssa> this is for ES3.1 (well right now ES2.0 as a sanity check.. umm)
<karolherbst> I should get back to that at some point :D
<glennk> idr, if you are looking for oddball graphics cards, nec te5l?
<alyssa> karolherbst: I figure if I already put in the work to make the driver conformant, I might as well get the sticker from Khronos
<alyssa> just finished dEQP-EGL.functional.render.multi_thread.gles2_gles3.rgb888_window
<alyssa> that one only took 7 minutes of wall clock time
<alyssa> with CPU maxed the whole time..
<karolherbst> ufff
<karolherbst> I think it might take a little longer then
<karolherbst> like.. 2 weeks or so?
<karolherbst> but maybe GLES is quicker
<karolherbst> I know that GL takes a lot
<jekstrand> alyssa: Yeah, "real" CTS runs take a while. :-/
<jekstrand> But then you get a sticker!
<alyssa> karolherbst: you're serious.
<Sachiel> jekstrand: hey, I never got a sticker
<alyssa> karolherbst: Test suite just "finished".
<alyssa> Test case 'dEQP-EGL.functional.image.create.gles2_cubemap_positive_x_rgb_read_pixels'..
<alyssa> failed to export dumb buffer: Too many open files
<alyssa> There goes two hours of progress.
<alyssa> Segmentation fault
<jekstrand> Sachiel: I can send you one.
<jekstrand> I think I've got a pile of Vulkan stickers somewhere
<Sachiel> I'm surprised they survived the move
<alyssa> jekstrand: do you have any GLES stickers? because I don't know if I can stomach 2 weeks of this.
<jekstrand> I didn't say I knew where they were. 😂
<jekstrand> alyssa: I don't have any GLES stickers. But I can get one for you next time I'm at a "real" khronos F2F.
<Sachiel> so... 2031
<jekstrand> lol
<alyssa> Sachiel: Hey, my CTS run might pass by then!
<jekstrand> Perfect!
<karolherbst> alyssa: welcome to the world of CTS
<zmike> what a wonderful world
<alyssa> karolherbst: maybe being conformant is overrated..
<karolherbst> guess why nouveau never passed it yet
<karolherbst> alyssa: :D
<karolherbst> I have this issue with nouveau where I actually managed to pass the mustpass file
<karolherbst> but on the 10th cycle or so I get an error
<karolherbst> ¯\_(ツ)_/¯
<alyssa> uh
<imirkin> yeah, mustpass is just the beginning
<Sachiel> after that comes shouldpass, and then wouldbeniceifitpassed
<alyssa> Sachiel: and holycrapwhydoesntthispass
<alyssa> and ididntevenwantthistopass
<karolherbst> ahh
<karolherbst> I wanted to wire up robustness :D
<alyssa> Meanwhile... why won't drm-shim work on the M1 ;v
<karolherbst> imirkin: do you remember what that thing was called to enable this robustness stuff?
<imirkin> enable where?
<imirkin> in the driver?
<imirkin> or in tests?
<karolherbst> there was this callback
<imirkin> yeah, like reset status? something like that
<karolherbst> yeah.. something
<karolherbst> get_device_reset_status
<karolherbst> thanks :D
<imirkin> nvc0_get_device_reset_status
<karolherbst> and set_device_reset_callback
<imirkin> oh, you found it.
<imirkin> yes
<karolherbst> I will wire it up for real :)
<karolherbst> just need to read up some code on how that all should work
<alyssa> mmap is failing for the memfd ...
<karolherbst> alyssa: wait.. just to make sure, you already worked on fixing CTS tests, no?
<karolherbst> because we have tooling to make it suck less
<alyssa> karolherbst: no... ND noises..
<karolherbst> _and_
<karolherbst> use mustpass files
<karolherbst> check how CI uses it
<karolherbst> you should touch "cts-runner" only after you are fairly sure you will pass the run
<karolherbst> but I also have... "this script"
<alyssa> ack
<alyssa> i'm fairly sure it'll pass gles2
<karolherbst> should tell you where the relevant files are and how to use the parallel runner :D
<karolherbst> but if I run this against GL 4.6 and GLES 3.2 this takes like 6 hours single threaded
<karolherbst> but it does check a lot of tests
<alyssa> karolherbst: why am I suddenly wondering if Arm's conformance runs are on big beefy x86_64 servers against an FPGA / software model
<karolherbst> :D
<daniels> can confirm they are not
<alyssa> sucks for them then ;P
<daniels> imagine our farm, but with a lead time measured in days, and relative reliability measured in single-digit percentage
<daniels> (also FPGAs are slower than you think)
<alyssa> daniels: CTS is completely CPU bound though, so it'd probably come out way ahead anyway?
<daniels> (this information is only current to 2014, maybe they’ve cracked it by now)
<karolherbst> yeah.. recompiling with O3 helps more than a beefier GPU :D
<daniels> it wouldn’t be CPU-bound if your GPU was several orders of magnitude slower :P
<alyssa> lul. true.
<zmike> imirkin: okay, I just ran the test that flaked for you nonstop on 2 terminals for the past 3 hours and got zero crashes
<zmike> this is a tricky one.
<imirkin> zmike: this was the first CI run i did in months
<imirkin> beginner's luck?
<zmike> seems like it :/
<daniels> zmike: does it only appear when not run in isolation?
<alyssa> looks like a bunch of kmsro drivers have an fd leak
<zmike> it's a piglit test, so it's not like it's being run in the same process
<daniels> mm true
<mdnavare> agd5f_: daniels: Do we know if DRI PRIME can be used with media/ compute to force render on DGPU?
<mdnavare> How does AMD support that on media/compute stack?
<alyssa> oh err.. my fix breaks panvk... let's not do that..
<daniels> mdnavare: I don’t understand your question, sorry
<mdnavare> daniels: I mean for gl applications to render on DGPU we set DRI PRIME = 1, how can we force the media transcode to happen on DGPU?
<glennk> once upon a time i had a mali-55 fpga board, can confirm it was slow
<glennk> equivalent gpu speed ~1mhz
<daniels> mdnavare: API-dependent
<daniels> for Vulkan Video, the answer is clear; for VA-API … ?
<agd5f> mdnavare, not sure about vaapi
<mdnavare> daniels: So for Vulkan video, it would still use DRI PRIME?
<daniels> mdnavare: Vulkan makes the user explicitly select the device always
<mdnavare> daniels: How does it explicitly select the GPU? Is it the same as what GL apps do or there is a different way, do you have any examples on how it requests a particular GPU?
<Sachiel> tl;dr: you query available devices, check their features, pick the one you want, create your instance with it, all work is done through that instance from now on
<emersion> not aware of any way to override the device used by libva, apart from using the DRM platform in the code explicitly
<alyssa> ok, with the memory leak down, next up is figure out why drm-shim fails on here
<alyssa> I sort of suspect I'm missing a kernel option or something silly.
<imirkin> the point of drm-shim is that you don't really need much from the kernel
<alyssa> imirkin: hello from bare metal debian on the m1
<alyssa> do not expect anything from this kernel :p
<imirkin> well, LD_PRELOAD support is nice
<imirkin> but that's about it iirc?
<alyssa> MEMFD_CREATE, but I have that..
<jekstrand> You may need /dev/dri/card0 and /dev/dri/renderD128 to exist because I know that drm-shim fakes enough file-system stuff that /dev/dri can be empty.
<alyssa> jekstrand: I have /dev/dri/card0 from my stubbed out display controller experiment, don't have any render nodes
<imirkin> seems like it should support a lack of /dev/dri
<jekstrand> Does look that way.....
<imirkin> and readdir() will append something if it's not already there
<imirkin> (i think?)
<alyssa> it's the mmap that's failing...
<alyssa> (it claims to initialize a fake /dev/renderD128 as expected)
<mdnavare> Sachiel: So the Vulkan application itself has to have code for selecting the appropriate device or is there a way to choose a device at launch time?
<jekstrand> mdnavare: The Vulkan app chooses the device
<jekstrand> mdnavare: However.... They usually choose the first one and Mesa contains a layer (shipped on most distros) which can re-order the devices and/or trim the device list down to just one to force the app to make a particular choice.
<mdnavare> jekstrand: So if the choice has to be changed, the code has to be changed? No way to do it from command line at runtime?
<jekstrand> mdnavare: That's what the layer is for
<alyssa> ohhh....
<jekstrand> mdnavare: MESA_VK_DEVICE_SELECT=list will list devices
<jekstrand> mdnavare: MESA_VK_DEVICE_SELECT=foo will select a device
<jekstrand> On my laptop, for instance, I get:
<jekstrand> selectable devices:
<jekstrand> GPU 1: 8086:8a52 "Intel(R) Iris(R) Plus Graphics (ICL GT2)" integrated GPU 0000:00:02.0
<jekstrand> GPU 0: 10005:0 "llvmpipe (LLVM 12.0.0, 256 bits)" CPU 0000:00:00.0
<jekstrand> GPU 2: 10de:1f91 "NVIDIA GeForce GTX 1650 with Max-Q Design" discrete GPU 0000:57:00.0
<alyssa> Huh..
<alyssa> mmap on memfd with offset != 0 seems unhappy
<Sachiel> llvmpipe is the first one? huh...
<mdnavare> jekstrand: And how do i invoke a particular app to select one, just call with MESA_VK_DEVICE_SELECT=foo in command line ?
<jekstrand> mdnavare: Yup
<Sachiel> is the layer always there or you have to force it?
<jekstrand> mdnavare: Assuming you have the layer. Do =list to check for that.
<jekstrand> Sachiel: It's on by default in most distros
<Sachiel> huh
<alyssa> __builtin_ffs()
<alyssa> imirkin: yep, it's my kernel
<jekstrand> alyssa: ?
<alyssa> The man page for mmap says that `offset` must be a multiple of the page size as returned by sysconf(_SC_PAGE_SIZE)
<alyssa> drm-shim allocates 4k pages.
<alyssa> Spot the bug.
<jekstrand> Do you have funny page sizes?
<daniels> lmaoooo
<alyssa> 16k
<alyssa> (Apple IOMMU is 16k and it's easiest just to make the whole thing 16k.)
<alyssa> this is going on twitter :-p
<jekstrand> Fun fun
<ccr> happy happy joy joy
<mdnavare> jekstrand: Okay thanks Jason, will try that
<jekstrand> yw
<mdnavare> jekstrand: But how does Vulkan get used on the media or compute side?
<jekstrand> mdnavare: I feel like I need some context for that question
<mdnavare> jekstrand: Hmm so my initial question in this context was that how can we selectively run any media or compute application on a partiuclar GPU and then daniels said that Vulkan we can but not sure about VAAPI
<jekstrand> mdnavare: Oh, I've got no clue about vappi
<jekstrand> mdnavare: And OpenCL is client-selected as well and I don't know what cmdline stuff you can do there either
<mdnavare> jekstrand: So for compute, if the Vulkan API is used then yes we can selectively use the GPUs, but OpenCL we can't, is that the correct understanding?
<jekstrand> As far as I know, yes.
<jenatali> Which obviously hasn't merged. Dunno if the other loaders used on Linux have something similar already
<mdnavare> jenatali: Thanks for this info will look into it as well
<mdnavare> jekstrand: Have you played around with forcing DG for the Vulkan?
<mdnavare> and scanout on IG
<jekstrand> mdnavare: Yup. Works on my DG1+TGL with upstream i915
<jekstrand> Kayden: Can I get an ACK on the header update in !11888
<mdnavare> jekstrand: Okay cool, I can try that too then, I have a DG1 + TGL laptop as well
<Kayden> jekstrand: ack!
<jekstrand> Kayden: Thanks! Landing that and the userptr MR.
<jekstrand> Ryback_: ^^
<Kayden> mdnavare: Vulkan apps should work even on DII, as the Vulkan WSI code always allocates PRIME buffers in SMEM. GL doesn't today, hence you need the kernel migration patches. But VK should work out of the box
<Kayden> jekstrand: excellent!
<Ryback_> jekstrand: yay!
<danvet> jekstrand, ack
<alyssa> just opened an MR fixing drm-shim
<alyssa> I can confirm it works on my funny 16k kernel, it should also fix funny 64k kernels
<karolherbst> alyssa: yay
<karolherbst> can we please kill 4k pages :D
<karolherbst> alyssa: I actually do wonder if we hardcode 4k in other places inside mesa which could cause issues
<karolherbst> but it seems like it's only an iommu problem?
<alyssa> karolherbst: we do seem to hardcode 4k in lots of places....
<karolherbst> yes
<karolherbst> I am actually wondering if we should assert on the page size on all mmap wrappers we have
<karolherbst> let me check something
<karolherbst> actually I only do care about llvmpipe hitting such paths
<alyssa> yeah ... I don't care about panfrost hitting such paths
<alyssa> (asahi i do)
<karolherbst> alyssa: uhh.. I have an evil idea
<karolherbst> we could add the possibility to override all of this
<karolherbst> make sysconf(_SC_PAGE_SIZE) return 8k e.g. for testing and let the code wrapping mmap assert on the size to be still correct :D
<karolherbst> and just add a project option for that
<karolherbst> I don't know if it's worth it though
<alyssa> would be nice to have CI for this
<karolherbst> yeah, that's the idea
<karolherbst> or just write a preload wrapper for those functions...
<karolherbst> I guess there are some ways on how to test that stuff
<alyssa> jekstrand: maybe `xrandr --size=320x256` might help? smaller window means less to sw rasterize
<alyssa> default is like 1024x something
<jekstrand> alyssa: ?
pnowack has joined #dri-devel
jhli has joined #dri-devel
rasterman has quit [Quit: Gettin' stinky!]
<danvet> sravn, tzimmermann, can you pls ack this too? it's fallout from that conversion to shmem helpers
<danvet> and I can't really merge the kernel side without the igt fixup
<mlankhorst> airlied: Slightly late drm-misc-next pull request sent. :-)
<mlankhorst> enjoy!
<tzimmermann> danvet, a-b by /me
<danvet> tzimmermann, thx
<tzimmermann> can i somehow leave a msg in patchwork. i'm not subscribed to the igt ML
<tzimmermann> ?
<danvet> no
<danvet> patchwork is a read-only view
<tzimmermann> no problem
<alyssa> jekstrand: help the CTS run faster, I mean
<krh> alyssa: we should all be running 16k pages...
<jekstrand> alyssa: I don't think so
<jekstrand> alyssa: It typically runs pretty small windows and you really don't want it clipping
<alyssa> jekstrand: Aw :(
thellstrom has joined #dri-devel
<alyssa> karolherbst: Guess who is crashing randomly in the multithreading tests
<alyssa> Meeeeeeeeeeeeeee
ngcortes has quit [Ping timeout: 480 seconds]
<alyssa> Oh, this is evil.
<alyssa> Two different threads are binding the shader CSO in parallel, and they're both triggering compiles
<imirkin> variants suck :)
<alyssa> I guess I need a lock?
<imirkin> alyssa: or flip on the cap which says your shader CSO's aren't portable across contexts
<alyssa> imirkin: They are portable, it's just fighting over the variant list
<alyssa> iris has a simple_mtx_t for the variants list, I guess I need to copy that
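The iris-style fix being described — a mutex guarding the shared variant list so two contexts can't both miss the cache and compile the same variant — can be sketched like this. `pthread_mutex_t` stands in for Mesa's `simple_mtx_t`, and the struct/function names are illustrative, not the real driver code:

```c
/* Sketch: find-or-create a shader variant under a per-CSO lock, so two
 * contexts binding the same CSO in parallel can't both trigger a compile.
 * pthread_mutex_t stands in for Mesa's simple_mtx_t; names are made up. */
#include <pthread.h>
#include <stdlib.h>

struct shader_variant {
   unsigned key;                   /* variant key (state that affects codegen) */
   struct shader_variant *next;
};

struct shader_cso {
   pthread_mutex_t lock;           /* protects the variants list */
   struct shader_variant *variants;
};

static struct shader_variant *
get_variant(struct shader_cso *cso, unsigned key)
{
   pthread_mutex_lock(&cso->lock);

   struct shader_variant *v;
   for (v = cso->variants; v; v = v->next)
      if (v->key == key)
         break;

   if (!v) {
      /* Cache miss: compile exactly once, still under the lock. */
      v = calloc(1, sizeof(*v));
      v->key = key;
      v->next = cso->variants;
      cso->variants = v;
   }

   pthread_mutex_unlock(&cso->lock);
   return v;
}
```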
<alyssa> zmike: as usual, the answer is either copy code or delete code
<daniels> krh: tell that to Arm SystemReady
<imirkin> never "write code" :)
<zmike> we always knew it would come back to this
<zmike> though now you may owe jekstrand royalties for leveraging his proprietary Delete The Code methodology
<jekstrand> zmike: Have you looked at my diffstats recently?
<zmike> no, too busy copying and pasting code
<jekstrand> lol
<zmike> last time I deleted something your payment plan almost bankrupted me
thellstrom has quit [Quit: thellstrom]
thellstrom has joined #dri-devel
<jekstrand> if (is_cow_mapping(vma->vm_flags)) printk(KERN_INFO "Moooooo");
<alyssa> jekstrand: loooooooo
<alyssa> l
<ccr> milking the puns, eh
ngcortes has joined #dri-devel
<alyssa> veals bad man
<jekstrand> Just being a bit cheesy
<ajax> cud y'all stop
tzimmermann has quit [Quit: Leaving]
<jekstrand> You got a beef with it?
<ajax> about to chuck the laptop out the window, yeah
<kisak> mooooving on...
<alyssa> Over the Moon - Idina Menzel
pochu has joined #dri-devel
<ajax> Reverend Horton Heat - Eat Steak
<zmike> ajax: that's a top pun
thellstrom has quit [Ping timeout: 480 seconds]
<alyssa> hate to interrupt this but what the heck
* alyssa is onto her next multithreading crash
<alyssa> thread A does a glFlush() of a batch, while thread B does a glTexSubImage() of the thing that batch writes, so both threads are trying to flush simultaneously and racing
<alyssa> I don't see other drivers locking for this..
<alyssa> oh, I guess fd_batch->simple_lock protects this case
<alyssa> robclark: ^
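The batch-flush race above — two threads both deciding the same batch needs flushing — is typically resolved by making the flush idempotent under a per-batch lock, roughly what `fd_batch`'s lock does in freedreno. A minimal sketch with invented names:

```c
/* Sketch: thread-safe, idempotent batch flush. The second racer takes the
 * lock, sees the batch already flushed, and returns without resubmitting.
 * struct batch and these fields are illustrative, not the freedreno API. */
#include <pthread.h>
#include <stdbool.h>

struct batch {
   pthread_mutex_t lock;
   bool flushed;
   int flush_count;       /* stands in for "submissions to the kernel" */
};

static void
batch_flush(struct batch *b)
{
   pthread_mutex_lock(&b->lock);
   if (!b->flushed) {
      b->flushed = true;
      b->flush_count++;   /* actual command submission would go here */
   }
   pthread_mutex_unlock(&b->lock);
}
```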
dogukan has quit [Quit: Konversation terminated!]
<daniels> ajax: fair enough, onglet off
sdutt has quit [Ping timeout: 480 seconds]
danvet has quit [Ping timeout: 480 seconds]
zackr has quit [Remote host closed the connection]
vivekk has joined #dri-devel
tlwoerner has quit [Remote host closed the connection]
mbrost_ has joined #dri-devel
zackr has joined #dri-devel
tlwoerner has joined #dri-devel
alyssa_ has joined #dri-devel
buhman has quit [Ping timeout: 480 seconds]
mauld_ has joined #dri-devel
alyssa has quit [Read error: Connection reset by peer]
vivijim is now known as Guest4012
vivijim has joined #dri-devel
mauld has quit [Read error: Connection reset by peer]
gpoo has quit [Ping timeout: 480 seconds]
Guest4012 has quit [Read error: Connection reset by peer]
vivek has quit [Ping timeout: 480 seconds]
buhman has joined #dri-devel
gpoo has joined #dri-devel
mbrost has quit [Ping timeout: 480 seconds]
sdutt has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
alyssa_ is now known as alyssa
alatiera has quit [Quit: Ping timeout (120 seconds)]
ngcortes has quit [Ping timeout: 480 seconds]
alatiera has joined #dri-devel
alatiera is now known as Guest4018
Guest4018 has quit []
thelounge92 has joined #dri-devel
gouchi has quit [Remote host closed the connection]
ngcortes has joined #dri-devel
heat has quit [Remote host closed the connection]
<alyssa> a box of mutexes turn my brain to macaroni
<HdkR> Sounds like my problem with a box of static initializers :D
<alyssa> HdkR: How do I know where to put the locks?!:p
<HdkR> Sprinkle them around until everything is recursively locked
<alyssa> uh oh
<mdnavare> jekstrand: With the Vulkan API using Mesa layers to choose the GPU, does it use the DRI PRIME parsing internally or a different mechanism to select the GPU
<mdnavare> ?
<jekstrand> It's totally different from DRI_PRIME
<idr> glennk: Wha... I've never even heard of that card.
mlankhorst has quit [Ping timeout: 480 seconds]
<jekstrand> alyssa: The only problem harder than figuring out where to put the locks is figuring out how to take out the bad one without the whole universe exploding. (-:
<alyssa> jekstrand: :_:
* jekstrand goes back to staring at dma_resv
<alyssa> jekstrand: gl
<alyssa> i mean
<alyssa> vk
<jekstrand> What about vk?
<HdkR> Good Luck -> GL -> VK -> No Luck had
<HdkR> :D
pcercuei has quit [Quit: dodo]
<alyssa> how have i already managed to introduce a deadlock
<karolherbst> alyssa: simple_mtx_assert_locked is key
<alyssa> karolherbst: eh?
<karolherbst> this way you can lock on the consumer side of functions without having to worry too much
<karolherbst> internal locking can make it simple to use functions and stuff, but sometimes the overall thing is too complex and you have to lock externally
<karolherbst> or just do it because it is simpler
<karolherbst> then you just verify in the racy code with simple_mtx_assert_locked that the lock was indeed locked
<alyssa> got it
<alyssa> ty
<karolherbst> also.. there is a fun fact about simple_mtx_t.. the unlock function also works on a mtx which isn't locked...
<karolherbst> not sure if that's intentional or not
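The "lock on the consumer side, assert inside" pattern karolherbst describes can be sketched as follows. `mtx_assert_locked` emulates `simple_mtx_assert_locked` with a trylock (which returns `EBUSY` on a default, non-recursive mutex that is already held); the counter and function names are made up:

```c
/* Sketch: external locking plus an internal lock-held assertion, in the
 * style of simple_mtx_assert_locked. Emulated here with pthread trylock;
 * the helper names and shared_counter are illustrative only. */
#include <assert.h>
#include <pthread.h>

static void
mtx_assert_locked(pthread_mutex_t *m)
{
   /* If we can take the lock, the caller wasn't holding it: that's a bug.
    * On a default (non-recursive) mutex, trylock returns EBUSY whenever
    * the mutex is held, including by the current thread. */
   int r = pthread_mutex_trylock(m);
   assert(r != 0);
}

static int shared_counter;

/* Internal helper: documents (and verifies) that the caller holds the lock. */
static void
bump_counter_locked(pthread_mutex_t *m)
{
   mtx_assert_locked(m);
   shared_counter++;
}

static void
bump_counter(pthread_mutex_t *m)
{
   pthread_mutex_lock(m);
   bump_counter_locked(m);
   pthread_mutex_unlock(m);
}
```

The point of the pattern: the racy helper stays simple and lock-free internally, while the assertion catches any call path that forgot to take the lock externally.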
<alyssa> karolherbst: I think I better start with a more basic question -- what are the rules around multithreading in Gallium?
<karolherbst> uhm....
<karolherbst> everything goes?
<alyssa> Previously, I thought it was "anything can access the screen, but only one thread can access the context"
<alyssa> clearly that is wrong.
<imirkin> same as the rules of fight club...
<karolherbst> alyssa: well
<jenatali> alyssa: That was my understanding
<imirkin> only one thread per context
<karolherbst> with GL you have to make a context current
<imirkin> but multiple threads per screen
<karolherbst> so you shouldn't have multiple threads on the same context doing stuff
<alyssa> this CTS case seems to disagree
<karolherbst> sure?
<alyssa> dEQP-EGL.functional.sharing.gles2.multithread.random.images.copytexsubimage2d.1
<karolherbst> are you sure it does operations on the same context from multiple threads?
<karolherbst> I went through all of this with nouveau already :D
<alyssa> I, errr....
<karolherbst> and I am feairly sure those tests are fine
<alyssa> wait. derp. uh
<karolherbst> *fairly
<imirkin> alyssa: resources are context-less
<karolherbst> :)
<alyssa> imirkin: yeah. doh
<alyssa> the second thread is doing a resource transfer which is triggering a flush of the batch/context writing it
<alyssa> and that's where all hell breaks loose.
<karolherbst> :)
<karolherbst> "don't do that" or so?
alanc has joined #dri-devel
* alyssa wonders how many gallium drivers fail that particular test.
<karolherbst> alyssa: all except iris and si I guess
<karolherbst> well
<karolherbst> even iris I got to fail on those MT tests
<alyssa> isn't iris conformant?
<karolherbst> yeah, but those aren't CTS tests I am super sure about
<alyssa> ./cts-runner runs them
<karolherbst> did you specify the target?
<alyssa> oh wait
<alyssa> that wasn't cts-runner, that was glcts dEQP-EGL.* ughh
<karolherbst> ahh
<karolherbst> glcts runs _all_ tests by default :)
<karolherbst> or so
<alyssa> ahhhhh
<karolherbst> but in case you care about multithreading (android _does_) then you want to fix those asap
<karolherbst> launch the android emulator with hw accel enabled
<karolherbst> and watch your machine burn down
<karolherbst> alyssa: a bit of history.. deqp was created to conformance test android devices :D
<karolherbst> and khronos used that framework for their GL CTS
<karolherbst> so dEQP-* tests are generally those google written ones
camus1 has joined #dri-devel
camus has quit [Remote host closed the connection]
<karolherbst> and khronos tests start with GTF-* or KHR-*
<imirkin> google actually acquired the company that wrote the dEQP tests :)
<karolherbst> details :D
<alyssa> karolherbst: where's the mustpass file for Khronos conformance?
<karolherbst> anyway.. krhonos forked it, and now dEQP got merged into the khronos CTS :D so it's all one now
<karolherbst> alyssa: check the gist I linked above
<karolherbst> it contains all the paths
<alyssa> ack
sdutt has quit [Ping timeout: 480 seconds]
<karolherbst> khronos_mustpass are the ones khronos cares about :)
<karolherbst> aosp is the android stuff obviously
<karolherbst> alyssa: but you have to run it inside the directory of that glcts binary
<karolherbst> otherwise it fails to locate resources
<alyssa> karolherbst: I see you special casing multi thread in that file
<karolherbst> yeah...
<karolherbst> for a reason
<karolherbst> that loop is just merging all those files together
<karolherbst> makes it trivial to test for regressions this way
<karolherbst> so I let it run and after 5 hours I have the results or so
<karolherbst> anyway.. that script takes a argument where it writes all fails into
<karolherbst> but I think you want to start with a single mustpass file anyway
<alyssa> aside- where do I get access to ?
<karolherbst> alyssa: do you have a khronos account?
<alyssa> yes
<karolherbst> a proper one I mean?
<alyssa> no
<karolherbst> does your employer have an agreement with khronos? :D
<alyssa> probably
<karolherbst> normally you just register with your work email address and you get access to everything
<alyssa> I have but it doesn't have access to anything useful
<karolherbst> I guess you used your private email address?
<alyssa> no...
<karolherbst> mhh
<karolherbst> let's see
<karolherbst> heh
<karolherbst> I am in the Tracker group at least
<karolherbst> and you are not
<karolherbst> yeah.. maybe collabora isn't allowed?
<karolherbst> not sure
<alyssa> daniels: Do we need to pay more money? :P
<karolherbst> alyssa: that tracker is used for submission results and stuff, no?
<karolherbst> or what do you want with it?
<karolherbst> alyssa: at least the internal repo has 6 commits more
<karolherbst> so...
<karolherbst> you might not need it depending on what you actually want from there :D
<alyssa> karolherbst: I keep seeing references for the issue tracker
<karolherbst> ahh
<imirkin> though i don't even get access
<karolherbst> imirkin: sad :(
<imirkin> on the bright side, they don't tell me to file issues on the internal khr tracker when i go via github :)
<karolherbst> alyssa: what's weird is, that collabora and red hat have the same level
<karolherbst> alyssa: daniels even has access to that group :O
<karolherbst> I think there is something wrong with your account then
<Sachiel> mattst88 was having issues getting to gerrit a few days ago, might be related
<alyssa> .
<alyssa> Sachiel: It's been like this since i opened the acct
<alyssa> wait wait wait
<alyssa> transfer_map gets passed a pipe_context ?
<Sachiel> yeah, I mean he could get to gitlab/tracker, but gerrit was forbidden
<karolherbst> Sachiel: doubtful, alyssa simply isn't in the tracker group
<mattst88> any clue why I just get 'Forbidden' on after logging in?
<karolherbst> no idea
<karolherbst> works for me
<alyssa> karolherbst: it's because of my views on internet privacy
<alyssa> i deleted the tracker
<alyssa> ;p
<karolherbst> :)
<mattst88> I'm in "General", "SPIR", and "Vulkan-GL-CTS" groups on
<karolherbst> anyway.. don't know who to contact with that stuff
<mattst88> are you in any additional groups that might somehow control gerrit access?
<karolherbst> mattst88: I am in.. too many to count
<karolherbst> ehh wait
<mattst88> I emailed to get them to rename my gitlab account from mattst881 -> mattst88, FWIW
<karolherbst> no
<karolherbst> there is a join button everywhere :D
<mattst88> just search for 'Leave'
<karolherbst> I am only in General
<karolherbst> and OpenCL
<mattst88> okay, thanks, so it's likely not working group membership that controls this
<alyssa> I am in General, OpenGL, OpenGL ES, Vulkan-GL-CTS
<karolherbst> mattst88: our gitlab groups are also equal
karolherbst has quit [Remote host closed the connection]
sdutt has joined #dri-devel
karolherbst has joined #dri-devel
<karolherbst> heh.. why did I just lose my connection? nvm
CME has quit []
CME has joined #dri-devel
<karolherbst> mattst88: anyway.. I guess your rename messed it up
<karolherbst> and your old name is in some stupid list they modify with scripts or so
<mattst88> karolherbst: no, it wasn't working beforehand either
<karolherbst> ahh
<Sachiel> maybe when the rename happens you'll get the old access you had while at Intel
<mattst88> I just emailed so hopefully they can help
<karolherbst> probably something stupid, like having a number in your account name
<mattst88> Sachiel: heh, no, they renamed my old account to mattst88_intel :)
jkrzyszt has quit [Ping timeout: 480 seconds]
jkrzyszt has joined #dri-devel
<mattst88> well, maybe it is somehow related. almost instant reply:
<mattst88> > Oops. I need to update that as well. Will do it first thing in the morning
<alyssa> Hnnngh
<alyssa> why is multithreading so hard
<alyssa> it just makes my brain melt
<alyssa> maybe that's my cue to call it a day
<HdkR> Threads were a mistake, lets go back to only single core machines :D
<dcbaker> because we tend to only think about one thing at a time?
<Sachiel> no no, keep the multi core machines, but run one instance of MS-DOS on each
<imirkin> ahhh the comforts of 16-bit real mode
<HdkR> The horrors of 16-bit real mode
<imirkin> dude wtvr. you don't like something, just modify the global interrupt descriptor table. that'll fix ya right up
<imirkin> just don't ... you know ... clear it and then wait for a keypress :)
<imirkin> definitely never did that.
<karolherbst> alyssa: I can talk about my experiences with it with nouveau :)
<karolherbst> VM just sacrificed performance to account for stupid devs, remove it and just have perfect code, duh ¯\_(ツ)_/¯
<alyssa> ....ralloc isn't thread safe is it
<karolherbst> of course not
<imirkin> if you have to ask
<imirkin> then you already know the answer
<alyssa> cry emoji
<karolherbst> alyssa: just use the context as the context?
<karolherbst> dunno :D
<karolherbst> would be a stupid idea actullay
<karolherbst> *actually
<alyssa> karolherbst: sure, but using the screen is unsafe unless you lock the whole screen
<alyssa> what the helgrind
<karolherbst> yeah..
<karolherbst> you don't have to put everything into a huge ralloc context
<karolherbst> shaders can be their own
<karolherbst> as they get explicitly destroyed anyway
ngcortes_ has joined #dri-devel
vivijim is now known as Guest4036
vivijim has joined #dri-devel
<alyssa> in this case my use of ralloc was just lazy, using calloc/free directly should fix
<daniels> karolherbst: we're a Contributor member but not Adopter - adoption is under SPI since it's Mesa
<karolherbst> alyssa: ahh
pzanoni` has joined #dri-devel
<daniels> alyssa: mattst88 is right - is the one to sort your access levels, feel free to cc me
<karolherbst> daniels: sure.. but you can access that repo while alyssa can't :)
<daniels> yeah
<daniels> hence emailing Jon :P
jewins1 has joined #dri-devel
ramaling_ has joined #dri-devel
<karolherbst> why is it that late already
<daniels> karolherbst: you usually keep west coast time tbf
<karolherbst> :D
<alyssa> Oh hey, it passes now! that was the bug *smacks self*
<karolherbst> it's not that bad
<alyssa> well. probably more bugs underneath the surface :p
Guest4036 has quit [Ping timeout: 480 seconds]
mattrope has quit [Ping timeout: 480 seconds]
ngcortes has quit [Ping timeout: 480 seconds]
mattrope has joined #dri-devel
jewins has quit [Ping timeout: 480 seconds]
ramaling has quit [Ping timeout: 480 seconds]
pzanoni has quit [Ping timeout: 480 seconds]
pzanoni` is now known as pzanoni
<daniels> alyssa: nah, surely you've fixed all of them by now!
iive has quit []
<alyssa> daniels: I like your attitude better
ngcortes_ has quit []
ngcortes_ has joined #dri-devel
<Sachiel> if you don't pick up the rocks they hide under, no one needs to know the bugs even exist
<alyssa> rsrc->track.users is racy ... might replace with an atomic ...
<karolherbst> alyssa: we have c11 now :)
<karolherbst> so.. we can use _Atomic
<alyssa> it's a bit set in a u32. would need to add some util code to make the interface nice.
<karolherbst> in case you feel lazy using those atomic helpers
<karolherbst> alyssa: why?
<karolherbst> mark the field as _Atomic
<alyssa> uhhhh
<karolherbst> well.. it won't fix racy state
<karolherbst> but...
<karolherbst> so this is the issue with atomics: if you read and write the value later it's still as racy as before :D
<alyssa> nyeh
<karolherbst> but if you just flip a bit and don't depend on it earlier, you can just mark the field as _Atomic and be happt
<karolherbst> *happy
<alyssa> nod
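The `_Atomic` bit-set being discussed — a u32 of per-batch user bits where the set itself must be race-free even though broader state may still be racy — can be sketched with C11 atomics. `track_users` and these helpers are illustrative names, not the driver's real fields:

```c
/* Sketch: a racy bit-set in a u32 replaced with a C11 atomic
 * read-modify-write, per the discussion above. Names are made up. */
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint32_t track_users;

static void
mark_user(unsigned batch_idx)
{
   /* atomic OR: two threads setting different bits can't lose updates,
    * which a plain "track_users |= bit" could. */
   atomic_fetch_or(&track_users, (uint32_t)1 << batch_idx);
}

static int
is_user(unsigned batch_idx)
{
   return (atomic_load(&track_users) >> batch_idx) & 1;
}
```

As karolherbst notes, this only makes the single bit-flip atomic; a read followed by a dependent write later is still a race.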
<alyssa> I wonder if we could get CI coverage of dEQP-EGL.functional.sharing.gles2.multithread.*
<karolherbst> :D
<karolherbst> have fun
<HdkR> I still want to know a use for atomic negate :P
<alyssa> it looks like freedreno does
<karolherbst> HdkR: silly hardware?
<HdkR> Silly hardware
<karolherbst> you assume that a negate is a one op operation :p
<HdkR> It can be if you can sub with a zero register :D
<alyssa> DEQP_VER egl...
<karolherbst> HdkR: :D
<imirkin> HdkR: that assumes you have an atomic sub
ngcortes_ has quit []
ngcortes_ has joined #dri-devel
<HdkR> That too
ngcortes_ has quit [Remote host closed the connection]
<HdkR> That can just be emulated with an atomic add. Just follow the trail ;)
<karolherbst> sooo
<karolherbst> context reset try 2
<glennk> have to wait for them to surface first before you can follow the trail
<karolherbst> HdkR: just loop with an atomic_inc/dec ¯\_(ツ)_/¯
<HdkR> oof
<karolherbst> HdkR: you'd think it's a lot of iterations, but be smart and check for > 0x80000000 so you can just abuse overflows and go the other direction
idr has quit [Quit: Leaving]
minecrell has quit [Quit: Ping timeout (120 seconds)]
minecrell has joined #dri-devel
<HdkR> karolherbst: I think I'll just use a CAS loop :P
<karolherbst> HdkR: what if you don't have CAS?
<HdkR> load exclusive, store exclusive
<HdkR> ARM exclusive monitor is great like that
<karolherbst> HdkR: sure, but think of a GPU which doesn't have those locked ld/st semantics :D
<HdkR> Don't emulate atomic neg on those :D
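HdkR's CAS loop for emulating an atomic negate, and imirkin's point that atomic sub can be built from atomic add, can both be sketched in C11 atomics (the GPU in question would use its own compare-and-swap; these function names are made up):

```c
/* Sketch: emulating atomic negate with a compare-exchange loop, and
 * atomic sub via atomic add, per the discussion. Illustrative only. */
#include <stdatomic.h>
#include <stdint.h>

static int32_t
atomic_negate(_Atomic int32_t *p)
{
   int32_t old = atomic_load(p);
   /* Retry until no other thread modified *p between our load and store;
    * on failure, compare_exchange refreshes 'old' with the current value. */
   while (!atomic_compare_exchange_weak(p, &old, -old))
      ;
   return old;   /* return the pre-negate value, atomic-op style */
}

static int32_t
atomic_sub_via_add(_Atomic int32_t *p, int32_t v)
{
   /* sub is just an add of the negated operand */
   return atomic_fetch_add(p, -v);
}
```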