ngcortes has quit [Remote host closed the connection]
toolchains has quit [Read error: Connection timed out]
Daanct12 has joined #dri-devel
ybogdano has quit [Ping timeout: 480 seconds]
toolchains has joined #dri-devel
toolchains has quit [Ping timeout: 480 seconds]
sul has quit [Ping timeout: 480 seconds]
sul has joined #dri-devel
toolchains has joined #dri-devel
toolchains has quit [Read error: Connection timed out]
heat has quit [Ping timeout: 480 seconds]
pixelclu- has joined #dri-devel
pixelcluster has quit [Ping timeout: 480 seconds]
kts has quit [Ping timeout: 480 seconds]
anholt has joined #dri-devel
bmodem has joined #dri-devel
ella-0_ has joined #dri-devel
ella-0 has quit [Read error: Connection reset by peer]
ppascher has joined #dri-devel
JohnnyonFlame has joined #dri-devel
JohnnyonF has quit [Ping timeout: 480 seconds]
toolchains has joined #dri-devel
Daanct12 has quit [Ping timeout: 480 seconds]
toolchains has quit [Ping timeout: 480 seconds]
Daanct12 has joined #dri-devel
toolchains has joined #dri-devel
kts has joined #dri-devel
Duke`` has joined #dri-devel
off^ has joined #dri-devel
shankaru has joined #dri-devel
jewins has quit [Ping timeout: 480 seconds]
Daanct12 has quit [Remote host closed the connection]
Daanct12 has joined #dri-devel
YuGiOhJCJ has quit [Remote host closed the connection]
YuGiOhJCJ has joined #dri-devel
Daanct12 has quit [Quit: Leaving]
off^ has quit [Ping timeout: 480 seconds]
toolchains has quit [Ping timeout: 480 seconds]
itoral has joined #dri-devel
Daanct12 has joined #dri-devel
shankaru has left #dri-devel [#dri-devel]
shankaru1 has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
JohnnyonF has joined #dri-devel
Daanct12 has quit [Remote host closed the connection]
Daanct12 has joined #dri-devel
toolchains has joined #dri-devel
JohnnyonFlame has quit [Ping timeout: 480 seconds]
Daaanct12 has joined #dri-devel
toolchains has quit [Ping timeout: 480 seconds]
srslypascal has quit [Remote host closed the connection]
srslypascal has joined #dri-devel
off^ has joined #dri-devel
slattann has joined #dri-devel
toolchains has joined #dri-devel
adarshgm has joined #dri-devel
<adarshgm>
test message
Daanct12 has quit [Ping timeout: 480 seconds]
alanc has quit [Remote host closed the connection]
MajorBiscuit has joined #dri-devel
toolchains has quit [Ping timeout: 480 seconds]
alanc has joined #dri-devel
Company has quit [Quit: Leaving]
toolchains has joined #dri-devel
tzimmermann has joined #dri-devel
<tomeu>
robclark: what happens with the t760 is that it regressed badly in 5.19-rcX, and people are running it in repos that don't have the fix in their -external-fixes branch
<tomeu>
so it times out
<tomeu>
daniels: I can reduce the limit in my repo, sure
<tomeu>
hopefully there aren't that many forks out there yet
toolchains has quit [Ping timeout: 480 seconds]
ppascher has quit [Ping timeout: 480 seconds]
toolchains has joined #dri-devel
toolchains has quit [Ping timeout: 480 seconds]
<pq>
kchibisov, EGL Surfaceless platform supports pbuffers, because they are the way to get an EGLSurface there.
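As a concrete illustration of what pq describes, here is a minimal sketch of getting a pbuffer through the Surfaceless platform. It assumes a Mesa EGL that advertises EGL_MESA_platform_surfaceless and an EGL 1.5 eglGetPlatformDisplay; error handling is omitted and the helper name is made up.

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    /* Sketch: pbuffer rendering on the Surfaceless platform (assumes
     * EGL_MESA_platform_surfaceless is available). */
    static EGLSurface make_surfaceless_pbuffer(int width, int height,
                                               EGLDisplay *out_dpy)
    {
        EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_SURFACELESS_MESA,
                                               EGL_DEFAULT_DISPLAY /* i.e. NULL */,
                                               NULL);
        eglInitialize(dpy, NULL, NULL);

        static const EGLint cfg_attribs[] = {
            EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint n;
        eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

        const EGLint pbuf_attribs[] = {
            EGL_WIDTH, width,
            EGL_HEIGHT, height,
            EGL_NONE
        };
        *out_dpy = dpy;
        return eglCreatePbufferSurface(dpy, cfg, pbuf_attribs);
    }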
nchery has quit [Read error: Connection reset by peer]
off^ has quit [Ping timeout: 480 seconds]
toolchains has joined #dri-devel
rasterman has joined #dri-devel
toolchains has quit [Ping timeout: 480 seconds]
whald has joined #dri-devel
lynxeye has joined #dri-devel
ahajda__ has joined #dri-devel
whald has quit []
whald has joined #dri-devel
mvlad has joined #dri-devel
JohnnyonFlame has joined #dri-devel
<whald>
is GBM supposed to be thread-safe? so can I gbm_bo_alloc on one thread and then gbm_bo_map/unmap on a different thread?
JohnnyonF has quit [Ping timeout: 480 seconds]
<whald>
because it crashes for me on Intel and AMD, here are the stack traces: https://pastebin.com/LLP1u6ct (Intel is on unmap, AMD is on map)
<pq>
whald, if you do 'git grep pthread' in Mesa's src/gbm/, there are no hits. So I guess that's your answer.
<pq>
hmm, there is 'mtx' though, but it looks like there are very few uses of that
<lynxeye>
whald: That's a bit of a gray area. It seems the map/unmap calls are using context operations, which are only supposed to be called by the thread where the context is current. However there is no way to make the context current on the calling thread via the GBM API...
toolchains has joined #dri-devel
<pq>
lynxeye, how does that work in single-threaded programs even? Creating a gbm_device implicitly makes a context and makes it current?
<whald>
lynxeye, that's a bit of a bummer, as map/unmap can be quite expensive. i'm trying to offload some slow operations from the main thread to some thread pool. :-/
<lynxeye>
pq: I would need to look up the details again, but I think that's effectively what happens.
<pq>
...and what context is that, exactly? Like, if I change my EGLContext in the same thread, does that screw up GBM?
<pq>
e.g. if I use GBM stand-alone without EGL, but I also use EGL Device platform to make an EGLContext to use GL with.
<pq>
there is absolutely no indication of any thread-locality or contexts going on in the GBM API, so this is all a big surprise to me
FireBurn has joined #dri-devel
<FireBurn>
Vulkan is broken on my 6800M *again*
<FireBurn>
Bisecting now
<pq>
I would have assumed that if I do my own locking around GBM, access from multiple threads would be fine.
fahien has joined #dri-devel
<lynxeye>
pq: All good questions, to which I don't have a definite answer without reading the code again. gbm_map/unmap was always kind of a strange thing. All other GBM operations deal with allocations, etc., which are screen-level operations that are thread-safe. map/unmap is the only thing in the GBM API that needs context operations, and I don't think anyone gave any thought to what the usage model for this is supposed to look like.
<pq>
lynxeye, interesting
<FireBurn>
What are the chances of getting a PRIME system added to CI? I mean it breaks on a weekly basis
bmodem has quit []
<pq>
incidentally, gbm_dri_bo_map() does use a mutex that literally nothing else does.
<lynxeye>
The best advice I can give is: don't use GBM map/unmap, but import the GBM BOs into a higher level API, where this context stuff is actually defined and use that to fill the buffers.
<pepp>
FireBurn: fwiw I tested PRIME earlier this week and it was working fine, so the regression should be recent
<pepp>
and I agree: having a PRIME system in CI would be useful
<pq>
aha, looks like the only purpose of that mutex is to protect dri->context, which gbm_dri_bo_map() creates on demand, and passes it explicitly to dri->image->mapImage()
<pq>
so it doesn't look like it's thread-local in any way
<pq>
but it does mean that if you call GBM from multiple threads, you are using the dri->image API from multiple threads simultaneously with one and the same context
<FireBurn>
pepp: Yeah, me too. I'm going to hazard a guess at the vulkan/wsi stuff that's just landed
<pq>
also, gbm_dri_bo_unmap() uses dri->context without any locking or CPU memory barriers
<lynxeye>
pq: right, and context operations are not thread-safe, so that's a massive footgun right there
toolchains has quit [Ping timeout: 480 seconds]
<whald>
lynxeye, pq so with GBM map/unmap being not really up to the task, what can I do instead of going full OpenGL to get my hands on the pixels?
<pq>
whald, there's your answer: you need to make sure only a single thread can use stuff related to the same gbm_device at a time.
pcercuei has joined #dri-devel
<pq>
(i'm jumping to the assumption that gbm_device objects do not share contexts.)
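A sketch of the approach pq is suggesting, assuming caller-side locking is sufficient (which, per lynxeye's caveat about context operations, is not something the GBM API actually promises): take one application-level mutex around every call that touches the same gbm_device. The gbm_lock/locked_* names are made up for this sketch; gbm_bo_map/gbm_bo_unmap are the real API.

    #include <gbm.h>
    #include <pthread.h>
    #include <stdint.h>

    /* One lock per gbm_device; the name and scope are invented here. */
    static pthread_mutex_t gbm_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Map a BO with the device-wide lock held, since gbm_bo_map() may touch
     * a shared DRI context under the hood (see the discussion above). */
    static void *locked_bo_map(struct gbm_bo *bo, uint32_t w, uint32_t h,
                               uint32_t *stride, void **map_data)
    {
        pthread_mutex_lock(&gbm_lock);
        void *ptr = gbm_bo_map(bo, 0, 0, w, h, GBM_BO_TRANSFER_READ,
                               stride, map_data);
        pthread_mutex_unlock(&gbm_lock);
        return ptr;
    }

    static void locked_bo_unmap(struct gbm_bo *bo, void *map_data)
    {
        pthread_mutex_lock(&gbm_lock);
        gbm_bo_unmap(bo, map_data);
        pthread_mutex_unlock(&gbm_lock);
    }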
<pq>
oh, there is a GBM_ALWAYS_SOFTWARE env var, hadn't heard about that one before
<whald>
pq, hmm, having a separate gbm_device for the background thread seems doable, but to pass the buffers i'll have to export / re-import I guess. this is getting out of hand. multi-threading was a mistake, again. :-)
<pq>
yup :-)
<pq>
what else do you do with your gbm_device than just alloc/import/map/unmap?
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
<whald>
i think just doing the map on the main thread and doing the pixel peeping on a separate thread will not work either, because at least on intel there is a chance that the map will not allocate scratch space but instead do a mmap of the GPU memory, and then there would be some cache coherency stuff missing.
<whald>
pq, i'm also using gbm surfaces to have something to attach to the drm device for scanout. but that's about it. does GBM offer more that I may abuse? :)
<pq>
hmm, that doesn't sound legit for GBM API... I mean, the API does expect you to map and unmap to flush.
bmodem has joined #dri-devel
<pq>
but if you do that, I wouldn't expect which thread actually accesses the data to be significant
<MrCooper>
yeah, never unmapping will result in sadness if the implementation uses a scratch buffer
<pq>
whald, yeah, that's about it. But since you have gbm_surface, don't you also go all the way to EGL and OpenGL anyway?
<whald>
pq, right, mapping / unmapping on the main thread would solve my immediate problem because with linear buffers on intel those operations are cheap. and it will probably still work on other platforms, though performance might be degraded.
<MrCooper>
whald: that is required for correctness, deferring the unmap will not work correctly in general e.g. with radeonsi
<pq>
cross-domain access is hard :-)
<whald>
pq, yes, we're using OpenGL for rendering. but I don't have an OpenGL context at hand in the part of the application where I'm doing the separate-thread-pixel-peeping. and that would be quite a refactor...
<MrCooper>
whald: in other words, writes to the mapped memory may not be visible to the GPU before unmap (and GPU writes to the buffer may not be visible in the mapped memory after map)
<whald>
MrCooper, but having a sequence where it goes like T1: create -> T1: map -> T2: peep pixels -> T1: unmap would be fine, right? T1 is responsible for managing the broader state and will e.g. block the buffer from re-use until T2 is done, that's the way it is set up already.
<MrCooper>
I guess that should work, not sure I see what problem it solves though :)
<MrCooper>
T2 or any other thread making use of the buffer with the GPU will need to wait for T1 to unmap first anyway
toolchains has joined #dri-devel
<whald>
MrCooper, the pixel peeping is pretty slow, almost 200ms and I absolutely cannot block the main thread for that long.
adarshgm has quit [Ping timeout: 480 seconds]
<MrCooper>
ah, if it's only for reading from the buffer, that makes sense
<whald>
MrCooper, I should have added that T1 does more interesting things while T2 is churning along, but the buffer and its mapping are private to T2 until it finishes.
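The T1/T2 handoff whald describes, as a rough sketch: all GBM calls stay on T1, the worker only reads through the mapped pointer, and the BO is not reused until T1 has unmapped. The job struct is hypothetical, and a real application would use a completion notification rather than the blocking pthread_join shown here.

    #include <gbm.h>
    #include <pthread.h>
    #include <stdint.h>

    /* Hypothetical job handed from the main thread (T1) to a worker (T2). */
    struct peep_job {
        struct gbm_bo *bo;
        void *pixels;      /* CPU pointer from gbm_bo_map(), valid until unmap */
        uint32_t stride;
        void *map_data;
    };

    /* T2: read-only pixel peeping; must not touch GBM itself. */
    static void *peep_pixels(void *arg)
    {
        struct peep_job *job = arg;
        /* ... JPEG-encode / inspect job->pixels using job->stride ... */
        return NULL;
    }

    /* T1: owns all GBM calls.  Maps before starting T2, unmaps after it is
     * done, and only then hands the BO back for scanout / reuse. */
    static void preview_frame(struct gbm_bo *bo, uint32_t w, uint32_t h)
    {
        struct peep_job job = { .bo = bo };
        job.pixels = gbm_bo_map(bo, 0, 0, w, h, GBM_BO_TRANSFER_READ,
                                &job.stride, &job.map_data);
        if (!job.pixels)
            return;

        pthread_t worker;
        pthread_create(&worker, NULL, peep_pixels, &job);
        /* T1 is free to do other work here, as long as it does not reuse
         * this BO until the worker has finished. */
        pthread_join(worker, NULL);

        gbm_bo_unmap(bo, job.map_data);
    }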
<kchibisov>
pq: doesn't the surfaceless platform require a different extension when creating a display?
<pq>
kchibisov, EGL Surfaceless platform is a different platform, yes. It does not use an EGLNativeDisplay.
<kchibisov>
Yeah, I was talking about EGL_KHR_platform_wayland, since with it you don't have pbuffers at all.
<pq>
I know.
<pq>
If you really want pbuffers, Surfaceless can give them to you.
<MrCooper>
whald: almost 200ms seems very slow though, what resolution buffer is that?
<kchibisov>
Oh, I don't want them; I was curious since I've seen this line in the extension.
<icecream95>
MrCooper: I've seen cases where mapping a large BO device-side (i.e. importing it) takes 100 ms or so, I could believe that a CPU map could take as long
<pq>
unlike pixmaps which simply don't exist in Wayland
toolchains has quit [Remote host closed the connection]
<pq>
kchibisov, btw. the notes about EGL_DEFAULT_DISPLAY on Wayland are absolutely nonsensical. I've no idea why it was allowed. Maybe it's a shortcut for doing off-screen rendering on the same GPU as the winsys, but not being able to interact with the winsys at all.
sravn has joined #dri-devel
<pq>
you can't even make an EGLSurface with Wayland EGL_DEFAULT_DISPLAY
<whald>
MrCooper, 200ms is for a full HD buffer. the code doing the processing is not (yet) streamlined at all, I can probably speed it up by a factor of 5 or more. but that's still way too slow for the main thread.
rgallaispou1 has quit [Read error: Connection reset by peer]
<MrCooper>
factor 5 would still mean 25 fps max, though multiple reader threads might work
JohnnyonFlame has quit [Ping timeout: 480 seconds]
<whald>
MrCooper, I'm software-decoding some custom "video" stream coming in over UDP and want to exfiltrate previews (so a JPEG encode every 10s is what happens on the "other" thread). we're targeting an intel Atom CPU/GPU combo which has only two cores anyway, so using more threads won't work. processing a UDP packet is in the 10-20 us range, and we're receiving about 40k of those if things get busy. so the JPEG encode is orders of magnitude slower on that Atom than anything else we do.
rkanwal has joined #dri-devel
<linkmauve>
whald, have you tried using the hardware JPEG encoder, if your SoC has one?
<linkmauve>
Check `vainfo`; if there is a VAProfileJPEGBaseline with encode support (VAEntrypointEncPicture), it could let you offload that operation.
<whald>
linkmauve, yep, and it's pretty fast. *but* with GBM I cannot directly create NV12 BOs (not supported), so I go the 1 R8 BO (for Y) and one RG88 BO (for UV) route... *but* the vaapi API refuses to accept multi-object NV12 buffers. pretty sad, eh? :-)
<linkmauve>
Can’t you allocate a single bo and import it into EGL with offsets so that it ends up in a layout that libva accepts?
<whald>
it's not the vaapi API per se, but the intel-media-driver chickens out if the UV plane has offset == 0, which effectively means the UV has to come after the Y data.
<linkmauve>
:|
<linkmauve>
You should open an issue for that.
<linkmauve>
Does the i915 driver work better?
JohnnyonFlame has joined #dri-devel
<whald>
linkmauve, I already thought about arranging the data in a single BO by hand, but it seems there are various requirements to get it right, all somewhere in the intel-media-driver. maintaining this would be hell. so i thought encoding a JPEG every 10s with a core to spare can't be that hard. or so.
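For reference, the single-BO arrangement linkmauve suggested would look roughly like the sketch below: one linear R8 BO sized width x (height * 3/2), with the Y plane at offset 0 and the interleaved UV plane following it, so its offset is non-zero as the intel-media-driver wants. Whether a given driver accepts this exact layout (stride alignment, padding between planes) is precisely the maintenance worry whald raises, so treat this as an assumption-laden illustration; the resulting fd/offsets/pitch would then be fed to the dmabuf import path of libva or EGL.

    #include <gbm.h>
    #include <stdint.h>

    /* Hypothetical helper: carve an NV12 layout out of one linear BO so the
     * UV plane sits after the Y plane (offset != 0). */
    struct nv12_layout {
        struct gbm_bo *bo;
        int fd;
        uint32_t stride;     /* same pitch assumed for Y and UV in this sketch */
        uint32_t y_offset;
        uint32_t uv_offset;
    };

    static int alloc_nv12_single_bo(struct gbm_device *gbm,
                                    uint32_t width, uint32_t height,
                                    struct nv12_layout *out)
    {
        /* Allocate an R8 buffer tall enough for Y (height rows) plus
         * interleaved UV (height/2 rows). */
        out->bo = gbm_bo_create(gbm, width, height * 3 / 2,
                                GBM_FORMAT_R8,
                                GBM_BO_USE_LINEAR | GBM_BO_USE_RENDERING);
        if (!out->bo)
            return -1;

        out->fd = gbm_bo_get_fd(out->bo);
        out->stride = gbm_bo_get_stride(out->bo);
        out->y_offset = 0;
        out->uv_offset = out->stride * height;   /* UV plane right after Y */
        return 0;
    }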
pcercuei has quit [Read error: Connection reset by peer]
off^ has joined #dri-devel
pcercuei has joined #dri-devel
<whald>
linkmauve, which i915 driver do you mean?
<linkmauve>
The vaapi one.
frieder has joined #dri-devel
<linkmauve>
It’s named libva-intel-driver in my distribution, and provides /usr/lib/dri/i965_drv_video.so.
<linkmauve>
I’ve usually had better results on my Kaby Lake than with the new one, for instance it does support VP9 encoding while the newer one doesn’t.
<whald>
linkmauve, i just gave it a try: "libva error: /run/opengl-driver/lib/dri/i965_drv_video.so init failed". hmm.
<whald>
(I'm on nixos, that's why the path is strange)
fahien1 has joined #dri-devel
fahien is now known as Guest4519
fahien1 is now known as fahien
frieder has quit [Remote host closed the connection]
Guest4519 has quit [Ping timeout: 480 seconds]
bmodem has quit []
off^ has quit [Ping timeout: 480 seconds]
<FireBurn>
Looks like it was ff13fc381d59fc8a5b06a40b6bb857503c6e7711 that broke things for me
fahien has quit [Ping timeout: 480 seconds]
fahien has joined #dri-devel
heat has joined #dri-devel
adarshgm has joined #dri-devel
rkanwal has quit [Ping timeout: 480 seconds]
<MrCooper>
Venemo: ^
kts has quit [Ping timeout: 480 seconds]
adarshgm has quit [Ping timeout: 480 seconds]
<hakzsam>
FireBurn: broke what?
gawin has joined #dri-devel
kts has joined #dri-devel
JoniSt has joined #dri-devel
Daaanct12 has quit [Remote host closed the connection]
JohnnyonFlame has quit [Read error: Connection reset by peer]
itoral has quit [Remote host closed the connection]
rkanwal has joined #dri-devel
itoral has joined #dri-devel
nchery has joined #dri-devel
icecream95 has quit [Ping timeout: 480 seconds]
<FireBurn>
Rendering on my 6800M
<FireBurn>
Might be a prime thing
<FireBurn>
Or it might not be that commit :/ Tried reverting it and still seeing the issue
<Venemo>
MrCooper: yeah?
<Venemo>
FireBurn: please open an issue report in the Mesa gitlab, choose the radeon Vulkan issue template and fill in the details of how to reproduce your issue. Thank you.
<Venemo>
That is, assuming you experience the problem running a Vulkan app
off^ has quit [Ping timeout: 480 seconds]
aravind has quit [Ping timeout: 480 seconds]
itoral has quit [Remote host closed the connection]
off^ has joined #dri-devel
fahien has quit [Ping timeout: 480 seconds]
fahien has joined #dri-devel
zehortigoza has joined #dri-devel
dri-logg1r has joined #dri-devel
dri-logger has quit [Ping timeout: 480 seconds]
mslusarz has quit [Ping timeout: 480 seconds]
mslusarz has joined #dri-devel
off^ has quit [Ping timeout: 480 seconds]
glisse has quit [Remote host closed the connection]
glisse has joined #dri-devel
mareko has quit [Remote host closed the connection]
<alyssa>
eric_engestrom: to be clear I just build with the meson defaults
<alyssa>
which is, TTBOMK, a debug optimized build, so neither NDEBUG nor DEBUG defined
<eric_engestrom>
exactly
<eric_engestrom>
and I expect most people do that too
<alyssa>
I don't think I realized that "debug build with -O2" is different from "debug optimized"
<alyssa>
that's really confusing
<eric_engestrom>
which is why I agree with your MR
<alyssa>
the MR justification is dumber ... cheap asserts are gated behind !NDEBUG, that's what NDEBUG is there for
<eric_engestrom>
it's just your MR description which sounded like the two were a bit confused, that's why I explained the difference
<alyssa>
oh, right
<alyssa>
yeah I don't understand any of this
<eric_engestrom>
perhaps we should rename `DEBUG` to `EXPENSIVE_ASSERT` or something (:
<alyssa>
100%
<alyssa>
learning that I don't have list/mutex asserts in any of my builds (despite making use of the latter) gave me an "emperor has no clothes" moment yesterday
<eric_engestrom>
haha
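The build-flag distinction being untangled here, sketched as a general pattern rather than as Mesa's exact macros: plain assert() is compiled in whenever NDEBUG is not defined (so in both debug and the default debugoptimized build), while the expensive checks people are missing are additionally gated on the separate DEBUG define. The check_list_invariants name is invented for illustration.

    #include <assert.h>

    /* Cheap check: compiled in whenever NDEBUG is *not* defined,
     * i.e. in debug and (by default) debugoptimized builds. */
    static inline void cheap_check(int refcount)
    {
        assert(refcount >= 0);
    }

    /* Expensive check: only compiled in when DEBUG is defined on top of the
     * !NDEBUG case, e.g. an O(n) walk over a list on every operation. */
    static inline void check_list_invariants(void)
    {
    #if defined(DEBUG)
        /* ... walk every node and validate prev/next pointers ... */
    #endif
    }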
gawin has quit [Ping timeout: 480 seconds]
<alyssa>
eric_engestrom: most DEBUG use is in drivers, fwiw
<alyssa>
and not my drivers
<alyssa>
so I can't do perf testing for that
<alyssa>
there are some weird uses in Mesa, though
<alyssa>
like os_log_message, in util/, which prints to stderr in all builds but can be overridden with a GALLIUM env var in debug builds only
<alyssa>
*blink*
<ajax>
error message handling in mesa does not suffer from what you might call a unity of design
<alyssa>
truth
<alyssa>
when should debug_printf be used? who knows
<ajax>
part of me wants to overhaul it all but part of me wants to leave that for like a gsoc project
<alyssa>
yeah that's fair
<eric_engestrom>
I haven't looked at the details of any gsoc project, but I had the feeling they were a lot bigger than that
<eric_engestrom>
perhaps we should put in place a list of small tasks like this for newcomers to get into the code
<ajax>
we have label:good-first-task at least
<alyssa>
eric_engestrom: unifying message handling in mesa is a lot bigger than one might think ....
<eric_engestrom>
fair, I might well be underestimating it
<ajax>
pretty sure some of those should be going to a better place than wherever fd 2 happens to be pointing
<ajax>
the secret here being you can't have good error messages without actually good error handling in the code, so fixing the messages to be any good probably requires fixing the code around them a bit too
JohnnyonFlame has joined #dri-devel
<ajax>
best kind of error message is one you don't have to print because you fixed the algorithm so the condition can't happen anymore
<jenatali>
FWIW on Windows, 95%+ of those would be better off printing to OutputDebugString to be visible in a debugger for apps with no console attached
<jenatali>
So, yeah an error logging overhaul that enables that would be super welcome
<robclark>
tomeu: idk, possibly we need a list of maintainer trees and branches and merge all the -external-fixes? So far I've been just trying to get the more limited case of CI for an individual driver in an individual driver tree working.. once that is sorted I guess we can figure out how to roll it up so CI still works when airlied merges -next/-fixes branches
whald has quit [Remote host closed the connection]
<alyssa>
jenatali: and Android has its own place (logcat?) which freedreno uses
<eric_engestrom>
ajax, jenatali: could you write this in an issue, so that it's not lost?
<jenatali>
alyssa: Yep
<eric_engestrom>
the "where to log to" thing was (partially?) resolved with the common logging infra, but I don't know how widely used it is
nchery is now known as Guest4535
nchery has joined #dri-devel
<alyssa>
in panfrost I've mainly solved this by not logging things.......
<alyssa>
(-:
<eric_engestrom>
(I mean src/util/log.h)
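For context, the common logging interface eric_engestrom is referring to (src/util/log.h, as used inside the Mesa tree) is called roughly like this; on Android the output is routed to logcat, elsewhere to stderr, which is also where an OutputDebugString backend could be hooked in. The report_compile_result function is a made-up example, while mesa_loge/mesa_logd are the real helpers.

    #include <stdbool.h>
    #include "util/log.h"

    /* A hypothetical driver function using Mesa's common logging helpers
     * instead of raw fprintf(stderr, ...). */
    static void report_compile_result(bool ok, int shader_id)
    {
        if (!ok) {
            mesa_loge("shader %d failed to compile", shader_id);
            return;
        }
        mesa_logd("shader %d compiled", shader_id);
    }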
Guest4535 has quit [Ping timeout: 480 seconds]
<ajax>
eric_engestrom: sure. there's at least one open issue already iirc, i'll see if i can find it
<ajax>
but also like
<ajax>
src/glx basically doesn't know about src/util/anything
<ajax>
so yeah there's common logging but there's also still some uncommon logging to get rid of
<eric_engestrom>
this common logging happened after I ~left mesa a couple of years ago I think, so I didn't follow it much
<eric_engestrom>
but it's something that I had wanted to do for a while
<eric_engestrom>
I just checked and src/egl/main/egllog.h is still there, I thought it would've been swallowed into the common stuff
ahajda__ has quit [Ping timeout: 480 seconds]
<eric_engestrom>
that should be an easy enough task; I'll make an issue and tag it good-first-task
ybogdano has joined #dri-devel
MajorBiscuit has quit [Quit: WeeChat 3.5]
tzimmermann has quit [Quit: Leaving]
<tomeu>
robclark: yeah, hopefully when people start using it, breakage will happen less often and will last for shorter periods of time, but we need things to be stable enough to get started
gouchi has joined #dri-devel
<tomeu>
so I think I'm going to reduce coverage quite a bit to reduce churn and flakiness
<tomeu>
and we can increase it again later when more people are keeping an eye on regressions
<tomeu>
and maybe we can do something in kernelci.org so less breakage makes it to drm-next
fxkamd has quit []
Duke`` has joined #dri-devel
AlexisHernndezGuzmn[m] has joined #dri-devel
<robclark>
tomeu: one idea, maybe a per gitlab tree CI variable thing to select between "short" and "full" tests? I.e. for CI runs before merging things into msm-next, I want to do a full run on qc runners but maybe only a few sanity tests on mtk/intel/amd..
<tomeu>
we can try that, sure
alyssa has quit [Quit: leaving]
krushia has quit [Quit: Konversation terminated!]
<robclark>
jenatali: re: OutputDebugString .. hook it up in mesa_log.. that is where logcat stuff is hooked up for android
<jenatali>
robclark: It is, IIRC, it's just that not everyone uses it
<jenatali>
E.g. nir logging goes straight to stderr
<robclark>
fix other code to use mesa_log as needed
<robclark>
we've hooked some nir stuff up to it
<jenatali>
Oh I agree, it's just work and it hasn't been important enough to me to prioritize it
<robclark>
we've kinda been fixing things as and when we need the msgs to not go into the ether on android ;-)
<jenatali>
Makes sense. When I was doing some Android stuff I appreciated that logcat was there
<jenatali>
But I was fighting it because I was doing an in-tree build which is hardcoded to --buildtype=release, and switching it to debug breaks backtrace logging, and even then NDEBUG is defined and DEBUG isn't
<robclark>
as best I can tell, debugging anything on android is a pita
<jenatali>
Amen
<jenatali>
Especially because I couldn't figure out how to get CPU debuggers working... every time they attached to a process it crashed
<jenatali>
So... printf debugging via logcat, yay
<robclark>
so back when android was a container on CrOS, I had reasonably good luck just building mesa without stripping symbols and attaching gdb from outside the container.. but the whole vm thing makes it much harder
<jenatali>
alyssa: No. I don't even know what that is :)
<alyssa>
compatible with util/futex.h I mean
<alyssa>
Wikipedia claims Microsoft patented them so I would have thunk
<alyssa>
"Futexes have been implemented in Microsoft Windows since Windows 8 or Windows Server 2012 under the name WaitOnAddress"
<jenatali>
Huh
<alyssa>
Bit of an X/Y problem
<alyssa>
simple_mtx is only backed by C11 mutexes if we don't have futexes
mvlad has quit [Remote host closed the connection]
<alyssa>
(if we do have futexes, we implement simple_mtx with atomics and a futex ourselves)
<JoniSt>
Not *quite* the same though, linux futex has some more features but they shouldn't be relevant for mutexes
<alyssa>
(and then the simple mtx initializer becomes trivial)
<alyssa>
so if we support util/futex.h on Windows, via WaitOnAddress apparently, lygstate doesn't need to merge the wildly unpopular !17122 and everyone is happy
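For readers following along, the futex-backed lock being discussed is roughly the classic three-state futex mutex. The sketch below is not Mesa's simple_mtx code; it paraphrases the util/futex.h interface and uses GCC/Clang atomic builtins, but it shows why the initializer can be trivial (just zero) once a futex primitive is available.

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    /* Assumed interface, paraphrasing util/futex.h (simplified). */
    int futex_wait(uint32_t *addr, int32_t value, const struct timespec *timeout);
    int futex_wake(uint32_t *addr, int count);

    /* Classic three-state futex mutex: 0 = unlocked, 1 = locked,
     * 2 = locked with (possible) waiters.  The initializer is just {0}. */
    typedef struct { uint32_t val; } sketch_mtx_t;

    static void sketch_mtx_lock(sketch_mtx_t *m)
    {
        uint32_t expected = 0;
        if (__atomic_compare_exchange_n(&m->val, &expected, 1, false,
                                        __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
            return;                      /* fast path: it was unlocked */

        /* Contended: mark "locked with waiters" and sleep until woken. */
        while (__atomic_exchange_n(&m->val, 2, __ATOMIC_ACQUIRE) != 0)
            futex_wait(&m->val, 2, NULL);
    }

    static void sketch_mtx_unlock(sketch_mtx_t *m)
    {
        /* If anyone might be waiting (state was 2), wake them all. */
        if (__atomic_exchange_n(&m->val, 0, __ATOMIC_RELEASE) == 2)
            futex_wake(&m->val, INT32_MAX);
    }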
pcercuei has quit [Read error: Connection reset by peer]
pcercuei has joined #dri-devel
Duke`` has joined #dri-devel
<jenatali>
Well, part of it should still merge, removing the non-simple mutex initializer
<jenatali>
But agreed if we can keep the simple mutex initializer, that'd be nice
<alyssa>
incidentally the C11 impl of simple_mtx_assert_locked makes me very sad.
<jenatali>
Yeah looks like a WIN32 path could be added to util/futex.h
<alyssa>
looks like futex_wait/futex_wake map pretty directly to WaitOnAddress/WakeByAddressAll
slattann has quit [Read error: Connection reset by peer]
<alyssa>
also comparing the 3 existing impls, lol at OpenBSD being the only reasonable one.
<jenatali>
Yeah I'm trying it out, will see what blows up
<alyssa>
glhf
<alyssa>
I can't decide which impl is more unreasonable, Linux or FreeBSD
kts has quit [Ping timeout: 480 seconds]
kts has joined #dri-devel
<jenatali>
Hm, not sure I can implement this without pulling in windows.h, which sucks since that's a big header to include in another header
<jenatali>
Guess I could add a futex.c for Windows only
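A sketch of the direction being discussed: futex_wait/futex_wake shimmed onto WaitOnAddress/WakeByAddress, placed in a separate futex.c so windows.h stays out of the header. WaitOnAddress, WakeByAddressSingle and WakeByAddressAll are real Win32 calls (synchapi.h, link Synchronization.lib); the signatures here paraphrase util/futex.h and the timeout conversion is simplified.

    /* futex_win32.c -- hypothetical Windows backend for util/futex.h */
    #include <windows.h>   /* WaitOnAddress, WakeByAddress*; Synchronization.lib */
    #include <stdint.h>
    #include <time.h>

    int futex_wake(uint32_t *addr, int count)
    {
        /* WakeByAddress* doesn't report how many waiters actually woke up. */
        if (count == 1)
            WakeByAddressSingle(addr);
        else
            WakeByAddressAll(addr);
        return count;
    }

    int futex_wait(uint32_t *addr, int32_t value, const struct timespec *timeout)
    {
        /* Simplified timespec -> milliseconds conversion. */
        DWORD ms = timeout ? (DWORD)(timeout->tv_sec * 1000 +
                                     timeout->tv_nsec / 1000000)
                           : INFINITE;
        /* Returns once *addr no longer equals 'value' (or on timeout/error). */
        if (!WaitOnAddress(addr, &value, sizeof(value), ms))
            return -1;
        return 0;
    }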
<alyssa>
Blink
Akari has joined #dri-devel
<ajax>
where does the Xvfb we use in CI come from?
alyssa has quit [Quit: i can't focus]
<DrNick>
I like how the futex version of simple_mtx_assert_locked() treats destroyed mutexes as locked
<daniels>
ajax: Debian
flto has joined #dri-devel
agx has quit [Read error: Connection reset by peer]
agx has joined #dri-devel
Kayden has quit [Quit: -> JF]
lynxeye has quit [Quit: Leaving.]
<jenatali>
alyssa: !17431
<jenatali>
Let's see what CI says about it
kts has quit [Ping timeout: 480 seconds]
LexSfX has quit []
LexSfX has joined #dri-devel
lemonzest has joined #dri-devel
fahien has quit [Quit: fahien]
<jenatali>
Holy crap, p_atomic_add_return is wrong in the MSVC path :O
<HdkR>
It's amazing how incorrect you can make some operations and things somehow work.
<HdkR>
Life finds a way
<jenatali>
Yeah there's really not many hits on that function in the tree though
rkanwal has quit [Ping timeout: 480 seconds]
<jenatali>
And it's only the "_return" part that's wrong
<HdkR>
I had compareexchange incorrect for months and months and didn't realize :P
<jenatali>
Unfortunately it's used by code that tries to drop multiple references at once... which means if those references should've brought the count to 0, bam that's a leak
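To spell out the class of bug being described (as a generic illustration, not a quote of the Mesa code): Win32's InterlockedExchangeAdd returns the value the variable held before the addition, so an add-and-return helper that forgets to add the operand back reports the old count, and a "drop N references, free at zero" caller never sees zero.

    #include <windows.h>

    /* BUGGY: returns the *old* value, so "drop N references and free at 0"
     * never sees 0 when the last N references go away together. */
    static LONG atomic_add_return_buggy(volatile LONG *v, LONG i)
    {
        return InterlockedExchangeAdd(v, i);
    }

    /* FIXED: InterlockedExchangeAdd returns the previous value, so add the
     * operand back to get the post-addition value the caller expects. */
    static LONG atomic_add_return(volatile LONG *v, LONG i)
    {
        return InterlockedExchangeAdd(v, i) + i;
    }

    /* Caller pattern that trips over the bug: */
    static void unref_many(volatile LONG *refcount, LONG n /*, object */)
    {
        if (atomic_add_return(refcount, -n) == 0) {
            /* free the object; with the buggy helper this branch is missed
             * whenever the count actually reached zero -> leak. */
        }
    }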
ppascher has joined #dri-devel
zehortigoza has quit [Remote host closed the connection]
off^ has quit [Ping timeout: 480 seconds]
alyssa has joined #dri-devel
alyssa has left #dri-devel [#dri-devel]
sarnex has joined #dri-devel
Haaninjo has quit [Quit: Ex-Chat]
alyssa has joined #dri-devel
<alyssa>
Do Gallium drivers have a way to detect the API of the frontend?
<alyssa>
I have some slow legacy paths for big GL (and nine?) compat, I'd like to assert(!GLES) because it's a pretty serious bug to hit with GLES
maxzor_ has joined #dri-devel
maxzor has quit [Remote host closed the connection]
<alyssa>
(Would have caught !17430)
<jekstrand>
jenatali: Oof. I'll look now
<anholt>
alyssa: none that I know of.
<anholt>
well, nothing proper. rasterizer->point_tri_clip incidentally tells you gles2+.
<alyssa>
heh, right. that'd be a pretty nasty hack..
<zmike>
I've thought for a while that it would be nice to have a context/screen create flag for such a thing
<alyssa>
I guess I'm of 2 minds
<anholt>
oh, neat. tarceri's fix fixed portal2 on crocus as well.
<jekstrand>
daniels: If you want to spend some free brain cycles, we should test WSI in CI somehow.
<daniels>
jekstrand: I have negative brain cycles this week somehow, but also yes
<daniels>
I'm not volunteering for X11, but it's pretty easy for Wayland since you just whack the compositor in a separate thread and use protocol messages to sync on a semaphore so you can inspect whatever you like from the client side
<daniels>
(may or may not have written this for another EGL stack many years ago)
icecream95 has joined #dri-devel
Kayden has joined #dri-devel
<jekstrand>
daniels: Could we get xwayland going inside a headless Weston?
sarnex has quit [Read error: Connection reset by peer]
<jekstrand>
jenatali: You may be interested in those too ^^
<jekstrand>
jenatali: Since you're in a WSI review mode. :D
<jenatali>
Yeah, looking
<daniels>
I mean, if you are the compositor, then you can also hobble it to be shm-only
<jekstrand>
daniels: We don't currently auto-detect well
<jekstrand>
daniels: And I'm not sure we do. Most of the time, you want to not support WSI there rather than fall back to a SHM path
<daniels>
yeah, strong agree
<jekstrand>
Like, I think that's what Wayland PRIME would do today.
<daniels>
really?
<daniels>
last I saw it was just blitting to a linear dmabuf
<jekstrand>
Yeah, no one's done any work on Wayland PRIME
<jekstrand>
Nope, not for Wayland.
<jekstrand>
It's not hard to hook up but no one's done it.
<daniels>
I've never personally tested it, but it looked like the code other people had written should be doing that
<daniels>
falling back to shm seems rather worse
<jekstrand>
IMO, the hardest part is just figuring out the WL code to detect when you're on a different GPU and enable the blit path.
<jekstrand>
Apart from that, it's like 5 LOC to add the path
<daniels>
that isn't hard though - we literally send a path to the device that the compositor will be using for GPU imports?
<jekstrand>
Sure.
<jekstrand>
It's just that no one's done the typing.
<daniels>
jekstrand: if you want to do that typing, I'll happily talk you through it, but I literally don't own a multi-GPU machine :P
<daniels>
and I'm not about to go the eGPU route any time soon
<jenatali>
FWIW we find having a software device that pretends to be a GPU incredibly helpful for this kind of stuff
<jenatali>
Something like VGEM or maybe an extended version of that could be helpful for testing these kinds of paths
<jekstrand>
daniels: IDK if I own one either. :)
<jekstrand>
daniels: I did but I moved the RADV card out of my HSW
<daniels>
jekstrand: I'll pop something WSI-ish off your stack in return then :)
<daniels>
jenatali: yeah, dmabufs being software-mappable is super useful there from the compositor side - just need to make swrast clients use vgem to allocate to better test those paths
<jenatali>
We've also got a configurable software GPU driver, where you can "plug in" multiple of them and connect virtual monitors to them, to test all kinds of crazy scenarios, and then at the end of the day you can just read back the displayed output from the other side of the compositor
<zmike>
this seems like it has a lot of overlap with my lavapipe dmabuf wip 🤔
<daniels>
jenatali: yeah, there's the beginnings of work to make vkms be controllable via configfs
<jenatali>
Cool :)
<alyssa>
1
alyssa has quit [Quit: whoops]
<jekstrand>
jenatali: Is the RB for the ANV patch too?
<jenatali>
Yep
<jenatali>
Seems straightforward enough
off^ has joined #dri-devel
<jekstrand>
thanks
Duke`` has quit [Ping timeout: 480 seconds]
gouchi has quit [Remote host closed the connection]