ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
`join_subline has quit [Remote host closed the connection]
`join_subline has joined #dri-devel
rasterman has joined #dri-devel
rasterman has quit [Quit: Gettin' stinky!]
tzimmermann_ has joined #dri-devel
gawin has quit [Ping timeout: 480 seconds]
tzimmermann has quit [Ping timeout: 480 seconds]
pcercuei has quit [Quit: dodo]
mclasen has quit []
mclasen has joined #dri-devel
pH5 has quit [Remote host closed the connection]
shoragan has quit [Ping timeout: 480 seconds]
pH5 has joined #dri-devel
iive has quit []
ambasta[m] has left #dri-devel [#dri-devel]
tursulin has quit [Remote host closed the connection]
tarceri_ has quit [Remote host closed the connection]
ngcortes has quit [Remote host closed the connection]
siqueira_ has joined #dri-devel
K`den has joined #dri-devel
co1umbarius has quit [Ping timeout: 480 seconds]
siqueira has quit [Read error: Connection reset by peer]
Kayden has quit [Ping timeout: 480 seconds]
aswar002 has quit [Ping timeout: 480 seconds]
aswar002 has joined #dri-devel
craftyguy has quit [Remote host closed the connection]
craftyguy has joined #dri-devel
jewins has quit [Ping timeout: 480 seconds]
natto has quit []
mclasen has quit []
natto has joined #dri-devel
mclasen has joined #dri-devel
jewins has joined #dri-devel
nsneck_ has joined #dri-devel
FireBurn has joined #dri-devel
nsneck has quit [Ping timeout: 480 seconds]
<zmike>
I guess the freedreno jobs are dead again?
<zmike>
my plans to merge swrast patches late at night when nobody else is around have been cleverly thwarted
<zmike>
...for now
ella-0_ has joined #dri-devel
ella-0 has quit [Read error: Connection reset by peer]
aravind has joined #dri-devel
<robclark>
zmike: umm, according to #freedreno-ci things are still running as of ~3min ago..
mclasen has quit []
<zmike>
I guess it magically fixed itself
mclasen has joined #dri-devel
<zmike>
so that panfrost could fail instead
YuGiOhJCJ has joined #dri-devel
<zmike>
a classic Get Down, Mr. President
<robclark>
there was a ~40min gap in results, but no idea if that was just because marge was busy on other MRs that didn't involve freedreno jobs.. or if someone tripped over the ethernet cable
nchery has quit [Ping timeout: 480 seconds]
mclasen has quit []
mclasen has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
shankaru has joined #dri-devel
mclasen has quit [Ping timeout: 480 seconds]
mhenning has quit [Quit: mhenning]
nsneck_ has quit [Ping timeout: 480 seconds]
lemonzest has joined #dri-devel
mattrope has quit [Read error: Connection reset by peer]
imirkin has quit [Ping timeout: 480 seconds]
imirkin has joined #dri-devel
ybogdano has quit [Ping timeout: 480 seconds]
K`den has quit []
K`den has joined #dri-devel
Duke`` has joined #dri-devel
jewins1 has joined #dri-devel
jewins has quit [Read error: Connection reset by peer]
K`den is now known as Kayden
digetx has quit [Read error: Connection reset by peer]
kchibisov has quit [Quit: Huh]
digetx has joined #dri-devel
sdutt has quit [Read error: Connection reset by peer]
itoral has joined #dri-devel
pnowack has joined #dri-devel
danvet has joined #dri-devel
kchibisov has joined #dri-devel
kchibisov has quit [Remote host closed the connection]
JohnnyonFlame has quit [Ping timeout: 480 seconds]
kchibisov has joined #dri-devel
kchibisov has quit [Read error: Connection reset by peer]
kchibisov has joined #dri-devel
Daanct12 has joined #dri-devel
jkrzyszt has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
thellstrom has joined #dri-devel
shoragan has joined #dri-devel
alanc has quit [Remote host closed the connection]
gouchi has quit [Remote host closed the connection]
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
<pq>
emersion, modesets are the thing that can take ages to complete, so you'd always want to nonblock with them as much as you can, right? Or put it in a thread...
<pq>
however, doing multiple modeset commits in parallel... I have a hard time seeing that. Instead, you'd make one atomic commit with all of them, otherwise your TEST_ONLY is not trustworthy.
tzimmermann has joined #dri-devel
<pq>
so indeed, multiple parallel modeset commits seems just an attack surface rather than something userspace would actually want to do
dviola has joined #dri-devel
<pq>
howEVER, we have the thing called DRM leasing
<pq>
DRM leasing can unintentionally and accidentally lead to multiple parallel modeset commits on the same DRM device.
RSpliet has quit [Quit: Bye bye man, bye bye]
MajorBiscuit has joined #dri-devel
<pq>
makes me wonder if DRM leasing should have been formulated as "pageflip only" and have userspace protocol to ask the lessor to set modes
ahajda has joined #dri-devel
itoral has quit [Remote host closed the connection]
itoral has joined #dri-devel
tursulin has joined #dri-devel
mvlad has joined #dri-devel
<emersion>
pq: i think by "parallel modeset" they meant a single blocking atomic commit with multiple CRTCs in it
<pq>
I did not get it like that.
<emersion>
interesting idea re: pageflip-only
<pq>
danvet, what's a parallel modeset, again? Multiple atomic commits on different CRTCs, or one atomic commit with multiple CRTCs?
<emersion>
leads to all kinds of fun stuff if you're trying to modeset in a non-glitchy way
<danvet>
pq, former
<danvet>
latter is done as a single work
<pq>
right
<emersion>
ah, ok
<pq>
emersion, which glitching are you thinking of?
<danvet>
pq, and yeah I guess leases make this exploitable by normal apps :-/
<danvet>
this = any bugs in drivers' handling of nonblocking modesets
<emersion>
pq, modesetting without a fresh FB with the correct size for instance
<pq>
danvet, those don't even need to be nonblocking, because it's two different processes on the same DRM device, right?
<danvet>
pq, it needs to be nonblocking to be fun
<pq>
aha
<danvet>
since for blocking we keep holding the locks
<danvet>
so the locks sequence everything that could go boom in the atomic commit
<pq>
emersion, the userspace protocol would need to include a dmabuf as the FB to set.
lynxeye has joined #dri-devel
<emersion>
pq, and possibly any other KMS prop the client might want to set atomically…
<pq>
yes
<emersion>
that's… ehhhh
<pq>
I didn't say it would be a convenient userspace protocol :-)
<pq>
but it's the only solution I can see if you want reliable modeset and if you would like the display server to reduce its own display resource usage (e.g. planes or modifiers) to be able to light up one more output.
pcercuei has joined #dri-devel
frankbinns has joined #dri-devel
Major_Biscuit has joined #dri-devel
MajorBiscuit has quit [Ping timeout: 480 seconds]
rasterman has joined #dri-devel
JohnnyonFlame has joined #dri-devel
rasterman has quit [Quit: Gettin' stinky!]
itoral has quit [Remote host closed the connection]
camus1 has joined #dri-devel
itoral has joined #dri-devel
camus has quit [Ping timeout: 480 seconds]
itoral has quit [Remote host closed the connection]
itoral has joined #dri-devel
rkanwal has joined #dri-devel
itoral has quit [Remote host closed the connection]
itoral has joined #dri-devel
JohnnyonFlame has quit [Ping timeout: 480 seconds]
<MTCoster>
If I change the value of &kernel-rootfs-url in .gitlab-ci.yml for the PowerVR MR, will that have a knock-on effect on other builds? Is that the correct way to point to a different kernel branch?
JohnnyonFlame has joined #dri-devel
<MTCoster>
Ignore that, I missed docs/ci/kernel.rst
<karolherbst>
at least everything needs a huge cleanup, but I think we can work with that somewhat
<karolherbst>
not sure I like how the Kernel stuff is looking atm, but maybe you have some good ideas
<karolherbst>
one thing we need to change is to move create_compute_state into the ctor
<karolherbst>
which will be fun as we have to compile without a context
<karolherbst>
ehh or inside our helper one
<karolherbst>
kind of not wanting to diverge too much from clover atm as I know that I can rely on that working
tarceri_ has joined #dri-devel
tarceri has quit [Ping timeout: 480 seconds]
<mripard>
Lyude: yeah, I spent some time looking into this for vc4. The main concern was that we have a controller that has several FIFOs, and one can mux each FIFO to multiple CRTCs (each FIFO containing the result of the plane composition), so we need to deal with that mux and make sure we don't have several commits happening at the same time, to keep consistency
FluffyFoxeh is now known as Frogging101
<mripard>
the functions doing this are vc4_atomic_commit_setup, vc4_pv_muxing_atomic_check and vc4_atomic_commit_tail
<karolherbst>
jekstrand: anyway.. I think at this point everything besides images are supported in regards to running kernels :)
<jekstrand>
\o/
<karolherbst>
I hope most remaining fails will just be a matter of reading the spec and implementing whatever the API requires
enunes has joined #dri-devel
<karolherbst>
ehh offsets are also missing, but meh
<karolherbst>
want to have a final solution for internal args first before adding more :)
<karolherbst>
huh.. what's clEnqueueTask
<karolherbst>
ehh.. a kernel with just one thread?
<karolherbst>
oh well
<karolherbst>
gdb support in rust isn't really that great :(
mbrost has joined #dri-devel
jkrzyszt has quit [Ping timeout: 480 seconds]
<karolherbst>
ohh wow...
<karolherbst>
jekstrand: I just found an issue where we DCEed local mem away and set kernel arg fails :D
<karolherbst>
ehh.. annoying
enunes has quit [Remote host closed the connection]
<karolherbst>
validation of local mem is different because of .. reason... I guess I need to rework some stuff
<jekstrand>
We should record kernel args very early
<karolherbst>
ohh, that wasn't the issue really
<jekstrand>
And then just handle things being missing
<karolherbst>
I was just tracking them wrongly
<jekstrand>
oh
<karolherbst>
so I lost the info that it's local mem
<karolherbst>
which matters for validation
<karolherbst>
size has to match the pointer/value unless it's local mem, where it just can't be 0
<karolherbst>
CL validation is super annoying as it puts some requirements on internal code :/ it's so annoying. Is it just me or does OpenGL validation feel more pragmatic?
<jekstrand>
GL is pretty awful when it comes to shader stuff
<karolherbst>
yeah I know that GL itself is awful in regards to shader stuff, but I was more referring to API validation
<karolherbst>
ehh we don't handle ball_iequal8 and the likes
<karolherbst>
ohh some tests need cl_set_event_callback :) that's fun
<karolherbst>
like all the conversion ones
iive has joined #dri-devel
<karolherbst>
and math_brute_force needs async buffer mapping :/
<karolherbst>
anyway, just fixing that for big numbers or something
<karolherbst>
jekstrand: wait a second... the spec is trolling us.. remember how it says that setKernelArg shouldn't increase refcounts, because the application would never know when mem objects would release and that being a potential issue for user_ptr?
<karolherbst>
guess what.. the API has functions to register callbacks when cl_mem objects get destroyed...
enunes has joined #dri-devel
<jekstrand>
yeah?
<karolherbst>
well.. the application would know when to release it, if they register a callback
<karolherbst>
so there shouldn't be a reason to not ref the arguments
<karolherbst>
dunno if there is any other case where it would cause issues
spect3r has joined #dri-devel
mbrost has quit [Ping timeout: 480 seconds]
jkrzyszt has joined #dri-devel
<karolherbst>
I hate CLs event model
<karolherbst>
apparently the API tells users when stuff gets _executed_ on the device, like in real time. I say: ignore all of that and just reflect the CPU side of things
<karolherbst>
do we even have a way in gallium to do that properly?
<karolherbst>
well, except fencing on every single op
mbrost has quit [Read error: Connection reset by peer]
cphealy has joined #dri-devel
thellstrom has quit [Ping timeout: 480 seconds]
rkanwal has quit [Ping timeout: 480 seconds]
thellstrom has joined #dri-devel
thellstrom1 has joined #dri-devel
Major_Biscuit has quit []
thellstrom has quit [Ping timeout: 480 seconds]
JohnnyonFlame has joined #dri-devel
<tango_>
karolherbst: wait, where does OpenCL require that the API tell the user when stuff gets executed in real time?
<tango_>
afaik that information is only available in the profiling info, which isn't available until the command completes (or fails)
<karolherbst>
tango_: the entire event stuff
<karolherbst>
e.g. clSetEventCallback(
<karolherbst>
so you can register a cb on CL_RUNNING
<tango_>
yeah but it's not expected to be called in real time
<tango_>
> There is no guarantee that the callback functions registered for various execution status values for an event will be called in the exact order that the execution status of a command changes.
<karolherbst>
mhhh
<karolherbst>
soo.. my _hope_ was that once I submit to the device, I could set the status to CL_COMPLETE and just yolo it. But clients might do stupid things
<tango_>
karolherbst: well don't do that either
<tango_>
it should not be set to cl_complete until the command is actually completed
<airlied>
no it should work off a fence
<karolherbst>
I still think it's a crappy API not deserving verbatim implementation
<tango_>
doubly more so if profiling info is available
<tango_>
(requested)
<karolherbst>
airlied: yeah.. true
<tango_>
isn't there a way to query the device about the command status without blocking?
<karolherbst>
although I'd say that CL_RUNNING is the only annoying bit
<tango_>
like “hey gpu, I enqueued X, is it running yet?”
<karolherbst>
not like that
<tango_>
or query on-device performance counters
<karolherbst>
I can create a fence for every event
<karolherbst>
and check that fence
gouchi has joined #dri-devel
<tango_>
but the fence is blocking?
<tango_>
is there some kind of lightweight fence that is only used as a marker?
<karolherbst>
not if you want to know if it was reached
<tango_>
I'm pretty sure some devices can do that
<karolherbst>
well, some don't
<karolherbst>
on nv hw we have to use fences
<karolherbst>
afaik
<karolherbst>
but a fence is nothing more than a "marker" anyway
<tango_>
this is weird though, how does nv do it in cuda then?
<karolherbst>
it's some memory where we increase a counter, really
<tango_>
they can tell when a cuda event (which is just a marker) was reached
<karolherbst>
yeah
<karolherbst>
it's a fence
<tango_>
ah
<tango_>
so the fence isn't necessarily blocking
<karolherbst>
so you can insert commands like "write this value into this memory once you reach it"
<tango_>
well, so that means you'll have to put fences everywhere with opencl
<karolherbst>
and the memory is just mapped in user space
<karolherbst>
so you can read the current counter, or well, the fence as we call it in gallium
<karolherbst>
yep
<karolherbst>
but fences are cheapish
<karolherbst>
I think the annoying part is, that I probably have to add some gallium APIs or just cheat a little
<tango_>
the timing info that can be queried is: enqueue time (this is fully host-side), submission time (driver sent it to the device), start (device actually started running the command) and end (command completed)
<karolherbst>
yeah
<karolherbst>
you can add the time to a fence on nv hw afaik
<tango_>
so it looks like you need two fences per command?
<tango_>
one before and one after
<karolherbst>
tango_: why?
<tango_>
assuming the device schedules the following command right after the pre-fence is met
<karolherbst>
start is when the previous one ended
<tango_>
not necessarily
<tango_>
well, depends how many commands were enqueued
<tango_>
you need at least a pre-fence for the first command
<karolherbst>
mhhh
<tango_>
also if the hw has separate hw queues
<karolherbst>
but that's for profiling, right?
<karolherbst>
mhh
<tango_>
well, that's AT LEAST when the cl queue has profiling enabled
<karolherbst>
yeah.. I think I ignore profiling for now
<karolherbst>
....
<tango_>
you still need to keep approximately correct information for queries on the event status
<karolherbst>
yeah
<tango_>
clGetEventInfo
<karolherbst>
I need to support fences anyway, so CL_COMPLETE is quite easy to implement
<karolherbst>
CL_RUNNING is the annoying one
<karolherbst>
but maybe I just mark the next event as running once one is complete or something... _should_ be good enough
thellstrom1 has quit [Remote host closed the connection]
<karolherbst>
currently I have one worker thread per queue submitting commands to the driver/hardware
<tango_>
how do you handle multiple queues though?
<karolherbst>
and the thread executes in some defined order
<tango_>
especially if the hw supports multiple queues
<karolherbst>
well, you get a pipe_context per cl_queue
<karolherbst>
and I leave it up to the driver to do the correct thing? dunno
<tango_>
but do you know which queue the next command is being taken from?
<karolherbst>
each queue has its own thread
<karolherbst>
I still have to do deps calculation and the likes, but generally the worker stuff should be solid enough to allow implementing most of it
<tango_>
yeah but you can't set commands from both queues as running
<karolherbst>
why not?
<tango_>
because if the hw can't run both commands at the same time one of the two claims is obviously false
<karolherbst>
why should it matter?
<tango_>
so the strategy “set the next running when the previous ends” doesn't really work in this case
<karolherbst>
I think CL_RUNNING is a terrible thing and I won't implement it verbatim because it's just stupid
<karolherbst>
if the GPU starts 5 ns after I mark it CL_RUNNING, so what?
<karolherbst>
I know which stuff the GPU will work on next looking at a queue
<tango_>
the problem isn't the 5ns
<tango_>
it's the scenario in which it actually runs commands from the other queue
<karolherbst>
maybe if I knew a good use case for CL_RUNNING it would help me understand why it's not a totally weird thing nobody cares about
<karolherbst>
why would it matter to a client?
<tango_>
I know that some people use this information to infer things about how the device does things
<karolherbst>
what things?
<tango_>
run commands
<tango_>
e.g. copies and kernels
<tango_>
if they are concurrent or not
<tango_>
or multiple kernels etc
<karolherbst>
sure, but what would it change?
<karolherbst>
the API already reports if the hw has multiple hw queues or not
<karolherbst>
you don't have to infer that info from even status
<tango_>
¯\_(ツ)_/¯
<karolherbst>
yeah...
<karolherbst>
so it's pointless
frieder has quit [Remote host closed the connection]
<karolherbst>
clSetEventCallback is a terrible API anyway and I heard that almost all impls are crappy if you care about perf
<karolherbst>
and that clWaitForEvents is just faster
mclasen has quit []
mclasen has joined #dri-devel
<karolherbst>
really no idea how to implement clSetEventCallback without that being a crappy impl
<karolherbst>
maybe start a second thread which is always waiting on the fences and update stuff accordingly?
lynxeye has quit []
mclasen has quit [Remote host closed the connection]
RSpliet has joined #dri-devel
nchery has joined #dri-devel
gouchi has quit [Remote host closed the connection]
kts has quit [Quit: Konversation terminated!]
kts has joined #dri-devel
mbrost has joined #dri-devel
JohnnyonFlame has quit [Read error: Connection reset by peer]
aravind has quit [Remote host closed the connection]
aravind has joined #dri-devel
kts has quit [Quit: Konversation terminated!]
mbrost has quit [Read error: Connection reset by peer]
ngcortes has joined #dri-devel
mclasen has joined #dri-devel
mbrost has joined #dri-devel
piggz has joined #dri-devel
<piggz>
Hi, I build Mesa for the PinePhone SailfishOS ports... since I've switched to Mesa 22, both the PP and PP Pro, which have different drivers (lima and panfrost), are suffering massive performance regressions. Because it's cross-driver, I wondered what I might be doing wrong that is common to both?
mbrost has quit [Read error: Connection reset by peer]
<ajax>
can you elaborate on what exactly regressed? gl's a big api.
pnowack has quit [Quit: pnowack]
<piggz>
ajax: it's strange ... if I'm on the homescreen, I'm getting 30-40 fps where it should be 60 as with Mesa 21 ... open an app, like the settings app, very basic, and I'm down to single figures
mclasen has quit []
mclasen has joined #dri-devel
<karolherbst>
piggz: might be worth checking if you even get hw-accelerated GL. Could be that something fails to initialize
<karolherbst>
not that you end up with llvmpipe for whatever reason
<piggz>
karolherbst: I wondered that, and set LIMA_DEBUG, and it was printing things, so I figured it was using the lima driver
<karolherbst>
mhh, yeah.. dunno
<piggz>
any other useful variables for printing what drivers are used?
<karolherbst>
we also have lavapipe since some time, could mess up things as well
<karolherbst>
piggz: not sure if there are any glxinfo alternatives around you could use, but I'd check what OpenGL and/or Vulkan driver gets used
<ajax>
eglinfo
<karolherbst>
ahh, right
<ajax>
too obvious!
<karolherbst>
ajax: well, it's SailfishOS
<piggz>
I've built eglinfo in the past, where is it at?
<ajax>
mesa/mesa-demos
<karolherbst>
piggz: worst case you can always git bisect, but not sure how much work that would be on your setup
<piggz>
and that is with it run as user, not root, to get around the init failure
<karolherbst>
mhh, wayland uses swrast?
<piggz>
i saw that
<karolherbst>
although not sure what actually matters here. I'd kind of expect GUI apps to report what they end up using. But I'd expect that even under wayland lima to be used
<karolherbst>
is that different with mesa 21?
<ajax>
karolherbst: ... no?
<ajax>
you can get to swrast through EGL_EXT_platform_device but the wayland section just failed to init
<piggz>
I'll have to rebuild 21 .. but I didn't have any performance issues there
<karolherbst>
ajax: the second paste
<ajax>
oh pff
<karolherbst>
piggz: yeah, but I am interested in the eglinfo output with your previous mesa version
<karolherbst>
but I wouldn't suspect swrast to be used for wayland
<karolherbst>
so that's probably indeed the issue
rkanwal has joined #dri-devel
<piggz>
karolherbst: ok, I'll trigger a build, and save these 22 RPMs so I don't have to build again
<piggz>
karolherbst: ok, the build has started, and I'll report back when it's done ... any way to make wayland not use swrast?
<karolherbst>
well.. normally that stuff should just work (tm)
kj has quit [Remote host closed the connection]
<ajax>
piggz: the way to make it not use swrast is to make sure your wayland compositor is not itself using swrast
<karolherbst>
ahh.. right
<karolherbst>
still opens up the question why that fails
pnowack has joined #dri-devel
mbrost has joined #dri-devel
kalli0815[m] has left #dri-devel [#dri-devel]
<jekstrand>
airlied, danvet, anholt: Proposed plan for merging the IMG code:
<jekstrand>
Please take a look and let me know what you think. I'd like to see them working in Mesa upstream sooner rather than later.
<danvet>
jekstrand, in the past we simply left out the driver binding code
<danvet>
trying to revision unstable uapi sounds like a game lost before it starts
<jekstrand>
danvet: That's possible too, I guess. They've already abstracted out a little PAL so we could leave the DRM bits out of tree in a draft MR until it's ready.
tzimmermann has quit [Quit: Leaving]
<danvet>
uh pal
<danvet>
we shoot these pretty indiscriminately in the kernel :-)
<jekstrand>
It's a very thin pal
<jekstrand>
Also, they're pretty common with the mobile drivers in Mesa. Being able to run on top of the Android kernel is useful, as it turns out.
<jekstrand>
And RADV has one so it can swap the entire kernel interface out for a noop thing and run without HW.
<danvet>
ah yeah that makes sense
<danvet>
the pal in the kernels tend to abstract away all the kernel services so you can run the same code on windows
<danvet>
decidedly less pretty
<jekstrand>
danvet: Yup. The Mesa PALs that exist aren't nearly as bad.
<bnieuwenhuizen>
the RADV one is pretty bad :) Then again we inherited it from radeonsi kinda which has both radeon and amdgpu
mclasen has quit []
<jekstrand>
bnieuwenhuizen: Yeah... I didn't say I was a huge fan.
mclasen has joined #dri-devel
<bnieuwenhuizen>
occasionally I think about what it'd take to implement windows/fuchsia support on it and I suspect we've chosen all the wrong abstractions
<danvet>
jekstrand, I dropped some comments
<jekstrand>
From what I've seen, it isn't horribly wrong. Not quite right, maybe, but not as bad as one might think.
rasterman has quit [Quit: Gettin' stinky!]
<jekstrand>
danvet: If you want to leave the include/drm-uapi/pvr_drm.h and the DRM PAL back-end out for now, I'm fine with that option too.
<jekstrand>
I just want to decouple things because, IMO, they're probably at least 1yr out from having a stable uAPI.
mclasen has quit []
<jekstrand>
And I'd rather they not be stuck in a branch for that long.
danvet has quit [Ping timeout: 480 seconds]
kts has joined #dri-devel
rasterman has joined #dri-devel
lemonzest has quit [Quit: WeeChat 3.4]
mvlad has quit [Remote host closed the connection]
garrison has joined #dri-devel
i-garrison has quit [Read error: Connection reset by peer]
garrison has quit []
i-garrison has joined #dri-devel
mbrost has quit [Read error: Connection reset by peer]
<piggz>
apart from dropping the now-removed Intel drivers, the configure params are the same as 21.3
<karolherbst>
piggz: but you probably want to enable llvm, even if it's a pain
<karolherbst>
but that shouldn't change or fix your situation
<karolherbst>
you could try to figure out what is failing with gdb
<karolherbst>
we also have LIBGL_DEBUG=verbose
<karolherbst>
but normally what happens is, that st/mesa tries to open fds on the driver and create a screen from it. If that fails, you don't get your hw accelerated driver
<karolherbst>
I don't know the exact func names to break on, but something in there should be broken
<piggz>
ok, thanks, I'll see what I can find
Haaninjo has quit [Quit: Ex-Chat]
Duke`` has quit [Ping timeout: 480 seconds]
cef has quit [Quit: Zoom!]
cef has joined #dri-devel
ahajda has quit [Quit: Going offline, see ya! (www.adiirc.com)]
LexSfX has quit []
LexSfX has joined #dri-devel
simon-perretta-img_ has joined #dri-devel
frankbinns1 has joined #dri-devel
frankbinns1 has quit []
rkanwal has quit [Ping timeout: 480 seconds]
frankbinns has quit [Ping timeout: 480 seconds]
simon-perretta-img has quit [Ping timeout: 480 seconds]
iive has quit [Ping timeout: 480 seconds]
rasterman has quit [Quit: Gettin' stinky!]
simon-perretta-img_ has quit []
simon-perretta-img_ has joined #dri-devel
simon-perretta-img_ has quit []
simon-perretta-img has joined #dri-devel
mclasen has quit []
mclasen has joined #dri-devel
ngcortes has quit [Remote host closed the connection]