<Miepee>
Not sure if this is the right place to ask, but how would I force a specific Mesa driver? Trying to troubleshoot an issue that may happen with swrast, but using "MESA_LOADER_DRIVER_OVERRIDE=swrast" results in "failed to load driver: swrast" even though "swrast_dri.so" can be found in /usr/lib32/dri
cwabbott has joined #dri-devel
jjardon has joined #dri-devel
zzag has joined #dri-devel
narmstrong has joined #dri-devel
camus1 has quit [Ping timeout: 480 seconds]
<karolherbst>
Miepee: it might fail for other reasons
<Miepee>
For example?
<karolherbst>
dunno, but maybe try this: LIBGL_DEBUG=verbose MESA_LOADER_DRIVER_OVERRIDE=swrast glxinfo
<karolherbst>
mhh.. well it does use swrast in the end, but it also fails on not finding dri
<karolherbst>
"libGL error: core dri or dri2 extension not found" this error would worry me a little
<karolherbst>
but anyway.. swrast ends up getting used
<Miepee>
Why does "libGL error: failed to load driver: swrast" appear though then, if it ends up being used?
<karolherbst>
I suspect we try to use the driver in a different configuration first, which is why you see the "libGL error: image driver extension not found" error before it
<karolherbst>
and then we fall back to some other configuration
<karolherbst>
but yeah.. I see how this can be all confusing
<Miepee>
Hmm.
<Miepee>
The main reason why I wanted to check this is because I'm trying to launch a YoYo engine game with libTAS (TASing software for Linux). However, it segfaults with a stack trace of libc->lib32-swrast->lib32-llvm->crash, and to figure out whether this is libTAS's fault or Mesa's fault I was trying to launch the game normally with only swrast
<Miepee>
Something to note is that the stack trace is different every time, but it does generally follow the same "places", i.e. libc->swrast->llvm->crash
<karolherbst>
"The engine seems to be using a custom global new operator and it doesn't plays well with multithreading it seems."
<Miepee>
Yeah YoYo engine is... whack
<karolherbst>
well.. the engine should be fixed to not annoy everybody else
<Miepee>
Sadly, this is an engine from ~2015 or so, and YoYoGames doesn't even support it anymore :p
<karolherbst>
sad
Company has quit [Ping timeout: 480 seconds]
<Miepee>
I should check if this is fixed with the newer versions, though. IIRC they revamped quite a bit for that; if it's still buggy there, it would have a greater chance of getting fixed
<karolherbst>
cool
<Miepee>
So would there be any workaround for me regarding the crash? Except for just trying a lot of times and hoping it doesn't happen?
<karolherbst>
others might be able to answer that. Dunno if llvmpipe has such workarounds as llvmpipe is heavily threaded to begin with
<karolherbst>
and without threading perf just tanks
<karolherbst>
Miepee: maybe launching with LP_NUM_THREADS=0 might work
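As a concrete example, the two overrides mentioned in this discussion would be combined on the command line like this (the game binary name is a placeholder):

    LP_NUM_THREADS=0 MESA_LOADER_DRIVER_OVERRIDE=swrast ./game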
<Miepee>
karolherbst: hmm, it kinda does seem to solve it actually. I still get crashes, but these seem to be more libTAS-related than Mesa-related, judging by the stack traces
<karolherbst>
can still be threading related ... dunno... maybe there is something we could do from within mesa, but I suspect that might be a lot of work
<karolherbst>
overloading new() seriously...
<karolherbst>
but I don't see why that should affect mesa unless all code starts to use it
camus has joined #dri-devel
<Miepee>
Actually, turns out that doesn't fix it and the crash can still happen. Just "random" I guess :(
<pcercuei>
you say that drivers usually do this in their atomic_check, but I can't find a way to get the new value of a property from an atomic_state
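For context, a rough sketch of the pattern being discussed, assuming a plane property and hypothetical foo_* driver types: atomic_check receives the drm_atomic_state, the new state is looked up from it, and a driver-specific property value lives in a subclassed state struct:

    #include <drm/drm_atomic.h>
    #include <drm/drm_plane.h>

    /* hypothetical driver state subclass carrying the custom property */
    struct foo_plane_state {
        struct drm_plane_state base;
        u32 my_prop;    /* stored here by the driver's atomic_set_property */
    };

    static int foo_plane_atomic_check(struct drm_plane *plane,
                                      struct drm_atomic_state *state)
    {
        struct drm_plane_state *new_state =
            drm_atomic_get_new_plane_state(state, plane);
        struct foo_plane_state *foo_state =
            container_of(new_state, struct foo_plane_state, base);

        /* foo_state->my_prop is the new (to-be-committed) value */
        return 0;
    }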
mattst88 has joined #dri-devel
itoral has quit []
<Miepee>
karolherbst: Tested it on a few GMS2 games; seems that YoYoGames fixed it there. Can't manage to replicate any crashes, either with libTAS or with normal use.
camus1 has joined #dri-devel
agd5f has quit [Read error: Connection reset by peer]
xexaxo_ has joined #dri-devel
agd5f has joined #dri-devel
xexaxo has quit [Ping timeout: 480 seconds]
camus has quit [Remote host closed the connection]
<alyssa>
airlied: Whoops typo :p
camus1 has quit []
xexaxo has joined #dri-devel
adavy has joined #dri-devel
camus has joined #dri-devel
xexaxo_ has quit [Ping timeout: 480 seconds]
camus has quit [Remote host closed the connection]
<zmike>
pepp: any chance you'd be interested in checking out that zs/layer issue I mentioned in the compute pbo MR?
<pepp>
zmike: sure, I'll take a look
<zmike>
awesome, thanks!
<zmike>
it seems like the shader just stops reading/writing data after a certain number of layers
iive has joined #dri-devel
<pepp>
zmike: a 10k-line NIR shader, nice
<zmike>
pepp: I tried to keep it small enough to read
vivijim has joined #dri-devel
macromorgan_ has quit []
macromorgan_ has joined #dri-devel
idr has quit [Ping timeout: 480 seconds]
<pinchartl>
daniels: are you volunteering to implement modifiers for V4L2 ? :-)
nchery has joined #dri-devel
<daniels>
no!
macromorgan has joined #dri-devel
macromorgan has quit [Remote host closed the connection]
<pinchartl>
too late, I heard yes
macromorgan_ has quit []
macromorgan_ has joined #dri-devel
macromorgan_ has quit []
macromorgan has joined #dri-devel
xexaxo has quit [Ping timeout: 480 seconds]
xexaxo has joined #dri-devel
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
gouchi has joined #dri-devel
test is now known as blue_penquin
blue_penquin is now known as blue__penquin
blue__penquin is now known as blue_penquin
blue_penquin is now known as Guest3466
tursulin has quit [Quit: Konversation terminated!]
<pepp>
zmike: I don't know why it fails on these specific tests :/
<zmike>
:/
<zmike>
it feels like it has to be something with reading the txf data past a certain layer?
rgallaispou has left #dri-devel [#dri-devel]
sdutt has joined #dri-devel
mattrope has joined #dri-devel
nchery has quit [Quit: Leaving]
Ahuj has quit [Ping timeout: 480 seconds]
thellstrom has quit [Remote host closed the connection]
<jekstrand>
jenatali: Any idea why !12231 isn't working on Windows?
<jekstrand>
jenatali: It builds but I fear I may have blind-coded my InterlockedCompareExchange128 wrong
<jekstrand>
jenatali: Speaking of, I'm very confused by the docs for that function.
<jekstrand>
jenatali: One bullet says: "If the Destination value is equal to the ComparandResult value, the ExchangeHigh and ExchangeLow values are stored in the array specified by Destination, and also in the array specified by ComparandResult."
<jekstrand>
And then, below, it says " Regardless of the result of the comparison, the original Destination value is stored in the array specified by ComparandResult."
<jenatali>
Huh
<jekstrand>
jenatali: Unless I'm being thick, those can't both be true. Either it returns the before value or it returns the after value.
<jekstrand>
Or maybe it means it's always the after-value, which in the failure case is just the unchanged Destination value?
<jenatali>
I see 2 mentions of storing the original values in the output, and only 1 mention of storing the new value in the output on success
<jenatali>
So... I think that one mention is wrong
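A minimal sketch of the reading jenatali lands on, assuming MSVC's _InterlockedCompareExchange128 from <intrin.h> (Destination must be 16-byte aligned); the retry loop works precisely because ComparandResult is overwritten with the original Destination value when the compare fails:

    #include <intrin.h>

    /* Atomically store a new {low, high} pair into a 16-byte-aligned
     * destination. On failure the intrinsic writes the value actually
     * found in dst into expected[], so the loop retries without an
     * extra load. */
    static void set128(volatile __int64 dst[2], __int64 lo, __int64 hi)
    {
        __int64 expected[2] = { dst[0], dst[1] };   /* initial snapshot */
        while (!_InterlockedCompareExchange128(dst, hi, lo, expected)) {
            /* expected[] now holds the current contents of dst */
        }
    }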
<jenatali>
Give me a minute, I can check out the branch and try to reproduce. That return code looks like a crash so it should be easy enough to see what's up
<jekstrand>
Thanks!
<jekstrand>
I'm sure it's something stupid :)
<jekstrand>
If it's easy enough, mind trying out 32-bit as well?
<jekstrand>
daniels: How much RAM do we have on our builders in CI?
<jenatali>
Hm... I don't have a build env for x86 but I think it's easy enough to set one up, yeah. As long as I don't need LLVM :P
<jekstrand>
Nah. All you need to build is src/util and the tests
<jenatali>
jekstrand: mesa\src\util\u_atomic_list.h(308,71): warning C4098: 'u_atomic_list_finish_48bit': 'void' function returning a value
<jekstrand>
jenatali: Fixed.
<jekstrand>
jenatali: I suspect the problem is that my tests are OOMing
<jenatali>
How much memory are you trying to allocate? O.o
<jekstrand>
Yup. That's what it is. Windows has small stacks. :D
<jenatali>
Btw, I don't see any updates since you said "Fixed"
<jekstrand>
I haven't pushed yet. One second. Fixing a couple others.
<jekstrand>
pushed
<jenatali>
Linker errors this time
<jekstrand>
:(
<jekstrand>
But it builds in CI!
thellstrom has joined #dri-devel
<jenatali>
It did :P
<jenatali>
You're not building the x86_64 library for MSVC?
<jekstrand>
Nope. Recent Windows versions require CMPXCHG16B to even boot so why bother
<jekstrand>
If we care about 64-bit Vista, maybe
agd5f has quit [Remote host closed the connection]
<jekstrand>
But if someone cares about that, THEY can write the patches. :P
<jenatali>
Yeah... I hope nobody cares about that...
agd5f has joined #dri-devel
<jekstrand>
jenatali: As of right now, you don't get the benchmark on Windows.
<jekstrand>
jenatali: If you want it, we'll have to write a u_time.h header and implement gettime_ns on Windows.
<jekstrand>
That doesn't sound like a terrible addition, honestly.
<jekstrand>
I may add such a header as part of this MR.....
<jenatali>
:D
<jekstrand>
I don't know how to write the Windows bits, though.
<jenatali>
I can probably add that or at least point you in the right direction. Probably just QueryPerformanceCounter
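A rough sketch of what that could look like, assuming QueryPerformanceCounter; gettime_ns is the name used earlier in this discussion, not an existing Mesa function:

    #include <windows.h>
    #include <stdint.h>

    static int64_t gettime_ns(void)
    {
        LARGE_INTEGER freq, now;
        QueryPerformanceFrequency(&freq);   /* ticks per second */
        QueryPerformanceCounter(&now);
        /* split the conversion to avoid overflowing 64 bits when
         * multiplying large tick counts by 1e9 */
        return (now.QuadPart / freq.QuadPart) * 1000000000 +
               (now.QuadPart % freq.QuadPart) * 1000000000 / freq.QuadPart;
    }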
<jenatali>
jekstrand: CI repros my linker error FYI
<jenatali>
unresolved external symbol u_atomic_list_init_x86_64 referenced in function run_test
<jekstrand>
jenatali: Ugh... Right.
<jekstrand>
jenatali: I guess we can build x86_64 on Windows. It won't hurt anything
<jekstrand>
jenatali: Just drop that bit from the meson
<jenatali>
Ack
<jenatali>
jekstrand: #error("This file must be built with -mcx16")
<jenatali>
:P
<jekstrand>
jenatali: Ok, fixing
hikiko has joined #dri-devel
hikiko_ has quit [Ping timeout: 480 seconds]
vivek has joined #dri-devel
camus has joined #dri-devel
<jenatali>
jekstrand: Yep, stack overflow
hikiko has quit [Remote host closed the connection]
pnowack has quit [Quit: pnowack]
pnowack has joined #dri-devel
hikiko has joined #dri-devel
<glennk>
now i have to wonder what the use case is for an inherently serial data structure, i.e. a linked list, using atomics?
pnowack has quit [Quit: pnowack]
<jekstrand>
jenatali: I thought I fixed the stack overflow. :-/
<jekstrand>
glennk: Oh, it's very much not serial
<jekstrand>
glennk: The use case is re-use pools where you want arbitrarily many threads to be able to return and fetch items at will.
<jenatali>
jekstrand: del_thread_data is huge
<jekstrand>
jenatali: Right. Need to calloc that.
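A hypothetical illustration of the fix (struct and sizes made up): a per-thread struct too large for the roughly 1 MB default Windows thread stack moves from a local variable to the heap:

    #include <stdint.h>
    #include <stdlib.h>

    /* hypothetical: 8 MB of scratch space, far beyond a default
     * Windows thread stack */
    struct thread_data {
        uint64_t scratch[1024 * 1024];
    };

    static void run_thread(void)
    {
        /* heap-allocate instead of `struct thread_data td;` */
        struct thread_data *td = calloc(1, sizeof(*td));
        /* ... use td ... */
        free(td);
    }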
hikiko has quit [Read error: Connection reset by peer]
hikiko has joined #dri-devel
<glennk>
are there per-thread lists, but then you want a thread to be able to steal from another thread's list when its own list is empty?
<jekstrand>
Today, there are no per-thread lists
<jekstrand>
That's tricky to do in userspace
<jekstrand>
Unless you're the one managing the threads
<jekstrand>
Vulkan provides some intermediate allocators that can be used for such stashing, if desired, though.
<jekstrand>
So far, I've yet to see the atomics show up in a benchmark, so I'm not worried about the global return list
<jenatali>
jekstrand: If you get a skeleton u_time.h and want me to chip in something for Windows, let me know
<jekstrand>
jenatali: Yeah, working on it. Deleting the dozen copies of gettime_ns I've found scattered about.
<glennk>
i guess you could key off of something other than thread id and still have several lists
<jekstrand>
Sure but, again, I'll wait for it to show up in perf.
<glennk>
well, one of the issues with atomics is often they don't really show up in perf
<jekstrand>
The reason for adding the data structure to util/ is because our architecture for handling free lists is currently an utter disaster.
<jekstrand>
glennk: Sure, but I've yet to see anv/allocator.c show up in perf. Except for that one time we accidentally had a 64-bit atomic spanning a cache line. That showed up. :D
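A small illustration of that hazard, assuming C11 alignment specifiers and a hypothetical structure: forcing cache-line alignment guarantees the 64-bit atomic can never straddle two lines:

    #include <stdalign.h>
    #include <stdint.h>

    struct free_list {
        /* 64-byte alignment puts the atomically-updated head at the
         * start of a cache line, so the 8-byte value cannot span two
         * lines (and hot updates don't false-share with neighbours) */
        alignas(64) uint64_t head;
    };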
hikiko has quit [Read error: Connection reset by peer]
<glennk>
next question, how frequent is adding/freeing, how many times per frame are we talking about?
<jekstrand>
Command buffer chunks, 100/frame
<glennk>
so in most instances that is approximately never then
<jekstrand>
Misc associated state, probably dozens, maybe 100s, but that's less likely.
<jekstrand>
By far the highest-frequency allocations come out of the state stream which is a thread-local slab allocator that pulls its slabs from the central allocator.
<jekstrand>
Within a slab, it's as close to free as you're going to get.
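A hedged sketch of the scheme being described (all names hypothetical): each thread bump-allocates out of its own slab, and only the rare refill touches the shared allocator:

    #include <stddef.h>

    struct slab { char *ptr, *end; };

    /* hypothetical contended path: fetch a fresh slab from the central
     * allocator (the part guarded by atomics or a lock) */
    struct slab *central_get_slab(void);

    static void *slab_alloc(struct slab **s, size_t size)
    {
        if ((*s)->ptr + size > (*s)->end)
            *s = central_get_slab();   /* rare */
        void *p = (*s)->ptr;
        (*s)->ptr += size;             /* common case: a pointer bump */
        return p;
    }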
hikiko has joined #dri-devel
<glennk>
just be careful not to broadcast out cache line updates between more than two cores
<jekstrand>
Yeah, I know. Atomics don't scale as well as you want them to.
<jekstrand>
jenatali: Looks like os_time_get_nano already exists. :D
hikiko has quit [Remote host closed the connection]
<jekstrand>
glennk: The truly frustrating part is that, in my benchmark, just taking a simple_mtx around it is just as fast. :-(
<jekstrand>
Actually faster with more threads because the round-trip through the kernel on lock failure reduces the cache line thrashing.
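For reference, a sketch of the mutex-based variant being measured, assuming Mesa's util/simple_mtx.h API and a hypothetical free-list node:

    #include "util/simple_mtx.h"

    struct node { struct node *next; };   /* hypothetical pool item */

    static simple_mtx_t pool_lock;   /* simple_mtx_init(&pool_lock, mtx_plain) */
    static struct node *pool_head;

    static struct node *pool_pop(void)
    {
        simple_mtx_lock(&pool_lock);
        struct node *n = pool_head;
        if (n)
            pool_head = n->next;
        simple_mtx_unlock(&pool_lock);
        return n;
    }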
hikiko has joined #dri-devel
<glennk>
yup, that's usually the case if you have a decent algorithm
<glennk>
basically the answer is always "lock less often" rather than "make lock go faster"
<jekstrand>
Yup
<jekstrand>
Although, simple_mtx does "make lock go faster"
<zmike>
locks go brrrrr
mbrost has joined #dri-devel
mbrost has quit []
<glennk>
brrrrn power
thellstrom has quit [Remote host closed the connection]
<jenatali>
jekstrand: Oh cool. Sounds like someone needs to go erase all those gettime_ns copies :D
<jekstrand>
jenatali: Yup
<jekstrand>
jenatali: The one difference is that it returns an int64_t rather than uint64_t. I think that's ok, though.
<jenatali>
Interesting
<jekstrand>
Not really. INT64_MAX nanoseconds of uptime isn't likely to happen in the real world either. :)
<jekstrand>
And it makes deltas more reliable
mbrost has joined #dri-devel
<glennk>
aren't there cts tests that feel like they take 292 years?
<jekstrand>
There are....
<jekstrand>
jenatali: Works on windows now. :D I didn't fail at InterlockedCompareExchange!
<jenatali>
jekstrand: \o/
<jekstrand>
Now to decide if I really want to bother with the atomics at all. (-:
<jekstrand>
The simple_mtx implementation is almost always faster....
<jekstrand>
But I'm glad I wrote them all
<dcbaker>
venemo: that missed 21.2.0, but it's already staged for 21.2.1
nchery has joined #dri-devel
sneil has joined #dri-devel
<Venemo>
dcbaker: great, thanks
<Venemo>
glennk: depends on your CPU mostly. on a decent-ish mid range machine the whole CTS takes like 20-30 minutes
sneil_ has quit [Ping timeout: 480 seconds]
<jekstrand>
It's often compiler-bound
nchery is now known as Guest3485
nchery has joined #dri-devel
bcarvalho has quit [Ping timeout: 480 seconds]
Guest3485 has quit [Ping timeout: 480 seconds]
<karolherbst>
jekstrand: mhhh.. so this txp lowering adds an explicit lod argument
<karolherbst>
ignoring it makes one of the regressions pass again :)
<karolherbst>
but it might actually be a bug in nouveau on how we handle scalar tex
<karolherbst>
yeah....
<karolherbst>
well, seems like the lower_tex pass is doing it regardless
<karolherbst>
we don't want pointless lod arguments :)
<karolherbst>
but maybe I should check if it's a 0 constant and force our lz flag
<jekstrand>
karolherbst: Sure, feel free to check for 0. We do in our back-end.
<jekstrand>
We also have a sample_lz
<karolherbst>
ahh
<karolherbst>
would be nice if nir would eliminate the lod arg if it's 0 though :D
<bl4ckb0ne>
are the gles spec free?
<karolherbst>
bl4ckb0ne: you can download them if that's what you mean
<jekstrand>
karolherbst: That one goes both ways, I'm afraid. NIR actually inserts it in a bunch of cases.
<karolherbst>
but maybe I should fix the code in case there is a tex with a lod
<jekstrand>
karolherbst: The problem is that there's a difference between tex with no LOD and tex with lod0
<bl4ckb0ne>
i was again on the wrong page, thanks jekstrand
<jekstrand>
karolherbst: Probably easiest to do "if (nir_src_is_const(src) && nir_src_as_uint(src) == 0)" in your back-end and set a flag.
<jekstrand>
karolherbst: We could add a tex_lz opcode, I guess but meh.
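A sketch of the check being suggested, using NIR's source-query helpers; the function name and the .LZ decision are hypothetical backend details:

    #include "nir.h"

    /* true when the explicit LOD source is the constant 0, so the
     * backend can drop it and emit the hardware's .LZ tex variant */
    static bool tex_lod_is_zero(const nir_tex_instr *tex)
    {
        int lod_idx = nir_tex_instr_src_index(tex, nir_tex_src_lod);
        return lod_idx >= 0 &&
               nir_src_is_const(tex->src[lod_idx].src) &&
               nir_src_as_uint(tex->src[lod_idx].src) == 0;
    }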
<karolherbst>
jekstrand: the issue is.. for us it's different instructions
<karolherbst>
at least from an IR perspective
<karolherbst>
or we opt texlod with 0 lod to tex anyway
<karolherbst>
jekstrand: that would be txl :p
<karolherbst>
nir_texop_txl, /**< Texture look-up with explicit LOD */
<jekstrand>
karolherbst: Oh.... We're adding an LOD and not swapping the opcode?
<karolherbst>
I guess so
<jekstrand>
karolherbst: I thought I fixed those bugs.
<karolherbst>
maybe you did
<karolherbst>
but your MR branch was old
<jekstrand>
Yeah, rebase the MR
<jekstrand>
I fixed those bugs
<karolherbst>
ahh
<karolherbst>
but I guess there is no texop_tex anymore after that lowering pass or so?
<karolherbst>
oh well..
<karolherbst>
doesn't matter
<jekstrand>
karolherbst: If they have lod, they should be texop_txl
<karolherbst>
sure
<karolherbst>
but a normal tex with no projection got a lod arg added :)
<jekstrand>
karolherbst: Only in shader stages where LOD isn't allowed
<karolherbst>
ahh
<jekstrand>
Like VS
<jekstrand>
nir_lower_tex.c:1376
<jekstrand>
We do that lowering unconditionally because no one actually wants implicit LOD there.
<jekstrand>
If you've got a lz modifier, feel free to use it to get rid of the LOD source.
<karolherbst>
apparently codegen optimizes it away already
<karolherbst>
so not caring all too much
<karolherbst>
the painful part is, we not only have an lz modifier, but also lb and ll :D
<karolherbst>
but oh well
gouchi has quit [Remote host closed the connection]
<karolherbst>
in the end it's all the same instruction; it's just the IR that cares
<jekstrand>
lb and ll?
<karolherbst>
I think we have like... 4 native tex instructions?
<karolherbst>
jekstrand: txb vs txl
<jekstrand>
Right. We have like 12
<jekstrand>
'cause it's all different opcodes
<karolherbst>
actually we have 6
<karolherbst>
TLD and TLD4 are two of those...
<karolherbst>
and whatever the hell TMML is
<ccr>
TMNT?
<karolherbst>
no, TMML
<ccr>
:)
<karolherbst>
"Texture MipMap Level" but no idea if we even use that one
jkrzyszt has quit [Ping timeout: 480 seconds]
<karolherbst>
we used to have more
<karolherbst>
like all the scalar variants
<karolherbst>
but on volta+ even that is a flag
<karolherbst>
TEXS on the maxwell ISA is weird
<karolherbst>
it's a mix of everything
<karolherbst>
but doesn't support all combinations
K`den has joined #dri-devel
Kayden has quit [Read error: Connection reset by peer]
Peste_Bubonica has joined #dri-devel
Miepee has quit [Remote host closed the connection]
nsneck has joined #dri-devel
K`den is now known as Kayden
ngcortes has joined #dri-devel
nchery is now known as Guest3488
nchery has joined #dri-devel
<dcbaker>
jekstrand: it's kind of a semantics question, but does nir handle lowering glsl, or does glsl lower itself to nir? I'm just thinking about a post-classic world, where the glsl compiler probably belongs in frontends/mesa
nsneck has quit [Remote host closed the connection]
Guest3488 has quit [Ping timeout: 480 seconds]
nsneck has joined #dri-devel
macromorgan_ has joined #dri-devel
macromorgan has quit [Read error: Connection reset by peer]
mbrost_ has joined #dri-devel
nchery is now known as Guest3490
nchery has joined #dri-devel
tzimmermann has quit [Quit: Leaving]
mbrost has quit [Ping timeout: 480 seconds]
nchery is now known as Guest3491
nchery has joined #dri-devel
Guest3490 has quit [Ping timeout: 480 seconds]
Guest3491 has quit [Ping timeout: 480 seconds]
mlankhorst has quit [Ping timeout: 480 seconds]
nchery is now known as Guest3493
nchery has joined #dri-devel
<jekstrand>
dcbaker: GLSL produces NIR
<jekstrand>
dcbaker: Everything talks NIR (except the few bits still stuck on TGSI). GLSL is part of the GL front-end.
Guest3493 has quit [Ping timeout: 480 seconds]
NiksDev has quit [Ping timeout: 480 seconds]
camus has quit []
nirmoy has quit []
<dcbaker>
jekstrand: cool. I've been trying to move mesa into gallium/frontends, and the one thing that is really annoying is the intel compiler
<dcbaker>
But it sounds like the fact that it still knows a lot about glsl is more technical debt than anything
<zmike>
pepp: great news! I fixed it!
<zmike>
it turns out I was just passing the zoffset and not the depth range, so it shouldn't ever have been able to do more than 1 layer (yet mysteriously was managing to do a few)
silver has quit [Ping timeout: 480 seconds]
idr has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
<karolherbst>
dcbaker: I'd assume most of the glsl stuff in the intel compiler can be removed once i965 is killed, no?
<dcbaker>
karolherbst: that's my hope. I'm starting some of the cleanup toward moving mesa into gallium/frontends, like removing our generous uses of mesa and mapi as includes
<karolherbst>
ahh
<dcbaker>
My gut feeling is that we really want to treat glsl as an implementation detail of st/mesa in the glorious gallium-only future
<karolherbst>
I'd start with removing the glsl bits :D feels like a more natural thing to do
<karolherbst>
yeah
<karolherbst>
I doubt anybody wants to deal with the glsl ir directly anymore anyway
<karolherbst>
sadly we still have those drivers caring about TGSI
<dcbaker>
the Intel compiler also uses the list implementation from glsl in its IR, so there's some work to do there as well
<karolherbst>
uhhh
<karolherbst>
it's fascinating that we have so many list implementations
<karolherbst>
even in mesa
<karolherbst>
I think nir uses it as well?
<dcbaker>
I feel like NIR uses both? but could totally be wrong on that
<karolherbst>
I think it might be a better idea to extract this list implementation
<karolherbst>
and deal with merging to the one and only list implementation in the far future
<dcbaker>
I'd really prefer just to move it with the glsl compiler and have intel and nir use the util list
<dcbaker>
nir shouldn't be too hard, but the intel compiler relies on a bunch of C++ additions to the glsl list
<karolherbst>
uhh
<dcbaker>
I'll probably just have them keep reaching into glsl for now
<airlied>
dcbaker: jekstrand: posted an MR to remove some GLSL stuff from the intel compiler a few days ago
<dcbaker>
because that's easier, lol
<dcbaker>
\o/
<karolherbst>
maybe in 5 years I even find enough time to get rid of the TGSI stuff from nouveau
<karolherbst>
who knows
<karolherbst>
but we have nv30 using it as well and nv30 doesn't have support for nir
<karolherbst>
but my hope is that in the near future there will be a good reason to care about perf in nouveau
<karolherbst>
but it's not that day today
<dcbaker>
karolherbst: would it be easier to build an nv30-only nouveau and just send nv30 to the Amber branch?
gouchi has joined #dri-devel
gouchi has quit [Remote host closed the connection]
<karolherbst>
uhm.... maybe?
<karolherbst>
but nv30 isn't really a burden
<karolherbst>
now that I have GPUs supported by nv30 :O
<karolherbst>
dunno
<karolherbst>
I guess we could?
<karolherbst>
but nir support for nv30 would be a fun idea
<karolherbst>
and we do have anholt_ working on glsl to nir to tgsi
<dcbaker>
yeah, the nir->tgsi seems like the most realistic bet for tgsi
gouchi has joined #dri-devel
Peste_Bubonica has quit [Quit: Leaving]
<ajax>
karolherbst: is nouveau (the ddx) still dri2 by default?
<karolherbst>
ajax: you wanna hear the reason?
<ajax>
oh this oughta be good
<karolherbst>
apparently exa and dri3 don't play well together and this breaks plasma or something
<karolherbst>
never tried it myself, that's just what I got told
<karolherbst>
dri3 works perfectly fine under gnome as far as I can tell
<ajax>
boy i sure do love all this choice
<karolherbst>
:D
<karolherbst>
I hope I didn't disappoint you
<ajax>
oh not at all
<ajax>
exactly as dumb a reason as i expected and that's in no way your fault
<karolherbst>
:)
<karolherbst>
but hey
<karolherbst>
I am working on stuff to blame modesetting less for random nouveau bugs
<karolherbst>
so maybe once all my crappy patches land we blame applications instead
<airlied>
is anyone except Ilia still using the nouveau ddx? :-)
<karolherbst>
atm I have Ilia going berserk on any nouveau bug telling users to use the nouveau DDX because we blame Xorg on everything :)
<karolherbst>
airlied: users Ilia tells to use the nouveau ddx?
<airlied>
karolherbst: doh
<airlied>
it's like he could fix dri3 at least if he's going to keep pushing it