sdutt has quit [Read error: Connection reset by peer]
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
macromorgan has quit [Read error: Connection reset by peer]
danvet has joined #dri-devel
ppascher has joined #dri-devel
rasterman has joined #dri-devel
Namarrgon has quit [Ping timeout: 480 seconds]
Namarrgon has joined #dri-devel
<krh>
Good ISA spec discussion
yk has quit [Remote host closed the connection]
chinsaw has joined #dri-devel
chinsaw has quit []
flacks has quit [Quit: Quitter]
flacks has joined #dri-devel
anarsoul|2 has joined #dri-devel
pcercuei has joined #dri-devel
anarsoul has quit [Read error: Connection reset by peer]
camus has quit [Remote host closed the connection]
camus has joined #dri-devel
Peste_Bubonica has joined #dri-devel
Peste_Bubonica has quit [Quit: Leaving]
simon-perretta-img has joined #dri-devel
luckyxxl has quit [Quit: bye]
srslypascal has quit [Quit: Leaving]
dogukan has joined #dri-devel
srslypascal has joined #dri-devel
Haaninjo has joined #dri-devel
kts has joined #dri-devel
dogukan has quit [Remote host closed the connection]
camus has quit []
<alyssa>
manhattan seems to be triggering tons of resource shadowing flushes ... that seems not great
<alyssa>
and staging<-->AFBC blits
<alyssa>
things are way faster if I skip the shadowing flushes, results visually seem ok but that of course doesn't mean much
<alyssa>
23fps->25fps overall, that's substantial of course
<alyssa>
curiously deqp-gles2 passes without the flush
srslypascal has quit [Ping timeout: 480 seconds]
srslypascal has joined #dri-devel
tjmercier has quit [Remote host closed the connection]
srslypascal has quit [Ping timeout: 480 seconds]
dogukan has joined #dri-devel
Company has joined #dri-devel
lemonzest has joined #dri-devel
srslypascal has joined #dri-devel
dogukan has quit [Remote host closed the connection]
mclasen has quit []
mclasen has joined #dri-devel
srslypascal is now known as Guest1175
srslypascal has joined #dri-devel
Guest1175 has quit [Ping timeout: 480 seconds]
srslypascal is now known as Guest1177
srslypascal has joined #dri-devel
Guest1177 has quit [Ping timeout: 480 seconds]
srslypascal has quit [Ping timeout: 480 seconds]
srslypascal has joined #dri-devel
flacks has quit [Quit: Quitter]
Anorelsan has joined #dri-devel
sdutt has joined #dri-devel
<srslypascal>
Hi, on 2 of my machines, I'm seeing a very strange but reproducible crash pattern with kernel 5.18.x built with the "linux-hardened" patchset that's currently maintained by Levente Polyak. I assume that the hardened patchset does not *cause* the bug but merely triggers it, since it is a bit more paranoid about memory errors than a vanilla kernel. The crashes seem to be related to both the machine having an AMD GPU (using the amdgpu
<srslypascal>
module) and the machine loading/using the snd_hda_intel module. I've got two machines of the same Dell model, but one of them has an NVidia GPU and the other has an AMD GPU, and the crash occurs only on the machine with the AMD GPU. Also, the crash does not occur when I either boot with "modprobe.blacklist=snd_hda_intel" or when I boot with "snd_hda_intel.snoop=1". Interestingly, booting with "modprobe.blacklist=amdgpu" does *not*
<srslypascal>
prevent the crashes. I'm currently trying to narrow it down to a specific patch, but bisecting between kernel version branches really sucks since I'd have to adapt the hardening patchset for each bisect step… :/
<srslypascal>
I have already tried reverting commits 69458e2c27800da7697c87ed908b65323ef3f3bd, 6317f7449348a897483a2b4841f7a9190745c81b and acd289e04a0a1f52bea7ff1129b365626059e3c2 but that didn't help. I'm currently trying to build a kernel with commits fefee95488412796b293d28c948be6fce63d149b, 327e8ba54a212f707a68670c9372747b7a32bb92, c9db8a30d9f091aa571b5fb7c3f434cde107b02c and 00fd7cfad0548b6b7234c93370076f9b9c2e39f8 reverted. Any hints/guesses
<srslypascal>
what to try if this doesn't help either?
slattann has joined #dri-devel
srslypascal has quit [Quit: Leaving]
srslypascal has joined #dri-devel
<srslypascal>
I've also made screenshots of the error messages when the crashes occur (the two screenshots from today were made with "loglevel=7" and "snd_hda_intel.single_cmd=1"):