ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
ngcortes has joined #dri-devel
<kode54> I'm also waiting on gsc-mei or similar for a "single gen of hw" for xe.ko
<kode54> so that HuC firmware works
<kode54> upstream doesn't want to do that because that "single gen of hw" is different from preceding and following generations
<kode54> except it's literally the first two generations of dedicated GPUs from Intel
<kode54> well, no
<kode54> DG1 is early gen, but has the benefit of working with the old method
<kode54> and apparently DG3 or Xe2 is going to use something new, similar to MTL
<kode54> nobody seems to have surfaced media or CL support for Xe.ko yet, either
<kode54> I'm currently using a Polaris 10 GPU with PRIME for emulators, since those otherwise run like crap on the Arc cards, even though they should be technically faster
psykose has quit [Remote host closed the connection]
psykose has joined #dri-devel
kzd has quit [Quit: kzd]
co1umbarius has joined #dri-devel
columbarius has quit [Ping timeout: 480 seconds]
ngcortes has quit [Ping timeout: 480 seconds]
ngcortes has joined #dri-devel
rsalvaterra has quit []
rsalvaterra has joined #dri-devel
macromorgan has joined #dri-devel
Company has quit [Quit: Leaving]
<alyssa> kode54: likely dxvk bug, if the VVL is complaining
<alyssa> though crashing the GuC seems.. extreme
<kode54> It only crashes the GuC on xe.ko, not i915.ko
<kode54> I posted an apitrace of enough dx11 to cause the error (four whole frames, crashes on the first)
<kode54> Doesn’t crash with wined3d either
<kode54> But wined3d has other issues with this game, such as the gui occasionally causing polygon explosions that obscure the entire viewport and don’t go away unless I either dismiss the elements by picking up the item spawning them, or quit the game
<kode54> It’s basically what happens when a dev takes an early unreal engine game and ports it to newer unreal engine, from dx9 to dx11
<DemiMarie> kode54: gsc-mei is only needed for restricted content, which is something that I suspect most upstream developers only support reluctantly. The GuC can authenticate the HuC sufficiently for unrestricted media workloads.
<DemiMarie> In fact, one could make an argument that all restricted content support should be dropped because (IIUC) it requires closed source userspace and drivers/gpu requires open source userspace for all APIs.
<DemiMarie> alyssa: what I meant is that crashing the GuC is a Xe bug by definition, irrespective of whether the Vulkan API usage was valid or not.
<kode54> DG2 needs gsc to upload the HuC firmware
<DemiMarie> Ah
<DemiMarie> I had forgotten about those GPUs
<kode54> HuC firmware is needed for things like bitrate control
<DemiMarie> why is that important?
<kode54> Some people want to use this for streaming
<DemiMarie> How does the quality of the produced video compare to software encoding?
<kode54> It can encode 1080p av1 at over 600fps
<DemiMarie> I’m talking about quality vs bandwidth of the output
<kode54> Guess I’ll stick to taking a whole 24 hours to encode a movie
<DemiMarie> Does SW encoding parallelize well?
<kode54> I don’t know how the quality compares to software
<kode54> Av1 can’t parallelize at all unless I use tiled encoding
<DemiMarie> Tiled encoding?
<kode54> Splitting the frame into tiles and encoding them separately
<kode54> Ffmpeg does support this for svt av1
<kode54> It also requires a massive amount of memory
<kode54> But the quality is probably orders of magnitude better than the hardware
<kode54> I’ll have to test it
<DemiMarie> Context: in Qubes OS I want to expose the minimum acceleration necessary, because each GPU feature that is exposed is extra attack surface. Battery life is terrible anyway because of e.g. wakeups not getting batched by the hypervisor.
<DemiMarie> kode54: does the HuC firmware do any parsing of media during decode?
lemonzest has quit [Quit: WeeChat 4.0.5]
<DemiMarie> If so, that is a hard no for Qubes OS and will require that hardware decode remain unused.
<DemiMarie> How much memory are you talking about?
<kode54> HuC isn’t needed for decode
<kode54> But you will need GuC anyway for xe.ko, since it uses GuC scheduling only
<kode54> Not sure how much of i915 needs it for newer cards
<kode54> Does Qubes just throw out all proprietary firmware?
<DemiMarie> GuC doesn’t talk to untrusted inputs so doesn’t bother me
<kode54> Oh
<DemiMarie> kode54: No, but we are very skeptical of exposing proprietary firmware to potentially hostile inputs.
<kode54> Gotcha
<DemiMarie> Hence all of my concerns about userspace command submission.
kzd has joined #dri-devel
<DemiMarie> gfxstrand finally persuaded me that the inputs to the firmware are so simple that this is not a serious concern in practice, especially since the doorbells must be proxied when virtualization is in use.
lemonzest has joined #dri-devel
<DemiMarie> However, if the video firmware were to e.g. parse H.265 inputs, that would be a hard no.
<DemiMarie> Since I don’t do any video production (only calls), the metric that matters to me is realtime quality: maximum quality that can be encoded in real time at a given bandwidth constraint.
<DemiMarie> Is SW or HW better for that?
yyds has joined #dri-devel
<kode54> For av1, software can’t really do real time
<DemiMarie> Is that why video calls use other codecs like H.264?
<kode54> H.264 is the lowest common denominator, yes
<DemiMarie> What is the best codec for software real-time encoding?
<kode54> Good enough for general use, fast enough to do in software
<DemiMarie> Gotcha
<DemiMarie> Ah, apparently AV1 can be software encoded in real time but there are tradeoffs
Danct12 has joined #dri-devel
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
Danct12 has quit [Ping timeout: 480 seconds]
kzd has quit [Quit: kzd]
<mareko> zmike goes all in on those antimodifiers ;)
<ascent12> Is there any equivalent of USE_SCANOUT for vulkan? Or is it always expected to use GBM for allocation of buffers for KMS?
ngcortes_ has joined #dri-devel
<zmike> mareko: just let it end 🤕
<kode54> yes, let those crappy old cards end
<kode54> and let vendors not bother to come to market if their contribution is going to be the Arc
<kode54> otherwise I would have just bought another AMD card
<kode54> or maybe a nice 3060
ngcortes has quit [Ping timeout: 480 seconds]
<mareko> more modifier-dependent stuff isn't a bad thing
Danct12 has joined #dri-devel
egbert is now known as Guest1537
egbert has joined #dri-devel
orbea has quit [Remote host closed the connection]
orbea has joined #dri-devel
Guest1537 has quit [Ping timeout: 480 seconds]
<mareko> also the linear modifier isn't compatible between GPUs, so interop isn't always guaranteed to work
crabbedhaloablut has joined #dri-devel
<mareko> we should have LINEAR_64B, LINEAR_128B, LINEAR_256B, etc.
<mareko> #define DRM_FORMAT_MOD_LINEAR_ALIGNED(pitch_align) fourcc_mod_code(NONE, pitch_align) // there I fixed the linear modifier for you
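(For context: fourcc_mod_code() is the real packing helper from include/uapi/drm/drm_fourcc.h, combining an 8-bit vendor id with a 56-bit value into the opaque 64-bit modifier. Below is a minimal sketch of how mareko's pitch-aligned linear modifiers could be spelled; the _64B/_128B/_256B names are hypothetical and nothing like them exists upstream, where only DRM_FORMAT_MOD_LINEAR, i.e. fourcc_mod_code(NONE, 0), is defined.)

    #include <drm/drm_fourcc.h>

    /* Hypothetical pitch-aligned linear modifiers, following mareko's macro
     * above; the pitch alignment in bytes is carried in the modifier value. */
    #define DRM_FORMAT_MOD_LINEAR_ALIGNED(pitch_align) \
            fourcc_mod_code(NONE, (pitch_align))

    #define DRM_FORMAT_MOD_LINEAR_64B   DRM_FORMAT_MOD_LINEAR_ALIGNED(64)
    #define DRM_FORMAT_MOD_LINEAR_128B  DRM_FORMAT_MOD_LINEAR_ALIGNED(128)
    #define DRM_FORMAT_MOD_LINEAR_256B  DRM_FORMAT_MOD_LINEAR_ALIGNED(256)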
ngcortes_ has quit [Ping timeout: 480 seconds]
<kode54> I'm kind of mad I bought the wrong GPU though
<kode54> wish I knew why ANV was so CPU dependent
<ids1024[m]> Instead of having different modifiers for different linear pitch alignments, would it make sense to have an EGL extension method to query what alignment is required for import, and a gbm function that allocates with a given minimum alignment? Are alignment issues like this applicable to any modifiers other than linear?
egbert has quit [Remote host closed the connection]
egbert has joined #dri-devel
<ids1024[m]> And I guess the same minimum alignment would apply to both offset and pitch?
<mareko> only linear
<mareko> we haven't run into an issue with offset alignment, though there is probably something too
<mareko> modifiers contain all information about themselves, I don't think any query API can impose additional restrictions
<mareko> AMD only has 256B pitch alignment and Intel had to increase their alignment to match AMD to make interop work, but it's not sustainable or compatible with anything else
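(To illustrate the interop problem with made-up numbers: a 1366-pixel-wide XRGB8888 linear image needs at least 1366 * 4 = 5464 bytes per row; padding to a 256-byte pitch requirement gives 5632, while a 64-byte requirement is already satisfied at 5504, so a buffer allocated by the laxer device can be unusable by the stricter one. A minimal C sketch of the padding, assuming a power-of-two alignment:)

    #include <stdint.h>

    /* Round the minimal pitch (width * bytes-per-pixel) up to the device's
     * linear pitch alignment; only valid for power-of-two alignments. */
    static inline uint32_t align_pitch(uint32_t width, uint32_t cpp, uint32_t align)
    {
            return ((width * cpp) + align - 1) & ~(align - 1);
    }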
<soreau> vkICantBelieveItsNotModifiersEXT
fab has joined #dri-devel
yuq825 has joined #dri-devel
<kode54> flickering on the bottom of the compositor
Duke`` has joined #dri-devel
habernir has joined #dri-devel
habernir has quit []
Danct12 has quit [Quit: WeeChat 4.0.4]
itoral has joined #dri-devel
Danct12 has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
Danct12 has quit [Quit: WeeChat 4.0.4]
camus has quit []
Danct12 has joined #dri-devel
yyds has quit [Remote host closed the connection]
YuGiOhJCJ has joined #dri-devel
tzimmermann has joined #dri-devel
yyds has joined #dri-devel
pekkari has joined #dri-devel
pcercuei has joined #dri-devel
<kode54> damn
<kode54> this annoys the hell out of me
<kode54> I had plugged in my Radeon card as a second GPU because I thought my primary GPU was too slow at running Yuzu
<kode54> turns out it was running too slow even with xe.ko because I left Perfetto support enabled
mripard has joined #dri-devel
rasterman has joined #dri-devel
An0num0us has joined #dri-devel
sghuge has quit [Remote host closed the connection]
sghuge has joined #dri-devel
<airlied> mripard: a-b
<mripard> thanks :)
<lumag> pinchartl, just another ping for https://lore.kernel.org/linux-arm-msm/CAA8EJprBGrG0qMO3yrPxcPZu8kqcOZNw6htZZSKutYfFcZxBfQ@mail.gmail.com/ (and another answer in that thread).
mvlad has joined #dri-devel
<lumag> The mesa CI is now hitting this issue (cc DavidHeidelberg), so we'd like to solve this somehow. Either using this approach or some other one.
Company has joined #dri-devel
jmondi has quit [Ping timeout: 480 seconds]
sgruszka has joined #dri-devel
tursulin has joined #dri-devel
<kode54> um
<kode54> some person is trying to merge new functionality into wlroots to support tearing protocol
<kode54> they seem to think that they need `DRM_MODE_PAGE_FLIP_ASYNC` and that atomic modesetting doesn't support that
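(For reference, the flag in question is used on the legacy page-flip ioctl roughly as sketched below; fd, crtc_id and fb_id are assumed to be set up elsewhere, and at the time of this discussion the atomic API had no equivalent, which is what the wlroots thread is about. Hypothetical helper, error handling omitted.)

    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Request an immediate ("tearing") flip instead of waiting for vblank. */
    static int async_flip(int fd, uint32_t crtc_id, uint32_t fb_id, void *user_data)
    {
            return drmModePageFlip(fd, crtc_id, fb_id,
                                   DRM_MODE_PAGE_FLIP_EVENT | DRM_MODE_PAGE_FLIP_ASYNC,
                                   user_data);
    }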
pekkari has quit [Quit: Konversation terminated!]
kts has joined #dri-devel
pekkari has joined #dri-devel
pekkari has quit [Quit: Konversation terminated!]
pekkari has joined #dri-devel
lynxeye has joined #dri-devel
<kode54> hmm
<kode54> I didn't realize it wasn't implemented
<kode54> oh I see
<kode54> that patch set is currently dependent on some comment changes
<kode54> and emersion has been mostly afk for a while
<kode54> or at least, out of the picture
<kode54> I wish him the best
vliaskov has joined #dri-devel
sarahwalker has joined #dri-devel
<psykose> on vacation
<kode54> cool
anarsoul has quit [Remote host closed the connection]
anarsoul has joined #dri-devel
<pq> DemiMarie, karolherbst, I don't think GBM is the best EGL platform for headless rendering apps, because it's geared towards dmabuf import to KMS. The better choices are surfaceless platform, and maybe something with EGLDevice. Then just use glReadPixels in the app, and do wl_shm yourself. No need to hassle trying to get pixels from dmabuf or gbm_bo copied to CPU. You can probably even pipeline that
<pq> glReadPixels, too, instead of stalling?
<pq> OTOH, to avoid needing to touch app code, that's something you'd need to do in the Mesa EGL implementation or in the guest side Wayland proxy if you have one.
<pq> karolherbst, btw. gbm_bo_map() will do a de-tiling blit when necessary.
<pq> dmabuf mmap() won't
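(A rough sketch of pq's surfaceless-EGL-plus-glReadPixels suggestion for headless rendering; it assumes EGL 1.5 with EGL_MESA_platform_surfaceless, EGL_KHR_no_config_context and surfaceless-context support, and width/height/shm_data come from elsewhere. FBO setup and error handling are omitted.)

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>

    static void render_headless(int width, int height, void *shm_data)
    {
            EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_SURFACELESS_MESA,
                                                   EGL_DEFAULT_DISPLAY, NULL);
            eglInitialize(dpy, NULL, NULL);
            eglBindAPI(EGL_OPENGL_ES_API);

            static const EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
            EGLContext ctx = eglCreateContext(dpy, EGL_NO_CONFIG_KHR, EGL_NO_CONTEXT, ctx_attribs);
            eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

            /* ... create and bind an FBO of width x height, draw ... */

            /* Read back into CPU memory that can back a wl_shm buffer. */
            glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, shm_data);
    }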
gio has quit [Ping timeout: 480 seconds]
pekkari has quit [Quit: Konversation terminated!]
<karolherbst> ah, fair enough
<MrCooper> mareko: one issue with your linear-with-pitch-alignment modifier proposal is that different pitch alignments would always be considered incompatible, even if one is a multiple of the other, so drivers would have to advertise every possible multiple as well
<MrCooper> the general feeling has been that this kind of restriction would need to be handled separately from modifiers
<kode54> multiple of the other should be tested by testing if source is an even multiple of the destination alignment
<MrCooper> not how modifiers work
<kode54> I literally don't know what modifiers even do
<kode54> if it's just recast buffer as another format, why didn't GPUs already support that from the getgo
<pq> kode54, a modifier is an opaque number. Applications filter lists of modifiers without understanding what they mean. Either they match, or they don't.
<pq> then they pass the list forward, e.g. to a driver to allocate something
<kode54> and what the heck are they for
<pq> they are to agree on a buffer layout that everyone involved can understand
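(To make the opaqueness concrete: negotiation between two sides is a plain exact-match intersection of 64-bit values, roughly as sketched below with a hypothetical helper, which is why "one pitch alignment is a multiple of the other" cannot be expressed, per MrCooper's point above.)

    #include <stdint.h>
    #include <stddef.h>

    /* Intersect two advertised modifier lists; values are never interpreted,
     * only compared, so any extra constraint needs its own modifier. */
    static size_t intersect_modifiers(const uint64_t *a, size_t na,
                                      const uint64_t *b, size_t nb, uint64_t *out)
    {
            size_t n = 0;
            for (size_t i = 0; i < na; i++)
                    for (size_t j = 0; j < nb; j++)
                            if (a[i] == b[j])
                                    out[n++] = a[i];
            return n;
    }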
<kode54> oh
<kode54> I was under the mistaken impression that buffers were just arbitrarily configurable by a lengthy descriptor block of some sort
<karolherbst> it's more about tiling format
<pq> kode54, please, mind your language. It's making me not want to reply to you.
<pq> layout is tiling format, yeah
<kode54> saying heck is a swear now?
<pq> pixel format is separate
<pq> yes
<karolherbst> though in theory we could make GPUs eat tiling formats of other vendors by retiling through shaders instead of going through linear, but....
kode54 has quit []
shashanks_ has joined #dri-devel
<karolherbst> actually..... that shouldn't be too hard, just need a shader reading a tiled buffer through raw ssbos (or whatever) and the GPU automatically tiles it if rendering into a tiled surface...
<karolherbst> not sure if it's worth the effort, but might help laptops
<pq> karolherbst, what about texture filtering?
<karolherbst> why would that be a concern?
<pq> you'd lose the benefits of using dedicated filtering hardware?
<karolherbst> atm you blit into linear for scanout, and the receiving side tries to display it
<karolherbst> that would just skip that linear blit there
<pq> oh, you mean just for blits for display purposes
<karolherbst> yeah
<pq> alright
<karolherbst> like some hardware can't even render to linear e.g. nvidia
<karolherbst> well.. it can as long as you have no depth buffer
<karolherbst> but then again.. no idea if it's worth the effort
<karolherbst> but instead of going the linear route, Intel could e.g. just detile directly for displaying it
shashanks__ has quit [Ping timeout: 480 seconds]
<karolherbst> s/for/when/
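(For readers unfamiliar with what such a detiling shader actually computes: a tiled layout stores small rectangular blocks of pixels contiguously, so reading it through a raw SSBO boils down to address arithmetic like the C sketch below. The tile sizes and row-major tile order here are made up for illustration; real Intel/AMD/NVIDIA layouts add further swizzling.)

    #include <stdint.h>

    /* Byte offset of pixel (x, y) in a buffer tiled into txw x txh blocks,
     * 4 bytes per pixel, tiles laid out row-major. Illustrative only. */
    static inline uint64_t tiled_offset(uint32_t x, uint32_t y, uint32_t width,
                                        uint32_t txw, uint32_t txh)
    {
            uint32_t tiles_per_row = (width + txw - 1) / txw;
            uint32_t tile_x = x / txw, in_x = x % txw;
            uint32_t tile_y = y / txh, in_y = y % txh;
            uint64_t tile = (uint64_t)tile_y * tiles_per_row + tile_x;
            return (tile * txw * txh + (uint64_t)in_y * txw + in_x) * 4;
    }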
kts has quit [Ping timeout: 480 seconds]
kts has joined #dri-devel
gio has joined #dri-devel
habernir has joined #dri-devel
shashanks__ has joined #dri-devel
nir2042 has joined #dri-devel
shashanks_ has quit [Ping timeout: 480 seconds]
kts has quit [Ping timeout: 480 seconds]
apinheiro has joined #dri-devel
kts has joined #dri-devel
<zamundaaa[m]> <ascent12> "Is there any equivilent of..." <- Sadly there is not. If you want to have a guarantee that the buffers you get are scanout capable, you have to use GBM
<ascent12> Yeah I was skimming through the WSI code, and I saw some internal struct added onto pNext which seems to do it.
<ascent12> An API would be nice, but it would just be me saving a little bit of code setting up/using GBM, probably too niche a thing to really bother :P
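(What "just use GBM" looks like in practice, as a minimal sketch: allocate with GBM_BO_USE_SCANOUT, export the dmabuf, and import that into Vulkan, e.g. via VK_EXT_external_memory_dma_buf. Device path and sizes are placeholders; error handling omitted.)

    #include <fcntl.h>
    #include <gbm.h>

    static int alloc_scanout_bo(struct gbm_bo **out_bo)
    {
            int drm_fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
            struct gbm_device *gbm = gbm_create_device(drm_fd);

            /* Scanout-capable, GPU-renderable buffer; Vulkan WSI exposes no
             * public equivalent of this usage flag, which is the point above. */
            *out_bo = gbm_bo_create(gbm, 1920, 1080, GBM_FORMAT_XRGB8888,
                                    GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

            /* The returned dmabuf fd can be imported into Vulkan or EGL. */
            return gbm_bo_get_fd(*out_bo);
    }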
junaid has joined #dri-devel
JohnnyonFlame has quit [Ping timeout: 480 seconds]
junaid has quit [Remote host closed the connection]
JohnnyonFlame has joined #dri-devel
<cwabbott> dj-death: while rebasing the vulkan VK_EXT_attachment_feedback_loop_dynamic_state series I realized that we totally forgot to update the common renderpass/pipeline flag handling code to handle maintenance5 pipeline create flags :/
<cwabbott> I'm adding commits to introduce common helpers for that, switch all of that stuff over to the new pipeline flags enum, and then rebase everything on top of that
<dj-death> cwabbott: thanks, I can review
Danct12 has quit [Quit: WeeChat 4.0.4]
kts has quit [Ping timeout: 480 seconds]
itoral_ has joined #dri-devel
itoral_ has quit []
itoral_ has joined #dri-devel
itoral has quit [Ping timeout: 480 seconds]
<cwabbott> dj-death: done, it's part of !25436 now
<cwabbott> I wrote the changes to other drivers blind, so I'm build-testing now
sukrutb has quit [Ping timeout: 480 seconds]
kts has joined #dri-devel
habernir has quit [Quit: Leaving]
i509vcb has quit [Quit: Connection closed for inactivity]
<dj-death> cwabbott: looks correct to me
Danct12 has joined #dri-devel
Danct12 has quit [Quit: WeeChat 4.0.4]
Danct12 has joined #dri-devel
yyds has quit [Remote host closed the connection]
An0num0us has quit [Ping timeout: 480 seconds]
mal has quit [Quit: leaving]
Daaanct12 has joined #dri-devel
Danct12 has quit [Ping timeout: 480 seconds]
mal has joined #dri-devel
minecrell has quit [Quit: :( ]
minecrell has joined #dri-devel
yuq825 has quit []
CATS has quit [Ping timeout: 480 seconds]
CATS has joined #dri-devel
CATS has quit [Read error: Connection reset by peer]
macromorgan has quit [Read error: Network is unreachable]
CATS has joined #dri-devel
macromorgan has joined #dri-devel
agd5f has quit [Remote host closed the connection]
steve--w has quit [Read error: Network is unreachable]
steve--w has joined #dri-devel
ccaione has quit [Read error: Network is unreachable]
ccaione has joined #dri-devel
CosmicPenguin has quit [Read error: Network is unreachable]
rcn-ee___ has quit [Read error: Network is unreachable]
rcn-ee___ has joined #dri-devel
CosmicPenguin has joined #dri-devel
rsripada_ has quit [Remote host closed the connection]
zf has quit [Remote host closed the connection]
Daaanct12 has quit [Quit: WeeChat 4.0.4]
agd5f has joined #dri-devel
rsripada has joined #dri-devel
lstrano_ has quit [Remote host closed the connection]
zf has joined #dri-devel
lstrano_ has joined #dri-devel
unerlige has quit [Ping timeout: 480 seconds]
nchery has quit [Remote host closed the connection]
alyssa has quit [Remote host closed the connection]
nchery has joined #dri-devel
orbea has quit [Remote host closed the connection]
orbea has joined #dri-devel
alyssa has joined #dri-devel
shashanks_ has joined #dri-devel
itoral_ has quit [Remote host closed the connection]
DodoGTA has quit [Quit: DodoGTA]
DodoGTA has joined #dri-devel
shashanks__ has quit [Ping timeout: 480 seconds]
enunes has quit [Ping timeout: 480 seconds]
CATS has quit [Read error: Connection reset by peer]
CATS has joined #dri-devel
enunes has joined #dri-devel
lynxeye has quit [Quit: Leaving.]
kts has quit [Remote host closed the connection]
snaibc has joined #dri-devel
<snaibc>    ,     ,
<snaibc>    /(_,   ,_)\
<snaibc>     \ _/  \_ /  i​rc.d⁠e⁠ft.c​o​m
<snaibc>    //       \\
<snaibc>  \\ (@)(@) //   #supe⁠rbo​wl
<snaibc>     \'=\"==\"='/
<snaibc>  ,===/      \===,
<snaibc> \",===\  /===,\"
<snaibc> \" ,==='------'===, \"
<snaibc>  \"              \"
<snaibc> snaibc enunes CATS DodoGTA shashanks_ alyssa orbea nchery lstrano_ zf rsripada agd5f CosmicPenguin rcn-ee___ ccaione steve--w macromorgan minecrell mal JohnnyonFlame apinheiro nir2042 gio anarsoul sarahwalker vliaskov tursulin sgruszka Company mvlad sghuge rasterman mripard pcercuei tzimmermann YuGiOhJCJ fab egbert crabbedhaloablut alanc lemonzest rsalvaterra co1umbarius psykose
snaibc has left #dri-devel [FUCK YOU FROM IRC.SERVERCENTRAL.ORG]
<kallisti5[m]> nods
<kennylevinsen> has this kind of spam strategy *ever* worked to justify the scripting cost?
<mattst88> can someone give me ops here?
<karolherbst> same
<kallisti5[m]> I think the problem is the cost is 0
<ndufresne> don't you want to watch the super bowl now ?
<zmike> it's not even the right season
<tlwoerner> maybe they're advocating for more rust usage?
<kennylevinsen> I imagine I wouldn't want to watch it over IRC regardless
<kennylevinsen> kallisti5[m]: I mean, it took time away from improving their spam email techniques, no?
<hch12907> _sigh_
<heftig> could also be a false flag attack to get angry people to join that channel
<kallisti5[m]> also, by the responses here... it's working
<kallisti5[m]> I should start asking that channel for mesa support
<hch12907> not if we don't care about the message at all
<hch12907> (other than being tagged)
<kallisti5[m]> Your time isn't valuable if you're worthless 🤗
<kennylevinsen> kallisti5[m]: that is a *great* idea
<pixelcluster> smh not even their ascii art works
<zmike> oh I get it
<zmike> it's superb owl
vliaskov has quit [Ping timeout: 480 seconds]
<zmike> not superbowl
<kennylevinsen> 00000000 23 73 75 70 65 e2 81 a0 72 62 6f e2 80 8b 77 6c |#supe...rbo...wl|
<kennylevinsen> unicode fun?
<simon-perretta-img> Padding to get around word/pattern filters I guess
<pixelcluster> an attempt at irc color codes maybe?
<Newbyte> I see colour here in my Matrix client
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
neniagh has quit [Ping timeout: 480 seconds]
<hashar> e2 81 a0 is the UTF-8 encoding of the Unicode word joiner; it is not visible and merely an indication to prevent line breaking at that point
alyssa has quit [Quit: alyssa]
apinheiro has quit [Quit: Leaving]
rasterman has quit [Quit: Gettin' stinky!]
<karolherbst> zmike: soo.. it looks like in lvp 1Darray, 2Darray and 3D images are kinda broken, or at least I have crashes in JIT code... but the nir llvmpipe generates is also... "interesting"
<karolherbst> 32x4 %44 = (float32)txl %29 (0x3, 0x1, 0x0) (texture_handle), %43 (0x3, 0x0, 0x0) (sampler_handle), %42 (coord), %0 (0.000000) (lod), 0 (texture), 0 (sampler)
<karolherbst> in case you have any ideas
<karolherbst> uhhh
<karolherbst> I think I know what's up
<karolherbst> yo.. pain
<karolherbst> it's unnormalized coordinates
<karolherbst> are unnormalized coordinates an optional vulkan feature or something?
<karolherbst> though the vvl should have complained
<karolherbst> maybe some weird lowering somewhere messing it up
jmondi has joined #dri-devel
kzd has joined #dri-devel
<karolherbst> same issue on radv
<karolherbst> nice
<Ristovski> It seems like the spam was actually a 1000IQ move to revive this channel
<karolherbst> that must be it
unerlige has joined #dri-devel
<Ristovski> joke was on us all along
<austriancoder> karolherbst: In an old MR I found this commit: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/18986/diffs?commit_id=1d3f6fa7d68471e3d7b6d2868bd9beb0aae95181 do you still plan to do it in the frontend or should that be done in the backend? I am happy with everything ... just want to know
heat has joined #dri-devel
<karolherbst> austriancoder: good question. I think it might make sense for drivers to report what sys vals they want to see lowered in the frontend
<karolherbst> a lot of drivers are basically lowering them via push constants or uniforms or other similar things...
<karolherbst> and every driver lowers some
<karolherbst> so we could just make it a frontends problem in the future, but I also don't know if anybody actually cares enough to make that change
kts has joined #dri-devel
<karolherbst> or if drivers want to be in control to do something better than just putting those into ubos
<austriancoder> so leave it to the driver .. fine
<karolherbst> yeah... how much work would that be?
<austriancoder> 5-10 minutes
<karolherbst> though I'm really considering, because I might need this for non uniform work groups anyway
<karolherbst> ahh
<karolherbst> yeah, that's fine then
<austriancoder> another question
<austriancoder> is there a spirv to nir cache somewhere used by rusticl that I need to enable? It takes about 7-8 seconds until I see the first nir_print_shader(..) outputs on my dut.
<karolherbst> it uses the driver cache
<karolherbst> the 7-8 seconds startup time is for converting libclc to nir
<karolherbst> it's like a 2MiB spirv
<karolherbst> rusticl also has its own cache for some CLC to spirv stuff I think? But anyway, it uses the driver cache for driver specific binaries
<austriancoder> ah okay .. I have disabled the driver cache as I want it disabled most of the time and I was too lazy to set up my env with MESA_SHADER_CACHE_DISABLE
<karolherbst> yeah.... I'm wondering if I want to make the libclc -> nir thing driver independent, but...
<karolherbst> I think the reason is that we need the driver's compile options here
<karolherbst> everything OpenCL to SPIR-V gets cached by rusticls internal cache
yyds has joined #dri-devel
yyds has quit []
neniagh has joined #dri-devel
pekkari has joined #dri-devel
<mareko> MrCooper: yes, advertising all possible multiples of the pitch would be necessary up to a certain number e.g. 1024, isn't that how it's supposed to work? or do you want the kernel, mesa, and other drivers to each have their own pitch alignment query?
Haaninjo has joined #dri-devel
<MrCooper> that'd be a long list of linear modifiers; the idea would be some kind of other mechanism which better fits this kind of constraint
<zmike> karolherbst: huh
<mareko> MrCooper: that would require a change in every user of modifiers
<karolherbst> zmike: yeah.. I have no idea what's causing it.. the nir looks fine
<zmike> I feel like I've asked this before but is there a reason you can't use the same lowering for unnormalized that every other state tracker uses
<karolherbst> because hardware actually supports it?
<karolherbst> and I don't really want to do shader variants just for that
<karolherbst> it's unknown at compile time, so that would be a huge pain and everything
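(For context, the lowering zmike refers to amounts to dividing the coordinates by the selected level's dimensions before sampling, roughly as below; karolherbst's objection is that whether a sampler is unnormalized is only known when it is bound, so doing it in the shader means compiling per-sampler variants. Illustrative C, not the actual Mesa pass.)

    /* Convert unnormalized texel coordinates to normalized ones. */
    static inline void normalize_coords(float u, float v,
                                        unsigned width, unsigned height,
                                        float *s, float *t)
    {
            *s = u / (float)width;
            *t = v / (float)height;
    }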
Duke`` has joined #dri-devel
<karolherbst> anyway.. it works with all the drivers, just radv/lvp are kinda.. either broken there or something else is going on, but anv is entirely fine here
<karolherbst> mhh.. even if all coords are 0 it defaults in lvp...
<karolherbst> *segfaults
kts has quit [Ping timeout: 480 seconds]
<karolherbst> uhhh actually...
<karolherbst> yeah.. no idea what's up there
<karolherbst> probably some weirdo driver bug
kts has joined #dri-devel
<pekkari> can I have your thoughts on the following kasan report? http://paste.debian.net/1293407/
<pekkari> it seems a legit bug, and more or less easy to reproduce in my vm, but I fail to find where update_plane could free a crtc structure
kts has quit []
<karolherbst> fixed it for lvp.. uhhh
<karolherbst> zmike: anyway.. it's a lvp driver bug.. I just fixed it, and I expect the same to be true for radv
<zmike> 🤔
<zmike> seems like this should be tested by vkcts if it's a driver bug
<karolherbst> something with the JIT
<karolherbst> this was changed like 4 months ago
<karolherbst> maybe a regression
<karolherbst> maybe not
<karolherbst> but maybe nobody cares/notices if it's not 1D/2D
<zmike> huh
<zmike> oh okay
<zmike> you're violating spec
<zmike> but there's no explicit VU
<karolherbst> I am?
<zmike> When unnormalizedCoordinates is VK_TRUE, images the sampler is used with in the shader have the following requirements:
<zmike> The viewType must be either VK_IMAGE_VIEW_TYPE_1D or VK_IMAGE_VIEW_TYPE_2D.
<zmike> The image view must have a single layer and a single mip level.
<karolherbst> pain
<zmike> but there's no VU
<zmike> so I'll raise an issue and see what happens
<karolherbst> yeah.. I kinda need it for all image types
<zmike> most likely any solution here would involve a new extension
<karolherbst> but the vvl didn't complain
<zmike> or maybe a maintenance extension as a shortcut
<zmike> yeah like I said there's no VU for it
<karolherbst> ahh
<zmike> I expect that to be resolved within the next week or two
<karolherbst> cool
<karolherbst> okay.. so anything non 1D/2D we'd have to lower it
rgallaispou has left #dri-devel [#dri-devel]
jernej_ is now known as jernej
<karolherbst> I'm inclined to not lower it and wait until anybody actually files a bug requiring it... or I'll write a CL specific vulkan extension to allow this kind of nonsense :D but probably the lesser evil than shader variants, given there is hardware supporting it
<karolherbst> I wonder what clvk is doing here...
<zmike> given that approximately zero people will be using rusticl+zink seriously for the next year+ I'd say just go with it for now
<zmike> leave a ticket open in the tracker
<karolherbst> we'd only have to care if we want to file for conformance
<karolherbst> probably
<zmike> they care about validation errors?
<zmike> in that case file now since there's no errors
<zmike> :P
<karolherbst> good question
<karolherbst> "For layered implementations, for each OS, there must be Successful Submissions using at least two (if
<karolherbst> available) independent implementations of the underlying API from different vendors (if available), at
<karolherbst> least one of which must be hardware accelerated." is all I can find
<zmike> so you're good
<karolherbst> if radv and anv count as "independent implementations of the underlying API from different vendor"
<karolherbst> because they sure don't sound like independent to me :D
<zmike> dunno
<zmike> not sure anyone's ever hit this before
<karolherbst> yeah...
<karolherbst> I could probably ask
<karolherbst> zink-lvp: Pass 2403 Fails 49 Crashes 2
Company has quit [Quit: Leaving]
Company has joined #dri-devel
Company has quit [Remote host closed the connection]
<karolherbst> still want to get luxmark to run on radv without crashing the GPU :D
<karolherbst> probably another bug lurking somewhere
paulk-bis has joined #dri-devel
paulk has quit [Read error: Connection reset by peer]
sgruszka has quit [Ping timeout: 480 seconds]
Daanct12 has quit [Quit: What if we rewrite the code?]
karolherbst has quit [Remote host closed the connection]
karolherbst has joined #dri-devel
CATS has quit [Read error: Connection reset by peer]
<karolherbst> funky.. the rendering is broken on anv :D... I guess there is a real bug somewhere then
<karolherbst> not even texture related...
DodoGTA has quit [Read error: Connection reset by peer]
An0num0us has joined #dri-devel
Company has joined #dri-devel
CATS has joined #dri-devel
vliaskov has joined #dri-devel
Kayden has quit [Quit: -> JF]
sarahwalker has quit [Remote host closed the connection]
vliaskov_ has joined #dri-devel
konstantin has quit [Ping timeout: 480 seconds]
Company has quit [Read error: Connection reset by peer]
konstantin has joined #dri-devel
Company has joined #dri-devel
vliaskov has quit [Ping timeout: 480 seconds]
kts has joined #dri-devel
i509vcb has joined #dri-devel
cmichael has joined #dri-devel
jkrzyszt has joined #dri-devel
pekkari has quit [Quit: Konversation terminated!]
Danct12 has joined #dri-devel
ngcortes has joined #dri-devel
tzimmermann has quit [Quit: Leaving]
Kayden has joined #dri-devel
sukrutb has joined #dri-devel
vyivel has joined #dri-devel
rasterman has joined #dri-devel
An0num0us has quit [Ping timeout: 480 seconds]
DodoGTA has joined #dri-devel
CATS has quit [Ping timeout: 480 seconds]
jhli has quit [Remote host closed the connection]
CATS has joined #dri-devel
cmichael has quit [Quit: Leaving]
ngcortes has quit [Ping timeout: 480 seconds]
jkrzyszt has quit [Ping timeout: 480 seconds]
ngcortes has joined #dri-devel
jhli has joined #dri-devel
Leopold__ has joined #dri-devel
Leopold_ has quit [Remote host closed the connection]
vyivel has quit [Remote host closed the connection]
vyivel has joined #dri-devel
Danct12 has quit [Quit: What if we rewrite the code?]
flom84 has joined #dri-devel
rasterman has quit [Quit: Gettin' stinky!]
ngcortes_ has joined #dri-devel
CATS has quit [Ping timeout: 480 seconds]
CATS has joined #dri-devel
ngcortes has quit [Ping timeout: 480 seconds]
kts has quit [Quit: Konversation terminated!]
crabbedhaloablut has quit []
Leopold__ has quit [Remote host closed the connection]
Leopold_ has joined #dri-devel
<mareko> karolherbst: the fix for radeonsi compute-only contexts is in main, FYI
Haaninjo has quit [Quit: Ex-Chat]
fdu_ is now known as fdu
rsalvaterra has quit []
fab has quit [Quit: fab]
rsalvaterra has joined #dri-devel
sravn has quit [Quit: WeeChat 3.5]
sravn has joined #dri-devel
alarumbe has joined #dri-devel
larunbe has quit [Ping timeout: 480 seconds]
flom84 has quit [Quit: Leaving]
rsalvaterra has quit []
rsalvaterra has joined #dri-devel
An0num0us has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
heat has quit [Remote host closed the connection]
<robclark> pekkari: see 4e076c73e4f6e90816b30fcd4a0d7ab365087255
<mareko> karolherbst: it also allows compute jobs to run in parallel with gfx and increases the GPU hang timeout from 10s to 60s
<karolherbst> ohh, so radeonsi allocates a compute queue thing then?
<mareko> karolherbst: yes
<karolherbst> okay, that's going to be useful indeed
<karolherbst> could we also enable hmm/mmu_notifiers with that then, to support SVM? Or would that require more work?
<mareko> karolherbst: I'm not familiar with that
<karolherbst> it's about replayable page faults
<airlied> I think that's more work
<karolherbst> okay
<airlied> not sure if that stuff is only exposed via amdkfd at the moment
<mareko> karolherbst: RDNA can't do that anyway, not very well at least
<karolherbst> ahh
<karolherbst> okay
<airlied> oh yeah it's also limited on non-compute gpus
<karolherbst> I also have a userptr based SVM implementation I need to finish, but that doesn't require kernel support, just the driver to manage the VM
Kayden has quit [Quit: move stuffs]
<karolherbst> on the mesa side I mean
<mareko> I'm sure KFD doesn't support and won't support replayable page faults on RDNA due to hw limitations
<airlied> yeah that would be difficult :-)
<airlied> one more reason CUDA will continue to eat people's lunch :-P
<mareko> it's also not a good idea to have that complexity in a gaming GPU in general
<karolherbst> well.. nvidia has it on all GPUs
<karolherbst> since pascal
<mareko> do they have a TLB in every SM?
<karolherbst> it's only for compute tho
<karolherbst> mhh.. good question
<airlied> mareko: it is if you want to beat NVIDIA and provide a consistent behaviour across devices :-P
<mareko> you can either have a TLB per SM or higher in the hierarchy
<airlied> I understand it might not be easy or trivial to get working :-)
<mareko> e.g. 128 SM is 128 TLBs vs 16 TLBs per memory channel
<karolherbst> I think on nvidia it's one level higher than SMs
<karolherbst> they have pairs of SMs
<karolherbst> called TPC
<mareko> getting it working isn't the issue, putting it in the chip and increasing power, cost, and latencies is
<karolherbst> but I think the HPC gpus are a bit special there, just not in terms of functionality
<karolherbst> but anyway.. no idea about the specifics of the TLB arrangement here
<DemiMarie> karolherbst: could there be a kernel command line option to ensure that compute and gfx are serialized? I don’t want gfx crashes killing long running compute workloads.
<karolherbst> depends on the driver I suspect
<DemiMarie> The correct fix is of course to be able to kill individual crashed workloads without taking unrelated workloads with them, but I don’t know which consumer GPUs support that (and do not care about datacenter GPUs)
<karolherbst> on nvidia there isn't really a strict separation between compute/gfx and you can't really run it in parallel anyway
<DemiMarie> AGX apparently allows for running multiple contexts in parallel, as shown by faults in one job killing others
<DemiMarie> Which to me is just a flat out bug in the firmware and/or hardware — the GPU should provide the same amount of isolation as the CPU.
<DemiMarie> It wouldn’t help for Apple, but hopefully a future Vulkan spec requires this level of isolation.
<karolherbst> I doubt you'd get enough people on the WG to sign up on it
<DemiMarie> why?
<karolherbst> sounds like a lot of work for everybody :P
<karolherbst> also.. how would you be able to ensure this
<karolherbst> would have to write a CTS tests which tests this
<mareko> DemiMarie: amdgpu only kills shaders of one process and only if that fails, it resets everything
<DemiMarie> mareko: when can that fail?
<mareko> DemiMarie: any situation that can't be resolved by killing shaders, such as a rasterizer hang
<DemiMarie> karolherbst: add a new SPIR-V intrinsic that deliberately crashes the shader, implemented on Mesa as an illegal instruction
<DemiMarie> mareko: when can the rasterizer hang?
<karolherbst> DemiMarie: not the same
<mareko> if the driver is bad
<DemiMarie> mareko: kernel or userspace driver?
<mareko> either
<mareko> or vulkan app
<DemiMarie> I assume that having the kernel-mode driver prevent this would require moving too much of Mesa into the kernel?
<karolherbst> it would mean validating the entire command submission
<mareko> vulkan allows hanging the GPU entirely and not just shaders
<karolherbst> which means you get CPU rendering speed
<DemiMarie> karolherbst: why is command validation so slow?
<karolherbst> seriously, you can't validate that userspace won't be able to crash the GPU
<karolherbst> DemiMarie: because you'd have to execute the shader on the CPU
<karolherbst> to check it doesn't do anything bad, like a null pointer access
<DemiMarie> karolherbst: then it is a hardware bug that the GPU cannot recover from any crashes
<karolherbst> or passes a null/invalid buffer where it shouldn't
<ccr> :P
<karolherbst> well...
<karolherbst> yeah, but we have that hardware
<DemiMarie> karolherbst: so obviously I am very much missing something that is presumably obvious to everyone else here, but to me this just seems like GPU hardware is horrible
<karolherbst> well
<mareko> there is specialized hw that has better QoS like CDNA
<karolherbst> let me put it this way
<karolherbst> why can't the kernel verify a userspace application won't access memory OOB before running it?
<karolherbst> same thing
<DemiMarie> mareko: not helpful for any of my use-cases
atipls has quit [Killed (NickServ (Too many failed password attempts.))]
atipls has joined #dri-devel
<karolherbst> why is it okay for the CPU to allow such applications, but not for the GPU?
<DemiMarie> karolherbst: my point is that a faulting application’s impact _should_ be limited to that single application
<DemiMarie> and on the CPU, it is
<ccr> the halting problem called and wants its Turing machine back
<karolherbst> DemiMarie: ohh sure,.. in the perfect world GPU contexts are completely isolated and you just need to reap that context
<karolherbst> but sometimes hardware can't do it, or there are bugs
<mareko> you can buy 1 GPU per process :D
<DemiMarie> karolherbst: why can’t the hardware do it?
<karolherbst> because it can't
<HdkR> Expose SR-IOV on consumer class chips, SR-IOV every process :D
<karolherbst> can't fix hardware after it was produced, can you
<mareko> buy 1 GPU per process thread
<DemiMarie> no, but this kind of bug never happens on a CPU
<karolherbst> I'm sure they do happen on CPUs
<DemiMarie> okay, almost never happens
<mareko> it's not a bug, it's a feature :)
<karolherbst> but yeah.. on CPU the matter is more sensitive
<karolherbst> so the vendors generally care more
<karolherbst> and it's also not bad on all GPUs
<DemiMarie> which GPUs is it not bad on?
<HdkR> H100 because then you don't have graphics to muck things up ;)
<karolherbst> good question... I want to say nvidia, but the driver is terrible in terms of recovery and it generally just crashes the entire GPU without actually recovering... no idea why, because it's not really a hardware problem, just the driver being bad here
<mareko> I suggest you go in front of NVIDIA HQ and protest
<karolherbst> well.. nvidia's driver that is
<karolherbst> nouveau also has bugs in the recovery code and it's generally less stable if you have multiple processes crash their context
<mareko> you can buy TV ads to increase awareness
<karolherbst> anyway.. it's not really considered a problem yet, so I doubt much will change, but vendors are slowly moving towards making things more robust
<karolherbst> but anyway, GPUs are fully programmable hardware, and it's not feasible to reject command buffers triggering full GPU resets
<DemiMarie> karolherbst: unless _no_ command buffer can trigger a GPU reset
<DemiMarie> which is the case on the CPU
<DemiMarie> the worst you can get is SIGSEGV/SIGABRT/etc
<DemiMarie> is there a reason that this analogy does not hold?
<karolherbst> well.. then you kill the process
<karolherbst> and (some?) gpu drivers can just kill the context triggering those
<karolherbst> it's just not perfect
<DemiMarie> I guess my question is why it is not perfect.
<karolherbst> ask the vendors
<DemiMarie> Or is this a question that only the HW vendors can answer?
<karolherbst> there are always hardware limitations on how you can recover from what type of faults.
<karolherbst> GPUs are also way more complex than CPUs
<mareko> DemiMarie: cost
<mareko> if you are willing to pay 10x or 100x the price for a GPU, we can make it perfect
<DemiMarie> karolherbst mareko: why are there these limitations? why is it so much harder on a GPU?
mvlad has quit [Remote host closed the connection]
<mareko> nobody wants to pay for such a feature
<DemiMarie> Or is it just that GPUs are where CPUs were 20 years ago, because they are so much younger than CPUs are?
<karolherbst> ehh no, GPUs are just more complex
<DemiMarie> because of fixed function?
<mareko> like seriously, we can make it perfect
<karolherbst> well.. the graphic pipeline consists of many pieces
<karolherbst> not just the shader
<DemiMarie> mareko: why does it add such a huge overhead on a GPU but not on a CPU?
<mareko> not huge overhead
<mareko> just a little bit
<DemiMarie> how much, ish?
<mareko> the cost is in R&D, die size, and volume produced
<karolherbst> doesn't really matter in the end, because as long as vendors don't care enough, we have to deal with what we got
<DemiMarie> ugh
<karolherbst> but they are getting better
<mareko> R&D is pricy, the die size increases cost and power, and low volume increases the cost since you need to buy whole wafers and the first one is like 20 million or so
<karolherbst> 10 years ago not all GPUs had MMUs :')
<karolherbst> well
<kisak> There's one place where it makes sense to go through all the extra costs to be completely fault tolerant, and that's if you're sticking it in mission critical hardware for space flight, but who needs a GPU to run a spacecraft or satellite?
<karolherbst> or was it 5?
<karolherbst> do we still have GPUs without MMUs produced today?
<DemiMarie> BTW, software fault isolation is a thing, so one can have a secure system with no MMU
Kayden has joined #dri-devel
<karolherbst> mhhh
<karolherbst> well
<karolherbst> in theory
<DemiMarie> but anyway, now I understand that this is not a technical problem but an economic problem
<mareko> that's exactly right
<Dark-Show> In general when an issue is fixed and a merge request is submitted, does the issue owner close the issue then or wait until the merge is in main? (Submitted my first issue and want to close it properly)
<mareko> Dark-Show: if the fix is in main, you can close manually, but usually pushing the fix closes tickets automatically if the commit message references it
<Dark-Show> Ok, I'll leave it alone then, thanks!
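(For reference, the auto-close mareko mentions works by referencing the issue in the commit message; Mesa commits typically carry a trailer of roughly the following form, with the placeholder replaced by the actual ticket number:)

    Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/<issue-number>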
pcercuei has quit [Quit: dodo]
ngcortes_ has quit [Ping timeout: 480 seconds]
ngcortes_ has joined #dri-devel
vliaskov_ has quit [Remote host closed the connection]
ngcortes_ has quit []
ngcortes has joined #dri-devel
Company has quit [Quit: Leaving]
shashanks__ has joined #dri-devel
shashanks_ has quit [Ping timeout: 480 seconds]
tursulin has quit [Read error: Connection reset by peer]
Kayden has quit [Quit: reboot]
Mangix has quit [Read error: No route to host]
Mangix has joined #dri-devel
TMM has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM has joined #dri-devel
An0num0us has quit [Ping timeout: 480 seconds]
Mangix has quit [Read error: No route to host]
Mangix has joined #dri-devel
Mangix has quit []
Mangix has joined #dri-devel