<javierm>
tzimmermann: oh, I didn't know that. Sorry!
<tzimmermann>
not really a big deal
<tzimmermann>
but dim cherry-pick logs the original commit id, which can be helpful for people dealing with conflicting trees
<javierm>
tzimmermann: got it. Will use it next time
<javierm>
tzimmermann: I also wondered how it would work since the sha-1 won't match. Would drm-misc-next backmerge v5.19-rc5 where the fixes landed ?
<tzimmermann>
well, the maintainers have to deal with it during (back-)merges
<tzimmermann>
sometimes git seems to auto-detect and resolve such conflicts.
<javierm>
tzimmermann: Yeah. And I guess drm-next will backmerge at some point and drm-misc-next backmerge drm-next
<javierm>
so the conflict will get resolved before the next merge window
<javierm>
well, the duplicated commits really, because I don't expect a conflict since the patches apply cleanly on both bases
lynxeye has joined #dri-devel
mwalle has joined #dri-devel
jkrzyszt has joined #dri-devel
saurabhg has joined #dri-devel
Company has quit [Read error: Connection reset by peer]
kts has quit [Quit: Konversation terminated!]
rgallaispou has joined #dri-devel
lygstate has joined #dri-devel
jkrzyszt has quit [Ping timeout: 480 seconds]
apinheiro has joined #dri-devel
lynxeye has quit [Remote host closed the connection]
lynxeye has joined #dri-devel
bmodem has quit []
pixelcluster_ is now known as pixelcluster
pixelcluster has quit [Quit: ZNC 1.8.2+deb2 - https://znc.in]
pixelcluster has joined #dri-devel
hansg has quit [Remote host closed the connection]
pcercuei has joined #dri-devel
lygstate_ has joined #dri-devel
lygstate has quit [Read error: Connection reset by peer]
wvanhauwaert has joined #dri-devel
<wvanhauwaert>
Hello, I want to make sure that no color correction is happening on my output. For that, I wanted to verify first if there's something configured, to adapt afterwards. However, drmModeGetCrtc returns a CRTC gamma size of 0. What does that mean? Is it not supported, or just not implemented? My platform is NXP's i.MX53, running kernel 5.15
maxzor_ has quit [Ping timeout: 480 seconds]
lygstate has joined #dri-devel
lygstate_ has quit [Read error: Connection reset by peer]
<lynxeye>
wvanhauwaert: The IPU on the i.MX53 does not have a LUT for the gamma correction, but a piece-wise polynomial approximation. As there is some impedance mismatch between the two methods, nobody cared to implement support for gamma correction in the driver.
Daanct12 has quit [Remote host closed the connection]
Daanct12 has joined #dri-devel
Daanct12 has quit [Remote host closed the connection]
Daanct12 has joined #dri-devel
lygstate_ has joined #dri-devel
<wvanhauwaert>
lynxeye, and is it possible to turn off the gamma correction completely on this device?
<wvanhauwaert>
so no polynomial approximation either?
lygstate has quit [Read error: Connection reset by peer]
<wvanhauwaert>
or no default things? just a pass-through of what you put in
<pq>
or something else less horrible to fix the issue explained in that commit
<pq>
it's about running apps with ASan
JoniSt has joined #dri-devel
alarumbe has joined #dri-devel
mclasen_ has quit [Ping timeout: 480 seconds]
agx has quit [Read error: Connection reset by peer]
agx has joined #dri-devel
<ishitatsuyuki>
so, I assume --- if you load and unload llvmpipe multiple times, it would actually leak things?
lygstate_ has quit [Remote host closed the connection]
<ishitatsuyuki>
if that's the case, it sounds very sensible to suppress unload
<javierm>
tzimmermann: we forgot to include b84efa28a48 ("drm/aperture: Run fbdev removal before internal helpers") in the fixes for -rc5 :/
<pq>
ishitatsuyuki, I've no idea if that actually leaks or just finds the old thread-local stuff again.
<javierm>
tzimmermann: and I think both Daniel and Dave are on PTO so probably there won't be DRM fixes sent to Linus for rc6...
<tzimmermann>
javierm, please queue it up
<javierm>
tzimmermann: yes, I'll do that now
<javierm>
tzimmermann: sorry, I completely forgot about your follow-up patch
<tzimmermann>
is it in 5.18?
<tzimmermann>
or 5.19-rc*
<javierm>
tzimmermann: 5.19-rc5
<pq>
but it does make me wonder how that thread-local stuff is going to be freed when the thread exits... or maybe that's up to the app to call something on each thread before it exits? But then, Weston should be doing that already...
<tzimmermann>
the next drm-fixes should go upstream early next week
<pq>
also, if the thread-local stuff has a destructor defined, then what's to stop that from calling into unloaded LLVM?
<pq>
err...
<ishitatsuyuki>
DSO lifetime issue is not a new problem, but the particular case of LLVM sounds very weird indeed
<pq>
LLVM is not unloaded, just llvmpipe is, so...
<javierm>
I guess it's OK if I change it to the sha-1 in v5.19-rc5 ?
<tzimmermann>
javierm, what did you do?
<javierm>
tzimmermann: dim cherry-pick fb84efa28a48 && dim push-branch drm-misc-fixes
<javierm>
tzimmermann: but the problem is that the Fixes: tag in your commit that's in drm-misc-next is for the commit that's in drm-misc-next, not the one in v5.19-rc5
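For reference, the logging tzimmermann described is what plain `git cherry-pick -x` gives you (dim cherry-pick behaves similarly, as far as I know): the original commit id gets appended to the new commit's message, so maintainers can match the duplicated commits across trees. A throwaway-repo demonstration:

```shell
#!/bin/sh
set -e
# Throwaway repo: one fix on a "fixes" branch, picked back to main.
dir=$(mktemp -d); cd "$dir"
git init -q -b main repo; cd repo
git config user.email dev@example.com
git config user.name dev
echo base > file; git add file; git commit -qm base
git checkout -qb fixes
echo fix >> file; git commit -qam "the fix"
orig=$(git rev-parse HEAD)
git checkout -q main
# -x records the original commit id in the new commit message
git cherry-pick -x fixes
git log -1 --format=%B | grep "cherry picked from commit"
```

The picked commit necessarily gets a new sha-1 (different parent), which is why the "(cherry picked from commit ...)" line is the only reliable link between the two copies.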
<karolherbst>
jekstrand, dcbaker: wondering if we want to target 22.2 for rusticl (+ conformance)... there isn't much left but depends on if we want to merge rusticl this week (or next one)
aravind has quit [Ping timeout: 480 seconds]
zehortigoza has joined #dri-devel
thellstrom has joined #dri-devel
kts has quit [Quit: Konversation terminated!]
zehortigoza has quit [Remote host closed the connection]
alyssa has joined #dri-devel
zehortigoza has joined #dri-devel
<alyssa>
Would it make sense to transcode unsupported BC* formats to supported ones?
<alyssa>
Kayden: ^^ IIRC you were working in this area recently
saurabhg has joined #dri-devel
<HdkR>
alyssa: Probably only makes sense for GL, not Vulkan. Vulkan can just... not report support for those formats
<HdkR>
Both Intel and AMD have had to emulate some unsupported compressed formats
<alyssa>
HdkR: Yeah, this is for big GL
<HdkR>
Sadly these formats are expected to be supported in GL space
<HdkR>
Desktop GPU influence is strong
jfalempe has quit [Quit: Leaving]
<HdkR>
Of course with more games running on ARM devices, the more edge cases people are going to be hitting :)
<bnieuwenhuizen>
HdkR: we emulate ETC2 if needed on radv :)
<bnieuwenhuizen>
sadly a requirement for Android
<HdkR>
Yep, same problem space there. Which is why Nvidia emulated ASTC with GLES
<alyssa>
HdkR: Yeah, the linked bug report is really, really weird.
<alyssa>
"Too much memory used in a desktop GL x86 game played on an Arm/Mali system, where Mali is a GLES part and box86 is emulating things"
<alyssa>
That it works at all (even with decompression in Mesa) is frankly a miracle
<HdkR>
alyssa: A miracle? Same situation would work with FEX-Emu as well :)
<alyssa>
HdkR: Also that icecream95 and I have cobbled together enough desktop GL support for it
<HdkR>
It's good stuff!
<HdkR>
3.3 plus a bunch of extensions is enough for a /lot/ of games
<alyssa>
we're geometry shaders short of 3.3
<HdkR>
Which most games know to avoid :P
<alyssa>
HdkR: So.. all in all unsure what to do with that report
<alyssa>
Support for BC* formats on Mali is up to the SoC integrator, Arm supplies all the decoders but whether they're hooked up depends on the SoC
<alyssa>
Panfrost is doing all it can natively
<HdkR>
I guess with Unity decompressing the textures itself, it wouldn't really change much about the memory usage problem they have. Will still need a shadow copy lying around
<alyssa>
mesa/st is decompressing, and that user is not hitting OOM
<HdkR>
ah, I thought that decompressing was from the game
jfalempe has joined #dri-devel
<javierm>
hmm, it seems I'm not getting dri-devel emails. Only the ones where I'm Cc'ed
<javierm>
does anyone else have the same issue or is it only me ?
<jfalempe>
javierm, I was going to ask you the same.
<emersion>
maybe because our cert expired recently
<javierm>
jfalempe: it seems there's an issue since I also don't see them in the archives. Even the ones where I got because I was on Cc
<emersion>
does postfix need a restart daniels?
<javierm>
emersion: I thought that was a quiet day due to the holiday in the US but it seems that wasn't the case :)
saurabhg has quit [Ping timeout: 480 seconds]
heat_ has joined #dri-devel
camus has quit [Remote host closed the connection]
camus has joined #dri-devel
Net147_ has joined #dri-devel
Net147 has quit [Ping timeout: 480 seconds]
<javierm>
tzimmermann: unsure if you got my emails due to the issues with the ML, but just in case this is what I suggested as an alternative approach: https://paste.centos.org/view/raw/c5cd1c85
iive has joined #dri-devel
Haaninjo has joined #dri-devel
<javierm>
and that makes much more sense indeed than making drm_copy_field() paranoid about the callers
<tzimmermann>
javierm, my concern was that i don't see the point in testing this at all
Company has joined #dri-devel
<javierm>
tzimmermann: but what happens if a driver author forgets to set those fields? We shouldn't let the kernel oops or crash IMO
<tzimmermann>
is that related to ioctl security?
<javierm>
tzimmermann: no, just not taking the kernel down due to a driver bug
<tzimmermann>
i'm thinking that there are plenty of parameters and fields that are assumed to be initialized correctly. i don't see the difference to this one
<javierm>
in this case the reporter I think had some additional patches in his kernel package, but I think it's still fair that a registered DRM device shouldn't lead to a kernel crash just from running modetest
<tzimmermann>
and the kernel would panic anyway, that's warning enough to driver writers
<javierm>
tzimmermann: not always a panic, sometimes it's an oops but the kernel is in an inconsistent state
<tzimmermann>
is it triggerable from userspace?
<javierm>
tzimmermann: yes
<tzimmermann>
like an easy crash?
<javierm>
tzimmermann: just a DRM_IOCTL_VERSION ioctl on the device
<javierm>
the kernel will happily try to copy a NULL pointer
<tzimmermann>
well, that's different then
<javierm>
tzimmermann: I explained in the commit message :)
<javierm>
the "Unable to handle kernel access to user memory outside uaccess routines at virtual address 0000000000000000" is because the virtual address is NULL
<javierm>
tzimmermann: but you are right that's a driver bug. I just think that we should make the core more robust
<javierm>
so not even allowing the device to be registered, as the alternative patch does, makes more sense
* tzimmermann
needs to start reading commit messages
<javierm>
:D
<tzimmermann>
ok. let's see. in the ioctl code, i'd do a drm_warn_once() and treat them like empty "" strings
<tzimmermann>
so there's a warning in the log and userspace gets something useful
<tzimmermann>
well, 'useful'
<javierm>
tzimmermann: I think that we should just make the ioctl fail instead of adding empty strings
<vsyrjala>
does anyone actually care about those strings anyway? might as well allow them to be null imo
<javierm>
if the driver didn't provide that data, it should be an error so user-space knows whether it can rely on it
<javierm>
vsyrjala: I don't know tbh. I don't even know why the date for example is relevant
<tzimmermann>
javierm, it's just the version info. do we really want to escalate the problem to userspace? and returning an empty string is not really incorrect
<tzimmermann>
i'd really like to hear a maintainer's take on this.
sergi has quit [Ping timeout: 480 seconds]
<javierm>
tzimmermann: hmm, that's a good point
<tzimmermann>
i'll send this as reply to the ML. maybe we can ask one of the maintainers about this
<pinchartl>
I have been officially forbidden to work on dw-hdmi by Synopsys; they required the SoC vendor who provided me documentation to send all patches to Synopsys for legal review before being published, so I dropped the ball
<karolherbst>
alyssa: btw.. after that MR lands, the rusticl MR will only contain rusticl changes (unless I messed up) so.. might be easier to review/ack/whatever? dunno and not even sure if we want to rush it for 22.2
<karolherbst>
but that means we could be CL conformant in 22.2
<pinchartl>
alyssa: some companies seem to never learn :-)
<alyssa>
karolherbst: Yeah, I saw that, planning to get to it.
<karolherbst>
cool
<alyssa>
Was OoO last week, got a big pile of todo's to catch up on
<karolherbst>
but I think we can merge it for 22.2, make sure it's disabled by default and make it conformant on the stable branch... but that also requires iris bumping sampler/image limits
<karolherbst>
alyssa: sure... the release schedule says we still have ~9 days until rc1 is out
<karolherbst>
I also assume that's when the branchpoint is made?
<karolherbst>
yeah.. looks like it
<karolherbst>
so yeah.. we got time :)
<alyssa>
karolherbst: IMHO, I'd rather push CL out until 22.3
<karolherbst>
yeah.... probably
<alyssa>
It will be too stressful (on me, at least) to rush rusticl in, it will end in a sloppy job, and "bare minimum conformant impl" is a big shrug to have in the release
<karolherbst>
not that it changes much. I doubt anybody would enable it for 22.2 (or so I hope?)
<alyssa>
right
<karolherbst>
there are quite some API validation bits we still need to fix, but meh...
<karolherbst>
rather fix real bugs
<alyssa>
Another 3 months of clean up, optimizations, etc -- and have something actually polished landed in August with plenty of time in main before the 22.3 bp, no deadline stress, etc
<alyssa>
I like that plan a lot better, don't you?
<karolherbst>
I think I'd rather have it in asap so people can do MRs to fix things
<karolherbst>
if that's after the 22.2 branchpoint, so be it
<karolherbst>
but we should still merge it soonish
<alyssa>
sure ... there's a strong case for merging it the day after the bp then ;P
<karolherbst>
alyssa: the only annoying thing is, we should only do the conformance thing from a released tag
<karolherbst>
so we'd have to wait until 22.3 which I think is fine
<karolherbst>
anyhow, wasn't aware of the "must be from a release tag" req, so I wasn't really rushing it
kts has joined #dri-devel
apinheiro has quit [Ping timeout: 480 seconds]
pcercuei has quit [Remote host closed the connection]
pcercuei has joined #dri-devel
tobiasjakobi has joined #dri-devel
tobiasjakobi has quit []
MajorBiscuit has quit [Ping timeout: 480 seconds]
frieder has quit [Remote host closed the connection]
chslt^ has joined #dri-devel
<zmike>
put up the MRs and see what happens ?
rgallaispou has quit [Read error: Connection reset by peer]
<alyssa>
zmike: you get a 4-day weekend for Canada Day? nice
<HdkR>
kisak: Throw a 64 core CPU at the problem and they might not even realize
<HdkR>
"I guess this game just isn't optimized"
<kisak>
does lavapipe multithread enough to take advantage? I thought that was more of a SWR thing
tzimmermann has quit [Quit: Leaving]
<HdkR>
It certainly tried, not sure how far it gets. Haven't tried personally
<zmike>
lavapipe is just llvmpipe
<zmike>
it uses all available cpus
<alyssa>
zmike: marketing told us to rebrand llvmpipe to lavapipe because lava is cool and llvm is not, right?
<daniels>
notoriously cool
<zmike>
more like because 'vallium' wasn't a strategy for market dominance
<zmike>
swiftpipe would've been more optimal though
<alyssa>
zmike: i was just telling jekstrand, that since the name 'vallium' is available for the VK gallium...
<kisak>
my memory is trying to tell me that part of the argument for swr existing was that it scaled better to high core counts compared to llvmpipe. I must have misremembered something.
<bbrezillon>
pinchartl: :-(
<alyssa>
kisak: swr parallelized vertex shaders, they both(?) parallelized fragment shaders
<zmike>
llvmpipe has parallel vertex stages now
<alyssa>
so for high vert, low pixel count workloads (for scientific viz) swr won
<zmike>
though this is very recent
<alyssa>
zmike: and swr doesn't exist now C:
<zmike>
it exists in our hearts.
<JoniSt>
Who knows, maybe we will have so many CPU cores in the future that discrete GPUs become obsolete and we all just run llvmpipe instead
<alyssa>
we tried that, it didn't work
<alyssa>
power efficiency disaster
<karolherbst>
also, it's a programming model thing
<karolherbst>
(which CPUs aren't optimized for)
<JoniSt>
Yeah... I just wish GPUs were a little bit more like CPUs, but they're slowly getting there anyway :)
<karolherbst>
if they would, they would be also as slow as CPUs
<JoniSt>
Well, I'm more interested in it from a memory model perspective - resizable BAR and persistent mapping already gets us quite far in that respect
<karolherbst>
those are not really GPU features though
<karolherbst>
also not quite sure why resizeable BARs are so relevant here, because that's mostly just affecting how you are able to upload data to the GPU
<karolherbst>
or well.. download them
<JoniSt>
Well, it does depend on the GPU quite a bit... My Turing GPU can't do it, my Radeon Pro can
<karolherbst>
turing GPUs can do it
<karolherbst>
nvidia just didn't implement it
<karolherbst>
also.. it's just a small perf optimization
<karolherbst>
and doesn't really matter for gaming at all
<JoniSt>
Of course. I'd just like to be able to persistently map all my VRAM for some things :P
<bnieuwenhuizen>
karolherbst: the consequence is often that without resizable BAr games often put buffers in host memory
<bnieuwenhuizen>
which has its consequences for GPU perf
<karolherbst>
yeah.. to not having to sync between host and GPU
<karolherbst>
but they could do it and then it wouldn't hurt perf
<alyssa>
JoniSt: have you considered an integrated GPU
<JoniSt>
And then simply pass pointers to my shaders / kernels, without any binding stuff etc, it would make programming GPUs so much easier
<karolherbst>
I agree that it's easier to use the GPU with resizable BARs
<alyssa>
i hear Apple has a great one
<karolherbst>
but...
<bnieuwenhuizen>
yeah, fastest path is often DMAing it (especially for eGPUs) but nobody does that ...
<karolherbst>
yeah....
<karolherbst>
JoniSt: that's unrelated to resizable BAR
<bnieuwenhuizen>
(well the driver could in d3d9/11 but d3d12/vulkan is messy)
bmodem has joined #dri-devel
<JoniSt>
alyssa: Well, the problem is not that there isn't hardware that can do what I need it to (my Radeon card exposes a 32GB host-visible, host coherent VRAM region), the problem is that it's not widespread enough yet so I can rely on it
<karolherbst>
also
<alyssa>
have you considered only writing software for new macs
<alyssa>
but only on linux
<karolherbst>
"simply pass pointers" is more of a mess if you'd know how that all works :P
<JoniSt>
:D
<alyssa>
and also you have to write the drivers yourself
<alyssa>
here want commit rights? congrats youre the new maintainer
<alyssa>
:p
<karolherbst>
and if you don't know, you are better off not doing it at all
* JoniSt
runs far far away
<karolherbst>
well.. at least how things are implemented today
<karolherbst>
userptrs are probably the better thing to use
<JoniSt>
karolherbst: So you're saying that it's a bad idea to build an app that relies on VK_KHR_buffer_device_address?
<karolherbst>
ahh wait.. you mean GPU pointers
<JoniSt>
I actually wanted to try doing vertex and uniform fetch with buffer addresses...
<JoniSt>
(So I can ignore most of Vulkan's descriptor set stuff)
<karolherbst>
yeah.. for VK it should reduce some GPU overhead
<JoniSt>
Phew :P
<bnieuwenhuizen>
eh, shared pointers between GPU and CPU can also be ok. The problem is having to make malloc work instead of explicit buffer allocation, and doing implicit placement
<karolherbst>
although not quite sure how you can actually make use of it from a vulkan perspective
<alyssa>
isn't VK_KHR_buffer_device_address just fancy bindless?
<karolherbst>
bnieuwenhuizen: yeah...
<karolherbst>
alyssa: afaik yes
<JoniSt>
alyssa: Yeah, but with more footguns attached.
<bnieuwenhuizen>
alyssa: it is basically GPU pointers
<alyssa>
tasty
<bnieuwenhuizen>
the diff with a "bindless" is that you lose all the concepts of buffers on the GPU side
<bnieuwenhuizen>
you only have that on the CPU API, and just ask "what is the pointer to the start of this buffer"
<bnieuwenhuizen>
and then mess around
lynxeye has quit [Quit: Leaving.]
<karolherbst>
just the thing I need for rusticl on top of zink :P
<karolherbst>
ehhh wait
<karolherbst>
ahh yeah
<alyssa>
is rusticl/zink any more feasible than clvk?
<JoniSt>
Yeah... My plan was to do a multi-draw with a uniform that only contains GPU pointers (and no vertex buffers), then fetch per-model data from there and get vertex buffer pointers etc. Basically make my shaders interpret my CPU-side data structures
<alyssa>
I would think no, but/
<alyssa>
(also, is there a use case for rusticl/zink? I realize building a GL driver is a lot but presumably a compute-only Gallium driver would be pretty thin)
<karolherbst>
alyssa: yeah
<karolherbst>
we can simply add vulkan extensions we need :D
<bnieuwenhuizen>
VK_MESA_svm when?
<karolherbst>
oh wow...
<alyssa>
karolherbst: is there a use case for that?
<karolherbst>
for SVM?
<alyssa>
for rusticl on zink on a mesa VK driver
<karolherbst>
not having to rely on gallium?
<alyssa>
blinl
<alyssa>
blink
<karolherbst>
I mean.. not haivng to rely on a gallium driver :P
<karolherbst>
but yeah...
<karolherbst>
dunno, I think it makes more sense once we get the first vulkan only drivers, which rely on zink for GL
<alyssa>
sure
<alyssa>
I guess PowerVR is the likely candidate there
<alyssa>
Or maybe Nouveau depending how things go ther e;-p
<karolherbst>
yeah...
<karolherbst>
I don't think much is missing from vulkan.. just some things are really really inconvenient
bmodem has quit [Ping timeout: 480 seconds]
<karolherbst>
no idea how to deal with all that userptr nonsense
<alyssa>
clspv has a .... very large list of limitations, idk
<karolherbst>
yep
<JoniSt>
Oh right, the whole NV kernel driver thing... How would you figure out the machine instructions for NV? Do you reverse-engineer it, like it was done for panfrost?
<karolherbst>
but I plan to expose a vk_mesa_CL_shaders ext :P
<karolherbst>
or CL_spirv
<alyssa>
karolherbst: at that point why not vk_mesa_serialized_nir and it has to be from exact same mesa build (up to commit hash) ....
<alyssa>
skip the roundtrip ...
<karolherbst>
please no
<alyssa>
valid
<karolherbst>
we can share the entire nir cl pipeline bits, so doesn't matter if that's done inside rusticl or somewhere else... but yeah...
Duke`` has quit [Ping timeout: 480 seconds]
kts has quit [Quit: Konversation terminated!]
Duke`` has joined #dri-devel
Terman_ has quit [Remote host closed the connection]
dylanusdt[m] has quit [autokilled: Please do not spam on IRC. Email support@oftc.net with questions. (2022-07-04 17:54:24)]
rkanwal has quit [Ping timeout: 480 seconds]
<karolherbst>
JoniSt: yes, but nvidia started to publish "documentation"
<JoniSt>
Hmm... Maybe they will just make the ISA public at some point... At least we can hope, I guess
<JoniSt>
But their PTX probably makes figuring the ISA out quite a bit easier since it likely exposes most of the HW's features directly
<alyssa>
So does Vulkan ....
<JoniSt>
True
jewins has joined #dri-devel
gouchi has joined #dri-devel
mclasen has quit [Ping timeout: 480 seconds]
jewins has quit [Quit: jewins]
jewins has joined #dri-devel
rkanwal has joined #dri-devel
<pixelcluster>
bnieuwenhuizen: someone was already experimenting with SVM
<bnieuwenhuizen>
pixelcluster: if this isn't the malloc stuff but just shared pointers and friends then it should be fairly easy to hook up on AMD as well I think
<pixelcluster>
wdym malloc stuff?
<pixelcluster>
it's mmap with MAP_ANONYMOUS | MAP_PRIVATE and importing that pointer as a new bo
<bnieuwenhuizen>
pixelcluster: automatically being able to use any malloced memory from the GPU
<pixelcluster>
ah, yea no it isn't that
<pixelcluster>
should indeed be fairly easy, almost a copy-paste I think
KunalAgarwal[m]1 has joined #dri-devel
jewins has quit [Ping timeout: 480 seconds]
thellstrom has quit [Ping timeout: 480 seconds]
jewins has joined #dri-devel
Haaninjo has quit [Quit: Ex-Chat]
jewins has quit [Ping timeout: 480 seconds]
jewins has joined #dri-devel
apinheiro has joined #dri-devel
chslt^ has quit [Ping timeout: 480 seconds]
gouchi has quit [Quit: Quitte]
chslt^ has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
rsalvaterra has quit []
rsalvaterra has joined #dri-devel
rkanwal has quit [Quit: rkanwal]
mvlad has quit [Remote host closed the connection]
MajorBiscuit has joined #dri-devel
apinheiro has quit [Quit: Leaving]
<karolherbst>
JoniSt, alyssa: it's actually easier with PTX as you have ptxas which you can just disassemble with their nvdisasm, there is no equivalent with spir-v afaik
<JoniSt>
Yeah, that's why I suggested it - PTX should map quite nicely to the actual underlying architecture. Didn't know of nvdisasm yet... But apparently NV does have at least a list of instructions for each arch, although I doubt it's complete
jewins has quit [Ping timeout: 480 seconds]
alyssa has left #dri-devel [#dri-devel]
wvanhauwaert has quit [Ping timeout: 480 seconds]
srslypascal is now known as Guest4203
srslypascal has joined #dri-devel
JohnnyonFlame has quit [Ping timeout: 480 seconds]
MajorBiscuit has quit [Ping timeout: 480 seconds]
icecream95 has joined #dri-devel
Guest4203 has quit [Ping timeout: 480 seconds]
CME has quit []
CME has joined #dri-devel
karolherbst has quit [Remote host closed the connection]
karolherbst has joined #dri-devel
zehortigoza has quit [Read error: Connection reset by peer]