ChanServ changed the topic of #dri-devel to: <ajax> nothing involved with X should ever be unable to find a bar
kzd has quit [Quit: kzd]
<airlied> nope that is the marge-bot job
zzoon_2 has joined #dri-devel
oneforall2 has joined #dri-devel
vliaskov has quit [Remote host closed the connection]
alyssa has joined #dri-devel
alyssa has quit []
K0bin[m] has joined #dri-devel
nick1343[m] has joined #dri-devel
co1umbarius has joined #dri-devel
columbarius has quit [Ping timeout: 480 seconds]
gnustomp[m] has joined #dri-devel
xerpi[m] has joined #dri-devel
yyds has joined #dri-devel
kusma has joined #dri-devel
<airlied> twice now the full marge bot run has gone 3 minus over the hour
guru_ has joined #dri-devel
Company has quit [Quit: Leaving]
ram15[m] has joined #dri-devel
shoffmeister[m] has joined #dri-devel
oneforall2 has quit [Ping timeout: 480 seconds]
kallisti5[m] has joined #dri-devel
heat has quit [Ping timeout: 480 seconds]
oneforall2 has joined #dri-devel
<Wallbraker> Are you allowed to return image usage from swapchains for an extension you haven't enabled?
<Wallbraker> The ATTACHMENT_FEEDBACK_LOOP_BIT_EXT bit is set, but I haven't enabled the extension.
guru_ has quit [Ping timeout: 480 seconds]
KunalAgarwal[m][m] has joined #dri-devel
<zmike> I don't think there's any restriction on what drivers can return, but you can only use what you enable
<Wallbraker> Oki thanks.
oneforall2 has quit [Ping timeout: 480 seconds]
Anson[m] has joined #dri-devel
exp80[m] has joined #dri-devel
oneforall2 has joined #dri-devel
DUOLabs[m] has joined #dri-devel
Sofi[m] has joined #dri-devel
Ella[m] has joined #dri-devel
guru_ has joined #dri-devel
oneforall2 has quit [Ping timeout: 480 seconds]
tomeu has joined #dri-devel
crabbedhaloablut has joined #dri-devel
<dcbaker> bnieuwenhuizen: I’m only working half days right now for personal reasons, it the CI has been so flaky of late that I can’t reliably pull patches. I haven’t tried since Monday to be fair, so I need to try again tomorrow
flynnjiang has joined #dri-devel
DavidHeidelberg[m] has joined #dri-devel
flynnjiang has quit []
moben[m] has joined #dri-devel
guru_ has quit [Read error: Connection reset by peer]
vidal72[m] has joined #dri-devel
oneforall2 has joined #dri-devel
<gfxstrand> dcbaker: Hey! Since you're here, how are things in the world of Meson and crates/proc macros?
<Lynne> airlied: patchset pushed in ffmpeg, got tired of waiting for marge to become unstuck
<airlied> Lynne: I'll keep throwing at the wall until it lands
oneforall2 has quit [Ping timeout: 480 seconds]
<dcbaker> gfxstrand: I’ve been kinda disconnected for a bit. I’m actually on bereavement right now, so I’m trying to do release stuff because it’s mostly mindless busywork
<gfxstrand> dcbaker: That's fair.
<gfxstrand> I can ask Xavier
<gfxstrand> I think we're proabably a month or two out from merging NAK anyway.
<gfxstrand> Hoping for an XDC merge or so, maybe?
TMM has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM has joined #dri-devel
<DemiMarie> gfxstrand: is there a document explaining what is so awesome about the future described in <https://lore.kernel.org/all/CAOFGe957uYdTFccQp36QRJRDkWQZDCB0ztMNDH0=SSf-RTgzLw@mail.gmail.com/>?
siddh has joined #dri-devel
yuq825 has joined #dri-devel
<DemiMarie> You’ve convinced me not to fear it, but it would be nice if there was something could show others.
aravind has joined #dri-devel
* DemiMarie wonders if she should start a document summarising what she has learned
bylaws has joined #dri-devel
* DemiMarie hopes to someday see the blog post mentioned in that message
<gfxstrand> DemiMarie: No. Writing that blog post has been on my ToDo for like a year now.
oneforall2 has joined #dri-devel
<DemiMarie> gfxstrand: fair, and you (obviously!) don’t owe me anything, so don’t feel bad about it.
<gfxstrand> You're fine.
<gfxstrand> I can feel bad about my blog backlog all on my own. :-P
EricCurtin[m] has joined #dri-devel
pankart[m] has joined #dri-devel
Newbyte has joined #dri-devel
doras has joined #dri-devel
Guest555 has quit []
Rayyan has joined #dri-devel
JohnnyonFlame has quit [Read error: Connection reset by peer]
sergi1 has joined #dri-devel
fab has joined #dri-devel
samueldr has joined #dri-devel
YaLTeR[m] has joined #dri-devel
swick[m] has joined #dri-devel
pushqrdx[m] has joined #dri-devel
bgs has joined #dri-devel
i-garrison has quit []
i-garrison has joined #dri-devel
masush5[m] has joined #dri-devel
sima has joined #dri-devel
flynnjiang has joined #dri-devel
Mis012[m]1 has joined #dri-devel
jenatali has joined #dri-devel
bmodem has joined #dri-devel
ohmacs^ has quit [Ping timeout: 480 seconds]
Hazematman has joined #dri-devel
junaid has joined #dri-devel
alpalcone has joined #dri-devel
lplc has quit [Ping timeout: 480 seconds]
flynnjiang has quit [Ping timeout: 480 seconds]
flynnjiang has joined #dri-devel
tjaalton_ has quit []
tjaalton has joined #dri-devel
thellstrom has joined #dri-devel
youmukonpaku133 has quit [Read error: Connection reset by peer]
youmukonpaku133 has joined #dri-devel
thellstrom1 has joined #dri-devel
alanc has quit [Remote host closed the connection]
alanc has joined #dri-devel
thellstrom has quit [Ping timeout: 480 seconds]
fkassabri[m] has joined #dri-devel
crabbedhaloablut has quit []
zzoon_2 has quit [Ping timeout: 480 seconds]
crabbedhaloablut has joined #dri-devel
mripard has joined #dri-devel
sghuge has quit [Remote host closed the connection]
sghuge has joined #dri-devel
mbrost_ has quit [Ping timeout: 480 seconds]
pcercuei has joined #dri-devel
Duke`` has joined #dri-devel
onox[m] has joined #dri-devel
vliaskov has joined #dri-devel
pcercuei has quit [Quit: brb]
ofirbitt[m] has joined #dri-devel
viciouss[m] has joined #dri-devel
pcercuei has joined #dri-devel
ayaka_ has joined #dri-devel
<ayaka_> pq, about that HDR metadata, I don't cover the case that GPU render
sukrutb has joined #dri-devel
<ayaka_> because GPU can't render it. Nowadays, not many hardware uses like plain pixel formats, the GPU can't read its tile or compression pixel format
sassefa has joined #dri-devel
apinheiro has joined #dri-devel
<pq> ayaka_, sorry, what GPU render?
<pq> ayaka_, even if all sources go directly to KMS planes, userspace still needs inspect all metadata, decide what metadata send to the video sink, and progam the color pipelines of each KMS plane and CRTC accordingly. No GPU render involved.
<pq> and that color pipeline programming cannot be left for the KMS driver to guess
<ayaka_> pq, yes, that is another problem, how to negotiate the format of that metadata with KMS
<pq> no, that's not what I mean
<pq> even if the KMS driver fully undertands all metadata of everything, it *cannot* come up with the proper color pipeline universally
<pq> that must be done by userspace, because it involves policy and end user preferences
<ayaka_> I think I know your concern, even in the embedded platform with only one CRTC
<pq> therefore userspace needs access to all metadata of everything
<ayaka_> you need to know the target screen colorspace and you may need to change it(likes from yuv to rgb, 8bits to 10bits)
<pq> no, I'm talking in more general sense.
sassefa has quit [Remote host closed the connection]
<pq> There is no single correct way to map from one set of metadata to another. Which way to use and what additional adjustments to do is policy and end user preferences.
<ayaka_> likes, we have two CRTCs, one could accept HDR while the other one can't?
lynxeye has joined #dri-devel
<pq> Hard-coding policy and preferences in KMS drivers could work for downstream that is specific to a particlar product, but it's not good for upstream which needs to work for many different use cases.
<pq> ayaka_, no. I mean that converting from one HDR or SDR standard to another HDR or SDR has no single right way to do it.
<ayaka_> pq, well that could be a case, but you need to know you can't convert those dynamic HDR to SDR in those kms devices usually
<ayaka_> we usually do that in the separately hardware
<ayaka_> about the SDR to HDR, that is not case this RFC metadata covers
<pq> ayaka_, the community is already convinced that KMS shall not be programmed with colorspace etc. definitions letting drivers come up with the conversion in between, but programming explicit mathematical operations in the KMS color pipeline, decided by userspace.
<ayaka_> pq, then how do you present a HDR10+ or DV image
<pq> by extending KMS properties to include HDR dynamic metadata to be sent to the sink, and userspace inspecting that dynamic metadata frame by frame and adjusting KMS color pipeline accordingly when necessary.
<pq> What you tell the video sink as the metadata is completely separate from how you program the KMS color pipelines.
<ayaka_> how? are you going to create many properties for colorspace data?
<ayaka_> kms properties
<pq> because you cannot derive a single correct pipeline from two sets of metadata alone, there are always more variables involved, and innovation to be had
<pq> we only need KMS connector properties for all metadata to be sent to the sink.
<ayaka_> that would be the plane properties
<pq> Independently, we need KMS plane and CRTC properties to program the color pipeline mathematical operations.
<pq> no
<ayaka_> you already know we have HDR plane(video) and SDR planes(UI)
<pq> you never set a colorspace on a KMS plane, no
<pq> I don't, actually
<ayaka_> yes, I am always wondering why we make colorspace(bt709, bt601) in the connector
<pq> your per-plane color pipeline capabilities can be different, sure, but they are not described in terms of HDR or SDR. They are described in terms of what mathematical operations that can do on pixel values.
<ayaka_> I should say the vendor won't like this idea
<pq> We've had long threads on dri-devel@ trying to figure out how to salvage the "Colorspace" connector property in general, and it doesn't look good.
egalli has joined #dri-devel
<ayaka_> I will talk about that plane colorspace problem later, let me explain why generic properties for color won't work
<pq> The latest concensus is to make it work with just "default" and "BT.2020", getting rid of the RGB/YCC variants, and essentially leaving all other options more or less undefined.
<ayaka_> because dolby vision won't allow that
<ayaka_> what about display p3
<pq> I'm not sure Dolby Vision can ever be in upstream Linux.
<pq> Linux requires open specs of all metdaata, and I don't think Dolby Vision will provide that?
<ayaka_> I know, it may because we can't offer they interface to do so
<ayaka_> no, dolby would not, that is why I would send a data blob and let the driver writes to registers directly
<ayaka_> ever I don't know what the meanings of data
<pq> yeah, that cannot fly upstream
<pq> you need to keep that interface downstream
<pq> if there is a solid definition of what display P3 means in "Colorspace" property, then that should be salvagable.
<ayaka_> besides, this metadata is not just for HDR but also the other metadata. We could discuss the HDR part which you care more
<pq> right, I'm not sure it is good to lump all kinds of metadata together
<ayaka_> I would talk about the requirement for those static colorspaces later
<ayaka_> I didn't, in the previous email, I said we could define a common header for such metadata
<pq> for connector "Colorspace", we only need entries for those things that can be communicated in HDMI and DisplayPort signals. It's not about conversion at all.
<ayaka_> then driver in the pipeline know which metadata it need to process
<ayaka_> pq, you must notice the HDMI phy could convert colorspace
<pq> convert what exactly?
<pq> It's in the source side still, not sink?
<ayaka_> both side, it is connector level, likes from bt709 mpeg to bt709 full range
donaldrobson has joined #dri-devel
<pq> If it's source side, it's just an implementation detail of the KMS color pipeline. If it's sink side, it doesn't even concern us. The only things that concern us is what goes over the cable: the metadata and the pixel values.
<ayaka_> I forgot the name of synopsis hdmi phy
<pq> We assume that when we send the metadata to the sink, the sink adheres to it. It makes no difference how it does that, as long as it does.
<ayaka_> let me explain when we use this function in hdmi
<pq> I don't see how it's relevant.
<pq> it's just a driver internal detail how it programs the source side hardware components.
pixelcluster has quit [Ping timeout: 480 seconds]
<ayaka_> I don't know whether it is relevant or not
<ayaka_> let me explain why plane has the colorspace first
<pq> KMS UAPI describes the KMS color pipelines, and the driver's job is to map that to hardware any way it likes.
<pq> KMS UAPI also lets userspace set the metadata that is sent to the video sink, and it is userspace's responsiblity to program the color pipelines to match what the metadata says.
<ayaka_> let us regard the CRTC(the part before phy) as a compositor
<pq> What happens at the sink end is irrelevant as long as the sink handles the pixel values according to the metadata. Otherwise, the sink is faulty and not interesting anyway.
<ayaka_> let's suppose the tv requests RGB pixel format
<ayaka_> while the data for a plane is yuv, it is important to know the colorspace and range of this yuv format
<ayaka_> or the compositor can't output the correct image
swalker__ has joined #dri-devel
<pq> if by compositor you meant a Wayland compositor, I would agree.
<ayaka_> no, the hardware compositor
<pq> CRTC not really
fab has quit [Quit: fab]
<ayaka_> in embedded platforms, GPU, CRTC and PHY are three different hardware
fab has joined #dri-devel
fab has quit []
<pq> Sure, but CRTC and PHY are not exposed to userspace as separate things.
fab has joined #dri-devel
<ayaka_> they are, drm_crtc and drm connector
<pq> Userspace only knows about the KMS abstractions called plane, (confusingly) CRTC, and connector. These do not match hardware CRTC or hardware PHY 1:1.
<ayaka_> so in your case, we should program the connector properties that set the colorspace of each plane?
<pq> the KMS UAPI uses an abstract model that the KMS driver can map to actual hardware any way it wants.
<pq> no
<ayaka_> there are three colorspace you need to care(actually four), planes, compositor hardware and phy in and phy out
<pq> connector properties are only the metadata being sent to the sink
<ayaka_> so the colorspace could be the plane's properties?
<pq> no
<ayaka_> then how to render in this case
<pq> userspace programs the KMS color pipeline mathematical operations so that whatever is in the framebuffers of each KMS plane, the end result after color pipelines and composition matches the metadata being sent to the sink.
nekit[m] has joined #dri-devel
<pq> at no point you tell KMS about the colorspace of any framebuffer
<pq> you only program the operations
<ayaka_> are you going to program EOCF function as the property
<pq> yes
swalker__ has quit [Ping timeout: 480 seconds]
<ayaka_> then need to have a function 1 function 2..function N
<pq> but not as EOCF per se, but as a mathematical curve, which could be one of enumerated ones for example
<pq> yes
<pq> AMD's private KMS UAPI proposal already has those
<pq> and the generic KMS color UAPI plans have them too
<ayaka_> any example of that?
<pq> I'm trying to find the latest generic proposal atm.
<ayaka_> besides my proposal didn't prevent the userspace to read it. I would let the vendor just don't take the DRM property part
<pq> emersion, do you have a link at hand to the latest KMS new color pipeline UAPI draft?
<ayaka_> because it is just a container for different kind of metadata not just HDR
<pq> ayaka_, I specifically replied to this sentence: "I don't want the userspace access it at all."
<emersion> hwentlan_: did you have time to experiment a bit with an impl for the RFC?
<pq> and the email made the point that HDR metadata would a use case here
<emersion> is your WIP code pushed somewhere?
<pq> emersion, thanks!
<pq> ayaka_, have a look at emersion's link above.
<pq> ayaka_, the most important concept there is the "prescriptive approach".
<ayaka_> yes, the proposal covers the lut case also for those 1d is not that bad
T_UNIX has joined #dri-devel
<pq> I mean the explanation why do not set framebuffer colorspace in KMS at all.
cmichael has joined #dri-devel
fab has quit [Ping timeout: 480 seconds]
<pq> We do need to set the metadata to be sent to the sink, but that's completely independent in the UAPI compared to what happens to pixels.
<pq> ...in the source side
<pq> Now, if you have a metadata pass-through from video decoder to KMS, and no way for userspace to read and understand that metadata, and that metadata causes global effects on the final image in the sink, then this whole model simply doesn't work anymore.
<ayaka_> pq, from my first email where I quote sima sentence
<ayaka_> the HDR is just for a excuse that why there is a common data need container, what I need to delivery is the vendor pixel format compression options
<pq> Ok. So it was just a mistake to take HDR metadata as an example.
flynnjiang has quit [Remote host closed the connection]
<ayaka_> no, dolby vision is still the case
flynnjiang has joined #dri-devel
<pq> and we concluded that dolby vision cannot happen
<ayaka_> yes, it does, many android devices have supported dolby vision not just synaptics
<pq> with vendor downstream BSPs, I presume
<ayaka_> anyway, just regard it as vendor data is enough
aknautiy has left #dri-devel [#dri-devel]
<pq> data is fine, as long as it is fully documented
<ayaka_> for the GPU likes AMD which only supports one plane
<ayaka_> this property is fine. But for the case likes intel, we would have a CSC pipeline for each YUV plane
<pq> If you need additional planes for wacky metadata that describes how decode a framebuffer into pixel values, that's totally fine. It only affects how a framebuffer is read in order to produce input to the KMS color pipelines.
<ayaka_> all right, may be those RGB planes as well
zzoon_2 has joined #dri-devel
<ayaka_> the wayland compositor need to know the output colorspace and decide the proper EOCF or OECF for every planes
<ayaka_> it is little complex for a demo app
<pq> yes, a Wayland compositor does need to know all colorspaces
<pq> However, userspace must know the colorspace of the pixel values entering the KMS color pipeline, so that userspace can program the KMS color pipeline correctly. The with GPU composition, userspace must know what colorspace texturing from the buffer will produce. This means that the "hidden" metadata cannot change in ways that would change the resulting colorspace.
<ayaka_> GPU is the other case
<pq> *The same with GPU composition
<ayaka_> if we don't count the AFBC, GPU can't render the vendor pixel format usually, which most of HDR data would use
<pq> that's another problem, but more about the (Wayland) compositor design, so ok
<ayaka_> anyway, I would just quote your "If you need additional planes for wacky metadata that describes how decode a framebuffer into pixel values, that's totally fine. It only affects how a framebuffer is read in order to produce input to the KMS color pipelines." in my future email. Plane properties won't break the GKI(Android generic kernel)
<pq> sure! but include my "However" as well.
<pq> demo apps also are not enough to prove new UAPI, so you are going to need a proper userspace anyway
<ayaka_> "However" is that GPU part?
<daniels> the point of GKI was to get people actually working upstream to co-operate on generic userspace. picking random properties you can stuff unknown magic blobs into isn't doing that, it's just a really bad version of ioctl()
* sima concurs with daniels
<pq> the whole highlighted one line
<sima> we pretty much stopped taking random properties in upstream drivers because of this
<ayaka_> daniels, if you are talking about v4l2 that is why my pains coming from. I should blame the upstream design a bad interfaces that vendor hard to fit their devices into it
<sima> also the reason why ADF got shot down, it's commit function was just a huge blob
<pq> lunch, bbl
<ayaka_> as I said, you can't request too much for the vendor, if there is not a GKI, my boss would tell me to finish the work as soon as possible
<sima> we have a few decades of tradition of "asking for too much from vendors" here in upstream gpu :-)
<ayaka_> pq, that highlight part is fine. But in practice, we would say the colorspace is DV not telling which variant of it
flynnjiang has quit [Ping timeout: 480 seconds]
<daniels> ayaka_: shrug, it is how it is
<daniels> ayaka_: think of it this way - you're coming in and telling upstream that you're ignoring (or haven't even read) any of the design around colour management, and you're looking for a way to completely subvert it and do something against that design, so you don't have to care about anything upstream does
<daniels> why would any sane upstream accept that patch? there's zero motivation to do so
<daniels> if you're at least involved in the design discussions and implementation, then sure, you get a voice
<daniels> but that hasn't happened up until now, so ... shrug
<daniels> if this makes GKI hard for the vendors, then that's a problem for the vendors to solve, and they can solve that by actually participating
<ayaka_> as I always said, that didn't stop nvidia do what they want to. What I am doing is making people not make too bad design that nobody could understand
aissen has quit [Ping timeout: 480 seconds]
<daniels> NVIDIA don't do GKI either
<ayaka_> except the vendor itself
<daniels> AMD, on the other hand, spent a long time participating in both the design and the implementation of upstream colour management
<ayaka_> because it is nvidia
<daniels> so it's not like upstream and vendors are completely different things
<daniels> some participate (costs time, gives the benefit of a voice); others don't participate (benefit of being easier, cost is you have no say in what upstream does)
<ayaka_> let me focus my point, I am going to sell my RFC as a generic metadata container exchange interfaces between driver interfaces
<daniels> it's not going to get merged
aknautiy has joined #dri-devel
<ayaka_> daniels, so any idea about exchange a vendor data assigned with a graphics buffer
<daniels> I mean, it seems pretty clear that you either haven't read the section of the DRM docs about new uAPI (which is bad), or you have read it and you think 'these rules are only for other people' (worse)
<ayaka_> I know we need a FOSS userspace implementation
<Kayden> that's kind of the bare minimum though. just because there is a userspace available that could use a uAPI that upstream doesn't like, doesn't mean they're going to like/accept it
<daniels> right. in general you're just dumping a problem ('GKI means vendors have to do more work'), and trying to transfer the problem to other people ('hey DRM people, I haven't bothered contributing anything to help solve your problems, but take this to solve my problem, and take the burden of supporting that forever'). it's a really really bad tradeoff for upstream.
<ayaka_> what is not what I am thinking about
apinheiro has quit [Quit: Leaving]
<ayaka_> it is I can't convince the vendor to accept a clear and FOSS implementation
<ayaka_> neither convince my boss to do so. So I have an idea what balance the secure and open
pixelcluster has joined #dri-devel
<ayaka_> my RFC is just solving a simple problem, how to delivery a vendor specific data from one driver interface to another, which the vendor won't tell you the detail about what it is
<daniels> yes, that is a problem _for vendors_
<daniels> 'how do I transfer opaque blobs that do stuff I have no idea about' is not a problem that upstream has
<ayaka_> if we solve this problem, at least we would have DRM drivers that could display the images beyond DV
<daniels> so why should upstream accept the burden of maintaining this interface forever?
<Kayden> yeah, I really don't see amd/intel/others being in favor of merging a generic blob passer
<ayaka_> I think I have explained why I need this metadata in my previous email
<ayaka_> the patches version has been the fifth, I wonder when it would be merged
<daniels> you have explained why _you_ need this metadata
<daniels> you have not explained why _upstream_ needs this metadata
<ayaka_> I don't know why upstream needs this metadata either
<emersion> you need to convince the community that it needs it
<emersion> if you want to ship something
<ayaka_> if the reason that such data exchange mechanism is enough to attach the vendor to contribute their drivers
<emersion> but vendor-specific blobs doesn't sound like a great API
<ayaka_> because that pixel format is vendor specific
<ayaka_> could you display a intel Yf CCS image in the other vendor
<daniels> CCS is very well understood
<daniels> 'blob of stuff that does stuff' is not well understood
<daniels> Intel also spent _years_ of effort plumbing modifiers through the entire stack
<daniels> putting that effort that benefits everyone is what gains you credibility in upstream
<ayaka_> does intel tell you how to uncompress it?
<ayaka_> or arm gave out the afbc algorithm
<daniels> by contrast, you are showing up years after colour-management design discussions started, after years of work has happened between Collabora/AMD/Google/Valve/others, and saying 'I haven't even looked at the other stuff but you need to merge my stuff which completely subverts the design'. it's _hugely_ disrespectful if nothing else.
<daniels> if you want a formal NAK to the mailing list to help your internal discussion about how you need a proper submission, I can provide one
<ayaka_> I didn't say so, I just say dolby vision won't give out their IP
<daniels> (the CCS/AFBC examples are totally different - not only is there OSS code which does de/compress them, but it's a very well-understood intermediate transition phase - input->output->input is something you can measure and process. this is talking about input + unknown aux input -> unknown output. that's completely different to lossless compression!)
<daniels> right, and NVIDIA wouldn't give out their IP either. but our answer to that wasn't to merge their driver upstream.
<ayaka_> daniels, you may mistake, there are two patches
<ayaka_> one is for synaptics pixel formats(metadata is about uncompression), one is for metadata exchange(HDR is the excuse which is commonly found in bitstream)
<daniels> yes, I've seen
<ayaka_> I think I offer the same info about the pixel formats as the intel
ella-0[m] has joined #dri-devel
isinyaaa[m] has joined #dri-devel
<Kayden> documentation that says that things "have variants" / "may work a certain way" / "we won't describe it" / "is similar to Intel's Y tile but not" isn't striking me as great documentation
<ayaka_> Kayden, I have listed the most of common variants there
<emersion> the patch has more details iirc
<ayaka_> Kayden, and I have explained why it is similar but not in the next sentence
znullptr[m] has joined #dri-devel
<ayaka_> I can't get where the rest two blames are pointing to
zzoon_2 has quit [Ping timeout: 480 seconds]
junaid has quit [Ping timeout: 480 seconds]
sgruszka has joined #dri-devel
<ayaka_> daniels, could you tell why the intel or arm were giving out the algorithm, I think I could bring the description from their documents
<ayaka_> s/why/where/
<ayaka_> I found something like Intel® Integrated Performance Primitives
<daniels> there's an igt_ccs test which does CCS, and AFBC also has open implementations
<daniels> but again, those are merely intermediate stages: known input -> AFBC -> de-AFBC, produces known output
<daniels> known input + DV -> display gives unknown output
<ayaka_> daniels, just ignore the DV
<daniels> you're asking for a generic mechanism to allow drivers to do completely unknown things
<ayaka_> I am talking about the a pixel format with compression options would work with or without DV
<daniels> ok, if you're instead asking about the Synaptics modifiers, I think all that's missing is actually describing the tile layout
<ayaka_> the container is for that(also things likes secure pipeline's key id)
<daniels> if you want to know what to aim for, look at the AMD/NV/Intel/AFBC modifier descriptions, where the (super)tile size/layout/etc is made very explicit within the modifier
<ayaka_> daniels, where? Should I draw a layout in the document
<daniels> just look at the other vendors, and describe your modifiers to the same level of detail
<emersion> in drm_fourcc.h
<daniels> the container discussion isn't worth having; as per above, it's fundamentally not going to be accepted
<ayaka_> yes, I think I am not worse than nvidia, maybe it is my english didn't make it clear
<ayaka_> daniels, because they are the arguments for the algorithm, I didn't know the algorithm myself
<ayaka_> emersion, what was missing in the drm fourcc document part https://lore.kernel.org/lkml/20230402153358.32948-2-ayaka@soulik.info/
mripard has quit [Quit: mripard]
dabrain34[m]1 has joined #dri-devel
<daniels> you explicitly state that the super/sub tiling layout is unknown
<daniels> NV/AMD explicitly describe the layotu
<daniels> that's one big difference
<ayaka_> well, 48x4 pixels, where a tile has 3x4 pixels and a 8 bits padding in the end of a tile
<ayaka_> you could just simply calculate out the layout
junaid has joined #dri-devel
Quinten[m] has joined #dri-devel
aissen has joined #dri-devel
<ayaka_> I think the introduce section may be misleading, it is hardware may not read from memory in logic address order(consider of memory bank)
<ayaka_> if that bothers you, I could offer a version without those formats in the super group or compression.
<ayaka_> In short, the modifiers cover have the parameters for hardware except the compression options when the compressed version is used
talcohen[m] has joined #dri-devel
<ayaka_> if you ignore the bits description likes padding, that is how we program hardware
alyssa has joined #dri-devel
mripard has joined #dri-devel
nicofee[m] has joined #dri-devel
aradhya7[m] has joined #dri-devel
bmodem has quit [Ping timeout: 480 seconds]
bmodem has joined #dri-devel
bmodem has quit [Remote host closed the connection]
bmodem has joined #dri-devel
bmodem has quit [Ping timeout: 480 seconds]
bmodem has joined #dri-devel
youmukonpaku133 has quit [Ping timeout: 480 seconds]
youmukonpaku133 has joined #dri-devel
ayaka_ has quit [Ping timeout: 480 seconds]
bmodem has quit [Ping timeout: 480 seconds]
bmodem has joined #dri-devel
JohnnyonFlame has joined #dri-devel
bmodem has quit [Ping timeout: 480 seconds]
bmodem has joined #dri-devel
bmodem has quit [Ping timeout: 480 seconds]
bmodem has joined #dri-devel
bmodem has quit [Excess Flood]
bmodem has joined #dri-devel
nyorain[m] has joined #dri-devel
sukrutb has quit [Ping timeout: 480 seconds]
MotiH[m] has joined #dri-devel
Duke`` has quit [Ping timeout: 480 seconds]
naheemsays[m] has joined #dri-devel
Company has joined #dri-devel
<ayaka> emersion, about blob for lut data, I think blob id could be replaced with this shmem fd(memfd). For example, upstreamer(decoder) offers the HDR10+ metadata
<ayaka> and userspace read it knowning its HDR10+ data, could be sent to the kms
<emersion> we already discussed this
<emersion> we do not believe that performance is an issue
<emersion> so we do not believe that this is necessary to optimize
<ayaka> yes, it is not
<emersion> also, i don't think tying metadata to buffers is a good idea
<daniels> ^
<Lynne> did something break wayland? I'm getting a segfault in wsi_CreateSwapchainKHR/wsi_wl_surface_create_swapchain
bmodem has quit [Ping timeout: 480 seconds]
<emersion> Lynne: do you have a stace trace?
<ayaka> Stop blame me about that and I am sure the upstream won't accept it. But I am sure people would implement DV in this way
<ayaka> HDR10+ data is open and small, but a page length lut is not, although it won't change
<daniels> ayaka: people can do whatever they want to downstream. I have no idea why you believe that upstream are required to accept whatever anyone thinks of.
yyds has quit [Remote host closed the connection]
<ayaka> sorry, I want to say, don't blame me no that
<Lynne> emersion: wl_proxy_get_version with a null proxy parameter
<ayaka> at least I tried, also I want to say I am not disrespect the upstream or try to waste people's time
aravind has quit [Ping timeout: 480 seconds]
<ayaka> and I can tell my boss the upstream solution is not suitable for us
JosExpsito[m] has joined #dri-devel
qyliss has quit [Quit: bye]
qyliss has joined #dri-devel
<hwentlan_> emersion, I have most of the API stuff there, with a simple pipeline in VKMS and an IGT test... There are still bits missing to show how I envision this to work, but I'll push my (very messy) WIP branches today
<pq> ayaka, I'd like to say there is nothing against your person or your employer. It's just the design concept that does not fit upstream.
<daniels> yeah, absolutely
<ayaka> this really makes me feel better. People know me that I am struggling to make more drivers work as possible formal to linux kernel
<ayaka> but they are many challenges invoking with many parties that I can't change their minds
<daniels> hopefully by clearly stating the upstream principles, it's easier to take back to the decision-makers and tell them: 'the problem isn't that I can't convince them, the problem is that our approach is not compatible'
mbrost has joined #dri-devel
<pq> ayaka, I'm sure there are. Having to commit to interoperable and maintained forever interfaces is worlds apart from doing an integrated product that no-one (else) cares how it works inside.
mbrost_ has joined #dri-devel
zzxyb[m] has joined #dri-devel
yuq825 has left #dri-devel [#dri-devel]
<ayaka> I don't want to create a separately world and pushing people to google android or chromebook. But I should say if this doesn't work here, it could try my luck in google. But in this case, there is not much restriction to the vendor
<pq> ayaka, do you now have a feeling of what makes a design incompatible? That a design needs to produce predetermined results, even if some intermediate data was hardware-specific or undecipherable?
ajhalaney[m] has joined #dri-devel
<pq> just after saying that, I realize that HDR static metadata fails that test: the metadata has more or less open specifications, both in itself and in HDMI and DP specs, but its results are... hardware-dependent in monitors >_<
mbrost has quit [Ping timeout: 480 seconds]
<ayaka> you could analyse the signal
<pq> I mean some monitors ignore metadata, others ignore different bits of metadata
<ayaka> not exactly
<swick[m]> I actually believe that we can have an opaque blob thing modifying the colors, as long as it happens after the very end of the exposed CRTC pipeline
<pq> most monitors ignore some bits of metadata
<swick[m]> if that happens still in the CRTC, or PHY, or the sink itself doesn't really matter
<pq> so I guess either is needed: a spec of the data, or predictable results (like compression metadata is unknown, but the result is identical to the original data)
<ayaka> the EDID or extend EDID would let you know which HDR formats you suggest
<pq> swick[m], you mean a little bit as if it was the sink doing that on its own? But where would you get the right kind of data matching the content?
<ayaka> s/you suggest/the tv support/
<swick[m]> pq: up for userspace to figure out
<swick[m]> if user space has content then it should be able to get the matching metadata blob
<ayaka> for our video case, there won't be SDR to HDR(unless you are using the AI)
kelbaz[m] has joined #dri-devel
<pq> ayaka, for example, my HDR monitor seems ignore almost all HDR static metadata, but it does check ih maxLuminance > 100.
<pq> swick[m], if it's an opaque blob, how could userspace figure it out?
<swick[m]> it gets it from somewhere, most likely a video decoder
<ayaka> pq, but it didn't stop you send the other HDR metadata to it right
<swick[m]> this whole thing won't be useful to me because you can't do compositing anymore in that case because it would invalidate the blob
<ayaka> so the CSC pipeline still works in new HDR apis
<swick[m]> but for specific use cases... why not
<DemiMarie> gfxstrand: if your blog post could explain why that future is still secure and fully virtio-GPU native context compatible that would be amazing.
<pq> ayaka, no, it just ignores most of the metadata. Some other monitor might not ignore the same way. Ergo, we have hardware-specific behaviour, which is unpredictable in general.
<DemiMarie> Security people generally aren’t convinced by what Windows and game consoles do.
<ayaka> pq, yes, but nothing you could do or should do here, the signal is out of phy
<pq> ayaka, yes, but it's still bad. I can never be sure what I'm actually displaying.
<swick[m]> this is accaptable for a lot of use cases though
<swick[m]> and so is DV
<pq> if I knew which parts of metadata the monitor ignores, and I knew limits of the monitors, I could compensate in the source.
<swick[m]> all true, but does this matter in this case?
<pq> but I can know neither, not without SBTM standard at least
<swick[m]> if someone wants to play back a DV source then the proprietary, unknown process is what they signed up for
<swick[m]> they can and do change the details of how they process the metadata
<DemiMarie> swick: maybe the answer is that DV is not suitable for upstream
<ayaka> pq, as I said, you can't know, because monitor EDID or extend EDID won't tell you
<swick[m]> that's not the answer. the answer is that user space knows what it wants
<ayaka> and the signal is right
<swick[m]> and we have to design KMS so that it can achieve what it wants
<swick[m]> and achieving the exact same output via the color pipeline and shader is one possible user space scenario
<ayaka> although I know a MIPI or HDMI analyser is very expensive, especially when it comes to UHD or 8K
<swick[m]> just pushing through a DV video with the metadata with possibly some overlay which will get slightly changed by the DV metadata is also a completely reasonable user space scenario
<emersion> Lynne: bleh, on it
<swick[m]> we already have HDR static metadata. the point of them is that after the pixel pipeline exposed by KMS there will be some adjustments which are guided by the metadata
<swick[m]> if that happens in the CRTC, PHY or the sink is not relevant
<swick[m]> I don't see how dolby vision is different here, other than being a opaque blob
<swick[m]> and if a sink implements the DV metadata guided conversions or the PHY or CRTC does it is also utterly irrelevant from a user space POV
<pq> As a display system developer, my goal is to present content the way it is intended to be perceived. I cannot do that if I don't know what's happening. There are a couple of ways to go about that: either I target a reference display in reference environment and trust that the monitor adjusts the picture to the actuals, or I know the actuals and target those directly while the monitor doesn't adjust.
<zamundaaa[m]> The difference is that we don't want to be stuck in this situation. We want monitors that are predictable, and we don't want opaque steps in between userspace and those predictable monitors that we hopefully will eventually get
<swick[m]> pq: completely reasonable, but other people have other goals and I think that's fine
<swick[m]> as long as that doesn't contradict with other goals that is
<pq> but it seems I'm usually given the worst of the two: a standard signal format, unknown actuals, and a monitor that does not adjust well enough.
<swick[m]> all very true and extremely frustrating pq, zamundaaa
<swick[m]> but things like DV are a thing and are very much the opposite of what we're aiming for
<hwentlan_> DV = Dolby Video?
<pq> that's why I dislike different monitors ignoring different bits of metadata
<swick[m]> hwentlan_: dolby vision
<hwentlan_> ah, right
<pq> swick[m], I haven't yet started replying to your DV comments, maybe later. :-)
<hwentlan_> I would love to support it someday. Haven't looked at it closely. Would be nice if there could be something like a closed-source userspace library that deals with it and spits out a 3DLUT or some other well-defined operations that can be programmed through the color pipeline API
<swick[m]> the only way yo support DV in a composited system without screwing up color accuracy of the rest of the content is to apply the DV transformation before the compositing. that makes the whole "DV after the exposed color pipeline" thing unusable
<swick[m]> hwentlan_: yeah, that would probably be how we'd have to do it
<swick[m]> in the wayland color management protocol we could have a "pre-apply LUT" thing that acts directly on the provided pixel values from the buffer
<swick[m]> and then the compositor can figure out how to integrate that
<swick[m]> but I'm with ayaka that DV metadata could be supported in the KMS API just like HDR static metadata. if the hardware has the capability to do the DV transformation in the CRTC or PHY, or if the sink somehow supports it DV.
<swick[m]> how to supply the DV metadata blob is another question
<ayaka> swick[m], hardware can't do DV CSC as far as I knew
<ayaka> because Dobly didn't release an license for it
<ayaka> you could just design a hardware as dolby said
<swick[m]> not surprised, I'm just saying that conceptually it doesn't matter
cmichael has quit [Quit: Leaving]
f_ has joined #dri-devel
f_ has left #dri-devel [\o]
<ayaka> swick[m], also emersion killed the possible that metadata assigned with framebuffer
<emersion> i did not "kill" it
<swick[m]> yeah, that one is a hard sell
<emersion> i just said that i don't believe it's a viable path
<emersion> we went that path with implicit sync and we're trying to undo it now
<emersion> in general, it sounds like a good idea until it's not
<ayaka> so your idea is with the daniels that "The mechanism (shmem, dmabuf, copy_from_user, whatever) is _not_ the
<ayaka> problem. The problem is the concept."
<ayaka> the barrier here is the secure memory, even I could guess out what it is in DV metadata
<ayaka> I can't access it. The only possible is that synaptics compression pixel formats but the more detail is unknown to me
<Lynne> emersion: vulkaninfo crashed in wsi_wl_surface_get_capabilities2 even with the patch
<emersion> damn
<Lynne> also not seeing immediate swapchain mode supported in mpv, not sure if that's intended
<emersion> the compositor doesn't support the ext most likel
<emersion> y
<Lynne> ah, no support in wlroots/sway yet?
<emersion> not yet, there is a MR
<emersion> hm, i think that's a bug in vulkaninfo?
<emersion> > If pPresentModes is NULL, then the number of present modes that are compatible with the one specified in VkSurfacePresentModeEXT is returned in presentModeCount
<emersion> however it seems like no VkSurfacePresentModeEXT is chained?
<apteryx> has there been other reports than mine regarding OpenGL regressions on old nVIDIA GPUS using Nouveau after moving to Linux 6.x ? https://gitlab.freedesktop.org/drm/nouveau/-/issues/192
<emersion> or am i misreading the spec?
Duke`` has joined #dri-devel
Haaninjo has joined #dri-devel
Haaninjo has quit []
<emersion> If a VkSurfacePresentModeCompatibilityEXT structure is included in the pNext chain of pSurfaceCapabilities, a VkSurfacePresentModeEXT structure must be included in the pNext chain of pSurfaceInfo
mbrost has joined #dri-devel
TMM has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM has joined #dri-devel
mbrost_ has quit [Ping timeout: 480 seconds]
thellstrom1 has quit [Ping timeout: 480 seconds]
kts has joined #dri-devel
mbrost_ has joined #dri-devel
heat has joined #dri-devel
mbrost has quit [Ping timeout: 480 seconds]
Jeremy_Rand_Talos has joined #dri-devel
Duke`` has quit []
<hwentlan_> emersion, pq, swick, and anyone else interested... the very much WIP work for the color pipeline API:
<swick[m]> hwentlan_: oh, cool! will take a look next week
Jeremy_Rand_Talos has quit [Remote host closed the connection]
yyds has joined #dri-devel
<hwentlan_> I intend to bring them to a point where I have two real pipelines, one with more than one op
<hwentlan_> and then clean them up
<emersion> nice!
Jeremy_Rand_Talos has joined #dri-devel
<hwentlan_> for now it's best to not look at individual patches, but look at the "Changes" tab on the MR, i.e. look at the entire diff
<swick[m]> ack
<hwentlan_> I also need to implement a basic algorithm for the pipe discovery and programming in IGT
<hwentlan_> that's sort of next on the list
<koike> hwentlan_ btw, since you have it on fdo, I was wondering if you could test your changes with drm ci (drm topic/drm-ci branch), and it would be great to have your feedback on the ci :D
<hwentlan_> let me know any feedback you have for the bits that are there. the API stuff and the bits for the new drm_object should pretty much be there
Jeremy_Rand_Talos has quit [Remote host closed the connection]
<hwentlan_> koike: hmm, never looked at it. does it run IGT
<hwentlan_> ?
Jeremy_Rand_Talos has joined #dri-devel
<hwentlan_> koike: gfx-ci/drm-ci?
<pq> swick[m], you're right from that point of view, that things like network cards are well accepted, and NICs can be used to send arbitrary payload anywhere. So why shouldn't a KMS driver be able to do the same. OTOH, the KMS payload affects everything KMS does on that screen, and a DV blob does unknown things. I think opaque vs. understood blob is a major difference, even if the results are not predetermined.
mbrost_ has quit [Ping timeout: 480 seconds]
yyds has quit [Remote host closed the connection]
<swick[m]> both metadata that you understand and a opaque metadata blob result in more or less arbitrary color transforms anyway
<swick[m]> I think this is mostly a question of policy and has no technical relevance
<swick[m]> I would certainly prefer metadata that I understand
<pq> indeed
<pq> I also think there is huge political difference whether an unknown blob is used to program local hardware vs. sent outside as-is.
<swick[m]> how so?
<swick[m]> oh, you mean because programming local hardware is part of the kernel driver and if something goes wrong and we don't understand the data we send to the hardware that's just horrible
<swick[m]> yeah, I guess I agree
<pq> yeah, like that, or even executing that unknown code on local hardware, even if it's not the CPU
<pq> the very existence of a mandatory unknown blob is a violation of a user's right to use the hardware they own for whatever they want to, but for practical reasons it often (firmware) has no alternative
sgruszka has quit [Remote host closed the connection]
<pq> "I bought a monitor that understands DV, but I can't produce any DV myself to make use of it."
<pq> as long as a DV blob does not program any local hardware, I really don't know what to think of it.
<pq> Do we let users enjoy DV content with their DV monitors and support the proprietary system, or not.
mripard has quit [Quit: mripard]
<pq> I suppose it has a practical answer: I do not have a license to develop, review, or test anything related to DV. So that's it. Someone would have to figure out if Linux can even legally forward prebaked DV blobs in general.
<karolherbst> users caring enough probably also run linux-libre where things like that will probably be patched out anyway
<karolherbst> and the others just want things to work
<koike> hwentlan_ yes it runs igt on several devices, git://anongit.freedesktop.org/drm/drm branch topic/drm-ci
<pq> I mean, could e.g. Red Hat get sued if Fedora Workstation shipped a kernel that allows activating the DV mode of a DV certified display after the end user installs some proprietary video player? Or worse, some FOSS project reverses enough of the DV blob to make use of it.
<karolherbst> RH has lawyers to figure that out
<pq> would be nice to know before people waste time on it
<karolherbst> but given how much of a problem h.264 was... maybe the answer is that RH won't support it
<karolherbst> yeah... maybe it makes sense if people knowing the details enough to bring that up, do we have any lawyers we could ask from a fdo/linux kernel perspective? does the linux foundation have lawyers we could ask?
<koike> hwentlan_ basically the branch I sent applies this patch https://lists.freedesktop.org/archives/dri-devel/2023-August/418499.html (with a few fixes to the commit, but doesn't affect its execution), you basically just need to apply this commit, go to the settings of you linux gitlab fdo CI and point the CI yml file to
<koike> drivers/gpu/drm/ci/gitlab-ci.yml
<karolherbst> though I guess for a pure linux perspective it doens't matter as only distributions/vendors ship binaries
<karolherbst> and it's their problem
<pq> personally, I really cannot be interested in going through all that trouble to support a proprietary ecosystem
<karolherbst> yeah....
benjamin1 has quit [Ping timeout: 480 seconds]
<karolherbst> but also the linux desktop ecosystem is kinda lacking a lot and it won't be better if we choose to not support those things. Maybe DV doesn't matter and that's the end of it, maybe it matters a lot and it will be a deal breaker, no idea myself :) Just I think a general user isn't happy if they buy fancy hardware and nothing works
<karolherbst> or like users have a netflix subscriptions but only get 720p content, $because
<karolherbst> even though owning 4K@120 hardware
<koike> hwentlan_ this is an example of a pipeline it runs https://gitlab.freedesktop.org/helen.fornazier/linux/-/pipelines/970661 , the branch is already included in linux-next (in case you want to test on top of that)
<karolherbst> in a perfect world everything would be open and we wouldn't have such issues, but reality is as nice to us, so we are left with that and have to figure how to make the best of it, and what even the best means here
<koike> hwentlan_ you can even point to your version of igt, so it builds your version
<karolherbst> and we already have constraints anyway, and what if e.g. a kernel regresses with certain blobs nobody understands? that's also a major issue, I just don't know if that's even relevant in this case
<hwentlan_> koike: thanks for the great pointers. Will take a look at that
<pq> hwentlan_, awesome :-)
<Lynne> pq: libplacebo supports dovi
<Lynne> it can even convert dovi to regular hlg hdr
<Lynne> ah, but only the profile used in web distribution, blu-rays use a different profile which probably couldn't be supported without major DRM changes
<Lynne> that profile requires two frames to correctly present, a regular 10bit 4k image, along with an 8-bit 1080p image, of which only the top two bits are set
<Lynne> really, dovi is basically a flexible compatibility layer from which other HDR variants can be generated
junaid has quit [Remote host closed the connection]
<MrCooper> pq: "which parts of metadata does this monitor respect/ignore?" seems like another thing which could be tracked in a libdisplay-info database (though in some cases it might depend on firmware version, which I'm not sure can be reliably determined)
yyds has joined #dri-devel
kasper93 has joined #dri-devel
junaid has joined #dri-devel
benjamin1 has joined #dri-devel
yyds has quit [Remote host closed the connection]
yyds has joined #dri-devel
junaid has quit []
thellstrom has joined #dri-devel
yyds has quit [Remote host closed the connection]
jfalempe has quit [Quit: Leaving]
heat has quit [Read error: Connection reset by peer]
heat has joined #dri-devel
lynxeye has quit [Quit: Leaving.]
<DemiMarie> karolherbst: personally I think Netflix should be handled on dedicated media player hardware.
<DemiMarie> Better yet would be for DRM to just be outlawed, but that won’t happen.
heat has quit [Ping timeout: 480 seconds]
<DemiMarie> <pq> "I suppose it has a practical..." <- Yup. Something that can’t be reviewed can’t be accepted.
<gcarlos> Hi guys, I just sent my patchset to the kernel mailing list but got some strange error on git send-email and messed it up by sending just half of it, what should I do? I tried to sent the remainder patches manually but they didn't reply the patchset thread :(
<karolherbst> DemiMarie: sure, but that's not what users have right now
<karolherbst> uhhh
<karolherbst> why are there spir-vs with the `Shader` _and_ the `Kernel` cap?
<karolherbst> *sigh*
dviola has quit [Quit: WeeChat 4.0.4]
penguin42 has joined #dri-devel
macromorgan is now known as Guest656
macromorgan has joined #dri-devel
<penguin42> are there any open tools for reading Radeon profiling registers - if there are any to read - I'm after finding info on thing like bank conflicts and the like
Guest656 has quit [Ping timeout: 480 seconds]
sauce has joined #dri-devel
kts has quit [Quit: Konversation terminated!]
kts has joined #dri-devel
<agd5f> penguin42, mesa supports the GL_AMD_performance_monitor extension. You can also use something like RGP: https://gpuopen.com/rgp/
<penguin42> agd5f: I did download rgp and go tthe gui running but it complained of being unable to start; I'm assuming it wants AMD rather than standard Linux drivers but it wasn't clear (I'm on F39) any examples of using GL_AMD_performance_monitor?
<penguin42> oh hang on, rdp has started up today - it didn't want to do that the other day
benjaminl has joined #dri-devel
sukrutb has joined #dri-devel
<penguin42> agd5f: I'm missing how to actually gather a profile (This is an OpenCL application) - I have the rdp open that lets me configure stuff, and I have rgp which looks like it would be great to analyse a profile if I had one
benjamin1 has quit [Ping timeout: 480 seconds]
gouchi has joined #dri-devel
junaid has joined #dri-devel
donaldrobson has quit [Ping timeout: 480 seconds]
youmukonpaku133 has quit [Read error: Connection reset by peer]
youmukonpaku133 has joined #dri-devel
youmukonpaku133 has quit [Ping timeout: 480 seconds]
youmukonpaku133 has joined #dri-devel
qyliss has quit [Quit: bye]
qyliss has joined #dri-devel
evadot has quit [Remote host closed the connection]
junaid has quit [Remote host closed the connection]
evadot has joined #dri-devel
idr has joined #dri-devel
<idr> Any suggestions?
<dj-death> idr: update the hash
<dj-death> idr: it's know to change with compiler changes
<dj-death> idr: not just on intel
<idr> Right... but usually it will show the before and after images so that you can decided if the change is okay.
<idr> I have had cases where an image change was a bug.
<idr> The reference image is just a broken image link. :(
junaid has joined #dri-devel
<dj-death> yeah
<dj-death> not sure what's up with that
junaid has quit []
oneforall2 has quit [Quit: Leaving]
<idr> Hrm... I clicked 'Retry' to re-run the test. Maybe that will sort it out.
oneforall2 has joined #dri-devel
<dj-death> idr: usually no
<dj-death> idr: there is probably the hash somewhere in the log
<idr> I don't think it will change the rendered results. :) I'm just hoping it will fix the broken image link.
<dj-death> actual: b1c96546107d8a7c01efdafdd0eabd21
<dj-death> expected: 5bc82f565a6e791e1b8aa7860054e370
<dj-death> interesting, none of those are on main it seems :)
<dj-death> ah no
<dj-death> I have an out-of-date one
<dj-death> idr: you see that a NIR change updated that hash :)
<dj-death> that trace should probably use a human perceptible difference
<dj-death> it's an option for imagemagick, that might be the solution to this
<idr> Blarg.
<idr> Didn't change anything.
<idr> anholt, daniels ^^^ Suggestions?
qyliss has quit [Quit: bye]
qyliss has joined #dri-devel
<daniels> DavidHeidelberg[m]: ^ help pls?
oneforall2 has quit [Ping timeout: 480 seconds]
oneforall2 has joined #dri-devel
<DavidHeidelberg[m]> idr: look good, we dont have uploaded ref image, so harder to compare :(
<idr> Bummer. :(
<idr> Okay... I'll just update the hash and move one.
<idr> *move on.
<idr> DavidHeidelberg[m], daniels, dj-death: Thanks.
<DavidHeidelberg[m]> Maybe it got dropped w/ some migration, for some hashes it happened i think
<Sachiel> wouldn't it make more sense to disable that trace then?
<DavidHeidelberg[m]> If I follow right, we just dont have reference screenshot, the trace is fine
* DavidHeidelberg[m] is on the phone right now, so rechecking orc history again
<DavidHeidelberg[m]> *irc
<idr> DavidHeidelberg[m]: Correct.
<idr> The trace runs fine and produces a result. When the result hash doesn't match the expected hash, you don't get to see an image of what is expected. You only see the "after" image.
junaid has joined #dri-devel
<DavidHeidelberg[m]> Btw. yes, in worst case of doubt you can look at different hash from different HW which is not http 404 (I did that few times), but what I recall the screenshot look right
youmukonpaku133 has quit [Ping timeout: 480 seconds]
Kayden has quit [Quit: wake up, nvme controller, it's not time to go to sleep]
junaid has quit [Remote host closed the connection]
Kayden has joined #dri-devel
youmukonpaku133 has joined #dri-devel
gouchi has quit [Quit: Quitte]
junaid has joined #dri-devel
JohnnyonFlame has quit [Ping timeout: 480 seconds]
junaid has quit [Remote host closed the connection]
kts has quit [Ping timeout: 480 seconds]
sima has quit [Ping timeout: 480 seconds]
oneforall2 has quit [Quit: Leaving]
oneforall2 has joined #dri-devel
kts has joined #dri-devel
rasterman has joined #dri-devel
benjaminl has quit [Quit: WeeChat 3.8]
<DemiMarie> Does a GPU reset mean that something went wrong with the GPU hardware, firmware, or driver? Or are some GPUs still unable to cleanly recover from faults, timeouts, etc without resetting the whole GPU?
oneforall2 has quit [Ping timeout: 480 seconds]
thellstrom has quit [Ping timeout: 480 seconds]
<robclark> gpu reset can be anything, but usually it amounts to usermode driver did something wrong (which could potentially involve not working around a hw/fw limitation)
bgs has quit [Remote host closed the connection]
oneforall2 has joined #dri-devel
crabbedhaloablut has quit []
guru_ has joined #dri-devel
crabbedhaloablut has joined #dri-devel
<Lynne> sometimes it could mean "user program did something wrong"
<robclark> true.. especially with faults
Duke`` has joined #dri-devel
oneforall2 has quit [Ping timeout: 480 seconds]
<robclark> (but at least for drm/msm we don't reset the gpu on mem faults.. unless the gpu hangs or generates hw fault)
guru__ has joined #dri-devel
guru_ has quit [Ping timeout: 480 seconds]
Haaninjo has joined #dri-devel
* penguin42 thinks he's seen it on Radeon when his shader has screwed up badly
guru_ has joined #dri-devel
<DemiMarie> robclark: obviously a bad shader can cause the GPU to fault, but I was hoping that the impact of that would be contained to whichever userspace process submitted the buggy shader. Being able to reset the GPU seems analogous to a buggy unprivileged userspace process causing a kernel panic, which would obviously be an OS or hardware problem.
<DemiMarie> Are GPU hardware and drivers just not at that level of robustness yet?
guru__ has quit [Ping timeout: 480 seconds]
<ccr> will they ever be
<Lynne> penguin42: it's not an achievement, I can crash both intel and radeon cards
<Lynne> though when intel resets, you barely notice these days, but when radeon goes, sometimes not even a reisub is enough
<Lynne> what would be an achivement would be to cause nvidia gpus to crash in a way you'd notice, so far even running the dirtiest decode/subgroup/oob code I haven't been able to
Duke`` has quit [Ping timeout: 480 seconds]
rasterman has quit [Quit: Gettin' stinky!]
<idr> DavidHeidelberg[m]: Is there something I can do to get a reference image added? So this doesn't happen to the next person.
<DavidHeidelberg[m]> last time I looked, it's automated somehow, but I can look into it again
<anarsoul> are there any guarantees in NIR about store output intrinsics regarding their location?
guru_ has quit [Quit: Leaving]
<idr> Okay. That would make sense. Hopefully changing the expected checksum will trigger that.
<anarsoul> basically I need to combine 2 store_output intrinsics into a single store_zs_output, since Z and S outputs are written at once on Utgard (i.e. lima)
<anarsoul> i.e. something similar to pan_nir_lower_zs_store()
<robclark> DemiMarie: we replay the jobs from other processes queued up behind the faulting job, so no impact to other processes
<DemiMarie> robclark: ah, do GPUs not have more than one process executing at once?
<DemiMarie> s/process/job/
<DemiMarie> and what about with firmware scheduling where many jobs can be scheduled at once? hopefully one job faulting does not bring down all of them.
<DavidHeidelberg[m]> usually the driver survives it :)
<robclark> with fw scheduling, the fw would have to do the equiv thing.. kill the job that crashed and replay the others (possibly with help of kernel? Not really sure, I don't yet have fw sched)
Guest673 has joined #dri-devel
Guest673 is now known as red_user
youmukonpaku133 has quit [Ping timeout: 480 seconds]
youmukonpaku133 has joined #dri-devel
red_user has quit [Remote host closed the connection]
kzd has joined #dri-devel
<DemiMarie> robclark: why “replay” as opposed to “allow to continue”?
<DemiMarie> I’m probably missing something obvious here
* DemiMarie wishes there were a book that explained how modern GPUs work internally
<robclark> well, it could be either.. "replay" is the implementation detail.. since we've reset the gpu.. maybe if something had a way to reset the "gpu" part of the gpu without resetting the fw scheduler it could simply be "allowed to continue"
<robclark> it's just an implementation detail
crabbedhaloablut has quit []
<DemiMarie> Okay so I am definitely misunderstanding something.
<robclark> I mean, the details might differ per gpu, but it amounts to "kill the bad job, let the others proceed"
<DemiMarie> thanks!
Haaninjo has quit [Quit: Ex-Chat]
shashanks_ has joined #dri-devel
mvchtz has quit [Ping timeout: 480 seconds]
shashanks__ has quit [Ping timeout: 480 seconds]
<DavidHeidelberg[m]> zmike: except that when not cached your csgo trace loads like 3 minutes, it seems to behave stable in CI and it's pretty complex, so thanks again! (also I force pushed the compressed version without any changes)
pcercuei has quit [Quit: dodo]
<nightquest> DemiMarie: I remember someone in #radeon had similar question (ie. "how GPUs work internally") and this link came up: https://www.rastergrid.com/blog/gpu-tech/2022/02/simd-in-the-gpu-world/ - sorry if it's not entirely relevant to the question, hope somehow this can help you
mstoeckl_ is now known as mstoeckl
<DemiMarie> nightquest: I’m somewhat familiar with how GPUs execute instructions (the “data plane”, so to speak), but how they are _managed_ is much less well documented.
<DemiMarie> Before a GPU can execute any user instructions, page tables need to be set up, the MMU needs to be pointed at the right page table root, textures need to be bound, etc. On a CPU the equivalent operations would be done by code executing in a privileged mode of the CPU, but my understanding is that GPUs generally don’t have such a thing, so something else needs to do that.
<DemiMarie> Such details are only really relevant to two groups of people: driver writers (most people in this chat) and those who want to understand what happens when stuff goes wrong (me!).
JohnnyonFlame has joined #dri-devel
TMM has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
TMM has joined #dri-devel
<nightquest> Yes, I assumed this is kinda "elementary" stuff for folks here. But I'm glad I replied, as you have given very nice introduction to this article for me. Thanks!