<sricharan>
This is a new remoteproc driver that we are upstreaming for our latest IPQ chipsets.
<sricharan>
Briefly, the architecture is like this: there is one central Q6 DSP processor (which runs the OS 'root' domain) and there are other 'wifi' radios (both AHB and PCIe based, with no firmware of their own).
<sricharan>
The 'wifi' radios are controlled ('user' domain) by the Q6. This is called the Multi PD (protection domain) architecture.
<sricharan>
Based on the reviews so far, it looks like there are some concerns about the words 'pcie' and 'ahb' in the bindings/compatibles.
<sricharan>
Not sure if it would be better to name the bindings/compatibles based on the root/user domain nomenclature instead.
<sricharan>
So I would like to discuss this a little more here and get guidance on how to proceed on this topic.
sricharan has quit [Remote host closed the connection]
<minecrell>
lumag: When making the msm8996 cbf interconnect stuff, have you considered defining a generic "clk-interconnect" driver that would be completely independent of the clock driver? Something like "pwm-backlight", "clk-pwm", "pwm-clock" with a dedicated DT node that wouldn't require driver changes
<minecrell>
not sure if that would work
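As a rough sketch of what minecrell's idea could look like in DT, by analogy with pwm-backlight: a dedicated node that is a plain clock consumer. The "linux,clk-interconnect" compatible and the node/label names are invented for illustration, not an existing binding.

```dts
/* Hypothetical binding sketch -- "linux,clk-interconnect" is an
 * invented compatible, not an existing binding. */
cci_icc: interconnect {
	compatible = "linux,clk-interconnect";
	clocks = <&apcs2>;
	clock-names = "bus";
	#interconnect-cells = <1>;
};
```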
<lumag>
minecrell, this is not that easy. Each icc node needs a unique id, etc. I thought about it, but decided that it was not worth the trouble.
<minecrell>
lumag: uh, a globally unique ID?
<lumag>
So we have the icc_clk_register()
<lumag>
Yep
<lumag>
minecrell, see icc_create_node()
<minecrell>
lumag: hm, that's not really a great design choice if one wants to have hardware-independent drivers that can be combined with each other arbitrarily :/
<lumag>
minecrell, yep.
<minecrell>
lumag: I think 8939 needs pretty much the same for the CCI, except in apcs-msm8916.c. Adding the icc_clk_register() call there is a bit weird because the same driver is also used for CPU clocks...
<lumag>
minecrell, I assume that apcs0_mbox and apcs1_mbox drive the clusters and apcs2 drives the CCI, correct?
<minecrell>
lumag: yep
<lumag>
Well, the easiest way would be to check for the presence of the #interconnect-cells property and call icc_clk_register() if it is present
<lumag>
Another option would be a separate compatibility string for CCI APCS (like we have for per-core SAW2 and for L2 SAW2 devices)
<lumag>
the separate-compat approach is cleaner; the property check is easier
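The two alternatives lumag mentions could be sketched in DT roughly like this. The address, properties, and the "qcom,msm8939-apcs-cci" compatible are illustrative assumptions, not existing bindings.

```dts
/* Option 1: property check -- the existing apcs2 node simply gains
 * #interconnect-cells, and the driver calls icc_clk_register()
 * when it sees the property. */
apcs2: mailbox@b111000 {
	compatible = "qcom,msm8939-apcs-kpss-global", "syscon";
	reg = <0xb111000 0x1000>;
	#clock-cells = <0>;
	#interconnect-cells = <1>;
};

/* Option 2: a dedicated compatible for the CCI instance --
 * "qcom,msm8939-apcs-cci" is a hypothetical string. */
apcs2: mailbox@b111000 {
	compatible = "qcom,msm8939-apcs-cci", "syscon";
	reg = <0xb111000 0x1000>;
	#clock-cells = <0>;
	#interconnect-cells = <1>;
};
```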
<minecrell>
lumag: Also, I think the clk-interconnect may need an OPP table in the future, similar to other devices, so it could vote for performance states on vdd-dig and CPR. Given that OPP operates on a device pointer and wants to request clocks itself, this gets rather messy if there isn't a separate device node for the clk-interconnect
<minecrell>
I feel like having a separate node would integrate more naturally where there is a clk provider and a consumer like everywhere
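A sketch of the separate-node variant minecrell describes: because the clk-interconnect node is an ordinary clock consumer, it can carry its own operating-points-v2 table for the vdd-dig/CPR votes. All compatibles, labels, frequencies, and power-domain OPP references here are hypothetical.

```dts
/* Hypothetical OPP table tying CCI clock rates to performance
 * states; rpmpd_opp_* labels are assumed placeholders. */
cci_opp_table: opp-table-cci {
	compatible = "operating-points-v2";
	opp-200000000 {
		opp-hz = /bits/ 64 <200000000>;
		required-opps = <&rpmpd_opp_low_svs>;
	};
	opp-400000000 {
		opp-hz = /bits/ 64 <400000000>;
		required-opps = <&rpmpd_opp_nom>;
	};
};

cci_icc: interconnect {
	compatible = "linux,clk-interconnect"; /* invented */
	clocks = <&apcs2>;
	operating-points-v2 = <&cci_opp_table>;
	#interconnect-cells = <1>;
};
```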
<lumag>
It is not the interconnect that votes there.
<lumag>
I'd push the vdd votes to the per-cluster OPPs
<lumag>
So that a call to dev_pm_opp() would adjust all the parameters
<lumag>
a call to dev_pm_opp_set_rate() from cpufreq-dt.c
<minecrell>
lumag: The vdd-dig/CPR vote of CCI/CBF is only indirectly related to the CPU freq of a single cluster. It needs to be set based on the final clk rate the interconnect framework chooses and sets for CCI. You could pre-compute the necessary OPPs for a given peak bandwidth and add them to the CPUs but that's quite confusing imo
<minecrell>
especially since the CPUs already make their actual own votes for their cpu freq
<konradybcio>
yes it would be very confusing
<konradybcio>
especially since many CPUs have OPP tables which already include icc votes for dram
<konradybcio>
and/or L3
<minecrell>
konradybcio: well those are additional to CCI anyway
<lumag>
icc-clk votes using the peak freq, so the bandwidths do not sum up.
<konradybcio>
did anyone figure out what made the -next trees since 20230606 unbootable yet?
<lumag>
So each CPU freq opp entry would have a final CCI freq and thus vdd/CPR vote
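lumag's alternative, sketched as a DT fragment: precompute the final CCI state for each CPU OPP and attach it via required-opps, so that dev_pm_opp_set_rate() from cpufreq applies the whole set at once. The frequencies and the cci_opp_* labels are invented for illustration.

```dts
/* Hypothetical per-CPU OPP table carrying the precomputed CCI
 * state (and thus the vdd/CPR vote) for each CPU frequency. */
cpu_opp_table: opp-table-cpu {
	compatible = "operating-points-v2";
	opp-998400000 {
		opp-hz = /bits/ 64 <998400000>;
		/* implies a final CCI freq and hence a vdd/CPR vote */
		required-opps = <&cci_opp_svs>;
	};
	opp-1363200000 {
		opp-hz = /bits/ 64 <1363200000>;
		required-opps = <&cci_opp_nom>;
	};
};
```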
<minecrell>
but imo if the interconnect subsystem decides for whatever reason to apply peak bw X (let's assume temporary maximum bandwidth until icc_sync_state), then the clock shouldn't be changed without ensuring proper voltage
<lumag>
I think it will apply the result of get_bw(), but I'll have to check.
<lumag>
Otherwise you have to put the OPP table into the apcs2 node and somehow make OPP aware of it.
<lumag>
And at this point it definitely calls for a separate compat
<minecrell>
lumag: It's trivial with a separate node, something like the "clk-interconnect", because it's just a clock consumer like any other device
<konradybcio>
changing voltage based on any other scaling parameter screams OPP, yes minecrell
<minecrell>
It's messy when clock provider and consumer are mixed into one node
<lumag>
minecrell, you have the node, apcs2
<lumag>
The problem would be to tackle icc/clk rate change and setting of the opp state
<minecrell>
lumag: yeah but the OPP core does clk_get(dev, ...), so it expects a `clocks = <&apcs2>;`. Don't think apcs2 can reference itself as clock...
<lumag>
minecrell, this would be if you want to call dev_pm_opp_set_rate from the apcs2 driver
<minecrell>
lumag: I think this was rejected for gcc before, doing OPP scaling inside clock drivers
<minecrell>
so it must be on the consumer side which is the clk-interconnect
<lumag>
clk-interconnect is not a real consumer. It is just a translation layer. The consumer is cpufreq and its OPP tables
<lumag>
clk-interconnect just translates bandwidth vote onto the clk rate
<lumag>
The fact that CCI bandwidth is set via the clock is an internal Linux design.
<minecrell>
lumag: From my point of view these are two separate concepts, just like PWM and backlight. One component models the clock source (PWM generator), the other the interconnect (backlight chip)
<minecrell>
it's obvious that apcs has nothing to do with an interconnect
<lumag>
I toyed with the idea of turning the CBF into a proper ICC driver. In the end I decided against it because
<minecrell>
since the same hw block provides the cpu clocks
<lumag>
because the clock framework makes it easier to handle all the mux & stuff
<minecrell>
As an exaggerated example, if I use an i2c controlled clock generator and wire it as clock source into an interconnect, that doesn't automatically make the i2c clock controller an interconnect :)
<minecrell>
because I can buy the i2c clock controller for arbitrary uses
<lumag>
CBF / APCS2 are interconnects, because they drive the connection between clusters
<minecrell>
lumag: I see apcs2 like the i2c clock controller. Its clock output just happens to be wired to the interconnect (standard ARM CCI-400 clock input)
<minecrell>
the interconnect is the ARM CCI-400, to be absolutely correct
<lumag>
ack. Then we have the device node
<lumag>
Which internally might use the icc-clk
<minecrell>
lumag: although we need to be careful with this path or someone will ask to model all interfaces of the CCI-400 properly in the device tree :S
<lumag>
Or it would be a separate driver calling dev_pm_opp_set_rate
<lumag>
I'd prefer not to do so from icc-clk
<lumag>
minecrell, we might have to step on the same path with 8996/CBF/CPRv3. But I hope to be able to cast CPR votes from cpufreq.
<minecrell>
lumag: I think it's not a matter of what works or not. Functionally both should produce the same result. It would be nice to agree on a common approach so the device trees look similar
junari_ has joined #linux-msm
junari__ has quit [Ping timeout: 480 seconds]
swdefrgthfgjhk has quit [Remote host closed the connection]
swdefrgthfgjhk has joined #linux-msm
junari_ is now known as junari
junari has quit [Ping timeout: 480 seconds]
sricharan has joined #linux-msm
jhovold has quit [Quit: WeeChat 3.8]
pespin has quit [Remote host closed the connection]
Guest2487 has quit [Remote host closed the connection]