user982492 has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
user982492 has joined #asahi-dev
user982492 has quit []
slicey has joined #asahi-dev
slicey has quit []
<marcan>
I was thinking of asking whether anyone has one of the 8-core models, since we probably need to do something special there
<marcan>
but it looks like Jawse left :/
<marcan>
if anyone else has an 8-core, I'd appreciate an ADT dump (in m1n1/proxyclient: `python -m m1n1.adt -r adt.bin > adt.txt`). note that it might contain your wifi password; remove the huge `nvram-proxy-data` line if you want to get rid of that
slicey has joined #asahi-dev
robinp[m] has joined #asahi-dev
maor26 has joined #asahi-dev
the_lanetly_052___ has joined #asahi-dev
d4ve has quit [Remote host closed the connection]
d4ve has joined #asahi-dev
slicey has quit [Quit: cya]
Dcow has joined #asahi-dev
curlyqueue_ has quit [Remote host closed the connection]
rkt has joined #asahi-dev
robinp[m] has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
robinp[m] has joined #asahi-dev
kov has joined #asahi-dev
<marcan>
ha, while messing with the HV I caught m1n1 booting the primary core from RVBAR (i.e. through what is normally the secondary entry point) and then blowing up because the spinlocks weren't working
<marcan>
that was when macOS went to sleep
<marcan>
so the CPUs do indeed come back through RVBAR, but the more interesting question is how this happened...
aleasto has joined #asahi-dev
robinp[m] has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<marcan>
ah, it sets bit [0], Disable WFI Return. fair.
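For context, a minimal sketch (in C) of the chicken bit being referenced, assuming the CYC_OVRD sysreg encoding (s3_5_c15_c5_0) that XNU's proc_reg.h uses for this register; the macro and helper names here are made up for illustration.

```c
#include <stdint.h>

/* Sketch only: CYC_OVRD bit [0] is the "Disable WFI Return" bit
 * mentioned above. With it set, a core that executes WFI powers down
 * fully and re-enters through RVBAR (the reset/secondary entry path)
 * instead of resuming after the WFI instruction. */
#define CYC_OVRD_DIS_WFI_RETN (1ULL << 0)

static inline void set_dis_wfi_retn(void)
{
    uint64_t v;

    /* read-modify-write the implementation-defined register */
    __asm__ volatile("mrs %0, s3_5_c15_c5_0" : "=r"(v));
    v |= CYC_OVRD_DIS_WFI_RETN;
    __asm__ volatile("msr s3_5_c15_c5_0, %0" : : "r"(v));
    __asm__ volatile("isb" : : : "memory");
}
```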
gabuscus has quit [Remote host closed the connection]
<marcan>
maz: since we're *eventually* going to have to support this suspend dance, what's your thinking? right now for arm64 there are only PSCI and ACPI.
<marcan>
either we do an Apple-specific driver, or we add a non-hvc/smc PSCI interface and make m1n1 into a PSCI provider
gabuscus has joined #asahi-dev
<marcan>
doing it the PSCI way should also allow m1n1 to handle the deep cpuidle stuff instead of having to write a driver for that too
<marcan>
but calling from the kernel into m1n1 has some, er, interesting consequences (e.g. does the kernel 1:1 map m1n1 in the low address space for this, or do the called functions have to be position-independent and callable at any random vaddr the kernel feels like using, or do we turn the MMU off?)
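To make the "m1n1 as PSCI provider" option concrete, here is a hedged sketch of what such an entry point might look like. The function IDs are the standard ones from the PSCI spec; everything else (the function name, the direct-call mechanism) is an assumption, and the mapping question above is exactly about how the kernel would reach this function.

```c
#include <stdint.h>

/* Standard PSCI function IDs (64-bit calling convention). */
#define PSCI_VERSION            0x84000000u
#define PSCI_CPU_SUSPEND64      0xc4000001u
#define PSCI_CPU_ON64           0xc4000003u
#define PSCI_SYSTEM_SUSPEND64   0xc400000eu

#define PSCI_RET_SUCCESS        0
#define PSCI_RET_NOT_SUPPORTED  (-1)

/* Hypothetical m1n1 entry point, reached by a plain function call
 * instead of SMC/HVC (there is no EL3 to trap to on these machines).
 * Per the discussion above, the kernel must either keep m1n1
 * 1:1-mapped, build this code position-independent, or turn the MMU
 * off before calling in. */
long m1n1_psci_call(uint32_t fn, uint64_t arg0, uint64_t arg1,
                    uint64_t arg2)
{
    (void)arg0; (void)arg1; (void)arg2;

    switch (fn) {
    case PSCI_VERSION:
        return 0x00010000; /* PSCI 1.0 */
    case PSCI_CPU_SUSPEND64:
        /* would arm the deep idle state and WFI; the core then comes
         * back through RVBAR and gets steered to the kernel's resume
         * entry point (arg1) */
        return PSCI_RET_NOT_SUPPORTED;
    case PSCI_CPU_ON64:
        /* would set the target core's RVBAR and release it */
        return PSCI_RET_NOT_SUPPORTED;
    default:
        return PSCI_RET_NOT_SUPPORTED;
    }
}
```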
<marcan>
(how does EFI do this, I wonder...)
<marcan>
apparently there is a SetVirtualAddressMap thing, heh
the_lanetly_052__ has joined #asahi-dev
the_lanetly_052___ has quit [Ping timeout: 480 seconds]
<maz>
marcan: yeah, EFI provides its own address map (which is a 1:1 mapping IIRC), which we use when calling into it.
<maz>
marcan: ardb had some ideas on how to fake this up, though it will require some buy-in from the usual suspects.
<maz>
marcan: I'm obviously partial to the PSCI solution, as it avoids reinventing another wheel, even if that's not a very nice wheel.
<marcan>
fair enough
<marcan>
in other news, all the pmgr-pwrstate instances for t6001 are ~2000 lines of DT
<marcan>
I'm thinking that should at least go in an include :-)
<marcan>
on the plus side, the extra stuff in t6001 vs t6000 is neatly contained in a separate PMGR instance, so I can keep my "t6000.dtsi includes t6001.dtsi and removes stuff" approach; all it takes is one node deletion
<kettenis>
marcan: the UEFI memory map basically indicates memory ranges that need to be preserved for runtime services
<kettenis>
these are 1:1 mappings, but they can be relocated by calling SetVirtualAddressMap
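For reference, what that one-shot relocation looks like from the OS side. This is a sketch in gnu-efi style: SetVirtualAddressMap and the descriptor fields come from the UEFI spec, while the function and variable names here are assumed.

```c
#include <efi.h>

/* Sketch: relocating EFI runtime services. The boot-time memory map is
 * identity-mapped; the OS assigns a VirtualAddress to every descriptor
 * marked EFI_MEMORY_RUNTIME and hands the map back exactly once, after
 * which runtime services must be called at the new addresses. */
EFI_STATUS relocate_runtime_services(EFI_RUNTIME_SERVICES *rt,
                                     EFI_MEMORY_DESCRIPTOR *map,
                                     UINTN map_size, UINTN desc_size,
                                     UINT32 desc_version,
                                     UINT64 runtime_va)
{
    for (UINTN off = 0; off < map_size; off += desc_size) {
        EFI_MEMORY_DESCRIPTOR *d =
            (EFI_MEMORY_DESCRIPTOR *)((UINT8 *)map + off);
        if (d->Attribute & EFI_MEMORY_RUNTIME) {
            d->VirtualAddress = runtime_va; /* OS's chosen mapping */
            runtime_va += d->NumberOfPages << EFI_PAGE_SHIFT;
        }
    }

    /* One-shot call: the firmware fixes up its own internal pointers
     * using the new map. */
    return rt->SetVirtualAddressMap(map_size, desc_size,
                                    desc_version, map);
}
```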