ChanServ changed the topic of #asahi to: Asahi Linux: porting Linux to Apple Silicon macs | General project discussion | GitHub: https://alx.sh/g | Wiki: https://alx.sh/w | Topics: #asahi-dev #asahi-re #asahi-gpu #asahi-offtopic | Keep things on topic | Logs: https://alx.sh/l/asahi
<phire>
Well, you can already use Linux in a VM under macos
<phire>
Which is almost the same as a hypervisor running both?
<dgc[m]>
Not almost enough for me :)
<dgc[m]>
I've done that for years, but I really would like the linux system not to be hostage to MacOS. And the inverse, running MacOS on a linux HV, suffers from HackOS not really being all quite there in the head.
<dgc[m]>
I truly just want a computer that can treat multiple OSes as first-class simultaneously. Not problem-solving here, just signing on to this as a great goal.
<chadmed>
thats something that sounds trivial in your head when you think about the basic problem definition but then you try to implement it and it ends up taking you 5 years and squillions of lines of code
<dgc[m]>
eh, I don't think it's trivial. But it's something I've not heard of anyone working on since Mach and the little I've heard on your project here sounds like it could be a lead.
<dgc[m]>
If that's nowhere near scope, my apologies. I'm really not asking for anything — just excited to hear of someone working on something interesting.
<chadmed>
nah its obviously not trivial, hence why i said its something that can sound trivial in your head but really isnt once you try to implement it
<dgc[m]>
I guess maybe it can sound trivial in your head, I don't know.
<chadmed>
is there any real benefit to doing it that way rather than, say, an API translation layer a la wine?
<dgc[m]>
what I would enjoy is complete system independence. I'm not looking to run linux binaries on macos or vice versa. I don't care about that at all, I have linux machines and macos machines I can run those on.
<dgc[m]>
But I would like to have only one device to do it on
<dgc[m]>
when macos inevitably crashes after 8 or 9 days I'd like my opensolaris system with 190 days of uptime to still be running. that kind of thing.
<dgc[m]>
(opensolaris was the first thing I ran as a VM on OSX)
<chadmed>
well yeah thats kinda what im getting at. i appreciate the philosophical idea of independence that such an approach would bring, but what practical benefit does on-the-fly "kernel switching" have over a wine-style implementation?
<chadmed>
you get the stability, uptime and freedom of (pick your *nix) but the ability to run all your macos applications as normal
<dgc[m]>
I don't know, that's not something I want so I can't speak to it
<dgc[m]>
OK. let me back up on this.
<chadmed>
btw i am genuinely curious as to what youre thinking so i can better understand what you want out of the system, im not trying to put you down or anything like that :)
<dgc[m]>
Where I'm coming from is that I value being able to run the whole ecosystem. I want a Mac with the whole Mac environment on it. I also want a beos box (not really, this is just to abstract the capability from a specific use case, which I don't have). It's not about being able to run omnigraffle because it's better than any structured drawing tools on windows or linux. It's not about applications. I want to have both operating environments —
<dgc[m]>
VM-style — but not have to physically switch plastic to do it.
<chadmed>
right so the benefit would be to have the OSs "containerised" to a degree but allow on-the-fly switching between the two running close to metal
<dgc[m]>
Honestly having a beos machine and a mac on my desk with a full keyboard/video/mouse switch in front of it accomplishes my entire goal, except for being able to pack it into a suitcase and take it on a plane.
<dgc[m]>
yes
<dgc[m]>
I always hoped that Mach would accomplish this but in practice, nobody ever really took it there
<dgc[m]>
actual implementations of mach (next, DEC, even apple) were single-server
<dgc[m]>
Mainframes did this well
<dgc[m]>
nobody has figured out how to virtualize operating systems as well as IBM did in 1970.
<chadmed>
perhaps its issues in dealing with IPC prevented it from practically being taken further
<chadmed>
im sure you could find an airline willing to haul your system/z machine around with you :-P
<dgc[m]>
that would be pretty funny if I were elon musk
<chadmed>
thin hypervisor implemented entirely in PL/I, sounds like what you want to me lmao
<dgc[m]>
hahaha
<dgc[m]>
I've been meaning to install hercules on a raspi, I wonder how useful I could make that
<dgc[m]>
Well, you're right that this isn't actually useful to very many people.
<dgc[m]>
I suppose I lament that computing at large never committed to this road. I thought that maybe this idea I read about in the trade pulp about Asahi had something of that concept to it, but maybe not.
<chadmed>
i consider it a hypervisor in that it pretends to be XNU kernel so that iBoot will set up the hardware and jump to it, and allow it to set up a more "sane" environment for loading Linux and Linux-adjacent stuff
<chadmed>
i had this discussion with a mate the other day who was dismayed that he is unable to use Boot Camp on his new macbook. conceivably, m1n1 may at some point be useable to get a boot sequence like m1n1 --> u-boot --> edk2 --> windows boot manager --> nt kernel
<dgc[m]>
I see, so it's new/custom code with specific purpose
<dgc[m]>
the whatever I read sounded like "we ground down kvm to fit on this shelf"
<chadmed>
but i doubt that is anywhere near the priority list at all, nor would microsoft bother to write or sign drivers for new mac hardware as it would cannibalise surface sales. nor would apple bother writing drivers for arm windows as it would stifle arm macos development
yuyichao has joined #asahi
cptcobalt has joined #asahi
PhilippvK has joined #asahi
phiologe has quit [Ping timeout: 480 seconds]
marvin24 has joined #asahi
marvin24_ has quit [Ping timeout: 480 seconds]
boardwalk has quit [Quit: Ping timeout (120 seconds)]
jkkm has joined #asahi
boardwalk has joined #asahi
opticron has quit [Ping timeout: 480 seconds]
opticron has joined #asahi
<krbtgt>
wrt above i'm not really aware of any lightweight desktop hypervisors. for prior art on similar, you might be interested in the virt on IBM power systems
<krbtgt>
kinda-xen like except not really
<krbtgt>
with a wacky custom OS underpinning it
Z751 has joined #asahi
Z750 has quit [Read error: Connection reset by peer]
<sven>
iirc the pongoOs/checkrain people (never_released, but i haven't seen him here in a while) wanted to write a small hv to allow running windows
<sven>
no idea if they ever started working on that though
nkaretnikov has joined #asahi
bisko has joined #asahi
<phire>
dgc[m]: m1n1 (at least its current form/direction) isn't really what you want.
<phire>
It's a lightweight hypervisor designed to run only a single operating system while giving it nearly full access to hardware
<dgc[m]>
@phire ok, thanks. I heard "lightweight hypervisor" and got my hopes up
<phire>
Though, I agree. The idea of a hypervisor running two operating systems that feel first class is appealing
<j_ey>
dgc[m]: its a hv for tracing MMIO accesses mostly
<dgc[m]>
right - I just had hoped it wouldn't be limited to that (valid!) purpose :)
<phire>
I have schemed about integrating linux and windows tightly
<phire>
making them share a scheduler and memory allocator
<phire>
The real barrier to such ideas is that *most* hardware doesn't support being controlled by two separate driver stacks.
<phire>
either they clash, or they introduce security flaws
<phire>
so your hypervisor is doomed to getting more and more heavy-weight
bisko has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
bisko has joined #asahi
<arnd>
One thing that may be possible is to have a purely partitioning hypervisor that passes through (almost) everything to macos, while giving Linux access to half the CPUs, memory and one USB controller, and then put all the peripherals for the Linux guest on a USB hub.
<arnd>
I don't see a real practical application for that kind of setup though, so I don't see who would spend the effort to implement that
<pipcet[m]>
arnd: I did something like that, communicating using "shared" memory (actually magic changing signatures in RAM that linux found and overwrote with its data)
<chadmed>
i think dgc's idea was, as he said, more like Mach where multiple userspaces have access to the entire hardware pool. though theres multiple reasons that approach never really went anywhere
<chadmed>
not least of which being the monstrous IPC overheads in Mach when used this way, and also the overheads incurred by the enormous amount of context switching the machine needs to do in order to make such a setup possible
<pipcet[m]>
arnd: even sharing just USB is hard because the interrupt controller isn't designed to be shared and the two USB ports (their PD chips) share an IRQ line..
<arnd>
There are a number of implementations along that line, in Linux we have 'jailhouse', and Intel is working on 'ACRN'
<arnd>
pipcet[m]: I meant the Linux guest would either get the PCI XHCI (type-A on mac mini) ports, or the DCW3 (type-C) ports, or do you mean some bits are even shared between the two sets?
<pipcet[m]>
oh. that makes sense, sorry. I have a MBP :-)
<pipcet[m]>
arguably, it might make sense to let macos initialize the hardware, then kill it and run Linux instead using initialized hardware, but I think it's way more trouble than any benefit we might see...
<sven>
i'm still sad that they didn't hook up the "new usb connection" line to the dwc3 controllers :(
<sven>
fwiw, mac os will lock down *a lot* while initializing the hardware
<pipcet[m]>
sven: is it possible we get a "proper" SMC notification? I just got those to work...
<pipcet[m]>
yes, that's the trouble
<sven>
possibly. but it's still annoying because we'll have to hook up that notification to the dwc3 driver :/
<pipcet[m]>
(you set the "NTAP" flag in the SMC, then you get messages on EP20 with the low byte set to 0x18)
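<pipcet[m]>
(in code terms, the check is just on the low byte; this is a sketch based only on what I described above -- the constant name is made up, and any field layout beyond the low byte is an assumption:)

```python
# Hypothetical decode of the EP20 message type described above: after setting
# the "NTAP" flag in the SMC, notification messages arrive on EP20 with the
# low byte set to 0x18. Everything beyond the low byte is an assumption.
SMC_MSG_NOTIFICATION = 0x18  # hypothetical name for the 0x18 type code

def is_smc_notification(msg: int) -> bool:
    """True if an EP20 message word carries the 0x18 notification type."""
    return (msg & 0xFF) == SMC_MSG_NOTIFICATION
```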
<sven>
hrm. or maybe we'll just need to hook it up to the usb phy driver
<sven>
i think that could actually work. make the usb phy listen for that SMC notification and then just flip the reset bits. dwc3 will actually get a "new device" event then
<pipcet[m]>
(we should probably have a nice DT-based API for sharing the SMC between all the drivers that use it, anyway. Currently working on something but it's not quite ready yet.)
<chadmed>
isnt that what mailboxes are for?
<pipcet[m]>
(mailboxes are designed to have a few channels. The SMC has ~4000 "properties" that can be read and set. but, anyway, it's quite possible we'll go with a mailbox-based hack in the end)
bisko has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<sven>
what other drivers need to use the SMC in which ways? smc -> usb phy could e.g. just be an interrupt
bisko has joined #asahi
<pipcet[m]>
the ones I know about are battery/power supply, usb, pci (for the power pin), power switch, lid switch, backlight (the on/off switch, not the PWM which goes through DCP), hwmon of course, gpio for when you really want to blow up your mac, system shutdown, (I think the way macos uses the WDT also makes use of the SMC)
Vaughn has joined #asahi
<pipcet[m]>
they all read or write properties, some need interrupts which are provided by the SMC
<marcan>
sven: is the different nvme queue thing new for M1? with the larger qes I was referring to the quirk that T2 macs needed
<marcan>
I hadn't realized there were deeper differences this time around
<marcan>
(other than it not being pci)
bisko has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<sven>
i think so
<sven>
nvme works on the T2 mac, doesn't it?
<sven>
if yes, then this queue thing is new
<sven>
apple calls it "nvme-linear-sq" in the ADT
<sven>
i thought previous macs were just that the tags are shared between queues, that there's only a single interrupt and that the queue size is limited
<sven>
oh.. there's also NVME_QUIRK_128_BYTES_SQES, but that's not required anymore on the m1
<sven>
possibly because they moved the AES stuff to their weird iommu-like thing
<marcan>
ah, makes sense then
<j_ey>
marcan: did you see sven's latest iommu pagesize mismatch code?
<sven>
my plan is to send a RFC soonish anyway just to see if the iommu maintainers will even consider accepting such changes with their disadvantages
eric_engestrom has joined #asahi
<chadmed>
distros not keen on building 16k kernels i take it?
<sven>
i'm sure some of them will come around eventually, but if supporting 4k kernels with some limitations isn't too painful i'd like to do that
<sven>
that would at least allow to use the official distro installer + then add a custom package for the 16k kernel after everything already works
<JTL>
I was going to ask about the current situation with IOMMU and page sizes, just out of self interest.
<chadmed>
oh yeah i fully get the rationale for doing it, just as you say the iommu guys may come down on it like a ton of bricks
<chadmed>
maybe we just force everyone to use gentoo so they *have* to build their own kernel :-P
<JTL>
rofl
<sven>
they weren't opposed to it so far fwiw, but that can change once some of the limitations are clear
<sven>
which is why i'll try to prepare that RFC series this weekend to see if it makes sense to continue working in that direction
<marcan>
j_ey: I saw the tweet :)
<marcan>
sven: awesome :D
* j_ey
follows sven on tweeter
<marcan>
btw, for those that missed the installer stream: building a bootable partition "manually" works, copying files from another one. I believe I know how to construct all those files from ipsw contents, so now I just need to write a script to do it.
<marcan>
The initial installer prototype will probably be a simple thing that you can point at an APFS container and it'll create everything needed to run m1n1, I might add some basic logic so you can just point it at a macOS partition and give it a target size and it'll make space for a secondary linux install next to it and set things up
<chadmed>
were you able to stream in just the bytes of the ipsw that we need? i played around with it for a bit in python but i couldnt get any useable data out of it :( (i am exceedingly stupid though so maybe i missed something)
<marcan>
I haven't tried that yet but I know that will work. I've done streamed zips before.
<marcan>
If standalone python bootstrap isn't an issue and I write this in python as I intend to, I expect just making a little cache layer between ZipFile and urllib will do what I want
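<marcan>
(roughly this shape -- a hypothetical illustration of the cache layer, not the actual installer code; fetch() stands in for an HTTP Range request:)

```python
import io
import zipfile

class RangeReader(io.RawIOBase):
    """Seekable file-like object over a remote resource, fetching byte
    ranges on demand and caching chunks so ZipFile's seeks don't refetch."""

    def __init__(self, fetch, size, chunk=1 << 16):
        # fetch(start, end) -> bytes for the half-open range [start, end)
        self.fetch, self.size, self.chunk = fetch, size, chunk
        self.pos = 0
        self.cache = {}  # chunk index -> bytes

    def seekable(self): return True
    def readable(self): return True
    def tell(self): return self.pos

    def seek(self, offset, whence=io.SEEK_SET):
        if whence == io.SEEK_SET: self.pos = offset
        elif whence == io.SEEK_CUR: self.pos += offset
        elif whence == io.SEEK_END: self.pos = self.size + offset
        return self.pos

    def read(self, n=-1):
        if n < 0:
            n = self.size - self.pos
        n = max(0, min(n, self.size - self.pos))
        out = bytearray()
        while n:
            idx, off = divmod(self.pos, self.chunk)
            if idx not in self.cache:  # fetch and cache this chunk once
                start = idx * self.chunk
                self.cache[idx] = self.fetch(start, min(start + self.chunk, self.size))
            piece = self.cache[idx][off:off + n]
            out += piece
            self.pos += len(piece)
            n -= len(piece)
        return bytes(out)
```

then zipfile.ZipFile(RangeReader(fetch, total_size)) only pulls the ranges it actually touches (central directory + the members you extract)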
<marcan>
also, all of this can be tested/developed under macOS. in fact you don't need recovery at all for the first step of installation *if* you're using a base macOS version that Apple still signs in TSS.
<marcan>
(stage 2, i.e. bputil/kmutil, always has to happen in 1TR, but we know that already)
<marcan>
if you are using an older base macOS then you either need to run the installer from recovery first, or make it a three step process (macOS -> recovery -> 1TR) I believe
<chadmed>
as in, older than monterey? you mentioned some things that got fixed and changed with 12.0 on stream last night
<marcan>
no, Apple will only sign a subset of recent macOS versions for security reasons if you're in "full security" mode
<chadmed>
ah right yeah i follow now
<marcan>
you *can* do a "reduced security" mode install, I actually got a prompt for that while installing macOS once because I guess the network failed?
<marcan>
but AIUI that can only happen from recovery mode
<alyssa>
pipcet[m]: Do you have system shutdown/reboot patches I can cherrypick on top of mainline? (SMC stuff?)
<alyssa>
sven: Also do they work with your ASC driver?
<sven>
uhhh... who knows. maybe?
<sven>
thanks for volunteering to test and fix them!
<sven>
that asc driver will probably require some changes either way. i'm still not sure the rtkit handling belongs in drivers/mailbox
<alyssa>
:F
bps has quit [Remote host closed the connection]
bps has joined #asahi
Mary has joined #asahi
<alyssa>
sven: Are dma_alloc_noncoherent/dma_alloc_wc expected to work on m1?
<alyssa>
(do I need specific CONFIG options? or more patches? or..)
<alyssa>
or.. maybe I shouldn't be going down this path at all, since I don't actually want CMA memory but rather rando memory mapped to the disp0-dart..?
<sven>
the framebuffer is coherent fwiw. either by default or because the DART works its magic
<sven>
no idea about dma_alloc_noncoherent
<sven>
looks like it should work though
<sven>
alyssa: so to have memory mapped to the disp0-dart you need to declare that dart in the DT, then add iommus = <&dart stream-id>; to whatever device you want to use DMA with
<sven>
and then dma_alloc_coherent should just work
<sven>
(internally, the device's dma_map_ops will be set to the ones from dma-iommu.c which will eventually call into apple-dart.c)
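<sven>
(the two steps in DT terms, roughly -- node names, register addresses and the stream ID here are illustrative placeholders, not values from a real Apple device tree:)

```dts
/* Sketch only: declare the DART, then reference it from the DMA master. */
dart_disp0: iommu@231300000 {
        compatible = "apple,dart";      /* placeholder compatible string */
        reg = <0x2 0x31300000 0x0 0x4000>;
        #iommu-cells = <1>;
};

disp0: display@230000000 {
        /* route this device's DMA through the DART, stream ID 0 */
        iommus = <&dart_disp0 0>;
};
```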
bps has quit [Ping timeout: 480 seconds]
<marcan>
alyssa: do not use any of that nonsense
<marcan>
that is mostly x86 specific
<marcan>
on M1 all memory is coherent, and you want it like that
<marcan>
dma_alloc_coherent is probably what you want
<marcan>
don't model your memory management after x86 drm drivers; that is the ~one place weird stuff happens on x86 that has no place anywhere else
<marcan>
on M1 dma_alloc_coherent should be a normal memory allocation
<arnd>
IIRC dma_alloc_noncoherent() is a hack that is used for some of the old pa-risc/mips/ia64 numa systems, it has nothing to do with x86
<arnd>
otherwise as marcan says, don't use that
<marcan>
I was thinking _wc
<marcan>
which ISTR being used to map framebuffers
<krbtgt>
DRM drivers are like the 50th circle of hell per the people i know trying to get them working on non-x86
<krbtgt>
which basically means AMD's DRM driver, lol
<marcan>
heh, a bunch of arm drivers also use dma_alloc_wc I see
<arnd>
marcan: You are probably thinking of ioremap_wc() vs ioremap(), which is the right idea for frame buffers to speed up consecutive stores
<marcan>
arnd: yeah but not here, here you just want flat out cached
<marcan>
since we know it's coherent
<arnd>
right
<marcan>
and do we even do _wc? it probably falls back to uncached if it even works, which would be bad
<marcan>
(I forget how this ended up :))
<arnd>
there is no need to ioremap() here, since it's just normal memory, not mmio
<arnd>
oh, I wonder why that's different from arm32
<arnd>
anyway, if the gpu can see the cache, there is no need for write-through or uncached write-combining mapping of regular memory
<marcan>
yup
<marcan>
so far we have no evidence of *anything* being noncoherent on M1
<arnd>
nice!
<marcan>
I think there's a trap bit for cache ops in the VM stuff, maybe I should turn that on and see how much cache management macos does
<marcan>
might reveal some stuff
<marcan>
and also if I can somehow determine if it's for icache/dcache or something else (maybe always ignore dcache ops and always turn icache invals into dcache flush/icache inval?) I can just disable all device cache management... and see what breaks
bisko has joined #asahi
<pipcet[m]>
well, isb is sometimes necessary, but that's not really noncoherent
bisko has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<j_ey>
pipcet[m]: why did you have to register a restart handler? looks like the core wdt code already does that
arnidg[m] has joined #asahi
<pipcet[m]>
j_ey: I think you're right, thanks!
gomikun has joined #asahi
gomikun has quit []
<pipcet[m]>
j_ey: I think it was a restart handler that grew a WDT "driver" (not much of one, admittedly), if there's any interest in it I'll obviously clean it up :-)
<j_ey>
:)
<alyssa>
sven: ACk
<alyssa>
marcan: those are two function calls that different code paths for gem fb alloc use
<alyssa>
presumably _wc is the one to use, just need to make sure that works
<alyssa>
will try adding the disp0 dart to the dt as sven suggested and see what has to happen to make that work
<alyssa>
Ok, so can I figure out how to get ssh onto the mac without having wired connectivity hmmm
<alyssa>
I can probably share a connection from my linux machine? maybe?
<alyssa>
better question is if I have a second usb ethernet adaptor..
roxfan2 has joined #asahi
<alyssa>
Maybe not. Is this fate telling me to cherry pick the PCI patches
<alyssa>
mumble. ok
<alyssa>
sven: halp
<alyssa>
pipcet[m]: horrifying
<alyssa>
thanks
<pipcet[m]>
i'm confused, where are you trying to get ssh? if you have USB the ethernet-over-usb stuff used to work :-)
roxfan has quit [Ping timeout: 480 seconds]
<alyssa>
pipcet[m]: I have only one USB ethernet adaptor with me rn
<alyssa>
and my laptop is a chromebook (debianized) so ..
<alyssa>
pipcet[m]: re the watchdog driver, have you sent that upstream? scared of LKML?
<pipcet[m]>
sorry, what I meant was making the USB gadget pretend to be an ethernet adapter
<alyssa>
er wait what? :P
<pipcet[m]>
sorry, I'm not sure I'm thinking of the right situation. If you have the M1 running Linux with Sven's patches you should be able to configure in eem support, and then you can run TCP/IP over the same cable you presumably use to talk to m1n1?
<alyssa>
Huh. this is new to me
<alyssa>
and yes I have a m1n1 cable it's just inert right now after boot
<alyssa>
will give that a try thank you!!
<pipcet[m]>
it might be easier to do it the other way around, if your chromebook has usb c and gadget support...
<alyssa>
heh
<alyssa>
they both are type-c
<pipcet[m]>
and you're using a C-C cable? that setup always scares me because I'm afraid they'll start charging each other and all of my computers blow up at the same time ;-)
<alyssa>
lol
<alyssa>
m1 charges the chromebook
<opticron>
my friend has a 16" MBP and plugged a C-C cable into two of its ports...it started "charging" and the screen got brighter as if it were plugged in
<pipcet[m]>
it'll go blind
<alyssa>
pipcet[m]: so how do I make this gadget support come on
<pipcet[m]>
did you select the preconfigured gadget or the configfs one?
<alyssa>
hmmm is configuring it important? ;p
<alyssa>
I have CONFIG_USB_ETH set which is in `USB Gadget precomposed configurations`
<alyssa>
so that sounds promising
<alyssa>
ah but i'm in host only mode umm
<alyssa>
errr why does it only have host-only or gadget-only ....
<alyssa>
Is there really no dual mode? :|
<alyssa>
dual role uhh
<pipcet[m]>
in the kernel config? yes, there is, it just depends on usb role switching IIRC
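<pipcet[m]>
(for the configfs route, the usual libcomposite flow looks roughly like this -- the VID/PID and names are illustrative, and you need CONFIG_USB_CONFIGFS plus a gadget-capable UDC; check /sys/class/udc:)

```shell
modprobe libcomposite
cd /sys/kernel/config/usb_gadget
mkdir -p g1 && cd g1
echo 0x1d6b > idVendor            # Linux Foundation
echo 0x0104 > idProduct           # multifunction composite gadget
mkdir -p strings/0x409
echo "m1 gadget" > strings/0x409/product
mkdir -p functions/ecm.usb0       # ethernet-over-usb function
mkdir -p configs/c.1
ln -s functions/ecm.usb0 configs/c.1/
ls /sys/class/udc > UDC           # bind to the (single) UDC to activate
```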