<ktz_[m]>
well, arch on arm is half-baked, at least that's my experience. a good majority of the packages simply won't run on aarch64
boardwalk has quit [Quit: Ping timeout (120 seconds)]
<ktz_[m]>
you may want to take a look at alpine in case you get too frustrated. mps is right that arm is a first-class citizen in alpine, as per my experience so far as well
boardwalk has joined #asahi
bisko has quit [Ping timeout: 480 seconds]
<ktz_[m]>
<j`ey> "for those that missed it https:/..." <- hey what can you do with this?
<j`ey>
ktz_[m]: it's just showing the first triangle produced from the m1 GPU, without using the macOS kernel
<ktz_[m]>
ahhh!! amazing then indeed gratz :)
<j`ey>
lina is doing some great work, probably not too long before two triangles.. and beyond!
<ktz_[m]>
yeah great stuff well done
<CaptainYukinoshitaHachiman[m]>
<ktz_[m]> "you may want to take a look at..." <- Thanks! Actually I'm seeking to run some Windows apps and I don't know if there is any way to do that currently on Arch
<ktz_[m]>
check offtopic as well some nice news there too
<ktz_[m]>
well not sure about arch but I spent a couple of days trying to get it working on qemu and I was close
nehsou^ has joined #asahi
<ktz_[m]>
I spent a lot of time reading qemu docs as well, so it shouldn't take that long, but I couldn't get past a certain point eventually. it's doable I think tho
<j`ey>
CaptainYukinoshitaHachiman[m]: have you tried sudo pacman -Sy qemu-system-aarch64
<CaptainYukinoshitaHachiman[m]>
j`ey: Yes and I get the same error
<mps>
ktz_[m]: windows apps? I'm not sure wine runs on aarch64
<mps>
fexemu could be a solution
<ktz_[m]>
windows 10/11 on qemu
<ktz_[m]>
never heard of fexemu
<mps>
fexemu is an x86 emulator
<ktz_[m]>
well I guess running it on x86 will be a piece of cake
<mps>
I started to port it to alpine/musl but lost interest
derzahl has quit [Remote host closed the connection]
<mps>
according to 'marketing' it should be (a lot) faster than qemu
<ktz_[m]>
mps: ahh so it isn't related to qemu directly I see
<mps>
ktz_[m]: I'm not sure zfs has a new release ready for 5.17 kernels
<ktz_[m]>
this is up to 5.18 supposedly
<mps>
aha, maybe I will find time this evening to try with current asahi kernel, 5.18-rc5
<ktz_[m]>
they merged many commits from the 2.1.5 staging and I think they address the bio errors
<ktz_[m]>
send it to me if you can I'll try to do it
<ktz_[m]>
I somehow got it running too but no scripts available
<ktz_[m]>
I'm trying with zfs package instead of lts now
<mps>
I have business meetings all day, so I don't have time till the evening
<ktz_[m]>
yes just cp the APKBUILD you have currently and I think I'll get it going
<mps>
ktz_[m]: btw, I fixed alpine linux-asahi-dev to use proper symlinks, did you upgrade it
<ktz_[m]>
yes I saw, good
<ktz_[m]>
thanks :)
<mps>
(last week I worked only on riscv64 kernel and u-boot, got first riscv SBC)
<ktz_[m]>
if there are any rules you can add to mdev that would be nice, I cp'd a couple of them from here https://arvanta.net/alpine/libudev-zero/ but still can't get adb to see the phone
<mps>
we could discuss this on #asahi-alt
<ktz_[m]>
let me join
systwi has quit [Read error: Connection reset by peer]
systwi has joined #asahi
ptudor has quit [Remote host closed the connection]
ptudor has joined #asahi
f-fritz[m] has left #asahi [#asahi]
snajpa has quit [Ping timeout: 480 seconds]
povik has quit [Ping timeout: 480 seconds]
bisko has joined #asahi
nicklas[m] is now known as nIcKlAs[m]
nIcKlAs[m] is now known as Pr[m]
Pr[m] is now known as Nilsson[m]
hir0pro has joined #asahi
bisko has quit [Remote host closed the connection]
n1c has quit [Quit: ZNC 1.8.2+deb1+focal2 - https://znc.in]
n1c has joined #asahi
rootbeerdan3 has joined #asahi
rootbeerdan has quit [Read error: Connection reset by peer]
rootbeerdan3 is now known as rootbeerdan
Ry_Darcy has quit [Remote host closed the connection]
Ry_Darcy has joined #asahi
<mps>
ktz_[m]: re: zfs, who merged 'many commits from the 2.1.5 staging and I think they address the bio errors'
povik has joined #asahi
<kjellarne[m]>
How long will the Asahi alpha take to install, approximately?
<j`ey>
less than an hour
<_jannau_>
assuming the macOS partition resize works smoothly
<rowang077[m]>
Does linux have something equivalent to QoS settings for a thread/process?
<chadmed>
man nice
<rowang077[m]>
or is there only thread priority?
<chadmed>
the scheduler takes care of "qos" under the hood, you can use the nice command to hint to the scheduler how you want certain things prioritised though
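(A minimal sketch of that niceness hint in Python on Linux; the shell equivalent is `nice -n 10 <command>`:)

    import os

    # Raise this process's niceness by 10: a hint to the scheduler
    # that it is low priority and should yield CPU time to others.
    os.nice(10)

    # ... run the low-priority workload here ...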
<rowang077[m]>
What does that mean in practice? Software not made for heterogeneous architectures will "waste" power/resources by running on P cores? Or does the scheduler see that this program doesn't need many resources, so it gets moved to an E core?
<rowang077[m]>
Sorry if those are obvious questions. I have no linux experience with heterogeneous architectures.
<chadmed>
the scheduler knows about the performance level of each core from the devicetree (we tell the kernel what each core can do)
<chadmed>
the scheduler then shifts threads around based on their utilisation time/whether or not theyre sleeping/some other factors
<chadmed>
you can of course manually set thread affinity to lock certain threads to certain cores if you really really want to minmax perf per watt
<chadmed>
if we had finer grained control over the CPU we could profile its energy use and then feed that into the Energy Model subsystem of the kernel which then makes scheduling decisions based on the core's perf/W and how much juice it thinks the thread needs
<chadmed>
currently we just use the standard performance level bindings for heterogeneous systems which is basically just a dimensionless scaling factor that tells the kernel how much more powerful the big cores are relative to the little ones
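(For the curious, the relative capacity values the scheduler derives from those devicetree bindings are exposed in sysfs on arm64, kernel config permitting; a small Python sketch to print them:)

    from pathlib import Path

    # Relative capacity of each CPU as derived from capacity-dmips-mhz;
    # a bigger number means a more powerful core.
    cpus = Path("/sys/devices/system/cpu").glob("cpu[0-9]*/cpu_capacity")
    for cap in sorted(cpus, key=lambda p: int(p.parent.name[3:])):
        print(cap.parent.name, cap.read_text().strip())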
kov has joined #asahi
<chadmed>
adding the core and cluster cost bindings to the dt for each op point isn't much work, it's deriving the actual numbers accurately that would suck
<chadmed>
once thats done you just have to write the energy model functions into the cpufreq driver and schedutil handles the rest
nehsou^ has quit [Ping timeout: 480 seconds]
nehsou^ has joined #asahi
hir0pro has quit [Quit: hir0pro]
hir0pro has joined #asahi
hir0pro has quit [Remote host closed the connection]
hir0pro has joined #asahi
Ry_Darcy has quit [Remote host closed the connection]
hir0pro has quit [Ping timeout: 480 seconds]
<chadmed>
oh lol you don't even have to do that anymore, cpufreq automagically gets it from the bindings now
nehsou^ has quit [Ping timeout: 480 seconds]
nehsou^ has joined #asahi
c10l has quit [Quit: Bye o/]
hir0pro has joined #asahi
hir0pro has quit [Remote host closed the connection]
hir0pro has joined #asahi
snajpa has joined #asahi
LinuxM1 has joined #asahi
<tsujp>
Can m1n1 be used to inject kexts?
<tsujp>
I guess the crux of my question is: can m1n1 boot macOS proper or is it "just" (for lack of a better word) compatible with Apple's UEFI or what have you, and then it's called from that
<sven>
you can only boot xnu inside the m1n1 hv
<sven>
but I guess you could inject kexts into the kernelcache you pass to run_guest.py
<tsujp>
So I'm checking the wiki page for "SW:Hypervisor" sven and when it says "start into 1tr" that is a real macOS installation (per above steps on that page) but it was booted via m1n1?
<j`ey>
no, 1tr is not started via m1n1
<j`ey>
booting into 1TR is done so that you can install m1n1
<sven>
yeah, you have to have a real macOS installation and then replace XNU of that installation with m1n1 from 1TR
<tsujp>
Oh right "1TR" is boot ID 1 "system/one true" recoveryOS
<tsujp>
I know basically nothing so trying to get started here is a bit tricky for me hehe
<sven>
not sure what boot ID 1 is but if you just remove that from your sentence you’re correct ;)
<sven>
looks like it’s just another name for the 1TR mode
<j`ey>
can't you load custom kexts some other way?
<sven>
never heard about that before :D
<j`ey>
trying to understand why you want to do it with m1n1
c10l has joined #asahi
<tsujp>
I was thinking about trying to get something like Lilu going which can live load and replace kexts but that uses OpenCore and that's x86 only
<tsujp>
I also thought any hacking I do with kexts on my m1 machine might be translatable to Asahi (idk how else to help) and I want to customise my m1 macos installation
c10l has quit [Quit: Bye o/]
<j`ey>
guess it depends what kext hacking you do!
hir0pro has quit [Quit: hir0pro]
nehsou^ has quit [Ping timeout: 480 seconds]
Gaspare has joined #asahi
hir0pro has joined #asahi
<marcan>
tsujp: one true yes, system no
<marcan>
those two used to be related but no longer are, which is an endless source of confusion
<marcan>
1TR is *any* recoveryOS booted by holding down the power button, which usually is the paired recoveryOS for the default boot volume
<marcan>
just updated the page
<marcan>
I wonder if we'll ever fix all references to that
<marcan>
tsujp: you can build a custom kernelcache (with kmutil) and install it with kmutil itself, or you can load it with m1n1 as a chainload on bare metal, or run it under the hypervisor
<marcan>
don't think anyone has written the code to do what kmutil does
<marcan>
live loading/replacing kexts, I have no idea how that goes. you definitely need to downgrade security to do that and disable kernel CTRR at least.
guillaume_g has quit []
<marcan>
rowang077[m]: what macOS does isn't necessarily optimal for perf/W; depending on the situation running on the P cores and finishing faster can save energy
<marcan>
Linux just lets you set process/thread affinity, just use `taskset`
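(The same pinning can also be done from inside a program; a hedged Python sketch, where the CPU numbers are only an illustration, so check which ones are the P cores on your machine:)

    import os

    # Restrict this process to CPUs 4-7 (assumed here to be the P cores).
    # Shell equivalent: taskset -c 4-7 <command>
    os.sched_setaffinity(0, {4, 5, 6, 7})
    print("now allowed on CPUs:", sorted(os.sched_getaffinity(0)))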
^GaveUp^ has joined #asahi
<j`ey>
gpudata is missing which means agx_1tri.py won't be runnable, but I guess her plan is to un-hardcode that stuff anyway
TheFirst has quit [Ping timeout: 480 seconds]
^GaveUp^ is now known as TheFirst
<marcan>
j`ey: I think you're supposed to get that by dumping a render from mesa on macos with that tooling
Ry_Darcy has joined #asahi
herbas has joined #asahi
<herbas>
How does reverse engineering with m1n1, like on the recent gpu streams, work? I have already seen many articles claiming that the triangle was rendered with a linux kernel GPU driver, which is apparently false. What does m1n1 run? How does it run a python interpreter for the prototype GPU driver, apparently without a linux kernel?
<j`ey>
the python runs on another machine, which sends commands via an RPC over usb
<j`ey>
which m1n1 then executes as commands, to setup memory etc
<j`ey>
(m1n1 runs on an m1, python runs on any other machine)
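(Roughly, a host-side script looks like the following; module and method names are recalled from the m1n1 proxyclient and not verified here, so treat them as approximate, and the address is a placeholder:)

    # Run from the m1n1 proxyclient tree, with M1N1DEVICE pointing at
    # the target's USB/serial gadget device.
    from m1n1.setup import *   # assumed helper: sets up the proxy connection as `p`

    # Each call is an RPC: python encodes a small command, sends it over
    # USB/serial, and m1n1 running on the M1 performs the actual access.
    val = p.read32(0x230000000)        # read a 32-bit register (placeholder address)
    p.write32(0x230000000, val | 1)    # write it back with bit 0 set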
herbas has quit [Quit: herbas]
<opticron>
ah, that makes sense, for some reason I was thinking m1n1 was running a super slim python backend
LinuxM1 has quit [Ping timeout: 480 seconds]
RevHelix has joined #asahi
Gaspare has quit [Quit: Gaspare]
Revy has quit [Ping timeout: 480 seconds]
LinuxM1 has joined #asahi
LinuxM1 has quit [Ping timeout: 480 seconds]
LinuxM1 has joined #asahi
LinuxM1 has quit [Read error: Connection reset by peer]
LinuxM1 has joined #asahi
c10l has joined #asahi
LinuxM1 has quit [Read error: Connection reset by peer]
<UnicornsOnLSD>
Note that I just skipped hashes/signatures, shouldn't be too big of an issue though
UnicornsOnLSD has left #asahi [#asahi]
james has joined #asahi
james is now known as UnicornsOnLSD
UnicornsOnLSD has quit []
yuyichao has joined #asahi
trouter has quit [Ping timeout: 480 seconds]
bisko has joined #asahi
trouter has joined #asahi
c10l has joined #asahi
trouter- has joined #asahi
trouter has quit [Ping timeout: 480 seconds]
vmeson has quit [Quit: Konversation terminated!]
vmeson has joined #asahi
<Skirmisher[m]>
I''
<Skirmisher[m]>
agh shoot
<Skirmisher[m]>
I've been wondering... since M1 family doesn't support aarch32 mode, would it still be possible to run some sort of aarch64ilp32 chroot/multilib (analogous to what the "x32" Linux ABI does) to aid the likes of FEX/box86 library thunks?
<Skirmisher[m]>
since people are talking about x86 emulation lately what with The First Triangle happening, I was reminded of that idea
<opticron>
that doesn't really do anything for you since while the M1 can run ILP32, it's really only cutting down the available memory space
<opticron>
it's still the same instructions and *MMU configuration
<opticron>
I don't know for certain that the kernel doesn't map memory differently in that mode, but I thought I read that it doesn't
<Skirmisher[m]>
true, but then is the bottleneck different between ia32->aarch64 and x86_64->aarch64? you need a different dynarec, but after that it seems like the main difference is different data/pointer sizes
<opticron>
from what I can tell, x86 assumes a page size of 4k though 2m or 1g are available on some systems. both will have the same problem unless the x86 native code is written for the other page sizes
<Skirmisher[m]>
that's true, but that's orthogonal to the instruction set and ABI, no?
<Skirmisher[m]>
page size is already something that will be solved at one level or another (people working on kernel patches, box64 works on 16k pages I think)
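(For reference, the page size a process actually runs under can be checked at runtime; a tiny Python sketch:)

    import resource

    # Prints 16384 on Asahi's 16K-page kernels, 4096 on typical x86 systems.
    print("page size:", resource.getpagesize())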