marcan changed the topic of #asahi to: Asahi Linux: porting Linux to Apple Silicon macs | Not ready for end users / self contained install yet. Soon. | General project discussion | GitHub: https://alx.sh/g | Wiki: https://alx.sh/w | Topics: #asahi-dev #asahi-re #asahi-gpu #asahi-stream #asahi-offtopic | Keep things on topic | Logs: https://alx.sh/l/asahi
<kov> rkjnsn, there is a patch for that already, using 4k pages
<kov> rkjnsn, the thing is there is a huge 16-25% performance hit
g3blv[m] has joined #asahi
<g3blv[m]> Will it be possible to boot Linux on iPads with the M1 processor once Asahi has been merged into mainline Linux?
<Dcow[m]1> the bootloader on iPads is locked
<g3blv[m]> OK and there is no known way of unlocking the bootloader?
chadmed has quit [Quit: Konversation terminated!]
<Dcow[m]1> nope
chadmed has joined #asahi
<milek7> "ioctl(KVM_CREATE_VM) failed: 22 Invalid argument"
<milek7> how can I tell why exactly it doesn't like it?
chadmed has quit [Remote host closed the connection]
chadmed has joined #asahi
hizonxx has joined #asahi
hizonxx has quit []
dsrt^ has joined #asahi
dsrt^ has quit [Ping timeout: 480 seconds]
dsrt^ has joined #asahi
yuyichao has joined #asahi
<opticron> hmm....now how to get rid of the old chunks I installed
riker77_ has joined #asahi
riker77 has quit [Ping timeout: 480 seconds]
riker77_ is now known as riker77
Emantor has quit [Quit: ZNC - http://znc.in]
Emantor has joined #asahi
riker77_ has joined #asahi
riker77 has quit [Ping timeout: 480 seconds]
riker77_ is now known as riker77
<opticron> welp, got that figured out, apparently you do things to partitions, but deleting one is eraseVolume
<rkjnsn> kov, are you referring to sven's patch? My understanding is that is to support accessing hardware behind a 16k IOMMU while running a 4k kernel on the CPU's 4k page mode. The bootlin patch I linked is to allow running a 16k kernel on a CPU without hardware 16k page support by backing each kernel page with 4 4k hardware pages, which is what I understood agraf to be asking about.
<rkjnsn> (I gather the idea would then be to get distros to standardize on 16k-page kernels to avoid the 4k performance hit, and on CPUs without hardware 16k support, the kernel would fall back to using 4 4k hardware kernel pages per kernel page? That wouldn't help with things that need 4k pages like FEX though.)
PhilippvK has joined #asahi
phiologe has quit [Ping timeout: 480 seconds]
PaterTemporalis has quit [Ping timeout: 480 seconds]
kov has quit [Quit: Coyote finally caught me]
marvin24_ has joined #asahi
zamadatix has joined #asahi
zamadatix has quit []
linearcannon has quit [Quit: Textual IRC Client: www.textualapp.com]
marvin24 has quit [Ping timeout: 480 seconds]
<sorear> "portable 16k-page kernels" wouldn't work, because the kernel needs to know the page table fanout at compile time and that's different between native and emulated 16k pages
<sorear> running an emulated-16k kernel on M1 could provide interesting information (is the observed 16k speedup due to TLB/cache issues alone, or do kernel algorithms and fewer page faults also have a large impact?)
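For reference, a quick way to check which page size a running kernel actually uses, and which granule it was built for, is something like the following; the config check assumes CONFIG_IKCONFIG_PROC is enabled, otherwise grep the config file shipped for the running kernel:

    getconf PAGESIZE                                                      # runtime page size in bytes: 4096, 16384 or 65536
    zcat /proc/config.gz | grep -E 'CONFIG_ARM64_(4K|16K|64K)_PAGES='     # page-size option the kernel was built with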
nepeat has quit [Remote host closed the connection]
nepeat has joined #asahi
ave3 has quit []
bpye has quit [Quit: Ping timeout (120 seconds)]
linuxgemini has quit [Quit: Ping timeout (120 seconds)]
kode54 has quit [Quit: Ping timeout (120 seconds)]
KDDLB has quit [Quit: Ping timeout (120 seconds)]
kode54 has joined #asahi
bpye has joined #asahi
eric_engestrom has quit [Read error: Connection reset by peer]
nkaretnikov has quit [Read error: Connection reset by peer]
linuxgemini has joined #asahi
robher has quit [Remote host closed the connection]
sorear has quit [Remote host closed the connection]
WindowPain has quit [Remote host closed the connection]
vx has quit [Quit: G-line: User has been permanently banned from this network.]
Hotswap has quit [Remote host closed the connection]
skipwich has quit [Quit: DISCONNECT]
sorear has joined #asahi
Hotswap has joined #asahi
tardyp has quit [Read error: Connection reset by peer]
eric_engestrom has joined #asahi
vx has joined #asahi
KDDLB has joined #asahi
skipwich has joined #asahi
tardyp has joined #asahi
nkaretnikov has joined #asahi
erincandescent has quit [Quit: No Ping reply in 180 seconds.]
robher has joined #asahi
WindowPain has joined #asahi
ave3 has joined #asahi
erincandescent has joined #asahi
KDDLB has quit []
WindowPain has quit []
KDDLB has joined #asahi
WindowPain has joined #asahi
darkapex1 is now known as darkapex
<arnd> rkjnsn: the only thing that old patch does is to get around the 32-bit pgoff_t restrictions to allow larger disk partitions on arm32 NAS systems
<arnd> We never merged that since the better fix would be to use a 64-bit pgoff_t like Annapurna Labs did in their NAS kernels
<arnd> The larger pages are the workaround that Marvell originally put into their kernels
<rkjnsn> I know that was the motivation for it, but I thought it increased the page size across the board, not just for storage? I can't say I understand this stuff all that well.
<rkjnsn> 64-bit pgoff_t would indeed be super useful for my 32-bit NAS. I seem to recall the problem there is that it would require changing XArray to be 64-bit on 32-bit platforms, which would be a much wider-reaching change?
<rkjnsn> Unless only the bottom 32 bits of the pgoff_t were used for indexing the XArray, with chaining or something used for the rare case of pages exactly 4GiB apart needing to be cached at the same time? As I say, I'm not really familiar with this stuff.
dsrt^ has quit [Remote host closed the connection]
<arnd> I thought we had a "probably working" patch for 64-bit off_t at the time, but gclement's customer preferred the 64k page approach because it kept the performance characteristics of their older vendor kernel
<arnd> Maybe that was before xarray though
<rkjnsn> I know there's https://sourceforge.net/projects/dsgpl/files/Synology%20NAS%20GPL%20Source/25426branch/alpine-source/linux-3.10.x-bsp.txz/download (heh, looks like you're the one who dug it up last year, when I was struggling with btrfs on my NAS), but it sounded like it was pretty much an unusable mess of vendor code with lots of irrelevant whitespace, etc. changes.
<marcan> the question is how hard would it be to pull a macOS and actually have mixed 4K/16K mode userspace :)
<marcan> (pipe dream, but it'd solve ~all problems of course)
<marcan> IIRC macOS just stuck all the PTE management behind an ops structure so they can have two versions and choose at process creation time/exec
<marcan> but then of course you're still going to have "interesting" things going on at the userspace<->kernel interface, no idea how macOS handles that
gladiac has joined #asahi
<bluetail[m]> is the usb 3.x port of the mac mini as fast as the lightning port? I think they are sitting on the same bus, right?
<jannau> no, the thunderbolt ports (USB-C connectors) are directly on the SoC and the USB-A ports are behind a PCIe USB controller. The USB-C ports are currently running at USB 2.0 speeds, so the USB-A ports are faster
<bluetail[m]> jannau: does that only apply to Asahi Linux or is that generally even in default state like you say?
<jannau> I don't understand the question. This is not true on macOS, as macOS has proper USB 4 / Thunderbolt 3 drivers for the USB-C ports
<marcan> that they aren't the same applies to everything; that the TB ports only run at 2.0 applies to Asahi
<marcan> hopefully once sven is done with the PHY drivers Asahi will have better type C support than macOS though ;)
<bluetail[m]> Thanks. I was just wondering why I'm getting "only" 320 megabytes per second on a USB A Port right now. Cables are thick enough...
<marcan> 320 mega *bytes*?
<marcan> that sounds about right for USB 3.0
<bluetail[m]> amazon says my hub can do usb 3.2 https://www.amazon.de/gp/product/B075BV684S/
<bluetail[m]> lindy says its usb 3.0
<bluetail[m]> yea, it sounds right, but I should have ~ 200 MB/s more
<marcan> do you get 200 MB/s more in macOS?
<bluetail[m]> No, I am not comparing to Asahi. I'm getting 320MB/s on macOS.
<bluetail[m]> But the capability is up to 540MB/s
<marcan> that depends on a lot of factors
<marcan> USB 3.0 performance can vary widely between controllers
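A hedged way to see what a device actually negotiated, and which controller it hangs off, is the usbutils/sysfs view below; the sysfs glob is just an example path:

    lsusb -t                                        # topology: which controller/hub each device sits on, plus negotiated speed
    cat /sys/bus/usb/devices/*/speed 2>/dev/null    # raw negotiated speeds in Mbit/s (480 = USB 2.0, 5000 = USB 3.0)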
<bluetail[m]> I see. That's interesting.
<marcan> same chip, marketing says 380MB/s, so that sounds about right
tanty has quit [Remote host closed the connection]
tanty has joined #asahi
tanty has quit []
tanty has joined #asahi
apg has joined #asahi
amarioguy has joined #asahi
ChaosPrincess has joined #asahi
kajiryoji has quit []
PaterTemporalis has joined #asahi
chadmed has quit [Read error: Connection reset by peer]
chadmed has joined #asahi
<arnd> marcan: mixed page table formats per task are much more realistic, the two major problems I'd expect we'd have to solve for this are
<arnd> a) change linux/arch/arm64 to actually use separate page tables for kernel and user space. I forget how the separation is done today, but as the kernel normally has access to user pages, they are at least sometimes visible together
<arnd> b) rework all the accessors for page tables to have separate kernel vs user logic. Some architectures (at least s390, possibly others) already do per-task page table levels, so a 32-bit task can use two levels, while a 64-bit task can use three, four or five levels depending on how much it uses
<marcan> arnd: there are two top-level page tables, one for the kernel and one for userspace, and in principle you'd set only the user/lower one to 4K/16K and keep the kernel one at 16K always (that's how macOS does it)
<marcan> (the CPU does support mixing 4K/16K this way, simultaneously)
<marcan> however, the weirdness will happen when the kernel has to access userspace pages, yes, since a lot of things will probably assume a 1:1 mapping between page sizes for kernel/userspace
<marcan> and there's horrible corner cases like what happens when a 4K process tries to mmap memory that a 16K process has mapped, or worse, vice versa
<arnd> and then there is the page cache itself, since all memory is managed in units of kernel pages, having 16KB kernel page tables would likely mean that user space can only map memory in that unit as well, even if the TLB uses smaller pages
<marcan> yeah, but that defeats the entire purpose
<marcan> the point of 4K pages is so that mmap() on a 4K granularity actually works
<arnd> if the kernel uses 4KB pages, it would be a little easier, user space could be treated as using 16KB "huge" pages with the 16KB page table format, while kernel and other tasks use 4K pages or the normal huge pages
<marcan> OTOH 4K processes mapping chunks of 16K page cache pages isn't necessarily a problem
<marcan> yeah, that would be easier
<marcan> with memory folios
<marcan> but then we still run into the IOMMU issues
<marcan> in principle I want to say 4K processes working as a layer on top of a native-16K kernel should *work* in that you can just treat each 16K page as 4 4K pages that can be mapped independently by 4K processes, though you'd need accounting for that kind of splitting
ciggi has joined #asahi
<marcan> it'd complicate anonymous memory allocation though, how do you keep track of how much of a 16K page is actually in use and hand out the rest?
<arnd> right, that doesn't sound realistic either. What is the actual requirement for processes that want 4KB pages?
<arnd> are there applications that work on 4K and 64K but not on 16K pages?
<marcan> no, the 4K story is basically for x86 compat
<marcan> mmap() needs to work with 4K alignment, and 4K pages need to have independent protection status
<marcan> that's most of it
<arnd> ah, because x86 applications are built with 4K section alignment in binutils?
<marcan> yeah, and because x86 applications can and do assume they can just mmap stuff at random 4K aligned vaddrs
<marcan> also emulators for other 4K native systems of course
<arnd> we had the same problem with 32-bit arm applications that initially didn't work on arm64 kernels with 64K pages
<marcan> we still do, chromium/jemalloc/WebKit/Emacs and I think a few others were broken on asahi, some of those have landed fixes, some haven't
<marcan> the emacs problem was assuming the build-time page size == the run-time page size
<arnd> that is mostly fixed these days, but it doesn't help for existing binaries, and there is much less incentive for x86 applications to care about a random other architecture emulating theirs than there was for arm32 vs arm64
<marcan> and yeah, for x86 the target is stuff like games
<marcan> so no chance of recompiling those
<marcan> if you're running x86 code in the first place it's because it's not open source
<marcan> otherwise you'd have built it for arm64
<arnd> IIRC there are also still assumptions about page size in Android
<marcan> yes, that's just the building for 4K alignment problem etc
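One rough way to spot that 4K alignment assumption in an existing binary is its ELF LOAD segment alignment; this is just a sketch and the path is a placeholder. 0x1000 means the binary was linked for 4K pages only, while arm64 distro binaries are usually linked with 0x10000 so they also run on 16K/64K kernels:

    readelf -lW /path/to/binary | grep LOAD    # check the Align column: 0x1000 vs 0x10000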
<mps> I got a mail answer from the f2fs developers about the 16K page problem, one of them is interested in making it work with 16K pages but that will take time
<mps> quote from Chao Yu `I'm interest in supporting 16KB page in f2fs, but it looks this feature needs a big change. I'd like to investigate how to handle 4KB block read/write request through 16KB page cache first.`
kov has joined #asahi
<kov> rkjnsn, oh interesting
OwOwA has joined #asahi
<OwOwA> Hello!
<kov> but yeah I think that is not a big problem for distros, there is already a history of providing various kernel packages and installing the most appropriate one (686 686-pae amd64 etc)
OwOwA has quit [Quit: WeeChat 3.4]
OwOwA has joined #asahi
<OwOwA> This might be a stupid question, but would it make sense for a package manager to include the special instructions from x86 used to speed up Rosetta 2 when compiling packages, or is it as stupid as it might sound?
<sven> that wouldn’t make any sense, no
<sven> that special flag is only helpful if you are emulating x86 code
<kode54> what is this flag and what does it get passed to?
<sven> it enables TSO ordering and it’s a bit in a sysreg
<sven> -ordering actually :D
<kode54> yeah, TSO doesn't benefit native ARM code
<kode54> it only benefits recompilers for architectures that expect strict memory ordering, so they can just recompile the code directly without having to make sure things are correctly synchronized
<kode54> native ARM code doesn't need this step
<kode54> or rather, the compiler already deals with memory ordering behavior
<mps> povik: I tried to merge your for-marcan-merge branch but got conflicts https://tpaste.us/RxrY
<OwOwA> That makes a lot of sense, thanks!
amarioguy has quit [Remote host closed the connection]
<marcan> there's some other stuff about SSE flags
<marcan> also not relevant for anything but x86 emu
ciggi_ has joined #asahi
ciggi has quit [Ping timeout: 480 seconds]
<marcan> povik: j313 doesn't have the speakers disabled?
<marcan> I'll add that in for safety
<povik> marcan: that's an oversight if not disabled
<povik> you are merging it?
<povik> i hoped a few people would test it before you do
<povik> (as i said, don't have the hardware by me)
<povik> mps: for-marcan-merge is where i have done the merge for you
<marcan> well, I want to test it :)
<povik> good :)
<mps> povik: but why can't I merge it then
<povik> you can't merge it with 'asahi', because 'asahi' has the old audio commits
<mps> povik: i.o.w. how can I merge it on top of the asahi branch
<povik> you can't easily now
<povik> you should just pull for-marcan-merged and build that
<povik> or wait until it gets into new 'asahi'
<mps> povik: ah, so I have to git pull --depth=1 it in separate dir
<povik> that would be fine
OwOwA has quit [Quit: WeeChat 3.4]
<mps> povik: ok, thanks
<mps> oh wait, is it for-marcan-merged or for-marcan-merge?
<povik> ah, -merge probably
<mps> yes, looks like, thanks
<marcan> 0 [a ]: snd-soc-apple-m - MacBook Pro J314/6 integrated a
<marcan> looks like we should change the name, there are some length issues :-)
<povik> sucks
OwOwA has joined #asahi
<marcan> povik: so I set everything to 50 and speaker-test only gives me 5 speakers, and it sounds like the left tweeter is quieter than the right one?
<povik> weird
OwOwA has quit []
<povik> does it give you 5 speakers or 6 and one is dead?
<marcan> 6 and one is dead
<marcan> 0 is dead
<marcan> ("side left" according to speaker-test)
<povik> left tweeter being quieter than right is weird, doesn't happen for me
<povik> i suspect we need to do some more configuration of the speaker amp chips
<chadmed> speaker-test -c 6 -D hw:0,0 -t wav
<povik> in fields undocumented in tas2764
OwOwA has joined #asahi
<chadmed> the speakers are mapped weirdly so you need to tell alsa explicitly to poke the hw device and that it has 6 real channels
<marcan> yeah, I do
<chadmed> yeah one of the left woofers will randomly drop out sometimes too heh
<chadmed> povik's been working on it
<marcan> yup, that's the one I'm missing
<marcan> it's totally dead for me
<chadmed> it should come back if you reboot
<marcan> but yeah, the left tweeter does seem subtly quieter
<marcan> (unless my ears are hosed)
<marcan> let me check with a mic
<povik> the question is what's so special about left woofer
<povik> if you swap left/right in DT, it's still the actual left woofer that drops out
<povik> the only idea i had is that some voltages or temperature are different there
<marcan> "uninitialized memory", right
<chadmed> crazy apple-like silliness, could it be why they have ISENSE/VSENSE enabled? i.e. detect when the woofer goes dark and then kick the amp into reset?
<chadmed> seems utterly stupid that they'd let that happen in production machines though
<povik> chadmed: nah, i think it will be simpler
<povik> when i get to it i will try applying the exact same configuration apple does, see if that fixes things
<povik> I/V sense is enabled everywhere, even on the older models
<marcan> oh yeah, the right tweeter is like 9 dB louder according to this
<povik> uh
<chadmed> huh, mine are almost exactly the same volume
<marcan> pretty obvious in the video too
<marcan> the two matched woofers that work are within 1dB though
<chadmed> thats rough yeesh
<marcan> ah, figured it out
<povik> listening
<marcan> it doesn't read back the amp gain setting on init, but does reset the codec
<marcan> I had touched the left tweeter
<marcan> that ends up being lower than reset state
<marcan> so they both said 11dB, but for right, that was a lie
<povik> right
<povik> i knew there's some issues there, didn't think that was it
<povik> the defaults for tas2764 don't apply here
<marcan> also, I think the default is louder than 21dB
<marcan> so it goes even higher
<chadmed> btw povik for the asoc driver name for things, i suggest just "Apple XXXX", where XXXX is the model of the machine
<marcan> ah, it seems to be doing it wrong
OwOwA has quit [Ping timeout: 480 seconds]
<marcan> the value is supposed to be <<1 according to the datasheet
<chadmed> we can give each machine a more descriptive and user friendly name via config files
<marcan> the default *is* 21dB
<marcan> but what the driver says is 21 dB is a lie
<marcan> sounds like a bug
<j`ey> literally
<marcan> also the tas2764 default is correct
<povik> ah
<povik> doesn't surprise me in the least, had to fix a couple of other things in the driver
OwOwA has joined #asahi
<povik> hey NCO got into clk-next!
<povik> shame i didn't send the module building fix already
<povik> anyway, will be easy enough to send on its own
<mps> povik: I got this https://tpaste.us/9byl when compiling your for-marcan-merge branch
<mps> so I will wait till marcan merges this and then test
<povik> mps: unrelated
<j`ey> povik: congrats!
<povik> got the surprise email after all :)
<mps> povik: yes, it is unrelated but will not try to find cause for now
<povik> you don't need to, that's just a warning
<povik> if it stops the build disable WERROR
<mps> povik: I got also this `work/asahi-linux/src/for-marcan-merge/sound/soc/apple/macaudio.c:59: undefined reference to `asoc_simple_parse_routing'`
<povik> yeah, that's more severe
<povik> hm, how can you get that
<povik> marcan would have shouted if he ran into the same :-p
<povik> also it builds for me
<povik> missing dependency, of course!
<povik> mps: thanks for reporting it, enable SIMPLE_CARD or similarly named option
<mps> povik: excerpt from build log https://tpaste.us/nVZ4
<mps> I have SIMPLE_CARD enabled I think
<mps> SND_SIMPLE_CARD=m
<povik> ah, try =y, seems like some linkage issue
<mps> ok
<mps> povik: heh, it passes now
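For reference, a sketch of flipping that option from the command line with the kernel's scripts/config helper, run from the top of the kernel tree; option name as discussed above:

    ./scripts/config --enable CONFIG_SND_SIMPLE_CARD    # build simple-card in rather than as a module
    make olddefconfig                                    # let kconfig resolve any dependent options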
<mps> povik: do I have to update dtbs in m1n1 to test this kernel
<mps> I think yes
<povik> you need to. remove the disable line if you want to experiment with speakers
<povik> (knowing the risk)
<povik> although i know you did experiment already :p
<mps> povik: where is this line?
<povik> in the dts file for your model there will be a status="disabled" line for the speakers
<mps> aha
<mps> below this comment `* DANGER ZONE: You can blow your speakers!`
<mps> let's test; if I don't come back here it means I put the mbp in the melting pot ;)
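A hedged sketch of the dtb update dance, roughly mirroring what the m1n1/u-boot bootstrap scripts do; the paths (and whether the payload gets gzipped) depend on how your trees and scripts are laid out:

    make -j$(nproc) dtbs                            # in the kernel tree: rebuild the apple dtbs
    cat m1n1/build/m1n1.bin \
        linux/arch/arm64/boot/dts/apple/*.dtb \
        u-boot/u-boot-nodtb.bin > u-boot.bin        # m1n1 + all dtbs + u-boot payload
    # then copy the result to wherever your ESP expects boot.bin / u-boot.bin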
<mps> povik: no luck, I got blank console with this kernel
<povik> wouldn't know why
<mps> ok, np
<mps> today and tomorrow I will try to upgrade macos to 12.1, will be easier to test different changes and features
<mps> (sunny saturday is outside, would be better to go out and walk)
apg has quit [Ping timeout: 480 seconds]
apg has joined #asahi
eroux has joined #asahi
OwOwA has quit [Ping timeout: 480 seconds]
memoryleak has quit [Ping timeout: 480 seconds]
<Glanzmann> c
<Dcow[m]1> mps: 12.3 release scheduled for next week
<mps> Dcow[m]1: thanks for the info, but I will upgrade to 12.1 or 12.2 (if it exists), not risking a new and untested one
nitchi[m] has joined #asahi
nitchi[m] has left #asahi [#asahi]
<mps> and it looks like I will not upgrade soon, I have problems with booting from usb
<jannau> supported firmware is 12.1 for now. not a problem if you use the installer to install a stub partition
memoryleak has joined #asahi
<mps> I will try a 'from scratch' install, i.e. reset macos to default and then run the installer up to the step before it installs a distro
<jannau> but we will have to update to 12.3 for mac studio
<marcan> I tried 12.3 system firmware and nothing went wrong fwiw
<mps> marcan: you are tempting me ;)
eroux has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<jannau> I wouldn't expect problems from dcp either given that there were no breaking changes from 12.0 beta to 12.2
OwOwA has joined #asahi
kov_ has joined #asahi
kov_ has quit []
OwOwA has quit [Ping timeout: 480 seconds]
clrwf0x80 has quit [Quit: Going offline, see ya! (www.adiirc.com)]
eroux has joined #asahi
eroux has quit []
smartmobili has joined #asahi
smartmobili has left #asahi [#asahi]
<zv> do I need to consider anything before I do 'make deb-pkg' in debian and replace the kernel (configured as 16K pagesize) remotely? nothing else to change?
ciggi has joined #asahi
ciggi_ has quit [Ping timeout: 480 seconds]
OwOwA has joined #asahi
<mps> zv: I think it should work, though it's been a long time since I used deb-pkg
<mps> zv: if you need 4K page size you need to apply patch for it
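A minimal sketch of the deb-pkg round trip zv describes, assuming a configured asahi kernel tree with the usual Debian build deps; the 4K case additionally needs the patch mps mentions:

    grep 'ARM64_.*_PAGES=y' .config        # confirm the page size you expect (16K here)
    make -j$(nproc) bindeb-pkg             # builds ../linux-image-*.deb
    # copy the package to the target machine, then there:
    dpkg -i linux-image-*.deb              # and reboot; refresh u-boot.bin/dtbs if the tree changed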
apg has quit [Ping timeout: 480 seconds]
OwOwA has quit [Ping timeout: 480 seconds]
creechy has joined #asahi
<j`ey> I found the two places that are causing the 'buggy firmware' warning from the kernel (inside U-boot)
nametable[m] has joined #asahi
<nametable[m]> Hello y'all, i am interested in helping with Asahi. Is the best hardware to get specifically the M1 Mac Mini?
<nametable[m]> I do not have any Mac as of yet but I am an active Linux user and computer science student.
<nametable[m]> Also, does it matter what matrix client i use. right now im on Syphon on mobile
<j`ey> I dont think any hardware is the 'best', the project aims to have all models working well!
<nametable[m]> So if I wanted to work on the go, maybe even a M1 Macbook Air would be good?
<tpw_rules> imho mac mini is very slightly worse because some monitors are incompatible with it at this time. with the laptops you get a monitor that's guaranteed to work
<nametable[m]> Is this the case even with a HDMI to VGA adapter?
<tpw_rules> probably
<nametable[m]> Along with a Mac, is there a specific device needed for Serial debugging? if i understand right, the serial port can be accessed through one of the usb c ports
<tpw_rules> a usb A to C cable
<tpw_rules> well, that's with m1n1's hypervisor mode. but it's pretty good
<nametable[m]> So is there no need for the more low level serial interface?
<tpw_rules> not really
<nametable[m]> Any suggestions on where to find the cheapest mac possible? I know i saw an m1 air on ebay for 575 a few days ago, and minis seem to be going for 500
<j`ey> facebook? craigslist type things?
<nametable[m]> That's true I haven't checked Facebook marketplace yet
___nick___ has joined #asahi
___nick___ has quit []
___nick___ has joined #asahi
creechy has quit [Ping timeout: 480 seconds]
ciggi has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
<LuigyLeon[m]> tpw_rules: nice! just ran into your nixos configs after I had already gone with Debian to test. Will try it out later today or tmw :)
OwOwA has joined #asahi
memoryleak_ has joined #asahi
memoryleak has quit [Ping timeout: 480 seconds]
<tpw_rules> LuigyLeon[m]: cool! i'm excited to get more people to try nixos. i'm going to release prebuilt ISOs shortly
creechy has joined #asahi
OwOwA has quit [Ping timeout: 480 seconds]
<tpw_rules> how does the installer choose which device tree to install? is it possible to have multiple ones concatenated and m1n1 picks the right one?
OwOwA has joined #asahi
<matthewayers[m]> What command do I run to get the latest version of asahi-next?
<tpw_rules> yes
<j`ey> tpw_rules: the installer doesnt pick one
<j`ey> Im assuming that yes was to your own question?
<tpw_rules> j`ey: yeah, i just found how the installer m1n1 bin was built
<tpw_rules> it just concatenates all the device trees onto it, which means m1n1 must know how to find the right one for that specific machine
creechy has quit [Ping timeout: 480 seconds]
<j`ey> matthewayers[m]: if youre running the reference distro it will be via pacman -Syu
<matthewayers[m]> I'm not completely sure if I have the reference distro or not
OwOwA has quit [Ping timeout: 480 seconds]
<tpw_rules> no, that's Glanzmann's
<tpw_rules> the reference distro is installing ALARM using the menu option in the asahi linux installer
<matthewayers[m]> Yeah, I used Glanzmann's installer
<tpw_rules> i don't know how to update it though
OwOwA has joined #asahi
<matthewayers[m]> Yeah that's the only unknown I have as of right now
memoryleak__ has joined #asahi
OwOwA has quit [Ping timeout: 480 seconds]
memoryleak_ has quit [Ping timeout: 480 seconds]
memoryleak__ has quit [Ping timeout: 480 seconds]
balrog has quit [Quit: Bye]
balrog has joined #asahi
<Glanzmann> matthewayers[m]: You want to update the kernel?
<Glanzmann> matthewayers[m]: Do you already have the version with the chainloading rust.
<Glanzmann> zv: Should I build a tested kernel for the mini with 16k?
<Glanzmann> zv: When you use the current asahi branch the dtb has changed, so you need to update the 'boot.bin' along with the kernel. But I'll probably just build a kernel, test it locally on my mini and send you the instructions.
<Glanzmann> matthewayers[m]: To update the kernel to the newest asahi branch you need to run as root: curl -sL tg.st/u/ksh | bash
gabuscus has quit [Ping timeout: 480 seconds]
<mps> started upgrade macos to 12.2.1
<Glanzmann> zv: Execute this as root: https://pbot.rmdir.de/yjN2_z1Bir9I1t52fySgUg and reboot for 16k kernel with the curren asahi tree and device tree. I tested this on my mini: https://pbot.rmdir.de/u/fao2DLLMIwrXNOkqIblVTg
<Glanzmann> zv: If you want to build for yourself, I used this script to build the deb and the u-boot.bin: https://git.zerfleddert.de/cgi-bin/gitweb.cgi/m1-debian/blob/68d9f4415c93e076e1dcc824ba0acea077ac57ae:/bootstrap.sh
<j`ey> Glanzmann: if you have time/etc can you test https://github.com/AsahiLinux/u-boot/commit/75ad0c233c8a0cb5d29b96aa8633bbc1e7b9b29d it fixed an issue I was having, but someone using grub+linux should test it
<Glanzmann> j`ey: Testing right now on air and mini.
<zv> Glanzmann: thanks; so the u-boot update is a one-time thing for now?
<j`ey> Glanzmann: thanks, just one of them should be fine!
<Glanzmann> zv: Yes, because the kernel you have is different from the asahi tree.
<Glanzmann> zv: So with the kernel update you do two things: you switch from 4k to 16k, but at the same time you move to the latest rebase from marcan. As a result the device tree changes.
<Glanzmann> zv: Since asahi is actively developed, expect a device tree change with every new kernel. Most of them will be backward and forward compatible, but to be on the safe side you really want to update the device tree (which is in u-boot.bin) along with the kernel.
<tohatsu[m]> I'm unable to boot from USB. Is it because my usb is Fat32?
<Glanzmann> j`ey: air fine, mini, too: https://pbot.rmdir.de/xLENoHZx_W7AaYofeoIpkA
<tohatsu[m]> Mac OS can't format to vfat
<j`ey> Glanzmann: great thanks!
<Glanzmann> tohatsu[m]: What excatly did you do?
<tohatsu[m]> Glanzmann: with usb?
<tpw_rules> fat32 is vfat
<tpw_rules> but u-boot will not read the fat partition unless its GUID is the ESP type
<mps> tohatsu[m]: looks like your extlinux.conf is correct
<Glanzmann> tohatsu[m]: No, more context.
<mps> u-boot reads even ext4 partition
<Glanzmann> tohatsu[m]: So what did you do and what is your goal?
espo has quit [Quit: Leaving]
<tohatsu[m]> Glanzmann: the goal is to boot Debian from USB. And make it not persistent if possible
<mps> tohatsu[m]: I can post a script which creates a bootable usb disk when I finish the macos upgrade
<tohatsu[m]> Glanzmann: i followed this tutorial on Livesystem section https://github.com/AsahiLinux/docs/wiki/Debian
<Glanzmann> tohatsu[m]: Where did you create the usb stick on macos or Linux?
<mps> (the macos upgrade takes more time than I need to build an alpine base)
<tohatsu[m]> Glanzmann: mac os with disk utility
<Glanzmann> Okay, so I'm trying to reproduce your issue, give me 5 minutes
<j`ey> tpw_rules: try, in u-boot: ls usb 0:1 /
<tpw_rules> j`ey: i guess to clarify u-boot's EFI subsystem will not read non-ESP FAT32 partitions
<j`ey> tpw_rules: I meant that at tohatsu[m] anyway :P
<tohatsu[m]> j`ey: just a sec, need to reboot
<mps> j`ey: I tested boot from usb and it works with ext4
<mps> at least worked when I tested it
<mps> j`ey: iirc I posted script to you
<tohatsu[m]> j`ey: 0 file(s), 0 dir(s)
<tpw_rules> hm i'm trying to set up wifi and getting a lot of "timeout on response for query command" from the brcmfmac driver. is this what other people have been complaining about
<j`ey> mps: Im just trying to help tohatsu[m] :)
<j`ey> tpw_rules: no that sounds new
<mps> j`ey: ok :)
<tpw_rules> hm, i reloaded the driver and now it seems to work okay. weird
<tohatsu[m]> j`ey: bad partition specification usb 0:1/ Couldn't find partition on usb 0:1/
balrog has quit [Quit: Bye]
<tohatsu[m]> i will try ext4
<j`ey> tohatsu[m]: you missed a space after the 1
<tohatsu[m]> 0 file(s), 0 dir(s)
<Glanzmann> tohatsu[m]: I created the livestick like that: in the graphical diskutil from macos, right-click, say erase and then set it like this: https://tg.st/u/Screenshot_2022-03-12_at_19.06.13.png ; on the terminal I did: https://pbot.rmdir.de/4faY6zui2bD4yLYpJESgRA
<j`ey> tohatsu[m]: can you read it on another machine / from recovery mode?
<Glanzmann> tohatsu[m]: My assumption is that you accidentally extracted the tar archive not onto the usb stick but somewhere else. I also think that the instructions in the wiki are bad, I'll improve them right now.
<Glanzmann> tohatsu[m]: Let me test the usb stick
mavericks has quit [Quit: The Lounge - https://thelounge.chat]
ciggi has joined #asahi
<tohatsu[m]> Glanzmann: ah, i set GUID partition table
mavericks has joined #asahi
<tohatsu[m]> Glanzmann: after untarring, the usb contained 2 files and 1 dir
balrog has joined #asahi
<Glanzmann> tohatsu[m]: So with the instructions I sent you above it works, I just tested it on the air. Can you try to reproduce? I'll update the wiki article in the meantime.
<Glanzmann> Usb layout should be: https://pbot.rmdir.de/nrtpKb14NLznzBgexbjdrg
creechy has joined #asahi
<Glanzmann> tohatsu[m]: I improved the instructions in the wiki: https://github.com/AsahiLinux/docs/wiki/Debian#livesystem
<tohatsu[m]> Glanzmann: the thing is now i don't have a third option to choose Master Boot Record
<tohatsu[m]> tried several times
<Glanzmann> tohatsu[m]: That is really strange.
<Glanzmann> tohatsu[m]: You're up for a quick zoom session?
<mps> Glanzmann: you told me that I can use `diskutil apfs deleteContainer`, use the asahi installer to install m1n1 and then do the usual dance.
<mps> uhm
<Glanzmann> Yep.
<tohatsu[m]> Glanzmann: need to install something?
<Glanzmann> tohatsu[m]: Probably not.
<mps> Glanzmann: how do I know which container to delete
<mps> or I could use macos gui tool for this
<Glanzmann> mps: The one with the 80 GB.
<Glanzmann> mps: Paste me the output of diskutil list again.
<mps> Glanzmann: I can't, I don't have a tool to paste with on macos
<Glanzmann> tohatsu[m]: Send you link via msg.
<Glanzmann> mps: tg.st/p
<Glanzmann> Select in the terminal, press command + c, then go to the webpage, press command + v and click on paste.
<Glanzmann> Tell me when pasted.
<mps> I see with 'diskutil list' /dev/disk4 is 80GB
<Glanzmann> mps: That is the wrong one.
<Glanzmann> disk0sX
<mps> disk0s3 is 80GB
<Glanzmann> That's it.
<Glanzmann> tohatsu[m]: I'm in the meeting, can you join?
<mps> ok, deleted
<tohatsu[m]> Glanzmann: I pasted to rmdir.de
<tohatsu[m]> i'm in browser element.io
<Glanzmann> tohatsu[m]: Can you join the zoom?
<Glanzmann> tohatsu[m]: So the issue is that you have a gpt partition table on the usb stick, but you need a msdos partition table.
<Glanzmann> for the others: pbot.rmdir.de/uWx2GBd8OoZjaxPz1OHDig
<tohatsu[m]> Glanzmann: I don't have zoom. You want to debug for wiki? I can just format it on windows PC
<Glanzmann> tohatsu[m]: You can just click on the link and it should allow you to join the session in the browser.
<Glanzmann> But if you don't want that, then yes, do it on windows or another Linux.
<mps> Glanzmann: I got to the point where it asks 'f: Install an OS into free space'. Do I have to quit if I don't want to install the asahi distro, or continue with 'f' if I want macos on a stub partition?
<Glanzmann> tohatsu[m]: Also you have two partitions on your usb stick, but you need one partition and the data needs to be on the first partition.
<tpw_rules> mps: hit `f` and select the free space, selecting what you want to install is another option
<Glanzmann> press f
<Glanzmann> It will ask you later what you want to install.
<mps> Glanzmann: I see
<tohatsu[m]> Glanzmann: send link
<Glanzmann> When it asks you you choose the last option 3.
<mps> I will select 2: UEFI
<Glanzmann> tohatsu[m]: It is still the same link.
<mps> Glanzmann: it is fine and intuitive, thanks
<sarah[m]> I see so many people explaining stuff in text. Hopefully someone can someday just make a video tutorial that we can follow.
<j`ey> sarah[m]: Glanzmann has
<j`ey> sarah[m]: you can find them https://github.com/AsahiLinux/docs/wiki/Debian
<Glanzmann> tohatsu[m]: Use this script to wipe the installer: tg.st/u/wipe-linux.sh
<Glanzmann> So the problem for tohatsu[m] was he had an esp and a msdos partition on the usb stick.
<Glanzmann> As soon as we deleted the partition he could change the partitioning.
<Glanzmann> from gpt to msdos. But the problem was probably that he had two partitions and the files were on the second.
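For the record, a hedged sketch of preparing such a stick from Linux instead of Disk Utility; /dev/sdX is a placeholder (double-check it before writing) and live-stick.tar stands in for whatever archive is being extracted:

    parted -s /dev/sdX mklabel msdos                    # MBR/msdos table, not gpt
    parted -s /dev/sdX mkpart primary fat32 1MiB 100%
    mkfs.vfat /dev/sdX1
    mount /dev/sdX1 /mnt
    tar -C /mnt -xf live-stick.tar                      # files must end up on the *first* partition
    umount /mnt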
<mps> so, in 12.1 on stub partition we can't use macho m1n1?
vmeson has quit [Quit: Konversation terminated!]
vmeson has joined #asahi
<tohatsu[m]> now i can't start wifi
<matthewayers[m]> Was povik the one with a sound patch?
<mps> ohm, u-boot in installer couldn't load compressed kernel I think
<j`ey> matthewayers[m]: the audio jack should work on your kernel
<matthewayers[m]> Thanks Glanzmann, I was able to update the kernel successfully to 20220310
<mps> uhm, it can
<pugguu[m]1> Hey
<tohatsu[m]> tohatsu[m]: wifi firmware tar is on USB. At what step should it be untarred and installed?
<Glanzmann> tohatsu[m]: Does 'ip link show' show you the wifi device?
<Glanzmann> mps: Did everything work?
<mps> Glanzmann: I can boot from usb
<Glanzmann> mps: Can you also boot your usual boot chain?
<mps> now trying to fix boot from nvme
<tohatsu[m]> Glanzmann: only loopback and ether
<Glanzmann> I see.
<Glanzmann> tohatsu[m]: Okay than the firmware extraction thing went wrong.
<Glanzmann> tohatsu[m]: You said you have the firmware tar put on the usb?
<Glanzmann> If that is the case do the following: mount /dev/sda1 /mnt; tar -C /usr/lib/firmware -xf /mnt/linux-firmware.tar
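After extracting the firmware, reloading the driver is usually enough to get it picked up without a reboot, assuming brcmfmac was built as a module; a sketch:

    modprobe -r brcmfmac
    modprobe brcmfmac
    ip link show        # the wlan interface should now appear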
<tohatsu[m]> Glanzmann: nevermind, i forgot that i rewrote the usb, so the firmware is gone
<Glanzmann> Then do the following dance: https://tg.st/u/wifi.pl
<Glanzmann> tohatsu[m]: But the firmware should also be on the efi partition that I saw.
<jannau> Glanzmann: how slow was dcp for you? slow as in the mouse pointer jumps 1-2 centimeters on the screen (14" display)? I hit a bug today which resulted in that behavior after switching the display's refresh rate
<Glanzmann> tohatsu[m]: do a 'lsblk' or 'blkid'.
<Glanzmann> jannau: Yes, like that.
<Glanzmann> jannau: I see. I always ran xrandr to switch my resolution and also tried rotate. As a result I probably also screwed up the refresh rate.
<Glanzmann> tohatsu[m]: Identify your 'esp' partition on the nvme and mount that and search there for the vendorfw/firmware.tar
<jannau> ok, looks reproducible on my end. I'll notify you after I've fixed it
<Glanzmann> But to be honest, my script should have done that.
<Glanzmann> jannau: Perfect, once you did I will switch to it on the air and mini and work a few days with it and give you feedback.
<mps> Glanzmann: oh, didn't notice that the installer moved my rootFS partition up by one number :)
<mps> now I'm typing from it
<Glanzmann> mps: Yes, you had one partition before your root, now you have two, because previously you did not have an esp and now you might have one.
<Glanzmann> mps: Perfect. :-)
<mps> I had ESP and now I have two ;)
<Glanzmann> Perfect, you can never have too many esp partitions ...
<mps> Glanzmann: here they are https://tpaste.us/Z4Vl
<Glanzmann> mps: Btw. if you want you can boot the debian live thingy from usb and use parted or gparted on the ~77.5GB of unused disk space.
<Glanzmann> apt update; apt install xinit blackbox gparted xterm; startx
<mps> Glanzmann: I have alpine on usb from day one for rescue
<mps> Glanzmann: anyway thank you for your help and effort to get asahi so good
<Glanzmann> I just kept asking questions and wrote it down in a shell script. :-) You have to thank marcan, alyssa, jannau, kettenis, sven, j`ey and yourself :-)
<tohatsu[m]> Glanzmann: /vendorfw/firmware.tar is present
<tohatsu[m]> after untarring to /usr/lib/firmware nothing changed
<tohatsu[m]> but actually i don't need internet. Can i download the needed deb packages with all dependencies to usb and have them installed automatically on boot?
nepeat has quit [Quit: ZNC 1.8.2 - https://znc.in]
nepeat has joined #asahi
nepeat has quit [Remote host closed the connection]
nepeat has joined #asahi
<Glanzmann> tohatsu[m]:
<Glanzmann> tohatsu[m]: So the easiest way is probably to modify the livesystem how you want it and then create an initrd out of it.
<Glanzmann> tohatsu[m]: Do you know if the file was untarred before?
<Glanzmann> tohatsu[m]: Which device (air, pro 14", pro 15") do you have?
<tohatsu[m]> i didn't untar it before
<tohatsu[m]> air
<Glanzmann> tohatsu[m]: cat /etc/rc.local
<Glanzmann> There is a script which runs on boot which searches for the wifi firmware and extracts it, so it probably was already extracted.
<Glanzmann> I'm also on the air.
<Glanzmann> tohatsu[m]: Can you run 'lspci' and let us know if you see the two broadcom devices?
<tohatsu[m]> lspci command not found
<Glanzmann> tohatsu[m]: If you have an ethernet dongle, you can install it:
<Glanzmann> sudo apt update; sudo apt install -y pciutils
<tohatsu[m]> Glanzmann: sh /etc/rc.local should execute without errors?
<tohatsu[m]> don't have an adapter
<tohatsu[m]> oh, it's perl
<Glanzmann> Yes, it should.
creechy has quit [Ping timeout: 480 seconds]
<Glanzmann> tohatsu[m]: When you execute it, does your wifi show up?
<mps> Glanzmann: re: live system, didn't I tell you that's how I installed linux the first time, i.e. the first boot was from a usb ssd and when everything worked well I created the FS on nvme and rsynced the rootFS from usb
<tohatsu[m]> Glanzmann:
<Glanzmann> tohatsu[m]: That worked.
<Glanzmann> Last thing is your wifi interface.
<Glanzmann> vi /etc/wpa/wpa_supplicant.conf
<Glanzmann> or nano /etc/wpa/wpa_supplicant.conf if you like that better.
<Glanzmann> put your ssid and psk in the file, save it and do an 'ifup wlp1s0f0'
<Glanzmann> If this interface was there all along, then it worked from the beginning without intervention, as it should ...
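A minimal sketch of that file and the bring-up, using the path and interface name mentioned above; ssid/psk are placeholders:

    cat > /etc/wpa/wpa_supplicant.conf <<'EOF'
    network={
        ssid="your-network-name"
        psk="your-passphrase"
    }
    EOF
    ifup wlp1s0f0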
<Glanzmann> mps: Yes, I remember that. You were one of the first who used it as a desktop, even before there was nvme.
<Glanzmann> I need to redo the live video.
<tohatsu[m]> yes it worked
<Glanzmann> Perfect.
creechy has joined #asahi
<mps> Glanzmann: right, I used it as a desktop from a usb ssd, but I'm not the first I think. tmlind was probably first, but I think not with the asahi kernel, and with kexec instead of u-boot
<Glanzmann> Maybe I should add a script that lets you modify the live system and write it back to the initrd on the usb stick.
<tohatsu[m]> Glanzmann: that would be perfect
<Glanzmann> tohatsu[m]: So if you want to save the state of your initrd, you can do it with the following script executed as root in / : cd /; find . -xdev | cpio --quiet -H newc -o | pigz -9 > ../live-stick/initrd.gz
<mps> I usually resync all my working machines with usb disk for archiving and use as rescue if I need them
<Glanzmann> You might need to apt-get install -y pigz cpio
<Glanzmann> tohatsu[m]: And of course you need to mount /dev/sda1 /mnt
<Glanzmann> and the script needs to be: cd /; find . -xdev | cpio --quiet -H newc -o | pigz -9 > /mnt/initrd.gz
<Glanzmann> mps: I see, I never do that, and once I was really pissed at myself because I reinstalled it for fun and did not get the touchpad up and running, but then you and jannau came and rescued me.
<tohatsu[m]> will try it
<Glanzmann> tohatsu[m]: Let me know if it works; if it doesn't, also let me know, then I'll try to reproduce, but I assume it works. The initrd is created like that, I just added the '-xdev' parameter to find so that it does not descend into mountpoints but keeps the mountpoints themselves.
creechy has quit [Ping timeout: 480 seconds]
<Glanzmann> tohatsu[m]: I had to try that initrd repack thing from the live system. It works like a charm. I would recommend that you delete /etc/rc.local and uncomment the allow-hotplug in /etc/network/interfaces so that the network comes up fast.
<milek7> is "nvme0: Admin Cmd(0x6), I/O Error (sct 0x0 / sc 0xb)" during boot expected?
<Glanzmann> milek7: I have it, too. So I assume yes.
<Glanzmann> milek7: I think it tried to run a command which is not supported by apple nvme and gets an error, but it is not critical and it just carries on.
clrwf0x80 has joined #asahi
creechy has joined #asahi
creechy has quit [Ping timeout: 480 seconds]
creechy has joined #asahi
<milek7> is kvm supposed to work? (on kernel from asahi branch)
<j`ey> milek7: yes
<j`ey> milek7: not sure what config/build you are using though, check it has CONFIG_KVM?
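Two quick checks; the first assumes CONFIG_IKCONFIG_PROC is enabled, otherwise grep the config file your distro ships for the running kernel:

    zcat /proc/config.gz | grep '^CONFIG_KVM='    # should be set
    ls -l /dev/kvm                                # must exist and be accessible to the user running qemu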
<tohatsu[m]> I installed and started Gnome. Now it prompts for user/pass. root is not working
<tohatsu[m]> and can't go back to terminal to view /etc/passwd
creechy has quit [Ping timeout: 480 seconds]
<milek7> j`ey: it does
<milek7> but I'm getting "ioctl(KVM_CREATE_VM) failed: 22 Invalid argument"
<j`ey> running qemu or?
<milek7> qemu 6.2.0
<j`ey> milek7: is that from source then?
<tohatsu[m]> <tohatsu[m]> "I installed and started Gnome..." <- had to create new user
<axboe> j`ey: running 6.2.50 here and it's fine, use it all the time
<axboe> eh that was for milek7
<j`ey> Im not sure of anything in particular in qemu apart from https://github.com/qemu/qemu/commit/d26f2f93c1853810fad7da7faa2fa1d590c1017b that was needed. and thats in 6.2.0
<milek7> maybe it requires some specific machine type? I'm trying with eg. raspi3b
<j`ey> maybe that's 32-bit?
<j`ey> try with -M virt
<j`ey> milek7: what kernel/image are you running in the guest?
<milek7> virt complains about IPA size
<milek7> no image, it doesn't get to that point
<j`ey> -M virt,highmem=off, but I thought the patch above meant you didn't need highmem=off anymore.
<milek7> I will try with qemu from source
<zv> what is the expected bandwidth on any usb-c port to, say, a usb 3.2 nvme adapter? full speed?
<j`ey> zv: USB2 speeds for now
creechy has joined #asahi
<zv> ack
<mps> milek7: sure, you need these patches
<mps> and highmem=off is not needed
gladiac has quit [Quit: k thx bye]
<mps> milek7: here is how I start it `qemu-system-aarch64 -bios QEMU_EFI.fd -machine virt -m 1G -cpu host -smp cores=4 -accel kvm -nographic -cdrom alpine-standard-3.15.0-aarch64.iso`
___nick___ has quit [Ping timeout: 480 seconds]
<milek7> ok, with qemu 6.2.50 virt machine does work
<j`ey> milek7: great
<mps> extracted m1n1.bin from installer.tar.gz doesn't work, looks like it doesn't try to find m1n1/boot.bin
<jannau> mps: you probably need to append the esp partuuid and the chainload path as arguments
clrwf0x80 has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<mps> jannau: I think something like this, but 'strings m1n1.bin' doesn't give me any hint
<mps> tpw_rules: I see
<mps> hmm, those chainload params must be given during build?
<mps> or the UUID set by installer
<j`ey> not at build, you can do: cat m1n1.bin <(echo 'chainload=<your-uuid>;m1n1/boot.bin') > m1n1-chainload.bin
<mps> j`ey: thank you
<milek7> fresh kernel still hangs sometimes :/
creechy has quit [Ping timeout: 480 seconds]
snnw[m] has joined #asahi
creechy has joined #asahi
<zv> huh http://ix.io/3S4t disk errors
shenki has quit [Remote host closed the connection]
<mps> zv: bad cables or connectors, happened few times to me
<zv> weird. rebooted and it came up again. hopefully it doesn't turn into a recurring issue.
<mps> I have these 'things' with some cables and usb to sata adapters
Raito_Bezarius has quit [Ping timeout: 480 seconds]
tfl^ has joined #asahi
Raito_Bezarius has joined #asahi
ChaosPrincess has quit [Remote host closed the connection]
creechy has quit [Ping timeout: 480 seconds]
creechy has joined #asahi
axboe has quit [Remote host closed the connection]
axboe has joined #asahi
sheepgoose has quit [Remote host closed the connection]
sheepgoose has joined #asahi