marcan changed the topic of #asahi to: Asahi Linux: porting Linux to Apple Silicon macs | Not ready for end users / self contained install yet. Soon. | General project discussion | GitHub: https://alx.sh/g | Wiki: https://alx.sh/w | Topics: #asahi-dev #asahi-re #asahi-gpu #asahi-stream #asahi-offtopic | Keep things on topic | Logs: https://alx.sh/l/asahi
<kov>
rkjnsn, there is a patch for that already, using 4k pages
<kov>
rkjnsn, the thing is there is a huge 16-25% performance hit
g3blv[m] has joined #asahi
<g3blv[m]>
Will it be possible to boot Linux on iPads with the M1 processor once Asahi has been merged into mainline Linux?
<Dcow[m]1>
the bootloader on iPads is locked
<g3blv[m]>
OK and there is no known way of unlocking the bootloader?
chadmed has quit [Quit: Konversation terminated!]
<opticron>
welp, got that figured out, apparently you do things to partitions, but deleting one is eraseVolume
<rkjnsn>
kov, are you referring to sven's patch? My understanding is that is to support accessing hardware behind a 16k IOMMU while running a 4k kernel on the CPU's 4k page mode. The bootlin patch I linked is to allow running a 16k kernel on a CPU without hardware 16k page support by backing each kernel page with 4 4k hardware pages, which is what I understood agraf to be asking about.
<rkjnsn>
(I gather the idea would then be to get distros to standardize on 16k-page kernels to avoid the 4k performance hit, and on CPUs without hardware 16k support, the kernel would fall back to using 4 4k hardware kernel pages per kernel page? That wouldn't help with things that need 4k pages like FEX though.)
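The backing scheme described above is plain arithmetic; a minimal shell sketch (numbers taken from the discussion: 16K kernel pages on hardware that only supports 4K granules):

```shell
# Hypothetical values from the discussion: a 16K-page kernel running on
# a CPU that only supports 4K hardware translation granules.
KERNEL_PAGE=16384
HW_PAGE=4096

# Each kernel page would be backed by this many hardware pages.
echo $((KERNEL_PAGE / HW_PAGE))   # → 4
```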
<sorear>
"portable 16k-page kernels" wouldn't work, because the kernel needs to know the page table fanout at compile time and that's different between native and emulated 16k pages
<sorear>
running an emulated-16k kernel on M1 could provide interesting information (is the observed 16k speedup due to TLB/cache issues alone, or do kernel algorithms and fewer page faults also have a large impact?)
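sorear's point about compile-time fanout can be made concrete: with 8-byte descriptors (the arm64 PTE size), the number of entries per translation-table level follows directly from the granule size, so native-16K and emulated-16K kernels walk differently shaped tables. A rough arithmetic sketch:

```shell
PTE_SIZE=8   # bytes per descriptor on arm64

for granule in 4096 16384 65536; do
    entries=$((granule / PTE_SIZE))
    # bits of virtual address resolved per table level = log2(entries)
    bits=0; n=$entries
    while [ $n -gt 1 ]; do n=$((n / 2)); bits=$((bits + 1)); done
    echo "granule=$granule entries_per_level=$entries bits_per_level=$bits"
done
```

For a 4K granule this gives 512 entries (9 bits) per level; for 16K, 2048 entries (11 bits), which is why the page table fanout must be known at compile time.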
<arnd>
rkjnsn: the only thing that old patch does is to get around the 32-bit pgoff_t restrictions to allow larger disk partitions on arm32 NAS systems
<arnd>
We never merged that since the better fix would be to use a 64-bit pgoff_t like Annapurna Labs did in their NAS kernels
<arnd>
The larger pages are the workaround that Marvell originally put into their kernels
<rkjnsn>
I know that was the motivation for it, but I thought it increased the page size across the board, not just for storage? I can't say I understand this stuff all that well.
<rkjnsn>
64-bit pgoff_t would indeed be super useful for my 32-bit NAS, I seem to recall the problem there is it would require changing XArray to be 64-bit on 32-bit platforms, which would be a much wider-reaching change?
<rkjnsn>
Unless only the bottom 32-bits of the pgoff_t were used for indexing the XArray as chaining or something was used for the rare case of pages exactly 4GiB apart needing to be cached at the same time? As I say, I'm not really familiar with this stuff.
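For context on the 32-bit pgoff_t limit being discussed: the page cache indexes files in units of pages, so a 32-bit index caps the addressable offset at 2^32 pages times the page size. A quick arithmetic check (nothing kernel-specific, just the math):

```shell
# Maximum byte offset indexable by a 32-bit pgoff_t, per page size.
# 4294967296 = 2^32 page indices; 1099511627776 = 1 TiB.
for page in 4096 16384 65536; do
    max_bytes=$((4294967296 * page))
    echo "page=$page max_offset_TiB=$((max_bytes / 1099511627776))"
done
```

This shows why larger pages were a workaround: 4K pages cap at 16 TiB, while 64K pages push the limit to 256 TiB.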
dsrt^ has quit [Remote host closed the connection]
<arnd>
I thought we had a "probably working" patch for 64-bit off_t at the time, but gclement's customer preferred the 64k page approach because it kept the performance characteristics of their older vendor kernel
<marcan>
the question is how hard would it be to pull a macOS and actually have mixed 4K/16K mode userspace :)
<marcan>
(pipe dream, but it'd solve ~all problems of course)
<marcan>
IIRC macOS just stuck all the PTE management behind an ops structure so they can have two versions and choose at process creation time/exec
<marcan>
but then of course you're still going to have "interesting" things going on at the userspace<->kernel interface, no idea how macOS handles that
gladiac has joined #asahi
<bluetail[m]>
is the usb 3.x port of the mac mini as fast as the lightning port? I think they are sitting on the same bus, right?
<jannau>
no, the thunderbolt ports (USB-C connectors) are directly on the SoC and the USB-A ports are behind a PCIe USB controller. The USB-C ports are currently running at usb 2.0 speeds so the USB-A ports are faster
<bluetail[m]>
jannau: does that only apply to Asahi Linux or is that generally even in default state like you say?
<jannau>
I don't understand the question. This is not true on macos, as macos has proper USB 4/Thunderbolt 3 drivers for the usb-c port
<marcan>
that they aren't the same applies to everything; that the TB ports only run at 2.0 applies to Asahi
<marcan>
hopefully once sven is done with the PHY drivers Asahi will have better type C support than macOS though ;)
<bluetail[m]>
Thanks. I was just wondering why I'm getting "only" 320 megabytes per second on a USB A Port right now. Cables are thick enough...
<marcan>
same chip, marketing says 380MB/s, so that sounds about right
<arnd>
marcan: mixed page table formats per task are much more realistic, the two major problems I'd expect we'd have to solve for this are
<arnd>
a) change linux/arch/arm64 to actually use separate page tables for kernel and user space. I forget how separate they are today, but as the kernel normally has access to user pages, they are at least sometimes visible together
<arnd>
b) rework all the accessors for page tables to have separate kernel vs user logic. Some architectures (at least s390, possibly others) already do per-task page table levels, so a 32-bit task can use two levels, while a 64-bit task can use three, four or five levels depending on how much it uses
<marcan>
arnd: there are two top-level page tables, one for the kernel and one for userspace, and in principle you'd set only the user/lower one to 4K/16K and keep the kernel one at 16K always (that's how macOS does it)
<marcan>
(the CPU does support mixing 4K/16K this way, simultaneously)
<marcan>
however, the weirdness will happen when the kernel has to access userspace pages, yes, since a lot of things will probably assume a 1:1 mapping between page sizes for kernel/userspace
<marcan>
and there's horrible corner cases like what happens when a 4K process tries to mmap memory that a 16K process has mapped, or worse, vice versa
<arnd>
and then there is the page cache itself, since all memory is managed in units of kernel pages, having 16KB kernel page tables would likely mean that user space can only map memory in that unit as well, even if the TLB uses smaller pages
<marcan>
yeah, but that defeats the entire purpose
<marcan>
the point of 4K pages is so that mmap() on a 4K granularity actually works
<arnd>
if the kernel uses 4KB pages, it would be a little easier, user space could be treated as using 16KB "huge" pages with the 16KB page table format, while kernel and other tasks use 4K pages or the normal huge pages
<marcan>
OTOH 4K processes mapping chunks of 16K page cache pages isn't necessarily a problem
<marcan>
yeah, that would be easier
<marcan>
with memory folios
<marcan>
but then we still run into the IOMMU issues
<marcan>
in principle I want to say 4K processes working as a layer on top of a native-16K kernel should *work* in that you can just treat each 16K page as 4 4K pages that can be mapped independently by 4K processes, though you'd need accounting for that kind of splitting
ciggi has joined #asahi
<marcan>
it'd complicate anonymous memory allocation though, how do you keep track of how much of a 16K page is actually in use and hand out the rest?
<arnd>
right, that doesn't sound realistic either. What is the actual requirement for processes that want 4KB pages?
<arnd>
are there applications that work on 4K and 64K but not on 16K pages?
<marcan>
no, the 4K story is basically for x86 compat
<marcan>
mmap() needs to work with 4K alignment, and 4K pages need to have independent protection status
<marcan>
that's most of it
<arnd>
ah, because x86 applications are built with 4K section alignment in binutils?
<marcan>
yeah, and because x86 applications can and do assume they can just mmap stuff at random 4K aligned vaddrs
<marcan>
also emulators for other 4K native systems of course
<arnd>
we had the same problem on 32-bit arm applications that initially didn't work on arm64 kernels with 64K
<marcan>
we still do, chromium/jemalloc/WebKit/Emacs and I think a few others were broken on asahi, some of those have landed fixes, some haven't
<marcan>
the emacs problem was assuming the build-time page size == the run-time page size
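The emacs-style bug is baking the build-time page size into the binary; the portable pattern is to ask the system at run time instead. A minimal sketch:

```shell
# Query the page size at run time instead of hard-coding 4096.
# On Asahi's 16K kernels this prints 16384; on most x86 systems, 4096.
getconf PAGESIZE
```

Programs that instead hard-code 4096 (or their build machine's page size) are exactly the ones that broke on 16K kernels.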
<arnd>
that is mostly fixed these days, but it doesn't help for existing binaries, and there is much less incentive for x86 applications to care about a random other architecture emulating theirs than there was for arm32 vs arm64
<marcan>
and yeah, for x86 the target is stuff like games
<marcan>
so no chance of recompiling those
<marcan>
if you're running x86 code in the first place it's because it's not open source
<marcan>
otherwise you'd have built it for arm64
<arnd>
IIRC there are also still assumptions about page size in Android
<marcan>
yes, that's just the building for 4K alignment problem etc
<mps>
I got a mail answer from the f2fs developers about the 16K page problem; one of them is interested in making it work with 16K pages, but that will take time
<mps>
quote from Chao Yu `I'm interest in supporting 16KB page in f2fs, but it looks this feature needs a big change. I'd like to investigate how to handle 4KB block read/write request through 16KB page cache first.`
kov has joined #asahi
<kov>
rkjnsn, oh interesting
OwOwA has joined #asahi
<OwOwA>
Hello!
<kov>
but yeah I think that is not a big problem for distros, there is already a history of providing various kernel packages and installing the most appropriate one (686 686-pae amd64 etc)
<OwOwA>
This might be a stupid question, but would it make sense for a package manager to include the special instructions from x86 used to speed up Rosetta 2 when compiling packages, or is it as stupid as it might sound?
<sven>
that wouldn’t make any sense, no
<sven>
that special flag is only helpful if you are emulating x86 code
<kode54>
what is this flag and what does it get passed to?
<sven>
it enables TSO ordering and it’s a bit in a sysreg
<sven>
-ordering actually :D
<kode54>
yeah, TSO doesn't benefit native ARM code
<kode54>
it only benefits a recompiler of architectures that expect strict memory ordering so that they can just recompile the code directly without having to make sure things are correctly synchronized
<kode54>
native ARM code doesn't need this step
<kode54>
or rather, the compiler already deals with memory ordering behavior
<mps>
povik: I tried to merge your for-marcan-merge branch but got conflicts https://tpaste.us/RxrY
<OwOwA>
That makes a lot of sense, thanks!
amarioguy has quit [Remote host closed the connection]
<marcan>
there's some other stuff about SSE flags
<marcan>
also not relevant for anything but x86 emu
<marcan>
povik: j313 doesn't have the speakers disabled?
<marcan>
I'll add that in for safety
<povik>
marcan: that's an oversight if not disabled
<povik>
you are merging it?
<povik>
i hoped a few people would test it before you do
<povik>
(as i said, don't have the hardware by me)
<povik>
mps: for-marcan-merge is where i have done the merge for you
<marcan>
well, I want to test it :)
<povik>
good :)
<mps>
povik: but why I can't merge it then
<povik>
you can't merge it with 'asahi', because 'asahi' has the old audio commits
<mps>
povik: i.o.w. how can I merge it on top on asahi branch
<povik>
you can't easily now
<povik>
you should just pull for-marcan-merged and build that
<povik>
or wait until it gets into new 'asahi'
<mps>
povik: ah, so I have to git pull --depth=1 it in separate dir
<povik>
that would be fine
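mps's approach — a shallow, single-branch fetch into a separate directory — looks roughly like this. Demonstrated here against a throwaway local repository so the commands are self-contained; in practice you would use povik's real remote URL with the for-marcan-merge branch discussed above:

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for the real remote: a scratch repo with two commits
# and a branch named like the one in the chat.
git init -q "$tmp/remote"
git -C "$tmp/remote" -c user.email=a@b -c user.name=a commit -q --allow-empty -m one
git -C "$tmp/remote" -c user.email=a@b -c user.name=a commit -q --allow-empty -m two
git -C "$tmp/remote" branch -q for-marcan-merge

# The shallow, single-branch clone into a separate dir
# (file:// is needed for --depth to apply to a local path):
git clone -q --depth=1 --branch for-marcan-merge "file://$tmp/remote" "$tmp/linux-test"

# Only the tip commit is present in a depth-1 clone.
git -C "$tmp/linux-test" rev-list --count HEAD   # → 1
```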
OwOwA has quit [Quit: WeeChat 3.4]
<mps>
povik: ok, thanks
<mps>
oh wait, is it for-marcan-merged or for-marcan-merge?
<povik>
ah, -merge probably
<mps>
yes, looks like, thanks
<marcan>
0 [a ]: snd-soc-apple-m - MacBook Pro J314/6 integrated a
<marcan>
looks like we should change the name, there are some length issues :-)
<povik>
sucks
OwOwA has joined #asahi
<marcan>
povik: so I set everything to 50 and speaker-test only gives me 5 speakers, and it sounds like the left tweeter is quieter than the right one?
<povik>
weird
OwOwA has quit []
<povik>
does it give you 5 speakers or 6 and one is dead?
<marcan>
6 and one is dead
<marcan>
0 is dead
<marcan>
("side left" according to speaker-test)
<povik>
left tweeter being quiter than right is weird, doesn't happen for me
<povik>
i suspect we need to do some more configuration of the speaker amp chips
<chadmed>
speaker-test -c 6 -D hw:0,0 -t wav
<povik>
in fields undocumented in tas2764
OwOwA has joined #asahi
<chadmed>
the speakers are mapped weirdly so you need to tell alsa explicitly to poke the hw device and that it has 6 real channels
<marcan>
yeah, I do
<chadmed>
yeah one of the left woofers will randomly drop out sometimes too heh
<chadmed>
povik's been working on it
<marcan>
yup, that's the one I'm missing
<marcan>
it's totally dead for me
<chadmed>
it should come back if you reboot
<marcan>
but yeah, the left tweeter does seem subtly quieter
<marcan>
(unless my ears are hosed)
<marcan>
let me check with a mic
<povik>
the question is what's so special about left woofer
<povik>
if you swap left/right in DT, it's still the actual left woofer that drops out
<povik>
the only idea i had is that some voltages or temperature are different there
<marcan>
"uninitialized memory", right
<chadmed>
crazy apple-like silliness, could it be why they have ISENSE/VSENSE enabled? i.e. detect when the woofer goes dark and then kick the amp into reset?
<chadmed>
seems utterly stupid that theyd let that happen in production machines though
<povik>
chadmed: nah, i think it will be simpler
<povik>
when i get to it i will try applying the exact same configuration apple does, see if that fixes things
<povik>
I/V sense is enabled everywhere, even on the older models
<marcan>
oh yeah, the right tweeter is like 9 dB louder according to this
<povik>
uh
<chadmed>
huh, mine are almost exactly the same volume
<mps>
povik: do I have to update dtbs in m1n1 to test this kernel
<mps>
I think yes
<povik>
you need to. remove the disable line if you want to experiment with speakers
<povik>
(knowing the risk)
<povik>
although i know you did experiment already :p
<mps>
povik: where is this line?
<povik>
in the dts file for your model there will be a status="disabled" line for the speakers
<mps>
aha
<mps>
below this comment `* DANGER ZONE: You can blow your speakers!`
<mps>
let's test; if I don't come here again it means I put the mbp in the melting pot ;)
<mps>
povik: no luck, I got blank console with this kernel
<povik>
wouldn't know why
<mps>
ok, np
<mps>
today and tomorrow I will try to upgrade macos to 12.1, will be easier to test different changes and features
<mps>
(sunny saturday is outside, would be better to go out and walk)
<Dcow[m]1>
mps: 12.3 release scheduled for next week
<mps>
Dcow[m]1: thanks for info, but I will upgrade to 12.1 or 12.2 (if exists) not risking with new and untested one
<mps>
and looks like I will not upgrade soon, have problems with booting from usb
<jannau>
supported firmware is 12.1 for now. not a problem if you use the installer to install a stub partition
memoryleak has joined #asahi
<mps>
I will try 'from the scratch' install, i.e. reset macos to default and then with installer to the step before it install distro
<jannau>
but we will have to update to 12.3 for mac studio
<marcan>
I tried 12.3 system firmware and nothing went wrong fwiw
<mps>
marcan: you are tempting me ;)
eroux has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<jannau>
I wouldn't expect problems from dcp either given that there were no breaking changes from 12.0 beta to 12.2
<zv>
do I need to consider anything before I do 'make deb-pkg' in debian and replace the kernel (configured as 16K pagesize) remotely? nothing else to change?
<mps>
zv: I think it should work, though long time passed when I used deb-pkg
<mps>
zv: if you need 4K page size you need to apply patch for it
<j`ey>
I found the two places that are causing the 'buggy firmware' warning from the kernel (inside U-boot)
nametable[m] has joined #asahi
<nametable[m]>
Hello y'all, i am interested in helping with Asahi. Is the best hardware to get specifically the M1 Mac Mini?
<nametable[m]>
I do not have any Mac as of yet but I am an active Linux user and computer science student.
<nametable[m]>
Also, does it matter what matrix client i use. right now im on Syphon on mobile
<j`ey>
I dont think any hardware is the 'best', the project aims to have all models working well!
<nametable[m]>
So if I wanted to work on the go, maybe even a M1 Macbook Air would be good?
<tpw_rules>
imho mac mini is very slightly worse because some monitors are incompatible with it at this time. with the laptops you get a monitor that's guaranteed to work
<nametable[m]>
Is this the case even with a HDMI to VGA adapter?
<tpw_rules>
probably
<nametable[m]>
Along with a Mac, is there a specific device needed for Serial debugging? if i understand right, the serial port can be accessed through one of the usb c ports
<tpw_rules>
a usb A to C cable
<tpw_rules>
well, that's with m1n1's hypervisor mode. but it's pretty good
<nametable[m]>
So is there no need for the more low level serial interface?
<tpw_rules>
not really
<nametable[m]>
Any suggestions on where to find the cheapest mac possible? I know i saw an m1 air on ebay for 575 a few days ago, and minis seem to be going for 500
<j`ey>
facebook? craiglist type things?
<nametable[m]>
That's true I haven't checked Facebook marketplace yet
<LuigyLeon[m]>
tpw_rules: nice! just ran into your nixos configs after I had already gone with Debian to test. Will try it out later today or tmw :)
<tpw_rules>
LuigyLeon[m]: cool! i'm excited to get more people to try nixos. i'm going to release prebuilt ISOs shortly
<tpw_rules>
how does the installer choose which device tree to install? is it possible to have multiple ones concatenated and m1n1 picks the right one?
OwOwA has joined #asahi
<matthewayers[m]>
What command do I run to get the latest version of asahi-next?
<tpw_rules>
yes
<j`ey>
tpw_rules: the installer doesnt pick one
<j`ey>
Im assuming that yes was to your own question?
<tpw_rules>
j`ey: yeah, i just found how the installer m1n1 bin was built
<tpw_rules>
it just concatenates all the device trees, which means m1n1 must know how to find the right one for that specific machine
creechy has quit [Ping timeout: 480 seconds]
<j`ey>
matthewayers[m]: if you're running the reference distro it will be via pacman -Syu
<matthewayers[m]>
I'm not completely sure if I have the reference distro or not
<tpw_rules>
the reference distro is installing ALARM using the menu option in the asahi linux installer
<matthewayers[m]>
Yeah, I used Glanzmann's installer
<tpw_rules>
i don't know how to update it though
OwOwA has joined #asahi
<matthewayers[m]>
Yeah that's the only unknown I have as of right now
<Glanzmann>
matthewayers[m]: You want to update the kernel?
<Glanzmann>
matthewayers[m]: Do you already have the version with the chainloading rust.
<Glanzmann>
zv: Should I build a tested kernel for the mini with 16k?
<Glanzmann>
zv: When you use the current asahi branch the dtb has changed, so you need to update the 'boot.bin' and the kernel, like that. But I'll probably just build a kernel, test it locally on my mini and send you the instructions.
<Glanzmann>
matthewayers[m]: To update the kernel to the newest asahi branch you need to run as root: curl -sL tg.st/u/ksh | bash
<Glanzmann>
j`ey: Testing right now on air and mini.
<zv>
Glanzmann: thanks; so the u-boot update is a one-time thing for now?
<j`ey>
Glanzmann: thanks, just one of them should be fine!
<Glanzmann>
zv: Yes, because the kernel you have is different from the asahi tree.
<Glanzmann>
zv: So with the kernel update you do two things: you switch from 4k to 16k, but at the same time you use the latest rebase from marcan. As a result the device tree changes.
<Glanzmann>
zv: Since asahi is actively developed expect a device tree change with every new kernel. Most of them will be backward compatible and forward compatible, but to be on the safe side you really want to update the device tree (which is in u-boot.bin) along with the kernel.
<tohatsu[m]>
I'm unable to boot from USB. Is it because my usb is Fat32?
<j`ey>
tpw_rules: I meant that at tohatsu[m] anyway :P
<tohatsu[m]>
j`ey: just a sec, need to reboot
<mps>
j`ey: I tested boot from usb and it works with ext4
<mps>
at least worked when I tested it
<mps>
j`ey: iirc I posted script to you
<tohatsu[m]>
j`ey: 0 file(s), 0 dir(d)
<tpw_rules>
hm i'm trying to set up wifi and getting a lot of "timeout on response for query command" from the brcmfmac driver. is this what other people have been complaining about
<j`ey>
mps: Im just trying to help tohatsu[m] :)
<j`ey>
tpw_rules: no that sounds new
<mps>
j`ey: ok :)
<tpw_rules>
hm, i reloaded the driver and now it seems to work okay. weird
<tohatsu[m]>
j`ey: bad partition specification usb 0:1/ Couldn't find partition on usb 0:1/
<j`ey>
tohatsu[m]: can you read it on another machine / from recovery mode?
<Glanzmann>
tohatsu[m]: My assumption is that you accidentally extracted the tar archive somewhere other than onto the usb stick. I also think the instructions in the wiki are bad; I'll improve them right now.
<tohatsu[m]>
Glanzmann: ah, i set GUID partition table
mavericks has joined #asahi
<tohatsu[m]>
Glanzmann: after untar the usb contained 2 files and 1 dir
balrog has joined #asahi
<Glanzmann>
tohatsu[m]: So with the instructions I sent you above it works; I just tested it on the air. Can you try to reproduce? I'll update the wiki article in the meanwhile.
<Glanzmann>
tohatsu[m]: So the issue is that you have a GPT partition table on the usb stick, but you need an msdos partition table.
<Glanzmann>
for the others: pbot.rmdir.de/uWx2GBd8OoZjaxPz1OHDig
<tohatsu[m]>
Glanzmann: I don't have zoom. You want to debug for wiki? I can just format it on windows PC
<Glanzmann>
tohatsu[m]: You can just click on the link and it should allow you to join the session in the browser.
<Glanzmann>
But if you don't want that, then yes, do it on windows or another Linux.
<mps>
Glanzmann: I got to the point where it asks me 'f: Install an OS into free space'; do I have to quit if I don't want to install the asahi distro, or continue with 'f' if I want macos on a stub partition
<Glanzmann>
tohatsu[m]: Also you have two partitions on your usb stick, but you need one partition and the data needs to be on the first partition.
<tpw_rules>
mps: hit `f` and select the free space, selecting what you want to install is another option
<Glanzmann>
press f
<Glanzmann>
It will ask you later what you want to install.
<mps>
Glanzmann: I see
<tohatsu[m]>
Glanzmann: send link
<Glanzmann>
When it asks you you choose the last option 3.
<mps>
I will select 2: UEFI
<Glanzmann>
tohatsu[m]: It is still the same link.
<mps>
Glanzmann: it is fine and intuitive, thanks
<sarah[m]>
I see so many people explaining stuff in text. Hopefully someone can someday just make a video tutorial that we can follow.
<Glanzmann>
tohatsu[m]: But the firmware should also be on the efi partition that I saw.
<jannau>
Glanzmann: how slow was dcp for you? slow as in the mouse pointer jumps 1-2 centimeters on the screen (14" display)? I hit a bug today which resulted in that behavior after switching the display's refresh rate
<Glanzmann>
tohatsu[m]: do a 'lsblk' or 'blkid'.
<Glanzmann>
jannau: Yes, like that.
<Glanzmann>
jannau: I see. I always ran xrandr to switch my resolution and also tried rotating. As a result I probably also screwed up the refresh rate.
<Glanzmann>
tohatsu[m]: Identify your 'esp' partition on the nvme and mount that and search there for the vendorfw/firmware.tar
<jannau>
ok, looks reproducible on my end. I'll notify you after I've fixed it
<Glanzmann>
But to be honest, my script should have done that.
<Glanzmann>
jannau: Perfect, once you did I will switch to it on the air and mini and work a few days with it and give you feedback.
<mps>
Glanzmann: oh, didn't notice that the installer moved my rootFS partition up by one number :)
<mps>
now I'm typing from it
<Glanzmann>
mps: Yes, you had one partition before your root, now you have two, because previously you did not have an esp and now you might have one.
<Glanzmann>
mps: Perfect. :-)
<mps>
I had ESP and now I have two ;)
<Glanzmann>
Perfect, you can never have too many esp partitions ...
<mps>
Glanzmann: I have alpine on usb from day one for rescue
<mps>
Glanzmann: anyway thank you for your help and effort to get asahi so good
<Glanzmann>
I just kept asking questions and wrote it to a shell script. :-) You have to thank marcan, alyssa, jannau, kettenis, marcan, sven, j`ey and yourself :-)
<tohatsu[m]>
Glanzmann: /vendorfw/firmware.tar is present
<tohatsu[m]>
after untar to /usr/lib/firmware nothing changed
<tohatsu[m]>
but actually i don't need internet. Can i download needed deb packages with all dependencies to usb and make it being installed automatically on boot?
<tohatsu[m]>
Glanzmann: sh /etc/rc.local should execute without errors?
<tohatsu[m]>
don't have an adapter
<tohatsu[m]>
oh, it's perl
<Glanzmann>
Yes, it should.
creechy has quit [Ping timeout: 480 seconds]
<Glanzmann>
tohatsu[m]: When you execute it, does your wifi show up?
<mps>
Glanzmann: re: live system, didn't I tell you that is how I installed linux the first time, i.e. the first boot was from a usb ssd, and when everything worked well I created the FS on nvme and rsynced the rootFS from usb
<Glanzmann>
or nano /etc/wpa/wpa_supplicant.conf if you like that better.
<Glanzmann>
put your ssid and psk in the file, save it and do a 'ifup wlp1s0f0'
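A minimal wpa_supplicant.conf of the kind being described would look like this (SSID and passphrase are placeholders; the /etc/wpa path is specific to Glanzmann's setup — Debian's stock location is /etc/wpa_supplicant/ — so the sketch writes to the current directory):

```shell
# Write a minimal config; replace MyNetwork / MyPassphrase with real values,
# then move the file to the path your setup expects.
cat > wpa_supplicant.conf <<'EOF'
ctrl_interface=/run/wpa_supplicant
network={
    ssid="MyNetwork"
    psk="MyPassphrase"
}
EOF
# then: ifup wlp1s0f0   (interface name from the chat)
```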
<Glanzmann>
If this interface was there all along, then it worked from the beginning without intervention, how it should work ...
<Glanzmann>
mps: Yes, I remember that. You were one of the first who used it as a desktop, even before there was nvme.
<Glanzmann>
I need to redo the live video.
<tohatsu[m]>
yes it worked
<Glanzmann>
Perfect.
creechy has joined #asahi
<mps>
Glanzmann: right, I used it as a desktop from a usb ssd, but I'm not the first I think. tmlind was probably first, but I think not with the asahi kernel, and with kexec instead of u-boot
<Glanzmann>
Maybe I should add a script that lets you modify the live system and write it back to the initrd on the usb stick.
<tohatsu[m]>
Glanzmann: that would be perfect
<Glanzmann>
tohatsu[m]: So if you want to save the state of your initrd, you can do this with the following script executed as root in / : cd /; find . -xdev | cpio --quiet -H newc -o | pigz -9 > ../live-stick/initrd.gz
<mps>
I usually resync all my working machines with usb disk for archiving and use as rescue if I need them
<Glanzmann>
You might need to apt-get install -y pigz cpio
<Glanzmann>
tohatsu[m]: And of course you need to mount /dev/sda1 /mnt
<Glanzmann>
and the script needs to be: cd /; find . -xdev | cpio --quiet -H newc -o | pigz -9 > /mnt/initrd.gz
<Glanzmann>
mps: I see, I never do that, and once I was really pissed at myself because I reinstalled it for fun and did not get the touchpad up and running, but then you and jannau came and rescued me.
<tohatsu[m]>
will try it
<Glanzmann>
tohatsu[m]: Let me know if it works; if it doesn't, also let me know, then I'll try to reproduce, but I assume it works, and the initrd is created like that. I just added the '-xdev' parameter to find so that it does not descend into mountpoints but keeps the mountpoints themselves.
creechy has quit [Ping timeout: 480 seconds]
<Glanzmann>
tohatsu[m]: I had to try that initrd repack thing from live. It works like a charm. I would recommend that you delete /etc/rc.local and uncomment the allow-hotplug in /etc/network/interfaces so that the network comes up fast.
<milek7>
is "nvme0: Admin Cmd(0x6), I/O Error (sct 0x0 / sc 0xb)" during boot expected?
<Glanzmann>
milek7: I have it, too. So I assume yes.
<Glanzmann>
milek7: I think it tried to run a command which is not supported by apple nvme and gets an error, but it is not critical and just carries on.
<milek7>
is kvm supposed to work? (on kernel from asahi branch)
<j`ey>
milek7: yes
<j`ey>
milek7: not sure what config/build you are using though, check it has CONFIG_KVM?
<tohatsu[m]>
I installed and started Gnome. Now it prompts for user/pass. root is not working
<tohatsu[m]>
and can't go back to terminal to view /etc/passwd
creechy has quit [Ping timeout: 480 seconds]
<milek7>
j`ey: it does
<milek7>
but I'm getting "ioctl(KVM_CREATE_VM) failed: 22 Invalid argument"
<j`ey>
running qemu or?
<milek7>
qemu 6.2.0
<j`ey>
milek7: is that from source then?
<tohatsu[m]>
<tohatsu[m]> "I installed and started Gnome..." <- had to create new user
<axboe>
j`ey: running 6.2.50 here and it's fine, use it all the time
<milek7>
maybe it requires some specific machine type? I'm trying with eg. raspi3b
<j`ey>
maybe that's 32-bit?
<j`ey>
try with -M virt
<j`ey>
milek7: what kernel/image are you running in the guest?
<milek7>
virt complains about IPA size
<milek7>
no image, it doesn't get to that point
<j`ey>
-M virt,highmem=off, but I thought the patch above meant you didn't need highmem=off anymore.
<milek7>
I will try with qemu from source
<zv>
what is the expected bandwidth on any usb-c port to, say, a usb 3.2 nvme adapter? full speed?
<j`ey>
zv: USB2 speeds for now
creechy has joined #asahi
<zv>
ack
<mps>
milek7: sure, you need these patches
<mps>
and highmem=off is not needed
gladiac has quit [Quit: k thx bye]
<mps>
milek7: here is how I start it `qemu-system-aarch64 -bios QEMU_EFI.fd -machine virt -m 1G -cpu host -smp cores=4 -accel kvm -nographic -cdrom alpine-standard-3.15.0-aarch64.iso`
___nick___ has quit [Ping timeout: 480 seconds]
<milek7>
ok, with qemu 6.2.50 virt machine does work
<j`ey>
milek7: great
<mps>
extracted m1n1.bin from installer.tar.gz doesn't work, looks like it doesn't try to find m1n1/boot.bin
<jannau>
mps: you probably need to append the esp partuuid and the chainload path as arguments
clrwf0x80 has quit [Quit: Going offline, see ya! (www.adiirc.com)]
<mps>
jannau: I think something like this, but 'strings m1n1.bin' don't give me any hint