ChanServ changed the topic of #asahi-dev to: Asahi Linux: porting Linux to Apple Silicon macs | General development | GitHub: https://alx.sh/g | Wiki: https://alx.sh/w | Logs: https://alx.sh/l/asahi-dev
al3xtjames has quit [Read error: Connection reset by peer]
al3xtjames has joined #asahi-dev
chadmed has quit [Quit: Konversation terminated!]
yuyichao has quit [Ping timeout: 480 seconds]
yuyichao has joined #asahi-dev
phiologe has joined #asahi-dev
PhilippvK has quit [Ping timeout: 480 seconds]
kov has quit [Quit: Coyote finally caught me]
kov has joined #asahi-dev
AnalogDigital[m] has joined #asahi-dev
the_lanetly_052___ has joined #asahi-dev
MajorBiscuit has joined #asahi-dev
the_lanetly_052___ has quit [Ping timeout: 480 seconds]
bisko has quit [Remote host closed the connection]
<sven> jannau: / axboe: so... uh.. can you guys try to set the nvme irq to IRQ_TYPE_EDGE_RISING and see if you can still reproduce the timeouts?
<sven> (also seems to use randwrite+flush performance for me)
<kettenis> use?
<sven> erm..
<_jannau_> I will tonight. do you mean s/use/improve/ ?
<sven> *increase
<kettenis> note that aic doesn't seem to have a way to configure level vs. edge
<sven> i think those two take a different path in the core irq code though
<kettenis> probably
axboe has joined #asahi-dev
<axboe> sven: was thinking last night, what prevents two writel() from running at the same time? I just cannot convince myself that the current situation is safe
<kettenis> it should be possible to test whether the interrupt is really edge triggered or level triggered at the hardware level
<axboe> running with that now, seems fine too (as expected)
<j`ey> axboe: < sven> jannau: / axboe: so... uh.. can you guys try to set the nvme irq to IRQ_TYPE_EDGE_RISING and see if you can still reproduce the timeouts?
<axboe> j`ey: I don't think this is an irq vs issue problem
<sven> if i understand the memory model correctly (very big if) two stores to device memory from two separate cores will always issue both stores but their order isn't guaranteed
<j`ey> axboe: just the messenger, sven wrote that while you were away
<axboe> j`ey: gotcha
<maz> sven: without other synchronisation, there is indeed no ordering.
<axboe> but that should be fine, we don't need ordering here
<axboe> as long as the writel is ordered with the memcpy, it doesn't matter if one issue ends up writing the tag before another
<maz> writel has a DMB, so that part is safe.
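(For reference, a minimal C sketch of the submission pattern being discussed: the command is copied into the submission queue in normal memory, then the doorbell is rung with writel(), whose implied barrier orders the MMIO store after the memcpy. The struct and field names are hypothetical, not taken from the real driver; the lock stands in for the serialization question being debated.)

```c
/*
 * Illustrative sketch only: my_nvme_queue, sq_cmds and sq_db are
 * hypothetical names, not the actual apple-nvme driver's.
 */
#include <linux/io.h>
#include <linux/nvme.h>
#include <linux/spinlock.h>
#include <linux/string.h>

struct my_nvme_queue {
	spinlock_t		sq_lock;	/* the serialization being debated */
	struct nvme_command	*sq_cmds;	/* submission queue in host memory */
	void __iomem		*sq_db;		/* SQ tail doorbell register */
	u16			sq_tail;
	u16			q_depth;
};

static void my_nvme_submit_cmd(struct my_nvme_queue *q,
			       const struct nvme_command *cmd)
{
	unsigned long flags;

	spin_lock_irqsave(&q->sq_lock, flags);
	memcpy(&q->sq_cmds[q->sq_tail], cmd, sizeof(*cmd));
	if (++q->sq_tail == q->q_depth)
		q->sq_tail = 0;
	/*
	 * writel() implies a barrier before the MMIO store, so the doorbell
	 * write cannot overtake the memcpy above.  Doorbell writes from two
	 * CPUs are not ordered against each other, which is fine as long as
	 * each one is ordered after its own command copy.
	 */
	writel(q->sq_tail, q->sq_db);
	spin_unlock_irqrestore(&q->sq_lock, flags);
}
```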
<axboe> sven: I can test the dtsi change
<axboe> sven: s/IRQ_TYPE_LEVEL_HIGH/IRQ_TYPE_EDGE_RISING for nvme@393...?
<axboe> sven: or do you want both
<sven> just IRQ_TYPE_EDGE_RISING. also not totally convinced anymore at this point it will do anything different
<maz> sven: well, it certainly has a very different behaviour at the interrupt controller level.
<axboe> let's give it a shot...
axboe has quit [Quit: leaving]
axboe has joined #asahi-dev
<axboe> sven: huh that does seem to work, at least the usual "start all the things" combined with the full find and a randread iops test didn't cause any issues
<sven> maz: ah, true. multi-tasking with $work right now and my brain doesn't seem to like that
<sven> axboe: so i had some explanation why this might be the issue in the shower this morning but i can't put it together again right now :/
<kettenis> take another shower!
<kettenis> (not implying you actually need one)
<sven> :D
<axboe> haha I totally know that feeling
<Glanzmann> axboe: Can you run this please and see if you get more than 56 IOPS? https://pbot.rmdir.de/ED-j8SkeZv_zYgRWJE9x9g
DmitrySboychakov[m] is now known as Dcow[m]
katatafjsh[m] has joined #asahi-dev
the_lanetly_052___ has joined #asahi-dev
<kov> Glanzmann, do you need that test with some specific environment/patch/config?
<Glanzmann> kov: IIUC sven had the hope that it fixes our performance problems when running with a device tree that has: s/IRQ_TYPE_LEVEL_HIGH/IRQ_TYPE_EDGE_RISING/
<sven> i think i just messed up there fwiw
<sven> it seems to fix the random lockup but i don't think it improves flush/randwrite performance
<Glanzmann> I see.
yuyichao has quit [Ping timeout: 480 seconds]
<axboe> Glanzmann: 56 iops indeed :)
<axboe> that is miserable
<Glanzmann> axboe: Thank you for testing. Worth a try. Almost as fast as my noname usb-2 stick.
<axboe> Glanzmann: lol
<axboe> looking at blktrace, some FWFW writes do indeed take forever
<axboe> 17-18ms
<_jannau_> I will tonight. do you mean s/use/improve/ ?
<_jannau_> err
axboe has quit [Quit: bbiab]
c10l39 is now known as c10l
chengsun has quit [Ping timeout: 480 seconds]
yuyichao has joined #asahi-dev
axboe has joined #asahi-dev
<axboe> sven: still triggers hang with rising edge, just a bit harder to trigger
<axboe> was poking at the slower flush
<sven> yeah.. i've been reading some irq code and i haven't even found a different code path with the M1 interrupt controller yet
<sven> so i'm just even more confused now :D
plantaintion3[m] has joined #asahi-dev
<kettenis> the hangs are seen with both AIC1 and AIC2 isn't it?
<_jannau_> yes and cpu frequency scaling makes them much more likely. to the point that nobody has noticed them without it
axboe has quit [Read error: Connection reset by peer]
axboe has joined #asahi-dev
<axboe> sven: seems that grabbing the cq lock really is required, still see timeouts/hangs with just serializing the issue itself
axboe has quit [Quit: leaving]
chengsun has joined #asahi-dev
axboe has joined #asahi-dev
<sven> it smells like we are racing something in the irq handler that makes us miss the next interrupt for the completion :/
<sven> I think to actually serialize the writes we’d need an additional dummy read after that write due to https://www.kernel.org/doc/html/latest/driver-api/io_ordering.html
<kettenis> I thought the dummy read was necessary in the case writes are posted
<kettenis> but on these machines mmio to non-PCIE devices is explicitly non-posted
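(The pattern from that io_ordering document, sketched in C: take a lock, do the MMIO write, then read back from the same device before dropping the lock, so a posted write is forced out to the hardware. The register offset is a placeholder; and as kettenis notes, MMIO to non-PCIe devices is non-posted on these machines, so the dummy read may well be unnecessary here.)

```c
/* Sketch of the lock + dummy-read pattern; the offset is hypothetical. */
#include <linux/io.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(dev_lock);

static void ordered_mmio_write(void __iomem *regs, u32 val)
{
	unsigned long flags;

	spin_lock_irqsave(&dev_lock, flags);
	writel(val, regs + 0x10);
	/*
	 * Read back from the same device: flushes a posted write before the
	 * lock is dropped and another CPU can issue its own write.
	 */
	readl(regs + 0x10);
	spin_unlock_irqrestore(&dev_lock, flags);
}
```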
___nick___ has joined #asahi-dev
axboe has quit [Quit: Lost terminal]
the_lanetly_052__ has joined #asahi-dev
the_lanetly_052___ has quit [Ping timeout: 480 seconds]
axboe has joined #asahi-dev
<axboe> sven: curious on how much you looked at the slow FLUSH
<axboe> did some checking here, and the only fast flushes I see are the ones where there has been no writes since the last flush
<axboe> which is unsurprising
<axboe> even if we only did a single write, and that write was FUA, the flush is still slow
<axboe> 16.5 -> 17.5 msec
<sven> That’s already more than I figured out
<axboe> sven: have you looked at osx timings? I seem to recall you said it just does fewer flushes, but are all flushes slow there too
<sven> the same thing happens on the T2 Macs with pci.c fwiw
<axboe> sven: heh ok, curious
<axboe> so it could very well be that the device just sucks at flushes and we're screwed
<sven> I only looked at the commands it issued, but I can see if I can also get the flush timing
<axboe> on the data I collected, nothing else seems useful apart from the fact that writes-since-flush is always != 0 for a slow flush, and only fast if writes-since-flush == 0
<axboe> and even if there are no writes, it's still a 20-25 usec request
___nick___ has quit []
___nick___ has joined #asahi-dev
___nick___ has quit []
axboe has quit [Quit: Lost terminal]
the_lanetly_052__ has quit [Ping timeout: 480 seconds]
___nick___ has joined #asahi-dev
axboe has joined #asahi-dev
<axboe> outside of missing some weird init and not having timings for osx, guess I'm just going to mark it 'write through' for now :/
<marcan> axboe: I have a suspicion that macos doesn't issue flushes and instead relies on having the proper last-gasp hooks to guarantee it'll issue one (and it will complete) if power is lost, or maybe that's outright guaranteed by the controller itself?
<axboe> marcan: was thinking that's probably the case on osx, and hence nobody ever cared about flush latencies
<marcan> I should test if macos does anything funky on the mac mini when pulling the power
<marcan> let's see
<sven> macOS does issue flushes but with a much lower frequency compared to linux when running the same fio command
<axboe> marcan: if it's guaranteed by the controller itself, it should just not advertise a volatile write cache
<sven> and you can lose data if you just hard reset the machine before a flush
<marcan> yeah but that normally wouldn't happen
<sven> the firmware will also complain about an unclean shutdown if you don’t clear that control layer enable bit before rebooting but I haven’t seen any data loss from that
<sven> *controller enable
<sven> I think openbsd takes that “just never issue flushes” approach though
<Glanzmann> axboe: Could you explain how to mark a device as write-through? Does that mean that if I issue a sync in Linux, no flush will happen? Because this would be helpful for the m1 notebook owners to improve performance.
<axboe> I just do: `echo "write through" > /sys/block/nvme0n1/queue/write_cache` for now...
<axboe> Glanzmann: ^^
<axboe> Glanzmann: and yes, that's what it means
<Glanzmann> axboe: Thanks.
<axboe> Glanzmann: it'll bump your test case from 56 iops to 14k or something like that :)
<axboe> alternatively, some sort of time based hack might make sense
<axboe> "only issue flush if X seconds has passed since last issue"
<axboe> kinda nasty, but safer
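(A minimal sketch of that idea, not axboe's actual patch: if a flush arrives less than flush_interval seconds after the last one that was really sent, complete it immediately instead of going to the device. ns->flush_interval and ns->last_flush are made-up fields, locking is ignored for brevity, and as discussed further below this trades fsync durability for speed.)

```c
/*
 * Hypothetical sketch: my_nvme_ns, last_flush and flush_interval are
 * illustrative; req_op(), REQ_OP_FLUSH and blk_mq_end_request() are the
 * normal block-layer API.  No locking, for brevity.
 */
#include <linux/blk-mq.h>
#include <linux/jiffies.h>

struct my_nvme_ns {
	unsigned long	last_flush;	/* jiffies when a flush last went out */
	unsigned int	flush_interval;	/* seconds; 0 = always send flushes */
};

/* Returns true if the flush was completed without touching the device. */
static bool my_nvme_coalesce_flush(struct my_nvme_ns *ns, struct request *rq)
{
	if (req_op(rq) != REQ_OP_FLUSH || !ns->flush_interval)
		return false;

	if (time_before(jiffies, ns->last_flush + ns->flush_interval * HZ)) {
		/*
		 * Lie: complete the flush without sending it.  Data written
		 * since the last real flush may be lost on power cut.
		 */
		blk_mq_end_request(rq, BLK_STS_OK);
		return true;
	}

	ns->last_flush = jiffies;
	return false;	/* caller issues the real flush */
}
```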
<Glanzmann> axboe: Perfect. I was hoping for such a workaround.
<Glanzmann> axboe: With your little tuning, I already notice when doing an 'apt install -y fio' that it is faster. A lot faster. ;-)
<axboe> Glanzmann: yeah, I noticed that too, just flies through the install part
<sven> it wouldn’t be the first nasty hack we need for these machines :(
<marcan> < axboe> "only issue flush if X seconds has passed since last issue" <- isn't that what laptop_mode used to do back in the day?
<marcan> haven't seen any last-gasp stuff from macos
<axboe> marcan: no, that was on dirty data writeback
<axboe> side bar, I actually wrote laptop_mode back in the day
<marcan> ha
<marcan> also yeah, now I remember it was that *plus* I had an LD_PRELOAD to stop firefox from issuing syncs all the time and breaking it
<marcan> :)
<axboe> oh yeah, LD_PRELOAD hack was popular
<axboe> but for this one, probably a 5 second period for flush would be useful
<marcan> even 1 second tbh
<marcan> I wonder what macos does
<marcan> I wouldn't put it past apple to do some similar hack
<axboe> yeah, might be useful to just benchmark for typical workloads
<axboe> let me hack it up
<sven> I’ve tried looking for their flush logic but ugh compiled iokit c++ code
<marcan> I need to get some sleep, but yeah
<marcan> sven: maybe just looking at flush commands from the HV and seeing if there is some logical distribution?
<marcan> (in intervals)
<sven> but it wouldn’t surprise me and would also fit that I only rarely saw flush commands in the HV
<sven> yeah, just need to add some time stamp whenever the command was issued
<marcan> at least for laptops this is a ~non issue, inherently, especially if we can make sure a flush gets issued on panic and other special cases like that
<marcan> OTOH on the mac mini, if there is no last gasp signal, this means potential data loss
<marcan> I need to look closer but I didn't see anything obvious going to nvme right now
<marcan> that said, that thing keeps running for like >2 seconds after I pull the plug
<marcan> so if there is no actual last gasp signal, that is one hack of a missed opportunity :p
<marcan> who needs SSD backup caps when the entire system is low power and the PSU is built in? that's a free 2 second UPS.
<sven> that’s quite a long time
<marcan> turns out when your entire system sips power, that PSU main reservoir cap is pretty good...
<sven> but then again.. it doesn’t even have the display drawing lots of power and the rest is very efficient
<marcan> also note that this is in japan, which has the lowest AC voltage in the world, so worst possible case :)
<marcan> anyway, I should get some sleep
<marcan> I'll clean up kernel branches tomorrow, today I ended up spending most of the day on non-asahi stuff
<sven> sounds good. We should figure out how to submit rtkit then.
<sven> and hopefully nvme as well, once we figure out these weird missed irqs
<marcan> yup
<marcan> did you look at the atomic stuff?
<marcan> I kind of hate it but...
<sven> yeah, I remember trying to implement it and then just leaving it for another day
<sven> glad you took a shot at it ;)
<sven> I’ll take a closer look tomorrow but it looked reasonable at first glance
<jannau> I'll test dcp on top of spmi/work
<marcan> I want to merge that if it's usable on all the machines these days too, so that will be useful
<jannau> I haven't tested dcp on t600x yet
<marcan> I imagine it won't work if nobody has tried it
<marcan> at the very least the DCP DVA offset is different
<marcan> that needs to go in the DT
mps has joined #asahi-dev
<jannau> also the reserved regions changes are still very premature and require m1n1 changes
<jannau> it might work if the region-id numbers in the adt are constant
<jannau> for the premapped regions I'm still unsure whether to use regions from the carveout-memory-map or just scan the dart
axboe has quit [Quit: Lost terminal]
<alyssa> marcan: oops? *sweat*
<alyssa> most of the dcp driver code predates the t600x machines being released, don't sue me! :-p
<sven> I think jannau volunteered to take over anyway :>
<jannau> volunteered == didn't say no fast enough
<alyssa> luv u guys
MajorBiscuit has quit [Ping timeout: 480 seconds]
gpanders_ has joined #asahi-dev
axboe has joined #asahi-dev
<axboe> hack alert...
<axboe> Glanzmann: let me know if you try the above and if there's any noticeable difference between that and using the write through hack
<sven> :D
<axboe> sven putting on a polite face ;)
<Glanzmann> axboe: Will do so. ;-)
<sven> I enjoy the occasional hack very much ;)
<axboe> a 200x speedup at minimal risk warrants a hack ;)
<sven> definitely!
<j`ey> especially a non-invasive one
<axboe> it really should be done at the nvme_ns level, or on the block side. but... would rather keep it local
gpanders_ has quit [Remote host closed the connection]
gpanders_ has joined #asahi-dev
gpanders_ is now known as Guest579
<Glanzmann> axboe: About the 12k. Do you have an explanation why macos can do 18.9k and OpenBSD can do 31.2k? (OpenBSD doesn't do any flushes at all IIRC, but kettenis knows for sure).
gpanders is now known as Guest580
Guest579 is now known as gpanders
gpanders is now known as andg
Guest580 is now known as gpanders
<axboe> Glanzmann: not right now, no
<axboe> even doing no flushes on linux will be about the same as 1 per second
<axboe> so it's not the flushes at that point
<axboe> might be differences in what O_SYNC provides?
<Glanzmann> axboe: Maybe but there was also and end-fsync ...
<Glanzmann> s/and/an/
<axboe> Glanzmann: end_fsync is just an fsync() when all 1G have been written
<axboe> so won't change much
andg is now known as gpanders_
<Glanzmann> Yep, but when doing I/O tests I sometimes notice that while the test is running, throughput and iops look higher; then when the fsync kicks in, the final results end up 1/3 or so less than what was shown during the workload.
<axboe> if you change sync=1 to sync=dsync, for example, I get ~50K IOPS
<axboe> Glanzmann: end_fsync will make a potentially big difference if you're just doing regular buffered writes
<axboe> as otherwise the job may just be dirtying page cache, and run very fast
<axboe> but not touch disk
<Glanzmann> I see. I probably saw that while benchmarking nfs volumes.
<Glanzmann> axboe: I wasn't aware of O_DSYNC. But I get the idea.
<Glanzmann> After reading up on it.
<axboe> it's similar to fdatasync
<axboe> data vs data + metadata
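(For concreteness, a small userspace C illustration of that distinction; file names are arbitrary. O_SYNC/fsync() persist data plus metadata, while O_DSYNC/fdatasync() only guarantee the data and whatever metadata is needed to read it back, which can mean fewer cache flushes per write.)

```c
/* Userspace sketch: O_SYNC vs O_DSYNC, fsync() vs fdatasync(). */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	memset(buf, 0xaa, sizeof(buf));

	/* Every write carries full sync semantics (data + metadata). */
	int fd_sync = open("test-osync.bin", O_WRONLY | O_CREAT | O_SYNC, 0644);
	write(fd_sync, buf, sizeof(buf));
	close(fd_sync);

	/* Only data integrity is guaranteed; e.g. timestamps may lag. */
	int fd_dsync = open("test-odsync.bin", O_WRONLY | O_CREAT | O_DSYNC, 0644);
	write(fd_dsync, buf, sizeof(buf));
	close(fd_dsync);

	/* The explicit-call equivalents. */
	int fd = open("test-explicit.bin", O_WRONLY | O_CREAT, 0644);
	write(fd, buf, sizeof(buf));
	fdatasync(fd);	/* like O_DSYNC: data only */
	fsync(fd);	/* like O_SYNC: data + metadata */
	close(fd);
	return 0;
}
```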
gpanders_ has quit [Remote host closed the connection]
gpanders_ has joined #asahi-dev
<Glanzmann> I see. Btw. with your patch I don't notice any additional latency. And my fio test also comes back with 64K, how can that be? https://pbot.rmdir.de/Bizcz1sc3Vs3EZ1raP31kA
<axboe> maybe your drive is faster than mine?
<Glanzmann> axboe: I have a macbook air with worse specs than yours.
<axboe> is it an overwrite pass?
<axboe> maybe rm test file and see if it reproduces
<Glanzmann> Nope.
<Glanzmann> That was the first fio run I ever did on this particular installation. before I did not benchmark because I already notices that the issue was gone due to how fast apt was installing packages.
<Glanzmann> noticed*
<axboe> I get ~12k first pass, ~50k subsequent pass
<Glanzmann> axboe: Okay it must have been an overwrite, because: https://pbot.rmdir.de/INSyOFFbbiBinZsyUYoPpA
<axboe> you can add --unlink=1, then fio will remove the test file(s) after the run
<axboe> if it created them
<axboe> yeah that looks inline with what I see
<Glanzmann> I see, will do that in the future. I normally have scripts rming the files between runs ...
<tpw_rules> axboe: does that 1 sec flush patch violate any barrier guarantees made by fsync?
<axboe> tpw_rules: certainly
<axboe> it's more of an eventually consistent model ;)
<tpw_rules> bleh, i really don't like that. is it possible to fix?
<axboe> tpw_rules: make flushes faster on the nvme hw...
<axboe> in a laptop, it should be fine
<tpw_rules> i mean in theory, barriers are orthogonal to syncs
<tpw_rules> to flushes, rather
<axboe> we don't do barriers in linux anymore
<jannau> marcan, sven: dcp works with rtkit from spmi/work
<tpw_rules> idk if you are in the other channel, but there i asked, who flushes more than 50 times/sec except apt and fio?
<axboe> tpw_rules: probably pretty limited
David[m]123456789 has joined #asahi-dev
<tpw_rules> ahh. well if someone is tallying i don't like that patch
<axboe> totally fair, I did preface it with a hack alert
<Glanzmann> tpw_rules: I notice when I install debian packages (which syncs after extracting every package) that it is dog slow, even without a measuring tool. That is how I noticed that we had the problem in the first place. Because the nvme felt like a spinning disk.
<axboe> obviously the issue here is hw and that's where it ultimately should be fixed. I have worked with vendors before on stupid issues like that, but I have no clout at apple, so...
<tpw_rules> what does "sync" mean? surely you are not extracting more than 50 packages per second?
<axboe> if the patch were to go anywhere, it would just mean that the default should be 0 and hence retain standard behavior
<tpw_rules> axboe: oh okay
<axboe> tpw_rules: if you do fsync, then you can get 2 flushes per write sometimes (data + meta)
<sven> we can always file a radar with apple and have it disappear and get no feedback for years! ;)
<axboe> sven: perfect
<axboe> ;)
<axboe> surely somewhere on this project must have some good apple internal contacts?
<tpw_rules> axboe: so apt runs down and fsyncs every file it's written?
<axboe> s/somewhere/someone
<axboe> tpw_rules: haven't checked, but looking at difference in speed, seems very likely
<tpw_rules> would just one sync() have just one or perhaps two penalties?
<sven> there are some apple employees lurking in this channel but I think the issue with this problem is that it only happens under linux
<axboe> sven: I thought osx just does way fewer flushes? surely the actual flush is just as slow there. but I guess the end result is the same
<sven> axboe: yeah, if fio (or anything else in userland) was slow on macos we could probably get it fixed. but with the way it currently is I don't have high hopes
<axboe> tpw_rules: hard to say, depends on what is dirty in the page cache
<axboe> sven: makes sense
<axboe> if this patch got cleaned up and changed to 0 by default, then it could be applicable. I'd run it ;)
<sven> i'm in favor of that patch fwiw
<axboe> ok
<axboe> got a meeting in a few, but I'll spend a bit of time to make it solid (multi-ns, ns ref, etc)
<sven> i'll still try to confirm if macos does the same thing but it certainly looks like it does
* Glanzmann is also in favor of the patch, but begged for months for it.
<sven> measuring flush time there is a bit tricky because by the time I can figure out the command is a flush there's already usb and python in the way but i'll see what i can do
<axboe> sven: at these completion rates, even serial would likely show it ;)
<sven> fair :D
<tpw_rules> if the default is 0 and that means the behavior doesn't change by default so the user can decide if they want to be unsafe, then i think it would be a reasonable patch
<axboe> that'd be the plan. it's safe now, just not by default, you'd have to load it with the option to set flush_interval to 0
<tpw_rules> so why did linux ditch the concept of barriers?
<tpw_rules> because nobody else in the stack cared?
<axboe> barriers were originally done by myself and chris mason, back in ~2001 as suse was looking for an EMC contract
<axboe> we thought it was pretty nifty, but it was too complicated and didn't really yield much of a benefit
<axboe> so at some point later in time it was abandoned for the much simpler write-and-wait
<axboe> one idea back then was to propagate barriers into ordered commands on the hw, like tasks on SCSI
<axboe> but that never came to fruition
<axboe> sadly we never really got useful hw support beyond "flush the cache", which is sad
<tpw_rules> just so we're on the same page, this is the barriers we are talking about, right: https://lwn.net/Articles/283161/
<axboe> yep
<tpw_rules> okay. yeah that is sad
<axboe> it's just a way to do ordered writes, or it was
<axboe> looks like the time was about right
<axboe> sad to say I avoid most papers on anything related to OS stuff
<sven> ~15-30 msec on macos for flushes (with all the additional delays introduced due to usb and python)
<axboe> looks on par then
<tpw_rules> and macos just coalesces flushes?
<axboe> I only see two ranges here: ~20 usec if no writes have happened since last flush, ~20 msec if even a FUA write has happened
<axboe> eg slow and fast, nothing really in between
<sven> the python+usb delay is already bigger than 20 usec afaict
<sven> let's see if i can find a fast flush. but so far i've only seen the slow ones
<axboe> fast ones are very rare on linux at least, we generally don't do that.
<axboe> you might have to synthetically issue them on osx to test
<axboe> and saying that without knowing what kind of passthrough support osx has
<tpw_rules> basically that paper is about separating fsync into two functions: one that just preserves ordering and so doesn't have to flush the cache, and one that enforces durability and ordering. in most cases only the former can be used, so you get crash-safe guarantees without paying the significant flush cost
<axboe> with the removal of barriers, there's zero notion of ordering in the linux IO stack
<axboe> with multi-queue, it would be difficult too...
<sven> ok, also found a ~20x write -> flush -> 1x write -> flush sequence where both flushes are slow as well
<axboe> yeah that's what I see here too, doesn't matter if it's 1 or 1000 writes
<axboe> takes the same amount of time
<sven> yeah
<axboe> only exception is zero writes
<sven> so it looks like macos doesn't have any kind of magic to make them faster
<axboe> so they just do fewer, fs specific
___nick___ has quit [Ping timeout: 480 seconds]
<axboe> checked the samsung drive in my other laptop, and it's about 6x faster for flushes
<sven> and if only macos would now let me ssh to it again I could run fio with randwrite once more to see if there's a pattern for the issued flushes
<axboe> I'd be curious if the flushes impact read latencies
<axboe> I'll come up with a few test cases
<sven> can't see any obvious pattern and it's getting late here. i'll take a closer look tomorrow
<Glanzmann> sven: Have a good night sleep. I'm curious if you find something.
jeffmiw has quit [Ping timeout: 480 seconds]
jeffmiw has joined #asahi-dev
<kettenis> I can confirm that OpenBSD only does a disk cache flush when a filesystem is unmounted, when an explicit cache flush ioctl is issued or when the machine is suspended or powered down
<kettenis> I think that is pretty much the traditional UNIX behaviour:
<kettenis> if you pull the plug you pray that fsck can restore your filesystem to a consistent state and accept that you lose data
leah2 has quit [Ping timeout: 480 seconds]
axboe has quit [Quit: Lost terminal]
leah2 has joined #asahi-dev
weems_ has quit [Read error: Connection reset by peer]
tardyp has quit [Read error: Connection reset by peer]
cptcobalt has quit [Remote host closed the connection]
cptcobalt has joined #asahi-dev
jkkm has quit [Read error: Connection reset by peer]
jkkm has joined #asahi-dev
kendfinger has quit [Read error: Connection reset by peer]
kendfinger has joined #asahi-dev
daniels has quit [Read error: Connection reset by peer]
arnd has quit [Read error: Connection reset by peer]
robher has quit [Read error: Connection reset by peer]
daniels has joined #asahi-dev
arnd has joined #asahi-dev
tardyp has joined #asahi-dev
weems_ has joined #asahi-dev
robher has joined #asahi-dev