fda- has joined #openwrt-devel
fda| has quit [Ping timeout: 480 seconds]
Guest6112 has quit [Ping timeout: 480 seconds]
fda has quit [Ping timeout: 480 seconds]
fda has joined #openwrt-devel
fda- has quit [Ping timeout: 480 seconds]
danitool has quit [Ping timeout: 480 seconds]
fda has quit [Read error: Connection reset by peer]
fda has joined #openwrt-devel
minimal has quit []
fda has quit [Read error: Connection reset by peer]
fda has joined #openwrt-devel
fda- has joined #openwrt-devel
fda has quit [Ping timeout: 480 seconds]
<dansan> neggles: lol, you bastard!
<dansan> Good evening / morning all! What build target do I use to build an individual kmod? For example, kmod-fs-configfs?
<dansan> Well, I want to clean it via make first. I tried make package/kmod-fs-configfs/clean and it didn't find that target
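(For reference: in a standard OpenWrt buildroot there is no per-kmod make target. Kernel module packages are grouped under package/kernel/linux, so the usual sketch, assuming kmod-fs-configfs comes from that tree, is:
    make package/kernel/linux/clean
    make package/kernel/linux/compile V=s   # repackages every kmod-* package, configfs included
This rebuilds the whole kmod set rather than a single module.)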
<dansan> neggles: hrm. My company will soon have at least 3 people building firmware in different locations in the US for development purposes. And one of them spends half of his year at sea, so he'll be building from God-knows-where. Maybe we just need to set up our own dev branches and have the builds done on a single CI server.
goliath has quit [Quit: SIGSEGV]
<will[m]> did y'all replace feeds.conf with repositories.conf while i wasn't looking?
<neggles> dansan: that sounds like a good idea really
<neggles> even without CI, just using vscode remote or vscode-server
<neggles> and dev VMs
<will[m]> ugh a third environment to set up, sweet
Rentong has joined #openwrt-devel
Rentong has quit [Ping timeout: 480 seconds]
* enyc meeps
<dansan> lol
<dansan> neggles: well, CI/CD is part of the plan, but we got a guy who doesn't have a good build machine and I'm just trying to strategize -- looking at hardware prices.
<dansan> Oh, but does anybody know how to clean and build a single kmod package?
<neggles> dansan: a few VMs running a docker/kube cluster for CI/CD build pipelines, and a VM for each dev's personal screwing around is how we do it
<dansan> Don't you lose a lot of your computing power in a VM as opposed to a simple container?
<dansan> I mean, I know there's all of the virtio drivers, but in my experience they're still much slower than native calls
<neggles> performance overhead vs baremetal, even on relatively old hardware (sandy bridge etc) is single-digit percent
<dansan> hrm
<neggles> we're running everything on top of hyper-V Gen2
<neggles> guest drivers are fully paravirt
<dansan> Oh yeah, I never read up on hyper-v
<neggles> if I spawn a single VM and give it as many vCores as we have pThreads & all but 2GB of RAM
<dansan> Wait, is that a ms tech?
<neggles> yes, but it's also free!
<neggles> with a linux guest, it performs nearly identically to the same machine used for baremetal linux
<dansan> I don't think you'll convince me to run Windows as my core OS :)
<neggles> hey it's windows without a GUI
<neggles> powershell is nice
<dansan> *shudders* uugh
<neggles> I believe overhead shouldn't be much higher with proper virtio/paravirt on linux (proxmox/truenas scale) though
<neggles> and vmware overhead is next to zero as well with vmxnet3/vmware paravirt sas
<dansan> The way I would do it would be to just colocate a server, but I don't think my boss is into that -- he'll probably want to just rent a VM from somebody. I forget which tech they use for those.
<neggles> hyper-V handles massively overcommitting CPUs and memory much better than the other stuff, in my experience, and management is all through a webUI
<dwfreed> VPSes are usually KVM or Xen, both have paravirt devices
<neggles> actually running a server on baremetal instead of hypervisor+guests is kind of silly unless it's a dedicated container host
<neggles> well with a VPS it hardly matters, you get the performance you get
<neggles> oh xenserver would work as well (or whatever the new free fork is called)
<dansan> what's a VPS?
<dwfreed> Virtual Private Server
<dansan> oh, virtual private server
<dansan> gotcha
<neggles> VPS is just VM-as-a-service
<dwfreed> basically the product name for a VM you're renting from someone
<dansan> Yeah, he gave me one of those for CI/CD, but I broke something. He didn't give me a lot of space or CPUs for it though
<neggles> nested virtualization is much worse performance-wise
<dansan> I would imagine so!
<neggles> OVH sells dedicated servers with pretty decent specs for fairly reasonable prices
<neggles> but nothing beats buying a used R730 and shoving it somewhere you can get a fibre internet connection for performance per dollar
<dansan> Well whatever he set up for that, I know that it has a low CPU priority so that other VMs get the cycles first. I think it's on the same VM pool (whatever you call those) as production servers.
<neggles> host cluster
<dansan> how many cores/threads do those have?
<dansan> Oh, host cluster, ok
<neggles> anywhere from 4c/4t up to 56c/108t iirc
<neggles> the best bang for buck chips are usually the 8-16 core parts, two per server
<dansan> Oh, well if I'm going to get him to do this, I want at least 32 threads
<dansan> ahh, great!
<neggles> E5-2673v3s are really good value, they're a semi-custom SKU that intel made for microsoft to use in azure
<dansan> I set up a CI/CD thing to reuse as much of a previous build as possible w/o rebuilding the whole world, but it's a bunch of scripts and docker images -- it's not the prettiest, but it works sometimes
<neggles> they've aged out so they're often available for like $50-75 a piece, 12-core 24-thread, two per server
<dansan> Oh, we definitely will NOT be doing windows
<dansan> damn!
<neggles> 2.4GHz base, 3.6GHz boost (cpu-world is wrong), 3.2GHz all-core boost
<neggles> broadwell
<will[m]> my favorite ci/cd is gitlab but it uses k8s for on-demand runners
<dansan> wth is "boost"?
<neggles> hahaha
<neggles> all CPUs since like, sandy bridge? have had this concept
<dansan> I mean, it sounds like the button on the old boom boxes -- you hit the "bass boost" button
<dansan> I mean, what is it in real-world, functionality?
<neggles> turbo boost (not a joke name) is very cool and good
<dansan> It sounds like one of those fizzy, soft drink buzz words
<dansan> but what IS it? Not what is it "called"?
<neggles> I'm typing!
<dansan> lol, sorry! :)
<dwfreed> basically it's manufacturer approved automatic overclocking
<neggles> so, for example the E5-2673v3 is rated at 110 watts thermal design power. If you fully load down all of the cores with turbo boost disabled, you will get 2.4GHz on all 12 cores, and ~110W CPU power draw, is the theory.
<dansan> will[m]: I'm ticked with gitlab because I wasted some 50 hours trying to get a patch through when the maintainer was a fking little idiot twerp who thought the world of himself.
<dwfreed> that's not gitlab's problem
<dansan> oh!
<neggles> if you load down only one core, it will do 3.6GHz and be well under that power limit
<dansan> dwfreed: He's gitlab's employee
<neggles> but the CPU is allowed to exceed that power limit temporarily, as long as temperatures are below the limit
<dansan> dwfreed: And no attempt I made to get a more experienced person in was useful, so I just said fk it.
<neggles> so there's your 2.4GHz base - power draw limited to TDP, all cores loaded
<neggles> 3.6GHz boost - power draw limited to TDP, only one core loaded
<dansan> neggles: ok, thanks. that makes very good sense in many regards actually.
<neggles> 3.2GHz all-core boost = power draw limit is effectively removed, all cores fully loaded, however this has maximum duration and temperature limits
<neggles> typically for these xeons the time limit is around 3-5 minutes
<neggles> even in compile workloads, you're unlikely to maintain full load on all cores long enough to actually hit that expiry time
<neggles> so your actual effective clock speed will be somewhere in between
<dansan> But there's another facet to that in real-world compiling scenarios -- oftentimes we get utilization of all cores, followed by "Oh, I need to configure now". So when we're running on all cores, it's not so bad that it runs slower. But when we're waiting for one damn thing to finish, that's the perfect time for overclocking
<neggles> base clock = worst case minimum you can expect under an all core load, boost clock = best case maximum you can expect under a single-core load, all-core boost = maximum clock speed under an all core load, limited by thermal and duration constraints
<neggles> yep, so when you hit that single threaded task, all your idle cores will clock themselves way down below boost clock, and that single active core will jump to 3.6GHz
<neggles> (in this case)
<dansan> Very interesting, ty neggles & dwfreed
<neggles> as an added bonus, while the single core task is running, your total power draw will be low enough that the boost timer resets
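(A quick way to watch this behaviour from a Linux shell, as a sketch; exact tools vary by distro:
    watch -n1 'grep "cpu MHz" /proc/cpuinfo'   # per-core clocks, refreshed every second
    cpupower frequency-info                    # governor plus min/max/boost limits, needs linux-tools
Pinning a load to one core, e.g. "taskset -c 0 yes > /dev/null", makes the single-core boost described above show up immediately.)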
<dansan> Well, even better, they will execute "I have nothing to do" instructions, which causes very minimal power consumption
<neggles> if they don't have anything to execute, they won't execute anything
<neggles> we're well past spintable idle
<dansan> Well, realistically it should be based upon thermal characteristics and any other physical limitations.
<neggles> if you have power saving features enabled, entire inactive cores will clock-gate themselves and stop doing _anything_
<dansan> Oh yeah, I'm thinking of a different scenario (in spinlocks actually). I'm not intimate enough with how they deal with having nothing to run.
<dansan> So I guess the kernel changes its power state when there's nothing for it to do.
<neggles> in general modern CPUs and schedulers will also identify a spinlock and convert it to a sleep-until-interrupt, or at least drop to minimum clockspeed
<neggles> which is usually in the hundreds of MHz
<dansan> I don't believe that's possible in a spinlock. Because there will usually not be an interrupt to signal when a resource is available.
<dansan> Geeze, now you're making me think. Shame on you! :)
<neggles> oftentimes they'll wake up, check the lock, then go to sleep for the rest of the millisecond if there's nothing to do
<dansan> Not a Linux kernel spinlock
<dansan> They literally spin; there's an instruction they can execute to relax the CPU somewhat while spinning, but they have to spin.
robin_ has quit [Ping timeout: 480 seconds]
<dansan> Spinlocks are used when we cannot sleep -- usually interrupts are disabled, so we need to finish the whatever-it-is asap so we can re-enable interrupts. This is why so much of the kernel is now split into top halves and bottom halves.
<neggles> unless you force it to do otherwise (via manual overclocking in a desktop system, or disabling C-states and P-states), any given vaguely-modern CPU is shifting cores up and down through clockspeeds and power caps dozens of times a second if not more
<dansan> Do the part that has to be done with interrupts off and then offload the rest of the work to a thread.
<neggles> yeah, the CPU does a bunch of magic behind the scenes though
<dansan> Well I never researched in-depth what that "magic" little NOP instruction does, but I read that it "relaxes" execution if supported, so maybe that's what that is.
<neggles> even in a spinlock it can convert most of the instruction sequence into "do thing, clock gate" - or at least slow itself down to 800mhz rather than sit at full clockspeed
<neggles> this depends on your cpu governor and scheduler settings
<neggles> on the linux side it also depends on if you're using a NOHZ kernel
<neggles> And Then There's Speculative Execution
<dansan> "t calls an architecture specific relax function which has the effect executing some variant of a no-operation instruction that causes the CPU to execute such an instruction efficiently in a lower power state."
<dansan> *it
<dansan> tickless won't affect spinlocks
<neggles> on a multithreaded chip this will typically result in switching to the other thread
<neggles> briefly
<neggles> before coming back
<neggles> but there's no real way to control an awful lot of what happens; it's up to the CPU for the most part
<dansan> neggles: not in a spinlock because interrupts are off.
<dansan> The spinlock will run on that ALU until it re-enables interrupts. kernel programming isn't for the faint of heart! :)
<neggles> most spinlocks exist for incredibly short periods of time these days though, no?
<dansan> or I should say that "core" because every CPU thread in Linux is another "core"
<dansan> Yes, they need to be for very short periods of time, but there are some exceptions.
<neggles> well yeah, exactly, you can be spinlocking on a core/pThread while the actual physical core is working on the other pThread
<neggles> unless you specifically tell it not to do that
<neggles> there's more than one ALU in a core
<neggles> this is all architecturally-dependent ofc
<dansan> I've profiled cases where two threads are marshalling data back and forth and it resulted in some 70% of one CPU time spent spinning. However, this was a good decision for the scheduler because there were no other processes needing time and it resulted in the fastest way to run the program
<neggles> yeah
<neggles> the goal of running everything in VMs/containers is to spend as little time as possible spinning because there's nothing else to do
<dansan> Well, I should probably catch up on modern CPUs before I argue this *too* much, but my understanding is that "hyperthreading" is just two ALUs that share a single FPU and L1 data & instruction caches somehow.
<neggles> though most CPUs in most clusters still spend the majority of their time mostly idle
<neggles> dansan: it is, so very very very very much more complicated than that
<dansan> COOL! I can't wait to read up more :)
<neggles> here's intel's latest x86 desktop core
robin_ has joined #openwrt-devel
<neggles> things are simpler on the ARM side, but... not by a lot
<neggles> not these days
<dansan> Oh yeah, because we have so many SIMD integral instructions now
<neggles> AMD's early implementation of SMT was two integer compute units sharing one FP compute unit, but intel's never really done it that way
<neggles> intel HT started out as just constant context switching afaik
<neggles> thread is waiting for a memory/IO operation or something else that takes a while? OK, save the registers etc. somewhere and switch to another thread, switch back once the load/store/whatever finishes
<dansan> Reading up a bit
<neggles> nowadays it's "we have these two queues of instructions, and this pile of execution units, what're we doing in what order?"
<neggles> "we have a branch coming up and we don't know which path to take? cool, execute both of them, then once we know which path is correct, forget about the wrong one" (in... theory... stupid sidechannels...)
<neggles> the fun part is that an intel x86 CPU is really RISC on the inside
<neggles> the Asahi Linux dev blogs about their bringup of linux-on-Apple-M1 are really good as well https://asahilinux.org/2021/08/progress-report-august-2021/ https://asahilinux.org/2021/03/progress-report-january-february-2021/
<dansan> Well, the branch prediction isn't a part of hyperthreading -- that's been around for a while
<dansan> eew, why bother? :)
<neggles> because the M1 is a ridiculously powerful processor
<dansan> oh, ic
<dansan> but apparently quite undocumented
<neggles> and literally no other ARM chip on the market is capable of emulating x86_64 nearly as well as it
<neggles> because apple cheated
<dansan> HAH! they embedded an x86 core?
<neggles> noooot quite
<dansan> lol!
<neggles> the main challenge for x86_64 emulation on ARM is Total Store Ordering
<neggles> AIUI x86_64 guarantees that writes to the same memory address always occur in order (this is an oversimplification)
<neggles> (because I don't fully understand it)
<dansan> So that's interesting. With HTT there are two sets of registers for the two "threads", but it basically switches back and forth (like you said) when there's a cache miss or some such.
<neggles> but it prevents an issue with a thread on a core getting stale info from the same memory address another core just wrote to, i think
<neggles> ARM does not guarantee this, because usually it doesn't matter
<neggles> but since you can't know which x86_64 stores _do_ need that guarantee and which _don't_, when emulating you have to essentially flush the cache line after every write
<neggles> Apple added a magic bit to their MMU which enables total store ordering on a per-thread basis
<dansan> eew
<neggles> so they hid an x86 MMU inside their ARM one
<dansan> Well I gotta get back to work. I can't bill for chatting :)
<dansan> But very interesting conversation
<neggles> and that's how the M1 runs x86_64 code faster than an intel chip pulling twice the power.
<neggles> yoof.
<dansan> oh, interesting!
<neggles> the M1 outperforms the CPUs it replaced in earlier models, _even while emulating them_
<neggles> but yeah i also need to get back to work :P
<dansan> That is quite interesting
<dansan> lol!
<neggles> this is one hell of a rabbit hole
<dansan> btw, I was just starting to get used to MIPS and now we're going to ARM
<neggles> at least it looks like ARM isn't going anywhere anytime soon
<neggles> RISC-V is one to watch though
<dansan> We haven't been able to buy MT7620As for many months now
<dansan> Yeah, I did ARM in the past
<neggles> MIPS is already dead, tbh :(
<dansan> The pandemic caused all SORTS of chips to be in short supply.
<neggles> yeah the supply chain for everything is absolutely toast
<neggles> all because we only have a few factories globally that can make 11n polysilicon for chips, and they all predicted a downturn in demand at the start of last year
<neggles> oops.
<dansan> MIPS has a lot of dominance in China because of their national CPUs, but Mediatek is doing more ARM now
<dansan> wow!
<neggles> making 11n polysilicon is _hard_
<neggles> motorola have two identical factories built at the same time to the same designs, only one of them can hit 11n
<neggles> we do not know why
<dansan> I designed this great board that used STM32Gx chips for CAN FD interfaces and now they are running $40 each :(
<neggles> ATtiny85s are like $4!
<neggles> it's madness
<dansan> Well these also used to be $3.50!!
<dansan> I wrote most of the firmware and everything :(
<neggles> I'm having to learn how the heck to write code for Keil C51 because the only chip I can get my hands on for a certain project at a reasonable price is an EFM8
<neggles> ouch
<neggles> it'll be another 18 months or so before that's all back to "normal" :(
<dansan> hehe, yeah, more than 10x more expensive and that's even when you can find the bastard scalper who has them
<neggles> fabs only started to catch up with demand ~6-9mo ago
<neggles> some chips have a lead time for 1-10ku orders of 60+ weeks :(
<dansan> OMG, I tried to contact MediaTek because of a flaw in one of their chips. They don't give a flip
<neggles> even espressif have been hit hard, and they're not on a remotely modern process node
<dansan> If you aren't somebody that buys 10s of thousands of chips from them a year, they won't give you any time
<neggles> the ESP32-S3 was meant to be out six months ago
<neggles> it's looking like december if we're lucky
<dansan> Well, fortunately this whole project is going much slower than my boss had hoped. We're not going to be ready for production for at least 8 more months
<neggles> yup, a friend who works at an embedded device manufacturer says the biggest problem they're having is actually little dc-dc converters
<dansan> WOW!
<neggles> they've had an order go from 4 to 8 to 12 to 28 weeks leadtime all up
<dansan> Well, I'm mostly a software person. We've now got a dedicated EE, so I don't have to do that as much.
<neggles> all the big manufacturers have been panicking and buying out huge amounts of stock
<dansan> wow!
<neggles> making it worse for everyone else
<dansan> Yeah, that's what the problem is
<neggles> I can't even get two samples of a specific EFM8UB3 SKU direct from silabs for another 8 weeks
<dansan> And I wonder how many companies in China or elsewhere where labor is cheap just engineered another way to build their product and sold their stock instead for a profit.
<neggles> EMC compliance is what screws you
<dansan> Oh yeah, I heard we had a product blow up on that last year! :)
<dansan> They ran the tests and ... oops
<neggles> the system for approving variations in BOM/layout from what you've had EMC-approved is not designed to handle swapping out 2/3 of the misc support silicon because it's all that's available
<dansan> Even worse, I think they did a 50 board run and none were usable because of emissions beyond the limits
<neggles> you end up having to do all the testing again, which takes another 4-8 months...
<neggles> if you were an evil electronics manufacturer aiming to get out ahead of a startup competitor, you could find the cheapest part on their public EMC-approval BOM with no acceptable substitute and buy up all the global stock of it...
<neggles> repeat that a couple of times and they'll go out of business
<dansan> Well my company is in the Iridium network business -- all of those details are beyond me, but I hear that it's a long process to get approved
<dansan> OMG!!!!!!!!!!!!
<neggles> the short version is, "everything is f#$%ed"
<dansan> That's SO evil, but I can see that happening!
<neggles> I'm sure it's already happened
<neggles> this is why just-in-time was never a good idea
<neggles> anyway.
<dansan> It just never gave me a warm fuzzy feeling in my chest.
<neggles> yeah
<neggles> no room for things to go wrong
<neggles> no room for people to mispredict demand
<neggles> especially when it's the people at the start of the production process, and ramping back up to full capacity is a 6-9 month process
<dansan> lol, ok I need to figure out this stupid "pkg_hash_check_unresolved: cannot find dependency kernel (= 5.4.143-1-996c65f322e8a783191e5195e3a4e5b7) for so-and-so" problem
<neggles> hm
<dansan> I build the whole tree and when it tries to install modules, it says the kernel doesn't match. But I have patches so I may have broken something. I should probably try to build straight from v21.02.0 -- just released today!
<neggles> ooh!
<dansan> come to think of it, I'm not sure why I haven't done that already
<neggles> interesting that it's 5.4.143
<neggles> an image i built from master this morning is 5.4.142
<neggles> i might be a bit behind though
<dansan> Well, it adds the hash -- I think based upon the sources. It ends up that the hash for the kernel packages doesn't match the hash in the control file "Depends: " lines
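(A sketch of checking that by hand, assuming opkg is available and a built kmod .ipk is on disk; OpenWrt .ipks are gzipped tarballs wrapping control.tar.gz:
    opkg info kernel | grep Version            # the hash the installed kernel actually carries
    tar -xzOf kmod-fs-configfs_*.ipk ./control.tar.gz | tar -xzO ./control | grep Depends
If the two hashes differ, the kmods were built against a different kernel config/source state than the installed kernel.)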
rua has quit [Ping timeout: 480 seconds]
<dansan> What part of OpenWrt generates the ipkg files, debian control files and such?
<neggles> target/package ?
rua has joined #openwrt-devel
<neggles> er
<neggles> package/x target?
<neggles> kmods themselves are made under target/linux i think
<neggles> but I am not really all that familiar with the build process
<dansan> hmm, I've only tried package/kmod-fs-configfs/compile, I'll try that
<dansan> And that's the really weird thing: target/linux makes them all, yet their hashes don't match
<dansan> Well, if I get the same issue building the upstream, then I can just post it on the forums or issue a bug report. If not, I suppose I can start bisecting patches :(
mangix has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
mangix has joined #openwrt-devel
rmilecki has joined #openwrt-devel
rua has quit [Ping timeout: 480 seconds]
rmilecki has quit [Ping timeout: 480 seconds]
rua has joined #openwrt-devel
<dansan> ah hah! it's all in include/package-ipkg.mk
<neggles> I wonder if I could get openwrt booting on this dopey picochip.
<neggles> i wouldn't have a snowflake's hope of getting the 3G femtocell stuff working, though. intel have that SDK locked up behind so many doors
rua has quit [Ping timeout: 480 seconds]
<digitalcircuit> For anyone following the ipq806x hard reboot issue (Deja Dup, CPU frequency, etc), I'm still diagnosing this. I've tried to do some of Ansuel's suggestions, and I'll be emailing the mailing list to ask for help: https://github.com/openwrt/openwrt/compare/master...digitalcircuit:ft-fix-ipq8065-reset
<digitalcircuit> I held off on emailing today because I've noticed that the crash still happens with "performance" CPU governor, i.e. locking both CPUs to 1.75 GHz. I need to expand my automated QA test script to recreate this crash without needing Deja Dup - I'm guessing making a bursty single-core (instead of both core) workload might do it.
<digitalcircuit> (QA scripts with documentation and such so far: https://github.com/digitalcircuit/openwrt-ipq806x-qa-cpu-reset )
robin_ has quit [Ping timeout: 480 seconds]
robin_ has joined #openwrt-devel
<mrkiko> digitalcircuit: good morning / whatever!
<mrkiko> digitalcircuit: does the crash happen with original firmware?
rua has joined #openwrt-devel
nitroshift has joined #openwrt-devel
<will[m]> <neggles> "at least it looks like ARM isn't..." <- You saw the news about ARM being basically taken over by China, and also the USA is restricting import of Chinese telecom gear right?
rua has quit [Ping timeout: 480 seconds]
<neggles> I had not, but from a quick glance it looks like that's only ARM's chinese, well, arm :P and ARM themselves are not actually all that upset about it
<neggles> there's been updates to some news posts in the last 24h or so
<neggles> while there's definitely some shenanigans going on it doesn't really affect other people's ability to use ARM ISAs and core designs
<neggles> I'll be a lot more worried if nVidia manage to convince everyone that letting them buy ARM is a good idea
decke has joined #openwrt-devel
rua has joined #openwrt-devel
Kali_ has joined #openwrt-devel
<Kali_> Heyo, i could use some help regarding reading named config sections (config in the style of the rulesets of /etc/config/upnpd). I couldn't find a proper source example as to how this should be done. Anyone run into this before?
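(A minimal sketch using the shell helpers from /lib/functions.sh; upnpd's rulesets are anonymous "config perm_rule" sections, so config_foreach is the usual way to walk them. Option names below are taken from a typical /etc/config/upnpd:
    . /lib/functions.sh

    handle_rule() {
        local cfg="$1"                     # section id, auto-generated for anonymous sections
        local action ext_ports
        config_get action    "$cfg" action
        config_get ext_ports "$cfg" ext_ports
        echo "rule $cfg: action=$action ext_ports=$ext_ports"
    }

    config_load upnpd
    config_foreach handle_rule perm_rule
For named sections, config_get works the same way with the literal section name in place of "$cfg".)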
pmelange has joined #openwrt-devel
guerby_ has quit [Read error: Connection reset by peer]
guerby has joined #openwrt-devel
nitroshift has quit [Remote host closed the connection]
Rentong has joined #openwrt-devel
Rentong has quit [Remote host closed the connection]
hexagonwin[m] has quit [Ping timeout: 480 seconds]
f12 has quit []
f12 has joined #openwrt-devel
hexagonwin[m] has joined #openwrt-devel
danitool has joined #openwrt-devel
pmelange1 has joined #openwrt-devel
pmelange has quit [Read error: Connection reset by peer]
Rentong has joined #openwrt-devel
gladiac has quit [Remote host closed the connection]
gladiac has joined #openwrt-devel
rua has quit [Ping timeout: 480 seconds]
Rentong has quit [Ping timeout: 480 seconds]
rua has joined #openwrt-devel
Rentong has joined #openwrt-devel
norris has quit []
norris has joined #openwrt-devel
Rentong has quit [Ping timeout: 480 seconds]
Rentong has joined #openwrt-devel
fda- has quit [Read error: Connection reset by peer]
fda has joined #openwrt-devel
Rentong has quit [Ping timeout: 480 seconds]
f00b4r0 has joined #openwrt-devel
Tapper has joined #openwrt-devel
goliath has joined #openwrt-devel
<nick[m]12> Any ideas what the next kernel version after 5.10 will be?
<nick[m]12> Is openwrt-21.02.0 already ready to test? I built an image using the ImageBuilder, flashed it on an ipq40xx, and ended up in a bootloop
<nick[m]12> Flashed back openwrt-21.02-SNAPSHOT with the ImageBuilder and everything was normal again
Rentong has joined #openwrt-devel
pmelange1 has left #openwrt-devel [#openwrt-devel]
<slh64> nick[m]12: 'last stable kernel at the end of the year', so probably 5.16
<nick[m]12> slh64 Nice thanks
<owrt-snap-builds> Build [#307](https://buildbot.openwrt.org/master/images/#builders/2/builds/307) of `layerscape/armv7` completed successfully.
Rentong has quit [Ping timeout: 480 seconds]
Rentong has joined #openwrt-devel
fda has quit [Read error: Connection reset by peer]
Rentong has quit [Remote host closed the connection]
Rentong has joined #openwrt-devel
Rentong has quit [Remote host closed the connection]
fda has joined #openwrt-devel
fda- has joined #openwrt-devel
Rentong has joined #openwrt-devel
fda has quit [Ping timeout: 480 seconds]
<Tapper> nick[m]12 probs kernel 5.15 if they make it an LTS
Rentong has quit [Ping timeout: 480 seconds]
Rentong has joined #openwrt-devel
Rentong has quit [Ping timeout: 480 seconds]
Kali_ has quit [Quit: leaving]
<neggles> would someone mind taking a look at a commit I've put together adding some new device support? or should I just open a PR?
Rentong has joined #openwrt-devel
minimal has joined #openwrt-devel
Rentong has quit [Ping timeout: 480 seconds]
fda- has quit [Ping timeout: 480 seconds]
decke has quit [Quit: Leaving.]
fda has joined #openwrt-devel
fda has quit [Remote host closed the connection]
fda has joined #openwrt-devel
jbowen has quit [Quit: leaving]
<neggles> urgh. capitalization bad in signed-off-by line.
fda has quit [Read error: Connection reset by peer]
fda has joined #openwrt-devel
<neggles> I'mma just open a PR :)
jbowen has joined #openwrt-devel
pmelange has joined #openwrt-devel
pmelange has left #openwrt-devel [#openwrt-devel]
pmelange has joined #openwrt-devel
pmelange has left #openwrt-devel [#openwrt-devel]
danitool has quit [Quit: Cubum autem in duos cubos, aut quadratoquadratum in duos quadratoquadratos]
new_guy_9055_ has joined #openwrt-devel
<new_guy_9055_> Hi, I've joined to ask where I should start to add support for a new cpu (Trend Chip TC3162U)
new_guy_9055_ has quit []
<rsalvaterra> Sweeet…! https://git.openwrt.org/?p=project/netifd.git;a=commitdiff;h=5ba9744aac6d42da1e56357aca951b52f86cfacb
jbowen has quit [Ping timeout: 480 seconds]
jbowen has joined #openwrt-devel
<fda> what's a good way to track uhttpd/lua failures?
<fda> i get this in syslog: https://pastebin.com/H5f2vCXs
<karlp> running it in the foreground can help sometimes
fda- has joined #openwrt-devel
<fda-> karlp: how to run in foreground? run the init script by "trace"?
pmelange has joined #openwrt-devel
fda has quit [Ping timeout: 480 seconds]
<karlp> look at what the actual uhttpd command line is with ps, stop it with /etc/init.d/uhttpd stop, and then run it by hand at the console?
<karlp> you can then sprinkle io.stderr:write("app go bang here\n") into the lua stuff you're trying to figure out
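(Concretely, something like the following, as a sketch; copy the exact flags from the ps output since they encode the listen ports and Lua prefix on your build:
    /etc/init.d/uhttpd stop
    uhttpd -f -p 0.0.0.0:80 -h /www -x /cgi-bin   # -f keeps it in the foreground
Any io.stderr:write() calls in the Lua code then land straight on your console.)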
<fda-> thx karlp!
Tapper has quit [Ping timeout: 480 seconds]
fda has joined #openwrt-devel
fda- has quit [Ping timeout: 480 seconds]
Acinonyx_ has joined #openwrt-devel
Acinonyx has quit [Ping timeout: 480 seconds]
<owrt-snap-builds> Build [#306](https://buildbot.openwrt.org/master/images/#builders/3/builds/306) of `at91/sam9x` completed successfully.
pmelange1 has joined #openwrt-devel
pmelange has quit [Read error: Connection reset by peer]
pmelange has joined #openwrt-devel
pmelange1 has quit [Ping timeout: 480 seconds]
paper_ has joined #openwrt-devel
paper_ has quit [Excess Flood]
_lore_ has quit [Ping timeout: 480 seconds]
owrt-snap-builds has quit [Ping timeout: 480 seconds]
_lore_ has joined #openwrt-devel
danitool has joined #openwrt-devel
owrt-snap-builds has joined #openwrt-devel
<will[m]> is there a good way of creating my own imagebuilder(s)? i see this old unmaintained repository from some third party, and i see this custom syntax for adding and removing packages from a build, but i'm very tempted to just create a dockerfile that pulls down the buildroot, grabs all the feeds, and is ready for me to execute a build with my custom package and custom .config
<will[m]> (futzing with my bespoke buildroot on my laptop for the last 3 days has me looking for alternatives)
<tmn505> will[m]: if You don't need any customisation of the toolchain, kernel or target, then grab the SDK, build Your packages, after that grab the ImageBuilder, point it at Your packages dir and create Your custom image.
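(As a sketch of that two-step flow, with the package and profile names as placeholders:
    # in the SDK: build the custom package into bin/packages/<arch>/
    ./scripts/feeds update -a && ./scripts/feeds install mypackage
    make package/mypackage/compile V=s

    # in the ImageBuilder: drop the resulting .ipk into its packages/ dir, then
    make image PROFILE=my-board PACKAGES="mypackage luci" FILES=files/
FILES= overlays a local directory onto the rootfs and is optional.)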
Tapper has joined #openwrt-devel
<tmn505> hauke: please backport cbdd2b62e4d5e0572204c37d874d32dc8610840e to 21.02 before release.
<PaulFertser> But it's already tagged...
<tmn505> ahh indeed
<tmn505> then I guess 21.02.1 it is
Borromini has joined #openwrt-devel
<will[m]> tmn505: i do customize the toolchain and target (explicitly removing a few files in a hackish way, and heavily customizing what kmods / packages / configs / etc are selected in .config)
<will[m]> that's why i'm delving into "make my own imagebuilder??"
<will[m]> "or say screw it and make a docker image of the buildroot?"
<fda> karlp: the exception was caused by an adblocker of the webbrowser ^^
paper_ has joined #openwrt-devel
dannyAAM has quit [Quit: znc.saru.moe : ZNC 1.6.2 - http://znc.in]
dannyAAM has joined #openwrt-devel
<tmn505> will[m]: then You have to either create Your own pair of IB and SDK or use buildroot as usual
jbowen has quit [Quit: leaving]
<pmelange> There are docker images of the sdk, imagebuilder and rootfs https://hub.docker.com/u/openwrtorg
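(A hedged example of driving one of those images; the tag and the in-container working directory are illustrative and worth checking against the image docs:
    docker run --rm -v "$PWD/bin":/home/build/openwrt/bin \
        openwrtorg/imagebuilder:x86-64 \
        make image PROFILE=generic PACKAGES="luci -ppp"
The PACKAGES list uses the ImageBuilder syntax mentioned above: bare names add a package, a leading "-" removes one.)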
<will[m]> right so if i wanted to create my own imagebuilder, is there an advised way of doing that? like i said i saw old projects like this https://github.com/wlanslovenija/firmware-core but it seems outdated or irrelevant
<will[m]> the default imagebuilder seems to not take kindly to much customization, and anyway uses a totally different syntax for choosing packages
<pmelange> You can take a look at what we do in Freifunk-Falter https://github.com/Freifunk-Spalter/repo_builder
<pmelange> This one just builds our custom feed.
<pmelange> I believe the imagebuilder is used here https://github.com/Freifunk-Spalter/builter
pmelange has left #openwrt-devel [#openwrt-devel]
pmelange has joined #openwrt-devel
<will[m]> hmm so pmelange it seems like Freifunk-Falter is essentially openwrt with a few files patched and a custom package list? what if i needed to do something like CONFIG_LIBCURL_OPENSSL=y, does that mean i need to start one level higher up?
fda- has joined #openwrt-devel
fda has quit [Ping timeout: 480 seconds]
<pmelange> You are more or less right. You can do CONFIG_LIBCURL_OPENSSL with the sdk. You just need to build the packages you need with that option. Then create your own feed. The rest is the imagebuilder.
<will[m]> hmm yeah i guess a docker image with the buildroot is where i'm leaning in the end
Borromini has quit [Quit: Lost terminal]
fda has joined #openwrt-devel
<will[m]> i'd be creating a dozen feeds for a dozen modified packages when buildroot can do it all in one (maybe longer, but simpler) go
fda- has quit [Ping timeout: 480 seconds]
<karlp> only needs to be one feed.
pmelange has left #openwrt-devel [#openwrt-devel]
<will[m]> hmmmmm
<ldir> curiosity - why doesn't sysupgrade run /etc/rc.d/K[0-9][0-9]* ?
Tapper has quit [Ping timeout: 480 seconds]
<mangix> hrm mt76 with mt7915 doesn't seem to be able to inject packets
<fda> ldir: don't know, i've extended the sysupgrade script to stop/backup some missing things...
<fda> does IPv6 + multiple vlans work for someone with openwrt? i tested many different settings, even on different devices connected to different providers.
<fda> but everywhere the same result: on every vlan every subnet is announced (5 vlans = 5 ipv6 subnets for each client). of course only 1 of all the ipv6s actually works on a given vlan
<fda> did i do something elementary wrong? or does it just not work with more than 1 (ipv6) lan?
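(For comparison, the usual per-VLAN setup, as a sketch assuming the ISP delegates at least a /60 so every interface can take its own /64; the interface names are placeholders:
    uci set network.lan.ip6assign='64'
    uci set network.guest.ip6assign='64'    # hypothetical second VLAN interface
    uci commit network && /etc/init.d/network restart
Each interface with ip6assign set should then get and announce a distinct sub-prefix; RAs for one VLAN's prefix showing up on another usually points at bridged VLANs or a shared downstream interface.)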
<will[m]> docker buildroot: running. ho ho ho.
lmore377_ has joined #openwrt-devel
lmore377 has quit [Read error: Connection reset by peer]