<JaMa>
vmeson: pushed another result with cpu regulation enabled, swap spike seems even smaller now, so it seems working and just a few minutes extra, now triggering the same build without any swap to see if it survives now
<vmeson>
JaMa: Progress! We'll review the situation in the morning.
sakoman has quit [Ping timeout: 252 seconds]
R0b0t1 has quit [Ping timeout: 252 seconds]
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
zkrx has quit [Ping timeout: 252 seconds]
sakoman has joined #yocto
zkrx has joined #yocto
rber|res has joined #yocto
RobertBerger has quit [Ping timeout: 240 seconds]
seninha has quit [Quit: Leaving]
Starfoxxes has joined #yocto
starblue has quit [Ping timeout: 268 seconds]
starblue has joined #yocto
agupta1_ has joined #yocto
brazuca has quit [Quit: Client closed]
agupta1_ has quit [Ping timeout: 245 seconds]
tlwoerner_ has joined #yocto
tlwoerner has quit [Ping timeout: 244 seconds]
amitk has joined #yocto
tlwoerner_ has quit [Ping timeout: 268 seconds]
sakoman has quit [Quit: Leaving.]
alessioigor has joined #yocto
beneth has quit [Read error: Connection reset by peer]
alessioigor has quit [Quit: alessioigor]
goliath has joined #yocto
hcg has joined #yocto
gsalazar has quit [Ping timeout: 240 seconds]
<LetoThe2nd>
yo dudX
gsalazar has joined #yocto
frieder has joined #yocto
zpfvo has joined #yocto
goliath has quit [Quit: SIGSEGV]
Vonter has quit [Ping timeout: 245 seconds]
vquicksilver has quit [Ping timeout: 245 seconds]
goliath has joined #yocto
Vonter has joined #yocto
vquicksilver has joined #yocto
florian has joined #yocto
mihai has quit [Quit: Leaving]
denisoft81 has joined #yocto
ardo has quit [Read error: Connection reset by peer]
prabhakarlad has joined #yocto
ardo has joined #yocto
stuom has joined #yocto
<qschulz>
o/
<stuom>
Hi, the yocto live coding video inspired me to try the extensible SDK. I have a problem: it seems to install "rpm", which depends on "db", and that installs a header with the same name as one I already have from another package. My question is, is RPM needed for/by the eSDK? Can it be disabled? I have always used OPKG before in "regular" builds and have not had
<stuom>
this issue.
denisoft81 has quit [Quit: Leaving]
<LetoThe2nd>
stuom: you mean, it builds AND installs rpm even if you set package_ipk?
jclsn has joined #yocto
<stuom>
yes I have package_ipk and I get do_sdk_depends: The file /usr/include/db.h is installed by both myproject and db, aborting. I don't know if it means it installs RPM, but it seems to install its dependency db?
<stuom>
If I remove the db.h in a bbappend, rpm fails to build
<stuom>
So I thought my options are either to rename that header all over in my project, or see if RPM can be removed from the process
<LetoThe2nd>
stuom: well to be honest - if your project is installing such a generic header name to the top level include path, improving that to avoid clashes might be a good idea anyways.
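[One low-churn way to follow that advice, as a sketch: install the project's header into its own include subdirectory from a bbappend so it no longer collides with Berkeley DB's /usr/include/db.h. The recipe name and paths here are hypothetical.]

```bitbake
# myproject_%.bbappend -- illustrative only; adjust to the real recipe name.
do_install:append() {
    # Move the generically named header into a project-specific directory;
    # consumers then include <myproject/db.h> instead of <db.h>.
    install -d ${D}${includedir}/myproject
    mv ${D}${includedir}/db.h ${D}${includedir}/myproject/
}
```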
leon-anavi has joined #yocto
<RP>
oh wow. rust is deadlocking pseudo in malloc :(
<stuom>
LetoThe2nd yes that is true. I was hoping to find a less disruptive way to solve this. I can suggest the renaming if RPM is really needed
Thomas62 has joined #yocto
<LetoThe2nd>
stuom: well it's not exactly about rpm, but more like the name clashing with berkeley db as i understand it. so it will collide whenever somebody wants to use your code on a system that also has db installed.
protu[m] has joined #yocto
mvlad has joined #yocto
<agherzan>
Hi all. There seems to be a change in behavior of how we error/warn for bbappend. dunfell was erroring out on dangling bbappends while kirkstone only warns on this. I know there is a BB_DANGLINGAPPENDS_WARNONLY variable for it but was this change of behavior expected?
<JaMa>
agherzan: it's configurable and some layers had it in layer.conf
<agherzan>
So was that expected? Did we want to change that?
<JaMa>
e.g. BB_DANGLINGAPPENDS_WARNONLY = "true" in meta-arm-toolchain/conf/layer.conf
<agherzan>
Having it in layer.conf doesn't sound great either
<agherzan>
Because it again means that importing a layer will change parsing behaviour
<JaMa>
in meta-arm it's already removed in master branch
<JaMa>
if that's the reason for you as well, we can poke rburton again to cherry-pick it to kirkstone
<agherzan>
But again, my question is about the intent of this. Was this expected to be a migration item or is it a bug?
<agherzan>
I'm not sure I get it.
<agherzan>
Are you talking about core now?
<JaMa>
it's toxic behaviour of that layer, that's why I've reported it to ross
<agherzan>
Was it enabled by default in master?
<JaMa>
no it's not by default, but you might include some layer like meta-arm-toolchain which does that for you, check your bitbake -e
<agherzan>
So does the core now default to BB_DANGLINGAPPENDS_WARNONLY = 0 ?
<agherzan>
OK, so we have a bug, and in theory oe-core should default to BB_DANGLINGAPPENDS_WARNONLY = 0 , right?
<RP>
agherzan: core has never set it. We can't control other layers :(
<JaMa>
rburton: please cherry-pick 232dc039d09267e6d27e60931d0355d308bdbbf4 to kirkstone branch as well
<agherzan>
True, but bitbake changed behaviour
<RP>
JaMa: rburton is away
<RP>
agherzan: bitbake did not change behaviour
<JaMa>
ACK
<RP>
JaMa: try jonmason
<JaMa>
sent it over e-mail
<JaMa>
to both
<agherzan>
Oh I see. I think I got you now
* RP
wishes people wouldn't blame poor old bitbake for everything
<agherzan>
Makes sense now.
<agherzan>
I'll take the blame here.
<agherzan>
Sorry, bitbake!
<agherzan>
Thanks JaMa it is from meta-arm indeed
<JaMa>
poor old bitbake not written in rust :)
<RP>
JaMa: thankfully!
<JaMa>
:)
<agherzan>
Ahaha
<agherzan>
Also freeartos distro does it
<agherzan>
That's a bit annoying now.
<RP>
agherzan: I'm not very happy about that particular behaviour :(
<agherzan>
Me neither. It sounds orthogonal to layer.conf (at least)
<agherzan>
Anyway - at least now I'm clear and I can fix things around. I'll also propose to freeartos to drop it
<agherzan>
It broke some of my CI because this was a safety net we assumed for handling dangling bbappends
<agherzan>
I'll have to make sure this is always set now to not risk having layers or distros override it
<RP>
agherzan: forcevariable it to what you want ;-)
<agherzan>
Yup
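[For reference, the forcevariable approach RP suggests can be pinned in local.conf. A sketch; the empty value restores the default error-on-dangling behaviour:]

```bitbake
# local.conf -- forcevariable is the highest-priority override, so this
# wins over any layer.conf that sets BB_DANGLINGAPPENDS_WARNONLY.
BB_DANGLINGAPPENDS_WARNONLY:forcevariable = ""
```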
<agherzan>
Cheers guys. All set now. I'll go shoot some patches
<RP>
We need to do something better with it. Maybe someone could open a bug about the poor behaviour?
<agherzan>
I feel that should be me
<agherzan>
I'll do it
<JaMa>
maybe make it more like BBMASK, so that it would list only the .bbappends which might be dangling?
ptsneves has joined #yocto
<RP>
JaMa: or maybe just make it apply to the layer
<RP>
agherzan: thanks
Thomas62 has quit [Quit: Ping timeout (120 seconds)]
<JaMa>
but even inside one layer you might have only some bbappends which are allowed to be dangling, while you still want to error for others; well, I never really needed it, I could always rename the bbappend or use DYNAMIC or something to prevent this
<agherzan>
^ that. No idea why you would need this.
<agherzan>
I think it is a path that would lead to issues.
<JaMa>
then it might be some variable inside the .bbappend itself for easier use
<JaMa>
hmm I take it back as someone might need BB_DANGLINGAPPENDS_WARNONLY to work around some external (unmodified) layer to parse in his build
<JaMa>
but then I would probably use BBMASK anyway
<RP>
removing it entirely is kind of tempting
<JaMa>
you don't want to see how creatively some people use BBMASK in lge :/
<RP>
JaMa: no, I suspect I really don't! :)
pgowda_ has joined #yocto
peter43233 has joined #yocto
goliath has quit [Quit: SIGSEGV]
<agherzan>
My eyes were exposed.
goliath has joined #yocto
Starfoxxes has quit [Ping timeout: 268 seconds]
<agherzan>
JaMa: Did you say you already proposed a backport in meta-arm for kirkstone? If not, I will do it now
ardo has quit [Read error: Connection reset by peer]
ardo has joined #yocto
ardo has quit [Read error: Connection reset by peer]
Guest0 has joined #yocto
peter43233 has quit [Ping timeout: 252 seconds]
ardo has joined #yocto
mihai has joined #yocto
ardo has quit [Read error: Connection reset by peer]
ardo has joined #yocto
ardo has quit [Read error: Connection reset by peer]
ardo has joined #yocto
<ThomasRoos[m]>
what is the correct way to send an updated version of a patch?
<ThomasRoos[m]>
- respond to the email of the original send patch? to have it in the thread?
<ThomasRoos[m]>
- where to set [PATCH v2] ? should this be done in patch git commit comment?
<otavio>
Hello all; I am building an SDK where I am adding wxwidgets. It builds fine but it doesn't install wx-config in the native sysroot. Do I need to create a wrapper for it?
<qschulz>
ThomasRoos[m]: git format-patch -v2 adds the v2 in the right place for you
<qschulz>
ThomasRoos[m]: send the v2 in a new thread, so just use git send-email but do not give the original message-id when asked
<qschulz>
otavio: are you adding the native version of wxwidgets to your SDK or the target one (or both)?
<qschulz>
otavio: because to have something in the native sysroot, you need the native version of a recipe, not the target
<qschulz>
(so, in DEPENDS have a recipe that ends with -native, don't know how you add native tools to an SDK though, TOOLCHAIN_HOST_TASK maybe?)
peter43233 has joined #yocto
<otavio>
qschulz: in target
<qschulz>
otavio: if you want to be able to use wx-config on the host, you need the native recipe in your SDK
<otavio>
qschulz: but it should have a cross wx-config
<otavio>
to allow linking with it
Guest14 has joined #yocto
Guest14 is now known as brazuca
<agherzan>
Drop source seems to have been unavailable for a good while for me. Is it something we are aware of?
ardo has quit [Read error: Connection reset by peer]
<qschulz>
otavio: then you need a dependency on the native version of the recipe that provides wx-config
<qschulz>
and you need to configure it correctly so that it does cross-configuration
<qschulz>
which might not be supported and then you'll have to work on adding support
<qschulz>
or use the hammer solution which is to use qemu on the target wx-config
seninha has joined #yocto
manuel1985 has joined #yocto
ardo has joined #yocto
agupta1_ has joined #yocto
marka has quit [Ping timeout: 255 seconds]
brazuca has quit [Ping timeout: 252 seconds]
pgowda_ has quit [Quit: Connection closed for inactivity]
<JPEW>
Just out of curiosity, the mkstemp() race I was seeing was happening when I had 200-400 parallel builds; is anyone else doing builds at that scale?
<JaMa>
jonmason: yes, it was a request for backport to kirkstone; what other e-mail format do you expect for backports?
<JaMa>
jonmason: using [branch] in subject seems to be the common practice for all layers I'm using
<jonmason>
JaMa: it wasn't obvious to me what was being asked, since the patch didn't come from you
<jonmason>
It's early in the morning, brain fog :)
<JaMa>
:)
<jonmason>
JaMa: I have the patch in my kirkstone-next branch (for running CI)
<jonmason>
Now I just need denix to ack pulling it back to dunfell
<LetoThe2nd>
denix: howdy! do you happen to be aware of a usable BSP for the TDA4VM (orwhatsitcalled) thing on the BBAI-64?
aleblanc[m] has joined #yocto
sakoman has joined #yocto
prabhakarlad has quit [Quit: Client closed]
<bigendian>
hi, i have 5 machine .conf files; in a recipe i want to append some patches to SRC_URI only for 2 of them. should i duplicate all the patch lists in two SRC_URI_append_xxx, or is there a better way?
<qschulz>
bigendian: use a MACHINEOVERRIDES for this if it makes sense
<qschulz>
you add it to your two machines and then use SRC_URI_append_machine-override
<qschulz>
(machine-override replaced by whatever you added to MACHINEOVERRIDES)
<bigendian>
qschulz: thanks a lot !
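[Sketching what qschulz describes; the override name "mypatches" and the patch filenames are made up:]

```bitbake
# In each of the two machine .conf files that need the patches:
MACHINEOVERRIDES =. "mypatches:"

# In the recipe (or a bbappend), one append then covers both machines:
SRC_URI_append_mypatches = " file://0001-some-fix.patch"
# (on honister and newer the spelling is SRC_URI:append:mypatches)
```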
<RP>
"Use syscall(2) rather than {open,read,close}(2) when possible to avoid reentry during bootstrapping if another library has interposed system call wrappers." - except we wrap syscall() :/
hcg has quit [Quit: Client closed]
Guest14 has joined #yocto
Guest14 is now known as brazuca
<otavio>
qschulz: on OE env it works ... wx-config ends in bindir_crossscripts and binconfig class does the trick
gsalazar has quit [Ping timeout: 240 seconds]
<RP>
JPEW: we've seen very occasional issues like that on the autobuilder, it could explain a few things
<RP>
JPEW: I wonder if the code in sstate.bbclass has the same issue though?
<RP>
JPEW: we use a non-linux NAS on the autobuilder FWIW
<JPEW>
Do you happen to know what it is?
<JPEW>
I don't think the NAS would matter so much as the NFS server
gsalazar has joined #yocto
<RP>
JPEW: well, the NFS server runs on the NAS. I think it may be TrueNAS, BSD based
<JaMa>
we had similar scale before with servers accessing sstate over NFS and people over CIFS, but it was on old RedHat with 2.6 kernel, so it probably worked just by happy coincidence
gsalazar has quit [Remote host closed the connection]
gsalazar has joined #yocto
<JPEW>
JaMa: I _think_ that the Linux Kernel NFS would be immune to the problem
<JPEW>
NFS server that is
<JaMa>
ah I've read only your last message, didn't know the context
peter43233 has quit [Remote host closed the connection]
gsalazar has quit [Remote host closed the connection]
peter43233 has joined #yocto
<bigendian>
is it possible to set some variables in a class depending on the machine type ?
<qschulz>
bigendian: depends excactly what you want to do but FOO:machine-type = "something" replaces FOO
<qschulz>
and this should tackle the *multiple* gotchas around them
goliath has quit [Quit: SIGSEGV]
zpfvo has quit [Ping timeout: 268 seconds]
manuel_ has joined #yocto
whuang0389 has joined #yocto
manuel1985 has quit [Ping timeout: 252 seconds]
alessioigor has joined #yocto
alessioigor has quit [Client Quit]
prabhakarlad has joined #yocto
manuel_ has quit [Ping timeout: 268 seconds]
<whuang0389>
Hi, I have a recipe to build a javascript app. I'm trying to append to the compile process: do_compile_append() { echo "test" }, but I get a syntax error: echo "test" SyntaxError: invalid syntax. Not sure what's going on here
zpfvo has joined #yocto
<LetoThe2nd>
whuang0389: if i had to guess, then linebreaks and spacing.
<whuang0389>
I tried copying over a compile_append from another recipe that works. I get the same issue. Not sure if it has something to do with inheriting from npm?
<denix>
jonmason: it should be fine for kirkstone and dunfell
<denix>
LetoThe2nd: j721e-evm in meta-ti
peter43233 has quit [Remote host closed the connection]
brazuca has joined #yocto
ardo has quit [Ping timeout: 245 seconds]
goliath has joined #yocto
<rfs613>
whuang0389: it looks like the npm package uses python function (not shell) for building. So in your append, you would also need to use (correctly indented) python.
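[So, going by rfs613's diagnosis, the append would look something like this sketch: since the class provides do_compile as a python task, the append body must be python, which is why a shell `echo` produces SyntaxError:]

```bitbake
# Python-flavoured append matching a python do_compile task;
# the body is indented python code, not shell.
python do_compile_append() {
    bb.note("test")
}
# (spelled "python do_compile:append()" with the new override syntax)
```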
manuel1985 has joined #yocto
<RP>
JaMa: Do you know much about how the upstream ROS community view YP/OE ?
<JaMa>
RP: meta-ros wasn't updated since my last changes in December 2021, then LGE left the OSRF TSC (https://discourse.ros.org/t/os-2-tsc-meeting-january-20th-2022/23986/2); some people offered to maintain it or help maintain it, but the few PRs which were created still weren't reviewed and merged, so it has just bitrotted for the last 9 months and nobody seems to care enough to revive it
<RP>
JaMa: right, so there isn't direct interest upstream it was you/LGE that were driving it? :(
<JaMa>
from actual users of meta-ros I never got much feedback, there were only a few people ever submitting some PRs, more people submitting issue tickets, but ROS is very "wide" a lot of recipes in multiple different distributions, so I wasn't ever able to test any significant portion of that (other than build testing) and the users on the other hand were just picking smaller random pieces they needed for their
<JaMa>
project (which might work perfectly fine for them or maybe they just copied the recipes to their layer and never reported back)
<JaMa>
I think OSRF itself was interested to get it supported on another platform and LGE was willing to do that to fulfil its TSC membership
<JaMa>
I'm still following some OE related topics on their discourse, but there hasn't been any traffic in recent months (either they don't tag it as openembedded anymore or there's really no update)
<RP>
JaMa: seems a little sad it is languishing like that :(
ptsneves has quit [Ping timeout: 268 seconds]
<JaMa>
agreed, but honestly I don't miss working on it too much, as it's just too many quickly changing parts for 1 person to keep at least buildable (and with little confidence about it working at all at runtime)
<JaMa>
7610 recipes (mostly generated with superflore tool) which get upgraded every few weeks
zpfvo has quit [Quit: Leaving.]
<RP>
JaMa: right, if there isn't an interested group it would be too hard :(
<moto-timo>
yeah, I contemplated picking it up, but it's a larger matrix than I want to tackle for customers that only need one or two versions.
<moto-timo>
of course, if someone wants to contract Konsulko to maintain it, that is a different story
<moto-timo>
also, their licenses are all old style and not SPDX yet
<moto-timo>
the dreaded "BSD"... which one? random
<JaMa>
BSD is at least a license, they also use "WTF" "TBA" "Check author's email" as well :)
<JaMa>
BTW in the current build time benchmark I've noticed this in webkit-gtk's log.do_compile; unfortunately build reproducibility probably won't catch these, as the build servers might not have gstreamer installed: cc1plus: warning: include location "/usr/include/gstreamer-1.0" is unsafe for cross-compilation [-Wpoison-system-directories]
<JaMa>
didn't we have a QA check grepping the logs for unsafe which made this fatal error?
<JaMa>
hmm it's in meta/classes/insane.bbclass but only for config.log when autotools is used
<JaMa>
should I add similar check to do_install[postfuncs] and do_compile[postfuncs]?
frieder has quit [Remote host closed the connection]
<JaMa>
RP: vmeson: now I'm importing all the pressure data from these builds' buildstats to Excel to see how it changes (and if there is some expression we can use to find an optimal threshold - the maximum doesn't work well, but maybe some top percentile or something might be interesting)
brazuca has quit [Ping timeout: 252 seconds]
<JaMa>
I still don't understand why we regulate based on the difference in total pressure instead of the total value (my understanding is that this regulates against steep spikes, not against high pressure in general), so there is a steep spike at the beginning of the build (because it starts from 0 and we shouldn't limit that one), while when it is really busy already even a small increase in pressure is bad
<RP>
JaMa: the value increments over time, it never goes down
GregWildonLindbe has joined #yocto
<RP>
JaMa: the averages weren't reactive enough for us which is why we use the raw data
<JaMa>
ah it's not total pressure, but total absolute stall time (in us), so the difference is the "pressure"
<JaMa>
yes, I should have read it first :)
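[To make that arithmetic concrete, a small sketch (not bitbake's actual code) of reading the cumulative total= field from a /proc/pressure line and regulating on the per-epoch difference; the sample lines and the 1000 µs threshold are illustrative:]

```python
import re

def parse_total(psi_line):
    """Extract the cumulative stall time in microseconds from one
    /proc/pressure/{cpu,io,memory} line."""
    return int(re.search(r"total=(\d+)", psi_line).group(1))

def pressure_exceeded(prev_total, cur_total, max_stall_us):
    # The counter only ever grows, so the regulator looks at the stall
    # time accrued since the previous sample (the delta), not the raw total.
    return (cur_total - prev_total) > max_stall_us

t0 = "some avg10=0.00 avg60=0.00 avg300=0.00 total=2000"
t1 = "some avg10=0.12 avg60=0.05 avg300=0.01 total=3500"
print(pressure_exceeded(parse_total(t0), parse_total(t1), 1000))  # → True
```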
<GregWildonLindbe>
I would like to set up a mariadb login user in Yocto and I don't even know if it is possible, is there anyone here who understands the mariadb configuration well enough to be able to help me?
<vmeson>
Oh, and agupta1_ and I were just looking at how the current simple delta-total algorithm referencing the previous second isn't ideal. I'd expect it'll let more work start than it could or should.
<vmeson>
It's still better than nothing but we're trying to tune it without making it lag too much like it did with avg10.
<JaMa>
I don't understand the last graph; why is it showing only the values during the 2nd hour of the build time (starting from 3400s to 5000s)?
<RP>
Initialising tasks...done.
<RP>
Bitbake still alive (no events for 600s). Active tasks:
<RP>
how can it sit like that for 600s :/
<RP>
vmeson: wonder what the pressure is on fedora36-ty-3.yocto.io :)
<vmeson>
JaMa: the last graph with the blue boxes, has a Y axis that is the total build time for core-image-minimal on a modest 24 core? machine as a function of the pressure regulation on the x axis.
<vmeson>
RP: pheh, it's a new distro that is causing your 600s with no active tasks...
* vmeson
takes a look at the worker briefly for fun.
<RP>
vmeson: 1200s now!
<vmeson>
some pressure is 0 to a few percent.
<RP>
vmeson: I was also looking, doesn't seem that high
<vmeson>
Right, not high. There is a bit of memory pressure which I try to avoid... or at least I don't see much of it on the machines I use directly.
<vmeson>
RP: the flood gates seem to have opened and that box is bitbaking like mad now.
<RP>
it is very slowly stating files on the nas
<RP>
vmeson: the edgerouter build isn't
<vmeson>
I have a complex graph to make... priorities!
<RP>
halstead: is the nas ok?
<halstead>
RP: Checking.
<halstead>
RP: No data errors. IO is quite high right now.
<RP>
halstead: [pokybuild@fedora36-ty-3 ~]$ strace -p 2281051 shows it taking around half a second per file stat on the NAS :/
<RP>
halstead: could be it is just heavily loaded, just seems unusual. I've never seen a build spend 1800 seconds and not start building anything!
<halstead>
RP: nfs is certainly working hard. yocto-autobuilder-helper/scripts/generate-testresult-index.py is running on the NAS and using a lot of IO as well.
<RP>
halstead: ah, that could be slowing everything down
<RP>
halstead: build is away now
<halstead>
RP: I'm going to stop it and see if that helps.
<RP>
halstead: I'm afraid I lost my illustration of the problem now!
<halstead>
RP: io pressure is still very high without the index generation running.
<vmeson>
pics added to that folder - I'll explain when I"m back from an errand.
<RP>
halstead: I guess we're just putting the system under a lot of load. I thought I'd better mention in case it was useful/interesting to look at it live
<halstead>
RP: nfs is doing about 10000 transactions per second which is pretty high and about 800mbps in read-write throughput which isn't an issue.
<diamondman>
I would like to better understand the BBEXTENDS behavior, but I don't know how to debug the native_virtclass_handler etc event handlers in the bbclasses to see how they work. I can hack some stuff together to write out results in a directory I can introspect, but I have to assume there is a better way to handle this than shots in the dark or gross hacks. Does anyone have suggestions?
<RP>
halstead: Interesting. I'd think it should be calming down a bit now
florian has quit [Ping timeout: 268 seconds]
<halstead>
RP: it is much calmer. between 30% and 80% busy instead of 80%-95%
<halstead>
RP: I need to visit the data center to replace the debian perf worker's HDD.
<RP>
halstead: it can wait until maint tomorrow if that helps
<halstead>
RP: If it can wait, I'll do it tonight after traffic dies down.
<RP>
halstead: that should be fine. I just wanted to flag it needed attention
<RP>
halstead: sorry to distract but it is useful to know we're loading the nas that heavily :/
<JaMa>
vmeson: ah I see, so X is BB_PRESSURE_MAX_CPU value used in the build (not the collected pressure values from /proc/pressure nor their diff values)
<halstead>
RP: It is often at a fairly high load. I do want to know when stats are taking .5s each though.
<JaMa>
and that's where the 1000 magic number came from, as the lowest value (highest regulation) without causing too significant a build time increase
<vmeson>
so if anyone is interested, the "Delta Pressure over 1 second epoch" graphs reflect how (poorly?) the current algorithm works; we just use 1 fixed reference from 'the previous second', so the delta time is > 0 and < 1 in theory
<vmeson>
in practice there are some times when delta T is > 1 second - not much to do in the build then maybe.
<vmeson>
I don't understand the 1/10 of a second banding but I don't think it's that important
<vmeson>
Rather than a fixed reference, we need a sliding window? More tomorrow.
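[One shape a sliding window could take, purely as a sketch of the idea, not a proposed patch: keep the last few (timestamp, cumulative-stall) samples and rate the stall across the whole window, smoothing single-sample spikes without avg10's ~10 s lag. The window length and sample values are made up:]

```python
from collections import deque

def windowed_stall_rate(window, now, total):
    """Append the newest (time, cumulative-stall) sample and return the
    average stall accrued per second across the whole window."""
    window.append((now, total))
    t0, s0 = window[0]
    t1, s1 = window[-1]
    if t1 == t0:
        return 0.0
    return (s1 - s0) / (t1 - t0)

w = deque(maxlen=5)  # sliding window of the last 5 one-second samples
for t, s in [(0, 0), (1, 800), (2, 2600), (3, 2700), (4, 2750)]:
    rate = windowed_stall_rate(w, t, s)
print(rate)  # (2750 - 0) / (4 - 0) = 687.5
```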
florian has quit [Ping timeout: 252 seconds]
mvlad has quit [Remote host closed the connection]
<vmeson>
JaMa: that's the context. This data includes that patch. The current approach works but may need some tuning. I think we need more averaging, but not so much that we end up close to the avg10 high-latency response.