ndec changed the topic of #yocto to: "Welcome to the Yocto Project | Learn more: https://www.yoctoproject.org | Join us or Speak at Yocto Project Summit (2022.11) Nov 29-Dec 1, more: https://yoctoproject.org/summit | Join the community: https://www.yoctoproject.org/community | IRC logs available at https://www.yoctoproject.org/irc/ | Having difficulty on the list or with someone on the list, contact YP community mgr ndec"
Tokamak has joined #yocto
sr2 has joined #yocto
vmeson has quit [Ping timeout: 260 seconds]
vmeson has joined #yocto
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 256 seconds]
nemik has joined #yocto
davidinux has quit [Ping timeout: 268 seconds]
davidinux has joined #yocto
sakoman has joined #yocto
qschulz has quit [Remote host closed the connection]
qschulz has joined #yocto
sr277 has joined #yocto
davidinux has quit [Ping timeout: 256 seconds]
sr2 has quit [Ping timeout: 260 seconds]
davidinux has joined #yocto
sr277 is now known as sr2
seninha has quit [Remote host closed the connection]
jclsn has quit [Ping timeout: 260 seconds]
jclsn has joined #yocto
ak77 has quit [Ping timeout: 252 seconds]
sakoman has quit [Quit: Leaving.]
gho has quit [Ping timeout: 268 seconds]
amitk has joined #yocto
gho has joined #yocto
sgw has joined #yocto
Wouter0100 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100 has joined #yocto
alessioigor has joined #yocto
alessioigor has quit [Client Quit]
PhoenixMage has quit [Ping timeout: 260 seconds]
PhoenixMage has joined #yocto
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
rob_w has joined #yocto
Payam has quit [Quit: Leaving]
sgw has quit [Quit: Leaving.]
tre has joined #yocto
<jclsn> JPEW: Morning, I think for controlmasters to work SSH would need to bind the .ssh directory as readwrite, because it needs to put some files in ~/.ssh/controlmasters
goliath has joined #yocto
<jclsn> Tried removing the readonly option, but that doesn't work
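(For the log: OpenSSH lets ControlPath point anywhere, so one hedged workaround, assuming the read-only ~/.ssh bind mount is the culprit, is to keep the multiplexing sockets outside ~/.ssh entirely:)

```
# ~/.ssh/config — sketch; moves the ControlMaster sockets out of the
# read-only bind-mounted ~/.ssh so ssh can still create them
Host *
    ControlMaster auto
    ControlPath /tmp/ssh-mux-%r@%h-%p
    ControlPersist 10m
```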
Schlumpf has joined #yocto
frieder has joined #yocto
mrnuke has quit [Ping timeout: 252 seconds]
frieder has quit [Client Quit]
frieder has joined #yocto
mrnuke has joined #yocto
mckoan|away is now known as mckoan
<mckoan> good morning
<jclsn> Morning
<jclsn> qschulz: Those commands are for bitbake I guess. Do you actually have to set up an ssh-agent first before using it? Because there is no SSH_AUTH_SOCK in my environment
<PhoenixMage> Hey mckoan
manuel_ has joined #yocto
tre has quit [Remote host closed the connection]
<jclsn> JPEW: I also wonder why I am getting this message "error: cannot run ssh: No such file or directory". That is without the ssh agent even
zpfvo has joined #yocto
<PhoenixMage> Is it possible to include the sha for a file in the SRC_URI?
tre has joined #yocto
<mckoan> PhoenixMage: what do you want to do?
mvlad has joined #yocto
amitk_ has joined #yocto
<PhoenixMage> mckoan: Not have to give a bunch of remote patches a name but still validate them
<PhoenixMage> Just found my answer in the bitbake docs :)
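(The bitbake-manual answer PhoenixMage refers to is the per-URI checksum parameter; a sketch with a placeholder URL and digest:)

```
# remote patches need no name= entry if the checksum rides on the URI
SRC_URI += "https://example.com/fix-build.patch;sha256sum=<sha256 of the patch>"
```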
zpfvo has quit [Quit: Leaving.]
zpfvo has joined #yocto
amitk has quit [Ping timeout: 268 seconds]
kanavin_ has joined #yocto
kanavin has quit [Ping timeout: 256 seconds]
zpfvo has quit [Ping timeout: 256 seconds]
zpfvo has joined #yocto
moto-timo has quit [Read error: Software caused connection abort]
moto-timo has joined #yocto
zkrx has quit [Ping timeout: 240 seconds]
zpfvo has quit [Ping timeout: 256 seconds]
zpfvo has joined #yocto
mthenault has joined #yocto
zkrx has joined #yocto
<jclsn> JPEW: So this is my pyrex setup script now https://pastebin.com/raw/pnLtGZnJ
<jclsn> Still no Pyrex for me :(
florian has joined #yocto
<sa7mfo> what about if I want to set EXTRA_OEMAKE for a specific machine only? EXTRA_OEMAKE:machine does not work?
<qschulz> sa7mfo: it should, so what makes you say it doesn't?
<qschulz> (looking for pointers on how to debug it with you basically, weird way of phrasing it sorry)
prabhakarlad has joined #yocto
d-fens has joined #yocto
<sa7mfo> maybe it was something else.. I will have to dig a bit
<qschulz> sa7mfo: bitbake-getvar -r <recipe> EXTRA_OEMAKE to see what the value should be (after it's resolved for the machine)
amitk has joined #yocto
<sa7mfo> thank you qschulz
<sa7mfo> qschulz: it does not seem to work btw
<sa7mfo> to set EXTRA_OEMAKE:machine
<qschulz> sa7mfo: we need more info than "it does not work" :)
<sa7mfo> or wait.. it does
<sa7mfo> how to append the EXTRA_OEMAKE to EXTRA_OEMAKE:machine?
<qschulz> sa7mfo: what are you trying to do, what have you done so far, what did you expect to happen, what is happening instead
<qschulz> sa7mfo: what you want is EXTRA_OEMAKE:append:machine I assume
<qschulz> you want to only add something for a specific machine?
<qschulz> but keep the original content?
<sa7mfo> Yes exactly, thank you, that did the trick!
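(The exchange above, condensed into a hedged recipe sketch; the machine and flag names are illustrative:)

```
# recipe or bbappend — keep the base EXTRA_OEMAKE, add per-machine flags
EXTRA_OEMAKE = "CROSS_COMPILE=${TARGET_PREFIX}"
EXTRA_OEMAKE:append:mymachine = " EXTRA_FLAG=1"

# check the resolved value, as qschulz suggested:
#   bitbake-getvar -r <recipe> EXTRA_OEMAKE
```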
florian_kc has joined #yocto
rusam has joined #yocto
rusam has quit [Client Quit]
yann has quit [Ping timeout: 248 seconds]
<ramacassis[m]> wow, thanks to you I just remembered the Yocto summit taking place in a few weeks! :)
<ramacassis[m]> there are really great subjects, do you know if some talks will then be made available somewhere ?
<qschulz> ramacassis[m]: they are usually available on Youtube after some time IIRC
seninha has joined #yocto
<ramacassis[m]> awesome, I will try to attend (at least part of) the event, thanks for the link !
GuestNew has joined #yocto
<GuestNew> Hi Yocto Wizards, I would like to remove a variable (EXT_DTB) from EXTRA_OEMAKE (set by the u-boot recipe). So I created a bbappend on it and tried EXTRA_OEMAKE:remove = "EXT_DTB=${STAGING_DIR_TARGET}*.dtb" but the wildcard seems not to be supported. Any tips?
<qschulz> GuestNew: find how the EXT_DTB is passed to EXTRA_OEMAKE and use the exact same line with the same variables in it
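(:remove deletes literal space-separated items after expansion, with no globbing, so the bbappend must repeat the recipe's expression verbatim; the path below is hypothetical:)

```
# u-boot_%.bbappend — sketch; copy the exact EXT_DTB=... text,
# unexpanded variables and all, from the recipe that set it
EXTRA_OEMAKE:remove = "EXT_DTB=${STAGING_DIR_TARGET}/boot/my-board.dtb"
```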
d-fens has quit [Ping timeout: 260 seconds]
zpfvo has quit [Ping timeout: 260 seconds]
d-fens has joined #yocto
<d-fens> hi, new to yocto and started my first raspi4 build on a WSL2 ubuntu. I thought it would load the CPU like my gentoo box does at compile time, but my quite beefy machine hung under massive SSD I/O load. Is my (default?) WSL config bad, or what do I have to do to not crash my machine again?
sr2 has quit [Quit: Client closed]
BobPungartnik has joined #yocto
zpfvo has joined #yocto
BobPungartnik has quit [Client Quit]
amitk has quit [Ping timeout: 240 seconds]
<rburton> d-fens: reduce BB_NUMBER_THREADS and PARALLEL_MAKE, they default to the number of cores.
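(rburton's advice as a local.conf fragment; the counts are illustrative, and both variables default to the host's core count:)

```
# conf/local.conf — throttle parallelism to tame CPU and I/O load
BB_NUMBER_THREADS = "4"
PARALLEL_MAKE = "-j 4"
```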
<d-fens> great, will try thanks
<d-fens> related: the crashed build left '/home/user/mender-raspberrypi/build/tmp/work/x86_64-linux/libpcre2-native/10.40-r0/image install' in an invalid state. How can I delete/force a rebuild of this or any package?
<d-fens> i guess it wants to install but it's not compiled properly yet
<qschulz> d-fens: remove all of build/tmp ?
<JaMa> bitbake -c cleansstate libpcre2-native
<qschulz> or that, but if you had multiple recipes building at the same time when the crash happened, removing the tmp directory entirely could be a good start
tre has quit [Ping timeout: 268 seconds]
<qschulz> and if it still does not work, probably rebuilding from scratch is going to be easier than iterating over all failures?
GuestNew has quit [Quit: Client closed]
<d-fens> the cleansstate command worked, thanks!
<JaMa> depends on what he was building; throwing away an almost-built chromium after it fails with the OOM killer is often too much of a PITA :)
manuel_ has quit [Ping timeout: 240 seconds]
alessioigor has joined #yocto
alessioigor has quit [Quit: alessioigor]
MrFrank has quit [Remote host closed the connection]
alessioigor has joined #yocto
MrFrank has joined #yocto
d-s-e has joined #yocto
Schlumpf has quit [Quit: Client closed]
<PhoenixMage> Is there a way in yocto to auto generate a custom boot.scr? Adding a boot.cmd file to the u-boot recipe for example?
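(One common pattern, sketched as a hypothetical bbappend: wrap a boot.cmd with U-Boot's mkimage. The file names, architecture flag, and task wiring are assumptions, not something a stock layer ships:)

```
# u-boot_%.bbappend — hypothetical sketch
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://boot.cmd"
DEPENDS += "u-boot-mkimage-native"

do_compile:append() {
    # -T script turns plain U-Boot commands into a boot.scr image
    mkimage -A arm -T script -C none -n "boot script" \
        -d ${WORKDIR}/boot.cmd ${B}/boot.scr
}

do_deploy:append() {
    install -m 0644 ${B}/boot.scr ${DEPLOYDIR}/
}
```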
zpfvo has quit [Ping timeout: 240 seconds]
seninha has quit [Remote host closed the connection]
tre has joined #yocto
prabhakarlad has quit [Quit: Client closed]
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
zpfvo has joined #yocto
amitk has joined #yocto
manuel_ has joined #yocto
amitk has quit [Ping timeout: 260 seconds]
alessioigor has quit [Quit: alessioigor]
mthenault_ has joined #yocto
mthenault has quit [Ping timeout: 260 seconds]
Net147 has quit [Quit: Quit]
amitk has joined #yocto
Net147 has joined #yocto
Net147 has joined #yocto
Net147 has quit [Changing host]
d-fens has quit [Quit: Client closed]
Schlumpf has joined #yocto
rob_w has quit [Quit: Leaving]
zpfvo has quit [Ping timeout: 240 seconds]
amitk has quit [Ping timeout: 268 seconds]
zpfvo has joined #yocto
Payam has joined #yocto
Guest13 has joined #yocto
<Guest13> hi everyone.
<Guest13> i created an image for coral dev board. the layers i use are freescale community(kirkstone) and meta-coral(github.com/mirzak/meta-coral/tree/wip/kirkstone-upstream)
<Guest13> when i plug in hdmi, nothing appears, only a black screen. i connected with minicom on ttyUSB0 and when i look at the logs, i don't see any hdmi-related output such as an error, warning, or failure. why is hdmi not working?
<Guest13> log : pastebin.com/8pWPD3uS
amitk has joined #yocto
Schlumpf has quit [Quit: Ping timeout (120 seconds)]
sakoman has joined #yocto
Wouter0100 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100 has joined #yocto
Tokamak has quit [Quit: Tokamak]
Guest13 has quit [Quit: Client closed]
Payam has quit [Quit: Client closed]
amitk has quit [Ping timeout: 260 seconds]
dacav has quit [Quit: leaving]
<JPEW> jclsn: Hmm, it looks OK to me
<JPEW> Do you have any useful error messages if you try to ssh inside of `pyrex-shell` ?
Payam has joined #yocto
<vvn> JPEW: I see, per multiconfig BBMASK with one distro per layer should do the trick then
mthenault__ has joined #yocto
mthenault_ has quit [Ping timeout: 260 seconds]
d-s-e has quit [Ping timeout: 256 seconds]
<vvn> I'm using INITRAMFS_MULTICONFIG to use musl as the initrd, but the parsing fails because of a package which isn't compatible with musl, but this package isn't part of the initramfs image. Am I missing something?
d-s-e has joined #yocto
amitk has joined #yocto
<vvn> to be precise, it's because of wvdial which is part of the main image (not core-image-minimal-initramfs) which is incompatible with musl
<vvn> how can I prevent the initramfs multiconfig to error out on this package?
kscherer has joined #yocto
amitk has quit [Ping timeout: 248 seconds]
amitk_ has quit [Remote host closed the connection]
amitk has joined #yocto
Schlumpf has joined #yocto
amelius has joined #yocto
Schlumpf has quit [Client Quit]
<vvn> with INITRAMFS_MULTICONFIG set, in fact bitbake errors out with: ERROR: An uncaught exception occurred in runqueue and ERROR: Running idle function
tre has quit [Remote host closed the connection]
mthenault__ has quit [Ping timeout: 260 seconds]
d-s-e has quit [Quit: Konversation terminated!]
amelius has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
<kergoth> https://www.yoctoproject.org/software-overview/downloads/ does not list 4.0.5, https://docs.yoctoproject.org/4.0.5/migration-guides/release-notes-4.0.4.html shows 4.0.5 in the version selector, but not the release notes, yet https://wiki.yoctoproject.org/wiki/Releases says 4.0.5, and there was an email announcement. Was 4.0.5 released or not?
<kergoth> It doesn't seem like this release was done properly
prabhakarlad has joined #yocto
Payam has quit [Ping timeout: 260 seconds]
<qschulz> kergoth: release notes are broken by design unfortunately, but https://docs.yoctoproject.org/dev/ should have the release-notes and it does not
<qschulz> michaelo: ^
<qschulz> kergoth: in short, the first issue is that we tag releases before the release-notes are added to the docs
<kergoth> Ah, that's unfortunate. Only one piece of it, though. Really we should have all our ducks in a row before it's announced, yeah?
<qschulz> second, I believe we should have the release notes of all releases available in the docs of any release, but it does not work this way currently and the solution is not obvious to me
<vvn> do you guys use INITRAMFS_MULTICONFIG?
<kergoth> Definitely still missing the downloads page update too this time around though
<qschulz> kergoth: this one yes, and I think ndec may be responsible for this one? or halstead maybe?
<RP> halstead: See above, was there an issue with 4.0.5?
prabhakarlad has quit [Quit: Client closed]
prabhakarlad has joined #yocto
Brian79 has joined #yocto
gho has quit [Quit: Leaving.]
zpfvo has quit [Remote host closed the connection]
manuel_ has quit [Ping timeout: 252 seconds]
<vvn> I have this stacktrace when using INITRAMFS_MULTICONFIG = "initrd": https://pastebin.com/raw/D94tXyD8
amitk_ has joined #yocto
frieder has quit [Remote host closed the connection]
<vvn> is it a bitbake or meta-ti issue maybe?
mckoan is now known as mckoan|away
<RP> vvn: bitbake :(
<RP> vvn: well, bitbake shouldn't be giving a traceback like that but a readable error message. It is likely a configuration from meta-ti as "initrd" isn't available
amitk_ has quit [Remote host closed the connection]
<vvn> RP: I have a <my-layer>/conf/multiconfig/initrd.conf though and I can bitbake mc:initrd:somerecipe
<RP> vvn: I can't comment without going into the details of how it is configured, too many things could be happening
<denix> vvn: is your <somerecipe> an image that provides a do_image_complete task?
<vvn> denix: INITRAMFS_IMAGE is core-image-minimal-initramfs
<denix> vvn: it's only d.appendVarFlag('do_bundle_initramfs', 'mcdepends', ' mc::${INITRAMFS_MULTICONFIG}:${INITRAMFS_IMAGE}:do_image_complete')
<vvn> I'm currently bitbaking mc:initrd:core-image-minimal-initramfs, so I guess the linux-ti-staging recipe is tweaking the initramfs tasks?
<denix> vvn: it shouldn't
prabhakarlad has quit [Quit: Client closed]
<vvn> yeah it doesn't seem to tweak the recipe much. I don't understand the stack trace though
<denix> vvn: "grep -ir init meta-ti" shows nothing
<Saur[m]> RP: Regarding the problem I spoke about yesterday with "Initializing tasks" hanging for about a minute halfway through, I have figured out what is happening, but I cannot understand why it started just some weeks ago.
<vvn> denix: RP: INITRAMFS_MULTICONFIG="default" in local.conf is enough to reproduce the stack trace
<vvn> denix: are you able to test this?
<RP> Saur[m]: if you describe it I can see if it triggers any memories
<Saur[m]> After a lot of debugging I figured that what is happening is that bitbake sends the workerdata to the worker. In my case that is about 40 MB, sent over a pipe. The second thing I realized was that the worker was receiving this as 8 kB chunks. This seemed odd as the default buffer for pipes is 64 kB. After studying the docs some more, it turns out that only applies as long as I haven't more than 1024 pipes in total. With the use of lsof, I figured
<Saur[m]> that I had more than 8000 pipes active, and 7.5k of those belonged to Firefox. When I terminated Firefox, the one minute stall was gone too.
<denix> vvn: can you try do_bundle_initramfs[mcdepends] += "mc::initrd:core-image-minimal-initramfs:do_image_complete" in the kernel recipe?
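(For reference, a minimal wiring of the setup vvn describes; the file contents are a sketch, and note a multiconfig needs its own TMPDIR:)

```
# conf/multiconfig/initrd.conf — hypothetical musl initrd config
TCLIBC = "musl"
TMPDIR = "${TOPDIR}/tmp-initrd"

# conf/local.conf
BBMULTICONFIG += "initrd"
INITRAMFS_IMAGE = "core-image-minimal-initramfs"
INITRAMFS_IMAGE_BUNDLE = "1"
INITRAMFS_MULTICONFIG = "initrd"
```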
<denix> vvn: and which branch is it?
<Saur[m]> Now the weird part is that I have tried downgrading Firefox to a version from August, but I still have the same problem. And I have been using Firefox as long as I can remember, so why has it become a problem now? And it's (probably) not due to visiting some new site, because I rarely use that browser as I mainly use one on another computer, so the set of open tabs has been fairly static...
<vvn> denix: kirkstone
<RP> Saur[m]: I wonder how big workerdata is in core vs your local setup
<RP> Saur[m]: i.e. did workerdata suddenly get a lot larger?
<Saur[m]> @RP: I can find out...
<vvn> denix: same stack trace with the explicit mcdepends
<RP> Saur[m]: it is interesting that it slows down so much with that change. Where is the 1024 limit documented out of interest?
d4rkn0d3z has joined #yocto
<Saur[m]> RP: Here are the sizes of the individual members of my workerdata: https://pastebin.com/UqyCvDcy
<Saur[m]> I.e., I have taken each member and pickled them individually.
<d4rkn0d3z> Hi, I'm new to Yocto and I am trying to change a device tree node via a patch file. How can I check if the patch was applied besides flashing it to the device and checking if the change works?
<RP> Saur[m]: did you change python versions?
<RP> Saur[m]: I'm surprised by some of those sizes, we should be able to improve that
<Saur[m]> This is with Python 3.10.
<RP> Saur[m]: I should be able to come up with a small patch to test
prabhakarlad has joined #yocto
<Saur[m]> RP: Here are the numbers for a core-image-minimal for qemux86-64: https://pastebin.com/MPp3YYeN
<RP> Saur[m]: how many tasks in this vs the other one?
<RP> Saur[m]: I'm shocked at how large the fakerootenv is but because it is all an unsplit string, it may be hard to optimise.
<RP> Saur[m]: I'm working on compression of some of the other bits already for other reasons
florian has quit [Quit: Ex-Chat]
florian_kc has quit [Ping timeout: 260 seconds]
<fray> On a relatively "modern" machine, how long does oe-selftest -a tend to take?
<Saur[m]> RP: Umm, how do I see the number of tasks you ask for? Is it the number of Targets reported after parsing (in that case it is 7635 vs 4086).
<RP> Saur[m]: I'm thinking of the number it would print when running, i.e. Task 3 of 4000
<RP> Saur[m]: workerdata probably should scale by number of tasks
<RP> or perhaps number of recipes involved
<Saur[m]> Of course. Then tasks are 16640 vs 4552. And recipes are 4713 vs 2625 (I have a couple of the OE layers included in addition to poky).
<RP> Saur[m]: right, then the data structures aren't scaling too badly for 16,000 vs 4,500 tasks
<RP> Saur[m]: I tried locally with master, https://git.yoctoproject.org/poky-contrib/commit/?h=rpurdie/t222 and that gives a file 9693166 with "bitbake core-image-minimal -n" and 12007242 core-image-sato
<RP> python 3.10.6
Xagen has joined #yocto
<Saur[m]> Compressing all of my 40 MB workerdata using default gzip takes 0.5 s and unpacking takes 0.14 s. The resulting file is 10% of the original, so it should transfer in a fraction of the original time. Could be a simple improvement. Then of course, if you can reduce the size of the underlying structures, there would be a win all over.
<RP> Saur[m]: if your IPC is that compromised you'll still have problems elsewhere though
<Xagen> when building `linux-yocto` i see that there's a `vmlinux-<kernel version>-yocto-standard` image in both `image/boot` and `package/boot`, but it never makes it into the packaged sdk
<Xagen> is there something i can turn on to have it packaged?
<vvn> denix: are you able to try with INITRAMFS_MULTICONFIG="default"?
kevinrowland has joined #yocto
<Saur[m]> RP: Btw, the documentation about the pipe buffer size is found in `man 7 pipe`, specifically `/proc/sys/fs/pipe-user-pages-soft`.
roussinm has joined #yocto
<roussinm> The build directory I'm using for my yocto builds is on zfs. This patch https://lists.openembedded.org/g/openembedded-core/message/140897 seems to work; otherwise do_image_wic fails on the rootfs. I see that the patch never made it in, so I'm wondering what could be wrong with it?
<RP> Saur[m]: I'm wondering if we should increase the pipe size with F_SETPIPE_SZ (or at least try to)?
<Saur[m]> RP: Btw, adding `zlib.compress()` for the workerdata reduced the time for the Initializing tasks from 17 to 8 seconds for my build (with 64 kB buffers).
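(Saur's measurement is easy to demo in miniature; an illustrative Python sketch with a synthetic, repetitive payload standing in for pickled workerdata:)

```python
import time
import zlib

# Synthetic stand-in for ~40 MB of pickled workerdata; real data
# compressed to ~10% in Saur's test, repetitive data does far better
payload = b"do_compile do_install FAKEROOTENV PSEUDO " * 100_000  # ~4 MB

start = time.monotonic()
packed = zlib.compress(payload)  # default level, as in Saur's experiment
elapsed = time.monotonic() - start

assert zlib.decompress(packed) == payload  # lossless round-trip
print(f"ratio {len(packed) / len(payload):.3f} in {elapsed:.2f}s")
```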
<Saur[m]> RP: I tried to increase the buffer using F_SETPIPE_SZ to 1 MB (the maximum), but even if fcntl() said it succeeded, the used buffer was still 64 kB. But I may have missed something somewhere.
<RP> Saur[m]: how did you determine the used buffer? strace?
<halstead> qschulz: I've updated the taxonomy so 4.0.5 will show up soon. Normally the RE does that.
<Saur[m]> I have some printf style debugging in the worker (writing to a /tmp file).
<Saur[m]> I just printed the sizes of the read data in serve()
Brian79 has quit [Quit: Client closed]
<RP> halstead: thanks!
<RP> kergoth: see above for 4.0.5
rob_w has joined #yocto
<kergoth> thanks
roussinm has quit [Ping timeout: 256 seconds]
roussinm has joined #yocto
roussinm has quit [Quit: WeeChat 3.3-dev]
florian_kc has joined #yocto
rsalveti has joined #yocto
Haxxa has quit [Quit: Haxxa flies away.]
Haxxa has joined #yocto
adams[1] has joined #yocto
<adams[1]> who adds the meta layers in bblayers.conf file ?
kevinrowland has quit [Quit: Client closed]
amitk has quit [Ping timeout: 268 seconds]
sakoman has quit [Quit: Leaving.]
mkorpershoek has quit [Ping timeout: 255 seconds]
mkorpershoek has joined #yocto
<paulg> https://wiki.yoctoproject.org/wiki/Reproducible_Builds <--- needs a section on how to kill it with fire.
<LetoThe2nd> adams[1]: usually you?
<adams[1]> LetoThe2nd: figured it out thanks.
<adams[1]> In one of the bb files I have this: PACKAGE_INSTALL += " stress-ng i2c-tools libgpiod-tools"... can I add one more meta layer and, in one of the bbappends, remove i2c-tools?
<adams[1]> or is there any other elegant solution?
sakoman has joined #yocto
<LetoThe2nd> adams[1]: this sounds like you are mixing some things up. the bb (usually called a recipe) has nothing to do with the layers. you can add/modify the data in recipes by so-called appends, e.g. recipe-like files with the .bbappend ending.
u1106 has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
u1106 has joined #yocto
roussinm has joined #yocto
manuel_ has joined #yocto
Wouter0100 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100 has joined #yocto
<RP> paulg: it is a wiki
<RP> paulg: and you probably just want to kill the kernel timestamp bit
<paulg> yeah, I went and read the kernel class to remind me of the variable name for that. But a customer sounds like they want the old BUILD_REPRODUCIBLE_BINARIES=0 behaviour
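(Hedged note: the kernel-class variable paulg alludes to is, if memory serves, KERNEL_DEBUG_TIMESTAMPS from oe-core's kernel.bbclass; setting it restores real build timestamps in the kernel banner at the cost of reproducibility:)

```
# conf/local.conf — sketch; variable name taken from kernel.bbclass
KERNEL_DEBUG_TIMESTAMPS = "1"
```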
<RP> paulg: we removed a lot of that choice since there wasn't a good reason to have two codepaths to maintain
<paulg> that was what I was afraid of.
<RP> paulg: did they say specifically what they didn't want/like?
<paulg> they used the kernel as their example (I'm not surprised) but they generalized to saying that setting build date+time to some fixed preset was "the same as removing this data entirely".
<RP> paulg: that is the key thing, we don't use "some fixed preset", we preserve most timestamps
<RP> paulg: we clamp the values to an upper limit and we have a deterministic value for the upper limit based on the incoming sources
<RP> effectively, nothing is removed other than the variance that comes from the time it was built
<paulg> I don't have any more info to go on, other than the vague suggestion it will break build and release process for several customers.
<paulg> and who knows - people do lots of crazy stuff.
<paulg> yeah - I'll leave it for vmeson and the userspace people to decide what to do. I gave a solution for the kernel. :-P
fitzsim has quit [Read error: Connection reset by peer]
<vvn> JPEW: so BBMASK += "meta-distro-I-want-to-ignore" in conf/multiconfig/foo.conf should suffice?
<paulg> I'm guessing trying a revert of those two would open a giant can of worms...
<JPEW> vvn: With some caveats, yes
<JPEW> BBMASK doesn't mask everything (like classes IIRC)
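(JPEW's caveat in sketch form: BBMASK is a list of regexes matched against recipe file paths, so classes and conf files are not masked; the layer name is illustrative:)

```
# conf/multiconfig/foo.conf — hypothetical; hides the layer's
# .bb/.bbappend files from this multiconfig only
BBMASK += "meta-distro-I-want-to-ignore/"
```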
<RP> paulg: you'd get to keep all the pieces
<paulg> and the ashes from when it inevitably catches fire.
<paulg> anyway, thanks for helping me confirm BUILD_REPRODUCIBLE_BINARIES really was dead and buried with kirkstone
sr2 has joined #yocto
florian_kc is now known as florian
<LetoThe2nd> fire? we burn something?
rob_w has quit [Quit: Leaving]
Algotech75 is now known as Algotech
Tokamak has joined #yocto
manuel_ has quit [Ping timeout: 256 seconds]
sr2 has quit [Ping timeout: 260 seconds]
<RP> Saur[m]: I logged the read sizes on my system and they're mostly around the 500k mark. I also noticed python 3.10 allows a pipesize argument to subprocess and setting that to 1MB did result in fewer reads
florian has quit [Ping timeout: 268 seconds]
<RP> Saur[m]: my conclusion is that instead of bundling this all into workerdata, we should send the individual fn entries at fork_off_task time
<RP> Saur[m]: the worker only needs the bulky info at task exec time
gsalazar has joined #yocto
<Saur[m]> RP: I tried adding `pipesize=1024*1024` to the Popen calls in runqueue.py. If I am in the situation with more than 1024 open pipes, then I get an EPERM. Otherwise the setting is accepted, but I still do not see reads > 64 kB. :(
<RP> Saur[m]: hmm, not good :/
<Saur[m]> I wonder what's the difference between your system and mine that makes it go above 64 kB without any hassle.
roussinm has quit [Quit: WeeChat 3.0]
<RP> Saur[m]: kernel version?
<Saur[m]> 5.18
<RP> Saur[m]: 5.15
<RP> Saur[m]: https://git.yoctoproject.org/poky-contrib/commit/?h=rpurdie/t222&id=620adb1747f3bdd4dfd7a03d1e02c40349fed347 should help
demirok has joined #yocto
justache is now known as justGrit