GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
rob_w has joined #yocto
Payam has quit [Quit: Leaving]
sgw has quit [Quit: Leaving.]
tre has joined #yocto
<jclsn>
JPEW: Morning, I think for controlmasters to work SSH would need to bind the .ssh directory as readwrite, because it needs to put some files in ~/.ssh/controlmasters
goliath has joined #yocto
<jclsn>
Tried removing the readonly option, but that doesn't work
Schlumpf has joined #yocto
frieder has joined #yocto
mrnuke has quit [Ping timeout: 252 seconds]
frieder has quit [Client Quit]
frieder has joined #yocto
mrnuke has joined #yocto
mckoan|away is now known as mckoan
<mckoan>
good morning
<jclsn>
Morning
<jclsn>
qschulz: Those commands are for bitbake I guess. Do you actually have to set up an ssh-agent first before using it? Because there is no SSH_AUTH_SOCK in my environment
<ramacassis[m]>
awesome, I will try to attend (at least part of) the event, thanks for the link !
GuestNew has joined #yocto
<GuestNew>
Hi Yocto wizards, I would like to remove a variable (EXT_DTB) from EXTRA_OEMAKE (set by the u-boot recipe). So I created a bbappend for it and tried EXTRA_OEMAKE:remove = "EXT_DTB=${STAGING_DIR_TARGET}*.dtb", but the wildcard seems not to be supported. Any tips?
<qschulz>
GuestNew: find how the EXT_DTB is passed to EXTRA_OEMAKE and use the exact same line with the same variables in it
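A hedged sketch of what qschulz describes — the exact right-hand side depends on how the BSP's u-boot recipe builds the string, so the path below is only a placeholder:

```bitbake
# u-boot_%.bbappend — sketch only; the .dtb path is a placeholder.
# :remove drops whole space-separated items after expansion (no
# wildcards), so this must expand to exactly the same string the
# original recipe appended to EXTRA_OEMAKE:
EXTRA_OEMAKE:remove = "EXT_DTB=${STAGING_DIR_TARGET}/boot/example.dtb"
```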
d-fens has quit [Ping timeout: 260 seconds]
zpfvo has quit [Ping timeout: 260 seconds]
d-fens has joined #yocto
<d-fens>
hi, new to Yocto; I started my first raspi4 build on WSL2 Ubuntu. I thought it would put load on the CPU like my Gentoo box does during compiles, but my quite beefy machine hung under a massive SSD I/O load. Is my (default?) WSL config bad, or what do I have to do to not crash my machine again?
nemik has quit [Ping timeout: 260 seconds]
nemik has joined #yocto
sr2 has quit [Quit: Client closed]
BobPungartnik has joined #yocto
zpfvo has joined #yocto
BobPungartnik has quit [Client Quit]
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
amitk has quit [Ping timeout: 240 seconds]
<rburton>
d-fens: reduce BB_NUMBER_THREADS and PARALLEL_MAKE, they default to the number of cores.
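For example, in local.conf (the values are illustrative — pick what your machine's I/O can sustain):

```bitbake
# local.conf — both default to the number of host cores;
# lowering them reduces peak CPU and disk load
BB_NUMBER_THREADS = "4"
PARALLEL_MAKE = "-j 4"
```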
<d-fens>
great, will try thanks
<d-fens>
related: the crashed build left '/home/user/mender-raspberrypi/build/tmp/work/x86_64-linux/libpcre2-native/10.40-r0/image install' in an invalid state; how can I delete/force a rebuild of this or any package?
<d-fens>
i guess it wants to install but it's not compiled properly yet
<qschulz>
d-fens: remove all of build/tmp ?
<JaMa>
bitbake -c cleansstate libpcre2-native
<qschulz>
or that, but if you had multiple recipes building at the same time when the crash happened, removing the tmp directory entirely could be a good start
tre has quit [Ping timeout: 268 seconds]
<qschulz>
and if it still does not work, probably rebuilding from scratch is going to be easier than iterating over all failures?
GuestNew has quit [Quit: Client closed]
<d-fens>
the cleansstate command worked, thanks!
<JaMa>
depends on what he was building; throwing away an almost-built chromium after it fails with the OOM killer is often too much of a PITA :)
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
manuel_ has quit [Ping timeout: 240 seconds]
nemik has quit [Ping timeout: 260 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
alessioigor has joined #yocto
alessioigor has quit [Quit: alessioigor]
MrFrank has quit [Remote host closed the connection]
alessioigor has joined #yocto
MrFrank has joined #yocto
d-s-e has joined #yocto
Schlumpf has quit [Quit: Client closed]
<PhoenixMage>
Is there a way in yocto to auto generate a custom boot.scr? Adding a boot.cmd file to the u-boot recipe for example?
zpfvo has quit [Ping timeout: 240 seconds]
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
seninha has quit [Remote host closed the connection]
tre has joined #yocto
nemik has quit [Ping timeout: 256 seconds]
nemik has joined #yocto
prabhakarlad has quit [Quit: Client closed]
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
zpfvo has joined #yocto
amitk has joined #yocto
manuel_ has joined #yocto
amitk has quit [Ping timeout: 260 seconds]
alessioigor has quit [Quit: alessioigor]
mthenault_ has joined #yocto
mthenault has quit [Ping timeout: 260 seconds]
Net147 has quit [Quit: Quit]
amitk has joined #yocto
Net147 has joined #yocto
Net147 has joined #yocto
Net147 has quit [Changing host]
d-fens has quit [Quit: Client closed]
Schlumpf has joined #yocto
rob_w has quit [Quit: Leaving]
zpfvo has quit [Ping timeout: 240 seconds]
amitk has quit [Ping timeout: 268 seconds]
<dl9pf>
0+
zpfvo has joined #yocto
Payam has joined #yocto
Guest13 has joined #yocto
<Guest13>
hi everyone.
<Guest13>
i created an image for coral dev board. the layers i use are freescale community(kirkstone) and meta-coral(github.com/mirzak/meta-coral/tree/wip/kirkstone-upstream)
<Guest13>
when i plug in HDMI, nothing appears, only a black screen. I connected with minicom on ttyUSB0, and when I look at the logs I don't see any HDMI-related output such as errors, warnings, or failures. Why is HDMI not working?
<Guest13>
log : pastebin.com/8pWPD3uS
amitk has joined #yocto
Schlumpf has quit [Quit: Ping timeout (120 seconds)]
<JPEW>
Do you have any useful error messages if you try to ssh inside of `pyrex-shell` ?
nemik has joined #yocto
Payam has joined #yocto
<vvn>
JPEW: I see, per multiconfig BBMASK with one distro per layer should do the trick then
mthenault__ has joined #yocto
mthenault_ has quit [Ping timeout: 260 seconds]
d-s-e has quit [Ping timeout: 256 seconds]
<vvn>
I'm using INITRAMFS_MULTICONFIG to build the initrd against musl, but parsing fails because of a package that isn't compatible with musl, even though that package isn't part of the initramfs image. Am I missing something?
d-s-e has joined #yocto
amitk has joined #yocto
<vvn>
to be precise, it's because of wvdial which is part of the main image (not core-image-minimal-initramfs) which is incompatible with musl
<vvn>
how can I prevent the initramfs multiconfig from erroring out on this package?
kscherer has joined #yocto
nemik has quit [Ping timeout: 256 seconds]
amitk has quit [Ping timeout: 248 seconds]
nemik has joined #yocto
amitk_ has quit [Remote host closed the connection]
amitk has joined #yocto
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
Schlumpf has joined #yocto
amelius has joined #yocto
Schlumpf has quit [Client Quit]
<vvn>
with INITRAMFS_MULTICONFIG set, in fact bitbake errors out with: ERROR: An uncaught exception occurred in runqueue and ERROR: Running idle function
<kergoth>
It doesn't seem like this release was done properly
prabhakarlad has joined #yocto
Payam has quit [Ping timeout: 260 seconds]
<qschulz>
kergoth: release notes are broken by design unfortunately, but https://docs.yoctoproject.org/dev/ should have the release-notes and it does not
<qschulz>
michaelo: ^
<qschulz>
kergoth: in short, the first issue is that we tag releases before the release-notes are added to the docs
<kergoth>
Ah, that's unfortunate. Only one piece of it, though. Really we should have all our ducks in a row before it's announced, yeah?
<qschulz>
second, I believe we should have the release notes of all releases available in the docs of any release, but it does not work this way currently and the solution is not obvious to me
<vvn>
do you guys use INITRAMFS_MULTICONFIG?
<kergoth>
Definitely still missing the downloads page update too this time around though
<qschulz>
kergoth: this one yes, and I think ndec may be responsible for this one? or halstead maybe?
<RP>
halstead: See above, was there an issue with 4.0.5?
prabhakarlad has quit [Quit: Client closed]
prabhakarlad has joined #yocto
Brian79 has joined #yocto
gho has quit [Quit: Leaving.]
zpfvo has quit [Remote host closed the connection]
frieder has quit [Remote host closed the connection]
<vvn>
is it a bitbake or meta-ti issue maybe?
mckoan is now known as mckoan|away
<RP>
vvn: bitbake :(
<RP>
vvn: well, bitbake shouldn't be giving a traceback like that but a readable error message. It is likely a configuration from meta-ti as "initrd" isn't available
amitk_ has quit [Remote host closed the connection]
<vvn>
RP: I have a <my-layer>/conf/multiconfig/initrd.conf though and I can bitbake mc:initrd:somerecipe
<RP>
vvn: I can't comment without going into the details of how it is configured, too many things could be happening
<denix>
vvn: is your <somerecipe> and image and provides do_image_complete task?
<denix>
s/and/an
<vvn>
denix: INITRAMFS_IMAGE is core-image-minimal-initramfs
<denix>
vvn: it's only d.appendVarFlag('do_bundle_initramfs', 'mcdepends', ' mc::${INITRAMFS_MULTICONFIG}:${INITRAMFS_IMAGE}:do_image_complete')
<vvn>
I'm currently bitbaking mc:initrd:core-image-minimal-initramfs, so I guess the linux-ti-staging recipe is tweaking the initramfs tasks?
<denix>
vvn: it shouldn't
prabhakarlad has quit [Quit: Client closed]
<vvn>
yeah it doesn't seem to tweak the recipe much. I don't understand the stack trace though
<Saur[m]>
RP: Regarding the problem I spoke about yesterday with "Initializing tasks" hanging for about a minute halfway through, I have figured out what is happening, but I cannot understand why it started just some weeks ago.
<vvn>
denix: RP: INITRAMFS_MULTICONFIG="default" in local.conf is enough to reproduce the stack trace
<vvn>
denix: are you able to test this?
<RP>
Saur[m]: if you describe it I can see if it triggers any memories
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
<Saur[m]>
After a lot of debugging I figured out what is happening: bitbake sends the workerdata to the worker. In my case that is about 40 MB, sent over a pipe. The second thing I realized was that the worker was receiving it in 8 kB chunks. This seemed odd, as the default buffer for pipes is 64 kB. After studying the docs some more, it turns out that only applies as long as I don't have more than 1024 pipes in total. Using lsof, I figured
<Saur[m]>
that I had more than 8000 pipes active, and 7.5k of those belonged to Firefox. When I terminated Firefox, the one minute stall was gone too.
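Saur[m]'s observation can be reproduced with a few lines of Python (Linux-only; `F_GETPIPE_SZ` is exposed by the `fcntl` module from Python 3.10 onward, so a raw fallback value is used here):

```python
import fcntl
import os

# F_GETPIPE_SZ is a Linux-only fcntl command (see man 7 pipe);
# Python exposes it from 3.10, so fall back to the raw value otherwise.
F_GETPIPE_SZ = getattr(fcntl, "F_GETPIPE_SZ", 1032)

def pipe_capacity():
    """Return the kernel buffer size of a freshly created pipe, in bytes."""
    r, w = os.pipe()
    try:
        return fcntl.fcntl(r, F_GETPIPE_SZ)
    finally:
        os.close(r)
        os.close(w)

# Normally 65536; shrinks to a single page once a user holds more
# pipes than /proc/sys/fs/pipe-user-pages-soft allows.
print(pipe_capacity())
```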
<denix>
vvn: can you try do_bundle_initramfs[mcdepends] += "mc::initrd:core-image-minimal-initramfs:do_image_complete" in the kernel recipe?
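For context, a minimal sketch of the configuration pieces involved (the `initrd` multiconfig name is taken from the discussion above; exact variables may differ per branch):

```bitbake
# local.conf (sketch) — bundle an initramfs built under the
# "initrd" multiconfig into the kernel image:
BBMULTICONFIG = "initrd"
INITRAMFS_IMAGE = "core-image-minimal-initramfs"
INITRAMFS_IMAGE_BUNDLE = "1"
INITRAMFS_MULTICONFIG = "initrd"

# conf/multiconfig/initrd.conf (sketch) — the initrd-specific bits;
# a multiconfig with a different libc also needs its own TMPDIR:
# TCLIBC = "musl"
# TMPDIR = "${TOPDIR}/tmp-initrd"
```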
<denix>
vvn: and which branch is it?
<Saur[m]>
Now the weird part is that I have tried downgrading Firefox to a version from August, but I still have the same problem. And I have been using Firefox as long as I can remember, so why has it become a problem now? And it's (probably) not due to visiting some new site, because I rarely use that browser as I mainly use one on another computer, so the set of open tabs has been fairly static...
<vvn>
denix: kirkstone
<RP>
Saur[m]: I wonder how big workerdata is in core vs your local setup
<RP>
Saur[m]: i.e. did workerdata suddenly get a lot larger?
<Saur[m]>
@RP: I can find out...
<vvn>
denix: same stack trace with the explicit mcdepends
nemik has quit [Ping timeout: 260 seconds]
<RP>
Saur[m]: it is interesting that it slows down so much with that change. Where is the 1024 limit documented out of interest?
<Saur[m]>
I.e., I have taken each member and pickled them individually.
<d4rkn0d3z>
Hi, I'm new to Yocto and I am trying to change a device tree node via a patch file. How can I check if the patch was applied besides flashing it to the device and checking if the change works?
<RP>
Saur[m]: did you change python versions?
<RP>
Saur[m]: I'm surprised by some of those sizes, we should be able to improve that
<Saur[m]>
This is with Python 3.10.
<RP>
Saur[m]: I should be able to come up with a small patch to test
<Saur[m]>
Compressing all of my 40 MB workerdata using default gzip takes 0.5 s and unpacking it takes 0.14 s. The resulting file is 10% of the original, so it should transfer in a fraction of the original time. Could be a simple improvement. Then of course, If you can reduce the size of the underlying structures there would be a win all over.
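A toy illustration of the suggested compression step (the blob here is a small, highly repetitive stand-in for the real ~40 MB pickled workerdata, so the exact ratio will differ):

```python
import zlib

# Small, highly repetitive stand-in for the pickled workerdata blob.
workerdata = b"task dependency metadata " * 100_000  # ~2.4 MB

compressed = zlib.compress(workerdata)   # default level, comparable to gzip
restored = zlib.decompress(compressed)

assert restored == workerdata            # lossless round trip
print(f"{len(workerdata)} -> {len(compressed)} bytes")
```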
<RP>
Saur[m]: if your IPC is that compromised you'll still have problems elsewhere though
<Xagen>
when building `linux-yocto` i see that there's a `vmlinux-<kernel version>-yocto-standard` image in both `image/boot` and `package/boot`, but it never makes it into the packaged sdk
<Xagen>
is there something i can turn on to have it packaged?
<vvn>
denix: are you able to try with INITRAMFS_MULTICONFIG="default"?
kevinrowland has joined #yocto
<Saur[m]>
RP: Btw, the documentation about the pipe buffer size is found in `man 7 pipe`, specifically `/proc/sys/fs/pipe-user-pages-soft`.
roussinm has joined #yocto
<roussinm>
The build directory I'm using for my Yocto build is on ZFS. This patch https://lists.openembedded.org/g/openembedded-core/message/140897 seems to work; otherwise do_image_wic fails on the rootfs. I see that the patch never made it in, so I'm wondering what could be wrong with it?
<RP>
Saur[m]: I'm wondering if we should increase the pipe size with F_SETPIPE_SZ (or at least try to)?
<Saur[m]>
RP: Btw, adding `zlib.compress()` for the workerdata reduced the time for the Initializing tasks from 17 to 8 seconds for my build (with 64 kB buffers).
<Saur[m]>
RP: I tried to increase the buffer using F_SETPIPE_SZ to 1 MB (the maximum), but even if fcntl() said it succeeded, the used buffer was still 64 kB. But I may have missed something somewhere.
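For reference, a minimal sketch of the F_SETPIPE_SZ attempt (Linux-only; raw fcntl values are used as a fallback for Pythons older than 3.10). Note that a larger pipe capacity only raises the ceiling — the size of individual read() results still depends on how the writer writes, which may be why the observed reads stayed at 64 kB:

```python
import fcntl
import os

# Raw values from <linux/fcntl.h>, used as fallback before Python 3.10.
F_SETPIPE_SZ = getattr(fcntl, "F_SETPIPE_SZ", 1031)
F_GETPIPE_SZ = getattr(fcntl, "F_GETPIPE_SZ", 1032)

def grow_pipe(size=1024 * 1024):
    """Create a pipe and ask the kernel for a bigger buffer.

    Returns (granted, readback): fcntl() returns the capacity actually
    set (rounded up to a power of two). An unprivileged caller over the
    pipe-user-pages-soft limit gets EPERM instead.
    """
    r, w = os.pipe()
    try:
        granted = fcntl.fcntl(w, F_SETPIPE_SZ, size)
        return granted, fcntl.fcntl(r, F_GETPIPE_SZ)
    finally:
        os.close(r)
        os.close(w)
```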
<RP>
Saur[m]: how did you determine the used buffer? strace?
<halstead>
qschulz: I've updated the taxonomy so 4.0.5 will show up soon. Normally the RE does that.
<Saur[m]>
I have some printf style debugging in the worker (writing to a /tmp file).
<Saur[m]>
I just printed the sizes of the read data in serve()
Brian79 has quit [Quit: Client closed]
<RP>
halstead: thanks!
<RP>
kergoth: see above for 4.0.5
rob_w has joined #yocto
<kergoth>
thanks
roussinm has quit [Ping timeout: 256 seconds]
roussinm has joined #yocto
roussinm has quit [Quit: WeeChat 3.3-dev]
florian_kc has joined #yocto
rsalveti has joined #yocto
Haxxa has quit [Quit: Haxxa flies away.]
Haxxa has joined #yocto
adams[1] has joined #yocto
<adams[1]>
who adds the meta layers in bblayers.conf file ?
<adams[1]>
In one of the bb files I have PACKAGE_INSTALL += " stress-ng i2c-tools libgpiod-tools"... can I add one more meta layer and, in one of the bbappends, remove i2c-tools?
<adams[1]>
or is there any other elegant solution?
sakoman has joined #yocto
<LetoThe2nd>
adams[1]: this sounds like you are mixing some things up. the bb (usually called a recipe) has nothing to do with the layers. you can add/modify the data in recipes by so-called appends, e.g. recipe-like files with the .bbappend ending.
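A minimal sketch of such an append (layer and recipe names are placeholders; the append must sit in a layer that you have added to bblayers.conf yourself, e.g. with `bitbake-layers add-layer`, and its filename must match the original recipe):

```bitbake
# meta-mylayer/recipes-example/myrecipe/myrecipe.bbappend — sketch;
# :remove drops the item from the list assembled by the original recipe
PACKAGE_INSTALL:remove = "i2c-tools"
```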
<RP>
paulg: and you probably just want to kill the kernel timestamp bit
<paulg>
yeah, I went and read the kernel class to remind myself of the variable name for that. But it sounds like a customer wants the old BUILD_REPRODUCIBLE_BINARIES=0 behaviour
<RP>
paulg: we removed a lot of that choice since there wasn't a good reason to have two codepaths to maintain
<paulg>
that was what I was afraid of.
<RP>
paulg: did they say specifically what they didn't want/like?
<paulg>
they used the kernel as their example (I'm not surprised) but they generalized to saying that setting build date+time to some fixed preset was "the same as removing this data entirely".
<RP>
paulg: that is the key thing, we don't use "some fixed preset", we preserve most timestamps
<RP>
paulg: we clamp the values to an upper limit and we have a deterministic value for the upper limit based on the incoming sources
<RP>
effectively, nothing is removed other than the variance that comes from the time it was built
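As a hedged sketch of the knobs being discussed (kirkstone-era names — check kernel.bbclass on your branch):

```bitbake
# local.conf (sketch) — by default kernel.bbclass clamps the kernel
# build timestamp to SOURCE_DATE_EPOCH, which is derived
# deterministically from the input sources; setting this keeps the
# real wall-clock build time instead:
KERNEL_DEBUG_TIMESTAMPS = "1"
```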
<paulg>
I don't have any more info to go on, other than the vague suggestion it will break build and release process for several customers.
<paulg>
and who knows - people do lots of crazy stuff.
<paulg>
and the ashes from when it inevitably catches fire.
<paulg>
anyway, thanks for helping me confirm BUILD_REPRODUCIBLE_BINARIES really was dead and buried with kirkstone
sr2 has joined #yocto
florian_kc is now known as florian
<LetoThe2nd>
fire? we burn something?
rob_w has quit [Quit: Leaving]
Algotech75 is now known as Algotech
Tokamak has joined #yocto
manuel_ has quit [Ping timeout: 256 seconds]
sr2 has quit [Ping timeout: 260 seconds]
<RP>
Saur[m]: I logged the read sizes on my system and they're mostly around the 500k mark. I also noticed python 3.10 allows a pipesize argument to subprocess and setting that to 1MB did result in fewer reads
florian has quit [Ping timeout: 268 seconds]
<RP>
Saur[m]: my conclusion is that instead of bundling this all into workerdata, we should send the individual fn entries at fork_off_task time
<RP>
Saur[m]: the worker only needs the bulky info at task exec time
gsalazar has joined #yocto
<Saur[m]>
RP: I tried adding `pipesize=1024*1024` to the Poen calls in runqueue.py. If I am in the situation with more than 1024 open pipes, then I get an EPERM. Otherwise the setting is accepted, but I still do not see reads > 64 kB. :(
<Saur[m]>
Popen*
<RP>
Saur[m]: hmm, not good :/
<Saur[m]>
I wonder what's the difference between your system and mine that makes it go above 64 kB without any hassle.