RobertBerger has quit [Remote host closed the connection]
RobertBerger has joined #yocto
roosterphant has joined #yocto
pbsds has joined #yocto
wooosaiiii has joined #yocto
astlep5504018 has joined #yocto
olani has joined #yocto
mario-goulart has joined #yocto
Danct12 has joined #yocto
sugarbeet has joined #yocto
Piraty has joined #yocto
arisut has joined #yocto
jstephan has joined #yocto
sgw has joined #yocto
sudip has joined #yocto
neofutur_ has joined #yocto
yocton has joined #yocto
efeschiyan has joined #yocto
gchamp has quit [*.net *.split]
davidinux has quit [*.net *.split]
khem has quit [*.net *.split]
xantoz has quit [*.net *.split]
locutusofborg has quit [*.net *.split]
tangofoxtrot has quit [*.net *.split]
ablu-linaro has quit [*.net *.split]
RobW has quit [*.net *.split]
zwelch has quit [*.net *.split]
marka has quit [*.net *.split]
frosteyes has quit [*.net *.split]
brrm has quit [*.net *.split]
_lore_ has quit [*.net *.split]
Maxxed has quit [*.net *.split]
chep has quit [Client Quit]
chep has joined #yocto
xantoz has joined #yocto
brrm has joined #yocto
davidinux has joined #yocto
gchamp has joined #yocto
khem has joined #yocto
ablu-linaro has joined #yocto
tangofoxtrot has joined #yocto
locutusofborg has joined #yocto
frosteyes has joined #yocto
marka has joined #yocto
zwelch has joined #yocto
RobW has joined #yocto
_lore_ has joined #yocto
Maxxed has joined #yocto
Maxxed has quit [Max SendQ exceeded]
Maxxed has joined #yocto
gchamp has quit [*.net *.split]
davidinux has quit [*.net *.split]
khem has quit [*.net *.split]
xantoz has quit [*.net *.split]
locutusofborg has quit [*.net *.split]
tangofoxtrot has quit [*.net *.split]
ablu-linaro has quit [*.net *.split]
RobW has quit [*.net *.split]
zwelch has quit [*.net *.split]
marka has quit [*.net *.split]
frosteyes has quit [*.net *.split]
brrm has quit [*.net *.split]
_lore_ has quit [*.net *.split]
dmoseley has quit [*.net *.split]
Starfoxxes has quit [*.net *.split]
Dr_Who has quit [*.net *.split]
dkc has quit [*.net *.split]
nsbdfl_ has quit [*.net *.split]
marex has quit [*.net *.split]
alimon has quit [*.net *.split]
ecdhe has quit [*.net *.split]
clever has quit [*.net *.split]
bq has quit [*.net *.split]
mdp has quit [*.net *.split]
lucaceresoli has quit [*.net *.split]
Vonter has quit [*.net *.split]
yannd has quit [*.net *.split]
tokamak has quit [*.net *.split]
dlan has quit [*.net *.split]
DarkestFM has quit [*.net *.split]
ctraven has quit [*.net *.split]
LDericher has quit [*.net *.split]
woky has quit [*.net *.split]
fullstop has quit [*.net *.split]
bradfa has quit [*.net *.split]
Habbie has quit [*.net *.split]
NishanthMenon has quit [*.net *.split]
paulbarker has quit [*.net *.split]
frosteyes has joined #yocto
brrm has joined #yocto
_lore_ has joined #yocto
zwelch has joined #yocto
RobW has joined #yocto
ablu-linaro has joined #yocto
marka has joined #yocto
tangofoxtrot has joined #yocto
khem has joined #yocto
xantoz has joined #yocto
locutusofborg has joined #yocto
gchamp has joined #yocto
davidinux has joined #yocto
Vonter has joined #yocto
yannd has joined #yocto
dmoseley has joined #yocto
tokamak has joined #yocto
Starfoxxes has joined #yocto
Dr_Who has joined #yocto
dkc has joined #yocto
marex has joined #yocto
clever has joined #yocto
dlan has joined #yocto
alimon has joined #yocto
ctraven has joined #yocto
DarkestFM has joined #yocto
nsbdfl_ has joined #yocto
ecdhe has joined #yocto
LDericher has joined #yocto
bq has joined #yocto
woky has joined #yocto
lucaceresoli has joined #yocto
NishanthMenon has joined #yocto
paulbarker has joined #yocto
mdp has joined #yocto
Habbie has joined #yocto
bradfa has joined #yocto
fullstop has joined #yocto
jmd has joined #yocto
Saur_Home has quit [Quit: Client closed]
Saur_Home has joined #yocto
usvi has joined #yocto
nerdboy has joined #yocto
dmoseley has quit [*.net *.split]
Starfoxxes has quit [*.net *.split]
Dr_Who has quit [*.net *.split]
dkc has quit [*.net *.split]
alimon has quit [*.net *.split]
marex has quit [*.net *.split]
nsbdfl_ has quit [*.net *.split]
ecdhe has quit [*.net *.split]
bq has quit [*.net *.split]
clever has quit [*.net *.split]
mdp has quit [*.net *.split]
lucaceresoli has quit [*.net *.split]
Starfoxxes has joined #yocto
Dr_Who has joined #yocto
dmoseley has joined #yocto
marex has joined #yocto
dkc has joined #yocto
clever has joined #yocto
alimon has joined #yocto
nsbdfl_ has joined #yocto
ecdhe has joined #yocto
bq has joined #yocto
mdp has joined #yocto
lucaceresoli has joined #yocto
alessioigor has joined #yocto
alessioigor has quit [Remote host closed the connection]
alessioigor has joined #yocto
alperak has joined #yocto
alperak has quit [Ping timeout: 250 seconds]
rob_w has joined #yocto
ptsneves has joined #yocto
goliath has joined #yocto
ptsneves has quit [Ping timeout: 252 seconds]
goliath has quit [Client Quit]
jmd has quit [Remote host closed the connection]
mulk has quit [Ping timeout: 252 seconds]
mulk has joined #yocto
zwelch has quit [Ping timeout: 268 seconds]
goliath has joined #yocto
mckoan|away is now known as mckoan
zpfvo has joined #yocto
zpfvo has quit [Ping timeout: 260 seconds]
Kubu_work has joined #yocto
<jdiez>
rob_w: are you RobW from yesterday? we were talking about some meta-ros* build failures
<rob_w>
nope sorry
<jdiez>
np sorry
jonesv has quit [Remote host closed the connection]
raghavgururajan has quit [Remote host closed the connection]
tleb has quit [Remote host closed the connection]
raghavgururajan has joined #yocto
tleb has joined #yocto
jonesv has joined #yocto
tleb has quit [Remote host closed the connection]
jonesv has quit [Remote host closed the connection]
raghavgururajan has quit [Remote host closed the connection]
zpfvo has joined #yocto
raghavgururajan has joined #yocto
tleb has joined #yocto
jonesv has joined #yocto
nerdboy has quit [Ping timeout: 276 seconds]
frieder has joined #yocto
mvlad has joined #yocto
ykrons has joined #yocto
prabhakarlad has joined #yocto
rsalveti has joined #yocto
gsalazar has joined #yocto
<LetoThe2nd>
yo dudX
florian has joined #yocto
Meyrevived has joined #yocto
leon-anavi has joined #yocto
DarkestFM has quit [Ping timeout: 240 seconds]
DarkestFM has joined #yocto
<Meyrevived>
Hey all, I have a question for an automation job I'm building
<mckoan>
Meyrevived: don't ask to ask, just ask :-D
<Meyrevived>
Part of the job is to benchmark a qemu emulator running various yocto versions. The job code is in Python, I'm using qemu.qmp and I keep getting "qemu.qmp.ConnectError Failed to establish connection: [Errno 2] No such file or directory"
<Meyrevived>
I suspect it's because, even though I'm running my program from the Yocto build dir after running source oe-init-build-env, the new process that Python spawns never has that script sourced, and I'm not sure how to incorporate it into my code.
<Meyrevived>
Actually, it's impossible to run "source" commands from Python. I thought about a separate bash script that would run these commands and bitbake etc., but the rest of the automation job is Python, and if there's the slightest chance that it can be done via Python, or just more elegantly, I'd prefer to find a way to do it.
<Meyrevived>
I've only been familiar with the Yocto project for the past ~4 months so there's a lot I don't know yet.
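(A minimal sketch, not from the channel, of the usual workaround: run the "source" step in a bash child process and import the resulting environment into Python. Paths and the image name are hypothetical; the Errno 2 from qemu.qmp usually just means the QMP socket path does not exist yet.)

    import os
    import subprocess

    def source_oe_env(poky_dir="/home/user/poky", build_dir="build"):
        # Run oe-init-build-env in bash, then dump the resulting environment
        # NUL-separated so multi-line values survive the round trip.
        cmd = "source oe-init-build-env {} > /dev/null && env -0".format(build_dir)
        out = subprocess.run(["bash", "-c", cmd], cwd=poky_dir,
                             capture_output=True, check=True).stdout
        for entry in out.split(b"\0"):
            key, sep, value = entry.partition(b"=")
            if sep:
                os.environ[key.decode()] = value.decode()

    source_oe_env()
    os.chdir(os.environ["BUILDDIR"])  # BUILDDIR is exported by oe-init-build-env
    subprocess.run(["bitbake", "core-image-minimal"], check=True)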
alperak has joined #yocto
luc4 has joined #yocto
<luc4>
Hello! I'm creating my first image with yocto for an embedded device that will run a single task. For such a device, would you run everything as the root user? Or would you create a multi-user system?
<neverpanic>
Always create a multi-user system. Never run services as root.
<abelloni>
definitely run the service as an unprivileged user
<neverpanic>
It's just basic security hygiene to do this right.
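(For reference, a minimal sketch of the usual OE mechanism for this, the useradd class; the user and service names are hypothetical:)

    inherit useradd
    USERADD_PACKAGES = "${PN}"
    # create a system user with no login shell for the service
    USERADD_PARAM:${PN} = "--system --no-create-home --shell /sbin/nologin --user-group mydaemon"

The unit file or init script then starts the daemon as that user (e.g. User=mydaemon in a systemd service).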
luc4 has quit [Ping timeout: 268 seconds]
luc4 has joined #yocto
vladest has quit [Remote host closed the connection]
<jdiez>
how do you guys iterate on image builds, in the general case where more than a single package has changed? Do you rewrite the flash every time, or only update the rootfs somehow?
<rburton>
jdiez: you can turn on package management and update with that, assuming you have networking between target and build machine
<rburton>
jdiez: if you're writing code then devtool can deploy binaries over ssh directly
<jdiez>
yeah, I have package management enabled. If you mean `devtool deploy-target`, AFAIU that only works on a single package basis
<jdiez>
for example, I just added generation of a locale to my distro config. That caused a couple of other packages to be rebuilt. Would I do `devtool deploy-target core-image-minimal` or so?
<rburton>
no
<rburton>
devtool deploy-target is, as you say, for a single traditional recipe
<rburton>
then run a web server on the deploy feed (python3 -mhttp.server works) and 'bitbake package-index' when you want to use it.
<rburton>
at that point you can opkg/dnf/apt update/upgrade/install as needed.
<jdiez>
okay, I see. Neat. Thanks.
<rburton>
(if you set PACKAGE_FEED_URIS you don't need to do any config on the target)
<jdiez>
I suppose a PR server is also needed to increment package versions after rebuilds?
<rburton>
if you want it to just work yes. i'm too lazy and know what i changed so just tell opkg to reinstall :)
<jdiez>
gotcha
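(Pulling rburton's recipe together as one sketch, assuming an opkg-based image and a build host reachable from the target at a hypothetical 192.168.7.1:8000:)

    # local.conf
    EXTRA_IMAGE_FEATURES += "package-management"
    PACKAGE_FEED_URIS = "http://192.168.7.1:8000"

    # build host, after rebuilding:
    bitbake core-image-minimal
    bitbake package-index
    (cd tmp/deploy/ipk && python3 -m http.server 8000)

    # target:
    opkg update && opkg upgrade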
alperak has joined #yocto
<jdiez>
I'm ending up with ~1GB of unneeded firmware in /lib/firmware and 177 linux-firmware-* packages installed, even though my `MACHINE_FIRMWARE` variable only contains like 5 entries. Why would that happen?
<JaMa>
jdiez: see the package dependencies; something is probably pulling in the linux-firmware package for you
<JaMa>
jdiez: last time it was a symlink from one of the linux-firmware-* packages to a file in linux-firmware
<jdiez>
yeah, I'm looking at task-depends.dot, but it's not clear to me what is depending on the whole linux-firmware package
<jdiez>
how rude of meta-sdr :) (I found it by `rg linux-firmware meta*`)
<jdiez>
seems kind of unusual to include linux-firmware in a package group that deals with userspace applications, not sure why the author would do that in the first place (except a misconfigured build on their side?)
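(For anyone hitting the same thing, a couple of stock helpers for this kind of hunt; the exact flags sketched here are from memory, so check --help:)

    bitbake -g core-image-minimal                           # writes task-depends.dot etc.
    oe-depends-dot -k linux-firmware -w task-depends.dot    # "why is this being built?"
    oe-pkgdata-util find-path /lib/firmware/somefile.bin    # which package ships a given file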
luc4_ has joined #yocto
luc4 has quit [Ping timeout: 264 seconds]
luc4_ is now known as luc4
luc4 has quit [Client Quit]
<abelloni>
I'm using nfs so I don't have to flash anything
<michalsieron>
> Packages specified in RRECOMMENDS need not actually be produced. [...] If such a recipe does exist and the package is not produced, the build continues without error.
<michalsieron>
If recipe `foo` pulls in recipe `bar` through RRECOMMENDS and `bar` has an error during compilation, should the build continue (as in `bar` errors degraded to warnings) or fail?
<Saur>
yocton: No, it does not make any sense for a recipe that inherits native to also inherit useradd.
<rburton>
michalsieron: all that means is that it's not an error to recommend a package that doesn't exist, and it won't cause dependency failures when you build an image
<yocton>
Saur: Thanks for the confirmation! :)
<michalsieron>
rburton: > However, there must be a recipe providing each package, either through the PACKAGES or PACKAGES_DYNAMIC variables or the RPROVIDES variable, or an error will occur during the build.
<michalsieron>
In what situation does a package not exist but still get provided?
<rburton>
yeah that needs rewording
<rburton>
package A can recommend B, and it's fine if B doesn't exist
<Saur>
michalsieron: There are many cases where packages are provided but not created. E.g., most recipes do not actually produce an ${PN}-doc package even if it is listed in PACKAGES.
<michalsieron>
Saur: so situations where a package is empty and doesn't set ALLOW_EMPTY?
<rburton>
it literally just means a package can recommend another package that doesn't have to exist
<michalsieron>
ok. thanks
<Ad0>
how do I completely turn off hostname and serial console both in u-boot and kernel?
<Ad0>
I want to set a transient hostname, but Yocto fills it in with the machine name
<Ad0>
it has to be empty to be settable as a transient hostname in kernel 5.10+
<Ad0>
or at least in kirkstone
<rburton>
Ad0: you'll want to poke at base-files
<michalsieron>
and when it comes to the difference between RRECOMMENDS and RSUGGESTS, is it:
<michalsieron>
recommends -> the package manager will install (possibly by default, with a potential option to disable) all recommended packages, but removing one later would not break the dependency chain
<michalsieron>
suggests -> the user can see which packages may be related and beneficial to install alongside, but has to install them on their own
<michalsieron>
or in other words, RRECOMMENDS is opt out and RSUGGESTS is opt in?
<michalsieron>
do I understand that correctly?
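(That reading matches the documented semantics. A tiny illustration with hypothetical package names:)

    RRECOMMENDS:${PN} = "foo-plugins"   # installed by default when it exists;
                                        # silently skipped when it doesn't
    RSUGGESTS:${PN} = "foo-examples"    # metadata only; never auto-installed

    # opt out of a recommendation per image/local.conf:
    BAD_RECOMMENDATIONS = "foo-plugins"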
<Ad0>
hostname:pn-base-files has to be empty if it's even allowed
Meyrevived has joined #yocto
Minvera has joined #yocto
Saur_Home has quit [Quit: Client closed]
Saur_Home has joined #yocto
Guest99 has quit [Quit: Client closed]
Meyrevived has quit [Quit: Client closed]
martin_thingvold has joined #yocto
rcw has joined #yocto
rcw has quit [Remote host closed the connection]
RobW has quit [Quit: Leaving]
rcw has joined #yocto
<rcw>
jdiez: I am RobW from yesterday (I was being lazy about fixing my NickServ, but just did it to avoid future confusion)
<rcw>
jdiez: How are things going?
martin_thingvold has quit [Remote host closed the connection]
Guest99 has joined #yocto
Guest99 has quit [Client Quit]
<jdiez>
rcw: welcome back :) I got ros2 humble running on my board, with the pyyaml PACKAGECONFIG fix that moto-timo suggested. I tried your mickledore-next branch, but I still had the build error
martin_thingvold has joined #yocto
<rcw>
jdiez: I'm glad to hear that! If you have a moment could you file an issue on github about the pyyaml error you saw with the fix?
<jdiez>
rcw: I can definitely do that, but it's not a problem with any code in meta-ros*
* rcw
nods
<rcw>
Since Mickledore is EOL, I don't mind taking the patch in meta-ros
<jdiez>
okay, gotcha. Will file an issue shortly
<jdiez>
in the meantime, I have another ROS-related question, if you don't mind
<jdiez>
how should user-created/downstream ROS packages or workspaces be packaged? Should I use superflore as well?
<jdiez>
I added demo-nodes-cpp to my IMAGE_INSTALL, and they end up getting installed in /usr/lib/demo_nodes_cpp
<usvi>
let's say a recipe builds an ipk. Is there a possibility to copy that ipk somewhere for archival purposes as part of the recipe?
<rcw>
jdiez: Also, if you have questions specific to ROS or meta-ros instead of Yocto, you may also get in touch with me here: https://discord.com/invite/PWTYK5AmZZ
<jdiez>
which, using the classes and such in meta-ros, will create a package containing a ROS workspace with the nodes/launch files in demo_nodes_cpp and deploy it to /usr/share/demo_nodes_cpp?
<usvi>
I tried to spy on the environment but could not find anything I could use
<jdiez>
rcw: cool, thanks, will join!
<rcw>
We just created #cwg-openembedded. So far Ryan and I have started talking about this stuff there
<Ad0>
rburton, if [ "${hostname}" ]; then
<Ad0>
what if I set it to a whitespace
<Ad0>
or should I rather do a do_install:append
<jdiez>
rcw: don't see that in the channel browser
<Ad0>
I guess that is the safest since the check might change in the future
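(A sketch of that do_install:append route in a base-files_%.bbappend, assuming the goal is an empty /etc/hostname; untested, so treat it as a starting point:)

    hostname = ""
    # belt and braces in case the recipe's check changes in the future:
    do_install:append() {
        rm -f ${D}${sysconfdir}/hostname
        touch ${D}${sysconfdir}/hostname
    }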
desmaster has joined #yocto
<rcw>
It is just under "Working Groups". I am "robwoolley" on Discord; if you message me I can invite you.
desmaster has quit [Remote host closed the connection]
destmaster has joined #yocto
<jdiez>
rcw: friend request sent :) I need to work on some other things for a bit, will catch up with you all later on Discord
alperak44 has joined #yocto
<rburton>
usvi: is the archiver class what you're after?
alperak has quit [Ping timeout: 250 seconds]
destmaster has quit [Remote host closed the connection]
alperak44 has quit [Ping timeout: 250 seconds]
Xagen has joined #yocto
<usvi>
rburton: I'll take a look, thanks again
zwelch has joined #yocto
ykrons has joined #yocto
<rburton>
usvi: might not be but depends on your needs
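(If the archiver class turns out to be about source archives rather than the built ipks, one alternative, sketched here untested and with a hypothetical destination, is a small extra task in the recipe:)

    do_archive_ipk() {
        # ipks land under ${DEPLOY_DIR_IPK}/<arch>/ after do_package_write_ipk
        install -d /srv/ipk-archive
        cp ${DEPLOY_DIR_IPK}/*/${PN}_*.ipk /srv/ipk-archive/ || true
    }
    # note: writing outside the work dir bypasses sstate tracking; archival only
    addtask archive_ipk after do_package_write_ipk before do_build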
<usvi>
rburton: btw I ran the build from scratch, and again got "Removing 3 stale sstate objects for arch"
goliath has quit [Quit: SIGSEGV]
<usvi>
and the build failed, again with different output
jpuhlman has quit [Read error: Connection reset by peer]
jpuhlman has joined #yocto
<moto-timo>
rcw: is there a way to get an .ical for the ROS OE WG meetings?
jmd has joined #yocto
jmiehe has joined #yocto
<rburton>
landgraf: remove the ;
<rburton>
apart from that i didn't think it meaningfully changed
<RP>
landgraf: it shouldn't have changed, we did just add code to remove the ';' characters
<landgraf>
RP: I had to replace += with :append to fix it. += from the recipe added the fixup function before the empty_volatile one
<landgraf>
not sure if it's bug or feature :)
<landgraf>
changed in 'classes/recipes: Switch to use inherit_defer' commit according to bisect
<RP>
landgraf: it might be inherit_defer that changed
<RP>
landgraf: right
<landgraf>
I was surprised that the interview task I did a few weeks ago stopped working :-D Thankfully it worked at the time of the interview
<landgraf>
RP: for that qemu bug I've deployed Debian 12 and am trying to reproduce it locally. Hopefully I'll have more time tomorrow/Sat. I forgot about the triage today while bisecting the rootfs postcommand issue :-/
<RP>
landgraf: glad it worked when it needed to! :)
<RP>
landgraf: np, it will be interesting to see if you can reproduce it or whether there is something about the autobuilder setup. Worst case we can get you access to one of the workers
* landgraf
was hit by bugs/features in meta-aws, meta-virtualization, meta-oe and oe-core this week :)
mckoan is now known as mckoan|away
<RP>
landgraf: quite the collection. I'm sure the oe-core ones were all features :)
<JaMa>
RP: isn't the BB_PRESSURE_MAX_CPU still a bit low? bitbake is reporting higher numbers since nanbield (will find the link with details)
<RP>
JaMa: could be. I'll have a look at the logs after this and see if it needs to be higher
<JaMa>
RP: I'm using 200K and still it throttles, but YMMV on your builders
<JaMa>
while before I was using just 1K with similar results
Thorn has joined #yocto
<RP>
JaMa: on current builds with the previous setting it has 5 of 16 tasks running and hovering in and out of load which is kind of what you'd expect it to do
<vmeson>
RP: I haven't read #yocto yet and I'm in yet another meeting so I won't have time for a while. Load average changes so slowly that I think it's okay to raise it. Could it make things worse?!
leon-anavi has quit [Quit: Leaving]
frieder has quit [Remote host closed the connection]
florian_kc has joined #yocto
<jdiez>
when using a remote sstate cache (i.e. on a webserver NOT in the LAN), how much data would be downloaded by a developer who doesn't have a local copy, when building e.g. core-image-minimal?
<jdiez>
and what (roughly) order of magnitude of requests? 10? 1k?
<kanavin>
jdiez, it's not difficult to run a local experiment that would establish the answers to these
<jdiez>
true
<kanavin>
jdiez, when building images with a fully populated sstate, only the final rpm (opkg/deb) packages would be fetched from there, and a few extras needed to construct the image
<vmeson>
RP: I read the #yocto history and see that you're asking if we should raise the pressure limit, not the load average limit. As you say, more data would be useful.
<kanavin>
rpm sstate is not split by package, it's one object per recipe with all the packages inside though, so you'd be fetching -dbg -dev etc even if they're not used
* vmeson
runs some tests on a 24 core system overnight...
<jdiez>
kanavin: gotcha, thanks
<vmeson>
I'll likely just change the CPU pressure limit: 1000, 2000, 4000, ... and see how long a "no-sstate, with all the source fetched" build of core-image-minimal takes.
<JaMa>
vmeson: I would start with no regulation to get the baseline, then 1K, 10K, 100K, on 64 thread machine I have similar build time only with 100K or more
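(The sweep JaMa suggests could look like this; the values are the ones from the discussion, not recommendations:)

    # from an initialized build dir: one cold build per pressure setting,
    # keeping downloads but wiping tmp and sstate each round
    for p in "" 1000 10000 100000; do
        if [ -n "$p" ]; then
            echo "BB_PRESSURE_MAX_CPU = \"$p\"" > conf/auto.conf
        else
            : > conf/auto.conf    # empty = no regulation (baseline)
        fi
        rm -rf tmp sstate-cache
        time bitbake core-image-minimal
    done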
<khem>
kanavin: I think jdiez is asking: if, say, the metadata is checked out with no local sstate and configured to fetch from a remote sstate mirror on the internet, and bitbake finds that 100% of the cache is available on the remote sstate server, then how much data will be downloaded? It's important to know for capped internet connections. I would say it will depend on what is being built, but we can have rough numbers for, say, unmodified core-image-minimal and
<khem>
core-image-sato etc.
<khem>
JaMa: hmmm, I might be a victim of this slowness as well... how much RAM do you have on these machines with 64 cores?
<jdiez>
khem: yeah, I just wanted to get a general idea. It seems the number of requests/downloaded data is "roughly proportional" to the packages affected by whichever changes are done by the remote developer. I guess if they want to compile anything, they would download the sstate for the cross compiler and such
<khem>
jdiez: I would say do an experiment with unmodified tree and see how much it fetches
<jdiez>
yeah, will do
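(For the experiment, a local.conf sketch with a hypothetical mirror URL; the traffic can then be read off the web server's access log or the local cache size:)

    SSTATE_MIRRORS = "file://.* http://sstate.example.com/PATH;downloadfilename=PATH"
    # then, starting from an empty TMPDIR and sstate-cache:
    #   bitbake core-image-minimal
    #   du -sh sstate-cache        # roughly what was fetched from the mirror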
<khem>
it will be a few GBs, since that's what I see generated when building from scratch and populating the local sstate along the way
<khem>
my builds do have clang etc. so it is a bit more
<moto-timo>
rburton: vmeson: on a hunch I disabled the 'perbranch' .annotate part of the StatsView code and it is ok... the overall statistics are:
<moto-timo>
rburton: vmeson: we do want that per-branch info though... because if you look at any modern branch, the number of layers is lower by about 200 (there are a couple hundred "master" layers that have long since bit-rotted into not being functional, but layerindex doesn't currently know any better as far as the code goes)
<khem>
rburton: are we looking at layer.conf for LAYERSERIES_COMPAT
<JaMa>
khem: 128G
<khem>
some layers may have a single branch supporting multiple release branches
<khem>
JaMa: ok, my case is exactly half of yours; the VM is 32 cores/64G
<JaMa>
khem: but I need to restrict -j for e.g. chromium to avoid the OOM killer, so 2G per CPU thread is no longer enough for some things like chromium and node
<JaMa>
BB_PRESSURE_MAX_CPU = "200000"
<JaMa>
BB_NUMBER_PARSE_THREADS = "64"
<JaMa>
#BB_PRESSURE_MAX_CPU = "1000000"
<JaMa>
BB_NUMBER_THREADS = "8"
<JaMa>
PARALLEL_MAKE = "-j 70 -l 140"
<moto-timo>
and the multiple release support is a conundrum for LayerIndex... what if a 'kirkstone' git branch exists and yet real development/support is on 'master' git branch? (meta-webosose fits this model for instance). Then layerindex either has to guess, or a human has to be involved.
<JaMa>
#PARALLEL_MAKE:pn-qtwebengine = "-j 40"
<JaMa>
PARALLEL_MAKE:pn-webruntime = "-j 40"
<khem>
hmm my lazy math was 2G per core
<khem>
JaMa: BB_NUMBER_THREADS is a bit low, isn't it?
<khem>
when it's not building these hoarders it will not be using the full resources
<JaMa>
increasing it doesn't help with build time for me, only makes the overload spikes worse
<khem>
have you tried doing a world build with meta-openembedded included?
<JaMa>
even with BB_PRESSURE_MAX_CPU at 200K it's still restricted to fewer than 8 tasks most of the time anyway
<vmeson>
JaMa: sure
<vmeson>
moto-timo: thanks!
<JaMa>
khem: yes, and with multilib and other bigger layers the world builds take ~30 hours
<moto-timo>
vmeson: I'm a little bit surprised it's 25k recipes, but I still think it pales compared to debian or fedora.
<vmeson>
~25,000 recipes is about what I expected. I assume that Debian's QA is better than what we do for 80% of those recipes but I really don't know.
<moto-timo>
vmeson: debian QA covers nearly all of their packages, we only cover OE core and whatever individual layer maintainers do.
<JaMa>
it's true that our PCs are many times faster, but the things I do are as slow as they were 30 years ago (as the tasks get bigger and bigger as well)
<JaMa>
with 128G of RAM I still cannot search all the world-build console output in Jenkins on the web (it is much faster to fetch the logs and search locally)
<JaMa>
khem: I'll probably give up on those xfstests; I've reproduced the "cp:" error again during another 100 rebuilds, and am now checking without parallel make in do_install
<vmeson>
moto-timo: yep. more than 3K, < 50K, and Debian has > 50K (I don't care about the exact number).
<JaMa>
khem: but those cleanups are IMHO still useful so please keep them in master-next
<khem>
another consideration is to partition the most frequently updated pieces into one bundle and the slow-moving ones into another; e.g. a lot of embedded apps are base OS + a single application, so separate the application out to have its own upgradable bundle
<khem>
that's what I have seen; often the app is upgraded more frequently
<fullstop>
I was planning on having a much smaller application partition, with the remainder of the space used for "data" and not included in A/B
<fullstop>
Kernel + rootfs probably need to go together in one bundle because of kmod versions
<fullstop>
Yes, I've used those in the Android world.
<khem>
RP: it seems core-image-sato-sdk:do_testimage gets stuck for qemuriscv64 builds on AB
simonew has joined #yocto
alessioigor has quit [Quit: alessioigor]
<khem>
well, it's not really stuck; I can ssh into qemu and it is running a build of cpio :) so I just need to be more patient, I guess
<khem>
RP: I am also seeing "ERROR: Exited from signal Killed (9)" in some ptest runs, e.g. with core-image-ptest-libmodule-build-perl on riscv qemu. If I run the test manually inside qemu it works OK, so I guess it fails in core-image-ptest-all runs, perhaps due to resource pressure on the build host
<khem>
I am trying it with QB_MEM:virtclass-mcextend-libmodule-build-perl = "-m 2048", let's see
<khem>
JaMa: those xfstests failures happen in my CI too, so I am interested in trying your patches out to see if they fix them
<khem>
JaMa: interestingly it only happens for qemuarm builds in my case
sgw has quit [Remote host closed the connection]
sgw has joined #yocto
<AdrianF>
:q:q:q
<khem>
Adrian seems to be quite annoyed, start using VSCode :)
<Crofton>
Lol
<simonew>
vim still open :D?
<AdrianF>
Hm.. yes, I use vim as well.
nerdboy has joined #yocto
nerdboy has quit [Changing host]
nerdboy has joined #yocto
jmd has quit [Remote host closed the connection]
<shoragan>
fullstop, if you don't need to have the smallest possible download, RAUC's adaptive update can be a way to keep the simplicity of an A/B rootfs layout while skipping the download of unchanged parts: https://rauc.readthedocs.io/en/latest/advanced.html#adaptive-updates
<shoragan>
the more reproducible your build is, the smaller the changes in the rootfs image. yocto is pretty good for that by now, if configured correctly
<fullstop>
thanks, shoragan, I will definitely read more about adaptive updates. It sounds promising.
<JaMa>
khem: I have reproduced it in qemux86-64 and various other MACHINEs
<JaMa>
khem: I have a lot of debug in install-sh now, but there is nothing that could explain this; now it's on the 80th iteration without issue
simonew has quit [Quit: Konversation terminated!]
<JaMa>
khem: but I'll fix it eventually, now there is no turning back :) unless one of the people who touched the recipe beats me to it
<jdiez>
fullstop: I'm also looking into small-as-possible updates (for satellite use cases, where the uplink is even more limited), and out of the options on the software update wiki page, meta-updater seems the most promising to me: https://github.com/advancedtelematic/meta-updater
<jdiez>
in short: it's file-based binary diffs; you only need one partition, and you can have multiple software versions (ostree commits)
Kubu_work has quit [Quit: Leaving.]
nerdboy has quit [Ping timeout: 256 seconds]
ptsneves has quit [Ping timeout: 252 seconds]
nerdboy has joined #yocto
mvlad has quit [Remote host closed the connection]
ablu has quit [Ping timeout: 272 seconds]
risca has joined #yocto
ablu has joined #yocto
<JaMa>
khem: now I have a reliable xfstests reproducer; fix coming soon (I hope)
<khem>
6.6.12 does not have this issue so I wonder
gsalazar has quit [Ping timeout: 252 seconds]
gsalazar has joined #yocto
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
gsalazar has quit [Ping timeout: 246 seconds]
amitk_ has quit [Remote host closed the connection]
tangofoxtrot has quit [Remote host closed the connection]
goliath has quit [Quit: SIGSEGV]
tangofoxtrot has joined #yocto
Guest65 has joined #yocto
Guest65 has quit [Client Quit]
DiogenesMountain has joined #yocto
<DiogenesMountain>
Hello, I hope everyone is well. I have two images: a production and a development image. The machine and distro are currently the same. Is there any way to apply a patch to the kernel source but only for the development image? I believe that with userspace packages you can create a separate recipe (e.g. recipe, recipe-dev) and install either package into an image as required. Is there a similar approach that can be used for patches to the kernel source? Thanks in advance.
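(Not answered in the channel; one common pattern, sketched here as an assumption rather than a confirmed answer: since the kernel is built per machine/distro rather than per image, give the development build its own distro, whose name is automatically an override, and gate the patch on it in a kernel bbappend. All names are hypothetical:)

    # conf/distro/mydistro-dev.conf
    require conf/distro/mydistro.conf
    DISTRO = "mydistro-dev"

    # linux-%.bbappend
    FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
    SRC_URI:append:mydistro-dev = " file://debug.patch"

Building the development image with DISTRO=mydistro-dev then pulls in the patched kernel, at the cost of a second (largely sstate-shared) build configuration.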
martin_thingvold has quit [Ping timeout: 264 seconds]