<ak77>
hmm.. somehow the wic image has a different fstab (a few entries auto-added) than the squashfs image... i found two fstabs, one in image-name/<version>/build-wic/fstab and one in image-name/<version>/rootfs/etc
<ak77>
is there a way to prevent wic from fiddling with the fstab?
<ak77>
so, i build a squashfs rootfs that gets 1) integrated into the wic image... 2) integrated into the rauc bundle. on initial install of the wic image, the rootfs has a different fstab than the upgraded rootfs. (should be the same squashfs)
<shoragan>
ak77, if you do image-based updates (as you do with RAUC), you should use the same FS image in the RAUC bundle and disk image.
<shoragan>
by default, wic will create a new FS image (to support cases where you split the rootfs into multiple partitions)
<ak77>
I was under the impression that I do that. wic has --source rootfs with type = squashfs, and rauc has squashfs specified. but somehow they are not the same (different fstab!), so how do I use the same fs image?
<shoragan>
you have squashfs and wic in your IMAGE_FSTYPES?
<shoragan>
i think that in this case, wic will generate its own squashfs image for inclusion in the disk image
<shoragan>
you'll also have different filesystem uuids in that case
<shoragan>
as wic doesn't give access to the FS images it generates, RAUC uses the one generated via IMAGE_FSTYPES
<shoragan>
i think you could use --source rawcopy to use that image in wic as well. i normally use genimage and have a separate recipe for the disk image
<ak77>
shoragan: yes. both. ok. yes. i will go back to rawcopy now that i know about IMAGE_TYPEDEP
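A minimal sketch of the rawcopy approach shoragan describes, combined with the IMAGE_TYPEDEP dependency ak77 mentions. Image and machine names are placeholders, and the exact file= path may need adjusting (e.g. an absolute path under DEPLOY_DIR_IMAGE):

    # local.conf or the image recipe: build both artifact types,
    # and make wic wait for the squashfs it is going to reuse
    IMAGE_FSTYPES = "squashfs wic"
    IMAGE_TYPEDEP:wic = "squashfs"

    # my-image.wks: copy the already-built squashfs verbatim instead
    # of letting wic regenerate its own filesystem (and fstab/UUIDs)
    part / --source rawcopy --sourceparams="file=my-image-mymachine.squashfs"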
<michalsieron>
hi there, why is `TARGET_CC_KERNEL_ARCH` by default empty and not equal to something like `TARGET_CC_ARCH`?
<michalsieron>
the main thing I'm interested in is how to properly pass the `-mcpu` option to the kernel compilation, and why we don't do that by default
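For context: kernel-arch.bbclass defaults TARGET_CC_KERNEL_ARCH to an empty string and feeds it into KERNEL_CC, so a machine conf can opt in. A hypothetical example (the -mcpu value is purely illustrative):

    # conf/machine/my-machine.conf
    # kernel-arch.bbclass has: TARGET_CC_KERNEL_ARCH ?= ""
    # setting it here adds the flag to KERNEL_CC for kernel builds only
    TARGET_CC_KERNEL_ARCH = "-mcpu=cortex-a72"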
<agodard>
qschulz: yes most likely, thanks for the notice
<Jones42>
is anyone attending the 38C3 in Hamburg next week?
<mcfrisk>
how to test changes to linux-yocto kernel-meta by applying a patch there? some magic is needed..
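One commonly used answer, sketched as a bbappend; the patch name is a placeholder, and it assumes the recipe's default KMETA = "kernel-meta" checkout location:

    # linux-yocto_%.bbappend
    FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
    # kernel-meta is unpacked into its own directory, so the patch
    # must be pointed there rather than at the kernel source tree
    SRC_URI += "file://test-kmeta-change.patch;patchdir=${WORKDIR}/kernel-meta"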
<nvil>
hi, i'm trying to use namespaces but i think i'm missing something to be able to use user namespaces. i tried remapping uid and gid but the configuration changes nothing. does anybody know what library or variable i'm missing?
<rburton>
nvil: without seeing what you're doing it's hard to say. have you tried just "unshare -rU"? that should give you a "root" shell
<nvil>
rburton: no no i haven't tried it, i'll do it. thanks!
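The quick check rburton suggests, with illustrative output; if unshare fails with "Operation not permitted", CONFIG_USER_NS in the kernel config is a likely suspect:

    $ unshare -rU        # new user namespace, current uid/gid mapped to root
    # id
    uid=0(root) gid=0(root) groups=0(root)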
<simson2>
Hello, I have a quick question regarding emmc and nand. I understand the technical differences and I'm wondering what reasons speak for using raw nand memory when emmc is so much simpler to use
<LetoThe2nd>
simson2: price.
<simson2>
hm... makes sense. thanks!
<mischief>
does anyone know how to get icecc to play nice with ccache
<rburton>
why would you use both?
<mischief>
ccache is enabled by default through INHERIT here.. and i want to use icecc.
<rburton>
INHERIT:remove = "ccache"
<mischief>
that's certainly one way to do it
<rburton>
it's the best way to do it
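In local.conf form, the swap rburton describes might look like this (the ICECC_PARALLEL_MAKE value is just an example):

    # local.conf: drop the inherited ccache default, enable icecc
    INHERIT:remove = "ccache"
    INHERIT += "icecc"
    # icecc.bbclass uses this instead of PARALLEL_MAKE for remote jobs
    ICECC_PARALLEL_MAKE = "-j 24"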
<mischief>
on the same note, has anyone looked at sccache?
<mischief>
i gave it a go as an experiment but it's not really done
<rburton>
"absolute paths must match to get a cache hit" ouch
<mischief>
also, i have a patch for icecc :-D
<mischief>
had to remember how git send-email works..
<JaMa>
I see some benefits of using icecc for things like chromium, but do you really have faster external kernel modules builds with it?
<mischief>
there's a lot of them, so yes, somewhat
<JaMa>
a lot of recipes or many modules in a single recipe? how many cores are in your icecc cluster? we were using something like 140 for chromium and very few recipes were able to take advantage of that many cores; with 64 threads on a reasonably priced threadripper I don't see a benefit of icecc
<JaMa>
nowadays we rather use ccache for some of the recipes, but also with dubious benefits
<mischief>
128 threads in the cluster.
<mischief>
JaMa: both many recipes (more than 10) and in some cases many modules within recipes
<mischief>
the cluster is 4 ryzen 9 9950x, so they are pretty fast already.. but there are some actual improvements for things with many source files or building many recipes in parallel
<mischief>
well, i was going to build the kirkstone branch, since that's what we use, but meta-firefox wants python2.7 and this container is debian 12.. so i'll do scarthgap i suppose
<mischief>
scarthgap doesn't work either. :-(
<Guest43>
Hi there. Am i in the right place to ask about board support?
<rburton>
you can ask. depending on the board there might be better places to ask.
<Guest43>
Texas Instruments EVMK2GX
<Guest43>
The Yocto version available on the product page is outdated, so I'm wondering if the board is supported in newer Yocto versions.
<NizarNizar>
thank you
<tgamblin>
rburton: any insight into the dnf issue kanavin saw with '[OE-core][PATCH] python3: upgrade 3.13.0 -> 3.13.1'?
<tgamblin>
assuming you are used to dealing with dnf/libdnf shenanigans in builds
<kanavin>
you probably will be able to get a coredump out of it with 'ulimit -c unlimited', then run gdb as usual
vthor has quit [Quit: kill -9 $pid]
<kanavin>
it's not about dnf, it's only triggering a segfault in python itself
<tgamblin>
ah
<tgamblin>
and here I thought that doing a core-image-ptest-python3 build + buildall-qemu + reproducibility test for it would catch everything :)
<denix>
NizarNizar: if that's K2G, then it's very old and it was deprecated a couple of years ago
<denix>
NizarNizar: the last meta-ti release that had support for K2 platforms was dunfell
<denix>
dunfell is EOL, but it was an LTS and lasted from April 2020 until April 2024, so not too-too bad
<denix>
my understanding is that K2 platforms are slowly being deprecated from upstream as well, like kernel and u-boot
<kanavin>
tgamblin, that's a good test actually, I would not think of anything else to check locally.
<kanavin>
the full-cmdline thing is hitting some corner case.
<NizarNizar>
thank you denix
<JaMa>
mischief: what exactly doesn't work with scarthgap?
<tgamblin>
kanavin: what do you mean by running gdb as usual? I've never used it on bitbake/the build process like this
<kanavin>
tgamblin, I mean that once you have a coredump and the python executable, you can run gdb to get a stack trace of where the crash happens. it's a start
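Spelled out, kanavin's flow might look like this (file names illustrative):

    $ ulimit -c unlimited         # allow coredumps in this shell
    $ ./python3 crashing-case.py  # reproduce the segfault; writes a core file
    $ gdb ./python3 core
    (gdb) bt                      # C-level backtrace of the crash site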
<mischief>
JaMa: well, the scripts in this test-oe-build-time repo
<mischief>
JaMa: perhaps it is the addition of INHERIT += "icecc" that broke it. http://0x0.st/XCff.txt