LetoThe2nd changed the topic of #yocto to: Welcome to the Yocto Project | Learn more: https://www.yoctoproject.org | Community: https://www.yoctoproject.org/community | IRC logs: http://irc.yoctoproject.org/irc/ | Having difficulty on the list, with someone on the list or on IRC, contact Yocto Project Community Manager Letothe2nd | CoC: https://www.yoctoproject.org/community/code-of-conduct
<Schlumpf> Good morning, I get the following error in the do_install step when building net-tools: `build/tmp/work/core2-64-heuft/net-tools/2.10/recipe-sysroot-native/usr/bin/msgfmt: Cannot convert from "ISO-8859-15" to "UTF-8". msgfmt relies on iconv(), and iconv() does not support this conversion.`
<Schlumpf> Could this be a problem with my build host (Gentoo) or my image?
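A quick sanity check for the build-host side (a minimal sketch; gettext-native's msgfmt relies on the host C library's iconv(), since native tools link against the host libc):

```
# Does the host iconv support the conversion msgfmt needs?
printf 'test\n' | iconv -f ISO-8859-15 -t UTF-8

# Is the encoding known to the host at all?
iconv -l | grep -i '8859-15'
```

If either command fails, the host glibc is missing the relevant conversion module, which would point at the Gentoo host rather than the image.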
<mcfrisk> hmm, could the spdx tasks run in parallel for a larger set of binary packages? linux-yocto-6.12.13+git-r0 do_create_package_spdx - 13m37s feels slow on a fast machine with plenty of cores
<mcfrisk> linux-yocto-6.12.13+git-r0 do_create_package_spdx - 27m25s, yuck
<alessio> ERROR: net-snmp-5.9.3-r0 do_patch: Applying patch '0001-unload_all_mibs-fix-memory-leak-by-freeing-tclist.patch' on target directory '/home/alessio.bogani/build-images/ds-2.y/build/tmp/work/zen1-voltumna-linux/net-snmp/5.9.3-r0/net-snmp-5.9.3'
<mcfrisk> core-image-base-1.0-r0 do_create_image_sbom_spdx - 38m20s, this too is taking way too much time for such a small image. maybe I should disable SPDX things completely..
<rburton> mcfrisk: that's not usual, can you see what it's actually doing?
<mcfrisk> at least do_create_package_spdx was slowly processing each kernel module package. will check do_create_image_sbom_spdx in a new build..
<mcfrisk> rburton: do_create_image_sbom_spdx is very, very slowly doing https://pastebin.com/raw/HjZKD2Qc
<rburton> maybe an algorithmic bug somewhere? Paging JPEW for when he wakes ^^^
<mcfrisk> I think the spdx tasks are not parallelized at all, same for create_package_spdx, which iterates serially over all binary packages, and the kernel produces tons of them
<rburton> that total number seems excessive though. i admit the image i had to hand was -minimal but it has ~43k total not 3.5M
<mcfrisk> I'm building a much more modular linux-yocto kernel config: `wc -l core-image-base-genericarm64.rootfs.manifest` gives 1468, `grep kernel-module core-image-base-genericarm64.rootfs.manifest | wc -l` gives 1283
<mcfrisk> because having all those drivers built into the kernel makes the kernel itself slow, and also slows down udev with tons of events in initrd
<landgraf> Is there a way to assign a label to the generated rootfs ext4 image? Without wic
<JPEW> Spdx tasks will run in parallel, but they need the build/runtime dependencies
<JPEW> The tasks themselves are serial, which could maybe be improved.
<JPEW> And ya, 3.5M elements will be slow. Did you turn on source tracking or something?
<mcfrisk> JPEW: nothing special turned on, just building poky genericarm64 core-image-base. all SPDX tasks are horribly slow on an aarch64 build machine with plenty of cores.
<JPEW> Hmm shouldn't be too slow then. Maybe you can share your spdx when it finishes?
<landgraf> nvm found the way
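For reference, one way to label the ext4 rootfs without wic, and possibly what was found here, is EXTRA_IMAGECMD, which the ext4 image type passes through to mkfs.ext4:

```
# local.conf or image recipe: extra arguments for mkfs.ext4
EXTRA_IMAGECMD:ext4 = "-L rootfs"
```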
<JPEW> mcfrisk: oh, modular kernel config
<JPEW> .... Ya the kernel CVEs are killing it. It reports them for each generated kernel package, and there are a lot
<JPEW> So lots of kernel packages * lots of kernel CVEs
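(Rough arithmetic from the numbers in this discussion: ~3.5M elements spread over the ~1283 kernel-module packages in the manifest is roughly 2,700 elements per package, on the order of the number of known kernel CVEs, which fits the "packages * CVEs" explanation.)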
<mcfrisk> 3.4G spdx file for core-image-base :/
<mcfrisk> less dies on the file too
<rburton> ooouch
<mcfrisk> a single loooong line!?
<mcfrisk> $ wc core-image-base-genericarm64.rootfs-20250310113123.spdx.json: 0 149339 3553204902 core-image-base-genericarm64.rootfs-20250310113123.spdx.json
<rburton> who needs whitespace
<mcfrisk> I wonder why autobuilder/CI didn't see issues with these
<JPEW> mcfrisk: fewer kernel packages
<mcfrisk> nope, that's the wrong way to go. I don't think SPDX_INCLUDE_VEX should be enabled by default if it's this slow and explodes the exported data.
<mcfrisk> and IMO SPDX output should also be human-readable
<JPEW> mcfrisk: SPDX_PRETTY = "1"
<JPEW> The kernel is a bit of a pathological (but unfortunately common) case. Let me see if there is a better way to express it in SPDX so that there end up being fewer relationships
<JPEW> mcfrisk: or `cat *.spdx.json | jq .` :)
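For anyone following along, the build-time option as a local.conf sketch (SPDX_PRETTY is confirmed above; the jq pipe is the after-the-fact alternative):

```
# local.conf: emit indented, human-readable SPDX JSON at build time
SPDX_PRETTY = "1"
```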
<mcfrisk> so much time and resources wasted on SPDX things in default builds and I never look at the output, and it's not even human-readable by default :/ I understand what it is for, but I don't think maintainers actually look at the data; IT management systems may get the imports eventually, but I doubt anyone even looks at them..
<JPEW> mcfrisk: Feel free to turn it off if you don't want it
<RP> mcfrisk: you can make that argument for debug symbols too :/
<JPEW> I will try to see if we can make the pathological vulnerability case less annoying. The problem is mainly that we don't know which of the kernel packages that get built will actually be installed, so we can't take advantage of a lot of the "grouping" that SPDX allows
<JPEW> The normal CVEs * packages product is not a problem for most recipes, when there are fewer than 10 of each
<mcfrisk> buildhistory does export a lot of useful data for me, and that isn't enabled by default
<mcfrisk> what's the best way to disable spdx classes globally? IMAGE_CLASSES:remove = "create-spdx-image-3.0" only stops the image recipe handling
<JPEW> mcfrisk: INHERIT:remove = "create-spdx" maybe
<mcfrisk> JPEW: thanks, that works. The docs should be updated though; they claim it's not enabled by default..
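Putting this thread together, a minimal local.conf sketch for disabling SPDX generation entirely (poky enables it by default, as noted just below):

```
# local.conf: turn off SBOM/SPDX generation, which poky enables by default
INHERIT:remove = "create-spdx"
```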
<JPEW> mcfrisk: Could you raise a docs bug in bugzilla for that?
<RP> mcfrisk: it is enabled in poky
<JaMa> sakoman1: can you please include https://lists.openembedded.org/g/bitbake-devel/message/17414 in your next pull for kirkstone? it's backports from 2.6 in nanbield and it saves a lot of build time
<RP> JPEW: are you looking into those spdx sstate sig test failures yet or should I take a look?
<JPEW> I got side tracked. I'll have time later this week though. I suspect it's just got some vardeps that need ignored
<sakoman1> JaMa: OK, will do!
<rburton> JPEW: interestingly that expand url/meta-arm/spdx failure is breaking in a gitsm repo with urls from the .gitmodules
<JPEW> rburton: Ya, I saw that
<JPEW> What is EDK2 doing....
<rburton> the recipe itself is pretty much just two gitsm: urls
<rburton> but its been a very successful stress test for devtool in the past...
<rburton> i expect the submodules are many
<rburton> yeah 12 immediate submodules in edk itself
<rburton> should something be faking a SRCREV for the submodule sha?
<rburton> zeddii: did you take any reverts to the kernel related to graphics drivers when moving past 6.6?
<rburton> zeddii: new xserver doesn't like our /sys and the relevant fixes in the upgrade are meant to fix it with 6.9+
<rburton> wondering if you found this breakage and reverted something in the kernel to work around it
<rburton> hm no can't _see_ anything in git cherry between 6.12/standard/base and 6.12/base
<qschulz> rburton: any news on https://github.com/mesonbuild/meson/issues/13018 maybe?
<qschulz> I'm hitting the same issue with Buildroot when trying to build libcamera's qcam with Qt6
<qschulz> ah, just realized you didn't open the issue, you just commented on it
<rburton> qschulz: from that 30s of looking it shouldn't be too difficult, just need to use the right coredata
<khem> cmake has a similar issue when using Qt6, I think that's why the cmake wrapper class exists - https://code.qt.io/cgit/yocto/meta-qt6.git/tree/classes/qt6-cmake.bbclass
<qschulz> khem: rburton: I was considering finding a way to move the binaries back to /usr/bin instead of /usr/libexec
<qschulz> seems like this is what the cmake wrapper does with -DINSTALL_LIBEXECDIR
<qschulz> it sucks for Buildroot though as we would need to have this hack for all Qt6 packages
<qschulz> mmmm but it still uses the weak default /usr/libexec
<khem> yeah, but I think there was a valid reason that rburton had to move them to libexecdir which I do not remember
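If someone did want the binaries back under ${bindir}, a hypothetical sketch along the lines of what the wrapper class passes (whether a given Qt6 recipe's CMake build honours INSTALL_LIBEXECDIR is an assumption here):

```
# Hypothetical bbappend/recipe tweak; assumes the CMake build respects
# INSTALL_LIBEXECDIR as the qt6-cmake.bbclass wrapper suggests:
EXTRA_OECMAKE += "-DINSTALL_LIBEXECDIR=${bindir}"
```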
<rburton> can anyone remember where the qemush4 machine was?
<khem> RP: I have sent a revert of crucible; it was not accompanied by a version upgrade etc., so I think that made it simple to decide
<RP> khem: thanks, that is appreciated
<JPEW> RP: Ah... vardepsexclude isn't transitive? I have the flag on the create-spdx-3.0.bbclass, but it doesn't appear to block the var dep in the called python functions, which would explain the problem
<JPEW> Anyway, I think I can fix that easily enough
<RP> JPEW: that sounds right
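In other words, a sketch with a hypothetical variable name: the exclusion has to be repeated on each python function, because the flag on the task does not propagate to the functions it calls:

```
# SOME_VAR stands in for the offending variable (hypothetical name)
do_create_spdx[vardepsexclude] += "SOME_VAR"
# Not covered by the line above: a helper invoked from the task still
# pulls SOME_VAR into its own signature unless excluded there as well
spdx_helper_function[vardepsexclude] += "SOME_VAR"
```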
<JPEW> RP: K, I finished my other stuff earlier than expected, so I'll take a look
<RP> JPEW: I didn't get to it with meetings unfortunately
<khem> JPEW: pertinent task hashes will still get/ignore them regardless of transitiveness
<rburton> so yeah loongarch64 doesn't work again
<rburton> do we remove the qemu machine from core if it can't build a kernel or bootloader?
<rburton> maybe post-release and have a new rule that bsps in core are tested on the ab and well maintained?
<khem> rburton: its not officially supported arch so just ignore it
<khem> OE perhaps does not have such high bar anyway like yp would
<RP> rburton: I'm in the "ignore it" camp. They obviously have a layer and patches that let it work somehow/somewhere