LetoThe2nd changed the topic of #yocto to: Welcome to the Yocto Project | Learn more: https://www.yoctoproject.org | Community: https://www.yoctoproject.org/community | IRC logs: http://irc.yoctoproject.org/irc/ | Having difficulty on the list, with someone on the list or on IRC, contact Yocto Project Community Manager Letothe2nd | CoC: https://www.yoctoproject.org/community/code-of-conduct
qschulz has quit [Remote host closed the connection]
qschulz has joined #yocto
davidinux has quit [Ping timeout: 240 seconds]
davidinux has joined #yocto
jclsn has quit [Ping timeout: 256 seconds]
jclsn has joined #yocto
geoffhp has joined #yocto
parthiban has joined #yocto
parthiban has left #yocto [#yocto]
MrCryo has joined #yocto
MrCryo has quit [Remote host closed the connection]
Jones42_ has quit [Ping timeout: 252 seconds]
rob_w has joined #yocto
enok has joined #yocto
rfuentess has joined #yocto
Kubu_work has quit [Quit: Leaving.]
michael_e has joined #yocto
michael_e has quit [Client Quit]
ederibaucourt has quit [Quit: ZNC 1.8.2 - https://znc.in]
ederibaucourt has joined #yocto
ctraven has quit [Ping timeout: 268 seconds]
ctraven has joined #yocto
goliath has joined #yocto
enok has quit [Ping timeout: 240 seconds]
zpfvo has joined #yocto
ablu has quit [Ping timeout: 255 seconds]
ablu has joined #yocto
shoragan_ is now known as shoragan
mckoan|away is now known as mckoan
c-thaler has joined #yocto
polprog has quit [Ping timeout: 272 seconds]
enok has joined #yocto
leon-anavi has joined #yocto
Kubu_work has joined #yocto
enok has quit [Quit: enok]
enok has joined #yocto
halloy4985 has joined #yocto
<halloy4985> Hi, is there a way to revert a patch applied by cleansstate?
<JaMa> if the cleansstate task applies patches for you then you live in a very weird universe
<halloy4985> Haha. I mean cleansstate doesn't do it for me. Do you have a way to revert patches applied on an EXTERNAL_SRC SRC_URI, meaning the source is local, not fetched?
halloy4985 is now known as max_eip
<JaMa> remove them from SRC_URI if you don't want them to be applied
Jones42 has joined #yocto
<max_eip> Bro, I want to apply the patch, but I don't want to commit it to mainline.
<max_eip> Problem is, if I cleansstate, I can see some modified files which cause no problem, I just don't like them. I want cleansstate to do_unpatch so no change is observed in my local code.
<JaMa> first you wanted to revert it now you want to apply it, if you use EXTERNAL_SRC (I assume you meant EXTERNALSRC) then you're responsible for applying the changes there bro
<max_eip> I think we miscommunicated. Thanks for your time.
<LetoThe2nd> JaMa: hey bro I want a beer
<JaMa> LetoThe2nd: I need beer, now! :)
<LetoThe2nd> JaMa: apply from external source?
<JaMa> sometimes, when internal fridge source gets empty
max_eip has quit [Remote host closed the connection]
enok has quit [Ping timeout: 255 seconds]
sukbeom6 has joined #yocto
shivamurthy_ has joined #yocto
OnkelUll_ has joined #yocto
wCPO0 has joined #yocto
rsalveti_ has joined #yocto
dmoseley_ has joined #yocto
benkard has joined #yocto
fullstop_ has joined #yocto
rsalveti has quit [Read error: Connection reset by peer]
dl9pf has quit [Read error: Connection reset by peer]
wCPO has quit [Read error: Connection reset by peer]
mulk has quit [Read error: Connection reset by peer]
OnkelUlla has quit [Read error: Connection reset by peer]
sukbeom has quit [Read error: Connection reset by peer]
shivamurthy has quit [Ping timeout: 268 seconds]
patersonc has quit [Ping timeout: 268 seconds]
denix has quit [Ping timeout: 268 seconds]
fullstop has quit [Excess Flood]
patersonc_ has joined #yocto
dl9pf has joined #yocto
denix has joined #yocto
dmoseley has quit [Read error: Connection reset by peer]
asriel has quit [Ping timeout: 268 seconds]
dl9pf has joined #yocto
dl9pf has quit [Changing host]
shivamurthy_ is now known as shivamurthy
rsalveti_ is now known as rsalveti
wCPO0 is now known as wCPO
benkard is now known as mulk
sukbeom6 is now known as sukbeom
asriel has joined #yocto
fullstop_ is now known as fullstop
enok has joined #yocto
mvlad has joined #yocto
OnkelUll_ is now known as OnkelUlla
mbulut__ has joined #yocto
ardo has quit [Ping timeout: 246 seconds]
* RP wonders if we can merge the remaining unpack change with the fatal error for S=WORKDIR now
<JaMa> RP: I'm seeing quite a few failures in meta-virtualization and meta-security from UNPACKDIR changes (so I'm more concerned than with gcc-14)
<JaMa> I was also thinking about re-sending https://lists.openembedded.org/g/openembedded-core/message/197098 but didn't want to add another thing on your mind or AB queue :)
<RP> JaMa: other layers haven't adapted to the workdir changes yet, but they kind of can't/won't until I make things error
<RP> JaMa: at least that patch won't break core :)
<JaMa> yeah, I'm just concerned when these big changes land so soon after each other (which will make the triage of build failures in other layers more complicated) and people might not notice the "silent breakage" like ross did in "gawk: fix readline detection"
<JaMa> I'm glad I had gcc-14 in our world builds for a couple of months to catch all the issues it caused before the UNPACKDIR change lands, but other people maybe didn't do this in time
Guest13 has joined #yocto
<Guest13> regarding my issue yesterday with samba, rburton: looks like the issue is when I select a different machine (colibri-imx6); it works fine with tegra for example.. where can I start?
<JaMa> Guest13: bitbake-getvar
<Guest13> JaMa for what?
<JaMa> Guest13: SRC_URI, LIC_FILES_CHKSUM, DL_DIR, ..
<rburton> JaMa: i do try and look at the buildhistory for most patch series but don't do it all the time
<JaMa> rburton: yes, buildhistory is great and thank you for looking at it
<rburton> Guest13: maybe colibri-imx8 broke samba. with a fresh poky and meta-oe, build samba for qemuarm. if that works, then it's the BSP or your tooling or some other weird mangled build tree problem.
<JaMa> rburton: I wouldn't have noticed this one as we have PREFERRED_VERSION_gawk = "3.1.5" (cough gplv2)
<RP> JaMa: the configure breakage is worrying :/
<JaMa> RP: yes, I have seen few of those, but luckily they were fatal for what was explicitly enabled
<RP> logically I should wait and let gcc 14 settle. My own sanity say I should merge this and move on :/
<rburton> hm i wonder if its possible to extract the autoconf test list and results
* rburton has a cunning plan
rob_w has quit [Remote host closed the connection]
rob_w has joined #yocto
ardo has joined #yocto
<Guest13> interesting, my bbappend file was messing up samba's fetch.... am confused ;_;
<Guest13> FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
<Guest13> SRC_URI:colibri-imx6 = " file://imx6/smb.conf"
<Guest13> SRC_URI:jetson-tx2-devkit = " file://tx2/smb.conf"
<Guest13> do_install:append:colibri-imx6() {
<Guest13>     DST=${D}/etc/samba/smb.conf
<Guest13>     install -d ${DST}
<Guest13>     install -m 0644 ${WORKDIR}/smb.conf ${DST}
<Guest13> }
<Guest13> do_install:append:jetson-tx2-devkit() {
<Guest13>     DST=${D}/etc/samba/smb.conf
<Guest13>     install -d ${DST}
<Guest13>     install -m 0644 ${WORKDIR}/smb.conf ${DST}
<Guest13> }
<Guest13> FILES:${PN} += "/etc/samba/smb.conf"
rob_w has quit [Remote host closed the connection]
<JaMa> Guest13: SRC_URI:colibri-imx6 and SRC_URI:jetson-tx2-devkit are obviously wrong, maybe you wanted to use :append:<override> here
<rburton> Guest13: yeah that would be utterly breaking the fetch
<rburton> Guest13: mainly it _does not download the sources_
<rburton> golden rule of "why is this thing behaving weird": did you break it? verify it works without your local changes first.
<Guest13> ahhh, i was overriding the SRC_URI
<Guest13> instead of adding/appending the files
<JaMa> yes and using bitbake-getvar or bitbake -e if you don't understand the syntax at all
<rburton> for example "bitbake-getvar -r samba SRC_URI" will show that there is no tarball in the SRC_URI entry
<JaMa> well, use bitbake-getvar even if you understand the syntax well, because one can always forget about some nasty .bbappend or .inc file hiding somewhere
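The fix JaMa and rburton point at is that a bare `SRC_URI:colibri-imx6 = ...` replaces the whole variable, dropping samba's tarball from the fetch. A hedged sketch of what the .bbappend could look like instead (machine names and file paths taken from the paste above; note also that `file://imx6/smb.conf` unpacks into an `imx6/` subdirectory, and that the original `install -d ${DST}` would create a directory named smb.conf):

```bitbake
# Sketch only: append per-machine config files to SRC_URI instead of
# overwriting it, so the base recipe's tarball is still fetched.
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI:append:colibri-imx6 = " file://imx6/smb.conf"
SRC_URI:append:jetson-tx2-devkit = " file://tx2/smb.conf"

do_install:append:colibri-imx6() {
    # create the directory, then install the file into it;
    # the file was fetched as imx6/smb.conf, so it unpacks under imx6/
    install -d ${D}${sysconfdir}/samba
    install -m 0644 ${WORKDIR}/imx6/smb.conf ${D}${sysconfdir}/samba/smb.conf
}

FILES:${PN} += "${sysconfdir}/samba/smb.conf"
```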
Guest13 has quit [Quit: Client closed]
<mcfrisk> I always check my changes with bitbake -e. Even trivial things may not be so trivial when a lot of layers, bbappends etc are involved and a silly monkey (me) banging the keyboard with typos etc...
florian has quit [Ping timeout: 252 seconds]
florian has joined #yocto
Esben has joined #yocto
Vonter has quit [Ping timeout: 260 seconds]
lexano has joined #yocto
florian__ has joined #yocto
Vonter has joined #yocto
krissmaster has joined #yocto
florian has quit [Ping timeout: 268 seconds]
florian has joined #yocto
OnkelUlla has quit [Remote host closed the connection]
polprog has joined #yocto
jmiehe has joined #yocto
rber|res has quit [Read error: Connection reset by peer]
wooosaiiii has quit [Quit: wooosaiiii]
wooosaiiii has joined #yocto
OnkelUlla has joined #yocto
<RP> process_possible_migrations() in runqueue is taking over 1200s to complete a migration pass :(
<RP> looks like it is stuck in unihash queries
Jones42 has quit [Ping timeout: 260 seconds]
Jones42 has joined #yocto
enok has quit [Ping timeout: 264 seconds]
wooosaiiii has quit [Quit: wooosaiiii]
wooosaiiii has joined #yocto
<RP> JPEW: around? I'm seeing some hashserver performance issues, was there a way to get server statistics ?
<RP> JPEW: I've written up my findings so far and emailed
<JPEW> I'm in and out this morning. 'bitbake-hashclient stats' (I think that's the command) might be useful
c-thaler has quit [Quit: Client closed]
<RP> JPEW: thanks, that is what I wasn't spotting
<RP> bitbake-hashclient --address wss://hashserv.yoctoproject.org/ws stats gives "average": 0.06250257539454776
<RP> For 38000 tasks, that would be a problem :(
<JPEW> Ya
<RP> locally it is "average": 0.0004462178160857849
<JPEW> I wonder if the support for parallel queries would help
<JPEW> Is the server CPU bound?
<RP> JPEW: I don't have access to the server, we'll need michael for that. I'm at least trying to give us a way to quantify and show where the issue is
<JPEW> Ya. I'll see if I can dig up the parallel patches and you can see if they help
<JaMa> "average": 0.00014738774137496095 (over local unix://) I win :)
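The arithmetic behind RP's "38000 tasks would be a problem" can be sketched roughly (the 0.0625s average is the measured figure from the conversation; the perfectly-overlapping-parallelism model is a simplification that ignores server-side contention):

```python
# Rough upper-bound estimate of total unihash query time, assuming each
# query costs the measured average round trip and queries overlap
# perfectly up to the parallelism limit. Illustrative only; real
# behaviour depends on server load and connection setup.
import math

def query_time(tasks, avg_rtt, parallel=1):
    # number of sequential "waves" of queries times the per-query cost
    return math.ceil(tasks / parallel) * avg_rtt

serial = query_time(38000, 0.0625)        # one query at a time
with_10 = query_time(38000, 0.0625, 10)   # e.g. BB_HASHSERVE_MAX_PARALLEL = 10
print(f"serial: {serial / 60:.1f} min, 10-way parallel: {with_10 / 60:.1f} min")
```

On these numbers, serial queries alone cost about 40 minutes before any task runs, which matches the stalls seen on the autobuilder.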
enok has joined #yocto
Kubu_work has quit [Ping timeout: 268 seconds]
Jones42 has quit [Remote host closed the connection]
Jones42 has joined #yocto
Jones42 has joined #yocto
Jones42 has quit [Changing host]
<JPEW> RP: Parallel support is already in; is BB_HASHSERVE_MAX_PARALLEL set on the AB?
<RP> JPEW: it is not. What should we set that to?
<RP> JPEW: if the problematic code is doing this: https://git.yoctoproject.org/poky/commit/?h=master-next&id=37c0c8890261cc0e52239cdd70844653fa1c91d3 (i.e. single get_unihash() calls), would we benefit from that?
<JPEW> Ah, ya, that would not help
<JPEW> Wait, that is parellel
<RP> JPEW: I'm making it more parallel with that patch?
<RP> (which is unmerged)
<JPEW> Ah, OK. Yes, that looks correct and should help
<RP> JPEW: I don't think it will change much :(. What is a reasonable value for that variable? 100?
<JPEW> You can try 100 I guess, but that seems a little high to me, at least until we can verify that the hashserve itself can actually handle 100 requests (e.g. doesn't become CPU bound, run out of TCP connections, etc.)
<JPEW> I suppose 100 might at least tell us if it will fix the problem
Xagen has joined #yocto
<RP> JPEW: I can put 10 into the configs, see if it helps. Setting 100 locally didn't seem to speed things up that much
<JPEW> It won't since the local server can't parallelize the SQL queries
<RP> JPEW: I was trying against the public server
<RP> JPEW: it is bad. https://valkyrie.yoctoproject.org/#/builders/17/builds/13 - 1hour 20 and still not past setting up the tasks :(
<JPEW> Is that the new AB?
<RP> JPEW: yes
<RP> well, a test cluster modelling it
<JPEW> It's _too_ fast :)
<RP> ?
<JPEW> It the hash server close?
<JPEW> Or still hosted on the YP infra
<RP> JPEW: it is probably on the wrong continent atm :(
Guest13 has joined #yocto
<JPEW> Ya, that doesn't help. The parallel connections will help a little, since they should reduce the average connection latency
<JPEW> But.... you'll need a lot to overcome that level of latency (probably too many)
<RP> JPEW: crazy thought. If hashequiv of task A isn't present, is there any point in looking up hashes for tasks which depend on A ?
<Guest13> I have a quick question: bitbake-getvar -r z-image --value IMAGE_ROOTFS outputs /home/ubuntu/z/builder/build/tmp/work/p3768_0000_p3767_0001-poky-linux/z-image/1.0-r0/rootfs however, the rootfs is not here (it only has a "temp" folder with logs). Where can I find the rootfs? (I need to debug if a certain service was installed correctly)
<JaMa> Guest13: if you're using rm_work, then it was probably already removed
<JPEW> RP: Have to think on that one; gut instinct is... yes
<RP> JPEW: have a think. Reporting makes sense, sure but I think there might be an optimisation short cut once a leaf dependency doesn't match
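RP's shortcut, as I read it, is a reverse-dependency walk: once the unihash lookup for task A misses, every task whose hash incorporates A's can skip its own lookup. A sketch under that assumption (names and data structures are illustrative, not bitbake's actual runqueue code):

```python
# Hypothetical sketch: given a task dependency graph and the set of
# tasks whose unihash lookup missed, compute all tasks whose lookups
# can be pruned because a (transitive) dependency already missed.
from collections import deque

def prune_queries(deps, misses):
    """deps maps task -> set of tasks it depends on.
    misses is the set of tasks whose unihash lookup failed.
    Returns the set of dependent tasks whose lookups can be skipped."""
    # invert the graph: task -> tasks that depend on it
    rdeps = {}
    for task, parents in deps.items():
        for p in parents:
            rdeps.setdefault(p, set()).add(task)
    skip = set()
    queue = deque(misses)
    while queue:
        t = queue.popleft()
        for dependent in rdeps.get(t, ()):
            if dependent not in skip:
                skip.add(dependent)
                queue.append(dependent)
    return skip

# tasks b and c depend on a; d depends on b
deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b"}}
print(sorted(prune_queries(deps, {"a"})))  # ['b', 'c', 'd']
```

With deep dependency chains, one leaf miss can prune most of the remaining queries, which is why this could help a lot on a high-latency link.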
enok has quit [Ping timeout: 268 seconds]
<RP> "Please stop pasting release notes (or at least the user mentions) in your commit messages. GitHub spams every single person mentioned in every commit like this that you push." - https://github.com/daregit/yocto-combined/commit/1342b314a0fcb2f68171ff5c396f015b1c42dfe2#commitcomment-142281540
<mcfrisk> github crazy
<JPEW> Heh, fun
<JaMa> yeah even #NN trigger notifications in often unrelated PRs or issue tickets :/
<mcfrisk> I hope spammers figure this out soon
<JaMa> is this yocto-combined something which should replace combo-layer, or something unofficial? in which case why did RP notice it (got spammed as well because he is the committer there)
<JaMa> ?
<RP> JaMa: nothing to do with me. I get all kinds of spam from github. This is probably due to my S-o-b or as the committer
<RP> "You are receiving this because you authored the thread." - no I didn't
<JaMa> wangmingyu84 authored and rpurdie committed on Nov 16, 2021
<JaMa> if the thread is started by the commit .. even when it's just "Submodule poky updated from 5ce6bb to aa9b00" yeah GH is crazy
<JaMa> I've disabled most of my notifications there, but then I sometimes miss something and these @mentions are evil indeed, I think I kept those notifications enabled
<RP> JaMa: I "commit" enough changes I get a ton of weird stuff
<JaMa> co-pilot should sort out what @mentions are just unintentional drive-by mention in commit message and where the people really meant to summon someone
LocutusOfBorg has quit [Ping timeout: 240 seconds]
<qschulz> It's one of the "issues" with Mastodon as well: if someone answers you, you'll by default be added with your @ to the answers of that answer, unless they remove you
rfuentess has quit [Remote host closed the connection]
Xagen has quit [Quit: Textual IRC Client: www.textualapp.com]
Kubu_work has joined #yocto
Xagen has joined #yocto
<JaMa> RP: FYI: I'm testing the UNPACKDIR changes as they are today in master-next and noticed that for recipes with npm.bbclass the sources are now "duplicated" in ${WORKDIR}/git/git, e.g. jsdoc-to-ts-native/1.0.0/git/git/README.md for a recipe with S = "${WORKDIR}/git"; I'm trying to figure out if it's caused by npm/npmsw or something else in this recipe, so just FYI
enok has joined #yocto
<RP> JaMa: was that with a clean builddir or an existing one?
dmoseley_ has quit [Quit: ZNC 1.9.0 - https://znc.in]
<JaMa> it's reproducible after cleansstate
<JaMa> git -c gc.autoDetach=false -c core.pager=cat -c safe.bareRepository=all clone -n -s /OE/build/downloads/git2/github.com.enactjs.jsdoc-to-ts.git/ /OE/build/luneos-styhead/tmp-glibc/work/x86_64-linux/jsdoc-to-ts-native/1.0.0/sources-unpack/git/
<JaMa> this seems to work, but then it's probably moved to subdirectory instead, adding debug to base.bbclass to see why
<RP> JaMa: I'd guess there is a [dirs] creating ${S} somewhere
<RP> we should probably make the shutil.move more robust
mbulut__ has quit [Ping timeout: 240 seconds]
dmoseley has joined #yocto
LocutusOfBorg has joined #yocto
enok has quit [Ping timeout: 260 seconds]
florian__ has quit [Ping timeout: 264 seconds]
<JaMa> RP: I think it's npmsw fetcher creating ${S} in https://git.openembedded.org/bitbake/tree/lib/bb/fetch2/npmsw.py#n277 which is called from base.bbclass between deleting bb.utils.remove(workdir + '/' + basedir, True) and shutil.move to it
<RP> JaMa: it is hardcoded as S in there which it shouldn't be :(
<RP> That code is plain wrong :(
<JaMa> there are even more issues in that code :/
<RP> JaMa: not entirely surprising :(
<JaMa> I have some fixes for it in https://git.openembedded.org/bitbake-contrib/log/?h=jansa/master but it's a mess so I haven't finished it to usable version
<JaMa> setting S to ${UNPACKDIR}/git for these recipes with npmsw:// is usable work around, right?
<JaMa> I was testing a change to switch S to UNPACKDIR for all our recipes before, but that was causing quite a few conflicts between branches (so if needed I would keep it only for npmsw recipes for now)
<JaMa> sure, sec
<JaMa> yes, this seems to work
<JaMa> thanks
<RP> JaMa: great, I'll queue that as it shouldn't know anything about S
mckoan is now known as mckoan|away
<JaMa> one less skeleton ready to jump as soon as you merge UNPACKDIR change :)
<JaMa> zeddii: are you already looking at meta-virtualization failures from https://git.openembedded.org/openembedded-core/commit/?id=cc4ec43a2b657fb4c58429ab14f1edc2473c1327 ?
<RP> JaMa: yes!
<JaMa> zeddii: khem fixed some go recipes in meta-oe already, but the changes in meta-virtualization will be a bit bigger and will need you to adjust your scripts to generate e.g. recipes-containers/docker-compose/src_uri.inc
<zeddii> Jama: yes, but I'm traveling until the weekend, so it won't be before then.
<JaMa> ack
<JaMa> zeddii: also please merge https://lists.yoctoproject.org/g/meta-virtualization/message/8730 to kirkstone when you're back
<halstead> JPEW: the new hashserv is in North America but it's set up to be distributed to multiple continents. We only have the one end point right now. I can check on CPU.
<khem> JaMa: I was wondering if I should merge the UNPACKDIR in meta-openembedded now, world builds are clean for x86_64
<khem> one oscam recipe is showing some issue it uses svn fetcher I plan to switch to using a git mirror for it
<JaMa> has anyone seen Armin lately? meta-oe kirkstone and scarthgap have been broken for a while and multiple people were complaining (I didn't because I'm still grateful to him and khem that I no longer need to maintain meta-oe :))
<khem> armpit is in room here
<RP> khem: he is very quiet though!
<khem> RP: seems so :)
<JaMa> yes, I've seen one e-mail from him on May 14 and Apr 28 before that, so very quiet lately
<halstead> JPEW: the AWS frontend and the database aren't CPU or IO bound that I can see.
<halstead> It might be some connection limit. I'll check.
<RP> halstead: you can clearly see the issue at the top of https://valkyrie.yoctoproject.org/#/builders/17/builds/12/steps/12/logs/stdio "Bitbake still alive (no events for 7200s). Active tasks:"
<RP> halstead: that means it didn't run anything for 7200s as it spent that time trying to talk to the hash server
<JaMa> khem: or if you're willing to add https://lists.openembedded.org/g/openembedded-devel/message/110244 to scarthgap and https://lists.openembedded.org/g/openembedded-devel/message/110196 to fix parsing which is broken since April 30
<khem> let me see
<JPEW> Seems likely that it's the latency then
<JPEW> Let me see if I can measure it
<JaMa> RP: found another npmsw issue related to UNPACKDIR I think :/ will debug more and let you know
Kubu_work has quit [Quit: Leaving.]
yudjinn has quit [Ping timeout: 264 seconds]
<RP> JaMa: :( I can't say I'm surprised unfortunately
Esben has quit [Remote host closed the connection]
<qschulz> tlwoerner: fighting with extlinux + non-fitImage builds on a Rockchip device right now
<qschulz> tlwoerner: somehow, the kernel+dtb aren't installed in the /boot partition
<qschulz> trying to debug who's installing it there and based on what variable, if you have any hint, I'll take it :)
florian__ has joined #yocto
nerdboy has quit [Remote host closed the connection]
leon-anavi has quit [Quit: Leaving]
<JPEW> RP: Sent a patch to bitbake-hashclient that can be used to measure the roundtrip latency
zpfvo has quit [Remote host closed the connection]
nerdboy has joined #yocto
nerdboy has quit [Changing host]
nerdboy has joined #yocto
<RP> halstead: ^^^
<RP> JPEW: thanks!
<halstead> JPEW: Thanks in the meantime I'm tweaking connection handling to see if we can increase throughput.
<RP> JPEW: should we add a "time 50 dummy queries" option too ?
<RP> JPEW: that would tell whether the database or the connection is the holdup?
<JPEW> `bitbake-hashclient stress` will do that
<RP> halstead, JPEW: Given https://autobuilder.yoctoproject.org/typhoon/#/builders/108/builds/6028 has been going 19 hours, I'm not convinced this is a geo issue
<JPEW> Rp: Ya fair
<halstead> hashserv.yoctoproject.org and the typhoon cluster have 6ms latency
<halstead> hashserv.yoctoproject.org and the valkyrie cluster have 181ms latency
<RP> halstead: it means valkyrie will be slower but it probably isn't the only issue :/
<RP> 10000 requests in 239.2s. 41.8 requests per second
<RP> Max request time was 3.89533072s
<RP> Average request time 0.02391578s
<halstead> I've tweaked settings to allow many more simultaneous connections. Lets see if we can get some load on the system now.
<RP> "10000 requests in 9.5s. 1048.2 requests per second" was a local server
<JaMa> "10000 requests in 1.5s. 6642.2 requests per second" is my local
<RP> JaMa: that was a domain socket though and not over http?
<JaMa> yeah unix:// again
<RP> halstead: "bitbake-hashclient --address wss://hashserv.yoctoproject.org/ws stress" is the time we need to improve somehow :/
<RP> halstead: "10000 requests in 186.1s. 53.7 requests per second" this time
<JPEW> RP: just sent a patch to improve the stress stats reporting
sudip has quit [Ping timeout: 252 seconds]
<RP> JPEW: that getunihashes change to runqueue breaks bitbake :(
<JPEW> For me, the average round trip time for stress (0.108) and ping (109) are pretty close, which makes me believe the database is not the problem
<RP> ERROR: libxdamage-1_1.1.6-r0 do_package: Cache unihash 7b0e8235065c28cb2a4726c08f44ce3a48cbd0548af279f9c3ad851a71e44e46 doesn't match BB_UNIHASH 10eb130e8cec19ba27bb4d9176fa1255ce1747fdc5977b231e316f7fbf297b3e
<JPEW> Weird. I'll have to take a look
<RP> JPEW: it'll be some kind of cache coherency issue
<JPEW> RP: For sure
sudip has joined #yocto
<RP> I did worry a bit when I tweaked the code, clearly I need to look deeper
* JPEW needs to eat lunch
* RP also needs to find food
<RP> JPEW: give the idea of stopping queries when one fails some thought. The more I think about it, the more I think this could help a lot
<RP> not all queries, just queries that would use that hash as part of the next hash
<qschulz> I added INHERIT += "buildhistory" in conf/local.conf and did two builds with two different machines
<qschulz> I only have the buildhistory for the second machine
<qschulz> I removed build/buildhistory but it doesn't get regenerated
<qschulz> what am I doing wrong here
<RP> qschulz: different branches maybe?
<RP> qschulz: did you set it to commit?
<qschulz> RP: the default is commit, but didn't want to look into this, so disabled it after
<qschulz> the thing is, I don't have the buildhistory directory anymore; why is it not being recreated?
<halstead> 10000 requests in 36.3s. 275.5 requests per second on typhoon
<halstead> 10000 requests in 254.5s. 39.3 requests per second on valkyrie
<RP> halstead: sounds like JPEW is right and it is latency
<RP> qschulz: it will if you build new things?
<RP> qschulz: it behaves differently to other bits of the code, it doesn't restore from sstate, it logs
<qschulz> RP: invalidating the cache you mean... true, could try that
<qschulz> RP: i'm trying to identify who's pulling a package into the rootfs, I think/hope buildhistory could help with that
<qschulz> but not too familiar with it, so probably hitting a nail with the wood part of the hammer :)
<JaMa> the depends files will help and buildhistory is useful for other things as well, so good to get familiar even when there are other ways to query this
<halstead> 275.5 r/s still seems low. 500 r/s should be an easy target.
<qschulz> JaMa: yes, but I really want the RDEPENDS part, I'm not at all interested in why the recipe is built (its the kernel, I know why :) )
<JaMa> qschulz: yes, buildhistory/images/qemux86_64/glibc/core-image-minimal/depends-nokernel-nolibc-noupdate-nomodules.dot shows the RDEPENDS
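JaMa's hint can be scripted: the buildhistory depends-*.dot files are Graphviz graphs, so finding what pulls a package into the image is a matter of scanning edges. A minimal sketch (the sample graph and package names are illustrative; real buildhistory .dot files carry extra edge decoration this simple regex ignores):

```python
# Hypothetical helper: given the text of a buildhistory depends-*.dot
# file, return the packages with an edge pointing at `target`, i.e.
# the packages that (R)DEPEND on it.
import re

def who_pulls_in(dot_text, target):
    pullers = set()
    # match simple  "a" -> "b"  edge lines
    for m in re.finditer(r'"([^"]+)"\s*->\s*"([^"]+)"', dot_text):
        if m.group(2) == target:
            pullers.add(m.group(1))
    return pullers

sample = '''digraph depends {
"kernel-module-foo" -> "kernel-6.6.0"
"kernel-6.6.0" -> "kernel-image"
}'''
print(who_pulls_in(sample, "kernel-6.6.0"))
```

Running the query repeatedly, each time on the package it just reported, walks the chain back to whatever sits in the image's install list.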
<RP> halstead: builds on typhoon are also running slowly, just probably not as slowly as valkyrie :/
<qschulz> RP: yup, added a package, the image is rebuilt -> new buildhistory directory
<JaMa> halstead: how can I know if wss://hashserv.yoctoproject.org/ws from here is hitting typhoon or valkyrie?
alimon has joined #yocto
<halstead> JaMa: I'm measuring from those clusters to hashserv.yoctoproject.org. There is only one v2 hashserv right now and it's in North America
<JaMa> I see 34.221.58.120 and 10000 requests in 240.7s. 41.6 requests per second
<halstead> I'll get an EU copy up once we solve the performance issue where latency isn't a concern.
<JaMa> aha thanks, I thought there were 2 hashservs, not 2 clusters accessing the same hashserv, sorry for noise
florian__ has quit [Ping timeout: 264 seconds]
<qschulz> JaMa: seems like my kernel-module-* packages are pulling in kernel-<version> which pulls in kernel-image which pulls in kernel-image-fitImage
<qschulz> but it doesn't make sense to me that kernel-module-* would RDEPENDS on the kernel
<JaMa> doesn't the kernel package provide e.g. modules.dep files?
<qschulz> JaMa: it provides modules.builtin, modules.builtin.modinfo and modules.order in /lib/modules/*/
<qschulz> JaMa: that was a very nice hint, thank you
<qschulz> tlwoerner: did you test your /boot merging with a device that doesn't have kernel-modules in MACHINE_EXTRA_RRECOMMENDS?
tgamblin has quit [Ping timeout: 256 seconds]
tgamblin has joined #yocto
florian__ has joined #yocto
florian_kc has joined #yocto
florian__ has quit [Read error: Connection reset by peer]
florian__ has joined #yocto
florian_kc has quit [Ping timeout: 256 seconds]
Guest13 has quit [Ping timeout: 250 seconds]
<qschulz> tlwoerner: I also assume we should make the MACHINE_ESSENTIAL_EXTRA_RDEPENDS depend on UBOOT_EXTLINUX being 1 otherwise we add packages to the image that the user didn't need
enok has joined #yocto
dkl has quit [Quit: %quit%]
dkl has joined #yocto
dkl has quit [Remote host closed the connection]
dkl has joined #yocto
bryan has joined #yocto
yudjinn has joined #yocto
bryan has quit [Client Quit]
bgreen has joined #yocto
bgreen is now known as bryan
bryan is now known as bgreen
ptsneves has joined #yocto
Starfoxxes has quit [Quit: Leaving]
bgreen has quit [Changing host]
bgreen has joined #yocto
Saur_Home has quit [Quit: Client closed]
Saur_Home has joined #yocto
florian__ has quit [Ping timeout: 256 seconds]
jpuhlman- has quit [Read error: Connection reset by peer]
jpuhlman has joined #yocto
florian__ has joined #yocto
<bgreen> Is this a good place to ask about issues with bitbake's multiconfig feature?
<bgreen> I'll start with a basic question about multiconfig: if I have a multiconfig configuration that only changes MACHINE, is it necessary to also set a new TMPDIR? I had hoped not, but my experience is suggesting otherwise.
Jones42 has quit [Ping timeout: 260 seconds]
<RP> bgreen: it depends how compatible the MACHINEs are with each other and how well the BSPs are written
<RP> in theory it should work but there are ways things could break
<bgreen> In my use case, I am trying to use multiconfig to build an "mfg" image that is different from a regular image mainly in having a differently-configured kernel. So the mfg multiconfig does MACHINE:append = "-mfg", and the machine config for mfg sets a different value for PREFERRED_PROVIDER_virtual/kernel
<bgreen> I will for example run 'bitbake main-image mc:mfg:mfg-image' to build the regular and mfg image at the same time
<RP> it would probably work if the MACHINES were two different names. Same name for MACHINE might make it tricky
<bgreen> but something odd happens. sometimes, a task for a particular recipe will get executed twice, concurrently - once for each config.
<bgreen> but the MACHINES are two different names. ie. am62xx and am62xx-mfg
<RP> are they two different dirs under work ?
<bgreen> no, they aren't. the recipe has a default package arch, so MACHINE_ARCH which is common
<RP> which would be a problem in this case
<RP> that is why it is breaking
<bgreen> how is that a problem in this case? I was thinking I wouldn't need a separate TMPDIR.
<RP> you've told it to run the builds concurrently. Usually two MACHINES would have different MACHINE_ARCH so it would work. In this case they don't
<JaMa> any idea what would be causing pseudo issues in a clean TMPDIR with today's master-next? path mismatch [2 links]: ino 29766796 db '/OE/build/oe-core/tmp-glibc/work/qemux86_64-oe-linux/base-files/3.0.14/packages-split/base-files/etc/issue' req '/OE/build/oe-core/tmp-glibc/work/qemux86_64-oe-linux/base-files/3.0.14/sstate-build-package/package/etc/issue'. multiple recipes, but always between package and sstate-build-package
<RP> JaMa: with the recent PSEUDO_IGNORE_PATHS bits? I've not tested those yet
<bgreen> why would two MACHINES have different MACHINE_ARCH? can't you have two machines with same underlying aarch64 architecture?
<bgreen> for machine specific recipes of course, there are build directories for each.
<RP> bgreen: MACHINE_ARCH packages are machine-specific packages, not architecture-specific packages
<JaMa> RP: I don't have "base/bitbake.conf: Move S/B to PSEUDO_IGNORE_PATHS unconditionally" yet
<bgreen> you are right, I misspoke. There are separate directories for each machine arch
Haxxa has quit [Quit: Haxxa flies away.]
<RP> JaMa: ok, that is probably good. I don't know why you're seeing that though, it worked in all the tests I ran
<JaMa> yes, it's strange, it worked in different build dir before, now I've switched to "smaller" build to reproduce the npmsw issue in isolation with public recipe and it started to fail everywhere, but will investigate
<RP> bgreen: I don't know the failure you saw and I'd expect tasks to run in parallel if they're in separate work directories. I can't really comment further
<JaMa> http://errors.yoctoproject.org/Errors/Build/184179/ for some reason this doesn't show the 2 do_package failure from pseudo aborts
<bgreen> so, the packages which have PACKAGE_ARCH set to MACHINE_ARCH build in separate directories.
<bgreen> but many recipes do not set PACKAGE_ARCH to MACHINE_ARCH.
Haxxa has joined #yocto
<RP> bgreen: but those are identical between the machines ?
<RP> in theory they should be. If they're not, that would be a problem
mvlad has quit [Remote host closed the connection]
<RP> bgreen: the yocto-check-layer tests would check some of these things
<bgreen> most recipes don't have PACKAGE_ARCH set to MACHINE_ARCH. The default for PACKAGE_ARCH is TUNE_PKGARCH
<bgreen> for MACHINE=am62xx, thats PACKAGE_ARCH=aarch64, by default. majority of packages build under the aarch64 dir
<bgreen> and boost shouldn't be built once for MACHINE=am62xx and again for MACHINE=am62xx-mfg. And it's not, but the do_package task is getting invoked twice, concurrently
yudjinn has quit [Ping timeout: 268 seconds]
<RP> bgreen: what it means is that something in boost is machine specific so either you fix that or mark it machine specific
<RP> it shouldn't be machine specific. The yocto-check-layer tests would help track down which recipes have problems like this
<bgreen> I've looked and I don't see anything. But I'll check again.
<RP> bgreen: it could be a dependency of boost?
<bgreen> I'll take a look at yocto-check-layer.
<bgreen> It's rather hard to tell what might be causing this.
<RP> yocto-check-layer was designed to try and show where layers have issues like this...
<RP> that and some of the oe-selftest sstatetests
<JaMa> or you can try scripts/sstate-diff-machines.sh, I'm still using this to detect issues like this
<RP> that is probably the better one to try
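The sstate-diff-machines.sh run would look roughly like this (flags per the script's usage text in oe-core; machine names illustrative, taken from the earlier messages):

```shell
# Build sstate signatures for each MACHINE in turn, then diff them to
# find recipes whose signatures unexpectedly differ between machines
# (i.e. recipes that are accidentally machine specific).
scripts/sstate-diff-machines.sh \
  --tmpdir=tmp \
  --machines="am62xx am62xx-mfg" \
  --targets=world \
  --analyze
```

A CLI sketch only; it must be run from an initialized build directory.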
<bgreen> in the log I get these two lines in close proximity and then a failure ends up happening:
<bgreen> NOTE: Running task 6926 of 8368 (/opt/srv/jenkins/root/workspace/AM62x/test_build/oe/meta-aos/recipes-support/boost/aosboost_1.71.0.bb:do_package)
<bgreen> NOTE: Running task 7395 of 8368 (mc:mfg:/opt/srv/jenkins/root/workspace/AM62x/test_build/oe/meta-aos/recipes-support/boost/aosboost_1.71.0.bb:do_package)
<bgreen> thanks, I'll take a look at those tools to see if they can help track it down.
<JaMa> khem: do you have a clean world build with meta-oe/master-next on top of oe-core/master-next? I've started to test UNPACKDIR from oe-core/master-next a bit more today and I'm still seeing e.g. the nodejs-oe-cache-native failure, which doesn't seem to be fixed in meta-oe/master-next; will send it once I get my build tests usable again
<RP> bgreen: in tmp/stamps/xxx there will be two do_package siginfo files. bitbake-diffsigs might show an interesting difference
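bitbake-diffsigs can also locate the signature files itself; a sketch using its -t/--task mode (recipe name from the log paths; output will vary):

```shell
# Compare the two most recent do_package signatures for the recipe and
# print which variables or task dependencies changed between them.
bitbake-diffsigs -t aosboost do_package
```

Again a CLI sketch: it needs the build environment sourced so the stamp files are found.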
yudjinn has joined #yocto
<JaMa> RP: fwiw: the pseudo issue is reproducible in the other build directory as well if I force a rebuild of e.g. base-files, will bisect what's causing that
<JaMa> lucky me, base-files doesn't take long to rebuild :)
<khem> JaMa: yes, however it's with the yoe distro, so there might be some packages which are left out
enok has quit [Ping timeout: 240 seconds]
<khem> JaMa: noticed a few more this morning, always pick the latest master-next
<khem> and also latest master-next of oe-core
<RP> JaMa: FWIW I tried a bitbake base-files -C unpack a few times and I didn't see any errors
<khem> JaMa: https://snips.sh/f/dqBdd6FdsD qemux86_64/glibc
<khem> JaMa: the clean build was with musl, which has a bit fewer packages supported
<JaMa> khem: https://git.openembedded.org/meta-openembedded-contrib/commit/?h=jansa/master&id=f0a77ff0db231b30eaca7d05b93cec4dbe1f4afd is the one I was seeing now
ptsneves has quit [Ping timeout: 240 seconds]
<JaMa> but it could be some gremlins somewhere, now I'm "seeing" things with pseudo as well, who knows .. ah I guess I know now
<JaMa> I've switched to python 3.13 to test something about 2 hours ago and then forgot that I did
<JaMa> ok, regular oe-core/master failed the same, so it probably is python 3.13
<RP> JaMa: hmm. What did they do in python 3.13?! :/
<JaMa> I've saved the TMPDIR for later to compare
<JaMa> but after switching back to 3.12 it works again
<RP> JaMa: might be a missing glibc function intercept :/
<JaMa> 3.13 doesn't even pass own testsuite when pgo is enabled, so it might be anything at this point
<bgreen> am62xx_esr and am62xx_esr_mfg are the two MACHINEs. I'm not sure if this is evidence of a MACHINE-specific dependency
<bgreen> @RP I checked tmp/stamps. I'm not seeing two do_package sigdata files, but in one build tree I see these two:
<bgreen> 1.71.0-r1.do_packagedata_setscene.f51d3b74a0d2df55aa3f5b287808ea90c7ca9a1806d8bb830d3be537ec01e544.am62xx_esr
<bgreen> 1.71.0-r1.do_packagedata_setscene.f51d3b74a0d2df55aa3f5b287808ea90c7ca9a1806d8bb830d3be537ec01e544.am62xx_esr_mfg
dmoseley has quit [Quit: ZNC 1.9.0 - https://znc.in]
dmoseley has joined #yocto
Xagen has quit [Ping timeout: 264 seconds]
dmoseley has quit [Quit: ZNC 1.9.0 - https://znc.in]
<RP> bgreen: no, that isn't. packagedata has some quirks in that it is extracted per machine. That is different from do_package
<rburton> khem: did you know openssl fails to build with qemuriscv32
<RP> rburton: is there a patch pending for that?
<rburton> yes!
<rburton> khem: stand down :)
dmoseley has joined #yocto
<rburton> like a fool i thought i'd check my pciutils rewrite on EVERY MACHINE IN CORE
* RP merges the WORKDIR change
<RP> it needs to go in then we can build upon it
<JaMa> added https://bugzilla.yoctoproject.org/show_bug.cgi?id=15490 for pseudo-python-3.13 issue - for future - don't look :)
<khem> rburton: https://patchwork.yoctoproject.org/project/oe-core/patch/20240513230528.4115348-1-raj.khem@gmail.com/
<khem> and always peek into yoe/mut contrib branch, you will always find fun stuff there
<JaMa> kinky stuff
<rburton> khem: if you have a spare ten minutes, looking at pciutils symbol versioning wrappers and telling me if they can be replaced with something that doesn't involve textrel warnings would be nice :) https://git.kernel.org/pub/scm/utils/pciutils/pciutils.git/tree/lib/internal.h#n16
<rburton> to me it feels like ARGH WHAT IS THAT ARGH is a suitable response, but i'm not sure
<khem> RP: btw, the top 4 patches on https://git.yoctoproject.org/poky-contrib/log/?h=yoe/mut are already submitted to the ml, you may want to cherry-pick them
dmoseley has quit [Quit: ZNC 1.9.0 - https://znc.in]
<khem> rburton: hopping into car atm, will take a look once at desk again later today
<rburton> khem: only if you're bored, i ripped the recipe apart and it started textrel warning at me. found where, realised i ran out of C/gcc knowledge, added an INSANE_SKIP :)
<JaMa> khem: thanks for meta-oe backports!
dmoseley has joined #yocto
zwelch has quit [Read error: Connection reset by peer]
zwelch has joined #yocto
florian__ has quit [Ping timeout: 260 seconds]
jmiehe has quit [Quit: jmiehe]
goliath has quit [Quit: SIGSEGV]
brrm has quit [Ping timeout: 255 seconds]
brrm has joined #yocto
mrpelotazo has quit [Ping timeout: 268 seconds]
mrpelotazo has joined #yocto