qschulz has quit [Remote host closed the connection]
qschulz has joined #yocto
davidinux has quit [Ping timeout: 240 seconds]
davidinux has joined #yocto
jclsn has quit [Ping timeout: 256 seconds]
jclsn has joined #yocto
geoffhp has joined #yocto
parthiban has joined #yocto
parthiban has left #yocto [#yocto]
MrCryo has joined #yocto
MrCryo has quit [Remote host closed the connection]
Jones42_ has quit [Ping timeout: 252 seconds]
rob_w has joined #yocto
enok has joined #yocto
rfuentess has joined #yocto
Kubu_work has quit [Quit: Leaving.]
michael_e has joined #yocto
michael_e has quit [Client Quit]
ederibaucourt has quit [Quit: ZNC 1.8.2 - https://znc.in]
ederibaucourt has joined #yocto
ctraven has quit [Ping timeout: 268 seconds]
ctraven has joined #yocto
goliath has joined #yocto
enok has quit [Ping timeout: 240 seconds]
zpfvo has joined #yocto
ablu has quit [Ping timeout: 255 seconds]
ablu has joined #yocto
shoragan_ is now known as shoragan
mckoan|away is now known as mckoan
c-thaler has joined #yocto
polprog has quit [Ping timeout: 272 seconds]
enok has joined #yocto
leon-anavi has joined #yocto
Kubu_work has joined #yocto
enok has quit [Quit: enok]
enok has joined #yocto
halloy4985 has joined #yocto
<halloy4985>
Hi, is there a way to revert a patch applied by cleansstate?
<JaMa>
if the cleansstate task applies patches for you, then you live in a very weird universe
<halloy4985>
Haha. I mean cleansstate doesn't do it for me. Do you have a way to revert patches applied on an EXTERNAL_SRC SRC_URI, meaning the source is local, not fetched?
halloy4985 is now known as max_eip
<JaMa>
remove them from SRC_URI if you don't want them to be applied
Jones42 has joined #yocto
<max_eip>
Bro, I want to apply the patch, but I don't want to commit it to mainline.
<max_eip>
Problem is, if I run cleansstate, I can see some modified files which cause no problem, I just don't like them. I want cleansstate to do_unpatch so no change is observed in my local code.
<JaMa>
First you wanted to revert it, now you want to apply it. If you use EXTERNAL_SRC (I assume you meant EXTERNALSRC) then you're responsible for applying the changes there, bro.
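For context, a minimal sketch of the externalsrc setup under discussion (recipe name and source path are hypothetical):

    # my-recipe.bbappend -- build from a local source tree instead of fetching;
    # with externalsrc, do_unpack/do_patch are skipped by default, so applying
    # (and reverting) patches in that tree is your own responsibility
    inherit externalsrc
    EXTERNALSRC = "/home/user/src/my-project"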
<max_eip>
I think we miscommunicated. Thanks for your time.
<LetoThe2nd>
JaMa: hey bro I want a beer
<JaMa>
LetoThe2nd: I need beer, now! :)
<LetoThe2nd>
JaMa: apply from external source?
<JaMa>
sometimes, when internal fridge source gets empty
max_eip has quit [Remote host closed the connection]
enok has quit [Ping timeout: 255 seconds]
sukbeom6 has joined #yocto
shivamurthy_ has joined #yocto
OnkelUll_ has joined #yocto
wCPO0 has joined #yocto
rsalveti_ has joined #yocto
dmoseley_ has joined #yocto
benkard has joined #yocto
fullstop_ has joined #yocto
rsalveti has quit [Read error: Connection reset by peer]
dl9pf has quit [Read error: Connection reset by peer]
wCPO has quit [Read error: Connection reset by peer]
mulk has quit [Read error: Connection reset by peer]
OnkelUlla has quit [Read error: Connection reset by peer]
sukbeom has quit [Read error: Connection reset by peer]
shivamurthy has quit [Ping timeout: 268 seconds]
patersonc has quit [Ping timeout: 268 seconds]
denix has quit [Ping timeout: 268 seconds]
fullstop has quit [Excess Flood]
patersonc_ has joined #yocto
dl9pf has joined #yocto
denix has joined #yocto
dmoseley has quit [Read error: Connection reset by peer]
asriel has quit [Ping timeout: 268 seconds]
dl9pf has joined #yocto
dl9pf has quit [Changing host]
shivamurthy_ is now known as shivamurthy
rsalveti_ is now known as rsalveti
wCPO0 is now known as wCPO
benkard is now known as mulk
sukbeom6 is now known as sukbeom
asriel has joined #yocto
fullstop_ is now known as fullstop
enok has joined #yocto
mvlad has joined #yocto
OnkelUll_ is now known as OnkelUlla
mbulut__ has joined #yocto
ardo has quit [Ping timeout: 246 seconds]
* RP
wonders if we can merge the remaining unpack change with the fatal error for S=WORKDIR now
<JaMa>
RP: I'm seeing quite a few failures in meta-virtualization and meta-security from the UNPACKDIR changes (so I'm more concerned about those than about gcc-14)
<RP>
JaMa: other layers haven't adapted to the workdir changes yet, but they kind of can't/won't until I make things error
<RP>
JaMa: at least that patch won't break core :)
<JaMa>
yeah, I'm just concerned when these big changes land so soon after each other (which will make the triage of build failures in other layers more complicated), and people might not notice the "silent breakage" like ross did in "gawk: fix readline detection"
<JaMa>
I'm glad I had gcc-14 in our world builds for a couple of months to catch all the issues caused by that before the UNPACKDIR change lands, but other people maybe didn't do this in time
Guest13 has joined #yocto
<Guest13>
rburton: regarding my issue yesterday with samba, it looks like the issue is when I select a different machine (colibri-imx6); it works fine with tegra, for example. Where can I start?
<rburton>
JaMa: i do try and look at the buildhistory for most patch series but don't do it all the time
<JaMa>
rburton: yes, buildhistory is great and thank you for looking at it
<rburton>
Guest13: maybe colibri-imx8 broke samba. With a fresh poky and meta-oe, build samba for qemuarm. If that works, then it's the BSP or your tooling or some other weird mangled build tree problem.
<JaMa>
rburton: I wouldn't have noticed this one as we have PREFERRED_VERSION_gawk = "3.1.5" (cough gplv2)
<RP>
JaMa: the configure breakage is worrying :/
<JaMa>
RP: yes, I have seen a few of those, but luckily they were fatal for what was explicitly enabled
<RP>
logically I should wait and let gcc 14 settle. My own sanity says I should merge this and move on :/
<rburton>
hm, I wonder if it's possible to extract the autoconf test list and results
* rburton
has a cunning plan
rob_w has quit [Remote host closed the connection]
rob_w has joined #yocto
ardo has joined #yocto
<Guest13>
interesting, my bbappend file was messing samba's fetch up.... am confused ;_:
rob_w has quit [Remote host closed the connection]
<JaMa>
Guest13: SRC_URI:colibri-imx6 and SRC_URI:jetson-tx2-devkit are obviously wrong, maybe you wanted to use :append:override here
<rburton>
Guest13: yeah that would be utterly breaking the fetch
<rburton>
Guest13: mainly it _does not download the sources_
<rburton>
golden rule of "why is this thing behaving weird": did you break it? verify it works without your local changes first.
<Guest13>
ahhh, i was overriding the SRC_URI
<Guest13>
instead of adding/appending the files
<JaMa>
yes, and use bitbake-getvar or bitbake -e if you don't understand the syntax at all
<rburton>
for example "bitbake-getvar -r samba SRC_URI" will show that there is no tarball in the SRC_URI entry
<JaMa>
well, use bitbake-getvar even if you understand the syntax well, because one can always forget about some nasty .bbappend or .inc file hiding somewhere
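A sketch of the fix being suggested (the patch filename is illustrative):

    # samba .bbappend -- add a machine-specific patch with :append:override
    # instead of replacing the whole SRC_URI (which loses the tarball entry)
    SRC_URI:append:colibri-imx6 = " file://my-local-fix.patch"
    # then verify the expanded value as suggested above:
    #   bitbake-getvar -r samba SRC_URI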
Guest13 has quit [Quit: Client closed]
<mcfrisk>
I always check my changes with bitbake -e. Even trivial things may not be so trivial when a lot of layers, bbappends etc are involved and a silly monkey (me) banging the keyboard with typos etc...
florian has quit [Ping timeout: 252 seconds]
florian has joined #yocto
Esben has joined #yocto
Vonter has quit [Ping timeout: 260 seconds]
lexano has joined #yocto
florian__ has joined #yocto
Vonter has joined #yocto
krissmaster has joined #yocto
florian has quit [Ping timeout: 268 seconds]
florian has joined #yocto
OnkelUlla has quit [Remote host closed the connection]
polprog has joined #yocto
jmiehe has joined #yocto
rber|res has quit [Read error: Connection reset by peer]
wooosaiiii has quit [Quit: wooosaiiii]
wooosaiiii has joined #yocto
OnkelUlla has joined #yocto
<RP>
process_possible_migrations() in runqueue is taking over 1200s to complete a migration pass :(
<RP>
looks like it is stuck in unihash queries
Jones42 has quit [Ping timeout: 260 seconds]
Jones42 has joined #yocto
enok has quit [Ping timeout: 264 seconds]
wooosaiiii has quit [Quit: wooosaiiii]
wooosaiiii has joined #yocto
<RP>
JPEW: around? I'm seeing some hashserver performance issues, was there a way to get server statistics ?
<RP>
JPEW: I've written up my findings so far and emailed
<JPEW>
I'm in and out this morning. 'bitbake-hashclient stats' (I think that's the command) might be useful
<RP>
locally it is "average": 0.0004462178160857849
<JPEW>
I wonder if the support for parallel queries would help
<JPEW>
Is the server CPU bound?
<RP>
JPEW: I don't have access to the server, we'll need michael for that. I'm at least trying to give us a way to quantify and show where the issue is
<JPEW>
Ya. I'll see if I can dig up the parallel patches and you can see if they help
<JaMa>
"average": 0.00014738774137496095 (over local unix://) I win :)
enok has joined #yocto
Kubu_work has quit [Ping timeout: 268 seconds]
Jones42 has quit [Remote host closed the connection]
Jones42 has joined #yocto
Jones42 has joined #yocto
Jones42 has quit [Changing host]
<JPEW>
RP: Parallel support is already in; is BB_HASHSERVE_MAX_PARALLEL set on the AB?
<RP>
JPEW: I'm making it more parallel with that patch?
<RP>
(which is unmerged)
<JPEW>
Ah, OK. Yes, that looks correct and should help
<RP>
JPEW: I don't think it will change much :(. What is a reasonable value for that variable? 100?
<JPEW>
You can try 100 I guess, but that seems a little high to me, at least until we can verify that the hashserve itself can actually handle 100 requests (e.g. doesn't become CPU bound, run out of TCP connections, etc.)
<JPEW>
I suppose 100 might at least tell us if it will fix the problem
Xagen has joined #yocto
<RP>
JPEW: I can put 10 into the configs, see if it helps. Setting 100 locally didn't seem to speed things up that much
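A sketch of the configuration change RP describes:

    # local.conf -- allow up to 10 parallel hash-equivalence queries
    BB_HASHSERVE_MAX_PARALLEL = "10"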
<JPEW>
It won't since the local server can't parallelize the SQL queries
<RP>
JPEW: it is probably on the wrong continent atm :(
Guest13 has joined #yocto
<JPEW>
Ya, that doesn't help. The parallel connections will help a little, since they should reduce the average connection latency
<JPEW>
But.... you'll need a lot to overcome that level of latency (probably too many)
<RP>
JPEW: crazy thought. If hashequiv of task A isn't present, is there any point in looking up hashes for tasks which depend on A ?
<Guest13>
I have a quick question: bitbake-getvar -r z-image --value IMAGE_ROOTFS outputs /home/ubuntu/z/builder/build/tmp/work/p3768_0000_p3767_0001-poky-linux/z-image/1.0-r0/rootfs; however, the rootfs is not there (it only has a "temp" folder with logs). Where can I find the rootfs? (I need to debug whether a certain service was installed correctly)
<JaMa>
Guest13: if you're using rm_work, then it was probably already removed
<JPEW>
RP: Have to think on that one; gut instinct is... yes
<RP>
JPEW: have a think. Reporting makes sense, sure but I think there might be an optimisation short cut once a leaf dependency doesn't match
<JaMa>
yeah, even #NN references trigger notifications in often unrelated PRs or issue tickets :/
<mcfrisk>
I hope spammers figure this out soon
<JaMa>
is this yocto-combined something which should replace combo-layer, or something unofficial? in which case, why did RP notice that (got spammed as well, because he is a committer there)
<JaMa>
?
<RP>
JaMa: nothing to do with me. I get all kinds of spam from github. This is probably due to my S-o-b or as the committer
<RP>
"You are receiving this because you authored the thread." - no I didn't
<JaMa>
wangmingyu84 authored and rpurdie committed on Nov 16, 2021
<JaMa>
if the thread starts with the commit... even when it's just "Submodule poky updated from 5ce6bb to aa9b00", yeah, GH is crazy
<JaMa>
I've disabled most of my notifications there, but then I sometimes miss something and these @mentions are evil indeed, I think I kept those notifications enabled
<RP>
JaMa: I "commit" enough changes I get a ton of weird stuff
<JaMa>
co-pilot should sort out which @mentions are just unintentional drive-by mentions in commit messages and where people really meant to summon someone
LocutusOfBorg has quit [Ping timeout: 240 seconds]
<qschulz>
It's one of the "issues" with Mastodon as well: if someone answers you, you'll by default be added with your @ to the replies to that answer, unless they remove you
rfuentess has quit [Remote host closed the connection]
<JaMa>
RP: FYI: I'm testing the UNPACKDIR changes as they are today in master-next and noticed that for a recipe with npm.bbclass the sources are now in a "duplicated" ${WORKDIR}/git/git, e.g. jsdoc-to-ts-native/1.0.0/git/git/README.md for a recipe with S = "${WORKDIR}/git". I'm trying to figure out if it's caused by npm/npmsw or something else in this recipe, so just FYI
enok has joined #yocto
<RP>
JaMa: was that with a clean builddir or an existing one?
<JaMa>
setting S to ${UNPACKDIR}/git for these recipes with npmsw:// is a usable workaround, right?
<JaMa>
I was testing a change to switch S to UNPACKDIR for all our recipes before, but that was causing quite a few conflicts between branches (so if needed I would keep it only for npmsw recipes for now)
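A sketch of that workaround in a recipe, per JaMa's suggestion (for the npm/npmsw recipes only):

    # point S at the new unpack location instead of ${WORKDIR}/git
    S = "${UNPACKDIR}/git"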
<JaMa>
zeddii: khem fixed some go recipes in meta-oe already, but the changes in meta-virtualization will be a bit bigger and will need you to adjust your scripts to generate e.g. recipes-containers/docker-compose/src_uri.inc
<zeddii>
Jama: yes, but I'm traveling until the weekend, so it won't be before then.
<halstead>
JPEW: the new hashserv is in North America but it's set up to be distributed to multiple continents. We only have the one end point right now. I can check on CPU.
<khem>
JaMa: I was wondering if I should merge the UNPACKDIR changes in meta-openembedded now; world builds are clean for x86_64
<khem>
one oscam recipe is showing some issue; it uses the svn fetcher, and I plan to switch to using a git mirror for it
<JaMa>
has anyone seen Armin lately? meta-oe kirkstone and scarthgap have been broken for a while and multiple people were complaining (I didn't, because I'm still grateful to him and khem that I no longer need to maintain meta-oe :))
<khem>
armpit is in the room here
<RP>
khem: he is very quiet though!
<khem>
RP: seems so :)
<JaMa>
yes, I've seen one e-mail from him on May 14 and Apr 28 before that, so very quiet lately
<halstead>
JPEW: the AWS frontend and the database aren't CPU or IO bound that I can see.
<halstead>
It might be some connection limit. I'll check.
<RP>
JPEW: it'll be some kind of cache coherency issue
<JPEW>
RP: For sure
sudip has joined #yocto
<RP>
I did worry a bit when I tweaked the code, clearly I need to look deeper
* JPEW
needs to eat lunch
* RP
also needs to find food
<RP>
JPEW: give the idea of stopping queries when one fails some thought. The more I think about it, the more I think this could help a lot
<RP>
not all queries, just queries that would use that hash as part of the next hash
<qschulz>
I added INHERIT += "buildhistory" in conf/local.conf and did two builds with two different machines
<qschulz>
I only have the buildhistory for the second machine
<qschulz>
I removed build/buildhistory but it doesn't get regenerated
<qschulz>
what am I doing wrong here?
<RP>
qschulz: different branches maybe?
<RP>
qschulz: did you set it to commit?
<qschulz>
RP: the default is commit, but I didn't want to look into this, so I disabled it after
<qschulz>
the thing is, I don't have the buildhistory directory anymore; why is it not being recreated?
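For reference, a local.conf sketch of the setup qschulz describes:

    # enable buildhistory and commit each build to the buildhistory git repo
    INHERIT += "buildhistory"
    BUILDHISTORY_COMMIT = "1"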
<halstead>
10000 requests in 36.3s. 275.5 requests per second on typhoon
<halstead>
10000 requests in 254.5s. 39.3 requests per second on valkyrie
<RP>
halstead: sounds like JPEW is right and it is latency
<RP>
qschulz: it will if you build new things?
<RP>
qschulz: it behaves differently to other bits of the code, it doesn't restore from sstate, it logs
<qschulz>
RP: invalidating the cache you mean... true, could try that
<qschulz>
RP: I'm trying to identify who's pulling a package into the rootfs; I think/hope buildhistory could help with that
<qschulz>
but not too familiar with it, so probably hitting a nail with the wood part of the hammer :)
<JaMa>
the depends files will help, and buildhistory is useful for other things as well, so it's good to get familiar with it even when there are other ways to query this
<halstead>
275.5 r/s still seems low. 500 r/s should be an easy target.
<qschulz>
JaMa: yes, but I really want the RDEPENDS part; I'm not at all interested in why the recipe is built (it's the kernel, I know why :) )
<JaMa>
qschulz: yes, buildhistory/images/qemux86_64/glibc/core-image-minimal/depends-nokernel-nolibc-noupdate-nomodules.dot shows the RDEPENDS
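A sketch of querying that graph (paths follow JaMa's example; the unfiltered depends.dot is assumed to exist alongside the filtered variants):

    # show which packages have a dependency edge on the kernel packages
    grep 'kernel-image' \
        buildhistory/images/qemux86_64/glibc/core-image-minimal/depends.dot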
<RP>
halstead: builds on typhoon are also running slowly, just probably not as slowly as valkyrie :/
<qschulz>
RP: yup, added a package, the image is rebuilt -> new buildhistory directory
<JaMa>
halstead: how can I know if wss://hashserv.yoctoproject.org/ws from here is hitting typhoon or valkyrie?
alimon has joined #yocto
<halstead>
JaMa: I'm measuring from those clusters to hashserv.yoctoproject.org. There is only one v2 hashserv right now and it's in North America
<JaMa>
I see 34.221.58.120 and 10000 requests in 240.7s. 41.6 requests per second
<halstead>
I'll get an EU copy up once we solve the performance issue where latency isn't a concern.
<JaMa>
aha thanks, I thought there were 2 hashservs, not 2 clusters accessing the same hashserv, sorry for noise
florian__ has quit [Ping timeout: 264 seconds]
<qschulz>
JaMa: seems like my kernel-module-* packages are pulling in kernel-<version> which pulls in kernel-image which pulls in kernel-image-fitImage
<qschulz>
but it doesn't make sense to me that kernel-module-* would RDEPEND on the kernel
<JaMa>
doesn't the kernel package provide e.g. modules.dep files?
<qschulz>
JaMa: it provides modules.builtin, modules.builtin.modinfo and modules.order in /lib/modules/*/
<qschulz>
JaMa: that was a very nice hint, thank you
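Another way to confirm such a chain, as a sketch (the package name is a placeholder):

    # print the packaged runtime dependencies of one module package
    oe-pkgdata-util read-value RDEPENDS kernel-module-foo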
<qschulz>
tlwoerner: did you test your /boot merging with a device that doesn't have kernel-modules in MACHINE_EXTRA_RRECOMMENDS?
tgamblin has quit [Ping timeout: 256 seconds]
tgamblin has joined #yocto
florian__ has joined #yocto
florian_kc has joined #yocto
florian__ has quit [Read error: Connection reset by peer]
florian__ has joined #yocto
florian_kc has quit [Ping timeout: 256 seconds]
Guest13 has quit [Ping timeout: 250 seconds]
<qschulz>
tlwoerner: I also assume we should make the MACHINE_ESSENTIAL_EXTRA_RDEPENDS depend on UBOOT_EXTLINUX being 1, otherwise we add packages to the image that the user doesn't need
enok has joined #yocto
dkl has quit [Quit: %quit%]
dkl has joined #yocto
dkl has quit [Remote host closed the connection]
dkl has joined #yocto
bryan has joined #yocto
yudjinn has joined #yocto
bryan has quit [Client Quit]
bgreen has joined #yocto
bgreen is now known as bryan
bryan is now known as bgreen
ptsneves has joined #yocto
Starfoxxes has quit [Quit: Leaving]
bgreen has quit [Changing host]
bgreen has joined #yocto
Saur_Home has quit [Quit: Client closed]
Saur_Home has joined #yocto
florian__ has quit [Ping timeout: 256 seconds]
jpuhlman- has quit [Read error: Connection reset by peer]
jpuhlman has joined #yocto
florian__ has joined #yocto
<bgreen>
Is this a good place to ask about issues with bitbake's multiconfig feature?
<bgreen>
I'll start with a basic question about multiconfig: if I have a multiconfig configuration that only changes MACHINE, is it necessary to also set a new TMPDIR? I had hoped not, but my experience is suggesting otherwise.
Jones42 has quit [Ping timeout: 260 seconds]
<RP>
bgreen: it depends how compatible the MACHINEs are with each other and how well the BSPs are written
<RP>
in theory it should work but there are ways things could break
<bgreen>
In my use case, I am trying to use multiconfig to build an "mfg" image that is different from a regular image mainly in having a differently-configured kernel. So the mfg multiconfig does MACHINE:append = "-mfg", and the machine config for mfg sets a different value for PREFERRED_PROVIDER_virtual/kernel
<bgreen>
I will for example run 'bitbake main-image mc:mfg:mfg-image' to build the regular and mfg image at the same time
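A sketch of that setup using the standard multiconfig layout (file names assumed):

    # conf/local.conf -- enable the extra configuration
    BBMULTICONFIG = "mfg"

    # conf/multiconfig/mfg.conf -- as bgreen describes
    MACHINE:append = "-mfg"

    # build both images in one invocation:
    #   bitbake main-image mc:mfg:mfg-image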
<RP>
it would probably work if the MACHINES were two different names. Same name for MACHINE might make it tricky
<bgreen>
but something odd happens. sometimes, a task for a particular recipe will get executed twice, concurrently - once for each config.
<bgreen>
but the MACHINES are two different names. ie. am62xx and am62xx-mfg
<RP>
are they two different dirs under work ?
<bgreen>
no, they aren't. the recipe has a default package arch, so MACHINE_ARCH which is common
<RP>
which would be a problem in this case
<RP>
that is why it is breaking
<bgreen>
how is that a problem in this case? I was thinking I wouldn't need a separate TMPDIR.
<RP>
you've told it to run the builds concurrently. Usually two MACHINES would have different MACHINE_ARCH so it would work. In this case they don't
<JaMa>
any idea what would be causing pseudo issues in a clean TMPDIR with today's master-next? path mismatch [2 links]: ino 29766796 db '/OE/build/oe-core/tmp-glibc/work/qemux86_64-oe-linux/base-files/3.0.14/packages-split/base-files/etc/issue' req '/OE/build/oe-core/tmp-glibc/work/qemux86_64-oe-linux/base-files/3.0.14/sstate-build-package/package/etc/issue'. multiple recipes, but always between package and sstate-build-package
<RP>
JaMa: with the recent PSEUDO_IGNORE_PATHS bits? I've not tested those yet
<bgreen>
why would two MACHINEs have different MACHINE_ARCH? can't you have two machines with the same underlying aarch64 architecture?
<bgreen>
for machine specific recipes of course, there are build directories for each.
<RP>
bgreen: MACHINE_ARCH packages are machine-specific packages, not architecture-specific packages
<JaMa>
RP: I don't have "base/bitbake.conf: Move S/B to PSEUDO_IGNORE_PATHS unconditionally" yet
<bgreen>
you are right, I misspoke. There are separate directories for each machine arch
Haxxa has quit [Quit: Haxxa flies away.]
<RP>
JaMa: ok, that is probably good. I don't know why you're seeing that though, it worked in all the tests I ran
<JaMa>
yes, it's strange; it worked in a different build dir before. Now I've switched to a "smaller" build to reproduce the npmsw issue in isolation with a public recipe and it started to fail everywhere, but I will investigate
<RP>
bgreen: I don't know the failure you saw and I'd expect tasks to run in parallel if they're in separate work directories. I can't really comment further
<bgreen>
so, the packages which have PACKAGE_ARCH set to MACHINE_ARCH build in separate directories.
<bgreen>
but many recipes do not set PACKAGE_ARCH to MACHINE_ARCH.
Haxxa has joined #yocto
<RP>
bgreen: but those are identical between the machines ?
<RP>
in theory they should be. If they're not, that would be a problem
mvlad has quit [Remote host closed the connection]
<RP>
bgreen: the yocto-check-layer tests would check some of these things
<bgreen>
most recipes don't have PACKAGE_ARCH set to MACHINE_ARCH. The default for PACKAGE_ARCH is TUNE_PKGARCH
<bgreen>
for MACHINE=am62xx, that's PACKAGE_ARCH=aarch64 by default. The majority of packages build under the aarch64 dir
<bgreen>
and boost shouldn't be built once for MACHINE=am62xx and again for MACHINE=am62xx-mfg. And it's not, but the do_package task is getting invoked twice, concurrently
yudjinn has quit [Ping timeout: 268 seconds]
<RP>
bgreen: what it means is that something in boost is machine specific so either you fix that or mark it machine specific
<RP>
it shouldn't be machine specific. The yocto-check-layer tests would help track down which recipes have problems like this
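Marking a recipe machine-specific, per RP's suggestion, is a one-line change:

    # force per-machine packaging (and a per-machine workdir) for this recipe
    PACKAGE_ARCH = "${MACHINE_ARCH}"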
<bgreen>
I've looked and I don't see anything. But I'll check again.
<RP>
bgreen: it could be a dependency of boost?
<bgreen>
I'll take a look at yocto-check-layer.
<bgreen>
It's rather hard to tell what might be causing this.
<RP>
yocto-check-layer was designed to try and show where layers have issues like this...
<RP>
that and some of the oe-selftest sstatetests
<JaMa>
or you can try scripts/sstate-diff-machines.sh, I'm still using this to detect issues like this
<RP>
that is probably the better one to try
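A usage sketch (machine names from bgreen's setup; exact options per the script's --help):

    # compare sstate signatures between the two machines for one target
    scripts/sstate-diff-machines.sh --machines="am62xx am62xx-mfg" \
        --targets=aosboost --tmpdir=tmp --analyze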
<bgreen>
in the log I get these two lines in close proximity, and then a failure ends up happening:
<bgreen>
NOTE: Running task 6926 of 8368 (/opt/srv/jenkins/root/workspace/AM62x/test_build/oe/meta-aos/recipes-support/boost/aosboost_1.71.0.bb:do_package)
<bgreen>
NOTE: Running task 7395 of 8368 (mc:mfg:/opt/srv/jenkins/root/workspace/AM62x/test_build/oe/meta-aos/recipes-support/boost/aosboost_1.71.0.bb:do_package)
<bgreen>
thanks, I'll take a look at those tools to see if they can help track it down.
<JaMa>
khem: do you have a clean world build with meta-oe/master-next on top of oe-core/master-next? I've started to test UNPACKDIR from oe-core/master-next a bit more today and I'm still seeing e.g. the nodejs-oe-cache-native failure, which doesn't seem to be fixed in meta-oe/master-next; I will send it once I get my build tests usable again
<RP>
bgreen: in tmp/stamps/xxx there will be two do_package siginfo files. bitbake-diffsigs might show an interesting difference
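A sketch of that comparison (bitbake-diffsigs can locate the latest signature files itself; with multiconfig the exact stamp paths may need to be passed explicitly):

    # compare the two most recent do_package signatures for the recipe
    bitbake-diffsigs -t aosboost do_package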
yudjinn has joined #yocto
<JaMa>
RP: fwiw: the pseudo issue is reproducible in the other build directory as well if I force a rebuild of e.g. base-files; I will bisect what's causing that
<JaMa>
lucky me, base-files doesn't take long to rebuild :)
<khem>
JaMa: yes, however it's with the yoe distro, so there might be some packages which are left out
enok has quit [Ping timeout: 240 seconds]
<khem>
JaMa: noticed a few more this morning; always pick the latest master-next
<khem>
and also latest master-next of oe-core
<RP>
JaMa: FWIW I tried a bitbake base-files -C unpack a few times and I didn't see any errors
<khem>
rburton: hopping into car atm, will take a look once at desk again later today
<rburton>
khem: only if you're bored. I ripped the recipe apart and it started throwing textrel warnings at me. Found where, realised I ran out of C/gcc knowledge, added an INSANE_SKIP :)
<JaMa>
khem: thanks for meta-oe backports!
dmoseley has joined #yocto
zwelch has quit [Read error: Connection reset by peer]