ChanServ changed the topic of #yocto to: Welcome to the Yocto Project | Learn more: https://www.yoctoproject.org | Join us or Speak at Yocto Project Summit (2021.11) Nov 30 - Dec 2, more: https://yoctoproject.org/summit | Join the community: https://www.yoctoproject.org/community | IRC logs available at https://www.yoctoproject.org/irc/ | Having difficulty on the list or with someone on the list, contact YP community mgr ndec
florian has joined #yocto
sgw has joined #yocto
florian has quit [Ping timeout: 252 seconds]
geoffhp has joined #yocto
starblue has quit [Ping timeout: 250 seconds]
starblue has joined #yocto
sakoman has quit [Quit: Leaving.]
camus has quit [Remote host closed the connection]
camus has joined #yocto
jclsn5 has joined #yocto
jclsn has quit [Ping timeout: 240 seconds]
pgowda_ has joined #yocto
camus has quit [Quit: camus]
amitk has joined #yocto
camus has joined #yocto
alessioigor has joined #yocto
jclsn5 is now known as jclsn
<jclsn> Morning
<jclsn> Shouldn't the kernel binary be identical when INHERIT += " reproducible-build" is set?
<jclsn> I just built it two times and the md5sum differs
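A rough sketch of one way to compare two kernel builds locally, with illustrative paths (the deployed image name depends on the machine and kernel image type; diffoscope shows what actually differs, unlike md5sum):
    bitbake virtual/kernel
    cp tmp/deploy/images/<machine>/Image Image.first
    bitbake -c cleansstate virtual/kernel && bitbake virtual/kernel
    diffoscope Image.first tmp/deploy/images/<machine>/Image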
frieder has joined #yocto
alessioigor has quit [Quit: alessioigor]
rfuentess has joined #yocto
mckoan|away is now known as mckoan
<mckoan> good morning
jclsn is now known as jclsn_
<RP> jclsn_: can depend on a lot of things including which kernel it is :/
jclsn_ is now known as jclsn
<jclsn> RP: Hmm it is the linux-fslc-imx kernel
JaMa has quit [Quit: reboot]
<RP> jclsn: We made sure linux-yocto does the right things (as far as I know) but I can't comment on that kernel :/
<RP> rburton: the series in master-next for crypto in core is close but two issues. Arm ptest failure (missing import tomli) and reproducibility failure
<jclsn> RP: Are you saying you recommend using another kernel?
zavorka has quit [Remote host closed the connection]
<RP> jclsn: no, I suspect you need that kernel for your hardware. I'm just saying that the people who made it probably haven't paid attention to reproducibility :(
<jclsn> otavio: Is that true?
<jclsn> RP: I don't think so actually. More probable that my patches are the cause
* landgraf is back
<RP> landgraf: welcome back
<RP> jclsn: I don't know. I do know we had to be careful about the configuration and the way linux-yocto was built and I don't know if others have spent time on that
<RP> jclsn: reproducibility isn't top of everyone's priority list even if we have core working well
<jclsn> Hmm okay
Schlumpf has joined #yocto
leon-anavi has joined #yocto
<RP> jclsn: you might be right and it may be your patches but I'm not sure...
Guest4 has joined #yocto
tnovotny has joined #yocto
<Guest4> Is the following link meant for custom hardware? https://www.yoctoproject.org/docs/latest/bsp-guide/bsp-guide.html
<RP> rburton: have the reproducibility fix, just need the ptest one...
lucaceresoli has joined #yocto
davidinux has quit [Quit: WeeChat 2.8]
davidinux has joined #yocto
xmn has quit [Quit: ZZZzzz…]
mvlad has joined #yocto
<jclsn> What do these messages mean?
<jclsn> return visitor(node)
<jclsn> WARNING: linux-fslc-imx-5.10.98+git999-r0 do_fetch: /usr/lib/python3.8/ast.py:371: PendingDeprecationWarning: visit_Str is deprecated; add visit_Constant
<jclsn> WARNING: linux-fslc-imx-5.10.98+git999-r0 do_populate_lic: /usr/lib/python3.8/ast.py:371: PendingDeprecationWarning: visit_Str is deprecated; add visit_Constant
<jclsn> return visitor(node)
<jclsn> WARNING: linux-fslc-imx-5.10.98+git999-r0 do_deploy_source_date_epoch: /usr/lib/python3.8/ast.py:371: PendingDeprecationWarning: visit_Str is deprecated; add visit_Constant
<jclsn> return visitor(node)
<jclsn> Is my Python deprecated?
<Guest4> jclsn What are you trying to accomplish and what did you do to reach those warnings?
<jclsn> They appear when I build the kernel
<jclsn> Every time
<Saur[m]> jclsn: There are a couple of commits in OE-Core fixing problems like that. Search the Git log for "visit_Constant".
<jclsn> I think I also made some progress regarding this weird issue with the kernel
<jclsn> When I checkout linux-fslc-imx to a workspace with devtool modify, everything works fine. When I devtool reset and build normally, the kernel panic occurs. How can this be?
<jclsn> Maybe checking out silently fails somehow
<jclsn> Saur: Will have a look
tgamblin has quit [Ping timeout: 250 seconds]
tgamblin_ has joined #yocto
<jclsn> git log | grep -b3 visit_Constant returns nothing
<jclsn> In meta-oe, if that is correct
<Saur[m]> No, that is not OE-Core. Search in meta (or poky)
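If it helps, a search along these lines from a poky checkout should turn up the commits in question (the exact arguments are only a suggestion):
    cd poky
    git log --oneline --grep=visit_Constant -- meta/ bitbake/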
T_UNIX[m] has joined #yocto
<RP> jclsn: which version of the project is that with?
<jclsn> RP: honister
<RP> jclsn: it means deprecated API in python is being used but I thought we'd fixed that
<RP> jclsn: those are what Saur[m] was referring to
Guest4 has quit [Ping timeout: 256 seconds]
florian has joined #yocto
vladest has joined #yocto
JaMa has joined #yocto
<JaMa> another interesting behavior was that 8 bb-threads happened to use more swap than 64 bb-threads, but I guess that's just unlucky scheduling when the heavier do_compile tasks had their memory-usage peak at the same time in https://raw.githubusercontent.com/shr-project/test-oe-build-time/081926d88779bbcc3fe348a8f7d56c94e2e1b46e/threadripper-3970x-128gb-gentoo-kirkstone-2022-03/5-build-8-bb-threads.png compared with
<RP> JaMa: interesting. I wish I had more time to look at performance :/
* RP has a patch which cuts a couple of million getVar calls out but not sure whether it massively helps or not
<Saur[m]> A couple of million getVar calls less sounds like it could have an impact...
<RP> Saur[m]: they hit the expand cache so not as much as you'd think
<Saur[m]> Ah, ok.
<RP> Saur[m]: I have another which knocks about 25% of the getVarFlag calls out but I think it breaks something, need to try and track down what
<RP> I really need to come up with a better timing metric too
<Saur[m]> Sounds like interesting patches.
<LetoThe2nd> yo dudX
<RP> Saur[m]: two different experiments in https://git.yoctoproject.org/poky-contrib/commit/?h=rpurdie/t222&id=e728323330c7e5b26d61775c53fabfc1e3088566
<RP> Doing something with __depends might also help if we had somewhere other than the datastore to poke it
flodo has joined #yocto
<flodo> hey there guys! I am new to yocto and I am facing some problems I can't solve. I hope someone can help me a bit.
<flodo> I am trying to boot a core image via pxe and syslinux, I was able to build the bzImage and now I am serving it via pxe
<flodo> after loading the bzImage the device is telling me that it is not able to mount the root fs on an unknown block (0,0)
<flodo> I guess I would therefore also need an initramfs?
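If an initramfs does turn out to be the answer (the unknown-block(0,0) panic can also simply mean the rootfs driver or root= parameter is wrong), one hedged way to bundle one into the kernel image, using the stock image name as a placeholder:
    INITRAMFS_IMAGE = "core-image-minimal-initramfs"
    INITRAMFS_IMAGE_BUNDLE = "1"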
<Saur[m]> RP: You can move the if not srcflags test outside and skip the entire for loop if there are no flags.
ilunev has joined #yocto
<RP> Saur[m]: least of the problems in that code
JaMa has quit [Quit: reboot]
<rburton> RP: got a link for the outstanding crypto failure?
JaMa has joined #yocto
<rburton> armpit/khem: can you fix meta-oe master now so it builds against oe-core master?
ilunev has quit [Quit: Textual IRC Client: www.textualapp.com]
kriive has joined #yocto
starblue has quit [Ping timeout: 250 seconds]
kriive has quit [Remote host closed the connection]
starblue has joined #yocto
kriive has joined #yocto
<RP> rburton: I think the main issue now is the 2GB min memory requirement and the test runtime
<RP> rburton: arm shows https://autobuilder.yoctoproject.org/typhoon/#/builders/82/builds/2969/steps/12/logs/stdio and IA shows no failure, I think because there isn't enough free memory
<RP> rburton: I have a patch locally for the missing tomli dependency
<RP> rburton: 20 mins so far for the crypto test on arm :/
creich has quit [Quit: Leaving]
<RP> (since I don't have KVM)
<rburton> amazing news https://gitlab.freedesktop.org/polkit/polkit/-/commit/c7fc4e1b61f0fd82fc697c19c604af7e9fb291a2 polkit can use duktape instead of mozjs
<rburton> RP: urgh slow
<RP> rburton: 32 mins, still going
zeddii has quit [Excess Flood]
<RP> rburton: nice, finally :)
<rburton> RP: maintainers were understandably trying to not introduce huge regressions but yeah took a while
<RP> rburton: definitely understandable
zeddii has joined #yocto
* RP wonders why core-image-ptest-all would have less than 2GB free memory when it has 4GB :/
<RP> X?
<rburton> doubt it
<RP> it is x86 specific
<rburton> does it actually have 4gb though?
<RP> qemu is being called with -m 4096
<rburton> ok, fair
<RP> [ 0.027261] Kernel command line: root=/dev/vda rw mem=4096M ip=192.168.7.2::192.168.7.1:255.255.255.0::eth0:off:8.8.8.8 console=ttyS0 console=ttyS1 oprofile.timer=1 tsc=reliable no_timer_check rcupdate.rcu_expedited=1 printk.time=1
<RP> [ 0.036338] Memory: 2028184K/2096600K available (16396K kernel code, 2134K rwdata, 3584K rodata, 1692K init, 2156K bss, 68156K reserved, 0K cma-reserved)
<RP> does anyone know why x86-64 would do that? :/
<rburton> well q35 should support up to 4gb
zkrx has quit [Ping timeout: 240 seconds]
<rburton> q35 does do games with memory though
<RP> rburton: the ptest had one failure out of 2855 tests :(
<RP> openssl memory leak
<rburton> if you can share info about the leak I can forward to someone who might be able to fix it
<RP> rburton: log mailed
<Perceval[m]> Hello all :) I'm trying to disable all agetty services. I added 'PACKAGECONFIG_remove = "serial-getty-generator"' in systemd_%.bbappend but the services are still there. How would you do it?
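One thing worth double-checking on honister and newer branches is the override syntax; a hedged sketch of the bbappend, assuming serial-getty-generator really is a PACKAGECONFIG option of the systemd recipe in the release being used:
    # recipes-core/systemd/systemd_%.bbappend
    PACKAGECONFIG:remove = "serial-getty-generator"
Depending on the goal, clearing SERIAL_CONSOLES in the machine configuration may also be relevant, since that is what drives which serial gettys get enabled.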
<rburton> RP: colleague looking quickly now. i suggest we just disable that test for now though.
<rburton> it's a grand total of 300 bytes leaked
<RP> rburton: can you sort a patch to disable?
<rburton> yeah
<RP> rburton: I updated master-next to all the pieces I currently know we need
Guest4 has joined #yocto
<rburton> RP: do you remember why we have USRBINPATH?
<rburton> it's always set to bindir
<rburton> ah no, it's not
<rburton> gotcha
<RP> rburton: Sounds like you have an answer. I'm sure there was a reason...
<rburton> because it's where the interpreters will live
<rburton> so native/nativesdk is always /usr/bin, even if the target prefix is /foobar
<RP> ah, yes
<rburton> was the leak only on arm?
<rburton> RP ^
<rburton> pytest lets you do programmatic xfails, so we can make it expected-to-fail on arm hosts
<rburton> self-reporting when it works again too that way
<RP> rburton: it didn't run on x86 so I don't know yet
<RP> rburton: actually, results just in, seems to work on x86
zkrx has joined #yocto
<RP> er, no, didn't run at all
<rburton> i wonder why it didn't run at all
<RP> rburton: lack of memory. I need to debug that
<RP> rburton: I do at least have a local build to poke at now
<rburton> one of the leaks is fixed upstream already
<RP> rburton: setting to 8GB gives me 6GB so we should be able to check x86 now
* RP is wondering where the missing 2GB is going
<RP> rburton: fails on x86 too
<rburton> ok
<rburton> can i send you a commit to test?
<rburton> likely faster than me building
<RP> rburton: yes
<RP> rburton: ditching the kernel mem= parameter fixes the memory
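For reference, the amount of memory runqemu gives a guest normally comes from QB_MEM; an illustrative override (the value here is arbitrary):
    # conf/machine/qemux86-64.conf, an image recipe, or local.conf (illustrative)
    QB_MEM = "-m 8192"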
* RP grabbing some food, back shortly
<rburton> RP: poky-contrib:ross/ssl
<jclsn> RP: Yes, this is in my log. I might have to add that this message only occurs when I have checked out a workspace for linux-fslc-imx. I was wrong about it happening always.
<jclsn> It is also weird that my kernel issue is resolved when I check out a workspace
<jclsn> I mean there shouldn't be a difference if I don't modify the code
<qschulz> jclsn: there are some gotchas with files in WORKDIR and devtool
<qschulz> jclsn: I wouldn't be too surprised there are some issues related to the defconfig
<jclsn> qschulz: gotchas?
tgamblin_ has quit [Quit: Leaving]
<qschulz> jclsn: the relative paths between S and WORKDIR aren't the same in devtool and in "normal" bitbake execution
<jclsn> Ah okay
<qschulz> also, the defconfig is sometimes only installed if there isn't already a defconfig/.config in WORKDIR
tgamblin has joined #yocto
<jclsn> I am using my defconfigs from the meta-layer
<qschulz> so that's your next step I guess, doing a diff between a normal bitbake and a devtool'ed run and see what are the differences in the source files
<jclsn> like meta-custom/recipes-kernel/linux/linux-fslc-imx/mx8/defconfig
<jclsn> Okay thanks I will investigate
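A hedged sketch of that comparison (paths are illustrative and depend on MACHINE and the recipe):
    # diff the kernel tree a plain bitbake build used against the devtool workspace
    diff -ur tmp/work-shared/<machine>/kernel-source path/to/workspace/sources/linux-fslc-imx
    # the generated .config from each build directory is worth the same treatment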
<jclsn> At least I am getting further
<jclsn> Still weird that my colleagues can build without issues
<jclsn> Where does bitbake checkout the repos btw?
<jclsn> Searched for them under tmp
<qschulz> jclsn: in S
<qschulz> which is typically in WORKDIR
<qschulz> in/under
<jclsn> so tmp/work?
<qschulz> if you're looking for tarballs or git repositories before checkout, they are in DL_DIR, which is named "downloads"
<qschulz> jclsn: that and in subdirectories specific to the architecture and name of the recipe but yes
<qschulz> jclsn: for the kernel, though, it is tmp/work-shared/
<jclsn> ah yeah
<jclsn> fzf is my friend
<jclsn> and you :)
<jclsn> tmp/work-shared/machine/kernel-source
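Rather than hunting through tmp by hand, bitbake can also be asked directly; a small example:
    bitbake -e virtual/kernel | grep -E '^(S|B|WORKDIR)='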
xmn has joined #yocto
<jclsn> Maybe building in zsh is not a good idea
<jclsn> Will see if bash makes a difference tomorrow
<qschulz> jclsn: do the diff of the sources
<jclsn> Have to go now
<jclsn> qschulz: I will
<RP> rburton: its rebuilding quite a bit...
<jclsn> But even if I find them, that won't explain why my colleagues can build successfully
jmiehe has joined #yocto
<qschulz> jclsn: you're trying to understand something before you even know what it is. Figure out what the issue is first and then you can start investigating why it happened
manuel1985 has joined #yocto
Guest4 has quit [Ping timeout: 256 seconds]
wooosaiiii has joined #yocto
florian_kc has joined #yocto
<rburton> RP: maybe should have made that patch class-target for testing
<RP> rburton: failed :/
<RP> rburton: error looks the same
<rburton> can you share the log again? at least one of the error should have gone
<RP> rburton: looks the same to me. This one is x86
<RP> rburton: sent
<rburton> i'll make it xfail for now :)
SSmoogen is now known as Ebeneezer_Smooge
sakoman has joined #yocto
<vvn> hi all -- so basically python applications implementing a setup.py must now switch to wheels?
<rburton> vvn: must, no. ideally, yes. that's a change from upstream python, not from us.
<rburton> if you want to use setup.py build/install then inherit setuptools3_legacy
<vvn> rburton: I know it's not from you guys ;-) If the change is trivial I'll switch to wheels
<rburton> vvn: leave the inherit as setuptools3 and see if the packaging is the same
<rburton> badly behaved setup.py will suddenly start putting files in the wrong place with wheels
<rburton> basically, if the setup used absolute paths in data_files, you can't use wheels
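A hedged sketch of the two choices being described (recipe otherwise unchanged):
    # default path: stay on the wheel-based class and verify the packaged files
    inherit setuptools3
    # fallback if setup.py really needs the old build/install behaviour
    # inherit setuptools3_legacy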
<vvn> rburton: the packages fails with poetry_core.bbclass:1: Could not inherit file classes/pip_install_wheel.bbclass
<rburton> yes, there's a patch for meta-oe which didn't get merged yet
<rburton> tell armpit/khem
<rburton> its on the list
<rburton> (oe-devel)
<rburton> RP: ross/ssl has hopefully a workaround for pycrypto
<rburton> i'm hoping it was just that one test that failed anyway
<vvn> rburton: do you have a link? I can add a tested-by if that matters
<vvn> let me look for it
<rburton> i'm sure you can find the patchwork or lore link if you want to apply it
<vvn> I think I know its author ;)
<vvn> not that easy to find the link
<rburton> erm no not that one
<rburton> that one
<rburton> lore is great for grabbing the actual patches, https://lore.kernel.org/openembedded-devel/ for oe-devel
creich has joined #yocto
davidinux has quit [Ping timeout: 252 seconds]
davidinux has joined #yocto
AKN has joined #yocto
manuel1985 has quit [Quit: Leaving]
<vvn> rburton: build fixed with setuptools3_legacy and your patch
<RP> rburton: do I want the ssl leak fixes too?
<rburton> if they don't help, don't bother i guess
<RP> rburton: I couldn't seem to see any difference so either they don't or I messed up :/
<rburton> i'll kick a build and see
<moto-timo> More gremlins in py crypto?
<moto-timo> Sigh
flodo has quit [Quit: Client closed]
<RP> moto-timo: oh yes :/
<moto-timo> :/
amitk has quit [Ping timeout: 250 seconds]
<RP> moto-timo: reproducibility failed, qemux86 testing memory limits were wrong, ptest on python3-crypto fails, there was a typo in the ptest patch and so on
<RP> moto-timo: I'll squash master-next a bit soon but it currently gives a bit of the story
<RP> rburton: that patch doesn't work :/
<RP> rburton: I've mailed it to you
AKN has quit [Ping timeout: 250 seconds]
<moto-timo> RP: I’ve been trying to follow along
fleg has quit [Remote host closed the connection]
<rburton> RP: bah
raghavgururajan has quit [Remote host closed the connection]
fleg has joined #yocto
<rburton> RP: oh ffs, sorry
raghavgururajan has joined #yocto
<RP> rburton: I at least clearly did apply your patch :)
<rburton> repushed, it needed reason="..."
<RP> rburton: right :)
chep has quit [Quit: ZNC 1.8.2 - https://znc.in]
chep has joined #yocto
expert[m] has quit [Quit: You have been kicked for being idle]
pgowda_ has quit [Quit: Connection closed for inactivity]
frieder has quit [Remote host closed the connection]
tnovotny has quit [Quit: Leaving]
frieder has joined #yocto
frieder has quit [Remote host closed the connection]
AKN has joined #yocto
AKN has quit [Client Quit]
AKN has joined #yocto
<RP> rburton: good news. that works :)
<rburton> phew
<rburton> sorry about that
<RP> rburton: we're getting there
* RP notes the installer patch doesn't get on with master-next but is easily fixed
AKN has quit [Read error: Connection reset by peer]
florian_kc has quit [Ping timeout: 260 seconds]
AKN has joined #yocto
Schlumpf has quit [Quit: Client closed]
florian_kc has joined #yocto
florian has quit [Quit: Ex-Chat]
ecdhe_ has quit [Read error: Connection reset by peer]
ecdhe has joined #yocto
florian_kc has quit [Ping timeout: 240 seconds]
rfuentess has quit [Remote host closed the connection]
<kanavin> RP: looks like doing version updates during freeze isn't causing too much trouble?
<kanavin> RP: should I do another batch after AUH runs tomorrow?
<RP> kanavin: so far so good. I'm having headaches with other issues
<RP> kanavin: I'm not against another batch for evaluation
<kanavin> RP: right, ask if you think I can help somewhere
<rfs613> for a CVE fix affecting multiple branches, should I submit multiple patches, or is there a tag etc to say "apply to these branches" ?
<rburton> rfs613: please send multiple patches as it's unlikely the same patch will apply to all the branches anyway
<rburton> different maintainers for each branch, you see
<rfs613> rburton: I was afraid you'd say that... but yeah I understand :-)
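A hedged example of how that usually looks in practice (the layer, branch, commit, and list address are placeholders):
    git format-patch -1 --subject-prefix="<layer>][<branch>][PATCH" <commit>
    git send-email --to <layer-mailing-list> *.patch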
<RP> kanavin: thanks, I don't quite know what I'm doing anymore now, let alone where I need help! :)
<kanavin> RP: maybe your neurons are asking for afk so they can rearrange themselves? :)
mckoan is now known as mckoan|away
<RP> kanavin: I think they're dreading trying to make sense of all the things in -next into a coherent patch series
Herrie has quit [Ping timeout: 240 seconds]
lucaceresoli has quit [Quit: Leaving]
<RP> Great, we now see the same failure on x86 and arm :)
florian_kc has joined #yocto
florian_kc has quit [Ping timeout: 240 seconds]
<vvn> armpit: thank you
Herrie has joined #yocto
AKN has quit [Read error: Connection reset by peer]
Herrie|2 has joined #yocto
Herrie has quit [Ping timeout: 256 seconds]
Herrie|2 is now known as Herrie
roussinm has joined #yocto
<rburton> RP: you're going to squash bits of master-next before merging right :)
goliath has quit [Quit: SIGSEGV]
<rfs613> newbie question about layers... I've added a layer and its dependencies... but bitbake doesn't seem to want to see any of the recipes contained in the new layer. How to debug what I did wrong?
<rfs613> (this is actually for the CVE fix on multiple branches, to check that it actually applies/builds)
Guma has joined #yocto
goliath has joined #yocto
<smurray> rfs613: does "bitbake-layers show-layers" look like you expect?
<rfs613> smurray: yes, it appears in there
<rfs613> smurray: and
<smurray> rfs613: that does seem strange, then
<rfs613> "bitbake-layers layerindex-show-depends meta-security" also seems happy (i'm adding meta-security)
<smurray> you'd get an error if an explicitly required layer wasn't present
<smurray> i.e. via LAYERDEPS
cambrian_invader has joined #yocto
<rfs613> yep, when I first tried to add it, it wanted several other layers, meta-perl and meta-python for example, so I added those as well.
<smurray> perhaps look through "bitbake-layers show-recipes" to see if any of the recipes are being seen, and look for a stray BBMASK definition in your conf, maybe
<cambrian_invader> does anyone know where to send patches for https://source.codeaurora.org/external/qoriq/qoriq-components/meta-qoriq/ ?
<rfs613> smurray: oh, I got it now... meta-security has several meta-layers within, I only added the top-most one.
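For the record, a hedged sketch of adding the nested layers as well (sub-layer name illustrative, the meta-security repository carries several):
    bitbake-layers add-layer ../meta-security
    bitbake-layers add-layer ../meta-security/meta-tpm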
dev1990 has joined #yocto
<Guma> I was wondering if anyone knows how to fix this issue I encountered. I am connected to my remote machine (ubuntu) from my macOS box over ssh. When connected I try to run minicom to connect to the serial port (yocto embedded device). Everything works fine but I do not see the cursor on the command line or in vim. TERM=console.
<Guma> Same thing happens when I am working from my ubuntu machine so it does not seem it is related to Mac. But when using putty on windows everything works
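A common workaround worth trying, on the assumption the missing cursor comes from TERM=console (device and baud rate are illustrative):
    export TERM=xterm-256color
    minicom -D /dev/ttyUSB0 -b 115200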
<smurray> rfs613: whew
<smurray> cambrian_invader: that's a good question, probably will have to ask your NXP support contact if you have one. If it's something where there's still qoriq bits in meta-freescale, like the kernel, you could maybe switch to using that version and working with the community folks
<cambrian_invader> smurray: I have some patches for meta-qoriq itself; specifically enabling some dynamic layers
<cambrian_invader> I'll try seeing if I can get an answer from NXP support
<smurray> cambrian_invader: right, but NXP do not engage with the community at all for any of their BSP stuff, you'll note the README.md in that layer has no contact information
<cambrian_invader> well, I asked on the meta-freescale list
<cambrian_invader> which seems active
<cambrian_invader> but no response as of yet
<smurray> cambrian_invader: stuff like meta-qoriq and meta-imx are developed by NXP and thrown over the wall, they don't involve anyone from the community
<cambrian_invader> oh, is meta-freescale something different?
florian_kc has joined #yocto
<cambrian_invader> I can tell that meta-qoriq is "over the wall"
<smurray> cambrian_invader: AIUI some people working on meta-freescale used to be under contract with Freescale/NXP, but that ended a couple of years ago and it's pretty much just community maintained now
<cambrian_invader> huh, interesting
<cambrian_invader> I saw some nxp emails on the mailing list
<cambrian_invader> so I assumed it was official
<smurray> cambrian_invader: they do push some things, but their party-line is to use meta-imx which adds a bunch of stuff on top AFAIK
<cambrian_invader> yeah, I'm not really happy with how diverged from mainline the *-imx stuff is
<cambrian_invader> layerscape/qoriq seems better, but only because it doesn't seem like there is as much development
<smurray> yes, there's a bunch of parallel dev going on inside NXP by different groups, I recently stumbled upon a whole different set of layers for the S32 chips by different devs AFAICT
<RP> rburton: yes, that is the plan
Guma has quit [Quit: Good Night Everyone...]
<rburton> moto-timo: why does py-crypto have a RDEPENDS:class-target?
<rburton> RP: some small fixups in ross/ssl you'll want to squash in
<moto-timo> rburton: those are stdlib from python3… at one point that was breaking -native or nativesdk- or so I remember from JaMa
<rburton> that means python should be fixed then surely
GNUmoon has quit [Ping timeout: 240 seconds]
<moto-timo> Anyway, it’s just a separation of deps that always apply vs. python3 sub packages that are only needed for target
<rburton> there's no native/target configuration in the recipe, so it sounds like the native/sdk provides are (were?) incomplete?
<moto-timo> So no, those don’t apply to native/nativesdk
<moto-timo> Minimal install without full stdlib on target
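A hedged illustration of the split being described, with made-up module names rather than the actual recipe contents:
    RDEPENDS:${PN} += "python3-some-third-party-module"
    RDEPENDS:${PN}:class-target += "python3-math python3-logging"   # python3 stdlib subpackages, only needed on target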
kevinrowland has joined #yocto
<rburton> moto-timo: python3-native will rprovide those
<rburton> (should)
<moto-timo> Sure. It’s mostly just handy to remember what is a sub package vs. a recipe for a module. But I’m not going to die on that mountain. Do what you think is best. It has probably been a couple years since that behavior was broken.
<moto-timo> Things with Perl are more obvious because the oe specific sub packages are named perl-module-foo. With python3 you have to remember to look at the manifest first, since you won't find e.g. python3-numbers on layerindex or pypi
<moto-timo> This confuses folks
dev1990 has quit [Quit: Konversation terminated!]
GNUmoon has joined #yocto
florian_kc has quit [Ping timeout: 240 seconds]
behanw has quit [Quit: Connection closed for inactivity]
mvlad has quit [Remote host closed the connection]
kevinrowland has quit [Quit: Client closed]
prabhakarlad has quit [Quit: Client closed]
dgriego has quit [Quit: Textual IRC Client: www.textualapp.com]
florian_kc has joined #yocto
florian_kc has quit [Ping timeout: 268 seconds]
pabigot has quit [Ping timeout: 256 seconds]
yannd has joined #yocto
pabigot has joined #yocto
<moto-timo> RP: thank you for sending out the patch bomb... I was about to ask what you wanted me to do about it
<RP> moto-timo: it took a bit of beating into shape but I think we're kind of there (keeping history of changes from the meta-oe version)
<moto-timo> RP: yeah... not so smooth as we would have hoped
<moto-timo> RP: it also highlights the disparity between what is in meta-openembedded vs. what is hammered on in core with the AB
<RP> moto-timo: right, core does have a slightly higher bar :/
<moto-timo> RP: which is the entire reason we wanted py-crypto to move... it needs to be hammered on
* moto-timo happy despite the churn
<RP> moto-timo: it is all driving me slightly crazy :(
<moto-timo> RP: you and me both
* RP lost half the weekend to it :(
<moto-timo> RP: sorry... I clearly did not expect so much trouble... sigh
<RP> moto-timo: I'd hoped it would be better :)
<RP> There is just so much stuff going on, that is the challenge
kevinrowland has joined #yocto
leon-anavi has quit [Quit: Leaving]
dgriego has joined #yocto
<kevinrowland> How can I determine why a "-dev" package was added to my image manifest and rootfs, when it wasn't added to `IMAGE_INSTALL` and doesn't show up in the `RDEPENDS` of any other package?  `FILES_${PN}-dev` for this package is set to `/usr/include /usr/lib`, which I know is wrong, but I'm not looking to fix that right now. I'm interested in knowing which bit of `bitbake` noticed that some library in `/usr/lib` is needed by some other package at runtime, and therefore decided to install the whole "-dev" package. Any pointers?
<RP> kevinrowland: it will be the shlibs code in package.bbclass
<RP> kevinrowland: I'd imagine it was injected into the RDEPENDS of some packages at package creation time
<kevinrowland> RP: Wonderful, thank you, I'll take a look
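A few hedged ways to trace that through pkgdata (package and file names are illustrative):
    oe-pkgdata-util find-path /usr/lib/libfoo.so.1          # which package actually ships the library
    oe-pkgdata-util lookup-recipe somepkg-dev               # which recipe produced that package
    grep -rl somepkg-dev tmp/pkgdata/<machine>/runtime/     # which packages picked up an RDEPENDS on it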
florian_kc has joined #yocto
Bardon has quit [Ping timeout: 272 seconds]
fitzsim has quit [Read error: Connection reset by peer]
Bardon has joined #yocto
fitzsim has joined #yocto