ChanServ changed the topic of #yocto to: Welcome to the Yocto Project | Learn more: | Join us or Speak at Yocto Project Summit (2021.11) Nov 30 - Dec 2, more: | Join the community: | IRC logs available at | Having difficulty on the list or with someone on the list, contact YP community mgr ndec
goliath has quit [Quit: SIGSEGV]
Vonter has quit [Ping timeout: 256 seconds]
Vonter has joined #yocto
jclsn7 has quit [Ping timeout: 250 seconds]
jclsn7 has joined #yocto
jclsn7 has quit [Ping timeout: 256 seconds]
jclsn7 has joined #yocto
jclsn7 has quit [Ping timeout: 256 seconds]
marc1 has quit [Ping timeout: 240 seconds]
jclsn7 has joined #yocto
camus1 has joined #yocto
camus has quit [Ping timeout: 256 seconds]
camus1 is now known as camus
jclsn7 has quit [Ping timeout: 256 seconds]
jclsn7 has joined #yocto
jclsn7 has quit [Ping timeout: 240 seconds]
sakoman has quit [Quit: Leaving.]
jclsn7 has joined #yocto
marc1 has joined #yocto
jclsn7 has quit [Ping timeout: 240 seconds]
jclsn7 has joined #yocto
sakoman has joined #yocto
jclsn7 has quit [Ping timeout: 256 seconds]
jclsn7 has joined #yocto
jclsn7 has quit [Ping timeout: 240 seconds]
jclsn7 has joined #yocto
jclsn7 has quit [Ping timeout: 240 seconds]
jclsn7 has joined #yocto
amitk has joined #yocto
jclsn7 has quit [Ping timeout: 256 seconds]
jclsn7 has joined #yocto
jclsn7 has quit [Ping timeout: 256 seconds]
jclsn7 has joined #yocto
sakoman has quit [Quit: Leaving.]
GNUmoon has quit [Ping timeout: 240 seconds]
GNUmoon has joined #yocto
alessioigor has joined #yocto
alessioigor has quit [Client Quit]
GNUmoon has quit [Ping timeout: 240 seconds]
oobitots has joined #yocto
osama2 has joined #yocto
osama3 has joined #yocto
osama2 has quit [Ping timeout: 256 seconds]
Etheryon has joined #yocto
pgowda_ has joined #yocto
kroon has joined #yocto
GNUmoon has joined #yocto
kanavin_ has joined #yocto
kanavin has quit [Ping timeout: 240 seconds]
rzr has joined #yocto
<rzr> hi,
<rzr> i am observing do_fetch hanging at 100% on ovmf ? any clue ?
frieder has joined #yocto
frieder has quit [Remote host closed the connection]
frieder has joined #yocto
goliath has joined #yocto
mckoan|away is now known as mckoan
rob_w has joined #yocto
Etheryon has quit [Ping timeout: 256 seconds]
goliath has quit [Quit: SIGSEGV]
<RP> rzr: what does ps show as running?
leon-anavi has joined #yocto
<hmw[m]> hi, I'm trying to use cmake in a project but I'm getting #error "__WORDSIZE is not defined" (but it is pulling bits/c++config.h from the host file system, not out of the SDK)
LetoThe2nd has joined #yocto
<LetoThe2nd> yo dudX
Schlumpf has joined #yocto
goliath has joined #yocto
<qschulz> o/
<rzr> RP, hi richard
<rzr> a couple of bitbake-worker decafbad
<rzr> let me try again with --debug
<LetoThe2nd> decaf? BAD!!!
<rzr> let me try to build again in docker container too
<rzr> i am using latest ubuntu
<RP> rzr: usually to do that it would be waiting on the exit of some command :/
tre has joined #yocto
<RP> LetoThe2nd: bitbake has strong feelings on that :)
<LetoThe2nd> RP: obviously we are aligned.
<RP> LetoThe2nd: the fakeroot worker has decafbadbad
Schlumpf has quit [Quit: Client closed]
<LetoThe2nd> RP: +1
<RP> should have been decafdeadbad in hindsight
<LetoThe2nd> decafdead
<landgraf> rzr: I had some issues with long do_fetch of ovmf as well
<landgraf> rzr: it took around 1hr iirc
<rzr> landgraf, hi
<landgraf> rzr: hi
<rzr> the problem for me is that it's hanging after 100%
<rzr> landgraf, I think this package is about 1GB ...
<rzr> 0: ovmf-native-edk2-stable202111-r0 do_fetch (pid 808701) 100% |###################################################################################################################################################################| 35.2M/s
<RP> keep in mind the progress bars may not be 100% accurate
<RP> i.e. it could still be doing something
<rzr> the git process is gone
<rzr> any timeout / delay variable to adjust ?
<RP> no, there wouldn't be a delay there to adjust
<RP> it does do other things after fetching such as mirror archiving. I assume you looked at the do_fetch log file?
florian has joined #yocto
<jclsn[m]> Can anyone tell me which layer provides Electron?
<jclsn[m]> There was a meta-electron once in OSSystems, but it is gone
<landgraf> rzr: 8 commits latest one was 8 years ago
<rzr> yes I know that this project is not easy to rebuild
<jclsn[m]> rzr: Do you use that layer? Is it any good?
<rzr> jclsn[m], no I was just looking at webengines and found that one
<jclsn[m]> Yeah but it doesn't look maintained
<rzr> jclsn[m], look at nwjs maybe it's better
<jclsn[m]> I need to know if anyone has experience with Electron and Yocto
jmiehe has joined #yocto
mvlad has joined #yocto
<jclsn[m]> Tried building Electron for honister but I am getting this error
<jclsn[m]> /meta-electron/recipes-electron/electron/ cannot map aarch64 to electron architecture
<jclsn[m]> Guess it is inheriting the architecture somehow from npm
ilunev has joined #yocto
<jclsn[m]> Uh this is Electron 7 also. Electron is at version 19 by now...
<michaelo> Hi halstead: oops the index on is broken. Can you do something about it?
<qschulz> michaelo: this is more and more recurrent, what's happening? Is there something we can do to make sure this does not happen anymore (or at least much less)?
<Saur[m]> rzr: Try `ps auxwwwf | less` and search for bitbake. That is usually a good way to find out what processes bitbake is currently running.
<rburton> pstree -p -l $(pgrep '^Cooker$')
<rburton> that will show a nice tree of all the processes under the main bitbake process
<michaelo> qschulz: you're right, this is quite frequent. I'll ask halstead what can cause this.
GillesM has joined #yocto
<landgraf> rzr: ovmf-native-edk2-stable202111-r0 do_fetch completed
florian_kc has joined #yocto
<rzr> Saur[m], yeah, I know, but I only see decafbad and the log file only lists a TaskStarted event
Schlumpf has joined #yocto
florian_kc has quit [Ping timeout: 240 seconds]
Schlumpf has quit [Ping timeout: 256 seconds]
vladest has quit [Quit: vladest]
vladest has joined #yocto
florian has quit [Quit: Ex-Chat]
<otavio> jclsn[m]: Electron isn't avail on OE/Yocto as far as I know. It is not easy to package. I suggest using cog, chromium or webengine
ldericher has quit [Quit: ZNC 1.8.2 -]
ldericher has joined #yocto
ldericher has quit [Quit: ZNC 1.8.2 -]
ldericher has joined #yocto
<RP> michaelo: the issue is probably related to warnings in
<RP> michaelo: or some of those errors :/
<jclsn[m]> <otavio> "jclsn: Electron isn't avail on..." <- Thanks. I guess I will ditch Electron then. I am not going into maintaining a layer for it, if it is that hard
<otavio> jclsn[m]: it is kinda as complex as chromium. It isn't worth it (in my opinion)
Guest75 has joined #yocto
Guest75 has quit [Client Quit]
tgamblin_ has quit [Quit: Leaving]
tgamblin has joined #yocto
<jclsn[m]> <otavio> "jclsn: it is as kinda complex as..." <- Yeah, cog seems like a simple alternative. Plus the meta-webkit layer is maintained by Igalia, which is a real company
<jclsn[m]> But cog seems to be still in beta phase
<jclsn[m]> There is no version 1.0
oobitots has quit [Quit: Client closed]
akiCA has joined #yocto
codavi has joined #yocto
<rburton> RP: meta-arm is now kirkstone
akiCA has quit [Ping timeout: 256 seconds]
<jclsn[m]> webkit and cog have already been evaluated by a colleague I was just told. He said performance and debugging options were not good
<JaMa> rburton: will you run the spdx script as well or should I send patch for meta-arm?
<rburton> JaMa: if you have a patch, send it :)
sakoman has joined #yocto
<JaMa> ok, sent
<rburton> thanks
<JPEW> jclsn[m]: We're using WebKit+cog in production
<jclsn[m]> JPEW: To display the UI of your watches?
<jclsn[m]> Can you maybe give me tips to improve performance then? What are the advantages compared to Chromium?
<JPEW> jclsn[m]: No. I work on boat displays (think "glass helm"); we have a program for 3rd parties to display content on our screens using HTML5
<jclsn[m]> JPEW: On what machine?
<JPEW> When I was doing the evaluation a few years ago, WebKit took ~1/2 the time to build in Yocto vs. Chromium (which, since WebKit can take an hour is saying something!)
<jclsn[m]> Advantages compared to Chromium?
<JPEW> The lowest end we can display on is a dual-core ARM A7 (I think)
<jclsn[m]> Building Yocto fast is not so crucial. Performance during runtime is important
<JPEW> Also, Wayland support was mandatory for use and Chromium wasn't there yet (when I last looked a few years ago)
<jclsn[m]> There is chromium-ozone-wayland now
<JPEW> Ok. My understanding of the tradeoff is that WebKit is smaller and lighter than Chromium, which seems to be the case in my eval
<JPEW> But, Chromium might perform better, if that's your goal
<jclsn[m]> Yeah I would assume that as well
<jclsn[m]> My colleague said Chromium was faster though
<jclsn[m]> We have an old i.MX6 board, so performance is important
<jclsn[m]> on the i.MX8 it won't matter so much
<JPEW> Ya, our concern was resource usage, since we have to fit in products that are also doing other important things :)
Minvera has joined #yocto
<JPEW> (and wayland support)
<jclsn[m]> Yeah sounds good in theory
<jclsn[m]> I guess I will have to re-evaluate
<jclsn[m]> The old evaluation was a year ago
<JPEW> jclsn[m]: I'd recommend using since the webkit in oe-core tends to lag behind
<jclsn[m]> JPEW: Already building that one :)
<jclsn[m]> Taking some time as well
<qschulz> running Chromium in a flatpak with Wayland only.. crashes multiple times a day so I'm not sure it's there yet :)
<jclsn[m]> Oh but almost done
<jclsn[m]> significantly faster than chromium
<qschulz> (on my PC obviously)
vladest has quit [Remote host closed the connection]
<JPEW> Ya, WebKit also doesn't require clang & friends, so that helps
vladest has joined #yocto
<JPEW> (unless you already were using clang anyway, we were not)
<RP> moto-timo: Adding --force-reinstall to the pip flags fixes that AB issue FWIW
<RP> rburton: thanks, things look greener again now :)
<jclsn[m]> JPEW: chromium requires clang as well
<JPEW> Use the --force !
<jclsn[m]> Ah webkit doesn't
<jclsn[m]> I see
<smurray> RP: I realized this morning that the convert-spdx-licenses script has the same issue wrt closing the new file, do you want another patch?
<RP> smurray: I was meaning to check that. Please
<smurray> RP: okay, will do once I'm off this conf call
<RP> moto-timo: well, it fixed a local reproducer of one issue
<RP> JPEW: I did mean to ask you about something. I don't know if you've ever played with the threaded checkstatus code in sstate.bbclass?
<JPEW> RP: Let me see
<RP> JPEW: It is worrying me a bit as the recent environment issues we ran into there scare me
<RP> JPEW: It runs in the core bitbake context and shares a datastore amongst multiple threads, which is why it uses threads rather than multiprocessing. I'm not convinced it is safe though
<JPEW> Hmm, whats the race?
<JPEW> Ya, seems like it could race easily
<RP> JPEW: basically I hacked it to setup the correct env in advance then it doesn't have to change it
<RP> not something I like :/
<JPEW> Ya, I can see why
<RP> which is a fun trace to debug
<qschulz> smurray: can't we use context instead of manually closing the fie descriptor?
<RP> JPEW: going forward we may need to thread the fetcher code too to help solve some of the go issues so I'm getting a bit worried :/
<RP> I wonder if async may be more helpful...
<JPEW> RP: I was just going to say that
<rburton> RP: do you remember if there was any conclusion on removing the exported test stuff in oeqa?
<JPEW> The "threading" is so that you can have multiple child processes running at once?
<jclsn[m]> cog is not really a good tool to propose to your boss in German, because we pronounce every consonant at the end of a word hardly
<JPEW> (presumably, since the GIL isn't going to allow multiple bits of python code to run at once)
<jclsn[m]> s/hardly/hard/
* JPEW doesn't know enough linguistics to know why that's a problem... pretty sure it's pronounced as a hard "g" in English too
Vonter has quit [Ping timeout: 256 seconds]
<jclsn[m]> No in English you pronounce a "g" soft in the end. In Germany it will always be a "k"
thekappe has joined #yocto
<smurray> qschulz: I went with the simplest fix possible, AFAICS the logic would need to be shifted around to make that work since the file is opened beforehand
<qschulz> jclsn[m]: depends on the region :)
<jclsn[m]> So there is this tool "cock"
<jclsn[m]> Super light weight
<jclsn[m]> Little overhead
<jclsn[m]> Kinda gay haha
Vonter has joined #yocto
<jclsn[m]> qschulz: In English? I studied Anglistics once. Consonants are usually pronounced softly
<qschulz> jclsn[m]: in German, words ending in ig aren't pronounced the same way everywhere
<qschulz> though yes.. cog does not end with ig :)
Tokamak has joined #yocto
<LetoThe2nd> okay folks, can we agree on this being covered now?
<jclsn[m]> Calls himself a jester and spoils the fun...
<JPEW> LetoThe2nd: Yes
dtometzki has quit [Ping timeout: 256 seconds]
<JPEW> RP: Arguably, the fetcher should not be manipulating the environment in that way anyway. It *should* pass the correct environment to the subprocess and not change it locally
Schlumpf has joined #yocto
<JPEW> RP: Oh, but it's running python code.... that's too bad
* JPEW wonders if the fetcher should fork for that part
olani has quit [Ping timeout: 240 seconds]
rber|res has joined #yocto
oobitots has joined #yocto
<rber|res> I am a bit confused about header only recipes which seem to work although no one sets ALLOW_EMPTY_${PN} = "1"
<rber|res> I thought that if the "main" package is empty you have to allow that explicitly.
<JaMa> rber|res: why do you think so? header only recipe is usually quite useless on target and creating empty package doesn't help anyone (other than ${PN}-dev default RDEPENDS which you can set to empty instead of default ${PN})
<JaMa> RDEPENDS:${PN}-dev = "" is useful in header only recipes, but ALLOW_EMPTY is just bad work around
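JaMa's advice can be sketched as a minimal header-only recipe (hypothetical recipe name, URL, and paths, shown only to illustrate where the `RDEPENDS:${PN}-dev` override goes):

```bitbake
SUMMARY = "Example header-only library (hypothetical)"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://LICENSE;md5=<md5sum>"

SRC_URI = "git://;branch=main;protocol=https"

do_install() {
    install -d ${D}${includedir}/example
    install -m 0644 ${S}/include/example/*.h ${D}${includedir}/example/
}

# No ALLOW_EMPTY:${PN} needed: instead, drop the default
# ${PN}-dev -> ${PN} runtime dependency, since the empty
# main package would carry no content anyway.
RDEPENDS:${PN}-dev = ""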
kroon has quit [Quit: Leaving]
<rber|res> @JaMa, this seems to explain it ;)
<rber|res> @JaMa, Now I am only wondering about the SDK. Normally I would just include the main package in the image and the SDK contains the -dev package. Does this still work?
rob_w has quit [Quit: Leaving]
<rber|res> @JaMa, and I guess the same or a similar trick should work with statically linked library only (.a) recipes as well
<RP> rburton: no, but I do keep getting tempted to
<RP> JPEW: a separate process could work but that code does connection caching
<RP> JPEW: basically there is a connection opened per thread so we can check the presence of a large number of http urls quickly in parallel
<RP> JPEW: I'm not saying it a is a good thing, just what it does
<JPEW> I see that. For better or worse I usually assume that if people want connection caching, they would use `requests` :)
<JPEW> So I sort of glossed over that part
<JPEW> Makes sense here though
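The "one cached connection per thread" arrangement RP describes can be sketched with `threading.local` (a generic illustration, not the actual sstate.bbclass code; the connection object here is a stand-in dict rather than a real `http.client.HTTPConnection`):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# threading.local gives each worker thread its own slot, so each
# thread lazily creates one connection and reuses it across checks.
_tls = threading.local()

def get_connection():
    # the dict stands in for e.g. http.client.HTTPConnection
    if not hasattr(_tls, "conn"):
        _tls.conn = {"thread": threading.get_ident(), "uses": 0}
    _tls.conn["uses"] += 1
    return _tls.conn

def check(url):
    conn = get_connection()
    # a real checker would issue a HEAD request on conn here
    return url, conn["thread"]

def check_all(urls, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(check, urls))
```

The appeal is that connection setup cost is paid once per thread, not once per URL; the hazard RP raises is that anything else shared across those threads (like a datastore) is not protected by this pattern.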
<smurray> rber|res: for SDKs all the dev (and dbg and src) packages corresponding to the packages in the target image are installed
<smurray> rber|res: if you need static-dev in the SDK, take a look at SDKIMAGE_FEATURES
<rber|res> @smurray I am aware of that, but with RDEPENDS:${PN}-dev = "" we don't have any empty "main" package to be installed into the target image.
<smurray> rber|res: AFAIK the mechanism used with COMPLEMENTARY_GLOBS effectively adds the packages to IMAGE_INSTALL, it's not relying on the package manager dependencies
<smurray> rber|res: if there's no main package, the -dev can only get pulled in if it's a dependency of something else AIUI
<rber|res> smurray, Ah so you say, although there is no main package created it still works.
GillesM has quit [Quit: Leaving]
<rfs613> curious if anyone else has noticed do_cve_check seems to take considerably longer than before?
<JaMa> rber|res: I think you'll need to add PN-dev to SDK explicitly, because you cannot add empty PN to IMAGE_INSTALL, so COMPLEMENTARY_GLOBS won't match on it (for dev-pkgs nor dbg-pkgs)
<JaMa> unless whatever package which uses these headers explicitly adds your PN-dev in its RDEPENDS:OtherPN-dev
<JaMa> then adding OtherPN in IMAGE_INSTALL will correctly install both -dev packages in the SDK
jwillikers has joined #yocto
<smurray> JaMa: the thing I'm sketchy on is if DEPENDS plays a role, i.e. do -dev packages have the -dev of things in a recipe's DEPENDS in the package dependencies. I was thinking they did, but I've not dug down to look at a spec file to check
<rfs613> the slowdown seems to correlate with 6ec2230291 ("cve-check: add lockfile to task") being added (I am on dunfell branch)
jwillikers has quit [Remote host closed the connection]
<JaMa> smurray: definitely not all, e.g. I've just checked has DEPENDS = "virtual/libiconv libunistring" and the control file in .ipk says: Depends: libidn12 (= 1.36-r0) ;; Recommends: autoconf-archive-dev, glibc-dev
<smurray> JaMa: ah, okay
<JaMa> but still explicit RDEPENDS from OtherPN-dev on PN-dev IMHO makes sense in cases like this (even without building the SDK)
<JaMa> better than remembering that you need to add PN-dev to all SDKs which include OtherPN and IMHO still better than creating and installing empty package to target image, just because it makes SDK generation a bit easier
thekappe has quit [Quit: Client closed]
Schlumpf has quit [Quit: Client closed]
<kergoth> any thoughts on this? Seems reasonable to me. Will slightly bump task time on sysroot_stage_all due to the call to realpath, but probably small in comparison to the actual work being done. Hmm
frieder has quit [Remote host closed the connection]
dmoseley has quit [Quit: ZNC 1.8.2 -]
prabhakarlad has quit [Quit: Client closed]
<qschulz> kergoth: what about backporting patches for cpio instead? or are we relying on the host cpio for that?
<kergoth> It's host due to its use in native recipes as well, would lead to recursion trying to use our own
<kergoth> Oh, thanks, somehow I completely missed that. /eyeroll
oobitots has quit [Quit: Client closed]
dmoseley has joined #yocto
<RP> kergoth: I didn't particularly like having to do that but...
<kergoth> Yeah.. it's not great. Conceptually using relative paths is nice, but adding that manual realpath is ugly, and the reason for it is too
<RP> kergoth: exactly :/
<moto-timo> RP: --force-reinstall is what I was trying as well
<RP> moto-timo: it doesn't seem quite the thing to do but I thought testing and seeing if there was anything else might be helpful
<moto-timo> RP: installing has been the biggest pain of this entire process.
<RP> moto-timo: it reminds me a lot about when we tried to use a package manager to handle "staging"
<RP> what became sstate
<moto-timo> RP: I think there might be some host cache involved in pip... but not certain
<RP> moto-timo: I saw some warnings about my HOMEDIR :/
<moto-timo> RP: I suppose I could revisit python3-installer instead... but I was hoping to follow upstream "Install with pip" guidance
<RP> moto-timo: I suspect we may need to reconfigure the location
<kergoth> there's an env var for the pip cache location
<kergoth> on my mac i set it to ~/Library/Caches/pip
<moto-timo> also --download-cache <dir>
<moto-timo> reading the man page for the 1000th time
AKN has joined #yocto
dmoseley has quit [Quit: ZNC 1.8.2 -]
goliath has quit [Quit: SIGSEGV]
dmoseley has joined #yocto
tre has quit [Remote host closed the connection]
dmoseley has quit [Client Quit]
dmoseley has joined #yocto
lucaceresoli has quit [Remote host closed the connection]
lucaceresoli has joined #yocto
<vmeson> rburton: Can you briefly explain why you want to remove meta/classes/testexport.bbclass ?
<rburton> JPEW: a new http fetcher that just uses requests entirely and doesn't fork out to wget sounds like an awesome idea
taco has joined #yocto
taco is now known as taco___
<rburton> vmeson: primarily because i wasn't sure anyone was using it at all, and it is responsible for a duplicated tree of oeqa infrastructure which is just 90% redundant
<vmeson> rburton: ok, thanks.
<taco___> I've been trawling the docs and google for a while but haven't found much help for this question yet. What is the canonical way to remove busybox and move to binutils?
<LetoThe2nd> taco___: not building an image that pulls it in? core-image-full-cmdline for starters, maybe?
<taco___> oh man that is embarrassing. thanks
<rburton> JPEW: also my son wonders if you're related to James Watt
<moto-timo> seems like we might want pip cache in ${TMPDIR}/cache/pip or something like that?
<rburton> taco___: *removing* busybox entirely is hard. if you install the 'proper' tools like util-linux then those are preferred over busybox
<kergoth> rburton, JPEW: wonder if something like would be of use as an alternative to the full requests due to our issues with python dependencies
<moto-timo> ${TOPDIR} would probably mix things up too much especially for multiconfig?
<JPEW> rburton: Not AFAIK. My family name came from Scandavian. Came through New Orleans which didn't keep good records so we're not quite sure what it was before.
<rburton> kergoth: interesting. as long as it covers ssl/proxies/redirects sanely, i'm happy
* kergoth nods
<JPEW> would be my preference if we go full asyncio
<moto-timo> +1
<kergoth> good suggestion
<rburton> yeah, doing it properly with asyncio sounds like a winning move
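The asyncio shape being discussed can be sketched with `asyncio.gather` plus a semaphore to cap concurrency (a pattern sketch only: the check coroutine is a stand-in, where a real fetcher would issue an HTTP HEAD, e.g. via aiohttp as JPEW suggests):

```python
import asyncio

async def check_url(url):
    # stand-in for the actual network round-trip (e.g. an HTTP HEAD)
    await asyncio.sleep(0)
    return url, not url.endswith("missing")

async def check_all(urls, limit=10):
    # semaphore caps how many checks are in flight at once,
    # analogous to the per-thread connection limit today
    sem = asyncio.Semaphore(limit)

    async def bounded(url):
        async with sem:
            return await check_url(url)

    results = await asyncio.gather(*(bounded(u) for u in urls))
    return dict(results)
```

One attraction over the threaded version: everything runs on one thread, so the shared-datastore race RP worries about largely disappears.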
<kergoth> I feel like it's just a matter of time before bitbake has to be used via a virtualenv, possibly with a wrapper script to ease its setup.
<kergoth> vendoring everything is not great
<rburton> agreed
<taco___> rburton thanks for that bit.
<kergoth> Not that I love installing everything with pip *either*, but.. :)
<moto-timo> kergoth: we could go with python3-installer but it was a bit more wonky (it provides a library, not an install command)
<JPEW> moto-timo, kergoth, rburton:
<moto-timo> kergoth:
<moto-timo> that was also a bit of a bootstrapping conundrum
<kergoth> JPEW: oh. hah. nice.
<JPEW> I keep carrying it around locally to test stuff, so theres some non-apropo stuff on the branch.... but it's been working pretty well for me
<kergoth> JPEW: looks promising, exactly the sort of thing i was thinking about, but better
<JPEW> kergoth: .... this is not the first time I've done this sort of wrapping :)
<kergoth> guessing it'd be nice to make the usage of the wrapper bits optional somehow, so it could be installed still. i.e. make the "real" scripts live elsewhere and install from there
<kergoth> course it's not installable anymore, so guess that doesn't matter
<JPEW> Ya, the point here is to use modules from pip instead of vendoring, so it's not really optional
<kergoth> moto-timo: looks like you're doing good work on this, and it seems to be messy, so thanks :)
<moto-timo> kergoth: you're welcome
<kergoth> JPEW: I meant more allow bitbake to be installed in the venv, and wrap the calls to the installed bitbake in the venv, but that'd probably be too easy to get out of sync, so your method is likely better. I just still rather dislike how far we deviate from typical python project behavior sometimes :)
dtometzki has joined #yocto
<kergoth> I'll have to try that bitbake-venv branch, seems nice.
davidinux has quit [Quit: WeeChat 2.8]
Thorn has quit [Ping timeout: 256 seconds]
<landgraf> Is there a way to go down from the CVE metrics (link from the project status email) to the list of CVEs per release/branch without running cve-checker locally? It may be useful.
Thorn has joined #yocto
<JPEW> kergoth: ya, I like how easy it is to still hack on bitbake with this method though. It's really the best of both worlds IMHO
oobitots has joined #yocto
<JPEW> You can directly edit bitbake, but still use packages
<smurray> taco___: in theory setting PREFERRED_PROVIDER_virtual/base-utils = "packagegroup-core-base-utils" in addition to the VIRTUAL-RUNTIME_base-utils-foo variables should get you there, but the result may need vetting, it's possible there are some things not in that packagegroup you might want
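smurray's suggestion would look roughly like this in local.conf (an untested sketch; verify the variable names and packagegroup contents against your release, and note smurray's caveat that the packagegroup may not cover everything you want):

```conf
# Prefer the "real" utilities packagegroup over busybox
PREFERRED_PROVIDER_virtual/base-utils = "packagegroup-core-base-utils"
VIRTUAL-RUNTIME_base-utils = "packagegroup-core-base-utils"
# related VIRTUAL-RUNTIME_base-utils-* variables (e.g. -hwclock,
# -syslog) may also need pointing away from busybox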
<rzr> ovmf-edk2-stable202111-r0 packaged finally
<moto-timo> RP: I'm testing with --no-cache (which is effectively what we have been doing so far anyway)
mckoan is now known as mckoan|away
osama3 has quit [Ping timeout: 256 seconds]
goliath has joined #yocto
<moto-timo> RP: RESULTS - buildepoxy.EpoxyTest.test_epoxy: ERROR (0.08s) --- seems to be a meson issue?
ldericher has quit [Quit: ZNC 1.8.2 -]
ldericher has joined #yocto
Guest219 has joined #yocto
Guest219 has quit [Client Quit]
chbae has joined #yocto
<chbae> Could anyone let me know how to check debug symbol after rootfs creation? I should check it for production.
chbae has quit [Client Quit]
<moto-timo> hmmm.
<zyga[m]> RP: did you manage to solve the thread/env issue that you ran into the other day?
jmiehe has quit [Quit: jmiehe]
<moto-timo> ugh... is it the syntax of "if [ -f ${D}${bindir}/pip ]" should it be -e?
Vonter has quit [Read error: Connection reset by peer]
<moto-timo> apparently so
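The `-f` vs `-e` distinction moto-timo hit can be demonstrated generically: `-f` is true only for regular files (after following symlinks), while `-e` is true for anything that exists, including directories.

```shell
# set up one regular file and one directory in a scratch area
tmp=$(mktemp -d)
touch "$tmp/regular"
mkdir "$tmp/dir"

[ -f "$tmp/regular" ] && echo "regular file: -f is true"
[ -e "$tmp/dir" ]     && echo "directory: -e is true"
[ -f "$tmp/dir" ]     || echo "directory: -f is false"
```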
Vonter has joined #yocto
<moto-timo> RP: do you want a v3 entire series or just fixes on top of the series?
AKN has quit [Read error: Connection reset by peer]
<zyga[m]> RP: Nice! Threading bugs are elusive.
pgowda_ has quit [Quit: Connection closed for inactivity]
GNUmoon has quit [Ping timeout: 240 seconds]
florian_kc has joined #yocto
sakoman has quit [Quit: Leaving.]
GNUmoon has joined #yocto
prabhakarlad has joined #yocto
oobitots has quit [Quit: Client closed]
sakoman has joined #yocto
<moto-timo> RP: v3 sent. the root cause of latest woes was python3-pip-native and wrong syntax of the file check for ${D}${bindir}/pip
* moto-timo hopes RP is off riding his MTB
<moto-timo> maybe too dark for that
goliath has quit [Quit: SIGSEGV]
<tgamblin> Are there examples of recipes that add to KERNEL_FEATURES when included in a build? Bonus points if it's for a ptest
leon-anavi has quit [Quit: Leaving]
bluelightning has quit [Remote host closed the connection]
amitk has quit [Ping timeout: 240 seconds]
<vmeson> tgamblin: meta-agl.git/meta-agl-bsp/virtualization-layer/recipes-kernel/linux/ = " ${@bb.utils.contains("DISTRO_FEATURES", "ptest", " features/scsi/scsi-debug.scc", "", d)}"
<vmeson> and cour mockup thing: meta-agl.git/meta-agl-bsp/virtualization-layer/recipes-kernel/linux/ = " ${@bb.utils.contains("DISTRO_FEATURES", "ptest", " features/gpio/mockup.scc", "", d)}"
<tgamblin> vmeson: Thanks. Seems like it doesn't happen in non-kernel recipes
paulg has quit [Remote host closed the connection]
mvlad has quit [Remote host closed the connection]
florian_kc has quit [Ping timeout: 256 seconds]
goliath has joined #yocto
<kergoth> Can't change one recipe's variables from another.
Guest47 has joined #yocto
Guest47 has quit [Client Quit]
florian_kc has joined #yocto
GillesM has joined #yocto
<kergoth> Hmm, seems all the yocto docs links are dead now, have to descend to a specific version to get the content
<kergoth> so all the bookmarks are useless now
<RP> moto-timo: just off eating, bit dark for the mountain bike!
<moto-timo> RP: that was my second guess
<moto-timo> RP: I'm not 100% certain v3 is the final, but it should be much closer...
<moto-timo> RP: still some collateral damage in meta-openembedded with removal of python3-nose...
<RP> moto-timo: just catching up, I saw the pip failures. I'll queue another test
<moto-timo> RP: thank you
pasherring has quit [Quit: Leaving]
<RP> moto-timo: there were a couple of ptest failures in the last build btw. Not sure if they're going to be fixed or not
<moto-timo> RP: the best I could tell the buildepoxy ones were related to meson, but I don't know if that was back to the python3-pip-native root cause or not
<moto-timo> RP: those were sdk selftest, not ptest
<moto-timo> looks like kanavin's build grabbed all the workers
Vonter has quit [Ping timeout: 240 seconds]
<moto-timo> somehow the "fix" for pip rewriting the #!/usr/bin/nativepython3 didn't work for pytest :/
<moto-timo> oh, the fix was 1s and this is line 2
<RP> ah!
<moto-timo> different pattern, same problem
<moto-timo> pip install --do-not-do-things-we-do-not-want-you-to-do
<JPEW> moto-timo: Ah, the ever popular --dwim flag :)
<moto-timo> JPEW +1
<moto-timo> the meson-wrapper needs <native sysroot>/usr/bin/nativepython3 I think, so blanket replacement for that string is not correct
<moto-timo> (I don't pretend to fully understand the meson-wrapper just yet)
<moto-timo> above snippet is from /usr/bin/pytest
<moto-timo> same is in /usr/bin/py.test
Vonter has joined #yocto
Minvera has quit [Remote host closed the connection]
<RP> moto-timo: I'm still just trying to catch up with my inbox and things from being away a few hours and distracted with meetings so I'm not much help atm
<moto-timo> RP: just telling me to look for ptest failures was a help TBH
<moto-timo> RP: you get a free pass ;)
lucaceresoli has quit [Quit: Leaving]
kevinrowland has joined #yocto
goliath has quit [Quit: SIGSEGV]
Vonter has quit [Ping timeout: 250 seconds]
bluelightning has joined #yocto
skunkjoe has joined #yocto
* moto-timo files several bugs to capture what needs to get the Python PEP-517 work completely over the finish line (so they can happen between M3 and M4)
Vonter has joined #yocto
florian_kc has quit [Ping timeout: 272 seconds]
chep` has joined #yocto
chep has quit [Read error: Connection reset by peer]
chep` is now known as chep
Vonter has quit [Read error: Connection reset by peer]
Vonter has joined #yocto
sakoman has quit [Quit: Leaving.]
skunkjoe has quit [Remote host closed the connection]
<agherzan> JaMa: The mirroring CI workflow is now back in place.
<agherzan> For meta-raspberrypi ^
<kevinrowland> Hello Yocto folks. Is there a quick way to deploy a package that's not present in IMAGE_INSTALL to a target? I know I can do it with `devtool deploy-target`, but that requires me to run `devtool modify` and `devtool build` first. Is there a way to "deploy to the target" after just doing a call to `bitbake`? I'm trying to keep instructions simple
<kevinrowland> for the rest of my team, especially those who aren't super familiar with Yocto. Maybe I should just write a little wrapper to `rsync` the contents of `packages-split/${PN}/` after `bitbake` builds the thing?
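kevinrowland's rsync idea could be sketched as a tiny wrapper function (hypothetical: the `packages-split` location depends on your TMPDIR/WORKDIR layout, and a real target would be something like `root@board:/` reachable over ssh):

```shell
# deploy_pkg: copy the contents of a packages-split/<pkg>/ dir onto a
# target with rsync. Both arguments are caller-supplied assumptions:
#   $1  e.g. tmp/work/.../packages-split/mypkg
#   $2  e.g. root@192.168.1.10:/  (or a local dir for dry runs)
deploy_pkg() {
    pkgdir="$1"
    target="$2"
    [ -d "$pkgdir" ] || { echo "no such package dir: $pkgdir" >&2; return 1; }
    # -a preserves modes/times; the trailing slash copies contents, not the dir
    rsync -a "$pkgdir"/ "$target"
}
```

Unlike `devtool deploy-target`, this keeps no record of deployed files for later cleanup, which is part of what the package-feed route gives you for free.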
<agherzan> With this occasion, I've implemented a GitHub action to deal with git mirrors: . If anyone is interested, check it out.
<moto-timo> agherzan: that is very handy. thank you for sharing
<agherzan> moto-timo: I'm glad. Feel free to contribute. My next plan is to have multiple destinations
<agherzan> So you can push to multiple remotes your GitHub repo.
<agherzan> (without multiple local bares)
<moto-timo> kevinrowland: you can use a package manager and a "package feed" or you can use NFS/TFTP
<moto-timo> kevinrowland: hypothetically some "smarter" workflow with rsync could be added (hand-wavy) somewhere
<kevinrowland> moto-timo: does the "package feed" route require `rpm` or another package manager to be installed on the target?
<kevinrowland> moto-timo: I love `devtool` because it knows which files need to be deployed (although I think it's a little heavy handed and deploys from `image/`). A package manager would probably have the same knowledge, right?
<JaMa> agherzan: thanks, I'll switch to github anyway (already done in some builds - for others already pending reviews to switch), but consistency will be still very useful for other users
<agherzan> JaMa: Indeed.
codavi has quit [Ping timeout: 272 seconds]
<moto-timo> kevinrowland: yes, you would need dnf, apt or opkg on target
<moto-timo> kevinrowland: I am a _HUGE_ fan of devtool, but sometimes having to devtool modify etc is a burden
<kevinrowland> moto-timo: "package feed" tip was great, thank you. Looks like I have some reading to do in the "Using Runtime Package Management" section of the mega manual