ChanServ changed the topic of #yocto to: Welcome to the Yocto Project | Learn more: https://www.yoctoproject.org | Join us or Speak at Yocto Project Summit (2021.11) Nov 30 - Dec 2, more: https://yoctoproject.org/summit | Join the community: https://www.yoctoproject.org/community | IRC logs available at https://www.yoctoproject.org/irc/ | Having difficulty on the list or with someone on the list, contact YP community mgr ndec
dvorkindmitry has quit [Quit: KVIrc 5.0.0 Aria http://www.kvirc.net/]
otavio has quit [Read error: Connection reset by peer]
otavio has joined #yocto
rsalveti has joined #yocto
tantrum_ has joined #yocto
tantrum has quit [Ping timeout: 260 seconds]
tantrum_ is now known as tantrum
florian__ has quit [Ping timeout: 264 seconds]
goliath has quit [Quit: SIGSEGV]
sakoman has quit [Quit: Leaving.]
camus has joined #yocto
camus1 has joined #yocto
camus has quit [Ping timeout: 260 seconds]
camus1 is now known as camus
camus1 has joined #yocto
camus has quit [Ping timeout: 264 seconds]
camus1 is now known as camus
Fanfwe has quit [*.net *.split]
yocti has quit [*.net *.split]
chep has quit [*.net *.split]
wyre has quit [*.net *.split]
ndec has quit [*.net *.split]
tlwoerner has quit [*.net *.split]
georgem has quit [*.net *.split]
jonmason has quit [*.net *.split]
yocton has quit [*.net *.split]
rzr has quit [*.net *.split]
rzr has joined #yocto
tlwoerner has joined #yocto
yocti has joined #yocto
Fanfwe has joined #yocto
jonmason has joined #yocto
ndec has joined #yocto
georgem has joined #yocto
wyre has joined #yocto
tantrum has quit [Ping timeout: 264 seconds]
glembo[m] has quit [*.net *.split]
dwagenk has quit [*.net *.split]
perdmann_ has quit [*.net *.split]
MWelchUK has quit [*.net *.split]
ak77 has quit [*.net *.split]
mrnuke has quit [*.net *.split]
kergoth has quit [*.net *.split]
perdmann has joined #yocto
mrnuke has joined #yocto
ak77 has joined #yocto
MWelchUK has joined #yocto
dwagenk has joined #yocto
glembo[m] has joined #yocto
camus has quit [Quit: camus]
camus has joined #yocto
dtometzki has quit [Ping timeout: 245 seconds]
camus1 has joined #yocto
camus has quit [Ping timeout: 260 seconds]
camus1 is now known as camus
xmn has quit [Quit: ZZZzzz…]
alessioigor has joined #yocto
alessioigor has quit [Client Quit]
camus has quit [Ping timeout: 260 seconds]
camus has joined #yocto
leon-anavi has joined #yocto
dtometzki has joined #yocto
zpfvo has joined #yocto
vermaete has joined #yocto
amitk has joined #yocto
bps2 has joined #yocto
tnovotny has joined #yocto
bps2 has quit [Ping timeout: 260 seconds]
mihai has quit [Quit: Leaving]
<dwagenk> Is the layerindex webUI broken right now? The dropdown menu for branch selection doesn't work, it doesn't search for anything when typing... Only thing that works as normal: pressing <Enter> clears the search box.
<dwagenk> Already tried both firefox and chromium. Both without changes in the configuration compared to last time (~2 weeks ago)
zpfvo has quit [Ping timeout: 268 seconds]
zpfvo has joined #yocto
vermaete has quit [Quit: Client closed]
zpfvo has quit [Ping timeout: 260 seconds]
zpfvo has joined #yocto
<dwagenk> searching on other tabs (recipes, machines, ..) works OK. It is just the layer search and the dropdown menus that are not working.
zpfvo has quit [Ping timeout: 246 seconds]
zpfvo has joined #yocto
bps2 has joined #yocto
zpfvo has quit [Ping timeout: 260 seconds]
zpfvo has joined #yocto
xmn has joined #yocto
zpfvo has quit [Ping timeout: 268 seconds]
zpfvo has joined #yocto
zpfvo has quit [Ping timeout: 260 seconds]
zpfvo has joined #yocto
mihai has joined #yocto
zpfvo has quit [Ping timeout: 260 seconds]
zpfvo has joined #yocto
zpfvo has quit [Ping timeout: 246 seconds]
zpfvo has joined #yocto
camus1 has joined #yocto
camus has quit [Ping timeout: 260 seconds]
camus1 is now known as camus
zpfvo has quit [Ping timeout: 260 seconds]
zpfvo has joined #yocto
vicale has quit [Quit: Leaving]
florian__ has joined #yocto
<wCPO> I'm having trouble building groff-native (hardknott) as it can't find automake-1.16: "groff-native/1.22.4-r1/groff-1.22.4/build-aux/missing: line 81: automake-1.16: command not found", has anyone experienced this before?
<hmw[m]> hi im getting: Failed to load module: /usr/lib/weston/desktop-shell.so: cannot open shared object file: No such file or directory
<hmw[m]> btw im on dunfell
<manuel1985> hmw[m]: Do you have the package 'weston' installed?
<hmw[m]> manuel1985: yes
<hmw[m]> ( it is from /usr/bin/weston.log
<hmw[m]> opkg list weston
<hmw[m]> weston - 8.0.0-r0.arago3
<manuel1985> hmw[m]: No idea. oe-pkgdata-util tells me this file is part of package 'weston', and it indeed is installed on my machine.
<manuel1985> I'm on Dunfell as well with weston 8.0.0-r0.
<hmw[m]> rebuild it is then :(
<hmw[m]> manuel1985: ty
<manuel1985> hmw[m]: Just checked the recipe, there seems to be a packageconfig option for shell-desktop
<manuel1985> Oh but if your weston log is throwing that error, that packageconfig option is probably already set to use shell-desktop
<manuel1985> Otherwise it wouldn't start looking for it
<manuel1985> Ok got no idea, sry. If you find out the reason, I'd be interested to learn. Best of luck.
<marex> hmw[m]: well is the desktop-shell.so in tmp/work* ?
<marex> hmw[m]: find tmp/work -name desktop-shell.so
<marex> it should be somewhere in packages-split/
<marex> hmw[m]: if it is, then make sure that package is installed in your image , is it ?
goliath has joined #yocto
<hmw[m]> marex: shall check in a bit ( did a reset of my build system ), the rebuild will take until ~13:00. thanks for the info btw
hpsy has joined #yocto
<marex> hmw[m]: there is some oddity in dunfell since a few days ago that the notifications from weston to systemd do not work
<marex> so in case your weston instance magically gets killed after a few seconds, you might be hitting that
<hmw[m]> the weston service is up for 22 min atm ( did not format the running system)
<marex> revert 4efdcc10906945765aa28324ce1badc59cda2976 in oe-core if you run into this
<marex> ok
florian_kc has joined #yocto
florian__ has quit [Ping timeout: 268 seconds]
<RP> kanavin_: please don't trigger anything as I need to restart the controller when the current build finishes
florian_kc has quit [Ping timeout: 268 seconds]
otavio has quit [Remote host closed the connection]
<kanavin_> RP: sure, I was just about to :)
<kanavin_> RP: why am I not surprised, newly released xserver-xorg in qemu: [ 29.108] (EE) Caught signal 11 (Segmentation fault). Server aborting
<kanavin_> they also forgot to put important files into the tarball, the 'obsolete' autotools build doesn't mind, but the shiny new meson build does
<kanavin_> RP: I'll take it out, it fails this way on arm/mips but not x86
<kanavin_> simply not well tested it seems
florian_kc has joined #yocto
<agherzan> Hi. I'm a bit confused about how the psplash alternative symlink is supposed to work. We define the priority (not package override) as 100 https://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/meta/recipes-core/psplash/psplash_git.bb#n73. So a new psplash package (set with FEATURE_PACKAGES) to psplash-foo will end up having the alternative configuration for priority 100. Now, the main package, also pulls in as a RECOMMENDS psplash-default which
<agherzan> comes with the same 100 priority for the symlink. All this means that the build will always warn about a `Warn: update-alternatives: psplash has multiple providers with the same priority` because both psplash-foo and psplash-default come with alternative configuration for the psplash symlink set with priority 100. For me, the `RRECOMMENDS_psplash=" psplash-default"` sounds fishy.
mvlad has joined #yocto
<agherzan> My usecase is simple: I want to provide a new psplash package without messing with any of the defaults (so people can easily switch based on SPLASH/FEATURE_PACKAGES).
<agherzan> I know this was the entire idea of the way we set this initially but it sounds like that RRECOMMENDS just forces a default that doesn't sound right.
otavio has joined #yocto
<agherzan> Now, in image.bbclass we define the default to `splash`. And I think this is the actual root of the issue. Because by doing so, RRECOMMENDS is used to pull in the actual binary. Shouldn't it be the other way around? Remove the RRECOMMENDS and set the image.bbclass default to
<agherzan> FEATURE_PACKAGES_splash = "${SPLASH}"
<agherzan> SPLASH ?= "psplash-default"
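For illustration, a minimal local.conf sketch of how an image would then pick a non-default splash under that proposal; psplash-foo is agherzan's hypothetical package name:

    # local.conf (sketch)
    IMAGE_FEATURES += "splash"
    SPLASH = "psplash-foo"

With FEATURE_PACKAGES_splash = "${SPLASH}", the splash image feature pulls in psplash-foo instead of the default package.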
tgamblin has joined #yocto
<RP> kanavin_: I suspect it is new people doing the release and they just don't have the experience :/
<kanavin_> RP: basically one hobbyist-looking guy from Lithuania
<kanavin_> red hat has no interest anymore
<RP> kanavin_: it is kind of sad
<RP> kanavin_: FWIW the autobuilder should have faster build "startup" time now. I just want to get to the bottom of some scheduler bugs in buildbot (upstream are asking me to test a fix)
<kanavin_> RP: I was wondering what goes on in that 15 minute startups, thanks
<RP> kanavin_: NFS issues is the answer ;-)
<rburton> couldnt believe rsync was that much slower than tar
<kanavin_> RP: I also wanted to ask this: do we really need to run those heavy toolchain tests in every a-full? If people look at them separately, maybe they should be moved to nightlies?
<RP> kanavin_: maybe use a-quick? :)
<RP> rburton: its the number of files. File creation overhead on NFS is the bottleneck
<RP> I threw zstd in since it is there and helped
leon-anavi has quit [Remote host closed the connection]
leon-anavi has joined #yocto
florian__ has joined #yocto
florian_kc has quit [Ping timeout: 268 seconds]
tnovotny has quit [Quit: Leaving]
<rburton> RP: hm docs say that you cannot set SDKMACHINE in a distro configuration. I can't see why that would not work.
<RP> rburton: include conf/machine-sdk/${SDKMACHINE}.conf
<RP> include conf/distro/${DISTRO}.conf
<rburton> oh right
<rburton> why not swap that ;)
<RP> rburton: distros are allowed to override machine config
<RP> it is consistency
xmn has quit [Ping timeout: 268 seconds]
chep has joined #yocto
<hmw[m]> <marex> "it should be somewhere in..." <- marex: the find on desktop-shell.so did not return ( on dunfell HEAD ) going to reverte to
<hmw[m]> 4efdcc10906945765aa28324ce1badc59cda2976
<RP> kanavin_: ready for builds now
frosteyes has joined #yocto
florian_kc has joined #yocto
florian__ has quit [Read error: Connection reset by peer]
florian__ has joined #yocto
florian_kc has quit [Ping timeout: 268 seconds]
frosteyes has quit [Quit: WeeChat 2.8]
frosteyes has joined #yocto
<frosteyes> hi folks. A quick question. I have created a SDK, where I want to compile kernel modules out-of-tree. So have added kernel-devsrc
<frosteyes> Next I am trying to run make -C /usr/local/oecore-x86_64/sysroots/corei7-64-poky-linux/usr/src/kernel prepare scripts
<frosteyes> But the problem is that it fails the prepare with objtool. (no rule to make target .... objtool/fixdep.o)
<frosteyes> Is using dunfell branch..
<frosteyes> I guess it might be an issue with how I use the kernel-devsrc part.
<zeddii> frosteyes: What's your kernel version ? objtool is covered under devsrc, everything needed to rebuild it is packaged.
<zeddii> but I can't say that I've tested with the SDK.
<zeddii> if there are a set of configs and reproducing steps, I could always try it locally to see what's up.
<frosteyes> Thanks zeddii - the kernel is a custom 4.19.130
<zeddii> aha. yes, there could be parts missing for that kernel version. kernel-devsrc is curated to pick up just what we need, so I end up tweaking it through the versions.
jatedev has quit [Ping timeout: 256 seconds]
<frosteyes> zeddii: do you know if 5.4 is working better?
<zeddii> with the SDK .. I can't say for sure. But any of those versions, we have a nightly test that builds out of tree modules, so we know that it works for any given release, with the kernel versions we've tested as the references
<frosteyes> Okay..
<zeddii> it is tested on target for those out of tree tests, but I do know it works in the eSDK. I just don't use the SDK.
<frosteyes> Will look into the eSDK, and also the other kernel..
<frosteyes> 5.4 seems to be the normal "dunfell" kernel..
<zeddii> dunfell was v5.2 and 5.4 at the time of release, so I know they were tested heavily
<zeddii> right, and we deleted 5.2 after, so you'll only see 5.4 now in the dunfell branches
<frosteyes> Yes
<zeddii> I can try a dunfell SDK test, with all the stock bits.
<zeddii> I just have to look up the incantation :D
<frosteyes> That would be great :) I have a task for upgrading the kernel also. So will probably look into that before my "getting SDK to work with out-of-tree modules"
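For context, a minimal sketch of the out-of-tree module build being discussed, assuming the SDK environment script has been sourced and kernel-devsrc is in the target sysroot; hello.o is a placeholder module object:

    # Makefile (sketch); run "make -C $KERNEL_SRC prepare scripts" once first,
    # as in the command frosteyes posted above
    obj-m := hello.o
    KERNEL_SRC ?= $(SDKTARGETSYSROOT)/usr/src/kernel

    all:
    	$(MAKE) -C $(KERNEL_SRC) M=$(PWD) modules

    clean:
    	$(MAKE) -C $(KERNEL_SRC) M=$(PWD) clean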
troth has joined #yocto
<marex> RP: I have this question ... say I have a BSP layer for a board and I want to bbappend a recipe which is built for class-native, but only for a specific machine so that the BSP layer won't have side effects on other layers, is that possible ?
<marex> FOO:append:class-native:my-machine = " extraflag" seems to not work
<marex> obviously, because MACHINE isnt set in class-native and neither is MACHINEOVERRIDES
GillesM has joined #yocto
kayterina has joined #yocto
GillesM has quit [Client Quit]
<smurray> marex: re weston startup, I did a test build of latest dunfell with INIT_MANAGER="systemd" last night, and weston seems to start and stay running in core-image-weston
<marex> smurray: on oe-core/dunfell too ?
<marex> the dunfell part is important there
<smurray> marex: I tested poky dunfell with qemux86-64
<marex> smurray: did you have the notification stuff above in weston systemd service file ?
<smurray> marex: yes
<marex> all right, in that case, I need to double-check why the notification isn't delivered here
paulg has joined #yocto
<smurray> marex: are you using a BSP layer as well? Some dork with weston startup, so I could see it breaking. We don't use weston-start in AGL, but do have modules=systemd-notify.so in our base weston.ini, so we're still okay (I think)
akiCA has joined #yocto
codavi has joined #yocto
leon-anavi has quit [Remote host closed the connection]
<marex> smurray: that systemd-notify.so in modules should be unnecessary, see weston.log, it will complain it is loaded already
leon-anavi has joined #yocto
<smurray> marex: weston-start is adding it via the command-line, I figure that would be the cause of that?
<marex> yep
scott_ has joined #yocto
akiCA has quit [Ping timeout: 268 seconds]
<smurray> marex: not a problem in our AGL images, I think, but I'll keep an eye out for the message when we upgrade to 3.1.13
sakoman has joined #yocto
<hmw[m]> marex: 4efdcc10906945765aa28324ce1badc59cda2976 also is not producing desktop-shell.so
<hmw[m]> i also have added PACKAGECONFIG = "shell-desktop" to a new weston_%.bbappend
<marex> hmw[m]: PACKAGECONFIG:append:yourmachine = " shell-desktop"
<marex> hmw[m]: you can verify if it is picked up using bitbake -e weston | grep ^PACKAGECONFIG
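For reference, a minimal sketch of the bbappend marex describes, with yourmachine as a hypothetical machine override; on dunfell the older underscore form (PACKAGECONFIG_append_yourmachine) applies instead of the honister ":" syntax:

    # weston_%.bbappend (sketch)
    PACKAGECONFIG:append:yourmachine = " shell-desktop"

    # verify it is picked up:
    #   bitbake -e weston | grep ^PACKAGECONFIG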
camus has quit [Remote host closed the connection]
camus has joined #yocto
<hmw[m]> it was not in the package config. is the : new ?
<marex> hmw[m]: it is new syntax which replaces the old _ one
<marex> since honister
otavio has quit [Ping timeout: 268 seconds]
otavio has joined #yocto
<sveinse> How can I set up a site-wide hash equivalence server on a build server? Download any fairly new poky, set up an arbitrary local.conf pointing to the site-wide sstate and run bitbake-hashserv on a global port?
<RP> kanavin_: ah. My fault :(
<sveinse> Is there a correspondence between the MACHINE running the bitbake-hashserv and the systems that are going to use the hash equivalence server?
<sveinse> This is going to be used for ordinary image builds. I don't have any autobuilder setup
<RP> kanavin_: workaround pushed until I can figure out something better
<RP> marex: natives are only built once for all machines so what you're asking for isn't possible
<RP> sveinse: you don't need metadata, just a copy of bitbake
<RP> kanavin_: I restarted it for you
<sakoman> \
<sveinse> Won't the bitbake-hashserv need access to the sstate cache? Or is that completely separate? E.g. handled by bitbake itself?
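A rough sketch of the standalone server RP describes, assuming only a bitbake checkout on the server; the host, port and paths are hypothetical and the exact flags may differ per release (check bitbake-hashserv --help):

    # on the build server (sketch)
    bitbake/bin/bitbake-hashserv --bind 0.0.0.0:8686 --database /srv/yocto/hashserv.db

The server only stores taskhash/outhash mappings; the sstate objects themselves stay in the shared SSTATE_DIR, and clients point BB_HASHSERVE at that host:port (see the site.conf sketch further down).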
<rburton> is anyone else looking at upgrading python3-cryptography? (cc moto_timo[m])
<moto-timo> rburton: pyo3 is not behaving well in cross-compile. Frustrating mess.
<rburton> grr
<rburton> we have a dependency on py-crypto for a firmware build, which breaks with openssl3 as py-cry doesn't support it
<marex> RP: I guess I could use DISTROOVERRIDES for that ?
<marex> RP: or is that awful ?
<marex> RP: I am trying to avoid side-effects of the layer
<kanavin_> RP: thanks :)
<hmw[m]> marex: tnx it is working now :D
<marex> oh :)
<marex> hmw[m]: so it was the missing append ?
<smurray> marex: depending on what the -native bit is, don't you risk rebuilding piles of stuff?
<hmw[m]> marex: yes
<RP> marex: layers really shouldn't hack natives to be machine specific, ever
<sveinse> I've noticed that it's a bit variable whether python3-* recipes have BBCLASSEXTEND = "native nativesdk" or not. Some don't have any, some only native, a few have all
<rburton> patches welcome if you've tested the build
<RP> sveinse: in most cases we've likely just never needed those variants and they do have a cost
<sveinse> I see. I have a few python3-*.bbappends in my layers because of it
kiran has joined #yocto
<rburton> sveinse: just post the patches, they're trivial if you've tested them
kayterina has quit [Remote host closed the connection]
camus has quit [Ping timeout: 260 seconds]
camus has joined #yocto
<rburton> moto-timo: do you have any WIP? like a setuptools-rust recipe already?
<rburton> awesome
<moto-timo> rburton: older version, simpler (no real rust) https://git.openembedded.org/meta-openembedded-contrib/log/?h=timo/rust_python3-cryptography
<marex> smurray: luckily no, I am only doing this to fill PARALLEL_MAKE back into one crappy tool from crappy poorly maintained layer
<marex> smurray: else it builds for effing ever
<smurray> marex: maybe a bbappends that's dynamically applied would be enough?
zpfvo has quit [Ping timeout: 246 seconds]
zpfvo has joined #yocto
<marex> smurray: doesnt that blow up on yocto-check-layer or oelint-adv ?
<smurray> marex: it likely would for y-c-l if you include the layer, yes
<smurray> marex: but it's simpler than perhaps something with BBMASK set via anon python, which might be another route
<smurray> marex: using BBFILES_DYNAMIC was what I was thinking of
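A minimal sketch of the BBFILES_DYNAMIC approach, assuming a hypothetical collection name otherlayer for the layer whose recipes are being appended; the bbappends are only parsed when that collection is actually enabled:

    # conf/layer.conf of the BSP layer (sketch)
    BBFILES_DYNAMIC += " \
        otherlayer:${LAYERDIR}/dynamic-layers/otherlayer/recipes-*/*/*.bbappend \
    "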
zpfvo has quit [Ping timeout: 260 seconds]
zpfvo has joined #yocto
zpfvo has quit [Ping timeout: 260 seconds]
zpfvo has joined #yocto
otavio has quit [Ping timeout: 260 seconds]
otavio has joined #yocto
<marex> smurray: I am not entirely sure if that doesn't trigger y-c-l either
<marex> smurray: thanks anyway
<smurray> marex: the BBFILES_DYNAMIC? I believe it won't until you include the layer that triggers it (unless you put some other mechanism on top)
Tokamak has joined #yocto
Tokamak has quit [Client Quit]
<smurray> marex: the only other option that comes to mind is tying to a distro feature or other variable like meta-virt does to gate its modifications and pass y-c-l
<smurray> marex: that's used in AGL in places to get y-c-l compliance
zpfvo has quit [Ping timeout: 246 seconds]
zpfvo has joined #yocto
zpfvo has quit [Ping timeout: 246 seconds]
zpfvo has joined #yocto
bps2 has quit [Ping timeout: 268 seconds]
otavio has quit [Ping timeout: 260 seconds]
otavio has joined #yocto
<sveinse> rburton: yocto use mail patches? No scheme for PR?
<rburton> correct
<sveinse> How is copyright managed in yocto? Waived to commit?
<rburton> basically, kernel-style DCO
otavio has quit [Ping timeout: 260 seconds]
otavio has joined #yocto
curious457 has quit [Quit: Client closed]
Guest81 has joined #yocto
<Guest81> Can I force a 32 bit user space by setting MLPREFIX="lib32-"?
<sveinse> Can BB_HASHSERVE and BB_SIGNATURE_HANDLER be specified in site.conf ?
<rburton> anything can be in site.conf
<rburton> its parsed just before auto.conf, which is just before local.conf
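A minimal site.conf sketch for what sveinse is asking, with a hypothetical server address:

    # site.conf (sketch)
    BB_SIGNATURE_HANDLER = "OEEquivHash"
    BB_HASHSERVE = "hashserv.example.com:8686"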
<tgamblin> khem: can you put "[oe] [PATCH][meta-python] python3-imgtool: add recipe" on master-next?
leon-anavi has quit [Quit: Leaving]
<sveinse> Will a hash equivalence server help on build times beyond what the sstate cache provides?
<rburton> yes
<rburton> if say libfoo rebuilds with different inputs, but the output is identical, the hashequi will say its the same result
<rburton> so the output hash gets switched back, and other things might not rebuild
<rburton> build an image, make a trivial change to a core library which has no impact (like add a comment), and watch it decide that large chunks don't need to be rebuilt
<sveinse> Do you need to prime it with an existing sstate setup? I just fired it up, but I built the full honister two days ago
bps2 has joined #yocto
<rburton> bitbake will look in the existing sstate too, but the hash equiv helps future builds
<sveinse> I.e. does wiping the sstate cache before using the hashserv add anything?
curious457 has joined #yocto
zpfvo has quit [Ping timeout: 260 seconds]
curious457 has quit [Quit: Client closed]
zpfvo has joined #yocto
zpfvo has quit [Client Quit]
<halstead> dwagenk: I've resolved the issue with the layerindex dropdowns etc. Let me know if you still notice problems after a reload.
<RP> sveinse: the rebuild that triggers will prime the hash equivalence server
Saur has joined #yocto
otavio has quit [Ping timeout: 268 seconds]
otavio has joined #yocto
bps2 has quit [Ping timeout: 268 seconds]
camus1 has joined #yocto
<moto-timo> vmeson: any advice/examples for how to patch a crate with our rust tooling? We need to patch pyo3 to search in ${STAGING_LIBDIR}/python-sysconfigdata https://github.com/PyO3/pyo3/blob/main/pyo3-build-config/src/impl_.rs#L836
camus has quit [Ping timeout: 260 seconds]
camus1 is now known as camus
dev1990 has joined #yocto
<sveinse> RP: yeah, thanks. Just wiped the sstate cache and started a complete rebuild of all build lanes. Will take 12 hours to complete :D Hopefully we get a good hashserv out of this
vd has joined #yocto
<vd> Does PCIe 4.0 on an NVMe make a difference compared to PCIe 3.0 when compiling?
Artzaik[m] has joined #yocto
xantoz has quit [Ping timeout: 260 seconds]
xantoz has joined #yocto
<rburton> no idea, but i doubt you'll be hitting the throughput limits
<JaMa> vd: depends on where your bottleneck is, using it on a quad core laptop probably won't make much difference, on a 128 core 3995wx the effect would be much more visible
* JaMa has 6000s aorus gen4 2tb drive with 3970 and only 128g ram is bigger issue than io now
<vd> interesting! So with a modest 5950X CPU / 64GB RAM, PCIe 3.0 or 4.0 for the NVMe won't make any significant difference, so better go with PCIe 3.0 I assume
<Saur> rburton: Did you see my question about the licenses for libx11-compose-data on the OE-Core list the other day?
<JaMa> if you already have it running with PCIE 3 drive, then I would watch avq in atop during the build
leon-anavi has joined #yocto
<vd> JaMa don't have it yet, I'm about to order, and NVMe PCIe 3.0 vs 4.0 was my last step
<JaMa> frosteyes submitted results with 5950x 128G ram and PCIE 4.0 Corsair SSD MP600 1TB, maybe he can share his experiences on his system
<sveinse> One of the few things not affected by this crazy electronics crisis?
* JaMa bought his before crisis started and before Chia was announced :)
vermaete has joined #yocto
<vd> JaMa: thoughts on QLC vs TLC nand for the NVMe?
zyga-mbp has joined #yocto
camus has quit [Ping timeout: 260 seconds]
camus has joined #yocto
<sveinse> How well does a hashserv-enabled build interact with older non-hashserv builds? Both will fill the sstate cache, but only the enabled builds will fill the hashserv db. Could that affect build performance? Or are older pre-hashserv systems so different from newer ones that they don't really share anything in the sstate cache anyway?
<JPEW> sveinse: There is probably not a lot that pre-hashserve builds would share anyway, so your assessment sounds correct (to me anyway)
cryptollision[m] has left #yocto [#yocto]
florian__ has quit [Ping timeout: 268 seconds]
florian__ has joined #yocto
Xagen has joined #yocto
<sveinse> Perhaps the Yocto community could start their own taskhash blockchain XD :P
florian__ has quit [Ping timeout: 268 seconds]
<kanavin_> autotools gone mad: kea:compile───run.do_compile.───make───make───bash───bash───make───bash───bash───make───bash───bash───make───bash───bash───make───x86_64-poky-lin───x86_64-poky-lin─┬─as
<sveinse> haha
<sveinse> automake in each subdir I would guess
<kanavin_> I don't mind what crazy shit they pull in there, but running just one compiler at a time, even though top level make runs with -j 16 is not ok.
<sveinse> yeah, I think bash doesn't pass on the vars for make to utilize the job-server of the top make
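A minimal sketch of the effect sveinse describes, with hypothetical targets; only recipe lines that invoke $(MAKE) (or are prefixed with "+") are handed the top-level jobserver, so a wrapper script calling a bare make is not:

    # Makefile (sketch), run with make -j 16
    good:
    	$(MAKE) -C subdir    # sub-make joins the parent's jobserver

    bad:
    	./build.sh           # the script's plain "make" cannot use the parent's
    	                     # jobserver fds and falls back to running serially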
<vd> JaMa: nevermind, I get it for TLC/QLC and NVMe/SSD. Endurance is the key for normal use :)
<vmeson> moto-timo: do you have a branch of the meta-python repo that I could take a look at?
zyga-mbp has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<moto-timo> vmeson: https://git.openembedded.org/meta-openembedded-contrib/log/?h=timo/python3-cryptography_35.0.0
<moto-timo> vmeson: it fails in do_compile because of pyo3
<vmeson> moto-timo: thanks... time to reboot to pick up some updates, I'll take a look hopefully today.
<JaMa> I remember the opposite case in qt* a long time ago when -j 64 from the top level makefile was also used in 10+ make calls in subdirectories
<moto-timo> vmeson: the error is from https://github.com/PyO3/pyo3/blob/main/pyo3-build-config/src/impl_.rs#L884 because it doesn’t know about ${STAGING_LIBDIR}/python-sysconfigdata
zyga-mbp has joined #yocto
hpsy has quit [Quit: Leaving.]
<vd> what should you prioritize on an NVMe, SSTATE_DIR or TMPDIR?
otavio has quit [Ping timeout: 260 seconds]
otavio has joined #yocto
camus has quit [Ping timeout: 260 seconds]
camus has joined #yocto
vermaete has quit [Quit: Client closed]
mcfrisk_ is now known as mcfrisk
<JPEW> vd: TMPDIR
otavio has quit [Ping timeout: 268 seconds]
<JPEW> vd: SSTATE_DIR is (mostly) pretty linear (write an archive, read an archive). TMPDIR actually does builds so I would imagine seeking is a lot more important.... at least that's my hunch. I run NVMe TMPDIR and a 1TB spinning disk sstate
otavio has joined #yocto
<vd> JPEW with the system on the NVMe as well or another disk?
<JPEW> vd: OS is on the nvme, but I don't know how much that would matter
otavio has quit [Ping timeout: 268 seconds]
<JaMa> I have OS on another nvme, because I expect the OE one to die much sooner and at least use separate partition where you're more comfortable with long sync interval (and nobarrier)
<JaMa> but with 64G ram on 32 cores I don't expect that there will be a lot of memory left for disk cache (like on my 64cores with 128G)
<JaMa> also depends on what are your typical images, building qtwebengine/chromium/nodejs easily takes 2GB per c++ process
<vd> JaMa: building qtwebengine images on 16-core / 64GB RAM, I was worried about the NVMe endurance even though it sounds nicer for TMPDIR
<JaMa> also I use it as regular desktop during the builds, so chrome tabs get killed by OOMK first (and don't help much) then c++ gets killed and I need to restart the build
<JaMa> are you going to disable smp?
nad has joined #yocto
florian__ has joined #yocto
hpsy has joined #yocto
<JaMa> doesn't 5950 have 16 cores, 32 threads? I was assuming you'll use -j 32 with it
<JaMa> my gen4 nvme (used mostly for OE builds) is running for 11316 hours and still has 95% life left
<JaMa> but it's true that last year I'm running significantly less builds here locally
otavio has joined #yocto
nad has quit [Quit: Client closed]
nad has joined #yocto
<JaMa> and I wrote around 240TB to it in 2 years, while the rootfs disk shows only 13TB in 3-4 years
otavio has quit [Ping timeout: 268 seconds]
<JaMa> I should check what smart says about the server ssds, because here I don't usually use rm_work, so the builds stay valid relatively long, while on the jenkins build server I'm using rm_work + delete TMPDIR after the jenkins job is finished, so with every job it needs to unpack a lot of sstate and do a lot of writes to TMPDIR even when it gets removed shortly after (but maybe not quickly enough before it gets partially
<JaMa> written to disk)
otavio has joined #yocto
<JaMa> that's why on servers with more ram than cores I prefer to use tmpfs to keep ssds healthy for longer
otavio has quit [Ping timeout: 260 seconds]
otavio has joined #yocto
florian_kc has joined #yocto
florian__ has quit [Ping timeout: 268 seconds]
otavio has quit [Ping timeout: 260 seconds]
otavio has joined #yocto
otavio has quit [Ping timeout: 260 seconds]
otavio has joined #yocto
otavio has quit [Ping timeout: 268 seconds]
nad has quit [Ping timeout: 256 seconds]
otavio has joined #yocto
camus1 has joined #yocto
yocton has joined #yocto
camus has quit [Read error: Connection reset by peer]
camus1 is now known as camus
otavio has quit [Ping timeout: 268 seconds]
GillesM has joined #yocto
kiran has quit [Ping timeout: 268 seconds]
otavio has joined #yocto
bps2 has joined #yocto
xmn has joined #yocto
otavio has quit [Ping timeout: 268 seconds]
otavio has joined #yocto
bps2 has quit [Ping timeout: 260 seconds]
hpsy has quit [Ping timeout: 268 seconds]
zyga-mbp has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
otavio has quit [Ping timeout: 268 seconds]
bps2 has joined #yocto
dev1990 has quit [Quit: Konversation terminated!]
otavio has joined #yocto
<moto-timo> vmeson: rburton: pushed a pyo3 recipe created with cargo-bitbake (plus an attempt at a patch, but it doesn't work yet)
otavio has quit [Ping timeout: 260 seconds]
otavio has joined #yocto
florian_kc has quit [Ping timeout: 268 seconds]
bps2 has quit [Ping timeout: 260 seconds]
otavio has quit [Ping timeout: 268 seconds]
otavio has joined #yocto
bps2 has joined #yocto
florian_kc has joined #yocto
otavio has quit [Ping timeout: 268 seconds]
otavio has joined #yocto
otavio has quit [Ping timeout: 260 seconds]
otavio has joined #yocto
GillesM has quit [Quit: Leaving]
leon-anavi has quit [Quit: Leaving]
otavio has quit [Ping timeout: 268 seconds]
otavio has joined #yocto
otavio has quit [Ping timeout: 268 seconds]
otavio has joined #yocto
otavio has quit [Ping timeout: 260 seconds]
otavio has joined #yocto
otavio has quit [Ping timeout: 268 seconds]
otavio has joined #yocto
jatedev has joined #yocto
otavio has quit [Ping timeout: 268 seconds]
otavio has joined #yocto
otavio has quit [Ping timeout: 268 seconds]
<sveinse> What happens to the hashserv if the sstate cache is cleaned, e.g. for old age?
otavio has joined #yocto
<sveinse> What I mean is that the entry remains in the hashserv, but the sstate object goes away
<sveinse> Will it simply regenerate the artifacts, upload them to the sstate cache and update the hashserv, overwriting the old entry?
goliath has quit [Quit: SIGSEGV]