ndec changed the topic of #yocto to: "Welcome to the Yocto Project | Learn more: https://www.yoctoproject.org | Join us or Speak at Yocto Project Summit (2022.05) May 17 - 19, more: https://yoctoproject.org/summit | Join the community: https://www.yoctoproject.org/community | IRC logs available at https://www.yoctoproject.org/irc/ | Having difficulty on the list or with someone on the list, contact YP community mgr ndec"
dti has joined #yocto
dtometzki has quit [Ping timeout: 268 seconds]
brazuca has quit [Ping timeout: 252 seconds]
RobertBerger has joined #yocto
rber|res has quit [Ping timeout: 252 seconds]
sakoman has quit [Quit: Leaving.]
nemik has quit [Ping timeout: 256 seconds]
nemik has joined #yocto
sakoman has joined #yocto
Estrella has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
amitk has joined #yocto
kevinrowland has quit [Quit: Client closed]
Tokamak has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
goliath has joined #yocto
sakoman has quit [Quit: Leaving.]
olani has quit [Ping timeout: 252 seconds]
nemik has quit [Ping timeout: 256 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 256 seconds]
nemik has joined #yocto
mvlad has joined #yocto
alessioigor has joined #yocto
alessioigor has quit [Quit: alessioigor]
adams[1] has quit [Quit: Client closed]
pbergin has joined #yocto
<mcfrisk> With the BB_PRESSURE_MAX_[CPU,MEMORY,IO] patches now proposed for dunfell and kirkstone too, what values are good as limits? Could some values be set by default too? I tried setting all of them to 10, which AFAIK means that if more than 10% of CPU, memory or IO is stalling during the measurement window, then no new tasks are fired. On my test machine with 40 threads and 128 GB RAM, I frequently ran out of RAM with the default parallel settings.
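The knobs under discussion can be set in local.conf; the values below are purely illustrative (the thread is explicitly still looking for good defaults, and RP later mentions trying 10000 on the autobuilder):

```
# local.conf -- illustrative values only; no recommended defaults exist yet
BB_PRESSURE_MAX_CPU    = "10000"
BB_PRESSURE_MAX_IO     = "10000"
BB_PRESSURE_MAX_MEMORY = "10000"
```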
frieder has joined #yocto
GNUmoon2 has quit [Ping timeout: 268 seconds]
GNUmoon2 has joined #yocto
goliath has quit [Quit: SIGSEGV]
<RP> mcfrisk: we aren't sure about values. To be able to even experiment fairly with this on the autobuilder we need it functional over all our builds
zpfvo has joined #yocto
<RP> mcfrisk: we may be able to recommend something in due course but we're definitely not there yet. The default proposed for the autobuilder to start with is 10000
<mcfrisk> RP: do the autobuilders run more than one build in parallel on a machine?
<RP> mcfrisk: 3 at once
<mcfrisk> ouch, with our memory use patterns that would be horror, random oom kills everywhere..
<RP> mcfrisk: we do limit zstd/xz memory/parallelism a bit
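The compressor limits RP mentions map onto variables oe-core already exposes; a hedged local.conf sketch (values illustrative):

```
# local.conf -- cap xz/zstd memory use and parallelism (illustrative values)
XZ_MEMLIMIT  = "25%"
XZ_THREADS   = "4"
ZSTD_THREADS = "4"
```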
Schlumpf has joined #yocto
<mcfrisk> feels like CPU overloading is rarely a problem, just keep the threads queued, but everyone is in trouble if RAM runs out. Even IO saturation isn't a big deal as long as there is some RAM for file system buffers, e.g. sstate cache.
<RP> mcfrisk: we run a lot of tests under qemu and regulating that load to avoid timeouts is what we're aiming for
<mcfrisk> do you have problems with some specific resource, CPU, memory, IO or in general you see gaps in performance or throughput, or annoying random failures due to oom or hangs/deadlocks?
<mcfrisk> when I view CPU, memory, IO and network usage of bitbake builds in pcp charts, I can see that CPU utilization is quite low, and lots of IO is done, and then those few spikes of memory usage which trigger the OOM killer.
<mcfrisk> these on a single machine with a single bitbake build
ptsneves has joined #yocto
<RP> mcfrisk: annoying random timeout problems from the VMs, probably from load spikes on the systems
<Schlumpf> Good morning,
<Schlumpf> is it possible to configure a bridge with /etc/network/interfaces? I tried the examples from the Debian wiki (https://wiki.debian.org/BridgeNetworkConnections#Configuring_bridging_in_.2Fetc.2Fnetwork.2Finterfaces) without success. Creating bridges with brctl works fine.
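For reference, a minimal ifupdown bridge stanza along the lines of the Debian wiki page Schlumpf cites (interface names illustrative; the image must also ship bridge support, e.g. the bridge-utils package and a kernel with CONFIG_BRIDGE):

```
# /etc/network/interfaces -- minimal bridge example, names illustrative
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```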
zpfvo has quit [Quit: Leaving.]
leon-anavi has joined #yocto
zpfvo has joined #yocto
goliath has joined #yocto
<wkawka> Hi, how can I affect pkgconfig search path?
<mcfrisk> RP: ouch, also VMs. We had very bad experiences with cloud things, like vmware. The IO stack simply stalled causing random failures in large set of machines. IMO it's a bad idea to run bitbake builds on virtual machines.
ardo has quit [Read error: Connection reset by peer]
ardo has joined #yocto
<landgraf> mcfrisk: VMs are used for running tests and some configuration stuff on target machines
<ernstp> It was working very well for me with Azure VMs, Ubuntu 18.04
zpfvo has quit [Ping timeout: 268 seconds]
zpfvo has joined #yocto
<qschulz> JPEW: Yocto Chant #1 IIRC. Recipe data is local, configuration data is global (not sure about the wording of the second part of the sentence but that's the intent anyways :) )
GNUmoon2 has quit [Remote host closed the connection]
GNUmoon2 has joined #yocto
florian has joined #yocto
xmn has quit [Remote host closed the connection]
xmn has joined #yocto
GNUmoon2 has quit [Remote host closed the connection]
GNUmoon2 has joined #yocto
Schlumpf has quit [Quit: Client closed]
mait[m] has quit [Quit: You have been kicked for being idle]
Schlumpf has joined #yocto
Bardon_ has joined #yocto
Bardon has quit [Ping timeout: 268 seconds]
davidinux has quit [Ping timeout: 252 seconds]
davidinux has joined #yocto
<rburton> wkawka: you might want to explain what you actually want to do
<rburton> DvorkinDmitry: because that's a terrible idea in the use case of "generate an image once, deploy to 1000 machines". it's trivial to pre-generate keys, there's a recipe in oe-core for the selftests which does that
<wkawka> I made a recipe for https://github.com/system76/ec using cargo bitbake, I repaired some things but now pkgconf cannot find `hidapi-hidraw`. However, there is a `hidapi-hidraw.pc.in` file and I set the `PKG_CONFIG_PATH` variable like this:
<wkawka> `PKG_CONFIG_PATH .= ":/build/tmp/work/.../system76-ectool/0.3.8.AUTOINC+a8213311b1-r0/cargo_home/bitbake/hidapi-1.3.4/etc/hidapi/pc/pkgconfig"` (I know it's horrible but I want to make it work)
<rburton> a .pc.in file needs to be turned into a .pc file at some point, and installed into the sysroot
<wkawka> Ok, so I need to add a do_compile_prepend and add steps there to install them
<rburton> no, ec should be doing this itself
<rburton> where is hidapi-hidraw coming from?
<wkawka> the `hidapi-hidraw.pc.in` file is the only `hidapi-hidraw` file
destmaster84 has joined #yocto
<wkawka> And the pc file is not installed by default
<destmaster84> I'm unable to exclude mariadb package from my build with IMAGE_INSTALL:remove = " mariadb" on kirkstone ... any suggestion?
<qschulz> destmaster84: remove the recipe from the layer and check what does not build anymore
<qschulz> destmaster84: removing mariadb from IMAGE_INSTALL just removes it from the list of packages in IMAGE_INSTALL
<qschulz> but not all packages are in IMAGE_INSTALL (actually most aren't)
<qschulz> they are pulled by other packages as dependencies
<qschulz> you might have to play with PACKAGECONFIG or remove other packages from the list
<qschulz> it can be tedious to remove a package
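One way to see what actually pulls a package in, rather than guessing, is bitbake's dependency graph output (image name illustrative):

```
# Dump the dependency graph, then see which recipes depend on mariadb
bitbake -g core-image-minimal
grep -i mariadb task-depends.dot
```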
<RP> mcfrisk: this isn't the builds, it is the runtime tests
<rburton> wkawka: so you're also building hidapi?
wkawka has quit [Quit: Client closed]
wkawka has joined #yocto
<wkawka> No, hidapi is a dependency in system76 ec
<wkawka> It is a crate, to be exact
<wkawka> And while building crate hidapi it has an error https://pastebin.com/SL56z3di
Schlumpf has quit [Ping timeout: 252 seconds]
Schlumpf has joined #yocto
prabhakarlad has quit [Quit: Client closed]
zpfvo has quit [Ping timeout: 268 seconds]
alejandrohs has quit [Ping timeout: 252 seconds]
<wkawka> Ok, now it works, I had to do a recipe for that hidapi and add it to DEPENDS for system76-ec
alejandrohs has joined #yocto
zpfvo has joined #yocto
wkawka has quit [Quit: Client closed]
<RP> rburton: was there a trick to build a native tool in meson?
<RP> ah, no, meson won't help. Its cmake for llvm itself :(
destmaster84 has quit [Quit: Client closed]
zpfvo has quit [Ping timeout: 268 seconds]
zpfvo has joined #yocto
<rburton> RP: meson does natively nicely. cmake... actively makes it very hard
<RP> rburton: right. I was hoping to generate a cross llvm-config, which I can do, but it then makes the target data host-arch specific :(
vladest has quit [Ping timeout: 256 seconds]
Schlumpf has quit [Quit: Client closed]
Estrella has joined #yocto
brazuca has joined #yocto
Juanosorio94 has joined #yocto
zpfvo has quit [Ping timeout: 252 seconds]
<ptsneves> is qemux86-64's libdir /usr/lib? Should it not be /usr/lib64?
brazuca has quit [Ping timeout: 252 seconds]
<Juanosorio94> I have two recipes, A and B, recipe B requires header files from recipe A. I tried to place these header files somewhere during install(), but I just learned that variables like B and D change for each recipe... I don't necessarily need these header files ending up in my rootfs... how can I access them from recipe B?
zpfvo has joined #yocto
<qschulz> ptsneves: IIRC, /usr/lib64 only exists if multilib is enabled? but that might be bad recollection
<qschulz> Juanosorio94: install the header files into ${D}${includedir} in recipe A and have recipe B have DEPENDS += "A"
<Juanosorio94> I need to pass them to cmake using a flag
<qschulz> Juanosorio94: uh?
<Juanosorio94> like so EXTRA_OECMAKE="-DPKG_A_INCLUDE=/path/to/headers"
<ptsneves> qschulz: :O wow i worked all this time in 64 bit platforms with libdir = /usr/lib64. If what you say is right...i have been living in a bubble.
<qschulz> Juanosorio94: why? we already have the magic for cmake to include header files from other recipes
<qschulz> so just rely on the cmake bbclass to do the right thing
<ptsneves> Juanosorio94: not even required, as headers should be in the sysroot and available with no modification
<Juanosorio94> my cmake expects me to give it a -DPKG_A_INCLUDE with the path to the headers
<Juanosorio94> it's just the way the repo is set up; just tried that out and it didn't work :(
<qschulz> ptsneves: don't have time to look into this, but I suspect this is related to BASE_LIB (set by machine tunes) which is then read in meta/conf/multilib.conf and put into baselib
brazuca has joined #yocto
<qschulz> Juanosorio94: make it point to ${STAGING_INCDIR}
pbergin has quit [Quit: Leaving]
<qschulz> but that is quite messed up, why not use -I (or whatever the equivalent for cmake is) instead of a specific variable?
<Juanosorio94> thanks, I'll try that!
<qschulz> ptsneves: BASELIB actually sorry
<qschulz> no, BASE_LIB, can't read :p
<ptsneves> qschulz: no worries. Interestingly bitbake-getvar does not show any setting coming from the tune, and it should, as I read tune-core2.inc
<Juanosorio94> qschulz: which one though?  during install in A or include in B or both? I assume both right?
<qschulz> Juanosorio94: you install the header into ${D}${includedir} in recipe A, in recipe B you have DEPENDS += "A"
<qschulz> Juanosorio94: in recipe B, you also have EXTRA_OECMAKE += "-DPKG_A_INCLUDE=${STAGING_INCDIR}"
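qschulz's suggestion as a recipe sketch (all names illustrative; PKG_A_INCLUDE is the project's own CMake variable, not anything Yocto defines):

```
# recipe-a.bb (illustrative) -- stage the headers into the sysroot
do_install() {
    install -d ${D}${includedir}/pkg-a
    install -m 0644 ${S}/include/*.h ${D}${includedir}/pkg-a/
}

# recipe-b.bb (illustrative) -- depend on A, point the project's own
# CMake variable at the staged headers
DEPENDS += "recipe-a"
EXTRA_OECMAKE += "-DPKG_A_INCLUDE=${STAGING_INCDIR}/pkg-a"
```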
prabhakarlad has joined #yocto
<ptsneves> qschulz: BASE_LIB is set to None https://pastebin.com/p0RyxTP8
<ptsneves> Will need to look into why
<qschulz> ptsneves: documentation states it only applies to multilib confiuguration
<ptsneves> ah
<ptsneves> i see. Thanks! I will read it more carefully
<qschulz> ptsneves: does not mean the documentation is correct though or is explicit enough :)
<ptsneves> qschulz: actually it seems it is. I was just too confident in my own knowledge.
Schlumpf has joined #yocto
sakoman has joined #yocto
mihai has joined #yocto
kscherer has joined #yocto
<LetoThe2nd> JPEW: the yocto chant #1: "recipe data is local, configuration data is global"
<qschulz> LetoThe2nd: aaaah still got it :)
<LetoThe2nd> qschulz: of course. sorry been travelling all day.
<qschulz> LetoThe2nd: I meant *I* still got it (I answered JPEW this morning but wasn't sure it was this wording :) )
<LetoThe2nd> qschulz: ah okay
goliath has quit [Quit: SIGSEGV]
wmills has joined #yocto
nemik has quit [Ping timeout: 256 seconds]
florian has quit [Remote host closed the connection]
nemik has joined #yocto
Guest63 has joined #yocto
<Guest63> Hi
<Guest63> Is there any way to run a command using a '${@}' expression?
nemik has quit [Ping timeout: 252 seconds]
<Guest63> ${@os.system(command)} doesn't seem to work
nemik has joined #yocto
<qschulz> Guest63: just create a python function you call from within the inline expression? Though I'm not sure what's the issue with this expression above
<Guest63> Thanks qschulz
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
hcg has joined #yocto
sotaoverride has quit [Ping timeout: 248 seconds]
sotaoverride has joined #yocto
<rburton> Guest63: if you're using ${@...} then you're in shell, so just run the command directly?
Tokamak has joined #yocto
<qschulz> rburton: in shell? that should be python?
<rburton> well i guess if you're doing a variable assignment you're not strictly in shell land
goliath has joined #yocto
<rburton> Guest63 should explain what they actually want to do
<rburton> RP: https://bugzilla.yoctoproject.org/show_bug.cgi?id=14902 is the stub perl bug. trying to reproduce now
<kergoth> Hmm.. random idea, I wonder about creating an asdf (https://asdf-vm.com/) plugin for the buildtools tarball.
Schlumpf has quit [Quit: Client closed]
sotaoverride has quit [Ping timeout: 256 seconds]
<qschulz> kergoth: we need two different versions of sphinx-build for the docs. I would hate to have to handle different buildtools tarballs just for that. So +1 from the people working with docs on the autobuilder I guess? :)
<Guest63> rburton I am trying to calculate the hash of rootfs
<Guest63> well i can do that using a python function
<Guest63> but i wanted to do it in a single line like `rootfs_size="${@os.system("sha256sum ${ROOTFS_IMAGE_PATH}")}"`
<rburton> Guest63: if you want a sha256sum file on disk then bitbake can do that for you
<rburton> eg IMAGE_FSTYPES = "ext4 ext4.sha256sum"
<RP> qschulz: I was going to propose using different buildtools for that as I don't see many other options!
<Guest63> thanks, I will try that
<Guest63> but I am curious If i can use "${@os.system(command)}" to run arbitrary commands
<Guest63> This would be very handy
<kergoth> os.system() doesn't return anything.
<kergoth> it sends stdout and stderr to the existing stdout and stderr, doesn't capture them, so not of any use to you
<Guest63> kergoth, i think that explains why my build gets stuck when it executes ```${@os.system(command)}```
<Guest63> Thanks
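A plain-Python sketch of the difference kergoth describes (standalone, outside bitbake; in a recipe the capturing call would sit inside a python function or a `${@...}` expression):

```python
import os
import subprocess
import sys

# os.system() only returns the exit status; stdout goes straight to the
# terminal, so nothing usable comes back for a variable assignment.
status = os.system('%s -c "print(42)"' % sys.executable)

# subprocess.check_output() captures stdout as a string, which is what an
# inline expression needs in order to expand to a value.
out = subprocess.check_output([sys.executable, "-c", "print(42)"], text=True).strip()
```
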
<qschulz> RP: too many issues with the docs right now.. aaaaaaand I'm glad you raised this because this completely breaks my attempt at unifying release manuals...
<qschulz> I hate this
vladest has joined #yocto
<qschulz> at least I won't feel bad for not finding the time to push a PoC to the ML
zpfvo has quit [Ping timeout: 268 seconds]
zpfvo has joined #yocto
sotaoverride has joined #yocto
zpfvo has quit [Ping timeout: 252 seconds]
zpfvo has joined #yocto
zpfvo has quit [Ping timeout: 268 seconds]
zpfvo has joined #yocto
amitk has quit [Ping timeout: 256 seconds]
nemik has quit [Ping timeout: 244 seconds]
nemik has joined #yocto
<RP> qschulz: well, I'm open to other approaches
<RP> qschulz: Its not like I'm finding time to resolve it either :(
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
zpfvo has quit [Ping timeout: 268 seconds]
zpfvo has joined #yocto
zpfvo has quit [Client Quit]
<rburton> accidentally checked out the 'green' branch and wondered why nothing built
seninha has quit [Remote host closed the connection]
Juanosorio94 has quit [Quit: Client closed]
olani has joined #yocto
Guest9046 has joined #yocto
<Guest9046> Hi
<Guest9046> I am porting xen on raspberry pi 4
<Guest9046> Could port & see Dom0
<Guest9046> Trying to create a guest domain, copied guest.cfg, Image & xen-guest-image-minimal .ext3 to /home/root
<Guest9046> while mounting ext3 filesystem facing issue
<Guest9046> Logs:
<Guest9046> root@raspberrypi4-64:~# losetup /dev/loop0 xen-guest-image-minimal-raspberrypi4-64.ext3
<Guest9046> losetup: xen-guest-image-minimal-raspberrypi4-64.ext3: No such file or directory
<Guest9046> root@raspberrypi4-64:~# ls -l
<Guest9046> -rw-r--r-- 1 root root 24652288 Mar 9 12:36 Image
<Guest9046> -rw-r--r-- 1 root root 247 Mar 9 12:37 guest1.cfg
<Guest9046> -rw-r--r-- 1 root root 868220928 Mar 9 12:39 xen-guest-image-minimal-raspberrypi4-64.ext3
<Guest9046> root@raspberrypi4-64:~# chmod 0777 xen-guest-image-minimal-raspberrypi4-64.ext3
<Guest9046> root@raspberrypi4-64:~# ls -l
<Guest9046> -rw-r--r-- 1 root root 24652288 Mar 9 12:36 Image
<Guest9046> -rw-r--r-- 1 root root 247 Mar 9 12:37 guest1.cfg
<Guest9046> -rwxrwxrwx 1 root root 868220928 Mar 9 12:39 xen-guest-image-minimal-raspberrypi4-64.ext3
<Guest9046> root@raspberrypi4-64:~# losetup /dev/loop0 xen-guest-image-minimal-raspberrypi4-64.ext3
<Guest9046> losetup: xen-guest-image-minimal-raspberrypi4-64.ext3: No such file or directory
<Guest9046> root@raspberrypi4-64:~# losetup /dev/loop0 /home/root/xen-guest-image-minimal-raspberrypi4-64.ext3
<Guest9046> losetup: /home/root/xen-guest-image-minimal-raspberrypi4-64.ext3: No such file or directory
<Guest9046> root@raspberrypi4-64:~#
<Guest9046> guest image file is there but showing no file or directory
<Guest9046> Any suggestion on this?
leon-anavi has quit [Quit: Leaving]
Guest9046 has quit [Quit: Client closed]
dti has quit [Quit: ZNC 1.8.2 - https://znc.in]
dtometzki has joined #yocto
dtometzki has quit [Client Quit]
dtometzki has joined #yocto
dtometzki has quit [Client Quit]
dtometzki has joined #yocto
dtometzki has quit [Client Quit]
dtometzki has joined #yocto
Guest9068 has joined #yocto
Guest9068 has quit [Client Quit]
otavio has quit [Remote host closed the connection]
otavio has joined #yocto
dtometzki has quit [Quit: ZNC 1.8.2 - https://znc.in]
florian has joined #yocto
vladest has quit [Quit: vladest]
vladest has joined #yocto
justache is now known as justache_
justache_ is now known as justache
brazuca has quit [Quit: Client closed]
goliath has quit [Quit: SIGSEGV]
dtometzki has joined #yocto
rsalveti has quit [Quit: Connection closed for inactivity]
dtometzki has quit [Quit: ZNC 1.8.2 - https://znc.in]
dtometzki has joined #yocto
prabhakarlad has quit [Quit: Client closed]
florian has quit [Ping timeout: 244 seconds]
DvorkinDmitry has quit [Quit: KVIrc 5.0.0 Aria http://www.kvirc.net/]
<ptsneves> seems there are no docs for license-obsolete nor license-exists
frieder has quit [Remote host closed the connection]
<RobertBerger> @ptsneves: license-exists: https://pastebin.com/UNVBdxGA
<RobertBerger> @ptsneves: both are in insane.bbclass (but not in all Yocto/OE versions)
hcg has quit [Quit: Client closed]
wCPO has quit [Quit: The Lounge - https://thelounge.chat]
wCPO has joined #yocto
kriive has quit [Remote host closed the connection]
mvlad has quit [Remote host closed the connection]
prabhakarlad has joined #yocto
<RP> qschulz: I'm worrying a bit about what you said. You mean my approach would conflict with the direction you were thinking we should go?
florian has joined #yocto
brazuca has joined #yocto
brazuca has quit [Client Quit]
kekiefer[m] is now known as KurtKiefer[m]
kscherer has quit [Quit: Konversation terminated!]
florian has quit [Ping timeout: 248 seconds]
seninha has joined #yocto