berton[m] has quit [Quit: You have been kicked for being idle]
florian_kc has joined #yocto
amelius has joined #yocto
<Entei[m]>
Is there a way to alter the rpm build process being done through yocto? The rpms it generates don't provide the conventional rpm macros for build systems such as cmake, meson, etc., which are required for rebuilding SRPMs on the target machine.
leon-anavi has joined #yocto
<amelius>
Hey, we're about to get new build hardware. At the moment we are on a Xeon W-2145 CPU @ 3.70GHz with 16 cores, 64GB RAM and an 8TB SSD. Is more RAM and more cores always better, or is there a limit beyond which the build is no longer accelerated?
<mcfrisk>
Entei[m]: I think yocto doesn't cover this use case. I would rather stick to cross compiling with bitbake, cross compiling in the SDK, or compiling on the target without complex packaging tools. Though ipkg has worked in the SDK in the past, at least.
mrpelotazo has joined #yocto
<mcfrisk>
amelius: RAM per CPU thread is an important ratio. 2 gigs of RAM per thread has been OK on my workloads. But this depends heavily on what you compile: webkit/chromium/C++ with templates/qt will make things a lot harder
<mcfrisk>
amelius: and two things to watch: build times vs build failures due to out-of-memory (OOM)
rich1234 has quit [Quit: Client closed]
<mcfrisk>
adding more RAM helps to keep file system operations buffered in RAM, but default build setups may still write to disk, which slows things down. Tune kernel settings for that. I would not use tmpfs, as that doesn't scale if you run out of RAM.
<mcfrisk>
s/kernel settings/kernel vm settings/
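A hedged local.conf sketch of the sizing advice above, with example values for a 16-thread, 64 GB machine like amelius describes; these are illustrations, not recommendations:

    # local.conf -- example values only; tune to your own workload
    BB_NUMBER_THREADS = "16"   # concurrent bitbake tasks
    PARALLEL_MAKE = "-j 16"    # make jobs within each task
    # worst case is roughly BB_NUMBER_THREADS x PARALLEL_MAKE compile jobs at once,
    # so heavy C++ (webkit, chromium, qt) may need lower values to stay near 2 GB/thread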
<Entei[m]>
mcfrisk: What about the thousands of packages that don't have a recipe? Rebuilding SRPMs on the target would be a great option instead of writing a recipe for each of those packages. And directly installing using a build tool like autoconf or cmake doesn't cut it on a system where you plan to manage things with a package manager, which requires dependencies to be provided through proper packages.
mrpelotazo has quit [Quit: WeeChat 3.8]
mrpelotazo has joined #yocto
<mcfrisk>
Entei[m]: well, mixing yocto and other Linux distro packages has not ended well. I'd use containers.
<Entei[m]>
mcfrisk: umm... what do you mean? Doesn't yocto support packages? I see it supporting deb and rpm packages without much configuration.
<landgraf>
Entei[m]: you can rebuild them somewhere and import binary RPMs into yocto build with bin_package.
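A minimal sketch of the bin_package approach landgraf mentions; the recipe and file names here are hypothetical:

    # example-prebuilt_1.0.bb -- import a binary RPM built elsewhere, repackaged as-is
    SUMMARY = "Prebuilt binary package imported without rebuilding"
    LICENSE = "CLOSED"
    SRC_URI = "file://example-1.0.x86_64.rpm"
    S = "${WORKDIR}"   # the fetcher unpacks the RPM contents directly into WORKDIR
    inherit bin_package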
<mcfrisk>
Entei[m]: yocto builds an output binary package stream using those package managers. It is very much not compatible with other distros which build deb or rpm packages. It can be for simple things, but for anything more complex things will fall apart.
<Entei[m]>
mcfrisk: So you mean to say I shouldn't depend on a yocto-based distro to build RPMs at all, but rather build them on, say, Fedora and then transfer them to my yocto-based distro. Did I get that correct?
<jclsn>
I have an issue where I am changing the list of files in SRC_URI of a recipe and the old ones are still left in the WORKDIR. do_install() then does a for loop over all the files in question and installs them to the image, so I end up with all the old files that are no longer in the list. Is there any way to let Bitbake clean these files from the WORKDIR automatically?
<mcfrisk>
Entei[m]: nope. It may work at the binary RPM level but fall apart at the compiler, dependency etc. level. If you instead build RPMs with the yocto SDK, then things will likely work, but still, building SRPMs from other distributions may not.
<mcfrisk>
jclsn: bitbake -c clean recipe, or wipe the full build/tmp and rebuild
<jclsn>
mcfrisk: Yeah sure, but I would assume that only the files currently contained in the recipe will be installed
mrpelotazo has quit [Quit: WeeChat 3.8]
<jclsn>
Maybe looping over all the files in the WORKDIR is just bad practice
<mcfrisk>
jclsn: try a clean build; incremental rebuilds have issues and various corner cases, and old files may still exist in the workdir
<mcfrisk>
jclsn: it is slightly bad, though it can be handy as well if there is a large number of files.
<jclsn>
I could loop over all the files in the SRC_URI instead of all the files in WORKDIR
<mcfrisk>
jclsn: that is better
<jclsn>
How would I do that? srcuri = d.getVar('SRC_URI', True).split() and then?
invalidopcode9 has quit [Remote host closed the connection]
invalidopcode9 has joined #yocto
<mcfrisk>
jclsn: I've seen additional variables being used to construct both SRC_URI and then used in do_install() and do_deploy() tasks. SRC_URI itself with the url type string may be harder to use
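A sketch of the pattern mcfrisk describes, assuming a hypothetical CONF_FILES variable that feeds both SRC_URI and do_install:

    CONF_FILES = "file1.conf file2.conf"
    SRC_URI = "${@' '.join(['file://' + f for f in d.getVar('CONF_FILES').split()])}"

    do_install() {
        install -d ${D}${sysconfdir}
        for f in ${CONF_FILES}; do
            install -m 0644 ${WORKDIR}/$f ${D}${sysconfdir}/$f
        done
    }

Removing a file from CONF_FILES then drops it from both the fetch and the install loop, which sidesteps the stale-install problem even though the old file still lingers in WORKDIR itself.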
starblue has quit [Ping timeout: 260 seconds]
starblue has joined #yocto
rich1234 has joined #yocto
<rburton>
jclsn: definitely iterate over SRC_URI. there are functions in bb.fetch2 to parse the urls.
<rburton>
the module API is in fetch2's __init__, assuming you're not on an ancient release
<jclsn>
I don't see a function in there that sounds like what I want
<rburton>
decode_url?
<rburton>
then filter to just file: entries
d-s-e has joined #yocto
<jclsn>
No idea how that works. I think it is really unfortunate that there is no documentation. It would make it much easier to support you guys
<jclsn>
So d is the dictionary of all variables as far as I understand
<jclsn>
files = d.decode_url(${SRC_URI}) or something?
<rburton>
${} won't expand in python
<jclsn>
SRC_URI is a list of files
<jclsn>
Oh yeah it is python
<rburton>
and decode_url takes a single url
<jclsn>
do_install is bash I guess
<rburton>
you can write a python helper function
<jclsn>
Yeah but I have a list of files
<jclsn>
No url
<jclsn>
SRC_URI = " file://file1 file://file2 "
<rburton>
for url in d.getVar("SRC_URI"); scheme, location, ... = bb.fetch2.decodeurl(url)
<rburton>
if you want to keep your do_install as bash then write a helper function in python that returns the list of files in a way that your bash can read
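Putting rburton's suggestions together, a rough sketch (untested; the destination path is illustrative): a python helper filters SRC_URI down to file:// entries via bb.fetch2.decodeurl, and the bash do_install expands it inline:

    def local_files(d):
        import os
        files = []
        for url in d.getVar("SRC_URI").split():
            # decodeurl returns (type, host, path, user, pswd, params)
            scheme, host, path, user, pswd, params = bb.fetch2.decodeurl(url)
            if scheme == "file":
                files.append(os.path.basename(path))
        return " ".join(files)

    do_install() {
        install -d ${D}${sysconfdir}
        for f in ${@local_files(d)}; do
            install -m 0644 ${WORKDIR}/$f ${D}${sysconfdir}/$f
        done
    }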
<Entei[m]>
Would setting the PR variable in local.conf set a fixed PR for all packages? By default, package naming follows the r0, r1 etc. scheme, since packages default to PR = "r0". I'd like to set the names to a fixed release name like xy12, xy13 etc. (like how Fedora does it - fc37, fc38 etc.).
<rburton>
Entei[m]: that's not what PR is for
<rburton>
it might work, but that's pretty horrible.
seninha has joined #yocto
<Entei[m]>
rburton: Yeah, I do understand the functionality behind it, but I don't see any other way to change package names. Those packages ending in r0, r1 etc. are pretty crude, especially when I'd like the whole distribution to follow one single naming scheme instead of a mixup of numbers.
<Entei[m]>
Is there any other way to change it?
alessioigor has joined #yocto
<rburton>
just try setting PR and see what happens
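For the record, the experiment being suggested is a one-liner; untested, and as rburton says, not what PR is designed for:

    # local.conf or distro .conf -- abuses PR for naming; untested
    PR = "xy13"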
<rich1234>
I am trying to get the OpenGLES libraries. I have added the following to my local.conf
<rich1234>
IMAGE_INSTALL:append = " vim libsdl2 mesa"
<rburton>
grumble at bitbake syntax. use "def list_files(d):" to start the function, you can't call a variable 'network location', and you need to split() the url list.
<jclsn>
That isn't parsed right
<jclsn>
I will just hardcode. This is too time consuming
<rburton>
i did those changes and it was working
<jclsn>
Show me
<rburton>
sorry, deleted it already
<rburton>
def list_files(d): for url in d.getVar("SRC_URI").split():
<jclsn>
Ah wait there is a closing parenthesis missing
destmaster has joined #yocto
<destmaster>
Hi, I would like to buy a new PC or server to speed up my Yocto image build times. Could you suggest which hardware specifications to pay attention to? I understand the key to speeding up the build is the number of logical cores, right? Are there significant performance differences between Intel and AMD architectures? My budget is around 1500/2000€.
<jclsn>
destmaster: Any Threadripper will do
<destmaster>
jclsn thank you
<jclsn>
Lenovo ThinkStation P620 for example. They are a bit above your budget though
<jclsn>
Maybe try getting a used one
<jclsn>
Or just build one yourself
<destmaster>
jclsn thank you, I will evaluate the cost difference between buying a ready-to-use one vs building one myself
<jclsn>
rburton: It is not installing anything unfortunately http://ix.io/4uak
<qschulz>
jclsn: it returns a string, you cannot use a shell for loop on a string that is space separated
<jclsn>
qschulz: I thought rburton said that would work
destmaster has quit [Quit: Client closed]
<qschulz>
jclsn: for i in "this is a test"; do echo $i; done
<qschulz>
for i in this is a test; do echo $i; done
<qschulz>
VAR="this is a test"; for i in $VAR; do echo $i; done
<qschulz>
and see for yourself
<qschulz>
(you'd need to use arrays, but those are a bashism and we recommend not using any non-POSIX shell stuff)
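For reference, what those three loops print (plain shell semantics, before bitbake expansion enters the picture):

    $ for i in "this is a test"; do echo $i; done
    this is a test
    $ for i in this is a test; do echo $i; done
    this
    is
    a
    test
    $ VAR="this is a test"; for i in $VAR; do echo $i; done
    this
    is
    a
    test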
<jclsn>
This is really giving me a headache ^^
<jclsn>
I can do just bash or just python but not both
<qschulz>
jclsn: call bb.exec on the python function and install the files this way
<qschulz>
this way = in python directly
<jclsn>
No idea how I would do that
<qschulz>
or just add a python task just before do_install
<jclsn>
"just"
<qschulz>
don't know if everything is properly setup though
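A hedged sketch of the pure-python route qschulz suggests: replace the shell do_install with a python one that copies the file:// entries itself (the destination path here is illustrative):

    python do_install() {
        import os, shutil
        dest = d.getVar("D") + d.getVar("sysconfdir") + "/udev/rules.d"
        os.makedirs(dest, exist_ok=True)
        workdir = d.getVar("WORKDIR")
        for url in d.getVar("SRC_URI").split():
            scheme, host, path, user, pswd, params = bb.fetch2.decodeurl(url)
            if scheme == "file":
                name = os.path.basename(path)
                shutil.copy2(os.path.join(workdir, name), os.path.join(dest, name))
    }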
<jclsn>
I don't understand anything of what this is doing
<qschulz>
but honestly, you're not the first one to have this issue
<qschulz>
this is known but requires quite deep changes in how we handle SRC_URI with the file:// fetcher
<qschulz>
(don't know which bugzilla ID it has but we have a bug for it)
<jclsn>
I also don't understand why there isn't some easy function for this. This is not a wild use case
<qschulz>
jclsn: theoretically, SRC_URI will give you only what you need so there's no need for this
<qschulz>
now because we use WORKDIR as the directory to store SRC_URI file:// files, we can't easily remove files that were installed in previous runs
seninha has quit [Ping timeout: 252 seconds]
<qschulz>
jclsn: half wondering if you cannot force the fetch task of your recipe to depend on a clean task?
<qschulz>
do_fetch[depends] += "do_clean" ?
<rburton>
ewww
<qschulz>
rburton: everything is eww :)
<rburton>
[cleandirs] would be better
<qschulz>
rburton: which ones?
Perflosopher has quit [Ping timeout: 246 seconds]
Perflosopher has joined #yocto
<qschulz>
ah, maybe on ${WORKDIR} directly?
<rburton>
this is why there's a long-standing need to put unpack files into !workdir
<rburton>
definitely not workdir
<qschulz>
rburton: then how do you plan on using cleandirs varflag for SRC_URI file:// files?
<rburton>
cleandirs ${S} and put the files into ${S}
<rburton>
though i'm confused why looping through the output didn't work
<qschulz>
rburton: or any other subdir, indeed
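What rburton is sketching, as a hedged recipe fragment: stage the file:// entries in their own subdirectory and let the varflag wipe it whenever do_unpack re-runs, so stale files cannot survive:

    SRC_URI = "file://file1;subdir=sources \
               file://file2;subdir=sources"
    S = "${WORKDIR}/sources"
    do_unpack[cleandirs] = "${S}"   # wiped before every unpack; untested sketch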
Bhstalel has joined #yocto
<jclsn>
Just tested this in bash. No idea why it is not working http://ix.io/4uak
<rburton>
oh right because i fat-fingered the index
<rburton>
index 4 is password
<rburton>
when in doubt actually check what is happening
Bhstalel has quit [Client Quit]
<rburton>
you want 2
<rburton>
should make decodeurl return a namedtuple
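A recap of the index fix, since the positional return is easy to fumble:

    # bb.fetch2.decodeurl returns (type, host, path, user, pswd, params)
    bb.fetch2.decodeurl("file://file1")[2]   # -> "file1", the path; index 4 is the password field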
<jclsn>
Yeah right I also forgot the ${S} before ${i}
<rburton>
workdir, not s
<rburton>
what's useful is looking at e.g. temp/run.do_install, which is the _expanded_ task
<rburton>
so that's expanded the python function and you can see what actually popped out
<jclsn>
I assigned WORKDIR to S. Shouldn't it be the same?
<olani->
qschulz: VAR="this is a test"; for i in $VAR; do echo $i; done should work in most shells. zsh is an exception. But in this case $VAR would be expanded by bitbake before the shell even looks at the code.
<rburton>
jclsn: dont assign WORKDIR to S
<rburton>
oh i see what you mean, you did S=WORKDIR
<rburton>
that's fine, sorry
<rburton>
yeah, same difference then
rob_w has quit [Remote host closed the connection]
rich1234 has quit [Quit: Client closed]
<qschulz>
olani-: not POSIX so not good for shell tasks in Bitbake :)
<qschulz>
olani-: btw I use zsh (so many small differences compared to bash it's interesting :) )
<jclsn>
rburton: My colleague now told me systemd also iterates through the files in the workspace and that my solution is wrong xD
<rburton>
as you said, remove a file and rebuild without cleaning and you get the old files
<jclsn>
Yeah not optimal
<rburton>
dropping the files into a directory and cleaning it in unpack might work well
<jclsn>
But you wouldn't want to clean every time you rebuild
<rburton>
it would only happen if unpack re-ran
<rburton>
if unpack is re-running, you're building from scratch anyway
<jclsn>
Maybe the systemd recipe does it this way?
<rburton>
no
<jclsn>
So why such a mistake in the recipe? systemd is crucial
<jclsn>
Weird
<rburton>
because most people don't think about this edge case
<jclsn>
Yeah I am great
<jclsn>
I always knew that actually
<jclsn>
hehe
frieder has quit [Ping timeout: 264 seconds]
kscherer has joined #yocto
<jclsn>
Well, I just stumbled over it while wanting to override some rule. I had multiple rules in /etc/udev/rules.d, even ones no longer in the recipe. This resulted in the wrong rtc being symlinked to /dev/rtc
sakoman has joined #yocto
amitk has quit [Ping timeout: 240 seconds]
frieder has joined #yocto
<RP>
jclsn: there is an open bug for fixing this but it isn't straightforward
<jclsn>
RP: Can you send me the link?
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<rburton>
i'm thinking we need to just change the unpack procedure and tell everyone to suck it up
<rburton>
change T to workdir/tasks/ at the same time :)
goliath has quit [Quit: SIGSEGV]
<RP>
rburton: logs maybe?
<RP>
rburton: it is amazing all the ways things break if you change it
<rburton>
i've been using tasks/ for a month or so now
<RP>
rburton: oh, ${T} is much easier and will just break external scripts. It was unpack I was meaning
<RP>
I think tasks may well be confusing to people for different reasons to temp
tgamblin has joined #yocto
<rburton>
yeah, external scripts that hardcode temp are broken.
<rburton>
unpack will be fun but i think we just have to break it and let people fix
PobodysNerfect_ has joined #yocto
PobodysNerfect has quit [Ping timeout: 255 seconds]
seninha has joined #yocto
d-s-e has quit [Quit: Konversation terminated!]
PobodysNerfect_ has quit [Quit: Gone to sleep. ZZZzzz…]
<khem>
rburton: 18mins is not bad we have larger daemons
berton[m] has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
amitk has quit [Ping timeout: 248 seconds]
amitk has joined #yocto
florian_kc has joined #yocto
thomasd13 has quit [Ping timeout: 252 seconds]
leon-anavi has quit [Remote host closed the connection]
<smurray>
I'm managing to trigger the build-deps QA check when trying to RDEPENDS on one of a couple of packages that intentionally have duplicate RPROVIDES (I'm trying to allow building images with different client certificates). I don't see a way to avoid this, so I figured I'd ask if I'm missing something?
GillesMM has quit [Remote host closed the connection]
<dacav>
Hi. In my current yocto setup I've got a top-level recipe that generates a rootfs using squashfs. For various reasons I need to embed it into a UBI volume. In the same UBI image I'd like to add the fitImage produced by the kernel recipe. Basically I'd like to call ubinize in the do_deploy step of a recipe for which (by means of dependencies?) fitImage and rootfs.squashfs should already be in the deploy directory. Is there a standard way to do so?
GNUmoon2 has joined #yocto
GNUmoon has quit [Ping timeout: 255 seconds]
<dacav>
Could it be the case of a IMAGE_POSTPROCESS_COMMAND?
<dacav>
...although the image recipe does not depend on the kernel recipe, so I have no guarantee (I guess) that I will find the fitImage under DEPLOY_DIR
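One hedged way to get the ordering dacav is after: make the image recipe's deploy-time step depend on the kernel's deploy, then run ubinize over the already-deployed artifacts. The function name, ubinize flags and cfg file are all hypothetical:

    # in the image recipe -- untested sketch
    do_image_complete[depends] += "virtual/kernel:do_deploy"

    IMAGE_POSTPROCESS_COMMAND += "pack_ubi;"

    pack_ubi() {
        # ubinize.cfg would point at the squashfs and fitImage under DEPLOY_DIR_IMAGE
        ubinize -o ${IMGDEPLOYDIR}/${IMAGE_NAME}.ubi -m 2048 -p 128KiB ${WORKDIR}/ubinize.cfg
    }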
Nostromo43 has joined #yocto
<Nostromo43>
khem I've added a library to my custom yocto image that's trying to use llvm 16.0.2. I see that in the current llvm_15.0.7 recipe the cmake files and the libLLVM-16.so file are removed. What's the reason for this?
florian_kc has joined #yocto
<Nostromo43>
khem Oh I see that on the current llvm_git.bb those have since been removed.
Haxxa has quit [Quit: Haxxa flies away.]
Haxxa has joined #yocto
Thorn has quit [Ping timeout: 255 seconds]
tenko[m] has joined #yocto
ptsneves has quit [Ping timeout: 260 seconds]
alessioigor has quit [Quit: alessioigor]
xcm_ has quit [Remote host closed the connection]
sakoman has quit [Quit: Leaving.]
schtobia has quit [Quit: Bye!]
schtobia has joined #yocto
invalidopcode9 has quit [Remote host closed the connection]
invalidopcode9 has joined #yocto
amitk has joined #yocto
amitk__ has quit [Ping timeout: 248 seconds]
florian_kc has quit [Ping timeout: 252 seconds]
Minvera has quit [Remote host closed the connection]
kscherer has quit [Quit: Konversation terminated!]
nerdboy has quit [Ping timeout: 240 seconds]
nerdboy has joined #yocto
nerdboy has joined #yocto
nerdboy has quit [Changing host]
florian_kc has joined #yocto
nerdboy has quit [Remote host closed the connection]
nerdboy has joined #yocto
nerdboy has quit [Changing host]
nerdboy has joined #yocto
goliath has quit [Quit: SIGSEGV]
prabhakarlad has quit [Quit: Client closed]
prabhakarlad has joined #yocto
florian_kc has quit [Ping timeout: 276 seconds]
sakoman has joined #yocto
seninha has quit [Quit: Leaving]
Thorn has joined #yocto