<rm5248>
Has anybody seen random build failures before when building the eSDK? My eSDK builds keep failing on my build server, but I'm wondering if it's something to do with resources. I'm trying a build now by lowering BB_NUMBER_THREADS to see if that makes it more reliable
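For reference, the parallelism knobs mentioned above live in conf/local.conf; a minimal sketch, with illustrative values rather than recommendations:

    # conf/local.conf: cap BitBake task parallelism and per-task make jobs
    BB_NUMBER_THREADS = "8"
    PARALLEL_MAKE = "-j 8"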
starblue has quit [Ping timeout: 255 seconds]
starblue has joined #yocto
<rm5248>
Most of the time the failure is a pseudo abort with a path mismatch on a .ipk file, but one time it failed during the tar process ("File removed before we read it"), where the file the symlink pointed to did not exist
starblue has quit [Ping timeout: 246 seconds]
florian has quit [Ping timeout: 255 seconds]
enok has joined #yocto
Daanct12 has quit [Quit: WeeChat 4.2.1]
Starfoxxes has quit [Ping timeout: 272 seconds]
starblue has joined #yocto
bunk has joined #yocto
vladest1 has joined #yocto
vladest has quit [Ping timeout: 264 seconds]
vladest1 is now known as vladest
<RP>
rm5248: that sounds like files being removed out of pseudo's context, so I doubt it's a resource problem, more an intermittent race
<rm5248>
If it is an intermittent race though, I'm wondering if it's because there are /too many/ resources (e.g. threads) running. I wasn't sure if it could have something to do with ulimits
rfs613 has quit [Ping timeout: 260 seconds]
rfs613 has joined #yocto
<RP>
rm5248: path mismatch pseudo aborts are not resource problems
<RP>
I say this as someone who worked on that pseudo code
florian has joined #yocto
<rm5248>
ah. what might cause a path mismatch? It seems to affect different packages
Xagen has joined #yocto
<RP>
rm5248: imagine pseudo thinks "a" and inode 123 are the same file. Imagine something deletes "a" outside of pseudo and then creates "b", which the kernel gives inode 123. Then you view "b" under pseudo
<rm5248>
So the likely cause is that there is a rogue process of some kind deleting files in the yocto build folder? Is that correct?
<fray>
or your build folder is on a filesystem w/o persistent inodes
<fray>
pseudo inode mismatch is almost always a real bug somewhere in the code (outside of pseudo). The only other time I've seen it is on a filesystem that doesn't have persistent inodes (like NFS).
<rm5248>
Unfortunately I don't really know how that helps. The build is failing on an ubuntu 22.04 machine, it is using ext4. The one potential odd thing about it is that it runs on an EC2 instance, so I'm not sure if AWS could be doing something special
<fray>
I don't have much experience with building on EC2, it was always too expensive to play with. With that said, if the filesystem really is ext4, the normal cause is that something running that is NOT in pseudo's context is manipulating files (removing them or changing their inode in some way).. I've seen this happen with scripts that run to copy something in and end up overwriting existing files.. these scripts would be outside the normal YP recipe context.
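A rough shell illustration of the inode-reuse scenario RP describes above (paths are made up; pseudo tracks ownership in its own database, so this only shows the shape of the race):

    # under pseudo: pseudo records that "a" is inode 123
    touch a
    stat -c '%i %n' a        # e.g. "123 a"

    # outside pseudo (a rogue script, cron job, etc.):
    rm a
    touch b                  # the kernel may hand the freed inode 123 to "b"
    stat -c '%i %n' b        # possibly "123 b"

    # back under pseudo: inode 123 now resolves to path "b", but pseudo's
    # database still says inode 123 is "a", hence the path mismatch abort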
<fullstop>
I'm adding a package in my yocto layer, and it builds with cmake. I can build it with the sdk generated from yocto, but not from within yocto itself, and I'm hoping that someone here knows where I've gone astray.
<rburton>
fullstop: does your recipe just inherit cmake and almost do nothing else?
<fullstop>
It's not finding simple includes, such as unistd.h, so it's likely not finding the sysroot
<rburton>
if you inherit cmake then congratulations you've got a broken cmakelists.txt
<fullstop>
it inherits cmake and adds a do_install
<rburton>
cmake has a do_install, so you only need that if your cmakelists don't know how to install
<fray>
All I can suggest is to be sure to use the cmake class. Not finding unistd.h sounds like your software isn't invoking everything with $CC (from the environment) and has ended up with a hard-coded gcc, which isn't right..
<fullstop>
it does not know how to install
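A minimal sketch of the kind of recipe being discussed (name, source location and installed file are hypothetical): inherit cmake, and keep a do_install only because the project's CMakeLists.txt has no install() rules of its own:

    SUMMARY = "Example cmake-built tool"
    LICENSE = "MIT"
    LIC_FILES_CHKSUM = "file://LICENSE;md5=..."

    SRC_URI = "git://example.com/mytool.git;protocol=https;branch=main"
    SRCREV = "..."
    S = "${WORKDIR}/git"

    inherit cmake

    do_install() {
        install -d ${D}${bindir}
        install -m 0755 ${B}/mytool ${D}${bindir}/mytool
    }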
<fray>
the value of $CC from the environment contains the references to the header files
<rburton>
fullstop: but the usual problem is that the cmakelists.txt thinks it's okay to overwrite CMAKE_C_FLAGS or something, when it should be appending
<fullstop>
The log shows the appropriate compiler, so I believe it to be using $CC
<rburton>
does it show the compiler passing --sysroot?
<fullstop>
oh, wait I think that I see it.
<fullstop>
there is a SET(CMAKE_CXX_FLAGS) which is wrong.
<fullstop>
my apologies, I should have seen that.
<rburton>
that would be it
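The fix fullstop found, sketched in CMake (the flag values and target name are only illustrative): overwriting CMAKE_CXX_FLAGS discards whatever the environment passed in, including the sysroot flags Yocto injects, while appending preserves them:

    # broken: throws away the flags the caller set (e.g. --sysroot from the Yocto toolchain)
    set(CMAKE_CXX_FLAGS "-O2 -Wall")

    # better: append project flags instead of replacing
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O2 -Wall")

    # or, nicer still, scope them to a target
    target_compile_options(mytool PRIVATE -Wall)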
sotaoverride has quit [Killed (copper.libera.chat (Nickname regained by services))]
sotaover1ide is now known as sotaoverride
enok has quit [Ping timeout: 268 seconds]
<rburton>
fray: cmake is 'fun' and demands that the compiler argument is just the binary, so we have to pass the sysroot through CMAKE_C_FLAGS etc.
<fullstop>
....and built.
<fray>
as I've said for years.. cmake is "broken by design"
<rburton>
absolutely
<rburton>
terrible build system
<fullstop>
guess I'll go back to gnu make. :-)
<rburton>
fullstop: also terrible
<rburton>
meson
<fullstop>
one of these years
<thomas_34>
what? why is cmake terrible? I've had to use it for years... :S
<rburton>
thomas_34: my main reason is that it basically hates the idea of cross compiling, ironically
<rburton>
well it's also got hundreds of other flaws, but that's the dealbreaker
<thomas_34>
Huh? I'm using it to compile my stuff for 4 different architectures?
<rburton>
thomas_34: can you build a binary in your cmake that you run on the build machine whilst cross-compiling?
enok has joined #yocto
<rburton>
cmake _can_ cross, it's just that it's horrible
<thomas_34>
well... I don't know what you mean exactly - but yes. I have a toolchain definition for every architecture (also for the build machine, for example) and just build the stuff
<rburton>
so you have a project. you have a small code generator written in C that you need to compile with the native compiler and then run, and that produces code that you then cross-compile
<zeddii>
I can't get past the sea of incomprehensible includes and mixed functions. but it shares that issue with every other build framework.
<fray>
in every cmake implementation I've seen, people end up hard-coding compiler definitions into it, because using the environment or another standard is too difficult.. GNU make people do similar, but in those cases it's often easier for me to just remove the offending lines and everything works.. but in cmake, remove lines and the next bit breaks.. it's a horrible mess..
<thomas_34>
Actually, I kinda have something like this. I define that in an extra cmake task and execute it.
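One common, if clunky, shape for the host-tool problem rburton describes; a sketch only, with hypothetical target and file names (in a Yocto build the host copy would usually come from a separate -native recipe):

    if(CMAKE_CROSSCOMPILING)
        # expect a pre-built host copy of the generator on PATH
        find_program(GEN_EXE my-generator REQUIRED)
    else()
        add_executable(my-generator tools/generator.c)
        set(GEN_EXE $<TARGET_FILE:my-generator>)
    endif()

    add_custom_command(
        OUTPUT  ${CMAKE_CURRENT_BINARY_DIR}/generated.c
        COMMAND ${GEN_EXE} ${CMAKE_CURRENT_SOURCE_DIR}/data.in
                ${CMAKE_CURRENT_BINARY_DIR}/generated.c
        DEPENDS ${GEN_EXE} ${CMAKE_CURRENT_SOURCE_DIR}/data.in
    )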
<fray>
I've never used meson for any development, so I've no comment there..
<JaMa>
every build system is terrible, I'm looking forward to AI skipping the source code completely and pulling executable binaries from pixie dust
<fray>
pixie dust = dubious plagiarism.. YEP
<rm5248>
ThomasRoos: Thanks, I will check it out. Do you build the eSDK at all when you build? The build works fine apart from that
<thomas_34>
rburton, I just know cmake for C/C++. I don't know if anything else does a better job for this particular thing, but so far I'm impressed by what you can do with it.
<rburton>
thomas_34: meson is much better :)
<ThomasRoos>
rm5248 - no, I did miss that information - I do not use the eSDK at all at the moment.
<JaMa>
fray: I was just joking, but if it makes my threadripper spit out real unicorns, then my kids will be happier with my work
<fray>
lol I don't have an issue with 'ai', especially for helping with coding.. BUT I have a huge problem with the way Microsoft and others have trained their AI against open source and other people's work w/o any attribution or paying attention to licensing..
<fullstop>
One other question, as I'm moving things from dunfell to mickledore.. it looks like SRCREV pointing to a tag is no longer accepted. Do I really have to figure out the commit sha instead of using a tag?
<fray>
you've always had to use the sha directly.. if a tag worked before, it was not intentional
<JaMa>
why move from one EOL LTS release to a different EOL, non-LTS release?
<fullstop>
because bsp
<fray>
reason being, only the sha is deterministic/reproducible.. a tag can move, a branch can move, etc.
florian has quit [Ping timeout: 240 seconds]
<fray>
ya, mickledore is very much dead at this point.. nanbield and the upcoming scarthgap are what most people should be targeting.. (nanbield as a stepping stone to scarthgap)
<fullstop>
I'm kind of at the mercy of NXP at the moment
<JaMa>
fullstop: using a tag "worked" by running git ls-remote at parse time to convert the tag name to a sha; now it asks you to convert it yourself in advance, to make it more reliable (a tag can move) and to avoid network access to remote repos while parsing (as the tag cannot be resolved from DL_DIR)
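What that change looks like in a recipe, roughly (URL, branch and revision are made up): the tag can stay in a comment for humans, but SRCREV wants the resolved commit:

    SRC_URI = "git://example.com/some-project.git;protocol=https;branch=main"
    # pin the commit that tag v1.2.3 pointed at when this recipe was written
    SRCREV = "0123456789abcdef0123456789abcdef01234567"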
enok has quit [Ping timeout: 260 seconds]
<fullstop>
I'll have to be careful, I guess. I could bump packages by git mv before, but now I need to edit the recipe and supply the sha.
<fullstop>
It's not horrible, but I need to be careful.
<JaMa>
for our components in webOS we extend this to check, _after_ do_fetch, that the SHA matches the expected tag name in the repo
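Not webOS's actual code, just a rough sketch of that kind of check (variable names, and the peek at fetcher internals, are assumptions): a postfunc on do_fetch resolves the expected tag in the freshly fetched clone and compares it to the pinned SRCREV:

    EXPECTED_TAG ?= "v1.2.3"

    do_fetch[postfuncs] += "check_srcrev_matches_tag"
    python check_srcrev_matches_tag() {
        import subprocess
        srcrev = d.getVar('SRCREV')
        tag = d.getVar('EXPECTED_TAG')
        fetcher = bb.fetch2.Fetch((d.getVar('SRC_URI') or '').split(), d)
        for url in fetcher.urls:
            ud = fetcher.ud[url]
            if ud.type != 'git':
                continue
            # ud.clonedir is where the git fetcher keeps its bare clone
            # (an assumption about fetcher internals; adjust for your bitbake version)
            sha = subprocess.check_output(
                ['git', 'rev-list', '-n', '1', 'refs/tags/%s' % tag],
                cwd=ud.clonedir).decode().strip()
            if sha != srcrev:
                bb.fatal('SRCREV %s does not match tag %s (%s)' % (srcrev, tag, sha))
    }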
<fray>
we use scripting here to update recipes automatically.. it loads the 'next sha' (from a set of rules we have), then automatically updates the recipes, verifies they build and checks them in..
<fullstop>
That's probably worth doing. I could see the tag name not matching the sha.
<fray>
ya, we only tag after release.. during development we have a next and stable branch, and again automation rules on promoting code
<simonew>
oeRuntimeTest vs OERuntimeTestContext: seems there are two similarly named classes
<simonew>
Is the cls usage done correctly?
<wacke>
hmm, i inherit OERuntimeTestCase
<wacke>
tried with oeRuntimeTest (by guess), got an error: AttributeError: type object 'oeRuntimeTest' has no attribute 'tc'
<wacke>
ok, thx for your help so far simonew, i'm afk now, if anyone has some ideas, please write, i'll check later
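For context: oeRuntimeTest belongs to the old test framework, while current runtime tests subclass OERuntimeTestCase from oeqa.runtime.case. A minimal sketch (the test body and dependency are only illustrative):

    from oeqa.runtime.case import OERuntimeTestCase
    from oeqa.core.decorator.depends import OETestDepends

    class HostnameTest(OERuntimeTestCase):

        @OETestDepends(['ssh.SSHTest.test_ssh'])
        def test_hostname(self):
            # self.target is the running image; run() returns (status, output)
            status, output = self.target.run('hostname')
            self.assertEqual(status, 0, msg='hostname failed: %s' % output)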
082AAVV6M has joined #yocto
ThomasRoos has quit [Quit: Client closed]
simonew has quit [Remote host closed the connection]
simonew has joined #yocto
enok has quit [Ping timeout: 255 seconds]
082AAVV6M has quit [Quit: Lost terminal]
rfuentess has quit [Read error: Connection reset by peer]
tgamblin_ is now known as tgamblin
noyez has joined #yocto
vladest has quit [Remote host closed the connection]
tgamblin has quit [Quit: Leaving]
zpfvo has quit [Remote host closed the connection]
zpfvo has joined #yocto
vladest has joined #yocto
Starfoxxes has joined #yocto
thomas_34 has quit [Ping timeout: 250 seconds]
starblue has quit [Ping timeout: 260 seconds]
vladest has quit [Remote host closed the connection]
starblue has joined #yocto
zpfvo has quit [Quit: Leaving.]
rm5248 has quit [Ping timeout: 260 seconds]
mckoan is now known as mckoan|away
manuel1985 has quit [Ping timeout: 268 seconds]
rm5248 has joined #yocto
vladest has joined #yocto
starblue has quit [Ping timeout: 246 seconds]
jmiehe has quit [Quit: jmiehe]
starblue has joined #yocto
prabhakarlad has joined #yocto
leon-anavi has quit [Quit: Leaving]
starblue has quit [Ping timeout: 255 seconds]
starblue has joined #yocto
leon-anavi has joined #yocto
leon-anavi has quit [Client Quit]
simonew has quit [Ping timeout: 268 seconds]
starblue has quit [Ping timeout: 256 seconds]
starblue has joined #yocto
florian has joined #yocto
geoffhp has joined #yocto
prabhakar has joined #yocto
enok has joined #yocto
florian has quit [Ping timeout: 272 seconds]
mvlad has quit [Remote host closed the connection]
amitk has quit [Ping timeout: 268 seconds]
amitk has joined #yocto
starblue has quit [Ping timeout: 268 seconds]
rm5248 has quit [Quit: Leaving.]
Saur_Home85 has quit [Quit: Client closed]
Saur_Home85 has joined #yocto
<noyez>
Hi -- I use `source poke/oe-init-build-env` to source the needed variables. Is there a way to "unsource" those variables, i.e. I want to get my previous shell back. Is that possible without completely exiting the shell? thx.
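There is no built-in "unsource"; a common workaround is to source the script in a throwaway child shell so the parent environment never changes, for example:

    bash                            # start a child shell
    source poke/oe-init-build-env   # same script as above
    bitbake core-image-minimal
    exit                            # back to the original, untouched shell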
starblue has joined #yocto
paw has quit [Ping timeout: 264 seconds]
starblue has quit [Ping timeout: 268 seconds]
florian has joined #yocto
starblue has joined #yocto
starblue has quit [Ping timeout: 268 seconds]
DvorkinDmitry has joined #yocto
starblue has joined #yocto
<DvorkinDmitry>
I need some advice. I have 32GB RAM and 12 cores / 24 threads with a Ryzen 7900. My build speed is good, but I'm thinking about adding more memory. In your opinion, will it increase the build speed, and by how much, if I add another 32GB? How much memory would you recommend? (I saw the 2GB-per-core recommendation.)
<DvorkinDmitry>
I'm watching my build with top and I see that at the heaviest steps (the php/mysql/nodejs builds) it runs only one or two tasks in parallel, and memory is mostly (80%) taken by the disk cache.
<rburton>
DvorkinDmitry: as much as you can afford
<rburton>
DvorkinDmitry: if you can, 128gb would be great
<rburton>
but watch the memory usage during a build to see if you'll actually get any use from it
<DvorkinDmitry>
rburton, will it speedup my build and how much, do you think?
<rburton>
about fix googleflips
<rburton>
that's unknowable
<rburton>
watch memory usage during a build, see how much of your ram is disk cache and how much is actually spare
<DvorkinDmitry>
rburton, looking at the top output I see that at the hardest build steps it uses 80% of RAM for disk cache...
<rburton>
if you swap at all, then more ram would help. if you have no free ram and it's all cache then more ram means more cache
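A quick way to see that split, i.e. how much RAM is reclaimable cache versus genuinely spare (column layout varies with the procps version):

    free -h     # "buff/cache" is mostly reclaimable disk cache; "available" is the realistic headroom
    vmstat 5    # non-zero "si"/"so" columns mean the machine really is swapping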
prabhakalad has quit [Quit: Konversation terminated!]
<DvorkinDmitry>
rburton, I do not use swap on my build machine
prabhakalad has joined #yocto
<rburton>
depending on budget, 64 or 128. unless you can borrow ram you'll never know in advance how much - if any - improvement you'll get
<rburton>
you might be entirely IO bound, for example
<rburton>
a fast nvme disk makes a massive difference
<DvorkinDmitry>
rburton, I'm using the fastest ssd available, so my IO is not a problem
<rburton>
nvme or sata ssd?
<DvorkinDmitry>
nvme
<rburton>
IO is _always_ a problem, it's the slowest thing on a computer
<DvorkinDmitry>
I know, that's why I'm using ssd gen4
<rburton>
we need a wiki page for standardised build time reporting
<rburton>
report your ram, processor, storage, standardised kirkstone core-image-sato build time
<rburton>
(personal record: 22 minutes)
<DvorkinDmitry>
I'm just thinking about the influence of RAM size on tasks like the mysql/php/nodejs builds, because I see it does everything else in 24 threads in parallel, but the mysql or php or nodejs build runs alone
<DvorkinDmitry>
rburton, ok. I'll do the test
<rburton>
my ghetto benchmark is literally git clone -b kirkstone https://git.yoctoproject.org/poky; cd poky; . oe-init-build-env; bitbake core-image-sato --runall fetch ; time bitbake core-image-sato
<rburton>
no tuning for processor count, fetch first, time build
<rburton>
super dumb but also repeatable trivially
jmd has quit [Remote host closed the connection]
<rburton>
the 22 minutes was a beefy threadripper with 256gb ram and very fast nvme. it did a sato from sstate in about 40 seconds.
<DvorkinDmitry>
rburton, started!
<moto-timo>
rburton: we need to include PCIe bus generation and MoBo bus speeds (memory, etc.). These are the things that start making an old dual Xeon look like a waste of time.
* moto-timo
stares at the Broadwell era dual Xeon (total of 12C/24T) with the dead spinning rust.
* moto-timo
gives it the evil eye.
<moto-timo>
Some kind of peak electricity usage would also be a valuable insight in our modern life. Extra disks cost extra power consumption ;)
<moto-timo>
BBOPS/Watt my new metric :)
alessioigor has quit [Quit: alessioigor]
starblue has quit [Ping timeout: 268 seconds]
starblue has joined #yocto
noyez has quit [Ping timeout: 250 seconds]
alperak has quit [Quit: Connection closed for inactivity]
wacke has quit [Ping timeout: 256 seconds]
Xagen has quit [Ping timeout: 268 seconds]
<khem>
NVMe SSDs consume more power than spinning rust, so you are good moto-timo on that front at least
<mischief>
moto-timo: some PSUs do provide power monitoring
<mischief>
from what i've seen they can give pretty detailed info on each power rail, but you could also just get a cheap smart plug if you only wanted to see V/A/W
<JPEW>
Heh, the "basic" SPDX 3 for the Linux Kernel from Yocto is 1.2 MB; 99% of that is reporting the CVE
<JPEW>
CVEs
starblue has quit [Ping timeout: 268 seconds]
starblue has joined #yocto
simonew has joined #yocto
<RP>
JPEW: nice, ouch :)
simonew has quit [Ping timeout: 272 seconds]
<jwinarsk>
Is beaglebone-yocto in master expected to work with GPU? Running a vanilla core-image-weston image, weston fails to load with: `failed to create gbm surface`
starblue has quit [Ping timeout: 255 seconds]
starblue has joined #yocto
<jwinarsk>
weston is reporting `GL renderer: softpipe` prior to the error