<JaMa>
RP: I had "fetch2: Add autorev warning when it is set too late" in my poky branch (from last rebase on master-next), I guess this explains the bbtests.BitbakeTests.test_git_unpack_nonetwork_fail failure I've seen
<JaMa>
after dropping it during rebase on today's master-next, it passes again: 2023-03-22 02:23:19,602 - oe-selftest - INFO - RESULTS - bbtests.BitbakeTests.test_git_unpack_nonetwork_fail: PASSED (38.50s)
qschulz has quit [Remote host closed the connection]
qschulz has joined #yocto
prabhakarlad has quit [Ping timeout: 260 seconds]
camus has quit [Quit: camus]
camus has joined #yocto
davidinux has joined #yocto
starblue has quit [Ping timeout: 264 seconds]
seninha has quit [Quit: Leaving]
starblue has joined #yocto
nerdboy has quit [Ping timeout: 250 seconds]
nerdboy has joined #yocto
nerdboy has joined #yocto
nerdboy has quit [Changing host]
sakoman has joined #yocto
zpfvo has quit [Ping timeout: 255 seconds]
zpfvo has joined #yocto
barometz has quit [Ping timeout: 268 seconds]
barometz has joined #yocto
jclsn has quit [Ping timeout: 248 seconds]
jclsn has joined #yocto
zpfvo has quit [Ping timeout: 240 seconds]
amitk has joined #yocto
zpfvo has joined #yocto
nemik has quit [Ping timeout: 248 seconds]
nemik has joined #yocto
zpfvo has quit [Ping timeout: 240 seconds]
sakoman has quit [Quit: Leaving.]
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
zpfvo has joined #yocto
thomasd13 has joined #yocto
amitk_ has joined #yocto
alessioigor has joined #yocto
<thomasd13>
Good morning guys. How do you deal with CMake projects which use FetchContent/find_package to resolve their dependencies? Do you replicate these dependencies at the "recipe level" or just run the CMake project and let it do what it wants to do?
<thomasd13>
And also, these CMake projects establish a local cmake package cache at ~/.someDir. How would you deal with that?
<pope>
How can I recompile a kernel to support the earlyprintk option?
<mcfrisk>
thomasd13: inherit cmake, then find_package should just work, though there are bugs, e.g. some recipes generate bad .cmake files with absolute paths which need to be fixed. Downloading dependencies needs to be disabled and recipe dependencies used instead.
<kanavin_>
thomasd13, those aren't common, but generally you need to unroll the dependencies either into their own recipes, or into additional SRC_URI entries
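A minimal sketch of the "own recipe" route, assuming a hypothetical dependency called somelib that already has its own recipe; the consumer then resolves it through find_package() from its sysroot instead of letting CMake download it:

    # consumer recipe - names are illustrative
    inherit cmake
    DEPENDS += "somelib"
    # cmake.bbclass generates a toolchain file whose CMAKE_FIND_ROOT_PATH points
    # into the recipe-specific sysroot, so find_package(somelib) picks up the
    # staged copy rather than a downloaded one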
<pope>
pope: Or how can I add it if no recompile is necessary...?
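The question goes unanswered below; the usual linux-yocto approach is a kernel config fragment delivered from a bbappend. A hedged sketch, with illustrative file names and the override syntax of recent releases (the earlyprintk= argument itself still has to be added to the kernel command line separately):

    # linux-yocto_%.bbappend
    FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
    SRC_URI += "file://earlyprintk.cfg"

    # files/earlyprintk.cfg
    CONFIG_EARLY_PRINTK=y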
goliath has joined #yocto
<thomasd13>
Thanks guys, I guess I'll just get started and see how it works. It sounds a bit weird to create recipes for each and every internal cmake project dependency and supply them at recipe level
wooosaiiii has quit [Quit: wooosaiiii]
wooosaiiii has joined #yocto
<mcfrisk>
thomasd13: if the cmake modules are internal to the source tree then that can be kept as is, no need to create recipes for them
<mcfrisk>
but I've had to separate a source tree build for target from the build host/native before. some cmake modules were code generators etc and used in a number of recipes/source trees
<thomasd13>
mcfrisk, ah okay, I think I get your point now. And how would you deal with it when the CMake project expects stuff at a hardcoded location? For example the package cache at ~/.someDir? Would that be isolated from other yocto recipes?
<mcfrisk>
thomasd13: build output needs to be in ${B} and recipe-level dependencies in the recipe-specific sysroot. cmake.bbclass creates the toolchain file with paths into the sysroots, so those must be used.
<mcfrisk>
I know cmake stuff can become really annoying. Downloading and using externals should just be disabled; everything must come in as recipe dependencies or via SRC_URI, like kanavin_ said too. With SRC_URI, you can prepopulate a cache directory with downloaded externals
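A hedged sketch of that SRC_URI route for a FetchContent dependency, using a hypothetical external called somelib (URL, paths and the checksum placeholder are illustrative):

    SRC_URI += "https://example.com/somelib-1.2.3.tar.gz;name=somelib;subdir=somelib"
    SRC_URI[somelib.sha256sum] = "<real checksum goes here>"

    # forbid FetchContent downloads and point it at the pre-unpacked copy instead
    EXTRA_OECMAKE += " \
        -DFETCHCONTENT_FULLY_DISCONNECTED=ON \
        -DFETCHCONTENT_SOURCE_DIR_SOMELIB=${WORKDIR}/somelib/somelib-1.2.3 \
    "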
<kanavin_>
thomasd13, it's hard to give specific hints if you can't show the source tree you want to build
<kanavin_>
thomasd13, you can hire me to sort you out ;)
<kanavin_>
sales@linutronix.de
frieder has joined #yocto
<mcfrisk>
rust cargo stuff feels just as annoying as cmake code downloading random dependencies in various ways
<thomasd13>
mcfrisk, toolchains are another good point. I have to use some specific toolchains which are also not "available" in yocto
<JaMa>
thomasd13: there are many recipes which used FetchContent/ExternalProject_Add in meta-ros, you can find some examples there (but sometimes it's not simple)
<thomasd13>
Alright, thanks for the hint JaMa
<JaMa>
thomasd13: sometimes it is enough just to let the bitbake fetcher fetch the right sources to the right directory
<thomasd13>
kanavin_, i think I had contact to Linutronix at embedded world some years ago.. Maybe I'll get in touch if things get really worse ;) Do you have free capacity for new clients atm?
<JaMa>
thomasd13: but then the cmake calls in ExternalProject_Add often don't pass the right params/toolchain file to build the external dep correctly, so it was easier to build it in a separate simple recipe and then modify CMakeLists.txt to accept this external dependency from the "system"
<JaMa>
until you get 2 projects which require the same external dependency but different versions (for whatever reason), then it gets even worse
<mcfrisk>
JaMa: oh yes, I have never seen cmake externals use the provided toolchain file correctly so that cross-compilation would work :(
<thomasd13>
Well... I have a setup which builds the same software for ARM R5, A72, DSP C6x and C7x targets, so that's working
<JaMa>
you can pass the toolchain file parameter in ExternalProject_Add, but in all cases you have to patch the original CMakeLists.txt which is a bit annoying
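A hedged sketch of that, assuming the CMakeLists.txt patch makes ExternalProject_Add() forward a custom cache variable (EXT_TOOLCHAIN_FILE is a made-up name) into the nested cmake invocation:

    # cmake.bbclass already writes the cross toolchain file to ${WORKDIR}/toolchain.cmake;
    # hand it to the patched ExternalProject_Add() via the custom cache variable
    EXTRA_OECMAKE += "-DEXT_TOOLCHAIN_FILE=${WORKDIR}/toolchain.cmake"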
<JaMa>
I was also using these patch files to make sure that whenever the requested version in ExternalProject_Add is changed, I get a conflict from this .patch (so that I notice the bitbake fetcher params need to be adjusted)
<RP>
JaMa: I've tried with and without that patch. I still wonder if an older version corrupted the state of the builddir
<JaMa>
and that was before the network varflag, so to detect these ExternalProject_Add uses in meta-ros I was sometimes rebuilding from scratch (without sstate) with DNS config intentionally crippled
frwol has joined #yocto
<JaMa>
RP: I've let it run with most selftests overnight and now I see more unexpected failures, but haven't investigated yet due to work
<mcfrisk>
I disabled networking from a build container and saw CMake code downloading custom python versions etc...
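On recent releases (kirkstone onwards) bitbake itself enforces this where the host kernel supports network namespaces: tasks other than do_fetch are denied network access, and a task has to opt in explicitly with the network varflag, roughly:

    # in a recipe that genuinely needs network during compile (rare, and a code smell)
    do_compile[network] = "1"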
mckoan|away is now known as mckoan
Schlumpf has joined #yocto
xmn has quit [Ping timeout: 268 seconds]
xmn has joined #yocto
prabhakarlad has joined #yocto
<LetoThe2nd>
yo dudX
<mcfrisk>
how do I update a crate in a rust application recipe? If I update the version in SRC_URI, the build fails because Cargo.lock has a different version. Then in devshell I update Cargo.lock including the checksum, but running run.do_compile still fails
<mckoan>
hey LetoThe2nd
<mcfrisk>
rust recipes seem to use these crates to also download sources, but how do I update to a pre-crate release or a specific git commit to test fixes? Specifically, how do I update parsec-service from 1.1.0 (crate) to 1.2.0-rc1 (git)? 1.1.0 fails to compile due to some enum parsing issue
leon-anavi has joined #yocto
nemik has quit [Ping timeout: 260 seconds]
nemik has joined #yocto
xmn has quit [Ping timeout: 255 seconds]
davidinux has quit [Ping timeout: 276 seconds]
davidinux has joined #yocto
<kanavin_>
thomasd13, I'm not sure about capacity - I'm not involved in staff allocation or project management :)
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
Thorn has joined #yocto
<RP>
mcfrisk: I wonder why our normal network disabling didn't catch that. Was that with something in core?
<mcfrisk>
RP: an old issue, I was using dunfell at the time so there was no support for the bitbake network disable. BTW that code was even using plain IP addresses to avoid DNS
<mcfrisk>
hnez: I'm kind of a bitbake person so would like to do that in the recipe devshell but it seems that all these cargo updates are tricky there even if I manage to update Cargo.lock and Cargo.toml correctly.
<kanavin_>
mcfrisk, there is an oe-core class to update the crate lists in recipes from what is declared in Cargo.lock
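That class is cargo-update-recipe-crates; a rough sketch of how it is typically wired up (the exact include file name and wiring vary per recipe):

    # in the recipe
    inherit cargo-update-recipe-crates
    require ${BPN}-crates.inc

    # after changing Cargo.lock, regenerate the crate:// SRC_URI list with:
    #   bitbake -c update_crates <recipe>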
<RP>
mcfrisk: lovely :(
florian__ is now known as florian
<JaMa>
RP: me too, some of them seem to be caused by my pending runqemu changes (from the next batch), but I haven't really looked yet (other than restarting the failed tests in the background to see if the issues are reproducible)
* mcfrisk
goes down the rabbit hole of cyclic crate dependencies, aargh...
ptsneves has joined #yocto
sgw has quit [Ping timeout: 268 seconds]
sgw has joined #yocto
azcraft has joined #yocto
<jclsn>
join #kitty
florian_kc has quit [Quit: Ex-Chat]
<mcfrisk>
is there a way to use bitbake devshell to modify a rust recipe and its crate dependencies? "crate update" just sees the packages downloaded in SRC_URI, but I'd like to test changes so that I can generate the new SRC_URI..
starblue has quit [Ping timeout: 276 seconds]
starblue has joined #yocto
<barath>
when populating an ext4 image file using mkfs.ext4, image_types.bbclass sets -i 4096 by default, but this only makes sense if mkfs's blocksize and inode_size "match" a bytes-per-inode value of 4096. The default blocksize, inode_size etc. can differ between distros, as each distro may ship a different mke2fs.conf file
<barath>
would it make sense to also set default values for those other mkfs parameters? I discovered this while debugging why image generation failed on one host but not on another, and the reason was that the two hosts had different mke2fs.conf files
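One way to make the result independent of any host mke2fs.conf is to pin the other parameters next to the existing -i 4096 default, e.g. in the image recipe or local.conf (values illustrative):

    EXTRA_IMAGECMD:ext4 = "-i 4096 -b 4096 -I 256"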
seninha has joined #yocto
wooosaiiii has quit [Quit: wooosaiiii]
wooosaiiii has joined #yocto
wooosaiiii has quit [Client Quit]
wooosaiiii has joined #yocto
<rburton>
barath: what release? i thought we fixed something like that already
<rburton>
barath: ah i'm thinking about the inode size in small file systems. shouldn't mkfs be using the mke2fs.conf we install in e2fsprogs-native though, so you shouldn't get different behaviour?
invalidopcode1 has quit [Remote host closed the connection]
invalidopcode1 has joined #yocto
<barath>
kirkstone, but looks like the default -i 4096 argument has been there a while
<barath>
rburton: hm I'll double check whether it is/should be using the e2fsprogs-native conf file... it doesn't look like it is using that in my case
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<barath>
thanks for the pointers rburton! the e2fsprogs-native conf file from upstream is indeed used by default and everything is fine
invalidopcode1 has quit [Remote host closed the connection]
invalidopcode1 has joined #yocto
Schlumpf has quit [Quit: Client closed]
Xagen has joined #yocto
zpfvo has quit [Ping timeout: 255 seconds]
zpfvo has joined #yocto
<landgraf>
I'm trying to clone linux-yocto (single branch v6.1/standard/bcm-2xxx-rpi) from git.yoctoproject.org to a digitalocean machine (fast internet connection) and it works with --depth=1 but fails without it (fatal: fetch-pack: invalid index-pack output). Is git.y.o overloaded?
sakoman has joined #yocto
kscherer has joined #yocto
rob_w has quit [Quit: Leaving]
roussinm has joined #yocto
<RP>
JaMa: I merged some, dropped a couple and will ponder some a bit further :)
<RP>
landgraf: I don't know. It is a bit early for our US based sysadmins too :/
<RP>
landgraf: you could email helpdesk@yoctoproject.org and report it ?
<landgraf>
RP: I'll give it another try and report if it fails.
landgraf has quit [Ping timeout: 255 seconds]
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
ilunev has joined #yocto
landgraf has joined #yocto
nemik has quit [Ping timeout: 256 seconds]
nemik has joined #yocto
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Xagen has joined #yocto
prabhakarlad has quit [Quit: Client closed]
|Xagen has joined #yocto
nemik has quit [Ping timeout: 255 seconds]
nemik has joined #yocto
prabhakarlad has joined #yocto
Xagen has quit [Ping timeout: 240 seconds]
<rburton>
barath: so what was breaking for you?
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
tleb has quit [Remote host closed the connection]
bryanb has quit [Remote host closed the connection]
<barath>
rburton: I'm transitioning us from dunfell to kirkstone and we lost the mklibs optimization. We were already close to our max rootfs size and now we've reached it. I got confused because running the same mkfs.ext4 on my host worked, whereas it failed in yocto, which runs in docker
bryanb has joined #yocto
tleb has joined #yocto
<barath>
Turned out my host has an mke2fs.conf with a smaller default inode size (and block size) than the defaults used by poky / e2fsprogs
frieder has quit [Ping timeout: 240 seconds]
<barath>
*running the same mkfs.ext4 command
<barath>
An inode size of 128 vs 256 made it work, though that's a bad workaround
<barath>
For us
<rburton>
barath: ah got it
<rburton>
barath: i see mklibs has finally been ported to py3 upstream, so it shouldn't be difficult at all to resurrect it
frieder has joined #yocto
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
gsalazar has quit [Remote host closed the connection]
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
invalidopcode1 has quit [Remote host closed the connection]
invalidopcode1 has joined #yocto
amelius has joined #yocto
seninha has quit [Quit: Leaving]
|Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Xagen has joined #yocto
<barath>
rburton: ah cool, thanks for the heads up, I'll look into that :)
|Xagen has joined #yocto
Xagen has quit [Ping timeout: 260 seconds]
<JaMa>
RP: thanks
tleb has quit [Remote host closed the connection]
bryanb has quit [Remote host closed the connection]
vladest has quit [Ping timeout: 246 seconds]
<JaMa>
RP: there was one more "safe" for meta-yocto if you want, but no rush
prabhakarlad has quit [Quit: Client closed]
bryanb has joined #yocto
tleb has joined #yocto
prabhakarlad has joined #yocto
thomasd13 has quit [Ping timeout: 276 seconds]
Net147_ has quit [Ping timeout: 256 seconds]
pope has quit [Quit: Client closed]
zpfvo has quit [Ping timeout: 240 seconds]
zpfvo has joined #yocto
Net147 has joined #yocto
Net147 has joined #yocto
Net147 has quit [Changing host]
frwol has quit [Quit: leaving]
seninha has joined #yocto
seninha has quit [Remote host closed the connection]
seninha has joined #yocto
<rfs613>
just noticed that the openssl 1.1.1 branch goes EOL this September. What will happen to dunfell at that time, no more SSL updates? I'm guessing updating to 3.1 is not under consideration?
<michaelo>
Hi RP: thanks for the reply about "argp". I'll send a patch after 4.2 is out, right?
<rfs613>
to clarify, I mean openssl 3.1 or 3.0 (LTS)
<RP>
michaelo: we can just do that now
<RP>
rfs613: A major openssl version change would likely be painful on dunfell
<michaelo>
RP: ok, great, thanks
<michaelo>
RP: oops, my bad, I was checking the kirkstone branch, not master. It has already been removed.
<RP>
michaelo: I wondered why I couldn't find it!
leon-anavi has quit [Quit: Leaving]
zpfvo has quit [Quit: Leaving.]
florian_kc has joined #yocto
ptsneves has quit [Ping timeout: 276 seconds]
amelius has quit [Read error: Connection reset by peer]
florian has quit [Quit: Ex-Chat]
Bardon has joined #yocto
Bardon_ has quit [Ping timeout: 252 seconds]
florian_kc has quit [Ping timeout: 252 seconds]
sgw has quit [Quit: Leaving.]
sgw has joined #yocto
invalidopcode1 has quit [Remote host closed the connection]
invalidopcode1 has joined #yocto
alessioigor has quit [Quit: alessioigor]
frieder has quit [Remote host closed the connection]
olani- has quit [Ping timeout: 265 seconds]
olani- has joined #yocto
olani- has quit [Ping timeout: 265 seconds]
<kanavin_>
rfs613, you can do local testing and propose a mixin layer, but it won't go into dunfell proper
<kanavin_>
rfs613, yocto LTS means there is a dedicated person reviewing, testing and publishing patches, but there is no promise to 'fix stuff' otherwise
<kanavin_>
project users need to take care of that part
<rfs613>
kanavin_: yup, sakoman and I are acquainted already ;-) I just wondered if for something as critical as SSL (or maybe glibc) there might be a different policy.
<rfs613>
most users of course will see "Supported until Aug 2024" on the releases page, and will (probably rightfully) assume everything is hunky-dory until then.
<rfs613>
kanavin_: yup, understood, we talked about that for docker updates a while back
<kanavin_>
rfs613, there's no magic trick to get around the fact that we do not have people to staff a security team
<kanavin_>
rfs613, we produce CVE reports weekly, if entries in those are concerning, fixing them needs to be done by the project users
<rfs613>
indeed, and it is difficult to know when a given project will decide to stop supporting a certain version
<sakoman>
rfs613: Dunfell support ends April 2024, so for the last 6 months of life we'll have to rely on patches to ssl 1.1.1
<rfs613>
sakoman: if we can get them ;-)
<sakoman>
rfs613: indeed, hopefully the community will step up if something serious arises
<kanavin_>
rfs613, you should move onto a newer yocto before that really
<kanavin_>
and generally think of your lifecycle management, and codify that as a policy
<rfs613>
this makes me wonder how many other existing packages are in a similar position - upstream support ended, and we are relying on backports done by volunteers, or not at all
<sakoman>
rfs613: I'm sure there are many!
<rfs613>
having that info might help sell a version upgrade to management...
<kanavin_>
most upstreams do not support anything except the latest released version
<rfs613>
the folks i'm working with at least have plans to move to kirkstone... though I suspect there will be dunfell holdouts for quite a while yet...
<kanavin_>
rfs613, yes, security can be a major selling point to not stick with the old stack. Another is that falling far behind upstream means unpredictable effort when you do need to upgrade (e.g. not possible to estimate or plan).
<sakoman>
rfs613: kirkstone support is also scheduled to end in April 2024
<rfs613>
been on both ends of that stick, I can appreciate product owners being fearful of changing stuff
<sakoman>
rfs613: dunfell support was extended two years as an experiment
<kanavin_>
I have an even more radical view: everyone should be doing product development on top of master, and branch off to LTS for shipping products
<rfs613>
kanavin_: and for folks with 10+ year product life cycles?
florian_kc has joined #yocto
<kanavin_>
rfs613, push software updates to deployed products that take them from one LTS to a newer LTS
<sakoman>
rfs613: those folks should have developed a plan before committing to 10 years!
<fury>
what am i supposed to see when core-image-sato finishes booting up? right now all i'm getting is a blank black screen after the boot progress bar goes to full
<sakoman>
(and included the cost of that support in their plans!)
<rfs613>
and they also need a crystal ball to predict the shutdown of the 2G cell network, they need double or triple the flash space (because everything gets bigger over time), etc.
<kanavin_>
rfs613, 'product owners being fearful of changing stuff' == 'our testing sucks or is non-existent'
<rfs613>
those are do-able but at a cost.. not going to work in many situations
<sakoman>
rfs613: In that case they shouldn't be committing to 10 years!
<rfs613>
hehe half the time it takes them 2 years just to develop the product... it's obsolete on launch day ;-)
<kanavin_>
sakoman, you should give a talk with your perspectives in a year's time
<sakoman>
kanavin_: I suspect it would be a quite boring talk