<RP>
JPEW: I think python is a bit cleverer about how it handles the gc in forks but I'm still worried about the memory usage, particularly as I think the kernel OOM killer might not understand the shared pages
azcraft has quit [Remote host closed the connection]
florian_kc has quit [Ping timeout: 268 seconds]
yann has quit [Remote host closed the connection]
seninha has quit [Remote host closed the connection]
sakoman has quit [Quit: Leaving.]
nerdboy has quit [Ping timeout: 255 seconds]
nerdboy has joined #yocto
Ram-Z has quit [Ping timeout: 252 seconds]
Ram-Z has joined #yocto
davidinux has quit [Ping timeout: 260 seconds]
davidinux has joined #yocto
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
jmk1 has quit [Read error: Connection reset by peer]
<JaMa>
will try to remember that next time I'm adding a lot of print() in some bbclass
<rburton>
yeah, it's neat
<rburton>
as great as bb.plain(f"{foo=}") is, pysnooper is better
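A minimal sketch of the bb.plain f-string pattern rburton mentions, inside a hypothetical bbclass Python task (task and variable names are illustrative):
```
# hypothetical bbclass fragment; bb.plain prints to the console during the task
python do_show_vars() {
    pn = d.getVar('PN')
    pv = d.getVar('PV')
    bb.plain(f"{pn=} {pv=}")   # e.g. pn='busybox' pv='1.35.0'
}
addtask show_vars after do_configure
```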
florian_kc has quit [Ping timeout: 255 seconds]
<mabnhdev>
rburton For various reasons I think I need to downgrade Kirkstone's python to 3.9. I copied Honister's 3.9 python into my meta layer and fixed the parse errors. However, when I try to build, I'm getting build errors while trying to build python3-installer. The 3.9 python doesn't provide python3-installer and I think it's being grabbed from
<mabnhdev>
poky/meta. I'm trying to figure out why python3-installer is even being built, since python 3.9 doesn't provide it. Any ideas?
<rburton>
python3-installer isn't part of python, it's a dependency needed to build python code
<mabnhdev>
Is that dependency new to Kirkstone?
<rburton>
yes
<rburton>
it should work with 3.7 onwards
<rburton>
obviously the right thing to do here is to fix whatever makes you need 3.9 so that it works with 3.10, not backport 3.9
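One hedged way to debug what mabnhdev describes: bitbake's -g option dumps the task dependency graph, which should show what pulls python3-installer into the build (the image name is a placeholder):
```
# dump the dependency graph for the failing build target
bitbake -g core-image-minimal
# then look for the edge that pulls in python3-installer
grep installer task-depends.dot
```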
mvlad has quit [Remote host closed the connection]
mvlad has joined #yocto
<mabnhdev>
Ah, that's the rub. My hand is being forced by needing to stay on OpenSSL 1.0.2 for another 9 months or so - we're working on upgrading to 3.0. Python 3.10 dropped support for OpenSSL 1.0.2 and now needs 1.1.1 or greater. This is all a transition effort.
<mabnhdev>
Once we're compatible with OpenSSL 3.0, I can drop all this transition hacking.
<kanavin>
mabnhdev, moving major pieces between yocto releases is asking for pain and suffering.
<mabnhdev>
kanavin Don't I know it ;)
<mabnhdev>
This is a short-ish term hack to keep making progress. I'm trying to jump from Yocto LTS to LTS.
<kanavin>
if you are staying on openssl 1.0.2, then you have to stay on a yocto release which has 1.0.2
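For reference, the transition hack mabnhdev describes boils down to a version pin; a hypothetical local.conf/distro fragment, assuming an openssl_1.0.2* recipe is carried in his own layer:
```
# pin openssl to the backported recipe; '%' matches any 1.0.2 point release
PREFERRED_VERSION_openssl = "1.0.2%"
PREFERRED_VERSION_openssl-native = "1.0.2%"
```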
<rburton>
the openssl carnage isn't fun or easy
<rburton>
though installer claims to work with 3.7 onwards, so you should debug that
<kanavin>
and jumping from LTS to LTS isn't a great idea either, you do need to set up branches and a build that tracks upstream master milestones
<kanavin>
I know almost every yocto user does big-bang transitions; still a bad idea in my eyes
<mabnhdev>
Isn't that the idea behind LTS: release stability for years? I'm currently based on Dunfell, looking to advance to Kirkstone before Dunfell EOL. Rinse, wash, repeat.
<rburton>
sure, but it's a big transition that's harder to do.
<kanavin>
No. The idea is to avoid accumulating years of technical debt by staying on a stale LTS release.
<rburton>
easier not to do it in stages, because you end up with situations like "we need openssl 1.0, but python 3.10 won't work with it, but we need python 3.10 for something else"
<kanavin>
Rather, you follow master, then when a new LTS release happens, you branch to that, and continue following master.
<mabnhdev>
agreed, from a Yocto POV. But the non-Yocto side of things is much more difficult to transition, so it's much easier if I reduce Yocto version thrashing.
<JaMa>
agreed, also easier to fix regressions as soon as they are introduced in master, instead of trying to bisect what broke your code/layer in the last 2 years
<JaMa>
non-yocto side teams can use whatever released LTS the company uses, but the build/yocto team should track master in parallel
<mabnhdev>
JaMa in a perfect world.
<JaMa>
I've been living in that perfect world for the last 10+ years, doing that as a 1-person yocto team for really big builds
<rburton>
yeah but i bet you never deployed anything at scale
<rburton>
(runs)
Notgnoshi has joined #yocto
<kanavin>
no more retro x86 \o/
<kanavin>
we can claim superiority over all 'classic' distros now :)
<kanavin>
and parity with a certain x86 demo distro by Intel
<kanavin>
the likes of RHEL still have to work on e.g. pre-2022 Atoms
<JaMa>
luckily I no longer use AMD Bulldozer for OE builds :)
<kanavin>
JaMa, this wouldn't break the builds, only runqemu (and only if using kvm)
<JaMa>
IIRC IvyBridge wasn't new enough for that either, and qt was failing at runtime with Illegal Instruction
<rburton>
yeah i remember fun and games with clearlinux when i had an ivybridge xeon
<kanavin>
rburton, I think nowadays they build everything three times
<kanavin>
v2, v3, v4
<kanavin>
I think it's kind of hilarious that intel made huge amounts of noise about avx512 being glorious, then dropped it from products accessible to regular people
<rburton>
is elfutils failing to build debuginfod for everyone else with master?
<rburton>
ah
<rburton>
kanavin: your elfutils fix for curl was BUILD_CFLAGS so doesn't fix the target build
bps2 has quit [Ping timeout: 246 seconds]
gho has quit [Quit: Leaving.]
gho has joined #yocto
leon-anavi has quit [Quit: Leaving]
ccf has joined #yocto
victoridaho[m] has joined #yocto
Guest59 has joined #yocto
<Guest59>
Is this an acceptable place to ask questions related to a problem I am having?
<abelloni>
probably
<victoridaho[m]>
Ok. I am seeing the error "ERROR: os-release-1.0-r0 do_package_qa: QA Issue: os-release rdepends on os-release-dev [dev-deps]". I understand what it means but I can't seem to find any info on how to fix it.
dmoseley has quit [Ping timeout: 260 seconds]
<JaMa>
victoridaho[m]: have you read log.do_package?
<JaMa>
it often shows why this is happening
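The log JaMa points at lives under the recipe's work directory; a shell sketch assuming the default TMPDIR layout (the wildcards stand in for the machine/arch and version directories):
```
# inspect the packaging log for the failing recipe
less tmp/work/*/os-release/*/temp/log.do_package
```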
<Guest59>
It has the following:
<Guest59>
NOTE: Package os-release-dbg skipping libdir QA test for PACKAGE_DEBUG_SPLIT_STYLE equals debug-file-directory
<Guest59>
I'm trying to add "perf" to the distribution and have added this to local.conf:
<kanavin>
rburton, I'd like to understand how the elfutils fix made it through all the builds though
<kanavin>
building it myself now
florian has quit [Ping timeout: 268 seconds]
florian has joined #yocto
gho has quit [Quit: Leaving.]
<rburton>
kanavin: i'm surprised nothing in qa built target elfutils
<kanavin>
rburton, I just built target elfutils. And world builds include it too.
Estrella has quit [Remote host closed the connection]
mabnhdev has quit [Quit: Client closed]
manuel1985 has quit [Ping timeout: 265 seconds]
<RP>
rburton: was this not clang specific?
Payam has joined #yocto
<rburton>
hm. just had another look at the builds, it only triggered on armgcc.
<rburton>
i wonder why that's emitting different deprecation warnings?
<Payam>
hi, I know I have asked this before but I've already forgotten: bitbake first looks at DL_DIR if specified, then it looks at the mirror if specified, right? How about sstate? Does it look at the local one and then the remote?
<rburton>
Payam: it would be madness to look at remote first, wouldn't it?
<Payam>
rburton, yes, but I want to make sure that it looks at the mirror and doesn't just skip it
<kanavin>
rburton, I'm looking at target log.do_compile, and I see no deprecation warnings at all :-/
<kanavin>
for native they're there
<rburton>
Payam: do_fetch will check the file is in DL_DIR and if that fails, try "the network", that is PREMIRRORS then SRC_URI then MIRRORS.
<rburton>
Payam: sstate lookups will try SSTATE_DIR first and then SSTATE_MIRRORS
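A minimal local.conf sketch of the lookup order rburton describes; the mirror URLs are placeholders:
```
# 1) DL_DIR is checked first for an existing download
DL_DIR ?= "${TOPDIR}/downloads"
# 2) then PREMIRRORS, 3) the upstream SRC_URI itself, 4) MIRRORS last
PREMIRRORS:prepend = "\
    git://.*/.*    http://mirror.example.com/sources/ \
    https?://.*/.* http://mirror.example.com/sources/ "
MIRRORS:append = " https?://.*/.* http://fallback.example.com/sources/ "
# sstate: SSTATE_DIR is checked before SSTATE_MIRRORS
SSTATE_DIR ?= "${TOPDIR}/sstate-cache"
```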
<Payam>
What do you mean by MIRRORS? you mean premirror:append?
<rburton>
kanavin: some gcc thing. makes me wonder if our gcc is doing the right thing
<rburton>
Payam: i mean the variable MIRRORS
<Payam>
ah
<rburton>
everything in uppercase is a variable name
<Payam>
rburton, as you know I uploaded the sstate to s3, and when I download it and want to use it, it takes a lot of time, almost like it doesn't use it at all.
<rburton>
without your config, your setup, your logs, it's literally impossible to help
<rburton>
bitbake will tell you exactly how much sstate its using
<rburton>
"Sstate summary: Wanted 92 Local 58 Mirrors 0 Missed 34 Current 294 (63% match, 91% complete)"
<JaMa>
"when I download it" you seem to run in circles
<rburton>
also i hope you're not downloading the sstate before the build; instead, use the s3 bucket as an sstate mirror. downloading all the sstate will get increasingly slow as it grows.
<rburton>
of course the easier solution is to not use s3, but we told you this yesterday.
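A hedged sketch of rburton's suggestion: leave the cache in the bucket and let bitbake fetch individual objects on demand, e.g. by exposing it over plain HTTP (the bucket URL is hypothetical):
```
# local.conf: query the S3 bucket as a read-only sstate mirror instead of syncing it
SSTATE_MIRRORS ?= "file://.* https://my-bucket.s3.amazonaws.com/sstate/PATH;downloadfilename=PATH"
```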
<rburton>
kanavin: so why is our gcc not emitting deprecation warnings?
<kanavin>
rburton, curl.h makes those conditional (look for #define CURL_DEPRECATED at the top), and I'm trying to figure out if the condition differs between native and target builds somehow
<rburton>
i presume gcc12 doesn't pretend to be __INTEL_COMPILER :)
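One way to sanity-check the kind of guard being discussed: compare the predefined macros of the native and target compilers to see whether curl.h's compiler/version condition should fire (the cross-compiler name is an example):
```
# native gcc
echo | gcc -dM -E - | grep -E '__GNUC__|__INTEL_COMPILER'
# target cross-gcc from the build environment
echo | x86_64-poky-linux-gcc -dM -E - | grep -E '__GNUC__|__INTEL_COMPILER'
```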
prabhakarlad has quit [Quit: Client closed]
demirok has joined #yocto
<RP>
kanavin: thanks for the qemu fixes btw. We're a little backlogged on patches but getting there :)
prabhakarlad has joined #yocto
<kanavin>
RP: cheers. We should be in pretty good shape overall now on up-to-dateness.
<kanavin>
the next AUH should not bomb the list like the last few
<khem>
mcfrisk: If meta-clang master still works with kirkstone, that's a side effect. I would expect it to break at some point since it's not a validated combination. If someone is interested in keeping master of meta-clang working with kirkstone layers, I am happy to take patches
<RP>
kanavin: Getting things caught up is very much appreciated!
<kanavin>
RP: thanks :-)
<kanavin>
RP: by the way, this x86-64-v3 feature was not asked for by intel or anyone at LX. it's something of a personal peeve of mine.
<kanavin>
RP: we'll get better performance out of mesa's software renderer that way, for example.
<RP>
kanavin: I gathered that, I think it is nice to have. I'm just hoping we run on hosts that can handle it
<kanavin>
RP: for builds and non-kvm runqemu we're fine. For runqemu kvm, yes, it will crash out with illegal instruction on anything pre-haswell, or on any Atom cpu that's over a year old (people do run yocto builds on all sorts of weird hosts, so you never know).
<kanavin>
RP: and before you know it, chatgpt and other AI in yocto will be a major use case, so I can imagine vector instructions would come in super handy to train and exercise those neural nets :)
<JaMa>
khem: I've seen them, but I haven't set a build to reproduce those 64bit time_t issues, so I might just test it in regular 64bit build
<RP>
kanavin: the AI can take care of those issues though right? :)
dmoseley has quit [Ping timeout: 252 seconds]
florian has quit [Ping timeout: 252 seconds]
<kanavin>
RP: yes, as long as it doesn't decide that to achieve that goal most efficiently it also needs to kill all humans.
<kanavin>
(also known as 'the paperclip argument')
<RP>
kanavin: I'm sure the software engineers will have put the right safeguards in
<kanavin>
RP: the problem is, AI will develop to a point where it is smarter than all of the human intelligence combined. Then it will work around any possible safeguards, and quicker than those can be built.
<kanavin>
so it shouldn't be connected to the internet, except that ship has sailed :-)
<kanavin>
the most convincing argument I've heard is that before AI is able to destroy the world in a competent manner, it will try to do so incompetently - kind of like chatgpt writing about yocto today
<kanavin>
at that point we can catch it and try to figure out how it got to that decision
<JaMa>
then a solar flare will kill the AI and humankind will start again from scratch (saw some post like that today, but cannot find it now)
<JaMa>
khem: thanks, that was the bit I saw on the ML, but then didn't see in oe-core/yoe/mut to cherry-pick from
gsalazar has joined #yocto
<JaMa>
as I didn't notice that you've switched to the poky repo as a base (which explains the last oe-core/yoe/mut change being a couple of months ago)
<kanavin>
rburton, I confirmed that curl's deprecation declarations have no effect on our target gcc 12.x :-/ do we do something to it that subverts the checks?
<rburton>
not afaik. i just checked that the sysroot poison thing didn't break warning flags and it looks right
<rburton>
khem: ^^^ help!
<rburton>
i need to go now
<kanavin>
i'd like to get a glass of wine, and be afk too :-)
<khem>
rburton: interesting hmm
<khem>
let me take a look
Vonter has quit [Quit: WeeChat 3.7.1]
dgriego has joined #yocto
invalidopcode has quit [Remote host closed the connection]
<uniqdom>
Hello, I need to run the base64 command inside a recipe, but it fails as it seems to not be available when using bitbake, i.e. "Command not found".
<uniqdom>
what can I do to have base64 available in the recipe?
<uniqdom>
Just to be clear, I don't need base64 in the image.
<kanavin>
uniqdom, DEPENDS += "coreutils-native"
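A minimal recipe sketch of kanavin's answer; the file names in do_install are hypothetical:
```
# coreutils-native puts base64 on the PATH of the recipe's tasks
DEPENDS += "coreutils-native"

do_install:append() {
    install -d ${D}${sysconfdir}
    # hypothetical use: encode a file shipped with the recipe
    base64 ${WORKDIR}/key.bin > ${D}${sysconfdir}/key.b64
}
```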
<JPEW>
hmm, I don't think my builds have been going _that_ long: `qemu-native-7.2.0-r0 do_fetch - 172h22m54s`
<JaMa>
hehe
<uniqdom>
kanavin: THANKS A LOT!!!
manuel1985 has joined #yocto
sakoman has quit [Quit: Leaving.]
sakoman has joined #yocto
gsalazar has quit [Ping timeout: 248 seconds]
Guest59 has quit [Quit: Client closed]
Estrella has joined #yocto
Estrella__ has quit [Ping timeout: 252 seconds]
Estrella___ has quit [Ping timeout: 260 seconds]
Estrella_ has joined #yocto
dmoseley has joined #yocto
dmoseley has quit [Ping timeout: 256 seconds]
Payam has quit [Quit: Leaving]
nemik has quit [Ping timeout: 272 seconds]
nemik has joined #yocto
uniqdom has quit [Ping timeout: 268 seconds]
manuel1985 has quit [Ping timeout: 264 seconds]
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
prabhakarlad has quit [Quit: Client closed]
dmoseley has joined #yocto
dmoseley has quit [Ping timeout: 268 seconds]
dmoseley has joined #yocto
dmoseley has quit [Ping timeout: 272 seconds]
uniqdom has joined #yocto
uniqdom has quit [Client Quit]
uniqdom has joined #yocto
Haxxa has quit [Ping timeout: 268 seconds]
Haxxa has joined #yocto
BobPungartnik has joined #yocto
azcraft has quit [Remote host closed the connection]
BobPungartnik has quit [Client Quit]
<uniqdom>
I wonder what's the error here, can't figure out from the output. Paste of the build and recipe here: https://paste.debian.net/1266377/
<uniqdom>
can you confirm whether "install" doesn't like the * wildcard?
<tangofoxtrot>
uniqdom, you should not need the '*' in the FILES stanza, and I'd recommend specifying every directory and file needed in your install_append(). Alternatively you could use cp -r or something similar, but I've found it's best to be precise.
<uniqdom>
tangofoxtrot: Thanks, I'm going to change that
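In the spirit of tangofoxtrot's advice, a hypothetical fragment that installs files one by one rather than relying on wildcards (paths and names are illustrative):
```
do_install:append() {
    install -d ${D}${datadir}/myapp
    install -m 0644 ${S}/conf/myapp.conf ${D}${datadir}/myapp/myapp.conf
}

# no wildcard needed; package the directory explicitly
FILES:${PN} += "${datadir}/myapp"
```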
nemik has quit [Ping timeout: 260 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 272 seconds]
nemik has joined #yocto
<JaMa>
RP "I'm just hoping we run on hosts that can handle it" heh looks like our hosts aren't, a lot of recipes failing today with "fribidi-1.0.12/meson.build:1:0: ERROR: Executables created by c compiler x86_64-webos-linux-gcc -m64 -march=x86-64-v3 -fstack-protector-strong -O2 -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security -Werror=return-type
<JaMa>
--sysroot=/data001/jenkins/gecko/webos_qemux86-64/build/BUILD/work/qemux86_64-webos-linux/fribidi/1.0.12-r0/recipe-sysroot are not runnable."
demirok has quit [Quit: Leaving.]
<JaMa>
fribidi, orc, libsigc++-2.0, xorgproto, pixman, libdrm, glib-2.0, systemd-boot, systemd, iputils in do_configure (all using meson), nodejs do_compile (also uses qemu to run v8 mksnapshot - Illegal instruction (core dumped))
bps2 has joined #yocto
florian has quit [Ping timeout: 252 seconds]
mvlad has quit [Remote host closed the connection]
<RP>
JaMa: hmm, that is qemu failing to run target binaries? :(
<JaMa>
RP: yes, I'll find out what CPUs we have in the builders and in the worst case change DEFAULTTUNE for our qemux86-64 builds until we're ready to upgrade (cannot ssh into the builders rn, will check on Monday)
<RP>
JaMa: ok, I guess it should be easy to switch back to an older tune
invalidopcode has quit [Remote host closed the connection]
invalidopcode has joined #yocto
<JaMa>
RP: yes, tune-x86-64-v3.inc includes the "old" tune-corei7.inc as well, so it should be easy
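For reference, the fallback JaMa describes would be a one-line tune override; a local.conf sketch, assuming the machine otherwise defaults to the x86-64-v3 tune:
```
# revert qemux86-64 to the pre-v3 tune that tune-x86-64-v3.inc still includes
DEFAULTTUNE = "corei7-64"
```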