kanavin has quit [Remote host closed the connection]
efeschiyan has quit [Quit: <insert a ninja vanishing trick with a snowboard>]
olani_ has quit [Ping timeout: 246 seconds]
olani_ has joined #yocto
olani has quit [Ping timeout: 260 seconds]
olani has joined #yocto
<atripathi>
GA/GM folks.
<atripathi>
Is there a crops/poky container built for arm64 that can be used on a Mac with the M1 chip?
<atripathi>
Or do I need to use a VM with a Linux guest for building Yocto? Please recommend.
lars__ has joined #yocto
<lars__>
Good morning. I'm having trouble getting Yocto to build on a GitHub runner. For some reason it keeps segfaulting in do_image_ext4. There is enough space, and the same runner has successfully built other Yocto repos.
<geoffhp>
/quit
geoffhp has quit [Quit: Leaving]
olani has quit [Remote host closed the connection]
<varjag>
if one needs a product/license key in a recipe from an external file (which should not be stored with the recipe)
<varjag>
is there a canonical way of doing that?
<varjag>
just adding it to SRC_URI?
florian has joined #yocto
prabhakarlad has joined #yocto
ptsneves has joined #yocto
rfuentess has quit [Ping timeout: 252 seconds]
leon-anavi has joined #yocto
Schlumpf has quit [Ping timeout: 246 seconds]
ckayhan has joined #yocto
<ckayhan>
Hello, how do I add locales to Petalinux 2022.2?
<KanjiMonster>
varjag: depending on the size of the license key (and data type), you could also create it dynamically with content from a variable the user is expected to set in their configuration
<varjag>
right
<KanjiMonster>
at least that's how I would approach this. might be a bit more CI friendly
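A minimal sketch of the approach KanjiMonster describes, assuming the key is a short string the user puts in local.conf (or injects via CI); the variable name LICENSE_KEY and the install path are illustrative only, not an established convention:
```
# Hypothetical recipe fragment: the user sets LICENSE_KEY in their
# configuration (e.g. local.conf) and the recipe writes it out at
# install time instead of fetching a file via SRC_URI.
LICENSE_KEY ??= ""

# re-run the task (and change its sstate signature) when the key changes
do_install[vardeps] += "LICENSE_KEY"

do_install:append() {
    if [ -n "${LICENSE_KEY}" ]; then
        install -d ${D}${sysconfdir}
        printf '%s\n' "${LICENSE_KEY}" > ${D}${sysconfdir}/product.key
        chmod 0600 ${D}${sysconfdir}/product.key
    fi
}
```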
<RP>
one in selftest with getopt, one in rust create_spdx with a missing file, a libacl error from patches in -next, the tar warnings from sstate and maybe more, I've not looked
<nedko>
hello, i'm new to yocto and i'm thinking about using yocto for packaging a gentoo derivative for proaudio (ladios). what part of the yocto build process requires 8G RAM? on the tegra-nano here i have only 4G (GDDR) RAM and was able to build quite a lot, although with a lower than 4 job count
rfuentess has quit [Remote host closed the connection]
rfuentess has joined #yocto
atripathi has quit [Quit: Client closed]
<ptsneves>
nedko: any special reason you are building on a tegra and not on something more usual? Even a normal laptop has more than 4GB ram
<nedko>
ptsneves: "normal" (x86_64) computer are less secure because they come with spy software that is very hard to remove
<nedko>
sometimes even coreboot/libreboot is not enough
<ptsneves>
nedko: Oh you do not trust a normal desktop to build something for your embedded target?
<nedko>
so i prefer to buy arm, there at least the possible backdoors will be rendered to transistors (unless arm puts microcode, meh :)
<nedko>
ptsneves: i dont have "normal" desktop
<nedko>
my desktop is the tegra board
<nedko>
attached to my monitor, i run ardour, kicad and freecad on it
prabhakarlad has quit [Quit: Client closed]
<ptsneves>
nedko: Cool :) Ok, what image are you trying to build?
<nedko>
ptsneves: for ladios i target more than one. for sure one for the jetson tegra board(s), wherever possible for the lots of olinuxinos i have, for teres-1 for sure, and also x86_64 with grub
<nedko>
with the x86_64 scenario probably being cross-compiled
<ptsneves>
I mean what yocto image recipe are you trying to build? If you try to build desktop stuff i am pretty sure you will not be able to build it with 4GB of ram
<nedko>
that's somewhat bad but i guess it can be improved
<nedko>
i mean, with per-package job reduction on gentoo i'm able to build llvm, firefox, etc
<nedko>
nodejs is a bit slow :]
<nedko>
that is on arm32 tho: 2023-08-17T00:30:30 >>> net-libs/nodejs: 2 days, 6 hours, 47 minutes, 21 seconds
<RP>
ptsneves: nothing special should be needed. I added your patch to master-next and I don't think I saw any failure like that so perhaps it is a different change in that branch?
<ptsneves>
RP: Thanks. @abelloni let me know if you can reproduce the issue with the patch i sent
<RP>
lucaceresoli: ^^^
<ptsneves>
RP: Ouch :D Thanks :)
prabhakarlad has quit [Quit: Client closed]
mbulut has joined #yocto
mbulut has quit [Quit: Leaving]
mbulut has joined #yocto
mbulut has quit [Client Quit]
mbulut has joined #yocto
mbulut has quit [Client Quit]
mbulut has joined #yocto
Schlumpf has quit [Quit: Client closed]
tgamblin has joined #yocto
<rburton>
nedko: you know modern arm boards have several layers of firmware operating at a higher level than the kernel, so if nvidia want to spy on you they absolutely can, right?
<rburton>
RP: urgh.
<nedko>
rburton: true :) but i also have olimex boards with PRC chips
atripathi has joined #yocto
Saur has quit [Ping timeout: 240 seconds]
olani- has quit [Ping timeout: 246 seconds]
olani has quit [Ping timeout: 252 seconds]
lars__ has quit [Quit: Lost terminal]
Saur has joined #yocto
olani has joined #yocto
zpfvo has quit [Ping timeout: 250 seconds]
zpfvo has joined #yocto
zpfvo has quit [Ping timeout: 246 seconds]
zpfvo has joined #yocto
olani- has joined #yocto
belsirk has joined #yocto
rfuentess has quit [Ping timeout: 256 seconds]
prabhakarlad has joined #yocto
<leon-anavi>
nedko, all these boards are kind of constrained machines and the idea is to bitbake an image for them not on them :)
<nedko>
leon-anavi: i could do it with GNU make or with custom shell scripts (or with gentoo's catalyst)
<nedko>
for some targets i'd cross compile from a computer that has at least 2 or 4 G of RAM
<nedko>
otoh i have lots of 1G boards, so with distcc and proper setup for it, in theory i could scale the builds just by adding more boards to the local micro-cluster
<rburton>
apart from when the link stage needs 5gb of ram in a single process, sure
<nedko>
in practice i have only about 10-20 lime2s :D
<nedko>
scaling the nodejs build across about 64 * 1G boards is a lucrative target
<nedko>
rburton: 4G of RAM plus some zram + a few G of real swap can do good things
<nedko>
the more powerful jetson-tegra boards come with a higher TDP and a stock heatsink with a fan, while i want fanless
<nedko>
on the jetson board i use as a desktop, the USB3 disk IO speeds were impressive
<nedko>
4 USB3+ ports on board is a really nice feature
<nedko>
i think full thrashing starts at about 6.5G of virtual ram in active use
<tgamblin>
Are there any examples where a script from openembedded-core/scripts can be copied to an image rootfs?
<rburton>
write a recipe for it?
<tgamblin>
rburton: That's the idea, but I'm blanking on whether there are existing recipes that do similar
<rburton>
not afaik
<rburton>
you can probably use COREBASE to get the base path of oe-core in the SRC_URI
<tgamblin>
rburton: Alright, I'll do some experiments. Thanks!
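A rough sketch of rburton's COREBASE idea, assuming the script has a compatible license and no build-host-only dependencies; the recipe body, the script name and the MIT license checksum from common-licenses are illustrative placeholders:
```
SUMMARY = "Ship a script from oe-core's scripts/ directory in the image"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

# COREBASE points at the top of the oe-core/poky checkout; the
# script name below is a placeholder.
SRC_URI = "file://${COREBASE}/scripts/oe-some-script"

do_install() {
    install -d ${D}${bindir}
    install -m 0755 ${WORKDIR}/oe-some-script ${D}${bindir}/
}
```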
atripathi has quit [Quit: Client closed]
<JPEW>
RP well, I have the patch if you haven't fixed it already
<RP>
JPEW: I haven't
mabnhdev has joined #yocto
<mabnhdev>
Hi. I'm working on some kernel modifications and the do_kernel_metadata task fails in scc. I just get the generic error: bbfatal_log 'Could not generate configuration queue for qemuarm64.' I tried using -v (verbose) for the scc command, but it doesn't give me anything helpful. How can I figure out exactly what is causing the scc command
<JPEW>
RP: ok, I can send it after I take the kids to school
mabnhdev has quit [Quit: Client closed]
belsirk has quit [Ping timeout: 260 seconds]
rfuentess has joined #yocto
mabnhdev has joined #yocto
xmn has joined #yocto
mabnhdev has quit [Client Quit]
varjag has quit [Quit: ERC (IRC client for Emacs 27.1)]
Tyaku_ has joined #yocto
Chaser has quit [Quit: Chaser]
<Tyaku_>
Hello, I have added the "cronie" recipe to my yocto build to implement cron jobs. When I add a cron job in /etc/cron.d/dailyreboot.cron it is never executed, but if I do it via crontab -e it works. Is there anything to do to force crond to parse the cron jobs from /etc/cron.d?
<Tyaku_>
SHELL=/bin/bash
<Tyaku_>
48 13 * * * root /sbin/reboot -f
Xagen has joined #yocto
<ptsneves>
Tyaku_: Is cronie running on startup? Can it be the cause?
<Tyaku_>
i also tried this syntax: "53 13 * * * /sbin/reboot -f". Again, if I do it directly in the shell via the "crontab -e" command, it works.
<Tyaku_>
It's only when the file is in /etc/cron.d/ that it doesn't work.
<Tyaku_>
Hum, I'm going to check
speeder has joined #yocto
sakoman has quit [Quit: Leaving.]
<Tyaku_>
"/usr/sbin/crond -n" is started
<Tyaku_>
Wow, I found it in the logs of crond: "Aug 24 15:51:52 crond[368]: (root) BAD FILE MODE (/etc/cron.d/rebootdaily.cron)"
<Tyaku_>
I'm going to check on google; it's possible that the file needs specific permissions.
sakoman has joined #yocto
<Tyaku_>
I finally resolved all my problems: the cron file needs 600 permissions and we have to add an extra field for the user name "root" (only when the job is not added via crontab -e).
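For reference, a small shell sketch of the working /etc/cron.d layout Tyaku_ ended up with; the file name comes from the discussion, and the exact mode cronie accepts may vary by build:
```
# /etc/cron.d entries take an extra user field, unlike "crontab -e",
# and cronie rejects files with permissive modes ("BAD FILE MODE").
cat > /etc/cron.d/dailyreboot <<'EOF'
SHELL=/bin/sh
# min hour dom mon dow user command
48 13 * * * root /sbin/reboot -f
EOF
chown root:root /etc/cron.d/dailyreboot
chmod 600 /etc/cron.d/dailyreboot
```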
<ptsneves>
Tyaku_: congrats :) I had a similar issue with python refusing to use .netrc with permissions other than 600.
speeder_ has joined #yocto
warthog9 has quit [Remote host closed the connection]
mbulut has quit [Remote host closed the connection]
speeder has quit [Ping timeout: 245 seconds]
warthog9 has joined #yocto
Guest11 has joined #yocto
speeder__ has joined #yocto
mbulut has joined #yocto
speeder_ has quit [Ping timeout: 246 seconds]
mbulut has quit [Client Quit]
Guest11 has quit [Quit: Client closed]
<landgraf1>
hmmm. Do we have a meeting today?
Guest11 has joined #yocto
<Guest11>
I'm looking for a way to output a manifest of all dependencies in a particular package group with their PN, SRC_URI, and SRCREV. Anyone have ideas?
speeder_ has joined #yocto
speeder__ has quit [Ping timeout: 246 seconds]
Chaser has joined #yocto
speeder_ has quit [Ping timeout: 245 seconds]
atripathi has joined #yocto
<Tyaku_>
I have a dumb question: is it possible to set the (DEFAULT_TIMEZONE) variable in an image recipe? (It is used by tzdata.bb to specify the timezone, like Europe/Paris.)
atripathi has quit [Client Quit]
<Guest11>
might be able to in your local.conf in your build folder?
<Tyaku_>
Yes, in the local.conf I know that it will work
<sakoman>
landgraf1: the meeting moved a half hour later just for this week
<sakoman>
landgraf1: so it starts in 14 minutes
<dario`>
Tyaku_: afaik the image isn't that special, it can't set variables for other recipes
<dario`>
the whole way caching works is that packages are built independently from each other and images are just glorified collections of packages
amitk_ has joined #yocto
<dario`>
if you want to define things across recipes it has to be something that's passed in globally and interpreted by the right recipes; it can't go from one recipe to another
amitk has quit [Ping timeout: 245 seconds]
goliath has quit [Quit: SIGSEGV]
<Guest11>
dario` well, kinda. A lot of recipes post-evaluate `??=` or `?=`, so you can set stuff at the image level if the recipe behaves. It's really inconsistent which recipes do this, though, so you're better off just using local.conf or appending tzdata.bb, Tyaku_
<dario`>
but isn't the package already a binary/.ipk at the point where the image recipe comes into play?
<rburton>
yes, you can't set something in an image recipe and expect it to reach other recipes
<rburton>
??= is lazy evaluation, but it is still resolved once _that recipe_ has finished parsing
<rburton>
Tyaku_: you need to use local.conf, your distro.conf, or a bbappend to set DEFAULT_TIMEZONE
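For example, something along these lines in local.conf or the distro .conf (not in the image recipe) is picked up by tzdata.bb's weak default; the tzdata install line is only needed if the package isn't already pulled in:
```
# local.conf or distro .conf
DEFAULT_TIMEZONE = "Europe/Paris"
# ensure the zoneinfo data is actually in the image (if not already)
IMAGE_INSTALL:append = " tzdata"
```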
zpfvo has quit [Ping timeout: 240 seconds]
zpfvo has joined #yocto
zpfvo has quit [Ping timeout: 244 seconds]
zpfvo has joined #yocto
efeschiyan has quit [Remote host closed the connection]
efeschiyan has joined #yocto
Nixkernal has joined #yocto
florian has quit [Quit: Ex-Chat]
florian_kc has quit [Ping timeout: 252 seconds]
frieder has quit [Remote host closed the connection]
frieder has joined #yocto
frieder has quit [Remote host closed the connection]
frieder has joined #yocto
rfuentess has quit [Remote host closed the connection]
frieder has quit [Remote host closed the connection]
frieder has joined #yocto
frieder has quit [Remote host closed the connection]
frieder has joined #yocto
Chaser has quit [Quit: Chaser]
<adrianf>
nedko: in our case it was the xz compressor which led to OOMs. Maybe check the XZ_MEMLIMIT, XZ_THREADS, ZSTD_THREADS settings
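A local.conf sketch along the lines adrianf mentions, for a low-RAM build host; the numbers are arbitrary starting points, not recommendations:
```
# compressor memory/thread limits used by sstate and image tasks
XZ_MEMLIMIT = "25%"
XZ_THREADS = "2"
ZSTD_THREADS = "2"
# fewer parallel BitBake tasks and compile jobs also lowers peak RAM use
BB_NUMBER_THREADS = "2"
PARALLEL_MAKE = "-j 2"
```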
leon-anavi has quit [Quit: Leaving]
Guest89 has joined #yocto
Guest11 has quit [Ping timeout: 246 seconds]
Chaser has joined #yocto
<yolo_>
if I set the machine to qemuarm etc. I don't really need the meta-yocto-bsp layer, right?
<Tyaku_>
Does anyone know the proper Yocto way to call a script when the ethernet interface goes UP or DOWN?
zpfvo has quit [Quit: Leaving.]
<yolo_>
On this page (https://docs.yoctoproject.org/2.0/yocto-project-qs/yocto-project-qs.html), adding meta-intel is done under poky/, which makes poky/ unclean. Should meta-intel be one layer up, with a symlink to poky's oe-init script there instead, so I don't mess up anything within poky/?
mpb27 has quit [Remote host closed the connection]
mpb27 has joined #yocto
<Saur>
JPEW: I'm looking into what you said the other day about sharing the hashserv.db between builds that share a common sstate-cache. AFAICT, the only way to do that is by setting PERSISTENT_DIR (or CACHE) to a common directory for the builds. But does that really work? Can I really share the cache directory with multiple builds running in parallel and building with different releases of OE-Core?
florian_kc has joined #yocto
<JPEW>
Saur: hashserv.db can for sure.... Not sure about the rest (my guess is no)
<JPEW>
mmm, I see the problem though since it's hardcoded to PERSISTENT_DIR / hashserv.db
<Saur>
JPEW: And the format of hashserv.db never changes? (Hint: rhetorical question, since I know it is currently at version 1.1...)
<JPEW>
Saur: Ya, I guess if we change it and upgrade it would break
<Saur>
RP: Maybe that is more a question for you. What is the expectation for PERSISTENT_DIR? Is it supposed to be shareable between different concurrent builds? If so, even between different OE-Core releases?
mpb27 has quit [Ping timeout: 246 seconds]
sakoman has quit [Quit: Leaving.]
sakoman has joined #yocto
mpb27 has joined #yocto
<khem>
RP: the qemu upgrade also needs to be reflected in QEMUVERSION
<RP>
khem: I know, there are several issues with that and it was mainly to test ppc
<khem>
sure
<RP>
Saur: originally it was intended to be saved between builds rather than wiped like tmp. It is a bit confused now
<RP>
sadly qemu 8.1.0 does not help
<Saur>
RP: So setting CACHE = "${SSTATE_DIR}/cache" (where ${SSTATE_DIR} is a common directory used by all builds) and expecting it to work with multiple, concurrent builds might be a bit optimistic?
brrm has quit [Ping timeout: 245 seconds]
brrm has joined #yocto
<RP>
Saur: Yes, I wouldn't expect that to work at all
<RP>
Saur: this is an area where more thought is really needed, and probably rework, but it isn't straightforward
Chaser has quit [Quit: Chaser]
<Saur>
RP: Ok. It seemed too good to be true, so I am not surprised.
<RP>
Saur: there were various design ideas but it really never quite worked out
<Saur>
Ok. Is it ok if I add a variable to specify the path to the hashserv DB then (e.g., BB_HASHSERV_DB), so that I can move that to a common directory, since that is supposed to work according to JPEW?
<RP>
Saur: my big worry is someone is going to try this over an NFS sstate directory
mpb27 has quit [Ping timeout: 246 seconds]
<RP>
Saur: I'm not sure this will be as simple as you'd like it to be since these things get expanded really early in parsing and we have some juggling to make sure CACHE/PERSIST_DIR are defined early enough
<RP>
Saur: my point being that if you add the variable, I think people will use it in ways that won't work
<Saur>
Is it any different from setting the BB_HASHSERV variable to specify the address of the HE server? I assume they are used at the same time, so if it works to set that, it should work to set the other, no?
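For context, a hypothetical sketch of sharing hash equivalence without sharing PERSISTENT_DIR: point every build at one standalone server rather than the per-build "auto" instance that keeps hashserv.db under the cache directory. Host, port and paths are placeholders:
```
# local.conf of each build: use a shared hash equivalence server
BB_HASHSERVE = "hashserv.example.com:8686"

# the server itself can be started from a bitbake checkout, e.g.:
#   bitbake-hashserv --bind 0.0.0.0:8686
```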
<khem>
RP: the acl and xattr patches are still in master-next, I think they have issues on hosts with glibc 2.38
<khem>
I hope you are not planning to accept them in their current form
<RP>
khem: I'm not aware of glibc 2.38 issues?
<RP>
khem: I do know about the broken tar issue I posted on the list about
<khem>
I have already provided the needed info to Piotr
<RP>
khem: the hard part about these issues is many of them only appear on restoration from sstate
<RP>
so if you force a full build, it will work fine
<khem>
yes, I have described the symptoms I have narrowed it down to
<khem>
no,
<khem>
I have done a full scratch build
<khem>
it does not help
<khem>
regenerated the whole sstate
<khem>
I use an f2fs filesystem for the dirs where sstate is stored and ext4 for tmp/
<khem>
that might also be a variable of interest
<RP>
ext4 certainly shouldn't be the issue at least!
olani_ has quit [Ping timeout: 245 seconds]
olani has quit [Ping timeout: 256 seconds]
<RP>
khem: I pushed a tweaked qemu upgrade patch FWIW
<RP>
khem: if the qemuppc issue isn't from the 6.1 kernel point revisions and isn't from the qemu version, which recent change would you think might be causing it? Happens both for sysvinit and systemd
<RP>
khem: glibc?
<khem>
qemuppc hmm, I thought it was load related, no?
<RP>
khem: it is but it didn't used to break under load
<khem>
nothing pops out as ppc specific in glibc 2.38
<RP>
khem: anything timing related?
<RP>
khem: something that might break really slow qemu emulation?
* RP
will try a couple of combinations overnight and see...
<khem>
maybe disable --enable-fortify-source
<khem>
in glibc
<khem>
I think it could be a performance regression which slows it down more, so timeouts occur
<RP>
khem: that is entirely possible
<khem>
we have a lot of changes in this release, especially in the toolchain area
<RP>
khem: it is something recent, glibc would be around the right time
<khem>
yeah
<khem>
does qemu have enough RAM allocated?
<RP>
khem: 768MB and the errors aren't memory related
<RP>
khem: I'm trying three new tests, glibc 2.37, glibc 2.38 without fortify source and an older kernel
<khem>
and how many CPU cores does it emulate?
<RP>
khem: it's single core
<khem>
can we have more than 1?
<RP>
not sure the machine we use has more than 1
<khem>
try with -smp 2 or something
<RP>
khem: I'm sure we'd have done that if we could when we switched to smp
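For reference, the SMP setting runqemu passes to qemu is controlled by the qemuboot variable QB_SMP, so if the worker did have spare cores the change would be along the lines of:
```
# machine .conf or local.conf; only helps if the host (and the qemu
# machine definition) actually have more than one core to give
QB_SMP = "-smp 2"
```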
<RP>
JPEW: if you have any ideas on how to make those unpack intercepts less of a concern I'd be open to them. I worry at the moment the structure becomes too convoluted :(
michele has joined #yocto
<JPEW>
In the tracking?
<JPEW>
source tracing that is
Xagen has quit [Ping timeout: 245 seconds]
<RP>
JPEW: yes
<JPEW>
Ok, ya. There has to be a better way
zelgomer has quit [Ping timeout: 246 seconds]
zelgomer has joined #yocto
florian_kc has quit [Ping timeout: 252 seconds]
louson has joined #yocto
olani has joined #yocto
olani_ has joined #yocto
Xagen has joined #yocto
goliath has quit [Quit: SIGSEGV]
dgriego has quit [Ping timeout: 245 seconds]
dgriego has joined #yocto
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]