asriel has quit [Quit: Don't drink the water. They put something in it to make you forget.]
Danct12 has quit [Remote host closed the connection]
Danct12 has joined #yocto
asriel has joined #yocto
xmn has quit [Ping timeout: 260 seconds]
jmd has joined #yocto
jmd has quit [Remote host closed the connection]
amitk has joined #yocto
goliath has joined #yocto
Guest22 has joined #yocto
enok has joined #yocto
alessioigor has joined #yocto
enok has quit [Ping timeout: 255 seconds]
rob_w has joined #yocto
OnkelUll_ is now known as OnkelUlla
mckoan|away is now known as mckoan
leon-anavi has joined #yocto
frieder has joined #yocto
grma has joined #yocto
rfuentess has joined #yocto
zpfvo has joined #yocto
Jones42 has joined #yocto
frieder has quit [Ping timeout: 276 seconds]
frieder has joined #yocto
frieder has quit [Ping timeout: 276 seconds]
berton has joined #yocto
prabhakalad has joined #yocto
mvlad has joined #yocto
frieder has joined #yocto
altru has joined #yocto
mbulut has joined #yocto
rber|res has joined #yocto
ltp_newbie has joined #yocto
enok has joined #yocto
enok71 has joined #yocto
frieder has quit [Ping timeout: 260 seconds]
enok has quit [Ping timeout: 252 seconds]
enok71 is now known as enok
altru has quit [Ping timeout: 256 seconds]
<mbulut>
does anyone know about a recipe providing docker native for running the docker daemon at build time?
johndunet has joined #yocto
frieder has joined #yocto
frieder has quit [Client Quit]
<rburton>
mbulut: there have been people trying to beat enough assumptions out of podman that you can actually build and run it as a normal user, but we're not there yet. there's no way to _install_ docker as a non-root user to use at build time. if you desperately need docker during a recipe build, first reconsider and find an alternative, or just rely on the host having docker.
<rburton>
if you desperately need this, then fixing podman so a normal user can install it into their $HOME and have it work entirely inside the home directory shouldn't be _too_ difficult; you just need to fix the assumptions.
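For context, the "rely on the host having docker" route is roughly the following; a minimal sketch, assuming the build user is already in the host's docker group, with the recipe task body purely illustrative (and, per rburton's warning, non-reproducible):

    # local.conf: let the build find the host's docker client if present
    HOSTTOOLS_NONFATAL += "docker"

    # hypothetical recipe task calling the host daemon
    do_compile() {
        docker pull alpine:latest
        docker save alpine:latest -o ${B}/alpine.tar
    }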
guillaume has joined #yocto
johndunet has quit [Quit: Client closed]
guillaume is now known as johnjohn
johnjohn is now known as johnjohn28
johnjohn28 has quit [Changing host]
johnjohn28 has joined #yocto
frieder has joined #yocto
johnjohn28 has quit [Client Quit]
johnjohn28 has joined #yocto
<mbulut>
i think podman is out of the race as my end goal is to pre-populate /var/lib/docker for offline container usage in the field. having docker on the host (docker-in-docker in my case) was the method i was going after, but before starting with that i just wanted to hear if there's anything recent providing a docker-native...
<rburton>
you'll have more luck if you use podman instead of docker, i expect. it's the same but without the horrible license.
<rburton>
and meta-virt has OCI container image types, so you might be able to just drop the images in the right place. zeddii might have more input when he's awake.
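A hedged sketch of that meta-virt route: the layer's image-oci class can emit an image recipe's rootfs as an OCI layout. Variable names below are taken from that class and are worth double-checking against your branch; the tag and entrypoint values are illustrative:

    # in a container image recipe
    inherit image-oci
    IMAGE_FSTYPES = "oci"
    OCI_IMAGE_TAG = "latest"
    OCI_IMAGE_ENTRYPOINT = "/bin/sh"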
<mbulut>
i don't know too much about podman tbh and our current container deployment infrastructure is based on docker, so i'm reluctant to switch if there's a viable way to do this in a way that's compatible with docker at runtime
<mbulut>
i might take a look at meta-virt though
<rburton>
podman and docker have identical runtime interfaces
<mbulut>
so podman would look for/populate layers in /var/lib/docker?
<rburton>
no, but the commands you'd use are identical
<mbulut>
ok
<rburton>
use docker if you prefer, it's just more annoying
<mbulut>
too bad, would have been nice if i could use podman to populate /var/lib/docker at build time and use docker at runtime
<rburton>
if you're populating with podman just run with podman too. the podman docs explicitly say you can alias docker=podman.
<rburton>
or just use docker if you want to use docker :)
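The alias the podman docs describe is literally a one-liner, since podman mirrors docker's CLI verbs (pull, run, save, ...):

    # e.g. in ~/.bashrc
    alias docker=podman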
<mbulut>
:)
<mbulut>
i think i should read a bit about podman
<mbulut>
gotta figure out compatibility with container registries, docker-compose, etc...
sakoman has joined #yocto
simonl has joined #yocto
Guest22 has quit [Quit: Client closed]
enok has joined #yocto
<simonl>
Hi! Sorry if this is a dumb question - I want to file a bug in Bugzilla (for the 'pseudo' tool - I have a script to reproduce the issue and a patch to fix it), but I can't find anywhere that I can create a Bugzilla account. At e.g. https://wiki.yoctoproject.org/wiki/Bugzilla_Configuration_and_Bug_Tracking it says 'click on "New Account" in the footer area', but there is no such link. Does anyone know
<simonl>
how you actually should go about getting an account?
ltp_newbie has quit [Quit: Client closed]
<rburton>
yeah that needs fixing! halstead, how does one create a bugzilla account now?
frieder has quit [Ping timeout: 276 seconds]
Jones42_ has joined #yocto
rob_w has quit [Remote host closed the connection]
Jones42 has quit [Ping timeout: 248 seconds]
Jones42_ has quit [Read error: Connection reset by peer]
Jones42 has joined #yocto
frieder has joined #yocto
<Jones42>
If I change the path prefix of some package from /usr to /foo, the package itself builds nicely, but other packages that depend on it can't find the pkgconfig file anymore, since it's now in recipe-sysroot/foo/lib/pkgconfig, which isn't in PKG_CONFIG_PATH. Is there an elegant way to fix this?
<RP>
simonl: in the meantime you could email helpdesk@yoctoproject.org with an account request and mention there is no link on the bugzilla
rob_w has joined #yocto
<simonl>
RP: Ok, thanks!
<RP>
simonl: please do send the patch for pseudo to the list too. I'm curious what you found!
<rburton>
Jones42: don't change the prefix? why do you want to change the prefix?
<rburton>
it keeps on coming up and i've never understood why
florian has joined #yocto
<Jones42>
rburton: because I want to split out an application into its own partition (which i want to be able to individually replace/update with rauc)
<rburton>
you'll need to extend PKG_CONFIG_PATH in recipes which want to use that app
frieder has quit [Ping timeout: 260 seconds]
<CrazyGecko>
lol, i wanted to ask the same question as @simonl about account creation. @RP should I send an account request too, or should I wait for the website to be fixed? How long does this usually take?
<Jones42>
rburton: thanks, that's what I feared... there's a surprising number of hardcoded paths in bitbake.conf
<rburton>
well it's all variabled, so not really hardcoded
<rburton>
if you want to carve a system into two pieces then that's your choice. your distro could globally add to PKG_CONFIG_PATH and that might be all you need.
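A minimal sketch of that distro-level tweak, assuming the relocated app ships its .pc files under /foo/lib/pkgconfig (the path is illustrative):

    # in the distro .conf: also search the relocated prefix in each recipe sysroot
    PKG_CONFIG_PATH:append = ":${STAGING_DIR_HOST}/foo/lib/pkgconfig"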
<CrazyGecko>
@simonl found this regarding account creation: https://bugzilla.yoctoproject.org/createaccount.cgi It seems like it is intended, but the link should not be missing, so better write an e-mail there for the account
<Jones42>
rburton: that could work, will give it a try. thanks!
<Jones42>
rburton: I'm open to alternatives, however. using rauc and having the app in its own image seemed to be the easiest way
<zeddii>
mbulut: I'm about 90% of the way through cross container install at build time. I have it slightly hacked and working for liboci users (podman, etc) and have a few final issues to deal with on the docker front.
florian has quit [Ping timeout: 248 seconds]
<mbulut>
cool, i'd be interested in that work. for now i'm working on a solution for my specific use case on the basis of that savoirfairelinux poc layer. if i manage to get a viable recipe together, i might take a look at rootless docker (https://docs.docker.com/engine/security/rootless/), as the savoirfairelinux approach involves sudoing and stuff which i'd like to avoid...
<zeddii>
that particular document won't really help you. I've been through it about a hundred times; you just run into the gid/uid mapping that requires setup and permissions. I've been working on modifying the docker source code to not require it in an install configuration.
<mbulut>
you mean the rootless docker one?
<zeddii>
I also have a custom native docker registry (helps in some cases, not all), as well as various other tools I've written to manipulate the vfs store
frieder has joined #yocto
<zeddii>
the savoirfairelinux-type approach is ok if you are hacking something together, but I've already gone on the record several times as saying that it just won't fly in m-virt
<rburton>
this is why i suggest podman: it had support for running entirely in $HOME for longer so _should_ be easier to translate to a rootless build environment
<zeddii>
rburton: you don't even need podman for the cross container install part, I can already install meta-virt OCI images for podman that "appear" and are runnable on boot without needing podman on the host
<rburton>
yeah
<zeddii>
but docker, because they refuse to use the OCI-based VFS, is a harder nut to crack, and I won't put it into m-virt until both are working.
<rburton>
meh ignore docker
<zeddii>
don't make me haul up my presentation that says "won't crown a runtime king"
<zeddii>
plus, RH is annoying in their own way with podman.
<rburton>
their licensing fiasco made that choice for me ;)
<zeddii>
they won't play nicely with the rest of the CNCF tools, madly replacing bits with their own creations. very systemd-like.
<zeddii>
maybe I can just do a presentation @ vienna where someone just says things to wind me up and gives me the mic!
<zeddii>
"fireside rant" ?
<rburton>
yeah do it
<mbulut>
yeah, agreed, the savoirfairelinux thing is nothing that could ever be upstreamed as something generally useful because of its assumptions about the host tools, but it's currently still my best hope to get the job at hand done without stirring up too much dust in our infrastructure
<rburton>
that would be great
<mbulut>
zeddii, is there any gh repo or sth i could watch to see how your thing goes so i might come back to it some time?
<RP>
CrazyGecko: feel free to send a email about an account as well
<mbulut>
oh, i guess meta-virt is where i should be looking, right?
<zeddii>
mbulut: I maintain meta-virtualization; when ready, the stuff either goes there or into a WIP branch. I put it down about a month ago to do oe-core kernel stuff and package updates in m-virt, but I'm back to it again shortly. I expect to push at least the minimal infrastructure in August so it'll be ready for the fall release.
<mbulut>
cool, thx
<zeddii>
it must be Monday. half my infrastructure is busted, I'm going to go see if I can revive some machines.
<mbulut>
confirm: it's monday (at least in my tz)
florian has joined #yocto
<johnjohn28>
Hi all
rob_w has quit [Remote host closed the connection]
<johnjohn28>
I'm still struggling to add the VETH kernel module to my Yocto image, and I can't understand why it's not working.
<johnjohn28>
I enabled the kernel module via menuconfig (bitbake virtual/kernel -c menuconfig), then created a fragment with the modified config (bitbake -c diffconfig virtual/kernel which created the file builds/build-genericx86-64/tmp/work/genericx86_64-poky-linux/linux-yocto/5.15.72+gitAUTOINC+441f5fe000_0b628306d1-r0/fragment.cfg).
<johnjohn28>
I then created a bbappend using recipetool appendsrcfile -w ../../layers/meta-my-layer/ virtual/kernel path/to/fragment.cfg. I cleaned and recompiled the kernel with bitbake -c clean virtual/kernel, then bitbake virtual/kernel.
<johnjohn28>
After that, I built my image and installed it, but the module is still not installed:
<johnjohn28>
root@device:~# zcat /proc/config.gz | grep VETH  ->  # CONFIG_VETH is not set
<johnjohn28>
I'm desperate, I've been trying for hours to add this module by all means, and nothing works. Yet it's a relatively simple module without any dependencies. Do you have any idea what might be causing the problem?
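For reference, the canonical shape of the setup johnjohn28 describes is a config fragment plus a kernel bbappend; layer and file names here are illustrative:

    # meta-my-layer/recipes-kernel/linux/linux-yocto_%.bbappend
    FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
    SRC_URI += "file://veth.cfg"

    # meta-my-layer/recipes-kernel/linux/files/veth.cfg
    CONFIG_VETH=m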
<rburton>
johnjohn28: did you install the kernel module?
<rburton>
enabling a module does not install it into the image
<rburton>
also double-check the .config in the build tree to verify your setting actually stuck, kconfig is a fickle thing
<johnjohn28>
with kernel-module-veth on IMAGE_INSTALL ?
<rburton>
yes
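i.e. something like the following in the image recipe or local.conf; module packages follow the kernel-module-<name> convention:

    IMAGE_INSTALL:append = " kernel-module-veth"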
belsirk has joined #yocto
<rburton>
oh, you checked the on-target config, so you can blame your fragment for not being sufficient (or not being used at all)
florian has quit [Ping timeout: 248 seconds]
rfuentess has quit [Ping timeout: 252 seconds]
<johnjohn28>
No match for argument: kernel-module-veth
<rburton>
check the .config in the build tree to see if your assignment stuck (bitbake virtual/kernel -c showconfig)
<rburton>
if it didn't then double-check that your append is actually working, you can check the SRC_URI with bitbake-getvar -r virtual/kernel SRC_URI
<zeddii>
johnjohn28: assuming my build server revives shortly, I can help as well. but I'll need a few more to get an initial build started.
<johnjohn28>
The output of the command bitbake-getvar -r virtual/kernel SRC_URI does show my fragment.
jmiehe has quit [Quit: jmiehe]
<Jones42>
johnjohn28: does your .config in the build folder still say "m", while you have "y" in your fragment?
<rburton>
sounds like your target isn't running the same kernel...
<Jones42>
johnjohn28, how do you get the image on the target? can you give us some information on your config?
<johnjohn28>
oh you're right, I use RAUC to update my device, but it only updates the rootfs and a data partition.
<johnjohn28>
I'm really stupid.
<vvn>
should I add to vardepsexclude the direct variable used in the value or all variables recursively? i.e. FOO="${BAR}" BAZ="${FOO}", should I add BAZ[vardepsexclude] = "FOO" or BAZ[vardepsexclude] = "FOO BAR"?
<Jones42>
johnjohn28: we've all been there, glad you found the issue!
<rburton>
kanavin_: fyi "python3: drop deterministic_imports" breaks the build of python3-meson-python
<rburton>
which is fun
Jones42_ has joined #yocto
<rburton>
i wonder if empty directories are breaking things
<RP>
rburton: ah, I was nervous about dropping that :/
<RP>
rburton: there is an upstream bug saying it should all work so we need to tell them about it most likely
Jones42 has quit [Ping timeout: 248 seconds]
<rburton>
yeah commenting now
<shoragan>
johnjohn28, so you're not updating your kernel? i'd suggest configuring rauc to update it in lock-step with the rootfs. if you're not using secure boot, it's often simpler to just have it in the rootfs.
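A hedged sketch of the lock-step idea as a RAUC bundle manifest, assuming the device's system.conf defines a "boot" slot class alongside "rootfs"; compatible string, version, and filenames are hypothetical:

    # manifest.raucm
    [update]
    compatible=my-device
    version=2024.07

    [image.rootfs]
    filename=rootfs.ext4

    [image.boot]
    filename=boot.vfat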
Haxxa has quit [Ping timeout: 252 seconds]
<johnjohn28>
yeah it works
<johnjohn28>
I'm torn between the joy of finally succeeding and the shame of having spent so much time searching for such a simple mistake ^^
<johnjohn28>
Thank you all for your help.
Haxxa has joined #yocto
rob_w has joined #yocto
<johnjohn28>
shoragan, yeah, I only update the rootfs and some other partition but not the kernel.
<johnjohn28>
I do: losetup -Pf --show image.wic
<johnjohn28>
dd if=/dev/loop35p1 of=/tmp/boot.img
<johnjohn28>
mount -o loop /tmp/boot.img /tmp/boot
<johnjohn28>
then on the device: cp /tmp/bzImage /boot/bzImage
<johnjohn28>
and it works, the module is there
<johnjohn28>
after a reboot
goliath has quit [Quit: SIGSEGV]
Haxxa has joined #yocto
Haxxa has quit [Read error: Connection reset by peer]
Haxxa has joined #yocto
paulg has joined #yocto
CrazyGecko has quit [Ping timeout: 252 seconds]
frieder has quit [Remote host closed the connection]
xmn has joined #yocto
Guest12 has joined #yocto
enok has quit [Ping timeout: 260 seconds]
<Guest12>
Hello, I'm reporting a yocto bug here: `INHERIT += "cve-check"` is incompatible with do_populate_sdk_ext with core-image-sato (I suppose it is the case for all images) on scarthgap `7fb368604c5c7`: `ERROR: Task cve-update-nvd2-native.do_fetch attempted to execute unexpectedly`. thank you in advance. (PS: I didn't open a bugtracker ticket, I'll let
<Guest12>
you have the pleasure)
Guest12 has quit [Quit: Client closed]
leon-anavi has quit [Quit: Leaving]
jmiehe has joined #yocto
mjm has joined #yocto
jmiehe has quit [Quit: jmiehe]
geoffhp has joined #yocto
belsirk has quit [Remote host closed the connection]
mckoan is now known as mckoan|away
<halstead>
simonl: rburton: I'm sorry the account request instructions keep getting hidden. Can you email it-coreprojects-helpdesk@linuxfoundation.org and we'll get it made.
enok has joined #yocto
goliath has joined #yocto
enok has quit [Ping timeout: 245 seconds]
zpfvo has quit [Remote host closed the connection]
MattWeb__ has joined #yocto
jmd has joined #yocto
enok has joined #yocto
enok has quit [Ping timeout: 272 seconds]
enok has joined #yocto
florian has joined #yocto
florian has quit [Ping timeout: 248 seconds]
Haxxa has quit [Ping timeout: 252 seconds]
<mbulut>
in a recipe that does sudo (with sudo in HOSTTOOLS_NONFATAL) i get `sudo: /etc/sudo.conf is owned by uid 65534, should be 0` and `sudo: /workspaces/sandbox/build/01047/tmp/hosttools/sudo must be owned by uid 0 and have the setuid bit set`
<mbulut>
i can do the same sudo command from a devshell but inside the recipe it fails.... any idea what could be causing this?
Haxxa has joined #yocto
<mbulut>
also i checked who owns /etc/sudo.conf and it's owned by root (uid 0), so i really don't know why bitbake claims it's owned by 65534
<mbulut>
i found some hints but they're all related to ansible -- nothing bitbake related..
<mbulut>
very similar to my situation, and this post https://lists.yoctoproject.org/g/yocto/message/59594 suggests an issue "introduced somewhere between dunfell and kirkstone" ... if that's true i might very well be hitting the same problem, as i'm using kirkstone 4.0.2
enok has quit [Quit: enok]
alessioigor has quit [Remote host closed the connection]
<khem>
which recipe is doing sudo operation, that should be looked into
<rburton>
mbulut: guessing this is your docker experiments. _this_ is why it's a terrible idea to sudo docker inside a recipe
florian has joined #yocto
<rburton>
so I'm back to thinking that our splitting of the python modules is too granular to be actually useful
jmiehe has joined #yocto
jmiehe has quit [Client Quit]
<mbulut>
rburton, khem, yes it's the docker experiments. just running a build off master to check if it's down to fakeroot/pseudo and not a bug present in 4.0.2
<mbulut>
not claiming at all that doing sudo docker inside a recipe is a good thing but before exploring the implications of using podman instead, i wanted to see how far i get with this
<mbulut>
seems like at the time when that article was written (dunfell), it used to work so i wanted to give it a shot
enok has joined #yocto
florian has quit [Ping timeout: 248 seconds]
berton has quit [Quit: Connection closed for inactivity]
enok has quit [Quit: enok]
enok71 has joined #yocto
enok71 is now known as enok
enok has quit [Ping timeout: 252 seconds]
enok has joined #yocto
sotaoverride has quit [Ping timeout: 264 seconds]
enok has quit [Ping timeout: 252 seconds]
dankm has quit [Remote host closed the connection]
dankm has joined #yocto
mvlad has quit [Remote host closed the connection]
Haxxa has quit [Quit: Haxxa flies away.]
Haxxa has joined #yocto
Algotech has joined #yocto
Algotech has quit [Client Quit]
jmd has quit [Remote host closed the connection]
zwelch has quit [Remote host closed the connection]
zwelch has joined #yocto
<RP>
mbulut: it could be pseudo interacting really badly with sudo
<mbulut>
yeah, i'm not pursuing that further (though the master build is still running, just for the sake of being sure it's by design and not a bug)
<mbulut>
shifted myself towards rootless-docker instead -- let's see how that goes
<vvn>
what's your view on setting PACKAGECONFIG:pn-foo in the distro conf vs. setting PACKAGECONFIG:distro in bbappend?
<RP>
I'd have said distro conf file
alessioigor has joined #yocto
alessioigor has quit [Remote host closed the connection]
marka has quit [Ping timeout: 248 seconds]
marka has joined #yocto
jbo- is now known as jbo
goliath has quit [Quit: SIGSEGV]
TundraMan has joined #yocto
marka has quit [Ping timeout: 248 seconds]
rob_w has quit [Read error: Connection reset by peer]
BrianL has joined #yocto
<BrianL>
Is there any way to change the disk label used for mmcblk0p1, so that it is not boot-mmcblk0p1, via config or otherwise, when using wic to package the image?
amitk_ has joined #yocto
amitk has quit [Ping timeout: 260 seconds]
<mbulut>
zeddii, u around?
Guest8 has joined #yocto
<mbulut>
BrianL, have you tried --use-label in the .wks?
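i.e. a hedged .wks line, where --use-label makes the partition be referenced by the filesystem label given with --label (layout and label here are illustrative; check the wic kickstart reference for exact semantics):

    part /boot --source bootimg-partition --ondisk mmcblk0 --fstype=vfat --label boot --active --align 1024 --use-label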
<mbulut>
stuck on the rootless docker as well... :/
<mbulut>
so my build env is a docker container itself, that's for the record
<mbulut>
i added the bits suggested in https://docs.docker.com/engine/security/rootless/ to my dev container and can run the daemon using dockerd-rootless.sh from a normal shell and also from the bitbake shell
<mbulut>
but freaking not from within a build task
<mbulut>
newuidmap fails while setting up the UID/GID map:
<mbulut>
[rootlesskit:parent] error: failed to setup UID/GID map: newuidmap 36572 [0 1000 1 1 100000 65536] failed: newuidmap: write to uid_map failed: Operation not permitted
rjones2 has joined #yocto
<mbulut>
this isn't down to the headroom for nested uids, i tried increasing it in /etc/subuid and /etc/subgid
<mbulut>
i admit this might be too specific a problem to raise here, but since it only fails when running from a build task, i'm kind of lacking knowledge of what happens behind the scenes that might cause the EPERM
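For the record, the usual first checks for this class of newuidmap failure, since newuidmap must be setuid root (or carry the cap_setuid file capability) to write another process's uid_map; the user name and ranges below are illustrative:

    ls -l "$(command -v newuidmap)"       # expect the setuid bit (-rwsr-xr-x root)
    getcap "$(command -v newuidmap)"      # or a cap_setuid file capability
    grep "$USER" /etc/subuid /etc/subgid  # e.g. builder:100000:65536

When the build itself runs inside a docker container, the outer container's configuration (a nosuid mount over those tools, dropped capabilities, or the default seccomp profile restricting user namespaces) can surface as exactly this kind of EPERM even when the binaries and ranges look right.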