GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
dushara has joined #yocto
<dushara>
Hi, I noticed that the svn fetcher in bitbake doesn't handle URLs with spaces properly. I started tweaking the fetcher with shlex.quote() around the URL, but that seems to affect the common unpack code as well. Any suggestion on the best way to deal with this?
<geoffhp>
coldspark29[m]: no need to delete your layer to get it refreshed in kas. Use the --update flag in your kas command, e.g. `kas build --update myconfig.yml` or `kas checkout --update myconfig.yml`
<ad__>
i have this group on the host pc, sounds strange
<ad__>
oh, found, sry. i am in a container without that group
<ad__>
mm still getting that error even after creating the group
goliath has joined #yocto
<ad__>
ok, looks like something related to my systemd bbappend
dushara has quit [Quit: Never put off till tomorrow, what you can do the day after tomorrow]
kroon has joined #yocto
GNUmoon has quit [Ping timeout: 276 seconds]
rob_w has joined #yocto
huseyinkozan has joined #yocto
mvlad has joined #yocto
leon-anavi has joined #yocto
GNUmoon has joined #yocto
frieder has joined #yocto
frieder has quit [Client Quit]
frieder has joined #yocto
alessioigor has joined #yocto
alessioigor has quit [Client Quit]
mckoan|away is now known as mckoan
zpfvo has joined #yocto
<coldspark29[m]>
<geoffhp> "coldspark29: no need to delete..." <- Great, thanks for the tip. I was skim-reading the docs, but couldn't find that,
rob_w has quit [Remote host closed the connection]
zpfvo has quit [Ping timeout: 256 seconds]
zpfvo has joined #yocto
zpfvo has quit [Ping timeout: 256 seconds]
zpfvo has joined #yocto
gsalazar has quit [Quit: Leaving]
gsalazar has joined #yocto
Guest21 has joined #yocto
Schlumpf has joined #yocto
<Guest21>
Hi all, I would like to create a package (.zip) with a yocto image inside and several binaries (from a github repo). As several images need this feature I have planned to create a .bbclass, but the class has to fetch sources, and only a recipe can fetch from git, can't it? Feel free to ask
dev1990 has joined #yocto
ederibaucourt has joined #yocto
Guest2140 has joined #yocto
<qschulz>
Guest21: what I can say already is that there's a zip image type which makes a .zip of your image
<qschulz>
cf. IMAGE_TYPES
ederibaucourt has quit [Client Quit]
<qschulz>
Guest21: those binaries, why are they not part of the "yocto image"?
Guest21 has quit [Ping timeout: 256 seconds]
lucaceresoli has joined #yocto
<Guest2140>
qschulz binaries are built outside of yocto. Our main board uses this package to flash several devices (one of them uses the yocto image), so the binaries are not needed inside the "yocto image"
<qschulz>
why I am asking is that you could have a separate partition in a wic "image" for those binaries specifically and then have almost nothing to create in Yocto except a .wks file
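A minimal .wks sketch of that idea (the partition layout and the binaries image name are hypothetical, not from this discussion):

```
# hypothetical layout: a normal rootfs partition plus a raw partition
# that just carries the externally-built binaries image
part / --source rootfs --fstype=ext4 --label root
part --source rawcopy --sourceparams="file=external-binaries.img"
```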
<Guest2140>
qschulz Thank you for sharing, I will take a look at the wic image
osama has joined #yocto
zpfvo has quit [Ping timeout: 268 seconds]
zpfvo has joined #yocto
an_sensr[m] has quit [Quit: You have been kicked for being idle]
jmiehe has joined #yocto
osama has quit [Ping timeout: 256 seconds]
zpfvo has quit [Ping timeout: 240 seconds]
zpfvo has joined #yocto
jmiehe has quit [Quit: jmiehe]
Guest2140 has quit [Quit: Client closed]
zpfvo has quit [Ping timeout: 268 seconds]
zpfvo has joined #yocto
Etheryon70 has joined #yocto
<Etheryon70>
Hello
zpfvo has quit [Ping timeout: 240 seconds]
zpfvo has joined #yocto
<coldspark29[m]>
Morning, this imx kernel patching is really complicated. There is the linux-fslc-imx kernel that requires the linux-fslc.inc that requires the linux-imx.inc.
<coldspark29[m]>
What is the best way to apply our patches to this kernel? Should I just create a patch folder or fork the whole thing and apply our patches? I am looking to have as little work as possible for future releases.
zpfvo has quit [Ping timeout: 240 seconds]
zpfvo has joined #yocto
<qschulz>
coldspark29[m]: is this recipe inheriting kernel-yocto?
<qschulz>
(might be indirectly through other inherited classes or .inc files
<coldspark29[m]>
Yes, it is
<qschulz>
I think you're supposed to use scc/cfg "config" files in that case to patch things up
<coldspark29[m]>
All I need to do is add some drivers and device tree files
<qschulz>
but in short, have a bbappend for your recipe with FILESEXTRAPATHS:prepend := ...
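As a rough sketch (recipe name guessed from the discussion, file names invented), such a bbappend could look like:

```
# linux-fslc-imx_%.bbappend (sketch)
FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"

SRC_URI += " \
    file://0001-add-our-board-device-tree.patch \
    file://enable-our-driver.cfg \
"
```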
<coldspark29[m]>
Yeah I thought so. Like this I will not have to fork and patch the kernel repository
<qschulz>
coldspark29[m]: usually you fork off and push to your own git repo and use that one instead, you'll quickly have too many patches for hosting stuff in Yocto
<qschulz>
also.. Yocto is a build system, not technically where source code resides
<coldspark29[m]>
Yeah, but I thought I could manage with some kconfig and .dts files. Yocto should be able to do that, shouldn't it?
<qschulz>
"I could manage with some kconfig" sorry didn't understand
<qschulz>
the point is, your kernel sources shouldn't be associated with which build system you're using
<qschulz>
otherwise if you want to switch to Buildroot for example, you'll have to duplicate the patches
<qschulz>
also, makes it much harder to develop the kernel outside of Yocto
<qschulz>
just saying that forking off the kernel is what most people do. you use scc/cfg file to provide for customization options (e.g. does the user want Ethernet, hardened kernel, PCIe, etc...)
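For the scc/cfg side, a hypothetical feature description (option name and file names are invented) might look roughly like:

```
# our-feature.scc (sketch)
define KFEATURE_DESCRIPTION "Enable our custom driver"
kconf non-hardware our-feature.cfg
patch 0001-add-our-driver.patch

# our-feature.cfg (sketch)
CONFIG_OUR_DRIVER=y
```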
<qschulz>
so technically, what I did in our layer isn't (IMO) correct
prabhakarlad has joined #yocto
camus1 has joined #yocto
camus has quit [Ping timeout: 256 seconds]
camus1 is now known as camus
<dacav>
Hello. I would need some help for remote debugging.
<dacav>
I run `gdbserver 0.0.0.0:$port $command $args`
<dacav>
Then on my host I run gdb, use `target remote $host:$port`
<dacav>
At this point I can effectively run the command by `continue`, but I don't see symbols
<dacav>
I've installed the debug symbols package
<dacav>
Am I missing some step maybe?
<kroon>
dacav, debug symbols need to be available on the host, not the target
<dacav>
I see. Calling `symbol-file` tells me they're in `target:/usr/bin/.debug/$command`
<kroon>
dacav, you can use "set sysroot" gdb command to point to where the target sysroot is on the host
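On the host side that might look like the following gdb session (the sysroot path and target address are placeholders):

```
(gdb) set sysroot /opt/poky-sdk/sysroots/cortexa53-poky-linux
(gdb) target remote 192.168.7.2:2345
(gdb) continue
```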
<dacav>
Thanks kroon. I'm not sure of where that will be, but I guess I can find it below the yocto tmp/
<kroon>
dacav, you can build the SDK, install it somewhere, it will contain the target sysroot with debug info
<dacav>
Too bad, my sdk is broken
<dacav>
(any advice is well accepted on that topic -- see yocto mailing list, and I mentioned on this channel too, it is somewhere in the backlog)
<dacav>
kroon: I'm only left with a docker image that was built before the SDK ended up in broken state
<dacav>
Question 1: pretending my SDK is not broken, how can it contain debug info that belongs to a binary I've built?
tnovotny has joined #yocto
<kroon>
dacav, oh, you built the binary outside yocto ?
<dacav>
kroon: I ended up packaging my binary with a recipe, so I could leverage bitbake
<dacav>
(it kinda sucks to learn yocto from an established/ongoing/broken project)
florian has joined #yocto
<kroon>
dacav, ah ok. i'm sure you can point gdb to the debug info on the host somehow. find the -dbg package and unpack it somewhere
<kayterina[m]>
Hello. I got a pseudo abort with an inode mismatch in do_install while bitbaking a recipe; the link says to start with a clean TMPDIR. Will a bitbake -c cleanall <recipe> suffice?
<dacav>
kroon: So, something like unpacking the -dbg in a certain directory in the docker container that has the SDK, and then using `set sysroot` there?
<kroon>
'sysroot', 'solib-search-path'
<kroon>
maybe there are other ways to instruct gdb where to look for debug info
<dacav>
Thanks
Etheryon70 has quit [Ping timeout: 256 seconds]
<kroon>
dacav, can you post a link to the archived email detailing the sdk problem ?
tnovotny_ has joined #yocto
<qschulz>
kayterina[m]: don't do that, just remove everything EXCEPT downloads and sstate-cache in your build directory
Schlumpf has quit [Ping timeout: 256 seconds]
<qschulz>
cleanall removes the fetched sources and sstate-cache, almost making it a build from scratch
<qschulz>
kayterina[m]: if you're building within a container, I personally had to add --tmpfs /tmp to the podman/docker run command to make those pseudo aborts go away
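That translates to a container invocation along these lines (image name and volume mount are placeholders):

```
docker run --rm -it --tmpfs /tmp -v "$PWD:/work" my-yocto-builder:latest
```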
tnovotny has quit [Ping timeout: 256 seconds]
<coldspark29[m]>
<qschulz> ""I could manage with some..." <- I was just thinking that all I have to do is add the drivers source files, add our kconfig and device tree to the kernel
zpfvo has quit [Ping timeout: 256 seconds]
<qschulz>
coldspark29[m]: and defconfig
zpfvo has joined #yocto
<coldspark29[m]>
So an .scc file seems like a patch list to me. Is that correct? My colleague just said the same thing as you, i.e. that a patched repository makes you independent of the build system.
<coldspark29[m]>
qschulz: I probably meant that. Are they not the same?
<qschulz>
kconfig is the way of listing options, which is then parsed by tools such as menuconfig/xconfig/whatever to allow you to select options and create a defconfig
<qschulz>
ultimately, you need both. One (kconfig file/option) to declare a new option, and a defconfig (or modification of a defconfig) to select this option so it's built
<coldspark29[m]>
Ah so kconfig is the drivers sources available I guess
<qschulz>
no, kconfig is a config file listing options. kernel drivers are C source code files that get compiled if the kconfig option is enabled in your defconfig (technically .config, since a defconfig is just a stripped-down version of .config)
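To make the distinction concrete, a hypothetical driver involves three pieces (the option name, file names and subdirectory are invented):

```
# drivers/misc/Kconfig - declares the option
config OUR_DRIVER
	tristate "Support for our custom device"

# drivers/misc/Makefile - compiles the source only if the option is enabled
obj-$(CONFIG_OUR_DRIVER) += our_driver.o

# fragment merged into the defconfig/.config - actually selects the option
CONFIG_OUR_DRIVER=y
```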
<dacav>
kroon: I didn't discuss the broken sdk, actually
zpfvo has quit [Ping timeout: 268 seconds]
<kayterina[m]>
with --tmpfs=/tmp in docker, what do you send to linux /tmp? Some temporary files of yocto?
<kroon>
dacav, ok. I'd suggest getting the sdk fixed. About the devshell thing: you only get build dependencies in PATH; the output binaries of the actual recipe you're devshell:ing will not be in your PATH
<qschulz>
kayterina[m]: it makes /tmp inside the container a tmpfs, that's all I can say
<kroon>
dacav, so if you devshell a recipe that DEPENDS on your cross compiler, then you'd see it in PATH
<RP>
michaelo: are there other bitbake patches I'm missing?
<kayterina[m]>
and do I also have to delete the workspace directory? The error comes from building a recipe with changes to its code
<dacav>
kroon: I see. ...would it work to add the cross-compiler as DEPENDS of my tool?
<kroon>
dacav, i think so
mvlad has quit [Remote host closed the connection]
<dacav>
kroon: or I could spend time fixing the SDK, but ...I'm really not sure which direction to take -- I'm a noob, and I find huge problems everywhere: I might spend months (one is gone already) before I get some result :-/
<kroon>
dacav, then devshell your tool, and you should have the cross compiler in PATH
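The workflow kroon describes boils down to something like this (my-tool is a placeholder recipe name; the gdb-cross package suffix depends on the target architecture):

```
# in my-tool.bb or a bbappend, pull in the cross gdb as a build dependency:
#   DEPENDS:append = " gdb-cross-aarch64"
# then open a shell with the recipe's build environment set up:
bitbake -c devshell my-tool
```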
<dacav>
kroon: thanks. That would be *MUCH* better than a docker container
<dacav>
docker is crap, I have to copy data or use volumes, then all permissions become root... pff...
<kroon>
dacav, it's worth a shot describing the problem with the SDK on the oe-core ml, people will help out if they can
<dacav>
kroon: I can put that on my todo list, sure. The problem comes from a patch that can't be applied, and the culprit recipe is in xilinx's layers
<dacav>
I know there's a xilinx mailing list.
<dacav>
should I use oe-core or xilinx ml?
zpfvo has joined #yocto
jmiehe has joined #yocto
jmiehe has quit [Client Quit]
<dacav>
- I'll be using oe-core as you say. It is now in my long pile of things to do later :(
xmn has quit [Ping timeout: 240 seconds]
rob_w has joined #yocto
jatedev has quit [Quit: Client closed]
Schlumpf has joined #yocto
<dacav>
kroon: thanks for the suggestions above. Adding a DEPENDS on gdb-cross-aarch did the trick, and as a bonus I correctly get the debug symbols for free.
<kroon>
dacav, 👍
Schlumpf has quit [Quit: Client closed]
Schlumpf has joined #yocto
<paulbarker>
I'm trying to speed up yocto-check-layer runs in CI, is it safe to keep `${TMPDIR}/cache` and throw away the rest of `${TMPDIR}`?
<paulbarker>
Or does that cache depend on other info in `${TMPDIR}`?
<Skinny79>
However, when I add the 'podman' recipe (from meta-virtualization) to my image, bitbake starts complaining about not being able to download ostree:
<Skinny79>
ERROR: ostree-2021.3-r0 do_fetch: Fetcher failure for URL: 'gitsm://gitlab.gnome.org/GNOME/libglnx.git;protocol=https;name=libglnx;subpath=libglnx;bareclone=1;nobranch=1'. Unable to fetch URL from any source.
<Skinny79>
ERROR: ostree-2021.3-r0 do_fetch: Fetcher failure for URL: 'gitsm://github.com/ostreedev/ostree;branch=main;protocol=https'. Unable to fetch URL from any source.
<Skinny79>
When I go to the github page it all seems ok, and the revision id in the .bb file is actually a valid commit. Where do I start troubleshooting this ? I'm on the honister branch
<rburton>
the actual error is likely above what you pasted
<rburton>
can you pastebin the full log?
<Skinny79>
you mean the console output or the complete task log ?
<Skinny79>
Freshly installed host machine (windows) with WSL2 ;-)
<Skinny79>
Never thought the issue is in that area
<paulbarker>
Skinny79: What's the distro there? I had similar issues with an Ubuntu-based container which didn't have ca-certificates installed, you may want to try installing that if it's Ubuntu
<Skinny79>
It has ca-certificates but now that I look at it, it's a version from 2019 :shrug:
<Skinny79>
updating..
<paulbarker>
That far out-of-date would definitely cause issues!
prabhakarlad has quit [Quit: Client closed]
<Skinny79>
It's Ubuntu 20.04 LTS (but apparently not updated with recent updates)
xmn has joined #yocto
<Skinny79>
Am I right when I say that "DISTRO_FEATURES" are built but don't end up on the rootfs until you actually "INSTALL_" them?
<rburton>
DISTRO_FEATURES are not things that get built exactly
<rburton>
like there isn't a 'wifi' recipe
<rburton>
but yes, a thing being in distro features doesn't mean it is *definitely* in the image
vmeson has quit [Quit: Konversation terminated!]
<rburton>
concrete example would be useful
<smurray>
and what image you use as a base makes a difference, there's a bunch more of conditional inclusion based on DISTRO_FEATURES in some of the non-minimal ones
<Skinny79>
Example would be that I need the 'salt' (stack) minion on my system. There is a recipe 'salt' in the meta-cloud-services/meta-openstack layer which actually builds all the packages for salt, whereas I only need one specific binary in my final image. And also I have the feeling (based on the tasks performed) that a lot of stuff in the meta-openstack
<Skinny79>
layer is also built, but I don't need that at all. I'd really like to stick with the recipe already available instead of copy/pasting it into my own layer
Tokamak has joined #yocto
<Skinny79>
BTW.. updating ca-certificates worked, which ends an afternoon of looking for something in the wrong place, THANKS!
<rburton>
a root certificate expired, so everyone with stale certs suddenly got weird errors
<Skinny79>
So it had nothing to do with ostree, and googling for issues with that wasn't very helpful in this case ;-)
<rburton>
the fetcher can be a pain as there's so many layers, best to look at the underlying log straight away
Wouter0100 has quit [Read error: Connection reset by peer]
Wouter0100 has joined #yocto
<Skinny79>
Another lesson learned
amitk has quit [Ping timeout: 240 seconds]
rob_w has quit [Remote host closed the connection]
<Skinny79>
Still trying to learn the concepts and best practices here: in my own layer I have a 'salt_3001.1.bbappend' file. What is the "correct" location to include/require the base 'salt' recipe?
<qschulz>
Skinny79: the bbappend (if found) will be applied to the original recipe matching the name and the version (if found)
<qschulz>
if that was your question
<qschulz>
there's no need to include/require the original recipe in a bbappend
<rfs613>
Skinny79: a .bbappend file does not need to include/require the base recipe. It will instead be found by matching the filename (eg. salt_3001.1.bb)
<qschulz>
you just need to make sure your layer.conf has bbappends in the regexp in BBFILES
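The standard layer.conf boilerplate looks roughly like this; note the `recipes-*/*/` pattern, which is also why recipes and bbappends need that extra parent directory to be found:

```
# conf/layer.conf (sketch)
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
            ${LAYERDIR}/recipes-*/*/*.bbappend"
```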
<Skinny79>
ok, but I still need to add 'salt' to the IMAGE_INSTALL somewhere
<qschulz>
Skinny79: not necessarily
<qschulz>
if something RDEPENDS on salt and is part of IMAGE_INSTALL, then it's not needed
<Skinny79>
okkkaay..... so..
<qschulz>
if nothing pulls the dependency for you, then you need to add your package to the packages to be installed by the image recipe
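In the simplest case that is a one-liner in the image recipe (or local.conf for quick tests); 'salt' is the package discussed here, though the actual package split may differ (e.g. a salt-minion subpackage):

```
IMAGE_INSTALL:append = " salt"
```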
<Skinny79>
I have a 'meta-custom' layer with recipes-core/mycore.bb which RDEPENDS on 'salt' and has the .bbappend file
<Skinny79>
then I only need 'mycore' in my local.conf to add everything
<Skinny79>
What I am just not seeing is how just including my custom layer in 'bblayers.conf' will actually DO something? I mean, otherwise just including a meta-xxx layer would add all the software in there? I figured that I need something in my local.conf but apparently I don't?
<qschulz>
Skinny79: you need an additional parent directory
<qschulz>
otherwise it does not match the regexp in BBFILES and your bbappends aren't found
<qschulz>
you could see this by running bitbake-layers show-appends I think
<Skinny79>
brilliant.. the beauty of sense is that when it makes sense to you it's beautiful
<qschulz>
Skinny79: if I may, I suggest the Youtube tutorial from LetoThe2nd.. it will help you get started and build good foundations
<Skinny79>
qschulz and the custom image recipe you were referring to is to replace my current core-image-minimal.bbappend with a core-sentinel-image.bb (or whatever) and use that as the argument for bitbake ?
<Skinny79>
Nice! Thanks.. Have watched a lot already but suggestions are very welcome
<Skinny79>
Thank you so much!
<qschulz>
enjoy :)
<Skinny79>
Although my wife wouldn't agree because here's another night spent on hobbies ;-)
<vd>
hi there -- qschulz you were right. After a few tests I went back to a single distro with multiple images. It makes me write a lot of recipes, but it's way faster. Multiple distros (managed manually) or multiconfigs (from the same bitbake call) considerably extend the parsing and build time, kind of a deal breaker.
<kergoth>
Ah, hadn't considered parse time! Makes sense
<vd>
kergoth: I just moved 3 customizations into 3 multiconfigs, and it was literally 3 times longer, with a lot of messages about deferring tasks as well.
jatedev has joined #yocto
<kergoth>
I wish those messages were moved to only with verbose, they mean nothing to most users anyway
<RP>
kergoth: we possibly could do that now. The problem was where multiconfig builds went mental and we couldn't debug it, we're trying to avoid that and be able to debug failures
<fray>
I found in gatesgarth that merging multiconfigs was about the same amount of time as doing them manually -- for a large "fresh" build. But doing them for a small build often was longer due to parsing overhead.. (small build was one or two recipes).
<fray>
With Honister though, I'm finding that generally multiconfig builds are now _faster_ (which is what I would have expected with better re-use) for larger builds
Schlumpf has quit [Quit: Client closed]
<fray>
I should add, one place we found a lot of slowdown though: when you get multiple copies of gcc (for different configs) building at the same time, it can overwhelm the system. We ended up putting in some additional limits to try to avoid that and it gave us really good results. (Our builders run in some IT-managed magic container system.. so I don't have a lot of details on WHY it was failing, only that it was)
<fray>
Just looked, we had to set: do_compile[number_threads] = "4" as well as PARALLEL_MAKE = '-j 4' (for gcc, binutils and gdb)...
lucaceresoli has quit [Quit: Leaving]
<fray>
the machine is a 48 thread machine, but 48/48 caused the system to kill threads
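Those limits could be applied from local.conf with per-recipe overrides, for example (the -cross recipe names assume an aarch64 target):

```
# local.conf (sketch): throttle the heaviest recipes so several copies
# building at once don't overwhelm the builder
PARALLEL_MAKE:pn-gcc-cross-aarch64 = "-j 4"
PARALLEL_MAKE:pn-binutils-cross-aarch64 = "-j 4"
PARALLEL_MAKE:pn-gdb-cross-aarch64 = "-j 4"
# do_compile[number_threads] = "4" would go in a bbappend for those recipes
```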
<Konsgn>
just built a system, but now I get crashes when trying to power off. How would I debug that? Stack traces on the debug port show things like: WARNING: halt/1612 still has locks held!
prabhakarlad has joined #yocto
<Konsgn>
earliest trace is : [<c01557d4>] (__do_sys_reboot) from [<c0100060>] (ret_fast_syscall+0x0/0x28)
<halstead>
qschulz: Guest96: I've updated the irc logging and 2022 logs are up now.
alessioigor has joined #yocto
alessioigor has quit [Client Quit]
<qschulz>
halstead: thanks :)
<vd>
given that the machine is parsed before the distro, is it OK to have FOO:append:mydistro = " bar" in a machine configuration?
<rburton>
yes, appends happen after parse
<vd>
rburton: but FOO:mydistro = "bar" would be a problem?
<kergoth>
It will *work*.. whether it's ideal is a different question. Ideally the boundaries should be kept between those axes of the build, but there are times when you need to configure for a particular distro/machine combination, or are dealing with external layers you don't want to fork
<kergoth>
I've seriously considered having my distro include conf/distro-machine/${DISTRO}-${MACHINE}.conf or something before, though never ended up doing it
<kergoth>
actually ended up using meta-mentor's custom setup scripts which append local.conf.append and local.conf.append.${MACHINE} fragments to local.conf as a way to modify machine behavior from the distro in a way that was user-visible instead..
<rburton>
vd: more accurately, overrides happen after parse
goliath has quit [Quit: SIGSEGV]
<qschulz>
vd: the second one (FOO:mydistro) will completely override FOO if mydistro is used
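A small illustration of the difference, assuming DISTRO = "mydistro" (values are arbitrary):

```
FOO = "a b"
FOO:mydistro = "c"           # FOO resolves to "c"   (complete override)
FOO:append:mydistro = " c"   # FOO resolves to "a b c"
```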
<vd>
qschulz: I know, my question was regarding the parsing order (since the distro is parsed after the machine config)
<qschulz>
vd: I don't think it matters unless you are requiring direct expansion (:= or d.getVar)
<qschulz>
I mean, if you use variables from other places than the current file, for which the parsing order will matter
<qschulz>
otherwise, I'd say distro or machine first does not matter
<qschulz>
overrides are anyway all saved and at the end of parsing the one that applies is taken
<vd>
So the parsing is two passes?
<qschulz>
One pass and then everything is resolved? I don't know honestly about the internals
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
<vd>
There are grey areas like this one: let's say your machine has a push button and you want the bootloader to behave differently for production and development when pressing that button. Is it a machine or a distro configuration? :-)
<qschulz>
if you're going to have more changes, in the end, you'll most likely need a distro anyway
<qschulz>
but if that were to be the only change, I'd have two machines to maximise the sstate-cache reuse
<qschulz>
or even better, two bootloader recipes
<vd>
qschulz: so having many machines requiring a base one isn't a problem? I was worried about duplicating kernel artifacts, MACHINE_ARCH packages, etc.
<vd>
qschulz: and then you choose your preferred virtual/bootloader provider from the machine or distro conf ideally?
<qschulz>
in case of two bootloader recipes and only one machine conf, you just build both and have them deployed, and create two wic files, one with the production bootloader, the other with the debug bootloader?
<qschulz>
vd: you could even just make a U-Boot env recipe only and set environment variables based on production/debug and have the same uboot binary
<qschulz>
vd: the issue with distros is that nothing is re-used
<qschulz>
between distros ofc
zpfvo has quit [Remote host closed the connection]
<vd>
qschulz: that's a good insight thank you
alejandrohs has joined #yocto
<vd>
qschulz: having a single machine and a single distro is definitely faster... I forgot I could change the IMAGE_BOOT_FILES and WKS_FILE per image recipes. A dev image recipe could do the trick.
<qschulz>
vd: Ah I forgot we only support building one wks file at a time
<qschulz>
if we were to support multi wks file in one image recipe then both could be built at once
frieder has quit [Remote host closed the connection]
<kergoth>
vd, qschulz: overrides used to be applied at a specific point in time at the end of parsing, but now they are applied when variable expansions are applied, which is done when the variable is *used* (i.e. when d.getVar() is run, either programmatically or when tasks are emitted to be run)
mckoan is now known as mckoan|away
<vd>
it makes sense now that software like systemd splits its config into a systemd-conf package. You can select a different conf based on the machine or distro.
<vd>
kergoth: that makes a lot of sense.
<xperia64>
I'm having trouble with do_fetch[mcdepends]; when the dependency is rebuilt/dirty, the do_fetch[mcdepends] is still cached and marked as covered
<vd>
Wouldn't it be better to tweak the hostname file in an image class rather than tweaking hostname:pn-base-files globally?
<rburton>
vd: how would that work? :)
<rburton>
unless you mean write a rootfs postprocess thing
<rburton>
in which case sure, do that if you want
<vd>
rburton: yes, a post-process function overriding ${IMAGE_ROOTFS}/etc/hostname if ${HOSTNAME} is set. Wouldn't that be less intrusive? Similar to extrausers.
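A sketch of such a hook in an image recipe or class (the HOSTNAME variable and the function name are assumptions from this discussion, not an existing API):

```
ROOTFS_POSTPROCESS_COMMAND += "set_custom_hostname; "

set_custom_hostname() {
    if [ -n "${HOSTNAME}" ]; then
        echo "${HOSTNAME}" > ${IMAGE_ROOTFS}/etc/hostname
    fi
}
```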
<rburton>
vd: personally all my hostnames are bob, makes ssh easier :)
Skinny79 has quit [Quit: Connection closed]
Skinny7987 has joined #yocto
<vd>
rburton: is there a cost to tweak files from post process commands rather than relying on installed packages?
<rburton>
vd: package feeds break any changes
<Skinny7987>
Two files, but the custom binary in /opt/newblack/bin/sentinel-config results in an `[installed-vs-shipped]` error
<Skinny7987>
the systemd unit works fine
<Skinny7987>
What could be the reason ?
<vd>
rburton: what do you mean?
<rburton>
vd: if you fix the rootfs in a post-process and then use a package feed, the feeds don't have the changes so an upgrade will revert to the non-tweaked file
<rburton>
Skinny7987: /opt isn't in FILES_${PN} by default, you'll need to add it yourself
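With honister's override syntax that would be something like this (path taken from the message above):

```
FILES:${PN} += "/opt/newblack/bin/sentinel-config"
# or, to ship the whole tree:
# FILES:${PN} += "/opt/newblack"
```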
<vd>
rburton: but you can't install an image from a package feed, can you? What's the point of it?
<rburton>
vd: if you're not using feeds, it's moot
<rburton>
if you're using feeds its a fundamental problem
<Skinny7987>
ah.. adding only the file (within that path) isn't enough ?
<vd>
rburton: I'll be using opkg in a dev environment yes. I still don't see the impact on package feeds if the image is modified. You won't do 'opkg install core-image-minimal', so what is the problem exactly?
<rburton>
if you upgrade base-files then the hostname file changes
<rburton>
generically, any file you touch in a rootfs postprocess is not reflected in the feeds, so an upgrade will remove the rootfs postprocess changes
<vd>
rburton: hooo ok I see. That makes sense
<vd>
rburton: that must be a huge problem for users, passwords, or fstab created that way. How are people dealing with this when using feeds?
<rburton>
don't package those files in the first place
<vd>
true
<rburton>
stuff in /etc is typically marked as a conffile and packaging tools will offer to pick one
<vd>
hostname is a good candidate then
gsalazar has quit [Ping timeout: 256 seconds]
Tokamak has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
florian has quit [Quit: Ex-Chat]
<vd>
I wish I could select PREFERRED_PROVIDER in an image recipe
huseyinkozan has quit [Quit: Konversation terminated!]
<RP>
(cool on the patch to be clear, not that CI exploded)
mvlad has joined #yocto
<paulg>
RP, by any chance did you see anything about JIT and "accel=tcg" in the release notes or similar when you did the qemu upgrade in late Dec?
<paulg>
one of our internal sanity tests claims they now get "Error: JIT information is only available with accel=tcg" from qemu monitor and they think it happened coincident with the upgrade.
<Saur>
RP: I'm in the process of updating one of our builds from Honister 3.4 to 3.4.1, but it is failing in Jenkins left and right due to sstate_unpack_package() failing and causing setscene tasks to fail. Unfortunately there is no information in the log of why it fails. :( Looking at the function, after the changes that were introduced in 3.4.1, the only way for that function to fail should be if tar fails. The question now is what happens to
<Saur>
stderr from tar because I sure wish I could see it in the log...
vladest has quit [Remote host closed the connection]
vladest has joined #yocto
<Saur>
Scratch that, I found the stderr from tar. Apparently there are empty tarballs in our global sstate cache. Argh. :P
<jonmason>
Is python3-cryptography_36.0.1.bb broken now?
<jonmason>
python3-cryptography_36.0.1.bb:21: Could not inherit file classes/setuptools3_rust.bbclass
<moto-timo>
yeah, somehow the bbclass commits didn't get merged... I sent them and neither python3-cryptography nor python3-pyruvate would have built without them
chep` has joined #yocto
chep has quit [Read error: Connection reset by peer]
chep` is now known as chep
<tgamblin>
moto-timo: I must not be catching them all, let me send another