<JaMa>
DvorkinDmitry: if you just want modules-load.d file then look at KERNEL_MODULE_AUTOLOAD in meta/classes-recipe/kernel-module-split.bbclass
<DvorkinDmitry>
JaMa, oh, thank you!
<JaMa>
FILES variable doesn't do anything unless you meant FILES:${PACKAGE_NAME}
<DvorkinDmitry>
JaMa, yes, I know. FILES:${PN} += is meaningless in a kernel module recipe
<JaMa>
it's not meaningless as long as you have the correct package name there, but for modules-load.d it's not needed, as there is a better mechanism for that
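For reference, a minimal sketch of the mechanism JaMa points at, in a hypothetical out-of-tree kernel module recipe whose module is named "foo":

    inherit module
    # kernel-module-split.bbclass reads this and generates
    # /etc/modules-load.d/foo.conf in the kernel-module-foo package,
    # so no FILES addition is needed for autoloading
    KERNEL_MODULE_AUTOLOAD += "foo"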
<Crofton>
Build success
<Crofton>
whacking everything and trying again
<JaMa>
whack that spinning rust at least to 15K RPM
SK has joined #yocto
<SK>
Hello, I've all of a sudden started seeing the following error in my build process when running `repo sync`: "error: Server does not allow request for unadvertised object". Has anyone seen similar problems? I don't have any changes on my end; it just started happening today. (It runs via a GitHub runner, in AWS.)
jbo has quit [Ping timeout: 245 seconds]
tct has joined #yocto
<SK>
does the Yocto git repo (git.yoctoproject.org) have some sort of rate limiting implemented?
SK has quit [Quit: Client closed]
belgianguy has quit [Ping timeout: 246 seconds]
davidinux has quit [Ping timeout: 240 seconds]
davidinux has joined #yocto
SK has joined #yocto
SK has quit [Quit: Client closed]
Ablu has quit [Ping timeout: 255 seconds]
OnkelUlla has quit [Read error: Connection reset by peer]
Ablu has joined #yocto
OnkelUll_ has joined #yocto
prabhakarlad has quit [Ping timeout: 246 seconds]
Ablu has quit [Changing host]
Ablu has joined #yocto
jclsn has quit [Ping timeout: 245 seconds]
jclsn has joined #yocto
old_boy has quit [Quit: Client closed]
Marian64 has joined #yocto
<Marian64>
Hi,
<Marian64>
I'm trying to build a .wic image and I have the following wks.ini file
<Guest39>
JaMa Yes, it fails all the time. I am able to reproduce now
zpfvo has quit [Ping timeout: 246 seconds]
mckoan has quit [Ping timeout: 246 seconds]
olani_ has quit [Ping timeout: 246 seconds]
olani has quit [Ping timeout: 246 seconds]
kayterina has quit [Quit: Client closed]
olani has joined #yocto
olani_ has joined #yocto
mckoan has joined #yocto
leon-anavi has joined #yocto
zpfvo has joined #yocto
kpo has quit [Quit: Konversation terminated!]
prabhakarlad has joined #yocto
rob_w has quit [Ping timeout: 246 seconds]
vladest has quit [Ping timeout: 245 seconds]
rob_w has joined #yocto
belsirk has joined #yocto
rfuentess has quit [Ping timeout: 246 seconds]
Guest61 has joined #yocto
ptsneves has joined #yocto
mvlad has joined #yocto
Guest62 has joined #yocto
<Guest62>
has someone got the IRC chat web interface URL for NVIDIA development boards?
Guest61 has quit [Quit: Client closed]
lars_ has joined #yocto
<lars_>
Hello. I tried making my image recipes cleaner by moving common IMAGE_INSTALL stuff into a packagegroup, which makes my layer much tidier. But I get an error for some packages: An allarch packagegroup shouldn't depend on packages which are dynamically renamed (fuse-dev to libfuse-dev)
<RP>
lars_: don't make your packagegroup allarch ?
<lars_>
How?
Kubu_work has joined #yocto
<RP>
is your packagegroup recipe setting PACKAGE_ARCH ?
<lars_>
I just made a packagegroup, I did not explicitly say anything about allarch. Also those packages actually are machine type specific inside my packagegroup, like this: RDEPENDS:mypackagegroup:genericx86
<lars_>
No
<RP>
so set PACKAGE_ARCH = "${MACHINE_ARCH}" ?
<lars_>
But I share this packagegroup between both arm and x86 targets
<lars_>
Can I set it for just parts of the packagegroup?
<RP>
lars_: packagegroup.bbclass defaults to allarch FWIW, that is where it is coming from
<RP>
duplicating per arch is really minor, don't worry about that would be my advice
<lars_>
Ah, my whole point of making this packagegroup was to minimize duplication, to make maintainability easier and to always keep all targets in sync and up to date
<RP>
well, you can mark some bits as machine specific but the net result will be it has to be built once per arch
<RP>
split the machine specific bits to a separate recipe ?
<lars_>
Yeah, that will probably be good
<lars_>
Thanks!
<lars_>
When I set PACKAGE_ARCH for the packagegroup, then this will automatically handle compatibility? So if I add this packagegroup to my image, then it will only be installed for the compatible machines? Or do I have to add it like this: IMAGE_INSTALL:genericx86 ?
<RP>
It can't know about compatibility
<lars_>
Well, probably wrong word to use. Architecture might be better word
vladest has joined #yocto
<RP>
In that case, probably, if I understand the question
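A rough sketch of what RP suggests, with illustrative package names; setting PACKAGE_ARCH makes the packagegroup machine-specific, so machine overrides (and dynamically renamed packages) are allowed in its RDEPENDS:

    # packagegroup-myapps.bb (names are hypothetical)
    SUMMARY = "Common runtime packages shared across my images"
    inherit packagegroup
    # packagegroup.bbclass defaults to allarch; override that as discussed
    PACKAGE_ARCH = "${MACHINE_ARCH}"
    RDEPENDS:${PN} = "openssh curl"
    # machine-specific additions stay inside the group
    RDEPENDS:${PN}:append:genericx86 = " intel-microcode"

Images can then simply add IMAGE_INSTALL:append = " packagegroup-myapps"; the trade-off is that the group is built per machine instead of once for all architectures.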
<dvergatal>
btw., we would be grateful if any of you took a look at this patch and shared your thoughts, because it is still a draft
<RP>
sadly fortify source doesn't change the qemuppc issue
<RP>
dvergatal: I did just glance at the patch and I can kind of see what you're doing but the lack of any commit message and explanation doesn't help. I did wonder if it would make sense to split it into pieces
<dvergatal>
there is a comment in bugzilla
<dvergatal>
the last message
<dvergatal>
RP: in short, the problem we have discovered in this bug is that postinst scripts are being run in alphabetical order, and when a user is being added with dependencies that require an order other than alphabetical, the problem occurs
<RP>
dvergatal: That is not really enough. It only says what the patch does, it doesn't say what the issue being fixed is, why it matters or how the changes fix the issue
<RP>
dvergatal: I guessed that but I shouldn't be guessing :)
<dvergatal>
as i said this is just a draft
<dvergatal>
and we wanted to discuss the approach in here
<dvergatal>
if it is correct
<dvergatal>
RP: you shouldn't :P
rfuentess has joined #yocto
<RP>
dvergatal: I'm just giving feedback on your draft ;-)
<dvergatal>
RP: ok, Jan is eating his lunch so he will join us in a couple of minutes
* RP
feels like a bisection of glibc coming on
<dvergatal>
are you scared?:P
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<RP>
dvergatal: weary
<dvergatal>
aahhh i see
goliath has joined #yocto
<dvergatal>
btw. I still need to take care of the issue reported by khem, but I'm still configuring my Gentoo in parallel with other fixes like the one above, because our CI stopped being usable because of it
slimak has joined #yocto
frieder has joined #yocto
<dvergatal>
slimak: finally you joined us
<slimak>
@dvergatal the last time I used IRC was about 20 years ago... lots has changed since then
<dvergatal>
ok :D
<JaMa>
matching name :)
<dvergatal>
hahahaha
<dvergatal>
JaMa: yeah indeed:P
<dvergatal>
RP: slimak: OK i think we can start
<dvergatal>
slimak: I think you can reveal the details of the patch and the approach
<varjag>
when does stripping of target binaries happen in Yocto?
<varjag>
which stage
<JaMa>
do_package
<varjag>
hm.
<slimak>
The issue we had was that the addition of users, groups and group memberships was done with postinst scripts run in non-deterministic order --- when there were multi-level dependencies it could happen that this order was causing errors
<RP>
slimak: I was saying to dvergatal earlier, the patch really needs more explanation. I see you added sort() calls which would make it deterministic but I guess that wasn't enough to ensure everything ran as needed
<slimak>
My approach is to split each useradd postinst script into 3 parts: one for groups, a second for users, and a third for group memberships
<slimak>
the basepasswd script already executes the useradd scripts in alphabetical order, so I gave the scripts names that make them run in the correct order
<slimak>
and sorted() was needed for ordering in python part
<RP>
I think it does make sense but I don't really like the reuse of user_group_groupmems_add_sysroot for the other two, I think that is confusing
<dvergatal>
RP: yeah this is what we wanted to discuss
<slimak>
my question is whether you generally agree with the idea --- this patch is just a proof of concept --- the final version should certainly look more closely at avoiding generated code duplication
<RP>
I'd probably split the patches into two, one for the sort and one splitting useradd
<RP>
The principle looks fine to me
<dvergatal>
RP: ok so we all agree on the approach
<dvergatal>
cool :D
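For context, a minimal sketch (with made-up names) of the kind of useradd recipe where this ordering matters: the user below is placed in a group that another recipe's postinst may create, so running the generated postinst scripts in the wrong order fails.

    inherit useradd
    USERADD_PACKAGES = "${PN}"
    GROUPADD_PARAM:${PN} = "--system localgroup"
    # "sharedgroup" is assumed to be created by a different recipe, which is
    # exactly the cross-recipe dependency discussed above
    USERADD_PARAM:${PN} = "--system -g localgroup -G sharedgroup appuser"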
silbe has quit [Remote host closed the connection]
<RP>
shoragan: not heard of it, kind of sad OE/Yocto aren't mentioned
<shoragan>
RP, yes. there's a lot of existing work in the embedded community which seems to be overlooked :/
<shoragan>
I'll see if I can fit it into my schedule..
<RP>
shoragan: it would certainly be good to raise awareness
<shoragan>
hmm, registrations closed
<KanjiMonster>
it also says invitation only
<kanavin>
it seems like a microsoft-oriented thing aimed primarily at azure/cloud use cases
<kanavin>
I would go if I were invited, as I happen to be in Berlin, but :shrug:
<RP>
Do we know anyone going or any of the organisers?
<kanavin>
RP: Organizers: Linux Systems Group at Microsoft - Christian Brauner, Lennart Poettering, Luca Boccassi
<RP>
Ah. I guess we could ask Luca
<RP>
they are at least aware of YP
arisut_ is now known as arisut
gsalazar has joined #yocto
<RP>
of course not all glibc commits actually work :(
<neverpanic>
Surprised they also haven't reached out to other industry players, considering quite a bit of the agenda seems to be on over-the-air updates. I'm sure there are plenty of people in the automotive space with experience on that.
zpfvo has quit [Ping timeout: 260 seconds]
zpfvo has joined #yocto
zpfvo has quit [Ping timeout: 240 seconds]
zpfvo has joined #yocto
<manuel1985>
What is the path of a file? Just the dirname, or dirname+basename?
<manuel1985>
What's the common lingo?
GNUmoon has quit [Quit: Leaving]
<ptsneves>
shoragan: the font of that document's titles is something special
<alessioigor>
Following the docs, meta-openembedded meets both requirements.
silbe has quit [Ping timeout: 246 seconds]
<alessioigor>
I'll send my patch to openembedded-devel. Thanks to all!
Vonter has quit [Ping timeout: 245 seconds]
Vonter has joined #yocto
silbe has joined #yocto
rob_w has quit [Quit: Leaving]
Vonter has quit [Ping timeout: 246 seconds]
<vvn>
site.conf.sample has been removed from the template files, right?
Vonter has joined #yocto
<vvn>
What is the expected way to deploy a site.conf file? Manually on first build?
* vvn
is just confused by the upstream site.conf.sample
Xagen has joined #yocto
dgriego has joined #yocto
<vvn>
RP: in addition to TEMPLATECONF, what do you think about adding support for a SCONF_FILE environment variable in oe-setup-builddir in order to copy a site.conf file on first build?
varjag has quit [Quit: ERC (IRC client for Emacs 27.1)]
<vvn>
or do we prefer that the user copy their site.conf file manually after sourcing oe-init-build-env?
<RP>
vvn: It is meant to be site provided, not from our code
<vvn>
sure, because it is site provided, I guess it's okay to blindly copy the site.conf file into <builddir>/conf/ on first build, or even before every build.
<RP>
alessioigor: thanks, we should tweak that meta-* piece
<vvn>
is uninative likely a site-wide configuration?
vladest has quit [Read error: Connection reset by peer]
vladest has joined #yocto
<kergoth_>
vvn: site could very well live outside the build directory entirely, also, if your default BBLAYERS includes an external path. lots of ways to do it. makes sense to have a copy option though
Schlumpf has quit [Quit: Client closed]
<vvn>
yeah I'm using TEMPLATECONF to provide a base configuration for bblayers.conf and local.conf (without versioning them), but I wanted a way to make sure the site.conf is deployed as well
<vvn>
Otherwise I know that at some point I will rm -rf build/ ; . oe-init-build-env and that'll re-create the download and sstate caches
<vvn>
so, so far I'm using a wrapper script which does cp /etc/site.conf $BDIR/conf after sourcing oe-init-build-env...
<RP>
vvn: ideally you'd point at a standard location for site.conf e.g. somewhere in $HOME rather than copying it around all the time
<vvn>
RP: you mean require/include /etc/site.conf?
<RP>
vvn: there are various options
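One of those options, sketched as a single line that a template local.conf could carry; the path is only an example:

    # "include" silently skips the file if it is absent,
    # while "require" would error out instead
    include /etc/yocto/site.conf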
<kergoth_>
alter your bblayers.conf to set the default BBLAYERS to ${HOME}/.oe, put site.conf in ~/.oe, as one example. as always with yocto, multiple approaches are available :)
<vvn>
having a "site" layer is an option, but it seems weird to append BBLAYERS to alter the build in that way
<kergoth_>
I actually don't do it that way myself, since I use a product that has a templateconf and i didn't want to override it, or have that path in customer files, so i just have my workspace scripts symlink it into place instead
<kergoth_>
the whole point of site.conf is to be specific to your network and host (set PREMIRRORS, etc.), which doesn't make sense as part of the build directory, which is more specific than that, or your main layers, which are less so
* kergoth_
shrugs
<vvn>
yep indeed, I'd set SSTATE_DIR, DL_DIR, maybe DEPLOY_DIR (with "/${DISTRO}" appended) and maybe INHERIT += "buildhistory" in my site.conf
<vvn>
in other words, site.conf is a site-wide static configuration, while auto.conf is a site-wide dynamic configuration (build tag name, image links, ...)
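A sketch of the kind of site.conf vvn describes; the paths are placeholders:

    # site.conf -- site-wide settings shared by every build on this host
    DL_DIR = "/srv/yocto/downloads"
    SSTATE_DIR = "/srv/yocto/sstate-cache"
    DEPLOY_DIR = "${TMPDIR}/deploy/${DISTRO}"
    INHERIT += "buildhistory"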
<vvn>
but yeah, I think it'd be convenient if oe-setup-builddir could provide the user a way to check for these configs too
<vvn>
in the meantime, a wrapper to oe-init-build-env it is :-)
zpfvo has quit [Ping timeout: 245 seconds]
zpfvo has joined #yocto
zpfvo has quit [Ping timeout: 245 seconds]
zpfvo has joined #yocto
rfuentess has quit [Remote host closed the connection]
frieder has quit [Remote host closed the connection]
florian has quit [Quit: Ex-Chat]
leon-anavi has quit [Quit: Leaving]
Vonter has quit [Ping timeout: 248 seconds]
<JaMa>
khem: quick question about meta-clang: should clang always find the corresponding gcc installation, or do you remember some cases where the default GCC_INSTALL_PREFIX didn't work and you had to explicitly pass --gcc-install-dir? In some strange multilib setup I'm seeing gcc installed in /usr/lib32, but clang seems to search only in lib32-recipe-sysroot/lib/../lib
slimak has quit [Ping timeout: 245 seconds]
florian_kc has quit [Ping timeout: 255 seconds]
Vonter has joined #yocto
mbulut has quit [Ping timeout: 246 seconds]
<khem>
JaMa: to be honest I have not tested it much with a multilib setup; I remember someone did work on it a few years ago and made it work on x86 in meta-clang
zpfvo has quit [Remote host closed the connection]
<khem>
OE's install locations are not gcc's defaults, so we might have to teach clang about that
<JaMa>
ok, I'm cooking patch for that
amitk_ has joined #yocto
amitk_ has quit [Client Quit]
<JaMa>
seems simple enough; it searches /lib and /lib64, so adding /lib32 as well might fix the issue I'm seeing
<khem>
yeah
amitk_ has joined #yocto
ptsneves has quit [Ping timeout: 246 seconds]
gsalazar has quit [Ping timeout: 246 seconds]
Guest62 has quit [Quit: Client closed]
ptsneves has joined #yocto
pabigot has quit [Ping timeout: 246 seconds]
Guest11 has joined #yocto
<Guest11>
h
<RP>
khem: I've been trying to bisect glibc to track down which commit causes qemu to fail for ppc. It is looking like 2c6b4b272e6b4d07303af25709051c3e96288f2d works and b40f5f84c41bc484d4792531a693d7583cecae0a fails. Continuing to work through it. Each test takes around 30 mins
<Guest11>
is there any way to get data from a recipe in a packagegroup from that packagegroup? I'd like to, effectively, iterate over each recipe RDEPEND'ed in a packagegroup and get the SRCREV for them
Guest61 has quit [Quit: Client closed]
<RP>
khem: it doesn't make much sense, so I wonder if one of my tests was a false result :/
<ptsneves>
Guest11: with tinfoil you can
<ptsneves>
or jsut processing packagedata
<Guest11>
packagedata doesn't have SRCREV, unfortunately
<Guest11>
and how can I do that with tinfoil?
<Guest11>
I was trying to, but couldn't figure out how
<rburton>
right, packagedata is _target metadata_ so if the SRCREV doesn't end up in the version then it's lost
<rburton>
you can iterate packagedata to get the recipes and then look up srcrev in tinfoil
<RP>
khem: right, they don't seem obvious candidates :/
<RP>
khem: I'll keep going, see where things end up
Guest72 has joined #yocto
<Guest72>
ptsneves it killed my session, still me; can I do this from a job in a recipe/bbclass or no?
<ptsneves>
Guest72: ah no.
<ptsneves>
Then what you should do is augment the packagedata task to also put the SRCREV in the pkgdata. Then you can access it in another recipe's task after depending on the task that added the SRCREV data
prabhakarlad has quit [Quit: Client closed]
Guest11 has quit [Ping timeout: 246 seconds]
<Guest72>
there's no way to attach to the running server?
pabigot has joined #yocto
<ptsneves>
Guest72: you mean arbitrarily query for the data cache of another recipe?
<ptsneves>
if you want that, then I do not think you can
<Guest72>
I was just hoping to make a BOM (or at least a packagegroup-specific BOM) that also had the SRC_URI and SRCREV
<ptsneves>
Guest72: so to achieve that you have the 2 options provided
<rburton>
Guest72: personally i'd write a script to resolve the packagegroup to a list of packages using pkgdata, and then use tinfoil to look up SRC_URI/SRCREV for those recipes
<ptsneves>
Guest72: what is cool about the script is that you can do the post-processing without constraints from the bitbake way of doing things. You can also juggle datastores from multiple recipes, which you can never do inside a recipe.
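A rough sketch of the pkgdata-augmentation idea ptsneves mentioned earlier, as a small class that could be inherited globally; the class name, output path and file format are made up for illustration:

    # srcrev-pkgdata.bbclass (hypothetical)
    python do_package:append() {
        import os
        # PKGDESTWORK is the pkgdata staging area populated by do_package
        outdir = os.path.join(d.getVar('PKGDESTWORK'), 'extended')
        bb.utils.mkdirhier(outdir)
        with open(os.path.join(outdir, d.getVar('PN') + '.srcrev'), 'w') as f:
            f.write('SRC_URI: %s\n' % (d.getVar('SRC_URI') or ''))
            f.write('SRCREV: %s\n' % (d.getVar('SRCREV') or ''))
    }

A BOM task in another recipe (or a standalone script, as rburton suggests) could then read those files from the shared pkgdata after depending on the producing recipes' do_packagedata.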
<RP>
khem: 2d472b48610f6a298d28035b683ab13e9afac4cb being broken does lead to that commit. Continuing to bisect...
Guest72 has quit [Quit: Client closed]
dvergatal has joined #yocto
ptsneves1 has joined #yocto
ptsneves has quit [Ping timeout: 246 seconds]
ptsneves1 is now known as ptsneves
<khem>
yeah, resolvers can be a hog
florian_kc has joined #yocto
ptsneves has quit [Ping timeout: 246 seconds]
ptsneves has joined #yocto
ptsneves has quit [Client Quit]
<RP>
khem: bb9a4fc02841cf58a112a44b259477547893838b breaks so there is something wrong with the bisect :(
florian_kc has quit [Ping timeout: 250 seconds]
mvlad has quit [Remote host closed the connection]
ptsneves1 has joined #yocto
ptsneves1 is now known as ptsneves
speeder__ has joined #yocto
speeder__ has quit [Remote host closed the connection]
<RP>
khem: confirmed that 0567edf1b2def04840e38e3610452c51a3f440a3 is broken so it is something further back. The joys of intermittent failures :/
florian_kc has joined #yocto
<yates_work>
suggestion: in the subsections of https://docs.yoctoproject.org/ref-manual/tasks.html, e.g., "Normal Recipe Build Tasks," list the task subsubsections in the typical order in which they occur, or provide a diagram of their order.
<yates_work>
otherwise, that section of the manual is quite excellent
Kubu_work has quit [Quit: Leaving.]
Kubu_work has joined #yocto
<yates_work>
does the do_package task happen when building a single executable application recipe? i.e., something like a command-line utility written in C and generating a single executable output?
<yates_work>
or does it only happen when building images?
<rburton>
it always happens for 'normal' recipes
Kubu_work has quit [Quit: Leaving.]
<rburton>
even a single C binary recipe has the main package, the debug package, and the source package
Kubu_work has joined #yocto
<yates_work>
aha. good.
<yates_work>
and I see that local.conf specifies the package format (e.g., rpm). I was getting confused about whether this was specified by the image recipe or elsewhere; there's my answer.
<rburton>
remember every recipe is effectively isolated
<rburton>
an image recipe can't tell your libc recipe to generate rpms or ipkgs
<rburton>
the distro can, because that's global state before each recipe is parsed
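In other words, the packaging backend is global configuration, e.g. in conf/local.conf (or the distro configuration), not something an image recipe chooses:

    # conf/local.conf -- applies to every recipe in the build
    PACKAGE_CLASSES = "package_rpm"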
flom84 has joined #yocto
<JaMa>
khem: only partial success with lib32, I'm collecting the info in Draft PR https://github.com/kraj/meta-clang/pull/851 once I have something usable, will change it to regular PR
<khem>
JaMa: yeah it seems to be going in right direction AFAICT
<khem>
RP: yeah that list looked light to me
<yates_work>
rburton: ack. thanks for the guidance.
<RP>
khem: I have 0567edf1b2def04840e38e3610452c51a3f440a3 breaking and 6f962278e24bdf5cb5f310c5a17add41da95407c working
<RP>
khem: obviously the working one may be another false positive, but I'll try and bisect it, see what that gives
<RP>
khem: the stderr fflush there looks interesting
<khem>
RP: that has some interesting commits lets see
<khem>
RP: do we have a build host with glibc 2.38 ?
<khem>
* af130d2709 Always do locking when accessing streams (bug 15142, bug 14697)
<RP>
khem: not yet
<RP>
khem: I tried with fortification turned off btw, it didn't help
<khem>
* 64d9580cdf Allow glibc to be built with _FORTIFY_SOURCE
<vvn>
would you have a preferred way between TEMPLATECONF=foo . oe-init-build-env and export TEMPLATECONF=foo ; . oe-init-build-env?
<rburton>
former
<vvn>
is there a rationale for that or is it purely a preference?
<Saur>
vvn: With the former, the TEMPLATECONF variable is only set while the command is running; in the latter case it becomes part of the environment.
<vvn>
yep this I know. Put differently, is it unwanted to have it part of the environment?
<RP>
that variable name is too vague to be part of a general environment
<vvn>
ok so it is more meant to be a variable for a single given script, thus not having it in the environment is better