<moto-timo>
kevinrowland: the package manager has a "cache" on target and therefore knows what it has installed vs what the "package feed" offers (at the simplest, just a 'python3 -m http.server' in the proper directory under deploy)
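A minimal sketch of that workflow, assuming rpm packaging; the host address and port are illustrative, not from the discussion:

    # build host: generate the package index, then serve the deploy directory
    bitbake package-index
    cd tmp/deploy/rpm && python3 -m http.server 8000

    # image configuration: enable a package manager on target, point it at the feed
    EXTRA_IMAGE_FEATURES += "package-management"
    PACKAGE_FEED_URIS = "http://192.168.7.1:8000"

    # on the target: refresh the cache and pull in extra tools
    dnf makecache && dnf install valgrind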
<moto-timo>
this "package feed" workflow does require a "PR server" to be sane. And you might end up needing to bump "PR" in recipes to make it work. It's not without pain.
<moto-timo>
kevinrowland: there are also things in the wiki or TipsNTricks about it... some of that is "write once wiki" and needs updating
<kevinrowland>
moto-timo: gotcha. I wonder if the "package feed" thing is a little too heavyweight for us.. especially reading your latest message about ${PR}. I'll take a look either way. At the end of the day I just want to deploy valgrind or similar debug tools when needed
<moto-timo>
kevinrowland: it's not as _simple_ as people want it to be, but an NFS root/TFTP workflow allows very rapid developer cycles... but it is far enough from production that I rarely use that tool
<moto-timo>
kevinrowland: there is also the eSDK workflow, but it needs a bit of love so I won't go out on a limb on it
<moto-timo>
kevinrowland: Just throwing out a random thought. No attempt at validity or verification.
<moto-timo>
kevinrowland: there you go. that is a GoodIdeaTM
<moto-timo>
I do care about this, but it is a bit draining to move forward when we are struggling to get releases out the door. And I apologize for being particularly exhausted as we push for the feature freeze.
<moto-timo>
I am more than willing to help advise technically and move such a feature forward
vmeson has quit [Remote host closed the connection]
jclsn7 has quit [Ping timeout: 240 seconds]
vmeson has joined #yocto
jclsn7 has joined #yocto
jclsn7 has quit [Ping timeout: 240 seconds]
jclsn7 has joined #yocto
qschulz has quit [Remote host closed the connection]
jclsn7 has quit [Ping timeout: 256 seconds]
Starfoxxes has quit [Ping timeout: 252 seconds]
Starfoxxes has joined #yocto
qschulz has joined #yocto
chep has quit [Read error: Connection reset by peer]
chep` has joined #yocto
chep` is now known as chep
jclsn7 has joined #yocto
prabhakarlad has quit [Ping timeout: 256 seconds]
jclsn7 has quit [Ping timeout: 272 seconds]
jclsn7 has joined #yocto
jclsn7 has quit [Ping timeout: 272 seconds]
GillesM has quit [Remote host closed the connection]
bluelightning has quit [Ping timeout: 240 seconds]
<thekappe>
due to some errors I've already added "glibc" to "DEPENDS" and inherit autotools bash-completion pkgconfig
<thekappe>
Now I'm getting the following error: Exception: FileNotFoundError: [Errno 2] No such file or directory: '/home/user/yocto/build/tmp/sysroots-components/aarch64/gcc-runtime/sysroot-providers/gcc-runtime'
<thekappe>
Does anyone have an idea on how to fix it?
JaMa has joined #yocto
ilunev has joined #yocto
ilunev has quit [Read error: Connection reset by peer]
asconcepcion[m] has quit [Quit: You have been kicked for being idle]
ilunev has joined #yocto
ilunev has quit [Client Quit]
Austriker has quit [Ping timeout: 256 seconds]
osama4 has joined #yocto
osama4 has quit [Read error: Connection reset by peer]
behanw has quit [Quit: Connection closed for inactivity]
<qschulz>
FWIW, I tested locally, and the index page is generated and I can access it
Dracos-Carazza has quit [Ping timeout: 256 seconds]
Dracos-Carazza has joined #yocto
osama4 has joined #yocto
xmn has quit [Quit: ZZZzzz…]
oobitots has joined #yocto
florian has joined #yocto
<RP>
qschulz: something isn't working as for example 3.4.1 is missing
<qschulz>
can we add `set -eu` to this script so we have an idea if an issue arises and stop the script from running?
<RP>
qschulz: probably, I don't know if that will cause any other issues but we could try
<qschulz>
RP: e.g. could simply be some rsync issue? I seem to recall reading somewhere there are intermittent network issues?
<RP>
qschulz: we see clear errors about things being missing before that so whilst that could occasionally be an issue, I'm not convinced
otavio has quit [Read error: Connection reset by peer]
otavio has joined #yocto
Dracos-Carazza_ has joined #yocto
Dracos-Carazza has quit [Ping timeout: 272 seconds]
leon-anavi has joined #yocto
<mcfrisk>
is there some QA check in poky for images to e.g. check that /usr/include doesn't exist, maybe one that can be configured with forbidden content to avoid developers doing silly things?
goliath has quit [Quit: SIGSEGV]
Guest2142 is now known as JaMa
JaMa has quit [Killed (tantalum.libera.chat (Nickname regained by services))]
JaMa has joined #yocto
JaMa is now known as Guest6151
Guest6151 has quit [Killed (silver.libera.chat (Nickname regained by services))]
JaMa has joined #yocto
<Saur[m]>
mcfrisk: There is, kind of (in master, and once 3.4.2 is released it will work in Honister as well). The QA check is called "empty-dirs" and is used to validate that a directory is empty. The variable QA_EMPTY_DIRS defines the list of directories that are expected to be empty (see insane.bbclass).
<RP>
mcfrisk: no, it could be something we could add though
<Saur[m]>
It can typically be used to make sure that files are not installed into directories that will be used as mount points, or to make sure that files are not being installed into the wrong directories.
<RP>
mcfrisk: I like Saur[m]'s approach
JaMa is now known as Guest7223
JaMa has joined #yocto
<Saur[m]>
It also allows defining a per-directory message that is shown when a file is installed in that directory, so one can give an informative message about what to do instead. See QA_EMPTY_DIRS_RECOMMENDATION in insane.bbclass.
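A hedged sketch of the configuration Saur[m] describes, using the variable names from insane.bbclass (untested; the /usr/include entry is mcfrisk's example, and whether empty-dirs is already fatal depends on the release):

    # distro or local.conf: forbid headers in packages via the empty-dirs check
    QA_EMPTY_DIRS:append = " /usr/include"
    QA_EMPTY_DIRS_RECOMMENDATION[/usr/include] = "shipping the headers in the -dev package instead"
    ERROR_QA:append = " empty-dirs"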
Vonter has quit [Ping timeout: 256 seconds]
<mcfrisk>
thanks Saur[m] RP, I'll have a look and possibly cherry-pick to my dunfell tree
Vonter has joined #yocto
<qschulz>
RP: I'm even tempted to set -x in the script
<Perceval[m]>
Hello all, on my yocto distrib I Have the following error regularly
<Perceval[m]>
systemd-journald[3073]: Forwarding to syslog missed 28 messages.
<Perceval[m]>
do you have an idea on where it could come from?
<Perceval[m]>
I know that it is the forwarding from systemd logging system to syslog, but I don't know what service or soft could cause such error
Thorn has quit [Ping timeout: 256 seconds]
<RP>
qschulz: I was wondering too. We could try a patch?
Thorn has joined #yocto
<qschulz>
RP: preparing something yes, will send soon
<LetoThe2nd>
yo dudX
florian_kc has joined #yocto
goliath has joined #yocto
lucaceresoli has joined #yocto
<qschulz>
RP: mmm set -e isn't a great idea since old branches of the docs might throw warnings and then fail
<qschulz>
I'll send it anyway, so we can start discussing this stuff
Schlumpf has quit [Quit: Client closed]
Schlumpf has joined #yocto
JaMa is now known as Guest889
Guest7223 is now known as JaMa
Guest889 has quit [Remote host closed the connection]
florian_kc has quit [Ping timeout: 272 seconds]
mauro_anjo has joined #yocto
tre has quit [Ping timeout: 272 seconds]
thekappe has quit [Ping timeout: 256 seconds]
tre has joined #yocto
Schlumpf has quit [Quit: Client closed]
fitzsim has joined #yocto
thekappe has joined #yocto
jmiehe has joined #yocto
<thekappe>
hello guys, I have to build recipe A, which RDEPENDS on recipe B. The point is that B must be compiled with unicode support (--enable-utf --enable-unicode-properties). Is there a way to instruct bitbake to compile B with the proper flags while bitbaking A?
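For reference: bitbake always builds B from B's own recipe regardless of which target pulled it in, so the flags belong in a bbappend for B rather than anywhere in A. A hypothetical sketch (recipe name assumed; a PACKAGECONFIG for these options may already exist in some releases):

    # b_%.bbappend in your layer (use _append instead of :append on pre-honister releases)
    EXTRA_OECONF:append = " --enable-utf --enable-unicode-properties"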
<RP>
michaelo: I'm thinking we should apply qschulz's changes quickly?
alessioigor has joined #yocto
alessioigor has quit [Remote host closed the connection]
<RP>
kanavin_: your changing TARGET_ARCH in the cross recipes is worrying me
<RP>
kanavin_: The intent, at least for cross canadian was to have one arch compiler which targeted all the different SYS pieces
<RP>
cross is a little bit less clear but in theory the same mapping could apply
<kanavin_>
RP: gcc-cross-arm-none-eabi needs to be configured differently to gcc-cross-arm-poky-linux: "--with-sysroot= --with-headers=no --disable-gcov --disable-threads --enable-multilib --with-multilib-list=rmprofile,aprofile"
<kanavin_>
RP: even then, I wouldn't necessarily trust that --target=arm-poky-linux would produce the same output as --target=arm-none-eabi
<kanavin_>
RP: that said, I reverted that change locally, as I knew it would be a tough sell, and I was right ;)
<RP>
kanavin_: why does it need to be configured differently?
<kanavin_>
RP: because it needs to build libgcc pieces in absence of libc
<RP>
kanavin_: libgcc needs to be configured differently, sure
<RP>
we build gcc-cross and libgcc in separate recipes
<kanavin_>
RP: the way we build libgcc is 'copy preconfigured gcc build tree into $S, run make in libgcc subdir'.
<kanavin_>
there is no separate configuration step for libgcc
<RP>
kanavin_: well, there probably should be as these things aren't that tied together (and aren't in cross-canadian)
<kanavin_>
RP: right, I am now checking how this is done in cross-canadian sdk builds
<RP>
like a lot of things, I've tried to clean up and separate out the pieces over time but it only gets so far
<kanavin_>
makes one wish for multi-target toolchains like rust ;)
<RP>
kanavin_: cross-canadian doesn't need its own libgcc so in that sense it can cheat and use the target one we have already
<RP>
kanavin_: but the fact that cross-canadian works with that shows gcc-cross isn't sys specific if you pass the right params in
<RP>
kanavin_: there is no good reason gcc can't be multi target, it just can't do multi arch
<kanavin_>
RP: gcc-cross isn't but libgcc is, and the params you need to pass for libgcc are passed via gcc-cross
<kanavin_>
(see above)
<RP>
kanavin_: Right, I agree that is a problem today :(
<RP>
I was just trying to share the direction I was trying to move things in
<RP>
gcc-cross is meant to only rebuild per arch. Obviously it doesn't quite work right today :(
<kanavin_>
it also shouldn't require target libc to build, but alas, it does
<kanavin_>
for libgcc
<kanavin_>
unless you suppress that with a bunch of non-obvious options
<RP>
kanavin_: it depends. musl doesn't have that issue, glibc does
<RP>
the dependencies are tweaked accordingly but you'll find it uses the same gcc-cross for both
<RP>
despite them being different _SYS
<kanavin_>
RP: I wish gcc upstream would better separate the pieces, and not control everything through a top level configure
<RP>
kanavin_: well, yes
<kanavin_>
I understood just enough of it to do what the customer wants, but the whole thing is just short of collapsing under its own complexity
<RP>
kanavin_: I have tried to improve things on several occasions, I keep chipping away at it
<RP>
which is why I felt I should warn you that patch didn't look quite right to me
<RP>
but if you want to tell me I'm wrong, whatever, I have a large queue of problems today :(
<kanavin_>
RP: the patch is already reverted locally
<kanavin_>
I found a way to build an additional gcc-cross without changing the configuration for the main one, but I didn't find a way to decouple libgcc from gcc-cross
<RP>
kanavin_: I would have to go and look at the code again to be able to give useful feedback :(
<kanavin_>
RP: maybe it's best to wait until I'm happy with it, and ready to submit for review :)
<kanavin_>
and yes I'm abusing mcextend on a grand scale there, I love how I can avoid writing extra recipes with it :)
<RP>
kanavin_: I'm just worried about you spending a ton of time on something which isn't a direction I'm comfortable with; that was the reason I mentioned it
<RP>
kanavin_: that was probably more what the standard class extensions were for rather than mcextend :/
prabhakarlad has joined #yocto
<RP>
qschulz: "we probably want to use virtualenv with specific Sphinx versions" - this fills me with dread making this work over all the distros on the autobuilder :(
<michaelo>
RP, qschulz: I agree. This way we know what happens.
<kanavin_>
RP: thanks for the tip, if the standard class extension works, that'd be easy to adjust - much easier than sorting the multi-cross issue
<RP>
michaelo: I was meaning the docs list fix for basehash
<RP>
the script changes in helper I'm worried about making work well :/
<RP>
we probably want the script to run all the different pieces, collect up the exit codes and skip the rsync if anything fails?
Vonter has quit [Ping timeout: 240 seconds]
<RP>
then the build will fail, the docs website would be ok but we have a list of all the failures
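A rough shell sketch of that idea, folding in the earlier `set -eu` discussion (the build_docs function and the branch list are hypothetical placeholders):

    set -u -o pipefail     # deliberately no -e: keep going and collect failures
    failed=""
    for branch in master honister dunfell transition; do
        build_docs "$branch" || failed="$failed $branch"
    done
    if [ -n "$failed" ]; then
        echo "docs build failed for:$failed" >&2
        exit 1             # fail the build and skip the rsync below
    fi
    rsync -irlp --checksum --ignore-times --delete output/ docs@docs.yoctoproject.org:docs/

The rsync destination here mirrors the commands quoted later in the discussion.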
<kanavin_>
RP: if that helps with the worry, I warned the customer there is no certainty in upstreaming the needed tweaks in the time scale they agreed to pay for. It's a problem that might become important in the future, as people will want to use the part of the target system that is capable of builds to build for the other part of the system, which is too constrained to host a compiler for itself
Vonter has joined #yocto
diesUndDas has joined #yocto
<diesUndDas>
Hello guys, I am confused.
<diesUndDas>
I would like to create a symlink to a library, in this case it is libsdl2. So I have created a bbappend and set FILE_${PN} to /usr/lib/libSDL2.so
<diesUndDas>
But it gets removed by do_populate_sysroot :(
<rburton>
what does your bbappend actually contain?
<diesUndDas>
I have created the bbappend file with the recipetool
<rburton>
you don't need to set FILESEXTRAPATHS if you're not adding patches
<rburton>
I presume this is a libsdl2.bbappend?
<michaelo>
RP: now I understand you're talking about qschulz' doc changes, not his changes to run-docs-build. I'm merging them into master-next and preparing a pull-request.
<diesUndDas>
rburton: okay, I am going to remove it then. Almost, it is a libsdl2_%.bbappend
<rburton>
diesUndDas: so packaging happens in the order of packages in the PACKAGES variable. PN-dev is before PN, and that contains ${libdir}/*.so, because those symlinks are only needed at build time
<rburton>
diesUndDas: so if you've something that is dlopen()ing libsdl.so, then it's broken
<diesUndDas>
rburton: Why is it going to be broken? Doesn't it reference the actual libsdl2 shared library anymore? :S
<rburton>
no distribution will ship the .so on its own unless it also ships the headers and stuff
vladest1 has joined #yocto
<diesUndDas>
:(
<diesUndDas>
And how can I tell the buildsystem to create one?
<rburton>
step back: why do you need libSDL2.so to exist
<diesUndDas>
rburton: because an application tries to load the library as "libSDL2.so". I assume that's not how it should be done? :D
<rburton>
does it dlopen() at runtime?
ecdhe_ has quit [Ping timeout: 260 seconds]
<rburton>
if so, yes, that's wrong.
<rburton>
if you *really* need to do this you can remove the .so from FILES:PN-dev so they end up in PN. but the application is the problem.
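A hedged sketch of that last suggestion as a libsdl2_%.bbappend (untested; FILES_SOLIBSDEV is the default FILES entry that routes unversioned .so symlinks into ${PN}-dev, and the dev-so QA check must be skipped once the symlink lands in ${PN}):

    # keep the unversioned symlink out of ${PN}-dev and ship it in ${PN}
    FILES_SOLIBSDEV = ""
    FILES:${PN} += "${libdir}/libSDL2.so"
    INSANE_SKIP:${PN} += "dev-so"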
<qschulz>
RP: sorry was away.. What are the issues you're foreseeing with virtualenv for the docs?
<qschulz>
michaelo: RP: also re the docs, we probably want to update the renamed variables too. Didn't have the time now but I guess a good starting point is convert-variable-renames script?
<qschulz>
jclsn[m]: if any dependency of Chromium gets rebuilt, Chromium will get rebuilt, there's no going around that unfortunately
kroon has quit [Quit: Leaving]
<jclsn[m]>
qschulz: Okay, I will ask my boss to reconsider that Threadripper then
TundraMan is now known as marka
vladest has quit [Quit: vladest]
<qschulz>
RP: michaelo: halstead: pretty sure I fixed the most frequently occurring issue with the sphinx docs with the patch I just sent for the autobuilder
vladest has joined #yocto
<qschulz>
RP: I don't know what you think about that, but I think it makes sense to set -eux -o pipefail and re-use the same Sphinx version for all docs building, and the day we have an incompatibility we'll rethink this pipenv/virtualenv stuff?
<RP>
qschulz: I'm happy to set those and worry about the sphinx version later
<RP>
qschulz: We'll need mhalstead later for the rsync issue or I can try and poke at it
pgowda_ has quit [Quit: Connection closed for inactivity]
<RP>
moto-timo: I have some changes which may help, still debugging issues though. I've spent the day on this :/
<rfs613>
qschulz, rburton, RP: morning gents, I have a bit more info about the cve-check issue.
<rfs613>
I had noticed it ran slower locally with the lock, but in our CI it was far worse - a build that usually takes 1-2 hours is taking 24 to 48.
<rburton>
rfs613: i sent a patch this morning you can try
<rfs613>
The lock is in a $DL_DIR subdir, and in the case of CI this is shared via NFS (I think)
<rfs613>
rburton: I'll check the list in a moment; I tried a patch yesterday that seemed to help FWIW
<rburton>
yeah i don't want to merge a patch which involves copying the database for every recipe
<RP>
rburton: could we just copy into TMPDIR once after updating and do that copy under a DL_DIR lock?
<rburton>
i'd hope we can avoid copying entirely
<rfs613>
rburton: okay, I see the patch on list, makes sense not to copy, will test it here shortly.
<rburton>
the only thing that writes is the cve-update-db recipe, which is a dependency of the reading tasks, so it will happen once, early
<RP>
rburton: but not with builds in parallel which worries me :/
<rfs613>
what happens if we run multiple concurrent builds?
<zyga[m]>
rburton: I ran into that issue in oniro
<zyga[m]>
our builds run on top of a shared nfs disk
<zyga[m]>
we ended up moving the database to local storage to avoid long lock waits
<zyga[m]>
otherwise builds would stall as the database lock was held by one task
<zyga[m]>
things worked but were terribly sequential
<rfs613>
we were seeing "NFS reclaim lock" errors, not 100% certain if they are related, but seems likely.
<rburton>
right, my patch removes the lock
<rburton>
rfs613: are you running concurrent builds across the same DL_DIR? can you verify with timestamps that the problem is caused by multiple builds writing to the same sqlite database?
<zyga[m]>
at oniro we use the same DL_DIR and SSTATE_DIR across all builds
<rfs613>
rburton: yes, we share DL_DIR and sstate across builds. Debugging is difficult because I have no shell access, all I can do is add "ls <whatever>" commands to the build recipes, and hope they show up in the logs.
<rfs613>
Also the build runs in container which is destroyed after build...
<rburton>
you can look at the build logs and see if the times of cve-update-db-native correlate with failures in other runs
<rfs613>
(but obviously DL_DIR and sstate are outside, via NFS apparently)
<michaelo>
qschulz: great catch! Many thanks
<rfs613>
rburton: yep, I can try checking timestamps, but let me get a new build running first, then I'll have time for log grepping
<qschulz>
michaelo: still, I think the bitbake rsync needs to be modified. I assume
<qschulz>
will actually delete everything under the docs directory on docs.yoctoproject.org
pasherring has quit [Remote host closed the connection]
<qschulz>
so if the yp docs fail to build and the script does not reach their rsync, then the yp docs won't be available on the website, though bitbake's will
<qschulz>
so I **guess** rsync -irlp --checksum --ignore-times --delete bitbake/* docs@docs.yoctoproject.org:docs/bitbake/
<qschulz>
might be a good replacement but I had so many issues with the few rsync scripts I wrote that I don't know if this does anything I want :)
pasherring has joined #yocto
pasherring has quit [Remote host closed the connection]
<rburton>
RP: again i think bitbake needs shared lock support
<rburton>
a solid fix would be to let the reading tasks shared lock the database
<rburton>
i guess the tasks can do that lock manually
<RP>
rburton: happy to add it, shouldn't be hard
<RP>
rburton: internally the lock functions already do
<rburton>
yeah
<RP>
rburton: I think the sstate code uses it already
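For reference, a sketch of what manual shared locking in a reading task could look like with the existing bb.utils API (the task name, variable name, and query are assumptions, not the actual cve-check code):

    python do_cve_check () {
        import sqlite3
        db_file = d.getVar("CVE_CHECK_DB_FILE")  # assumed variable name
        # shared=True takes a read lock, so readers don't serialize each other
        lf = bb.utils.lockfile(db_file + ".lock", shared=True)
        try:
            conn = sqlite3.connect(db_file)
            cur = conn.execute("SELECT 1")  # placeholder for the real queries
            conn.close()
        finally:
            bb.utils.unlockfile(lf)
    }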
<RP>
rburton: I still think for efficiency you'd want a tmpdir local copy of that database
<rburton>
i really don't think its needed
<rburton>
writing is rare
<RP>
rburton: the writer tasks would likely stall out under pressure from cve check tasks though :/
<rburton>
there won't be lots of writer tasks
<RP>
rburton: I was surprised we didn't have shared task locks to be honest
<rburton>
if the mtime of the database is less than an hour, it doesn't update it
<michaelo>
qschulz: why not directly rsync -irlp --checksum --ignore-times --delete bitbake/ docs@docs.yoctoproject.org:docs/bitbake/ (without the "*")?
<michaelo>
it's true I'm not comfortable with the previous command :-|
<qschulz>
michaelo: whatever works by deleting only what's in docs/bitbake and not docs/ :)
<rfs613>
rburton: tested your two patches locally, seems fine (no slowdown)
<qschulz>
but now that I'm reading the last rsync, it's probably just fine? otherwise we'd always be missing the bitbake docs
<qschulz>
michaelo: never mind, seems like the manpage tells us the rsync commands are just fine. So we can "safely" enable set -eu -o pipefail. The side effect is that if nobody monitors the builds, the docs will stay out of date, since rsync won't run if anything fails before it
<RP>
qschulz: swat does monitor failures
<michaelo>
qschulz: it sounds right. However, the first rsync command doesn't always do what it's supposed to. It's probably safer to submit the change you proposed. I would do that if I were you...
<qschulz>
RP: so the question right now is... do we disable the warning-to-error to pass most builds (and not see the error about the non-existing BB_HASHBASE_WHITELIST variable, for example)
<RP>
qschulz: can we get to a point where we're warning free again easily?
<qschulz>
RP: no warning from master + my two patches, didn't check the other branches
<qschulz>
but if it's what we aim for, then I can check. I would rather keep the fail-on-warning in Sphinx and enable set -eu -o pipefail so that we don't miss stuff
<qschulz>
(note that they aren't exclusive, it's just that set -eu -o pipefail is a bit less useful if you disable the warning-to-error in sphinx builds :) )
<michaelo>
I agree to keep the warnings too
<RP>
qschulz: looks like transition, dunfell and yocto-3.4.1 still fail :(
<qschulz>
RP: transition is because of missing make target, can be easily fixed (since it's a branch)
tre has quit [Remote host closed the connection]
<rfs613>
hmm, so it seems I cannot .bbappend for a bbclass, correct?
<smurray>
rfs613: yep
<rfs613>
right, that makes it tricky to test rburton's cve fixes in our CI setup, since I cannot modify poky there.
<rfs613>
any clever ideas for me?
pasherring has joined #yocto
<rfs613>
i guess I could have the build script do "git revert d55fbf47794" which would at least get the build times back down to a few hours...
<smurray>
rfs613: the typical approach is to just drop the new version of the bbclass in a layer to override
<smurray>
rfs613: which isn't great for maintainability, but it does work
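One hedged layout sketch of that override approach (layer name hypothetical; it works because class files are looked up along BBPATH, so a copy in a layer that comes earlier in BBPATH wins):

    meta-mylayer/
    |-- conf/layer.conf              # adds this layer to BBPATH
    `-- classes/
        `-- cve-check.bbclass        # patched copy, shadows meta/classes/cve-check.bbclass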
oobitots has quit [Quit: Client closed]
goliath has quit [Quit: SIGSEGV]
<qschulz>
RP: yocto-3.3.x and yocto-3.4.x need a patch to use the correct bitbake version for the objects.inv from bitbake, but I don't see any specific error in the autobuilder for the dunfell branch? Also, the issue of -W messing up the build only exists since the honister branch for the docs (it was not part of the flags before)
<mckoan>
Generating an image with 'dunfell' results in the creation of timestamped images which are not deleted but keep accumulating in deploy/images. Has anyone had the same problem recently?
<michaelo>
qschulz, RP: will we have to fix 3.4.1 too? I wish we could only generate the docs for each release branch, not for each individual release :|
<rburton>
rfs613: we use kas which lets you drop patches onto other layers as part of the setup
osama4 has quit [Ping timeout: 256 seconds]
<qschulz>
michaelo: fcb24deb8b3abb8a77a12baa2cdd5ba5aa976f01 is missing for all tags/branches
<qschulz>
branches can be fixed, tags needs to be patched
<smurray>
mckoan: specifically dunfell HEAD? I don't see that with 3.1.14 in AGL here, can rig up a test with HEAD in a bit
<michaelo>
qschulz: right. That's why I'm asking whether we could only generate the docs for each branch, not for each release. What would we miss?
user95 has joined #yocto
<RP>
michaelo: the idea was to be able to select the docs for each release version in the version selector
<michaelo>
qschulz, RP: by the way https://docs.yoctoproject.org/_static/switchers.js is out of date (check the Dunfell release, 3.1.12 instead of 3.1.14). It seems the run-docs-build script is not replicating switchers.js from master but from the latest tag that was processed.
<mckoan>
smurray: thanks. No, it is an intermediate version. However, the configuration involves some 3rd-party layers, therefore I am not sure who's guilty. It's very difficult to figure out where it is and I wanted any useful hint.
<tgamblin>
zeddii: What is the cutoff date for M3 contributions to the kernel (assuming some might need to be made)?
<moto-timo>
RP: do you think the sysconfigdata is what is breaking meson in sdk?
<smurray>
mckoan: maybe look in 'bitbake -e' for oddness wrt changes to the image variables?
<zeddii>
I'll do bugfix and -stable updates up until release, so no real cutoff. I have some queued changes now that I'll be sending this week.
<mckoan>
smurray: thx
<moto-timo>
RP: and thank you for figuring out the pip mangling patch
<michaelo>
RP: yes, I understand, but why can't we use the 3.4.2 docs when we're using 3.4.1 (for example)? Shouldn't the latest docs for each branch always be sufficient? Do you expect a case where the docs of 3.x.n+y are incompatible with the 3.x.n release? Just hoping to simplify things if possible...
florian_kc has joined #yocto
<RP>
michaelo: we replicated what went before and didn't want to "hide" history from users.
<RP>
the docs shouldn't be determining the development process
<rfs613>
smurray, rburton: don't trust myself to mess with the BBLAYERs, so I've kludged a "git revert" into my build script... not ideal but we should know in an hour or two if it works.
<michaelo>
RP: this makes perfect sense. What's making perhaps less sense is always regenerating the stuff that was generated successfully.
<RP>
michaelo: how else will the deprecation warnings/headers get added and the menus updated?
<michaelo>
RP: oh right, you win :-)
<RP>
moto-timo: I think nativesdk is being cross compiled so probably needs that
<RP>
moto-timo: I was meaning that my pip mangling patch wasn't working for meson-nativesdk and I thought it should
AKN has joined #yocto
<moto-timo>
RP: agreed. Trying to wake up and catch up.
<RP>
moto-timo: I've stuffed my hacks into master-next and run a build, we'll see what happens
Tokamak has quit [Ping timeout: 250 seconds]
Etheryon has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<RP>
moto-timo: I know some of the fixes helped e.g. python3-pip to build
<michaelo>
qschulz, RP: I can propose a patch to add "make clean" to the "transition" branch.
<RP>
moto-timo: the wheel install thing is because it tests "python3 -m pip" and then decides it is already installed and tries to remove it
<RP>
moto-timo: so rather than force-reinstall, we want ignore installed
<RP>
michaelo: please
<RP>
michaelo: I think we just have to work through the issues
Tokamak has joined #yocto
user95 has quit [Ping timeout: 256 seconds]
Tokamak has quit [Read error: Connection reset by peer]
<qschulz>
RP: michaelo: yup, a clean makefile target is required for the transition build
<qschulz>
then fcb24deb8b3abb8a77a12baa2cdd5ba5aa976f01 or similar for honister branches/tags
<qschulz>
michaelo: we already patch 3.3 and 3.4 tags so you can hook up a new patch to the logic without too much hassle I think
<qschulz>
From what I saw from the logs, this should probably be enough to pass the build
Tokamak has joined #yocto
<qschulz>
but there are fixes required for other branches/tags too which aren't failing the build (warnings only, and since they weren't turned into errors before honister, the build succeeds)
<qschulz>
I cannot have a look at it today, only tmrw, so if michaelo you have time before ~10am tmrw, please do. Otherwise we can sync tmrw
<rfs613>
rburton: I searched through previous build logs, but don't have any cve-check "read only database" errors in the last few weeks. Older builds get pruned... so I cannot confirm or deny whether the timestamps align...
<moto-timo>
RP: I was on the fence about ignore-installed. Makes sense I think
lucaceresoli has quit [Remote host closed the connection]
oobitots has joined #yocto
mckoan is now known as mckoan|away
<moto-timo>
RP: I think that's the first time buildtools job passed?
<moto-timo>
RP: oh nevermind
AKN has quit [Ping timeout: 240 seconds]
<michaelo>
qschulz: 3.4.1 conf.py looks good to me. It's just that 3.4.2 conf.py sets the current version to 3.4.1 instead of 3.4.2. Shall we patch this if we don't want to move the tag?
kevinrowland has joined #yocto
florian_kc has joined #yocto
oobitots has quit [Quit: Client closed]
florian_kc has quit [Ping timeout: 240 seconds]
florian has quit [Quit: Ex-Chat]
<RP>
moto-timo: key test is the epoxy sdk test. Looks like build-appliance has an issue though. The pip calls there were always dubious
<moto-timo>
RP: yeah... pip3 not found... is that native? My naïve bootstrapping only unzips the wheel... time to fix the ${bindir} installs
<RP>
moto-timo: yes, that calls pip3 native
<kergoth>
Anyone around that's building container images with bitbake? Wondering about self-hosting in-container oe builds
<moto-timo>
RP: another workaround would be to patch it to call "nativepython3 -m pip" but the right thing is to install to ${bindir} anyway
<moto-timo>
RP: let me hack at the bootstrapping
<RP>
moto-timo: we may just need to inject the right magic file in there as it is probably generated
<moto-timo>
RP: I'm pretty sure we just need to mv ${D}${PYTHON_SITEPACKAGES_DIR}/bin/* ${D}${bindir}/
<moto-timo>
RP: might also need to do that for ${datadir} or ${mandir}
<moto-timo>
hacking ${bindir} now
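The move being described, as a hedged do_install:append sketch (untested; paths taken from the discussion above):

    do_install:append() {
        # relocate scripts the wheel unpacked under site-packages/bin into ${bindir}
        if [ -d ${D}${PYTHON_SITEPACKAGES_DIR}/bin ]; then
            install -d ${D}${bindir}
            mv ${D}${PYTHON_SITEPACKAGES_DIR}/bin/* ${D}${bindir}/
            rmdir ${D}${PYTHON_SITEPACKAGES_DIR}/bin
        fi
    }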
<moto-timo>
ERROR: Variable BB_ENV_EXTRAWHITE has been renamed to BB_ENV_PASSTHROUGH_ADDITIONS
<moto-timo>
do i just need to blow away tmp?
<RP>
moto-timo: no, rebuild your environment most likely
<moto-timo>
RP: ok, thanks
<RP>
moto-timo: mandir will be irrelevant for -native
<RP>
moto-timo: good news is testsdk is working this time
<moto-timo>
\o/
<moto-timo>
RP: good point
Tokamak_ has quit [Ping timeout: 256 seconds]
frieder has quit [Remote host closed the connection]
Tokamak has joined #yocto
<RP>
michaelo, qschulz: the rsync issue was a protection against breaking the site until the scripts were fixed. That shouldn't be there now so just waiting on a free worker to run another docs build
Tokamak_ has joined #yocto
gsalazar has quit [Ping timeout: 245 seconds]
Tokamak has quit [Ping timeout: 256 seconds]
<moto-timo>
RP: well that was wishful thinking... as you suspected it is generated... still working on it
<moto-timo>
RP: we could also use host pip to install, but that feels wrong
<RP>
moto-timo: I suspect it is just a few lines so maybe just handcode it in?
<moto-timo>
RP: exactly
Tokamak_ has quit [Ping timeout: 256 seconds]
gsalazar has joined #yocto
goliath has joined #yocto
Tokamak has joined #yocto
gsalazar has quit [Ping timeout: 240 seconds]
<kergoth>
RP: Would it be possible to put something together about what you and other Yocto tech decision makers would like to see out of the project members? Where would resources best be used/allocated? What's the best way we can help you? Allocate dev time to work on yocto project dev tasks? Etc
jmiehe has quit [Ping timeout: 250 seconds]
<michaelo>
RP, qschulz: happy to see that the docs are apparently getting generated correctly again.
<michaelo>
and the switchers.js is correct everywhere too!
florian_kc has joined #yocto
<michaelo>
Thanks to this, the out-of-date version warnings are gone too.
<kanavin_>
(the reverted patch is not coming back, I only kept it until I stash it elsewhere for history)
<kergoth>
huh, like cross-canadian, but to run on MACHINE rather than SDKMACHINE?
<kanavin_>
kergoth, YES
<moto-timo>
RP: also fixing the selftest failures like python3-scons-native in maintainers.inc and python3-flit-core not having HOMEPAGE
Tokamak_ has joined #yocto
Tokamak has quit [Ping timeout: 240 seconds]
prabhakarlad has quit [Quit: Client closed]
<fray>
kanavin_ what are you doing w/ the gcc-arm-none-eabi? Baremetal stuff?
<kanavin_>
fray, I hand it over to a client who's paying for Linutronix providing the toolchain :)
<fray>
but what is it for? Linux, baremetal, another OS? With the 'none', I'm assuming baremetal targeting
<fray>
I would like to see a 'baremetal' or 'baremetal-newlib' distro defined in standard Yocto Project.. since I'm seeing a lot of need for this kind of stuff
<kanavin_>
fray, setting up its actual usage is on them, I'm not sure I'm even going to see how they use it, but let me check.
<fray>
being able to generate toolchains with meta-toolchain for baremetal is something I do regularly.. targeting newlib (with a request for 'newlib-nano' on the r5), for a9, aarch64 (various), r5 and microblaze
<kanavin_>
fray, I checked, they seem to be using ecos+newlib
GillesM has joined #yocto
<fray>
ok.. so effectively newlib based baremetal
<kanavin_>
fray, they say, just give us the toolchain, we'll take care of the sysroot
<khem>
ecos uses libgloss + newlib yes
<fray>
took me a long time to understand this, but there are two parts to newlib.. newlib (the C library) and libgloss [or an alternative] that provides the OS/hardware interfaces
<khem>
right
<fray>
So a proper baremetal + newlib distro would use newlib and libgloss (while allowing alternative providers for libgloss)
<khem>
you can classify baremetal as semihosted and purely baremetal
<kanavin_>
fray, I briefly looked into building cross-canadian newlib in the context of poky, but as the customer didn't ask for it, I didn't push further.
<fray>
it's pretty easy to configure, but would be better if we had a more standard YP configuration to use for this, IMHO
<fray>
without building it, you need to provide the binaries the customer wants, or gcc (libgcc) is incomplete
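oe-core already carries building blocks for what fray describes; a minimal, untested local.conf sketch of a newlib-based configuration (machine choice illustrative):

    TCLIBC = "newlib"        # or "baremetal" for builds with no libc at all
    MACHINE = "qemuarm"      # illustrative
    # then e.g.: bitbake meta-toolchain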
<khem>
it's gcc's muddled build architecture; with llvm/clang it's quite easy: you build one compiler for all, then you can build runtimes as normal target packages
<kanavin_>
yes, you can't compile anything useful without libc
<kanavin_>
khem, ack. it's an awful mess of autoconf, and our recipes reflect that :(
<kanavin_>
GSOC proposal: rewrite gcc build system in meson, split the pieces into independent parts!
<khem>
heh yeah, it will be like taking GCC's identity away 🙂 but good idea
<khem>
IMO the build is one part of it, but the s/w arch of gcc is also due for an overhaul
<kanavin_>
khem, I once suggested meson to GNU gettext maintainer
<kanavin_>
their answer? 'autotools have been around since I was young, meson is only a fad'
<khem>
yes, I am not surprised :).
<kanavin_>
and so we're still stuck with 5 minutes of configure and two minutes of install for something that should take seconds
<khem>
yeah, but to be fair I think OE does something unusual too, where it runs autoreconf, which is generally a maintainer-mode operation
<kanavin_>
khem, autoreconf is not where the time goes in gettext
<kanavin_>
khem, it's drowned in recursive ./configure runs, each of which tests everything under the sun
<khem>
yes but it just makes a bad problem worse
<kanavin_>
... serially
<RP>
kergoth: I can try but it is hard to give a very specific list. It is really a question of how the project can cover basic core needs. Recent examples - inclusive language or, more specifically, bitbake variable rename support, or the python changes to adapt to packaging changes upstream, or the changes coming to the CVE format, or help getting patchtest back online, or.... etc
<rburton>
gettext is my pathological test for autotools, autoreconf is slow but its the execution which takes FOREVER as alex says
<rburton>
amazingly it does already re-use a config cache between the runs
<rburton>
imagine how much slower it would be if it didn't!
<khem>
electricfense guys did a build analysis of cross builds using yocto and I think do_configure averages around 25% of build time
<fray>
this is the reason we generate and store the site.conf files
<RP>
kergoth: I did try and create the "big ticket" item list with the future directions topics
<khem>
and in do_compile 30% of time was spent in preprocessor
<fray>
without that it takes a lot more time for do_configure steps.. it was always intended to add more to the site cache, but nobody ever expanded it
<kanavin_>
rburton, and for what? a target-independent library and some utilities for UI translation?
<rburton>
there was an attempt at a minimal gettext clone
<rburton>
i wonder if that's finished
<khem>
BSDs do the caching stuff for some packages, it needs proper care and feeding otherwise it becomes stale
<kanavin_>
fray, nowadays autotools is pretty much abandoned by everyone except GNU
<rburton>
khem: as gettext is five nested configures the first one sets a cache that the others can re-use, so that's safe at least
<kanavin_>
freedesktop and gnome moved to meson, and never looked back
<fray>
kanavin_: people keep saying that, yet every time someone asks me to integrate something new, it seems to be autoconf-based unless it's python-based
<kanavin_>
something legacy you mean ;)
<fray>
it all depends on the part of the system you work with. core-os components still use autoconf a ton.. higher up in the stack (graphics) meson is used more and more
<rburton>
our firmware is all moving to cmake which i'm in a mixed mood about
<rburton>
i mean, it's better than bare makefiles of doom
<fray>
I work on base OS integration. System services, firmware loaders, etc etc..
<fray>
cmake is used a lot at (former) xilinx, I'm not a fan.. it's horribly complex to do simple things
<moto-timo>
cmake is just another tool to allow folks to write bad things
<fray>
yup
<rfs613>
and building cmake itself takes forever
<kanavin_>
cmake is something the c++ crowd likes a lot for some unknown reason
<moto-timo>
"works for me on this one machine that I have in the basement"
<fray>
cmake also makes it 'even easier' for people to ignore cross compiling..
<JaMa>
and setting CMake policies to keep it as backwards compatible as possible -> more complexity
<fray>
the number of times in the last 2 years I've had to convince people NOT to hard-code a magic cross-compiler into their cmake files is mind-boggling to me..
<khem>
gn + ninja is best when it comes to speed and optimal use of system resources, but it has a mind of its own
<fray>
with GNU make, I have the same argument to make -- but it seems to be "easier" to just fix..
<moto-timo>
RP: flit_core has an update from 3.6.0 to 3.7.1 but I think I am going to hold off on that until we get the current mess sorted
<rburton>
khem: gn is the worst of all!
<JaMa>
after bazel.. :)
<rburton>
khem: entirely because the distribution model appears to be 'your source ships a binary of gn'
<khem>
rburton: look at immense research chromium team has done on existing solutions
* moto-timo
wonders if all wheels have _ and never - in the PN
<rburton>
are you suggesting that chromium is easy to build?
<JaMa>
it builds with python3native now, finally some nails in meta-python2 coffin
<moto-timo>
that's a GoodThingTM
amitk_ has quit [Ping timeout: 240 seconds]
rob_w has quit [Quit: Leaving]
<khem>
rburton: the change looks good
florian_kc has quit [Ping timeout: 250 seconds]
<khem>
rburton: re chromium, it's not at all easy and that was not the point; it was about building complex s/w without much overhead
<khem>
fray: for small projects make is ok, ninja really shines with large projects
pasherring has quit [Read error: Connection reset by peer]
pasherring has joined #yocto
<RP>
moto-timo: you're handling the selftest maintainer issues? so I can sort out my tweaks in -next and post them and between us we should be good for a better test run?
<moto-timo>
RP: exactly
pasherring has quit [Remote host closed the connection]
<moto-timo>
RP: at this point I am creating patches on top of master-next (just waiting for build-appliance-image to finish)
pasherring has joined #yocto
<moto-timo>
RP: I can send the patches now... I'm not clear if the shebang in pip-native should be #!/usr/bin/env python3 or nativepython3.
<RP>
moto-timo: the latter I think
<moto-timo>
RP: ok... I think I am starting to get it
<RP>
michaelo: qschulz: happy to see the docs did build cleanly again :)
<moto-timo>
\o/
<RP>
moto-timo: the way to think about it is that the tool needs to use the right python without having the pythonnative inherit
<moto-timo>
RP: and if it is -native that means nativepython3
<RP>
moto-timo: yes, and tools would normally be natives
<RP>
moto-timo: I only realise this having spent the day arguing with it :)
<moto-timo>
RP: sadly my brain shut down on me yesterday and I'm barely recovering
<moto-timo>
too many hours in too few days
<moto-timo>
not that I was being efficient
<kanavin_>
khem: I do not think running specific sub-gcc configure scripts, e.g. gcc-source/libgcc/configure, gcc-source/gcc-runtime/configure etc. is correct anymore. If you check the logs, you'll see they all complain about a ton of bogus options that only work with the top level configure. It's a wonder we made it this far :-/ I'll look into using the top level gcc configure in all cases.
<kanavin_>
and because we do cd ${B}/libgcc first, these aren't picked up by insane :(
(insane looks only at ${B}/config.log)
(insane looks only at ${B}/config.log
<RP>
kanavin_: that insane check was always a best effort, not built to handle the complex cases
<kanavin_>
RP: for gcc-arm-none-eabi I went back to how it used to be many years ago: let gcc-cross build and install libgcc as well, then change libgcc to handle only the packaging for the target
<RP>
kanavin_: yes, the work put in to stop rebuilding gcc-cross all the time was pretty pointless so it would make sense to rip it out and go back to that
<khem>
the independent configures are done because gcc has successfully made them build independent
<RP>
kanavin_: if we're not calling the sub-configures correctly, we should probably look at passing the options to them correctly
<khem>
if you use the top-level configure, it will take us back to the monolithic intertwined checks and catch-22 situations that we did away with via 3-phased gcc builds in the past, but if you find it works better, all ears for it
<khem>
I guess it will be fine to take a bit of a hit to accommodate them if they don't have side effects
<kanavin_>
those are for the top level configure only
<RP>
kanavin_: So great, you found a bug. It is surprising it works, but it hasn't seemed to do too badly over the last few years
<RP>
kanavin_: I understand the problem but I'm going to get annoyed if you keep just telling me how broken it all is. You have no idea how much work it was to actually get some of this stuff to the point where it works for most of the current use cases
<RP>
I'd actually love to get back and continue to improve that but instead I get to do other things
rsalveti has quit [Quit: Connection closed for inactivity]
<khem>
🙂
<RP>
kanavin_: and just to be clear, it isn't actually the same configure options that are passed to top level gcc-cross build, they are different since one is a cross recipe and one is a target recipe. I agree there are some extra options which are passed and which are ignored
<RP>
kanavin_: passing target flags in a cross recipe is actually a big problem and shouldn't be done
<moto-timo>
RP: I decided to just pretend I was pip
<dvorkindmitry>
is there a line I can write in my img.bb file?
<RP>
moto-timo: fair enough. I've started a build
<dvorkindmitry>
i know about meta-qt5, it looks like I have to write all qt modules one by one in DEPENDS += ""...
<moto-timo>
you could probably leverage bitbake-layers show-recipes -l meta-qt5 somehow to script that generation into a packagegroup
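A rough shell sketch of that idea (the -l/--layer filter and the output format vary between releases, so treat this as a starting point only):

    # list recipe names provided by meta-qt5, e.g. to seed a DEPENDS list
    bitbake-layers show-recipes --layer meta-qt5 | sed -n 's/^\([^ ].*\):$/\1/p'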
<dvorkindmitry>
there is a packagegroup with all modules enumerated in the qt layer, but they are listed in the RDEPEND_ variable
<dvorkindmitry>
how can I make it "DEPEND" with only one line?
<JaMa>
the list of recipes probably won't change for the dead-end meta-qt5, so just list them in DEPENDS manually
<moto-timo>
unfortunately, meta-qt6 doesn't follow the oe branch naming, so it is not in the layer index
<JaMa>
not sure what is worse; with meta-qt5 I was following the naming, and luckily the release cadence of OE and Qt matches, so there is a separate branch for each Qt release, but then people mix different branches because of that (e.g. using 5.12 from meta-qt5/warrior with a dunfell build and so on)
osama1 has quit [Ping timeout: 240 seconds]
<JaMa>
now with incompatible metadata changes it gets a lot worse from this perspective (so I have jansa/warrior-overrides branch as well to be usable with honister and newer)
<JaMa>
but I'm not going to support every sensible combination of Qt version / yocto release
<RP>
I am worried about how this renaming will work out
<JaMa>
luckily I'm no longer maintaining meta-ros :)
<JaMa>
but compared to the overrides syntax change, it's relatively limited, at least in the layers I care about
<JaMa>
but with meta-qt6, e.g. the 6.2 branch, I was able to get the overrides syntax change merged (and support for zeus dropped), while with variable renames they would probably need to add some 6.2-kirkstone branch (and who knows how well they would keep it in sync with 6.2)
<JaMa>
but if non-spdx license identifiers are turned into a warning soon (even after kirkstone is released), then maybe we should backport all the necessary mappings to dunfell (iirc something was missing when I ran the conversion script on the dunfell branch)
lucaceresoli has quit [Quit: Leaving]
GNUmoon has quit [Ping timeout: 240 seconds]
<RP>
JaMa: I think those changes might not be easily backported to dunfell as it was the "+" handling
<RP>
(vs -or-later)
mvlad has quit [Remote host closed the connection]
<khem>
kirkstone would mean new ports mostly; it's how it is turning out for RDK
<khem>
scripts are helping but people have done some really innovative stuff internally
lettucepunch[m] has joined #yocto
<khem>
🙂
<RP>
khem: I dread to think :)
GNUmoon has joined #yocto
<JaMa>
RP: ah :/
ar__ has quit [Ping timeout: 272 seconds]
florian_kc has joined #yocto
goliath has quit [Remote host closed the connection]
<RP>
There is a downside to the renames: once renamed, new bitbake is happy, but older layers can silently break if you still claim compatibility with them
<RP>
i.e. we don't detect the new names in the old bitbake