qschulz has quit [Read error: Connection reset by peer]
qschulz has joined #yocto
locutusofborg_ has joined #yocto
LocutusOfBorg has quit [Ping timeout: 245 seconds]
RobertBerger has joined #yocto
rber|res has quit [Ping timeout: 252 seconds]
hpsy has joined #yocto
hpsy1 has quit [Ping timeout: 258 seconds]
sakoman has quit [Quit: Leaving.]
<zeddii>
RP: *very* little, but there was an inotify difference and an rwsem difference, both small, but either could absolutely account for the hanging test.
prabhakarlad has quit [Quit: Client closed]
pidge_ has joined #yocto
pidge has quit [Read error: Connection reset by peer]
elfenix|ghost has joined #yocto
rber__ has joined #yocto
Lihis_ has joined #yocto
x0n^^ has joined #yocto
dev1990_ has joined #yocto
pidge_ has quit [Ping timeout: 250 seconds]
pidge has joined #yocto
otavio_ has joined #yocto
RobertBerger has quit [*.net *.split]
nerdboy has quit [*.net *.split]
otavio has quit [*.net *.split]
Lihis has quit [*.net *.split]
dev1990 has quit [*.net *.split]
ant__ has quit [*.net *.split]
x0n^ has quit [*.net *.split]
elfenix has quit [*.net *.split]
Lihis_ is now known as Lihis
nerdboy has joined #yocto
ant__ has joined #yocto
manuel1985 has quit [Quit: Leaving]
boo has quit [Ping timeout: 252 seconds]
leonanavi is now known as leon-anavi
x0n^^ has quit [Remote host closed the connection]
x0n^^ has joined #yocto
Vonter has quit [Ping timeout: 250 seconds]
fray has quit [Ping timeout: 252 seconds]
Ch^W has quit [Ping timeout: 264 seconds]
fray has joined #yocto
Ch^W has joined #yocto
leon-anavi has quit [Remote host closed the connection]
pidge has quit [Remote host closed the connection]
leon-anavi has joined #yocto
pidge has joined #yocto
Guest38 has joined #yocto
chrfle has joined #yocto
Guest38 has quit [Quit: Client closed]
rob_w has joined #yocto
frieder has joined #yocto
zyga-mbp has joined #yocto
pidge has quit [Read error: Connection reset by peer]
pidge has joined #yocto
LetoThe2nd has joined #yocto
<LetoThe2nd>
yo dudx
mckoan|away is now known as mckoan
<mckoan>
good morning
gsalazar has joined #yocto
zyga-mbp has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
zyga-mbp has joined #yocto
rber__ has quit [Quit: Leaving]
rber|res has joined #yocto
RobertBerger has joined #yocto
RobertBerger has quit [Remote host closed the connection]
rber|res has quit [Remote host closed the connection]
rber|res has joined #yocto
creich has joined #yocto
zyga-mbp has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
pidge has quit [Read error: Connection reset by peer]
pidge has joined #yocto
lexano has joined #yocto
ant__ has quit [Ping timeout: 244 seconds]
creich has quit [Quit: Leaving]
creich has joined #yocto
creich has quit [Client Quit]
creich has joined #yocto
prabhakarlad has joined #yocto
creich has quit [Client Quit]
zyga-mbp has joined #yocto
creich has joined #yocto
goliath has joined #yocto
rber|res has quit [Remote host closed the connection]
rber|res has joined #yocto
creich has quit [Client Quit]
creich has joined #yocto
zyga-mbp has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
zyga-mbp has joined #yocto
ilunev has joined #yocto
zyga-mbp has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
beneth has joined #yocto
pbaptista has joined #yocto
<qschulz>
greetings
<mckoan>
qschulz: hi
Vonter has joined #yocto
pbaptista has quit [Quit: Client closed]
shoragan|m has quit [Remote host closed the connection]
janvermaete[m] has quit [Remote host closed the connection]
asus_986_gpu[m] has quit [Remote host closed the connection]
Pierre-jeanTexie has quit [Remote host closed the connection]
Jari[m] has quit [Remote host closed the connection]
Spectrejan[m] has quit [Remote host closed the connection]
dwagenk has quit [Remote host closed the connection]
AlessandroTaglia has quit [Remote host closed the connection]
shoragan[m] has quit [Write error: Connection reset by peer]
alex88[m] has quit [Remote host closed the connection]
ndec[m] has quit [Remote host closed the connection]
Saur[m] has quit [Remote host closed the connection]
ejoerns[m] has quit [Remote host closed the connection]
khem has quit [Remote host closed the connection]
Andrei[m] has quit [Remote host closed the connection]
kayterina[m] has quit [Read error: Connection reset by peer]
jordemort has quit [Remote host closed the connection]
barath has quit [Remote host closed the connection]
Emantor[m] has quit [Remote host closed the connection]
cody has quit [Remote host closed the connection]
pbaptista has joined #yocto
Andrei[m] has joined #yocto
kayterina[m] has joined #yocto
jordemort has joined #yocto
janvermaete[m] has joined #yocto
Saur[m] has joined #yocto
shoragan[m] has joined #yocto
Jari[m] has joined #yocto
Emantor[m] has joined #yocto
cody has joined #yocto
ndec[m] has joined #yocto
Pierre-jeanTexie has joined #yocto
khem has joined #yocto
ejoerns[m] has joined #yocto
barath has joined #yocto
shoragan|m has joined #yocto
AlessandroTaglia has joined #yocto
asus_986_gpu[m] has joined #yocto
alex88[m] has joined #yocto
Spectrejan[m] has joined #yocto
dwagenk has joined #yocto
pbaptista19 has joined #yocto
pbaptista has quit [Ping timeout: 250 seconds]
bps has joined #yocto
zyga-mbp has joined #yocto
davidinux has quit [Ping timeout: 264 seconds]
Ileana has joined #yocto
davidinux has joined #yocto
BCMM has joined #yocto
zyga-mbp has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
zyga-mbp has joined #yocto
kuzz has joined #yocto
zyga-mbp has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<ilunev>
RP: still struggling with that mass hash mismatch issue. Found out that the first clean bitbake launch emits those errors while the second one seems to work fine. Also, recipes are reparsed on every bitbake call; the cache seems to be invalidated. Tried to diff bitbake -e for a package in those two cases. Vars that differ are DATETIME, TIME, IMAGE_NAME (?), IMAGE_VERSION_SUFFIX and LOGFIFO (which is probably OK).
warthog9 has quit [Ping timeout: 264 seconds]
chrfle has quit [Ping timeout: 245 seconds]
ant__ has joined #yocto
ant__ is now known as ant
ant is now known as Guest6520
Guest6520 has quit [Client Quit]
<RP>
ilunev: that doesn't really make much sense to me. You would be far better off picking a simple early target like quilt-native and seeing whether you can get hash mismatches with that. Look at the output of bitbake-dumpsig for files in tmp/stamps/XXX/quilt-native
<RP>
ilunev: to debug it you really need the two sigdata files for the different hashes, then it becomes easy
<ilunev>
Finally got it. Lots of noise when comparing with diff, some set values are reordered
<RP>
ilunev: try bitbake-diffsigs
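A minimal sketch of the workflow RP is describing, assuming a dunfell-style stamps layout (paths illustrative):

    # dump the inputs recorded in one task signature
    bitbake-dumpsig tmp/stamps/*/quilt-native/*.do_patch.sigdata.*

    # with two sigdata files for the same task, diff them directly
    bitbake-diffsigs tmp/stamps/*/quilt-native/*.do_configure.sigdata.*

    # or let the tool find the latest two signatures for a task itself
    bitbake-diffsigs -t quilt-native do_configure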
x0n^^ has quit [Remote host closed the connection]
x0n^^ has joined #yocto
<ilunev>
RP: bitbake-diffsigs says quilt-native:do_patch has changed yet there is only a single sigdata file for it (same with unpack). "Unable to find matching sigdata" with a hash that is indeed absent in stamps/.../quilt-native
<ilunev>
yet there are two do_compile, do_configure etc
<ilunev>
so it seems like fetch and unpack are going fine, and something is wrong with patch (just a single stamp while the following do_configure refers to another hash). And starting with do_configure it diverges.
mccc has quit [Read error: Connection reset by peer]
mccc has joined #yocto
<RP>
ilunev: finding the diverging task is a good start. Are there two sigdata files for that task?
<ilunev>
RP: Just a single one. And the following do_configure in diffsigs complains about the missing file
<RP>
ilunev: that is where the challenge is then, getting something for the other hash to diff to :/
<ilunev>
RP: I'm thinking about updating my dunfell
warthog9 has joined #yocto
<ilunev>
RP: updated dunfell did emit the same warning for quilt-native, but it does not do rebuilds etc, so there's just a single stamp for every task now. Yet the warnings remain.
<ilunev>
and it does not seem to fully reparse recipes now
kpo_ has joined #yocto
<ilunev>
the first of the warnings is about do_patch. and that's definitely before even do_fetch is actually executed
<ilunev>
There were 509 ERROR messages shown, returning a non-zero exit code, for just a quilt-native build :-)
<ilunev>
RP: when I run -Snone as the warning suggests, I get ERROR: Bitbake's cached basehash does not match the one we just generated (/data/yocto/tmp/yocto-rpi4-builder/build/../poky/meta/recipes-devtools/quilt/quilt-native_0.66.bb:do_patch)!
<RP>
ilunev: that sounds like you're hitting this during parsing then and I don't know why, we don't see that anywhere else
zyga-mbp has joined #yocto
<RP>
ilunev: if you've multiple layers and configuration involved I'd probably try removing them and adding back step by step to isolate what is causing it
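A sketch of that reproduce-and-isolate loop, using the quilt-native target from earlier (layer handling illustrative):

    # regenerate task signatures without building; this is the expanded
    # form of the -Snone hint in the warning
    bitbake quilt-native -S none

    # then comment entries out of conf/bblayers.conf (and any INHERIT
    # lines in conf/local.conf) one at a time, re-running the command
    # above until the basehash error disappears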
<ilunev>
RP: switched off Mender stuff and it went away...
<RP>
ilunev: have you tried asking the mender people?
zyga-mbp has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<ilunev>
RP: yes, trying :-)
chrfle has joined #yocto
zyga-mbp has joined #yocto
otavio_ is now known as otavio
otavio has quit [Quit: leaving]
otavio has joined #yocto
<ilunev>
RP: Ok, looks like I finally learned that there's a difference between "inherit ..." and INHERIT += "...". I used the former in local.conf, and when I switched to INHERIT it all went fine
zyga-mbp has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<RP>
ilunev: interesting. I wonder why that didn't throw a syntax error
<RP>
ilunev: which class was that with?
<ilunev>
mender-specific one, "mender-full" to be precise.
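The distinction ilunev ran into, sketched with the class named in the conversation:

    # in a recipe or class file, the directive form is correct:
    #   inherit mender-full
    # in configuration files such as local.conf, classes must be pulled
    # in through the INHERIT variable instead:
    INHERIT += "mender-full"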
<RP>
zeddii: I tried reverting the aufs fix and running ltp locally and I see it stuck in D state for some tests with kernel backtraces in the logs. The latest patch from you didn't do that
BCMM has joined #yocto
zyga-mbp has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<janvermaete[m]>
Does somebody know how to keep the .config of U-Boot and Linux in the artifacts (build/deploy/images...)?
<janvermaete[m]>
when the CI doesn't have to build, e.g., the kernel, there will be no .config in the build directory.
<RP>
janvermaete[m]: extend do_deploy to place it there?
<janvermaete[m]>
So, I have to ask Yocto itself to create and store it.
<janvermaete[m]>
but I don't know how
<janvermaete[m]>
Could be, I will check. But I thought I wouldn't be the first one to need this.
<janvermaete[m]>
So I would believe it's already there. But I could not find it.
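A minimal sketch of RP's do_deploy suggestion as a linux-yocto bbappend; the file name and deployed name are assumptions:

    # linux-yocto_%.bbappend: publish the kernel .config alongside the
    # other artifacts under tmp/deploy/images/${MACHINE}/
    do_deploy_append() {
        install -m 0644 ${B}/.config ${DEPLOYDIR}/kernel-config-${MACHINE}
    }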
<RP>
zeddii: Looking at the diff this doesn't make sense since nothing that is changed is built :/
<RP>
zeddii: what was different in my test was the kernel patch level version so I'm starting to suspect something in that
<zeddii>
meaning .42 versus .41 ?
<RP>
zeddii: yes
<RP>
zeddii: I was testing b3b1627391bf18358547d84e4bb4b53438d5cf98 vs bb3f40e801fed14f9233749f7eaa27b105979059
<zeddii>
and that was the RCU stall you mentioned first ? (in the first, and not the second) ?
gsalazar has quit [Ping timeout: 244 seconds]
<RP>
zeddii: rcu stall was in the first. hard to tell if it is related or not
<zeddii>
yah. I was thinking the same (for the rcu).
<RP>
zeddii: starting to think we should upgrade to .42 and see where that gets us
<zeddii>
I can send that one right away, I did the upgrade locally. I can send it in about 10 mins. Just have to make sure I don't cross up the SRCREVs and LINUX_VERSIONS.
boo has joined #yocto
<RP>
zeddii: the aufs "fixups" look harmless enough to merge onto the branch too. Looks to be a couple of potential bugs there but only if you build aufs which we don't for the autobuilder
<zeddii>
I was thinking the same. Let's reset it to stock aufs5, plus .42, and then reassess the behaviour of that.
<RP>
zeddii: I do think the ltp stuff does effectively reproduce this issue if you watch the dmesg output and use that as an indicator of issues
<RP>
(and ignore the OOMs, it's the filesystem null pointer dereferences which are nasty)
<RP>
zeddii: spoke too soon, it just blew up with the aufs re-import :/
<zeddii>
the ltp hang, right ?
<RP>
zeddii: yes, ltp kernel null pointer dereferences in the fs code which can hang
<RP>
zeddii: so the issue is there with base-aufsv2 and it doesn't always happen as this did pass tests earlier
<zeddii>
which makes things murkier. It could always be there, just a timing window is changing.
<zeddii>
in that case, there's really no sense going with it in my update.
<zeddii>
let me do .42 and just push the aufs5 revert along with it.
<RP>
zeddii: right, sorry to muddy the waters :(
<RP>
zeddii: I'm just sharing data points as I see them, wish they made more sense
<zeddii>
we know that we may just be masking whatever it is, but if m1 goes out, we'll get more runs on the changes and can watch to see if it happens with aufs reverted
<zeddii>
and to remind me .. that hang, if I run the test locally a few times, it can happen, right ? so that might be what I need to do to sort it out.
<RP>
zeddii: seems to. I'm trying to see if I can isolate it to a faster subset of ltp tests
<RP>
zeddii: that is the full thing, working on a reduced set. It's the unshare test that I've seen hang twice today
<zeddii>
ok. i can definitely try some runs with that. I'll try and find a machine I can let it churn on in the background.
<zeddii>
I'll do the .42 and aufs5 revert, and then let's see if it is free from ltp hangs through the m1 work.
<zeddii>
and then I'll focus on 5.13, see if I can reproduce it on either 5.10 or 5.13 + aufs, and make the call to just drop it completely.
<zeddii>
and I'll contact the developer, if I can get a decent reproducer.
<zeddii>
he's still updating with every kernel release, so it is active.
Guest5636 has joined #yocto
Guest5636 is now known as ncaidin_lf
gsalazar has joined #yocto
<RP>
zeddii: makes sense. I'm getting nowhere fast trying to make a smaller reproducer :/
<moto-timo>
JPEW: where does gnome-shell-gsettings come from? getting no provider...
<RP>
moto-timo: meta-gnome?
* boo
tilts over the garden-gnome and looks underneath
boo is now known as paulg
frieder has quit [Ping timeout: 250 seconds]
<paulg>
irc shuffle fallout will be haunting us for months...
ecdhe_ has joined #yocto
FO3 has joined #yocto
kergoth` has joined #yocto
rewitt1 has joined #yocto
droman has joined #yocto
hpsy has quit [Ping timeout: 244 seconds]
<moto-timo>
maybe I’m missing a distro feature…
BCMM_ has joined #yocto
rob_w has quit [Quit: Leaving]
v0n has joined #yocto
elfenix|ghost is now known as elfenix
sakoman has joined #yocto
BCMM has quit [*.net *.split]
FO2 has quit [*.net *.split]
vdl has quit [*.net *.split]
ecdhe has quit [*.net *.split]
stkw0 has quit [*.net *.split]
rewitt has quit [*.net *.split]
kergoth has quit [*.net *.split]
kergoth` is now known as kergoth
<JPEW>
moto-timo: Sorry, I have a local patch to make it a separate package of gnome-shell
<JPEW>
I forgot to mention that
pbaptista19 is now known as pbaptista
<moto-timo>
JPEW: that makes sense... and I suspected that might be the case
<JPEW>
I can push it as an RFC quick. It's pretty simple
<moto-timo>
sure
<moto-timo>
I'm about to be afk for a few days, so it might wait until the weekend
<JPEW>
moto-timo: sent
mckoan is now known as mckoan|away
hpsy has joined #yocto
hpsy has quit [Client Quit]
hpsy has joined #yocto
<jsbronder>
I'm having issues building the esdk in dunfell. It looks like the files in conf/multiconfig are not getting copied to ${WORKDIR}/sdk-ext/image/tmp-renamed-sdk/conf/ but BBMULTICONFIG is getting set in ${WORKDIR}/sdk-ext/image/tmp-renamed-sdk/conf/local.conf
<jsbronder>
The actual error from bitbake is: ERROR: ParseError at /home/jbronder/st/veo/head/tmp/build-glibc/work/intel_skylake_64-oe-linux/root-image/1.0-r0/sdk-ext/image/tmp-renamed-sdk/layers/openembedded-core/meta/conf/bitbake.conf:765: Could not include required file conf/multiconfig/mfg.conf
rber|res has quit [Ping timeout: 244 seconds]
<jsbronder>
I'm not trying to use one of the multiconfigs, just the default.
<RP>
jsbronder: esdk will copy your current settings so if you have it set, it will try and copy that into the eSDK
ncaidin_lf has quit [Quit: ncaidin_lf]
<RP>
jsbronder: we probably need to add a copy of those files
michael has joined #yocto
<jsbronder>
RP: I wasn't sure if they were intentionally left out. Should they be getting copied in populate_sdk_ext.bbclass where local.conf is written?
rber|res has joined #yocto
ilunev has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
mcc_ has joined #yocto
Ileana has quit [Quit: Client closed]
michael_ has joined #yocto
RobertBerger has joined #yocto
rber|res has quit [Read error: Connection reset by peer]
jbronder has joined #yocto
halstead_ has joined #yocto
<jbronder>
I suppose I can just work around this by adding BBMULTICONFIG to SDK_LOCAL_CONF_BLACKLIST for now.
wesm1 has joined #yocto
<RP>
jsbronder: yes, somewhere like that. As you say, blacklisting may also work
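The blacklist workaround jsbronder names, sketched for a dunfell local.conf (the variable exists under this name in dunfell's populate_sdk_ext.bbclass):

    # keep BBMULTICONFIG out of the eSDK's generated local.conf so it no
    # longer references conf/multiconfig/*.conf files that were not copied
    SDK_LOCAL_CONF_BLACKLIST_append = " BBMULTICONFIG"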
chrfle_ has joined #yocto
alejandr1 has joined #yocto
<JPEW>
RP: I added the source mapping JSON to the recipe package data, and it adds several 100 KB to each package file
roussinm1 has joined #yocto
goliath_ has joined #yocto
paulg_ has joined #yocto
<JPEW>
err, not the package itself, the package data file for the recipe
tperrot_ has joined #yocto
<RP>
JPEW: hmm. splitting it out to a separate file would probably be nicer (still within packagedata)?
<JPEW>
RP: Probably, just so that we don't have to load it unless wanted
<JPEW>
which is basically what meta-doubleopen is doing today :)
<RP>
JPEW: exactly. Can we save any size by getting rid of path prefixes?
<JPEW>
yes
dti has joined #yocto
<RP>
JPEW: well, putting it into packagedata should mean we have the data in a common place for anyone to use
zeddiii has joined #yocto
<RP>
question is whether we want the hit on space usage, but I suspect it is probably inevitable
<JPEW>
The paths for the processed files aren't the main consumer of space, it's the dependency files (which already use the "/usr/src/debug" prefix)
<JPEW>
RP: Ya, you have to store it somewhere
hpsy has quit [Ping timeout: 264 seconds]
<JPEW>
The only "optimization" would be to make it optional, or use some database format with fast lookup so you don't have to load the whole JSON in memory
warthog19 has joined #yocto
michael has quit [*.net *.split]
paulg has quit [*.net *.split]
chrfle has quit [*.net *.split]
warthog9 has quit [*.net *.split]
mccc has quit [*.net *.split]
goliath has quit [*.net *.split]
roussinm has quit [*.net *.split]
wesm has quit [*.net *.split]
dtometzki has quit [*.net *.split]
alejandrohs has quit [*.net *.split]
jsbronder has quit [*.net *.split]
zeddii has quit [*.net *.split]
halstead has quit [*.net *.split]
tperrot has quit [*.net *.split]
<RP>
JPEW: I think if you're using it, the hit is ok?
<RP>
as for optional, that is asking for trouble with testing :(
<JPEW>
Ya, I think several 100KB isn't too bad in that regard
<JPEW>
Right, I'd rather just have it on all the time
<RP>
JPEW: the other trick would be to use pickle since it would compress them by using references for the strings
* RP
isn't claiming that is a good trick
<JPEW>
Ya, that would reduce a lot of duplication since there are a lot of duplicate debug dependencies (I'm looking at you, libc!)
BCMM_ is now known as BCMM
<JPEW>
TBH, simple compression (lz4) would probably work just as well
<RP>
JPEW: would we want to do some kind of custom json by reference/lookup? Is there a module for it? :)
<JPEW>
RP: lz4 is so fast that it's not even noticeable in most cases (faster than whatever I/O you happen to be using)... it makes it really attractive for stuff like this
<zeddiii>
RP: (I realize you are getting two streams of conversation here). I didn't get the AUFS warnings when I didn't explicitly enable it in my local testing, so in theory we can drop the inhibit that you had done for the AB runs. If it comes back (the audit warning), I'll have to do some configuration fragment changes.
<RP>
JPEW: I would love to write out the whole bitbake parsed datastores for debugging
<RP>
zeddiii: added to master-next and will trigger a test run. I dropped the config change a while ago after realising I don't see those warnings
<JPEW>
... someday bitbake might have to bite the bullet and run in a Python venv so we can actually pull in 3rd party modules reasonably
<RP>
JPEW: if bitbake uses it for parsing that is pre HOSTTOOLS which is where this gets messy :(
<RP>
JPEW: someday. I just worry about who updates all the docs, CI and so on
* RP
knows who likely gets to do that
<RP>
it's a huge change sadly, way more work than anyone first realises
<JPEW>
Hmm....
<JPEW>
Are there a lot of users of the bitbake library outside of bitbake itself?
<JPEW>
e.g. entrypoints that are *not* through some bitbake or OE provided command
<RP>
JPEW: no
<JPEW>
It... might not be as bad as you would suspect then
<RP>
JPEW: basically the list of problems goes something like new buildtools, recipes for the dependencies, updates to the autobuilder workers to add the depends, updates to the CI code if/as needed, updates to the docs to mention the new dependencies, write the migration entry
<RP>
Which doesn't sound too bad until you try it
<JPEW>
Fair enough
zyga-mbp has joined #yocto
<RP>
I'm not saying no, I'd love to use more things, it's just not straightforward. Even a new buildtools release is a pain :(
<JPEW>
Hmm... I guess I'm a little bit confused. The idea of using venv is that it would pull all the bitbake dependencies from PyPI (ideally, there's no change to buildtools because venv is part of Python proper).
zyga-mbp has quit [Client Quit]
<RP>
JPEW: well, it depends on whether we care about determinism any more and want a way to run things without the venv
<rburton>
you can lock the versions of external deps
<RP>
I'm assuming we continue to need a way to set up something equivalent with buildtools
<rburton>
also i'd say we don't, we just use buildtools to provide a python
<JPEW>
RP: I think venv is an established enough thing that we just use it all the time
* RP
is really tired of saying no all the time. Should I care? I'm just worn out :(
<RP>
I want to do neat cool stuff. I can't even get stable builds right now and am stuck debugging all the junk I've been pushed into running/supporting already.
<JPEW>
RP: Ya, that's fair
<dl9pf>
can't we just build it as -native and be done ?
<dl9pf>
wouldn't break the current scheme
<JPEW>
dl9pf: Build what?
<dl9pf>
lz4
<RP>
dl9pf: for use in the parser?
<JPEW>
dl9pf: I don't think so because bitbake might need it before it can build anything
<smurray>
if you could defer the tasks that need it, I could imagine a bootstrap phase, maybe?
<dl9pf>
just thinking - can we check on very first run within a project and pull/build it ?
goliath_ is now known as goliath
<dl9pf>
yep
<rburton>
now you've just badly reimplemented venv
<smurray>
well, it was just expressed that's not an option ;)
<smurray>
if the SBOM stuff is just output generation in recipes, bootstrapping for it seems reasonable? Or is it more fundamental than that?
<JPEW>
smurray: I think the discussion was to compress some of the SBOM data, which TBH is completely optional :)
<JPEW>
It... sort of went a different direction :)
<v0n>
hi all -- my python app requires a custom 'prepare' step to compile protobuf things. Should I add do_compile_prepend in my recipe, or override do_configure (since distutils3_do_configure is a noop)?
<v0n>
a custom 'setup.py prepare' step*
<RP>
there is a desire to make bitbake able to use much more of the wider python module ecosystem. That would certainly be more modern/trendy. It isn't without its downsides. We could no doubt figure out a way to make packagedata able to use lz compression alone
<v0n>
copy-pasting the content of distutils3_do_compile into my recipe's do_compile_prepend and doing s/build/prepare/ seems the way to go to me, but I'd prefer to double-check with you
chrfle_ has quit [Ping timeout: 265 seconds]
<rburton>
i'd just do a do_compile_prepend that invoked the step you need
<rburton>
you don't need to copy/paste
<rburton>
actually i'd fix the setup.py so that the special protobuf prepare step happens when you do setup.py build
<v0n>
rburton: I don't know how to inject a custom step in the build step, without having an ugly double call
<rburton>
how do you do this prepare step?
ilunev has joined #yocto
chrfle_ has joined #yocto
<v0n>
rburton: not my thing, but there seems to be a PreBuild() class in there
<v0n>
rburton: btw I thought about copy-pasting distutils3_do_compile because of the usage of ${PYTHON_PN}. can I simply do DEPENDS = "python3" and "python3 setup.py prepare" without prefix in do_compile_prepend?
gsalazar has quit [Ping timeout: 272 seconds]
<rburton>
I meant how do you do the prepare step
<rburton>
what are you planning on writing in this new task or prepend
<rburton>
RP: am i imagining there being badge urls for layer compatibility results?
ncaidin_lf has joined #yocto
ncaidin_lf has quit [Client Quit]
<v0n>
rburton: the setup.py 'prepare' task (i.e. this PreBuild class) does call protoc, and for some reason this cannot be done from within the Build class, because of how it touches the __init__.py files
<rburton>
so it is a whole new step in setup.py
<rburton>
then yes, copy/paste do_compile to a prepend and change 'compile' to 'prepare'
<rburton>
erm, 'build' to 'prepare'
<rburton>
bonus points for changing the base class so you can set a variable of the stages to build
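What rburton is suggesting, sketched against dunfell's distutils3 class; the 'prepare' command name comes from the project's own setup.py:

    # run the custom step with the same interpreter the class uses,
    # just before distutils3_do_compile invokes 'setup.py build'
    do_compile_prepend() {
        cd ${DISTUTILS_SETUP_PATH}
        ${STAGING_BINDIR_NATIVE}/${PYTHON_PN}-native/${PYTHON_PN} setup.py prepare
    }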
vmeson has quit [Ping timeout: 272 seconds]
<v0n>
rburton: so that makes sense? Is there a way to properly integrate the `protoc protos_sub_dir/` call into the build step?
<rburton>
i'd have thought it was trivial to do that in setup.py but i didn't write it so <shrugs>
kuzz has quit [Quit: Lost terminal]
<RP>
rburton: It would be nice
<RP>
rburton: really need advocacy to help there
ilunev has quit [Ping timeout: 244 seconds]
<rburton>
Too warm, brain not working
nerdboy has quit [Changing host]
nerdboy has joined #yocto
dti is now known as dtometzki
<v0n>
rburton: I think the problem is that this prepare step compiles sources into python files, which are then built|installed. That's why it cannot be added that easily into the Build class
<RP>
rburton: know what you mean about warm :/
leon-anavi has quit [Quit: Leaving]
warthog19 is now known as wartohg9
<v0n>
my custom prepare step is invoked without error even though I did not mention any DEPENDS to provide the 'protoc' binary, which kinda worries me :)
davidinux has quit [Ping timeout: 252 seconds]
davidinux has joined #yocto
<RP>
zeddiii: I'm struggling to make my local failure case fail anymore. So frustrating :(
<v0n>
if I use RDEPENDS python3-protobuf, which has DEPENDS protobuf, does my recipe have protobuf at build time as well?
<JPEW>
v0n: You may need protobuf-native for a build-time host tool
<zeddiii>
RP: :(
<JPEW>
v0n: e.g `DEPENDS += "protobuf-native"`
ant has joined #yocto
ant is now known as Guest1107
vmeson has joined #yocto
Guest1107 has quit [Client Quit]
<v0n>
JPEW: thank you!
<paulg_>
RP, if you've failed to fail, is that a fail?
paulg_ is now known as paulg
* RP
notes an ltp test with "sleep 900" in it
<RP>
paulg: yes, total fail
<paulg>
'cause why would you ever expect LTP to complete in less than 15h ?
<RP>
paulg: indeed. Would be nice if I could ask for "fast" tests only
<paulg>
tgamblin was just noticing that the "fast" tests he was running had already racked up 1+ hrs of wall time, so be careful what you wish for. :-P
<v0n>
btw the setup.py describes the dependencies, do I still need to add the corresponding RDEPENDS python3-* libraries?
<zeddiii>
RP: does the AB run riscv32 ? I can start a local build, but it'll take 3 or 4 hours on my builder.
<paulg>
that looks like a rootfs arch mismatch faii
<paulg>
/sbin/init .... /bin/bash -- the whole default list all failed to execute?
<RP>
zeddiii: no, it isn't in the official test matrix or supported by YP
<zeddiii>
yah. no matter how low my opinion of what gets into -stable, they shouldn't have done that.
<paulg>
I'd have to check the code to be sure ; at one point 10+ years ago zeddiii and I had a more informative printk in there if your rootfs was missing completely.
vmeson has quit [Read error: Connection reset by peer]
vmeson has joined #yocto
chrfle_ has joined #yocto
michael_ has quit [Quit: Leaving]
<v0n>
I'm getting (from pydoc import locate) "ModuleNotFoundError: No module named 'pydoc'", isn't it a standard python thing? Am I missing an RDEPENDS maybe?
<v0n>
is there a meta package for all standard python-foo ?
<v0n>
python3-foo
Guest38 has joined #yocto
hpsy has joined #yocto
zyga-mbp has joined #yocto
<wesm>
noob question here, I am trying builds for my raspberrypi-cm3 and I switched from a tar IMAGE_FSTYPES to wic. now I get an error about a missing rpi-zero dtb from do_image_wic. can I get bitbake to build just that dtb file or just exclude it? I'm not using that platform anyway
zyga-mbp has quit [Client Quit]
<wesm>
I don't know why it even wants that one, really
<wesm>
since that's not the MACHINE I'm targeting
hpsy has quit [Ping timeout: 264 seconds]
<wesm>
ok I guess it comes from the meta-raspberrypi/conf/machine/include/rpi-base.inc
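One possible local.conf sketch for trimming the list rpi-base.inc sets; the exact dtb file name is an assumption and should be checked against the layer:

    # build only the devicetrees needed for raspberrypi-cm3
    KERNEL_DEVICETREE_remove = "bcm2708-rpi-zero-w.dtb"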
Vonter has quit [Ping timeout: 244 seconds]
chrfle_ has quit [Ping timeout: 252 seconds]
<zeddiii>
qemuriscv32 login: root
<zeddii>
root@qemuriscv32:~# uname -a
<zeddii>
Linux qemuriscv32 5.10.42-yocto-standard #1 SMP PREEMPT Wed Jun 9 13:47:13 UTC 2021 riscv32 riscv32 riscv32 GNU/Linux
<zeddiii>
RP: ^------------
<zeddiii>
(so keep running with those SRCREVs, we'll sort out why the results differ).
<RP>
khem: ^^^
prabhakarlad has quit [Quit: Client closed]
<RP>
zeddiii: I have ltp hacked up so that it faults about 75% of the time in around 8 mins
<RP>
zeddiii: interestingly, I think the hangs depend on causing earlier tests to trigger the oom killer :/
<zeddiii>
heh. I've seen that before, in fact, maybe even with ltp. it corrupted something so badly in test a) that test b) actually failed.
<RP>
zeddiii: I'll now try that with the noaufs kernel
<RP>
paulg: it shouldn't be able to trash kernel memory though?
<paulg>
I've lost track of what is going on. Does LTP break vanilla 5.10.42, or 5.10.42+aufs, or... and are we chasing an NX warning now, or is each crash different, or ... ?
<RP>
paulg: the ltp test above breaks v5.10/standard/base-aufsv2 @ b3b1627391bf18358547d84e4bb4b53438d5cf98 which I think is 5.10.42+aufs with aufs not enabled in the defconfig
<RP>
paulg: it triggers a kernel crash which can be an NX fault, a crash in the interrupt handler or a null pointer dereference. I.e. something is trashing kernel memory
<RP>
it usually faults in a fs call of some kind like a mount op or an mkdir
hpsy has quit [Ping timeout: 264 seconds]
<paulg>
any arch? seen on bare metal or only under qemu?
<RP>
paulg: I'm testing under qemu on x86-64 with kvm
* paulg
isn't a big qemu fan.
<RP>
paulg: I don't get real hardware any more
<paulg>
yah, most of mine is stray-cat rescue material.
<RP>
paulg: I am also trying to stop the autobuilder crashing/hanging and it is all qemu
<paulg>
well, lemme see if I can make something out of the above and get it on x86-64 w/o qemu and see what happens. Probably won't be happening before your bedtime tho...
<ant_>
RP: last I used qemu it was during the gcc6->7 move. It could boot misaligned kernels that failed on real hw
<RP>
zeddiii, paulg: So far, switching to v5.10/standard/base @ a673c127156c156a4a490ef66e0194d239cfbfa1 has had several runs without provoking the bug
<paulg>
maybe I should be looking at aufs for stuff that lives outside of its ifdeffery...
<paulg>
not that I really want to, but....
Vonter has joined #yocto
LetoThe2nd has quit [Quit: Connection closed for inactivity]
<RP>
zeddiii, paulg: bad news, I have the bug with a673c127156c156a4a490ef66e0194d239cfbfa1
<paulg>
well, I guess the next step is vanilla 5.10.42
<RP>
paulg: I used to build kernels outside the buildsystem but it does it so much better than I do now...
<paulg>
it gets easier when you slowly drift towards only building for one architecture. :-P
<paulg>
gave away all my powerpc boards like 5-8y ago....
<RP>
paulg: you're a mips only shop now? :)
<paulg>
heh.
<paulg>
wonder what Ralf is up to these days?
dev1990_ has quit [Quit: Konversation terminated!]
<paulg>
anyway I'll boot a v5.10.42 @ a673c127156c156a4a490ef66e0194d239cfbfa1 on a turd with ltp already on it and see if I can make it go crunch-bang.
<paulg>
going with "defconfig" for now.
<paulg>
no modules... x86-64
<RP>
paulg: my hacked up test run is basically mm, containers then controllers
<RP>
paulg: mm seems to break it, the last bit of containers provokes it, and controllers then crashes it
<paulg>
that will tell me if I can see what y'all are seeing, since you've confirmed a673c is borken.
<RP>
paulg: a673c seemed harder to break but I did get a crash
<paulg>
I'll see if I can glom your reduced set into the existing ltp on platters. Don't really want to rebuild ltp
<RP>
paulg: just copy that hackedup file into the runtests dir
<RP>
paulg: those are the magic bits of mm, containers and controllers that seem to provoke it
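Running a reduced scenario file like that on the target, assuming LTP is installed in the usual /opt/ltp location (scenario name illustrative):

    # drop the trimmed list beside the stock scenario files, then run it
    cp hackedup /opt/ltp/runtest/
    cd /opt/ltp
    ./runltp -f hackedup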
<paulg>
has been a while since I've poked at ltp ; will need to clear some cobwebs and dust.
<RP>
paulg: of course whether it is the right ltp version... :)
<paulg>
yeah, I was going on the assumption that the mm tests haven't changed radically, but I might have to revisit that if it doesn't break.
<RP>
zeddiii, paulg: have some good/bad news. 65859eca4dff1af0db5e36d1cfbac15b834c6a65 breaks too. So it is in upstream .42
<v0n>
does setuptools3 honor the Python requirements.txt?
<paulg>
RP, well you are in this far, with the reproducer handy... test .41 just to confirm it was OK?
<paulg>
zeddiii is no stranger to me griping about how -stable has increased their scope of what qualifies; chances are you've heard him gripe about the same.
<paulg>
see the iommu wreckage of late.
<RP>
paulg: could be worse, could be the RH kernel where we're chasing bugs from totally new syscalls in a point upgrade
<paulg>
don't get me started on ranting about frankenkernels.
<zeddiii>
RP: I just pushed the clean .42
<zeddiii>
v5.10/base, if you still need it.
<RP>
paulg: I am going to have to sleep shortly but I'll try .41 first
<RP>
zeddiii: I think we've concluded it is bust too :/
<paulg>
I'll see if I can break .42 on bare metal, and then maybe I can continue the "rewind" to something non-borked.
<zeddiii>
ack'd. took SIGFOOD and was trying to catch up :D
<paulg>
yah, I've got pending SIGFOOD here.
<paulg>
RP, before you flee; what kind of run quantity do you need on your minimized test to see a break on your last couple of tests? Less than 5?
<paulg>
seeing as this is one of those "doesn't fail doesn't mean good" kinda pseudo-bisects....
<RP>
paulg: mostly it's taken 2-3 but plain .42 took about 7
<paulg>
ok - good to know where the "try harder" threshold is presumed to be currently.
<RP>
paulg: you at least have some "known broken" revisions that you know should break...
<RP>
zeddiii, paulg: I should say you need to ignore the pass/fail from testimage, you have to look at the qemu console log in WORKDIR/testimage for tracebacks
* RP
suspects the ltp test status needs some work.
* paulg
won't be using anything from testimage
<paulg>
at least not immediately
<RP>
zeddiii, paulg: .41 confirmed as bust too
<paulg>
whee.
<paulg>
.38 was the iommu fixed one, IIRC.
<paulg>
maybe we think that was good?
<RP>
well, I should sleep. If you have any ideas on what I should be testing tomorrow leave comments :)
<RP>
should I pick some early version of 5.10 for comparison?
<paulg>
it might be wise
<paulg>
in terms of ruling out something !kernel -- like qemu
<paulg>
otherwise we might be on a wild goose chase rewinding thru kernels for no value.
<paulg>
when was the last time we think the autobuilder didn't spit the dummy on ltp?
<RP>
paulg: with all the problems we've been having, hard to say :(
* RP
-> Zzzz
<paulg>
well, in that case, I typically just carve off a sizeable chunk, like a 1/2 dozen stable versions and see what happens.
<paulg>
anyways, if I get a reproducer, then I'll rewind some more. If I don't get one on vanilla .42 then that also tells us it is something specific to toolchain/ltp/qemu/...
<paulg>
sigfood
<paulg>
I've got "./runltp -f mm" going on vanilla v5.10.42 now - we'll see what that does in the interim
<paulg>
serial console should catch any blood spatter and chunks of brain.
Dracos-Carazza has quit [Ping timeout: 252 seconds]
<paulg>
INFO: ltp-pan reported all tests PASS
<paulg>
LTP Version: 20200515-34-g10c317d6e
<paulg>
time to try the "hack" subset.
BCMM has quit [Quit: Konversation terminated!]
<paulg>
default mm set did trigger about a 1/2 dozen oom on the sercons though.
Dracos-Carazza has joined #yocto
otavio has quit [Ping timeout: 264 seconds]
otavio has joined #yocto
otavio has quit [Read error: Connection reset by peer]