<RP>
I noticed malloc fixes on the top of the stable branch and those may help, still trying to confirm
<mcfrisk>
RP: doing kernel debugging work by applying an efi debug patch and the kernel compile is really slow with do_package and do_package_qa both running for 10 minutes
<mcfrisk>
also all actions with stripping, shlib analysis are in series, one file at a time. I feel like generating a Makefile for the task..
rfuentess has quit [Remote host closed the connection]
<RP>
mcfrisk: there is supposed to be parallelism in at least some of the code
<RP>
mcfrisk: people don't often pay attention to performance and it has been a while since I last optimised that code
<mcfrisk>
RP: yea on normal builds this is hidden in noise, but I'm just changing a kernel patch and rebuilding, and end up waiting a bit too much. I'll try some fixes/workarounds..
dvergatal has quit [Ping timeout: 246 seconds]
dmoseley has quit [Ping timeout: 246 seconds]
dvergatal has joined #yocto
<RP>
mcfrisk: I'd be interested in knowing more about where the time is spent. bitbake -P can help with that btw
<RP>
the task should have a profile log generated with that
varjag has quit [Quit: ERC (IRC client for Emacs 27.1)]
<RP>
rburton: any luck with the mmu tweaks?
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
mbulut has quit [Ping timeout: 244 seconds]
goliath has quit [Quit: SIGSEGV]
<khem>
RP: that change was in 2.37 too, isn't it?
<RP>
khem: it is :/
<RP>
khem: although only fairly recently
<RP>
khem: I'm going to suggest we pull in the latest glibc fixes on stable, rebuild uninative and see if that works
<khem>
sure, is there anything interesting in the 2.38 delta so far?
Vonter_ has quit [Read error: Connection reset by peer]
alessioigor has quit [Quit: alessioigor]
prabhakarlad has quit [Quit: Client closed]
Vonter has joined #yocto
wmills has joined #yocto
<wmills>
Anyone around that understands the testresults git repo structure and procedures?
<RP>
wmills: I'm tempted to hide ;-)
<wmills>
I am trying to understand why some builds have just one result set ( /0 ) but most of 2 to 3 and finally a few per year have 10 result sets ( /0 to /9 )
<khem>
RP: Seeing your glibc version bump patch, the memalign patches look interesting
<wmills>
I do realize it is Friday night
<RP>
wmills: it depends how many times the autobuilder runs against that revision
<RP>
wmills: if there are no pushes over a weekend you may see 2 to 3
<wmills>
It is not different people running different tests?
<RP>
khem: that was my thought
<RP>
wmills: in theory it could be, in practice I think it is when we don't push and the nightly autobuilder jobs rerun
<wmills>
OK that helps. For build master/70850-gd221e59a5067266c3f620259a1e56a56823df1fb I see 10 result sets. Most have 222 testresults.json files but /1 and /8 have 61 more testresults.json files. Any idea what is going on there?
<RP>
wmills: a-full vs a-quick builds?
<wmills>
That makes sense. (222 does not seem quick). The extra tests are 2 more images in poky-altcfg for qemumips qemuppc and 26 new ptests for qemux86-64 and qemuarm64. Additional ptests include openssh parted etc
<RP>
wmills: that sounds very like the differences between a-full and a-quick
leon-anavi has quit [Read error: Connection reset by peer]
<RP>
wmills: see trigger_builders_wait_full vs trigger_builders_wait_full
<RP>
er, wait_full vs wait_quick
<wmills>
So I am trying to figure out how Linaro can record its test results from genericarm64. First I wanted to understand the scope of what was tested. But I was looking for examples of HW tests being checked in independently
ptsneves has quit [Quit: ptsneves]
ptsneves1 is now known as ptsneves
<wmills>
I think I saw examples in the stable branches
ptsneves1 has joined #yocto
<RP>
wmills: each release should have manual entries from QA's runs
<wmills>
each release but not each master build, correct? I think I should switch back to looking at the stable branches as the release builds will be easier to find
<RP>
the top level report is re-generated to include them all
<RP>
wmills: correct
<Saur>
RP: Sorry for the wall of text to the openembedded-core list.
<wmills>
How do the Intel & WR test results get checked in to the git repo? Do they have push rights? Does a maintainer do it? Or a server process?
<RP>
wmills: Intel does the release engineering, they collect up the results and push them in. They have access as release engineering
amitk_ has joined #yocto
<RP>
Saur: what happens before the SRCPV changes if you try and build one of those recipes with invalid srcrevs?
<wmills>
RP: OK let me keep looking at this. Thanks. Have a good weekend
<RP>
wmills: From memory I think there is a collaboration staging git repo and Intel collects the results from there
<Saur>
RP: In Mickledore I get no errors during parsing, and the same errors that I now get with my suggested change applied to Nanbield. I.e., with my suggested change applied the behavior in Mickledore and Nanbield is identical.
<RP>
Saur: your change feels like we're just papering over problems :(
florian_kc has joined #yocto
<Saur>
RP: Well, this maintains status quo when tags are used in SRCREV (for good or bad).
<RP>
Saur: so to be clear, you don't get a parsing error when you try and execute the recipe, it fails at runtime in do_fetch
<Saur>
RP: Yes, exactly as it used to be.
<RP>
whilst it may have done that, I think it is bad
<RP>
Saur: what happens if you make the fetcher raise SkipRecipe() ?
<Saur>
You mean as I did in my original patch?
leon-anavi has quit [Quit: Leaving]
<Saur>
Then the error message changes for the case where you set SRCREV to a tag to the one I included in the commit message.
<RP>
Saur: I guess I'm thinking of the fetcher function itself doing it but it would be equivalent
<RP>
I guess I still think that these recipes should be fixed to raise SkipRecipe themselves if they set broken configurations
<Saur>
The problem (for me) is that the brokenness depends on external factors, i.e., whether the user building happens to have access to a specific Git repository or not. A repository for a recipe he probably isn't interested in anyway (since he does not have access to it).
<RP>
Saur: there is another side to this - a user does generally prefer to know about breakage earlier than later. This is just going to hide breakage from a user until part way through a build
<Saur>
True, but at the same time, errors during parsing are horrible as they, e.g., prevent `bitbake -e` from being used.
<Saur>
And the fetch tasks are typically run early in the build process anyway.
<RP>
Saur: I'll think about it. I'm more of the view that in general people do want to know about broken configuration sooner than later though
<Saur>
RP: My problem with that is that the current early error is just a side effect of using tags in SRCREV. E.g., if you set an incorrect SHA-1 in the SRCREV you will not get the error until the fetch fails, i.e., the same behavior that I have for tags in SRCREV with my patch applied.
<RP>
Saur: we have to resolve tags and we can detect that early. It is assumed that revisions are correct but we would detect e.g. typos in the revision that made them invalid
<Saur>
RP: Well, true. But as I see it, having a reference in the SRCREV that all may not be able to access, is no different from having a PACKAGECONFIG that adds a dependency on a recipe that you do not have. It only becomes an error if you actually try to enable that PACKAGECONFIG, much like the SRCREV only becomes an error if you try to build the recipe without the proper access rights.
<RP>
Saur: I disagree, and you're rapidly pushing me to the view that this really should be an error
<RP>
I appreciate it is tricky for your situation
<Saur>
RP: Well, (I think) I can handle the situation for us, by using an anonymous Python function in each recipe that is affected and letting it raise SkipRecipe. But I would of course very much prefer a generic solution that maintains the behavior from previous releases.
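A hedged sketch of the per-recipe workaround Saur describes: an anonymous Python function that skips the recipe cleanly at parse time when SRCREV is not a pinned SHA-1 (i.e. a tag that would need access to the git repo to resolve). This is an illustrative fragment, not code from the discussion:

```
# hypothetical recipe fragment; raising bb.parse.SkipRecipe turns what would
# be a parse error into a clean "recipe skipped"
python () {
    import re
    srcrev = d.getVar("SRCREV") or ""
    if not re.fullmatch(r"[0-9a-fA-F]{40}", srcrev):
        # SRCREV is a tag: resolving it needs access to the git repo,
        # which not every user of this layer has
        raise bb.parse.SkipRecipe("SRCREV '%s' requires repo access to resolve" % srcrev)
}
```

The recipe then simply disappears for users without access instead of breaking parsing for the whole layer.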
<Saur>
RP: Related to this: while we currently rely on being able to set SRCREV to a tag, have you considered removing this support and only allow SHA-1s in SRCREV?
* RP
thinks the previous behaviour is broken
<RP>
Saur: I wondered about it but I suspect people would be out with pitchforks if I did that :/
goliath has joined #yocto
<Saur>
RP: Well, you don't know until you try. I know it would definitely make us reprioritise some of our development work. ;)
slimak has quit [Ping timeout: 246 seconds]
gsalazar has quit [Ping timeout: 260 seconds]
florian_kc has quit [Ping timeout: 255 seconds]
flom84 has joined #yocto
prabhakarlad has joined #yocto
flom84 has quit [Ping timeout: 246 seconds]
flom84 has joined #yocto
Vonter has quit [Ping timeout: 248 seconds]
flom84 has quit [Remote host closed the connection]
<RP>
khem: I think so. We'll need more testing to know for sure
mvlad has quit [Remote host closed the connection]
mdp has quit [Server closed connection]
mdp has joined #yocto
goliath has joined #yocto
ptsneves1 has quit [Ping timeout: 248 seconds]
NishanthMenon has quit [Server closed connection]
NishanthMenon has joined #yocto
<adrianf>
Saur, RP: just a third opinion. If we think about OSS repositories only, a parsing error is fine. However, for example we have a GitLab infrastructure with tens of thousands of users. Only users who really need access to a git repo get access to it. If parsing a layer requires access to all the repos referred to by a recipe in the layer, tags are no longer usable. I changed all our recipes from tags to hashes after updating to Kirkstone. If I remember correctly, Kirkstone started throwing warnings. I hope this comment makes sense, but I'm not sure that I got the beginning of the discussion.
<adrianf>
It's maybe also important to know that GitLab runs bitbake with the privileges of the user who triggered the build pipeline. In such an environment a fetch error is much better than a parse error.
pabigot has quit [Ping timeout: 255 seconds]
xmn has quit [Ping timeout: 246 seconds]
<RP>
adrianf: the point is that recipes should either parse and work or they should be skipped and effectively not be present. The state of parsing then failing part way through a build is not a good user experience
xmn has joined #yocto
<adrianf>
For us tags are useless now. It's not a big issue. We can go with recipes with hashes only. But it means manually maintaining the hash and the pv=tag per recipe. Users are asking why this is required.
l3s8g has quit [Ping timeout: 246 seconds]
pabigot has joined #yocto
rsalveti has quit [Quit: Connection closed for inactivity]
<adrianf>
Resolving at parse time also makes parsing slower. This makes sense for e.g. bitbake world. But it is a disadvantage if rebuilding one recipe with the esdk is the use case. Especially if you are offline.
<JaMa>
adrianf: bitbake was always resolving tags into shas while parsing, that's why we switched from tags to hashes about 10 years ago, because it was hammering our gerrit servers too much while parsing the recipes
<JaMa>
adrianf: we have a bbclass which checks that the hash matches the tag name and PV, so it doesn't have to be done manually
<Saur>
JaMa: Do you have a URL for that class?
<adrianf>
Yes, the DoS attacks are also why we changed all recipes.
<JaMa>
but there haven't been many changes since its introduction
<adrianf>
But my understanding of the question was: is resolving at parse time better than resolving at fetch time? I just wanted to say: it really depends.
<JaMa>
but you also said it changed in kirkstone which I don't think is true, it was always like that
l3s8g has joined #yocto
<JaMa>
or maybe you just meant that kirkstone started throwing warnings
<Saur>
JaMa: I think the change adrianf is thinking of is the requirement to use ${SRCPV} when using anything but SHA-1 or you get an error.
<JaMa>
IC
<Saur>
I guess that requirement has solved itself now though since ${SRCPV} is no more...
<RP>
around Kirkstone time there was a realisation that there were big problems if you didn't use SRCPV when bitbake was expecting it. That was why the warnings were added, I spent days working out why some odd behaviour happened :/
<RP>
now, we get to try and clean things up a bit and if we can, simplify...
<RP>
adrianf: tags had to be resolved to hashes for a while after it was found people do move tags :(
brrm has quit [Ping timeout: 245 seconds]
brrm has joined #yocto
<JaMa>
for that we also enforce using annotated tags, with their hash ending up in SRCREV, so if someone recreates a tag with the same name it's detected as well
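JaMa's scheme works because an annotated tag is its own git object, distinct from the commit it points at, so re-creating the tag changes the tag object's hash even if it points at the same commit. A quick demonstration in a throwaway repo (identity values and paths are placeholders):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
git -c user.name=demo -c user.email=demo@example.com \
    tag -a v1.0 -m "release v1.0"
tag_obj=$(git rev-parse v1.0)        # hash of the annotated tag object itself
commit=$(git rev-parse 'v1.0^{}')    # hash of the commit it points at
echo "tag object: $tag_obj"
echo "commit:     $commit"
test "$tag_obj" != "$commit" && echo "annotated tag has its own hash"
```

A lightweight tag, by contrast, is just a ref to the commit, so `git rev-parse` would print the same hash for both.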
<adrianf>
RP, Saur, JaMa: With Kirkstone my conclusion was that tags have several negative side effects and I finally removed them. This discussion pointed out some more disadvantages. I agree that resolving while fetching would lead to even worse corner cases than failing fast at parse time. But from a user's perspective it's not too obvious why tags should not be used.
<RP>
adrianf: we probably should at least document this somewhere...
pabigot has quit [Ping timeout: 248 seconds]
warthog9 has quit [Ping timeout: 246 seconds]
<JaMa>
RP: I don't want to jinx it, but the ping timeout fix from you seems to work on my 90K task jobs, only timeouts I got today were from java.io.EOFException
<adrianf>
RP: I would probably search for that in the bitbake manual git fetcher section.
pabigot has joined #yocto
warthog9 has joined #yocto
ptsneves has quit [Quit: ptsneves]
l3s8g has quit [Ping timeout: 246 seconds]
florian_kc has quit [Ping timeout: 250 seconds]
Marian66 has joined #yocto
<Marian66>
Hi,
<Marian66>
I have a recipe that on do_compile is creating build/x86_64/ and build/arm64.
<Marian66>
On do_install I'm trying to install them based on the TARGET_ARCH:
<Marian66>
if ["${TARGET_ARCH}" == "aarch64"]; then
<Marian66>
target="arm64"
<Marian66>
elif ["${TARGET_ARCH}" == "x86_64"]; then
<Marian66>
target="amd64"
<Marian66>
fi
<Marian66>
When I compile I get the following:
<Marian66>
177: [x86_64: not found
<Marian66>
179: [x86_64: not found
<Marian66>
Can you please help?
ldts has quit [Server closed connection]
ldts has joined #yocto
florian_kc has joined #yocto
<Saur>
Marian66: You are lacking spaces after [ and before ]
<Saur>
Marian66: You should also use = instead of == as the latter is a bashism.
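Applying both of Saur's corrections to Marian66's do_install fragment: POSIX `test` needs spaces inside the brackets and uses a single `=` for string comparison. A standalone sketch (TARGET_ARCH is hard-coded here for the demo; bitbake sets it in a real recipe):

```shell
TARGET_ARCH="aarch64"   # provided by bitbake in a real recipe

if [ "${TARGET_ARCH}" = "aarch64" ]; then
    target="arm64"
elif [ "${TARGET_ARCH}" = "x86_64" ]; then
    target="amd64"
fi

echo "$target"   # → arm64
```

The original "177: [x86_64: not found" errors came from the shell parsing `["${TARGET_ARCH}"` as a single command name, which the spaces fix.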