otavio has quit [Remote host closed the connection]
nemik has joined #yocto
Herrie has quit [Ping timeout: 258 seconds]
zeddii has quit [Ping timeout: 276 seconds]
seninha has quit [Remote host closed the connection]
bps3 has quit [Ping timeout: 276 seconds]
seninha has joined #yocto
nemik has quit [Ping timeout: 255 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
camus has joined #yocto
sakoman has quit [Quit: Leaving.]
starblue has quit [Ping timeout: 258 seconds]
starblue has joined #yocto
sakoman has joined #yocto
zeddii has joined #yocto
zeddii has quit [Ping timeout: 255 seconds]
zeddii has joined #yocto
seninha has quit [Quit: Leaving]
sakoman has quit [Quit: Leaving.]
davidinux has quit [Ping timeout: 260 seconds]
davidinux has joined #yocto
olani has quit [Ping timeout: 255 seconds]
manuel1985 has quit [Ping timeout: 255 seconds]
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
goliath has joined #yocto
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 258 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 258 seconds]
nemik has joined #yocto
Schiller has joined #yocto
Schiller has quit [Quit: Client closed]
manuel1985 has joined #yocto
Schiller has joined #yocto
frieder has joined #yocto
Herrie has joined #yocto
rob_w has joined #yocto
tre has joined #yocto
rob_w has quit [Remote host closed the connection]
mckoan|away is now known as mckoan
<mckoan>
good morning
<wyre>
o/ :)
leon-anavi has joined #yocto
gsalazar has joined #yocto
acki_ has joined #yocto
kroon has joined #yocto
bps3 has joined #yocto
<LetoThe2nd>
yo dudX
<RP>
jaskij[m]: I was going to sleep too! :)
<jaskij[m]>
RP: Nice one. Having slept on it, I think I'll just report as is, and if they need a more detailed repro, we'll go from there. Would you want to chip in with a link to a failing build in Yocto's CI?
<RP>
jaskij[m]: I don't easily have such a thing as yet, I noticed it failing but I didn't dig into it or get any further
<RP>
I suspect if they know it can return 0, they'll know how to fix it
<jaskij[m]>
Fair enough. I should send to their mailing list in a few hours
<RP>
jaskij[m]: Sounds good thanks. It could of course be a different issue too :/
Schlumpf has joined #yocto
<jaskij[m]>
Also, I've seen a mailing list thread about Go fetching in do_compile, with talks about making a generator. Did anything make it into Kirkstone? So far I just enabled network for the task in the few recipes I have.
<RP>
jaskij[m]: nothing merged as yet
<landgraf>
jaskij[m]: I think we have some kind of workaround for this problem. CC zyga[m]
<jaskij[m]>
Oh well.
violet has joined #yocto
<zyga[m]>
good morning
<zyga[m]>
landgraf: nothing fancy, I'm afraid, we wanted to write a proper fetcher but never got around to it
<zyga[m]>
landgraf: the idea is to use go itself, since go can correctly fetch all the deps and be ready for offline builds
<landgraf>
zyga[m]: Oh. Sorry. I'm reading "trick" paragraph in the recipe and it looks tricky :D
<zyga[m]>
landgraf: but we never wrote that code
<zyga[m]>
I mean, ideally it would be little more than `go mod download`
<zyga[m]>
I don't think supporting pre-module code is worthwhile
<zyga[m]>
ah :)
<landgraf>
do_compile[network] = "1"
<landgraf>
jaskij[m]: ^ nvm then. Sorry for false alarm
<jaskij[m]>
landgraf: Yeah, that's what I'm doing now. Thanks for the help anyway.
<zyga[m]>
yeah, that's the hack we do
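A minimal sketch of that hack in recipe terms (everything except the last line is ordinary Go recipe boilerplate; the import path and branch are placeholders):
    # in a hypothetical Go recipe that inherits go-mod
    GO_IMPORT = "example.com/example/app"
    SRC_URI = "git://${GO_IMPORT};protocol=https;branch=main"
    SRCREV = "${AUTOREV}"

    # the workaround discussed above: let 'go mod download' reach the
    # network from do_compile until a proper go module fetcher exists
    do_compile[network] = "1"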
ptsneves has joined #yocto
kriive has joined #yocto
acki_ has quit [Ping timeout: 260 seconds]
<ernstp>
mrybczyn[m]: Any more thoughts on my suggested changes to cve-check?
<mrybczyn[m]>
ernstp: should have a result from the build farm tomorrow. I'm checking the results and would likely suggest adding a switch. But let me analyse the result set
<ernstp>
mrybczyn[m]: Oh you're testing it, nice! Hmm.. the switch would change the "recipes" list to be a longer list in that case.. ?
<mrybczyn[m]>
ernstp: if I have a doubt that we're getting an incomplete list, I'll suggest you make it optional. It's quite tedious to check, which is why it is taking time
<ernstp>
mrybczyn[m]: I don't think it's incomplete, but please let me know what you find!
<ernstp>
mrybczyn[m]: with my $company hat and a number of extra layers I want a small list of CVEs to take care of. But from the Yocto project view they probably always want all CVEs always
Schlumpf has quit [Ping timeout: 252 seconds]
<mrybczyn[m]>
ernstp: we're in the same boat here
<ernstp>
mrybczyn[m]: 👍
alessioigor has joined #yocto
florian has joined #yocto
<ernstp>
mrybczyn[m]: but the summary file is always still there!
<mrybczyn[m]>
ernstp: a small secret: I'd like to split the image and sdk reports
<ernstp>
mrybczyn[m]: the sdk is also an image. but the whole sdk build could have another summary...
OnkelUlla has quit [Read error: Connection reset by peer]
OnkelUlla has joined #yocto
<ernstp>
mrybczyn[m]: I think my initial annoyance was that simply enabling "cve-check" caused sqlite-native to build and include sqlite in the cve list
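For context, the class being discussed is enabled with a single line in local.conf, e.g.:
    # conf/local.conf
    INHERIT += "cve-check"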
<jclsn[m]>
Is there a way to add the toolchain and sysroot to CMake without sourcing it?
<jclsn[m]>
Actually this shouldn't be necessary anymore, should it? I added
<jclsn[m]>
set(CMAKE_TOOLCHAIN_FILE /opt/fslc-wayland/3.4-snapshot-20220517/sysroots/x86_64-fslcsdk-linux/usr/share/cmake/OEToolchainConfig.cmake) to my CMakeLists.txt, but it doesn't find anything
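For anyone hitting the same thing: OEToolchainConfig.cmake mostly reads environment variables that the SDK's environment-setup script exports (OECORE_TARGET_SYSROOT, CFLAGS and friends), so it generally does need that script sourced, and the toolchain file is best passed on the command line rather than set from CMakeLists.txt. A sketch, reusing the SDK path above (the environment-setup file name depends on the SDK target):
    . /opt/fslc-wayland/3.4-snapshot-20220517/environment-setup-<target>
    cmake -S . -B build \
        -DCMAKE_TOOLCHAIN_FILE="$OECORE_NATIVE_SYSROOT/usr/share/cmake/OEToolchainConfig.cmake"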
<Schiller>
RP: Hey, sorry I couldn't respond in time to <<are you using the codebaseGenerator function?>>. Atm I removed the codebaseGenerator and have a normal SingleScheduler which seems to trigger correctly. The problem is the <<Fetch yocto-autobuilder-helper>> step. It does a git clone from the autobuilder repo into the build directory and then tries a <<git
<Schiller>
reset --hard <commit> -->>. I don't quite understand why it even tries to do this, as it surely should fail because they are different repos. Can you give me some more intel on how this is meant to work?
alessioigor has quit [Quit: alessioigor]
<RP>
Schiller: I don't remember the details but I suspect you may need something similar to what we have but tweaked for your repos
uglyoldbob has quit [Quit: Connection closed for inactivity]
florian_kc has joined #yocto
nemik has quit [Ping timeout: 255 seconds]
nemik has joined #yocto
<jaskij[m]>
RP: I can't file a patch immediately, as I'm at work and we don't have a FOSS policy (although I'm working on it, this will take some time though). Will try to get at least the quick and dirty workaround in later this week (I'm unfortunately busy tonight).
nemik has quit [Ping timeout: 255 seconds]
nemik has joined #yocto
<manuel1985>
Is anyone around here developing yocto on an Apple M1? What's your experience? My boss offers us some and I'm hesitating.
nemik has quit [Ping timeout: 244 seconds]
nemik has joined #yocto
GNUmoon has joined #yocto
<LetoThe2nd>
manuel1985: it's cool.
nemik has quit [Ping timeout: 276 seconds]
nemik has joined #yocto
<LetoThe2nd>
manuel1985: given a few assumptions: 1) you get a Parallels license to run a Linux VM 2) you get enough RAM 3) you don't try to build "older releases", e.g. those that require syslinux for wic before it was aarch64-enabled
<manuel1985>
LetoThe2nd: Thanks, that's good to know; I'll keep it in mind.
<manuel1985>
Is Parallels superior to other VMs?
<manuel1985>
I'd say 16gig RAM is enough for now. True?
<JaMa>
depends on the type of images you're regularly building, for me 16gig is terribly small
<LetoThe2nd>
manuel1985: i'd go for 32GB, and... what other vms do you mean exactly?
<manuel1985>
LetoThe2nd: VirtualBox or VMware. (Don't know if they even support Apple, so perhaps my question was a stupid one)
<LetoThe2nd>
manuel1985: ;-) good guess, exactly thats the point.
<manuel1985>
D'oh
<RP>
jaskij[m]: sounds good thanks!
<jaskij[m]>
<manuel1985> "I'd say 16gig RAM is enough..." <- IME, on a Linux host, running Yocto in a VM, with CLion (IDE) and a browswer open, 32 GB is tight but doable.
<jaskij[m]>
With M1 using the same memory for CPU and GPU personally I'd shoot for 64 gigs
<jaskij[m]>
iirc C++ builds can easily do 1+ GB/thread
<LetoThe2nd>
jaskij[m]: well, it depends.
<LetoThe2nd>
jaskij[m]: i'm on 32 and not noticing any tightness, but again, YMMV
<jaskij[m]>
oh, it sure does, but as a generalization for using Yocto 1 GB/core seems like a decent approximation
* JaMa
is regularly triggering OOMK with 2GB/core (128G/64c)
<jaskij[m]>
fair, the tightness started when I added more VMs or launched a second IDE
<jaskij[m]>
when I had a VM, I had it set to 20 GB for a 6c12t CPU
<jaskij[m]>
with a headless Debian running Yocto
<LetoThe2nd>
jaskij[m]: exactly, and i'm usually on one builder VM, and using vscode. so essentially there are only two real workloads: browser(s) and one vm 6c20gb.
<LetoThe2nd>
works pretty nicely for me.
ptsneves has joined #yocto
<jaskij[m]>
so yeah, for a 10c M1 32 GB should be enough, if it's just browser, editor and a Yocto VM
<JaMa>
as long as you don't build chromium or nodejs
<jaskij[m]>
Back when Dunfell was latest I gave up on packaging nodejs things
<manuel1985>
Thanks guys! Your field reports are very helpful.
<jaskij[m]>
was a nightmare, especially if your nodejs code had native dependencies (eg. serial)
<LetoThe2nd>
jaskij[m]: i'm "only" doing documentation, demo and talking these days, no more nodejs involved! yes!
* JaMa
just bisecting gcc to find out where it breaks webruntime build, as many cores as possible is useful :)
<LetoThe2nd>
having said that, i actually like writing js/ts - but not for an embedded target
<jaskij[m]>
Luckily, I left the company before packaging nodejs backends under Yocto became required. And for weeks I cursed the day I OKed the nodejs backend.
<jaskij[m]>
Nowadays I'm firmly in the Python backend camp.
<jaskij[m]>
sure, Python native dependencies can be a pain, and setuptools seems mildly cursed by itself, but it's at least a known quantity with good support in Yocto
<neverpanic>
At my previous employer, we regularly hit OOM with 2GB per core, compiling lots of C++
<neverpanic>
IIRC we ended up with 192GiB/40c in the end, which may have been a bit overkill
<LetoThe2nd>
the problem is not nodejs or python per se. the key is using the right tool for the given problem, and understanding that actually deploying through a yocto build is part of the problem, not something that will magically happen.
<LetoThe2nd>
neverpanic: at my last company we actually bought a 256c512gb epyc box, which served as the dev host for the sw people.
<jaskij[m]>
LetoThe2nd: you are right, and because of that I adopted the policy that all dependencies going into Yocto must be okayed by me first (working as the only Yocto, and Linux in general, person in a small embedded place has its perks).
<neverpanic>
Shared machines doing Yocto builds were always a bit annoying. It becomes bearable if everybody does their builds in cgroups.
prabhakarlad has quit [Quit: Client closed]
td70 has joined #yocto
<td70>
Hi,
<td70>
is there any way to decrease the build time with sstate-cache disabled? We have jobs that test the build from scratch, and after updating from Krogoth to Gatesgarth the build time increased from 3-4h to 5-6 hours.
<LetoThe2nd>
td70: you can try to trace the build and find out if there is any outlier that eats an unusual amount of time, but in the krogoth to gatesgarth history there have been a lot of substantial changes and updates, so i'd expect this to be "normal"
<JaMa>
td70: more cores, more ram
otavio has joined #yocto
<LetoThe2nd>
even that doesn't always scale
<LetoThe2nd>
td70: and please note that gatesgarth is EOL and unsupported by now too!
<td70>
yes, I know it's no longer supported. when we started the update it was still alive, but it took a lot of time to update :D
<LetoThe2nd>
td70: keep on updating then!
<JaMa>
why didn't you target dunfell as it's LTS?
davidinux has quit [Ping timeout: 255 seconds]
<td70>
I wasn't part of the decision making, but we probably targeted something more recent and didn't expect the update to take 1.5 years
davidinux has joined #yocto
<td70>
obviously we will start the next update soon, but we're trying to solve the problems we have now
nemik has quit [Ping timeout: 260 seconds]
nemik has joined #yocto
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
fitzsim has quit [Ping timeout: 258 seconds]
Schiller has quit [Quit: Client closed]
seninha has joined #yocto
BobPungartnik has joined #yocto
seninha has quit [Remote host closed the connection]
seninha has joined #yocto
cmd has joined #yocto
BobPungartnik has quit [Client Quit]
pgowda_ has joined #yocto
tnovotny has joined #yocto
Schiller has joined #yocto
<Guest87>
anybody have tips on putting toaster behind a reverse proxy?
rob_w has joined #yocto
<JPEW>
derRichard: We bind mount in the entire working directory into the docker container and it seems to work just fine
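A minimal sketch of that bind-mount setup (the builder image name is hypothetical):
    docker run --rm -it \
        --user "$(id -u):$(id -g)" \
        -v "$PWD:$PWD" -w "$PWD" \
        my-yocto-builder:latest bash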
<derRichard>
JPEW: so do i here too. i still have no idea why i'm facing these problems
<JPEW>
derRichard: Do you have builds that get interrupted (e.g. stopped in the middle)
<derRichard>
not that i know of.
<JPEW>
derRichard: Sometimes it happens without realizing; anyway, pseudo does heavy in-memory caching, and if it gets killed for some reason (esp. if the container it's running in dies, because the kernel will SIGKILL all processes), it won't write out its database. This causes things to get out of sync if you build again
<derRichard>
qschulz: how is this related to my problem?
<derRichard>
did pseudo also fail randomly?
<qschulz>
derRichard: don't know, but whenever I read about pseudo+container failing I bring this up
<derRichard>
:)
<qschulz>
I think it consistently crashed on ncurses or icu native recipes
<qschulz>
however, I never have sstate-cache enabled
<qschulz>
so YMMV
<qschulz>
(will enable sstate-cache but don't have the time to set it up across jenkins nodes atm)
<sotaoverride>
so my recipe only has a bunch of install -d under do_install, why do i get this "Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM) [license-checksum]"
<sotaoverride>
not fetching anything, just creating directories
<qschulz>
sotaoverride: interesting kind of recipe.. what's the need for this? can't you just install the directories in the recipes that need it?
<Schiller>
RP Hey. When i do a commit in the repo on a branch other than master i get the Error: <<Target not present in branch configuration>>, when the autobuilder runs the <<target-present>> script. Where do i make the target present in the config.json?
<sotaoverride>
qschulz: it's just for my filesystem overlay stuff, I dont need to fetch anything but just setup the directory structure for the upper/lower dir
<sotaoverride>
and im on older yocto so i cant use the fsoverlay bb class either
Net147 has quit [Quit: Quit]
<derRichard>
qschulz: thx
<ptsneves>
qschulz it uses an incomplete fuse implementation incompatible with pseudo. The nightmare
<sotaoverride>
qschulz: is there some sort of ruling against straight-up recipes for just setting up directories and such?
Net147 has joined #yocto
Net147 has joined #yocto
Net147 has quit [Changing host]
<ptsneves>
sotaoverride maybe you can just do a rootfs post process where you add those directories. It is a bit strange to have a package with empty directories and i am not sure package managers even "like" that
<qschulz>
sotaoverride: all recipes have a license, that's the issue BitBake is reporting. I don't know if there's a default license (or was), because it probably should have failed on LICENSE missing. If LICENSE is set to something common, you could reuse one from the common-licenses directory (you probably have some recipes using COMMON_LIC_DIR or something like that in LIC_FILES_CHKSUM somewhere)
<qschulz>
sotaoverride: or follow what ptsneves just suggested :)
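A sketch of the common-licenses route, assuming the recipe contents really are MIT-licensed (the checksum is the one for the MIT text shipped in oe-core's meta/files/common-licenses):
    LICENSE = "MIT"
    LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"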
<ptsneves>
LICENSE="closed" will make that requirement go away
<qschulz>
ptsneves: yeah but it's also too easy to have everything use LICENSE = CLOSED instead of taking 5min to do things properly (does not apply to this uncommon recipe of sotaoverride though)
<ptsneves>
qschulz definitely agree. I have seen projects doing that and then needing to go through lots of recipes to fix the licenses
<sotaoverride>
yeah, I think ill go with what ptsneves suggested, I was getting crap for setting LICENSE = CLOSED on a PR I had up for it :P ill find out the right rootfs post process
<ptsneves>
sotaoverride the Yocto inquisition
<qschulz>
derRichard: also, we had to drastically increase containers.pids_limit in /etc/containers/containers.conf for podman to not kill the process (we have it set to 100000 instead of the default 1024)
<qschulz>
s/process/container/
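The setting in question, for reference:
    # /etc/containers/containers.conf
    [containers]
    pids_limit = 100000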
<sotaoverride>
ptsneves: I can just append the directory creation function to ROOTFS_POSTPROCESS_COMMAND ? does that sound good enough?
<qschulz>
derRichard: sharing all my "knowledge" of container work-arounds :)
<derRichard>
qschulz: well, we don't see sporadic deaths of pseudo (or other programs in the container). we see that pseudo aborts itself due to inode mismatches
<derRichard>
we've run yocto in docker for years, all good so far.
<qschulz>
sotaoverride: I'd create a function that rootfs_postprocess_cmd would then contain. but yes
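A minimal sketch of that, in the image recipe (function and directory names are just examples):
    create_overlay_dirs() {
        install -d ${IMAGE_ROOTFS}/data/overlay/upper
        install -d ${IMAGE_ROOTFS}/data/overlay/work
    }
    ROOTFS_POSTPROCESS_COMMAND += "create_overlay_dirs; "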
Guest21 has joined #yocto
<qschulz>
derRichard: yeah, I could rarely finish a build with rootless podman under jenkins (but for some reason, on the same server, running it manually sometimes worked...)
<qschulz>
and it was pseudo inode mismatches, fixed with --tmpfs /tmp
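i.e. something along these lines (image name is hypothetical):
    podman run --rm -it --tmpfs /tmp -v "$PWD:$PWD" -w "$PWD" my-yocto-builder:latest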
<Guest21>
After bumping to the kirkstone version I got "python -m installer: error: unrecognized arguments: --interpreter /home/builder/yocto-kirkstone-next/builddir/tmp/work/x86_64-linux/python3-wheel-native/0.37.1-r0/dist/wheel-0.37.1-py3-none-any.whl". Any idea how to fix it? (I'm not a Yocto expert, sorry :( )
<qschulz>
also, rootless podman does not behave the same as podman (it worked on "root" podman)
<derRichard>
qschulz: hmmm, i need to dig into this in detail. so far it makes no sense to me why /tmp plays a role
<qschulz>
derRichard: explained in the link I sent you earlier IIRC
<qschulz>
I was not involved in the discussion; tlwoerner was, with folks from KAS
<derRichard>
as i said, i need to dig into this in detail
<qschulz>
something about the filesystem storage being different
<qschulz>
and also changes of tech between podman versions
<ptsneves>
RP did you explore the idea of git subtree for poky ?
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
sakoman has joined #yocto
<hushmoney>
i'm trying to use a layer provided to me that lists both dunfell and honister in LAYERSERIES_COMPAT. is there a bitbake config to make it tolerate the old style override syntax, or is it wrong for a layer to advertise compatibility with both?
<JaMa>
hushmoney: the new syntax is backwards compatible, so it can be compatible with both, but then it should be using the new syntax
kscherer has joined #yocto
<qschulz>
hushmoney: or said another way, dunfell (since 3.1.11 I think?) supports both syntaxes
<qschulz>
so using the new syntax for a layer that supports dunfell and honister is ok
<qschulz>
provided your users are on dunfell >= 3.1.11 (and if they are not, they should upgrade (we're on 3.1.16 now))
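For reference, the syntax difference being discussed ("foo" and the patch name are just examples):
    # old (pre-honister) override syntax
    RDEPENDS_${PN} += "foo"
    SRC_URI_append = " file://fix.patch"

    # new syntax (honister onwards, also accepted by dunfell >= 3.1.11)
    RDEPENDS:${PN} += "foo"
    SRC_URI:append = " file://fix.patch"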
<RP>
ptsneves: yes, we'd have to change the layout :/
<ptsneves>
RP oh. So no free lunch
rob_w has quit [Remote host closed the connection]
<RP>
ptsneves: of course not
MWelchUK1 has quit [Ping timeout: 240 seconds]
<hushmoney>
JaMa: qschulz: ah-ha, thanks i didn't consider it that way. in that case i should revise my question to say they also advertise compat with zeus. maybe they haven't updated the syntax yet because some of their users still use zeus, but then it seems they shouldn't advertise honister
<qschulz>
hushmoney: there's actually patches in zeus branch for the old override syntax I think
<qschulz>
(at least thud has it, and it predates zeus)
<hushmoney>
did you mean to say for the new syntax?
<qschulz>
hushmoney: yes
<hushmoney>
i see, thanks
<qschulz>
so if you update to latest commit in zeus branch you might have support for the new syntax too
<qschulz>
hushmoney: but I'd triple check, i'm not entirely sure the patch is there
<hushmoney>
i sure wish we would just refer to these by version number, i keep having to look up which release comes before or after another release
Schiller has quit [Ping timeout: 252 seconds]
<RP>
hushmoney: they do sort alphabetically FWIW
<RP>
within a name series anyway
<jaskij[m]>
Speaking of licensing, if upstream's licensing is a mess my only recourse is to bug them to fix it, right?
<hushmoney>
RP: thank you i was too stupid to notice that
mckoan is now known as mckoan|away
pgowda_ has quit [Quit: Connection closed for inactivity]
ThomasRoos has joined #yocto
<RP>
jaskij[m]: probably, depends what the mess is I guess
tre has quit [Remote host closed the connection]
<jaskij[m]>
RP: The package has an overarching license, but a lot of submodules or individual files are under different licenses. They do have a combined licenses file, but it's woefully out of date.
zwelch_ has joined #yocto
<jaskij[m]>
Referring to nonexistent files and the like
<RP>
jaskij[m]: you can either try and untangle it or ask them to
<jaskij[m]>
Mhm... I might try looking through if some files were simply moved. The lack of a FOSS policy means anything I want to commit I have to do on my own time :/
ptsneves has quit [Ping timeout: 252 seconds]
fitzsim has joined #yocto
ThomasRoos has quit [Remote host closed the connection]
ThomasRoos has joined #yocto
nemik has quit [Ping timeout: 258 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 255 seconds]
nemik has joined #yocto
seninha has quit [Ping timeout: 256 seconds]
nemik has quit [Ping timeout: 240 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 272 seconds]
nemik has joined #yocto
ThomasRoos has quit [Remote host closed the connection]
seninha has joined #yocto
bps2 has quit [Ping timeout: 272 seconds]
Guest21 has quit [Quit: Client closed]
Schiller has joined #yocto
Schiller has quit [Client Quit]
Tokamak has quit [Ping timeout: 256 seconds]
leon-anavi has quit [Quit: Leaving]
florian has quit [Quit: Ex-Chat]
kevinrowland has joined #yocto
Tokamak has joined #yocto
florian_kc has quit [Ping timeout: 244 seconds]
kevinrowland has quit [Quit: Client closed]
tnovotny has quit [Quit: Leaving]
<JaMa>
hushmoney: qschulz: neither zeus (bitbake 1.44) nor thud (1.40) supports the new syntax AFAIK
<JaMa>
claiming support for both zeus and honister is usually just wrong. some people prefer to claim compatibility with the next release while it's still far from being released (so they might have added honister before it required the new syntax), which is why I hate these premature claims
<sakoman>
JaMa: still struggling trying to understand the webkit-gtk reproducibility breakage with the gcc version bump. The files are so huge it really makes analysis difficult :-(
<sakoman>
the .so is more than a gigabyte!
florian_kc has joined #yocto
florian_kc has quit [Ping timeout: 244 seconds]
<JaMa>
sakoman: I was trying to find whether there was some fix for webkit-gtk in master (after the upgrade to 11.4) but haven't found any. it's strange that it wouldn't have been an issue in master before the upgrade to gcc-12 as well (unless it was an issue but disappeared before someone tried to tackle it)
goliath has joined #yocto
<JaMa>
sakoman: the 2nd part of the fix I wanted to backport (from gcc-12) is starting to be more and more messy, so no rush to get 11.4 in kirkstone now (at least from me)
<sakoman>
JaMa: yeah, I looked for fixes in master too, and it is quite possible it was there for a bit and disappeared because something else changed
<sakoman>
JaMa: master upgraded to 12.1 a couple of weeks after the 11.3 update, so that may explain why no one put much energy into it
<sotaoverride>
hey fsoverlay pros here, is there a way to prevent deletes from the lowerdir (for directories / files overlayed)?
<sotaoverride>
I want reads and writes to happen normally, but for stuff that's in the lower dir, I want to prevent lowerdir contents from ever getting deleted
<JaMa>
sakoman: do you have a handy one-liner to run the reproducibility test just for webkit-gtk? I would test it on master with 11.4 just to see if it was reproducible there
<sakoman>
Jama: master never had 11.4, just 11.3
<JaMa>
sakoman: ah sorry I was thinking about 11.3 in all my messages about it
<sakoman>
JaMa: :-)
<sakoman>
JaMa: I haven't tried doing reproducible builds locally, just on the autobuilder. Let me take a look and see what it is doing there
<JaMa>
ok, will try locally
<sakoman>
JaMa: looks like it is doing oe-selftest -r reproducible
<sakoman>
JaMa: I don't think you can limit that script to do just a single package
Tokamak has quit [Ping timeout: 244 seconds]
<sakoman>
But basically it does one build allowing sstate to be used, and then another build with no sstate usage, and compares the outputs of the two builds
<sakoman>
reproducibleA is the output w/sstate, reproducibleB is without
<sakoman>
normally the diffoscope output would be in diff-html
Tokamak has joined #yocto
<sakoman>
but diffoscope crashed in this case due to the size of webkit-gtk
<sakoman>
what I did was grab the .debs from reproducibleA and reproducibleB, extract the files, and compare them to see what changed. Everything looked the same except the .so files
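Roughly the following, with placeholder package paths:
    mkdir -p extractA extractB
    dpkg-deb -x reproducibleA/deb/<arch>/<package>.deb extractA
    dpkg-deb -x reproducibleB/deb/<arch>/<package>.deb extractB
    diff -r extractA extractB    # or diffoscope extractA extractB, when it copes with the size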
<JaMa>
I'm testing locally with cee443ae75f (last commit in master before the upgrade to 12.1)
<JaMa>
that's the default; I don't see how to restrict it just to webkit-gtk, but I guess webkit-gtk is the heaviest one in core anyway, so will let it continue to run with defaults
<hushmoney>
qschulz: thanks, good to know
<sakoman>
JaMa: yeah, I didn't see an easy way to restrict it either. It will take quite a few hours . . .
* JaMa
has more build power than energy right now, so longer build time won't be an issue
<sakoman>
JaMa: yes, that patch will be in my next-next set of patches :-) I usually let them age in master for 2-3 days to see if they cause any issues ;-)
kevinrowland has joined #yocto
<JaMa>
cool, makes sense
<sakoman>
JaMa: I can relate to the build_power/personal energy equation!
florian_kc has joined #yocto
<JaMa>
didn't take that long to get first failure from ovmf-native which doesn't like gcc-12 on my host :)
<derRichard>
qschulz: i'm digging now into that tmpfs podman issue. my case seems to be a little different. i'm not using an unprivileged podman. so no fuse-overlayfs here.
gsalazar has joined #yocto
gsalazar has quit [Ping timeout: 256 seconds]
vladest has quit [Remote host closed the connection]
vladest has joined #yocto
seninha has quit [Quit: Leaving]
Tokamak has quit [Ping timeout: 250 seconds]
florian_kc is now known as florian
Tokamak has joined #yocto
Tokamak has quit [Ping timeout: 246 seconds]
<sakoman>
JaMa: I just ran into an ovmf-native issue on kirkstone too: GenFfs.c:545:5: error: pointer ‘InFileHandle’ used after ‘fclose’ [-Werror=use-after-free]
<jaskij[m]>
derRichard: it's a build issue, right? Does Podman use lxcfs? I've had build failures (most notably in fsck.ext4) because of a bug in lxcfs. Probably unrelated, but I thought I'd mention it
<sakoman>
JaMa: I already grabbed the ovmf-native fix into kirkstone
<JaMa>
"lookup github.com: Temporary failure in name resolution" failed twice with 10 mins between while github.com works for me, so probably something else is going on
<sakoman>
JaMa: sounds like my life :-(
<derRichard>
jaskij[m]: no need to worry. :-)
<jaskij[m]>
JaMa: Go package? `do_compile[network]=1`
florian has quit [Ping timeout: 240 seconds]
<jaskij[m]>
(if you're one of the people who helped me with that, sorry, I have trouble remembering who talked with me in chats)
<JaMa>
jaskij[m]: this is git lfs in do_unpack
<jaskij[m]>
IIRC Kirkstone blocks networking in any task other than do_fetch
<JaMa>
that's correct
<jaskij[m]>
Ah, I'll just shut up :P not taking it hard, just know too little to actively help in here.
<JaMa>
maybe it's the combination of gitsm + lfs which isn't so common
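i.e. a SRC_URI along these lines (URL and branch are placeholders):
    SRC_URI = "gitsm://github.com/example/project.git;protocol=https;branch=main;lfs=1"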
goliath has quit [Quit: SIGSEGV]
mvlad has quit [Remote host closed the connection]
Tokamak has joined #yocto
<RP>
JaMa: there may be an open bug for something like that
Tokamak has quit [Read error: Connection reset by peer]
Tokamak has joined #yocto
nemik has quit [Ping timeout: 244 seconds]
nemik has joined #yocto
Tokamak has quit [Read error: Connection reset by peer]
Tokamak has joined #yocto
nemik has quit [Ping timeout: 246 seconds]
nemik has joined #yocto
Tokamak has quit [Remote host closed the connection]
Tokamak has joined #yocto
<zwelch_>
looks like the downloads.yoctoproject.org certificate expired a couple of hours ago
<zwelch_>
halstead: ^^^ is that your bailiwick?
<sakoman>
zwelch: I sent a note to the help desk about that
<sakoman>
zwelch: yes, it will end up with halstead
<halstead>
zwelch_: Yes. And thank you sakoman.
<halstead>
zwelch_: sakoman: I've forced the renewal.
dev1990 has quit [Quit: Konversation terminated!]
seninha has joined #yocto
<sakoman>
halstead: thanks for the quick service!
<zwelch_>
halstead: Thanks, that did the trick. Curious why it didn't autorenew, but happy that it works.
<halstead>
zwelch_: The renewal cronjob wasn't created correctly during the server install in March when we moved downloads.yp.org to the new data center.
florian has joined #yocto
<halstead>
zwelch_: More specifically, it was converted to a job handled by periodic instead of cron directly, and was not enabled.
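For anyone curious, such a renewal job is typically something like the following; this is a generic example, not the autobuilder's actual configuration:
    # /etc/cron.d/certbot-renew
    0 3 * * * root certbot renew --quiet --deploy-hook "systemctl reload nginx"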
<zwelch_>
halstead: it's always the little things, eh? :)