tgamblin has quit [Remote host closed the connection]
tgamblin has joined #yocto
davidinux has quit [Ping timeout: 255 seconds]
davidinux has joined #yocto
Ablu has quit [Ping timeout: 260 seconds]
Ablu has joined #yocto
Daanct12 has joined #yocto
Vonter has quit [Ping timeout: 258 seconds]
Vonter has joined #yocto
prabhakarlad has quit [Quit: Client closed]
jclsn has quit [Ping timeout: 260 seconds]
jclsn has joined #yocto
tlhonmey has quit [Quit: Client closed]
amitk has joined #yocto
Furry has joined #yocto
Furry has quit [Quit: Leaving]
Schlumpf has joined #yocto
Guest98 has joined #yocto
linfax has joined #yocto
rob_w has joined #yocto
rfuentess has joined #yocto
luc4 has joined #yocto
schtobia has quit [Quit: Bye!]
schtobia has joined #yocto
zpfvo has joined #yocto
l3s8g has joined #yocto
Kubu_work has joined #yocto
vladest has quit [Remote host closed the connection]
vladest has joined #yocto
<LetoThe2nd>
yo dudX
<Guest98>
hi, morning
Guest98 has quit [Ping timeout: 245 seconds]
Guest98 has joined #yocto
zpfvo has quit [Ping timeout: 255 seconds]
leon-anavi has joined #yocto
xmn has quit [Ping timeout: 272 seconds]
radanter has joined #yocto
silbe has quit [Ping timeout: 255 seconds]
Guest9 has joined #yocto
Guest98 has quit [Ping timeout: 245 seconds]
zpfvo has joined #yocto
bhstalel has joined #yocto
<bhstalel>
I'm trying to debug and analyse collection expansion and layer config expansion, and I noticed that putting only BBFILES in layer.conf does the trick of finding the recipe and also finding the bbclass. That isn't logical, because BBPATH is used to find conf and bbclass files and it isn't defined; does bitbake set a default value for it?
<bhstalel>
though I checked BBPATH's expanded value, and it does not contain my new layer's path
Guest86 has joined #yocto
l3s8g has quit [Ping timeout: 258 seconds]
mvlad has joined #yocto
<landgraf>
bhstalel: run bitbake -e <target> and see where the value came from
Guest86 has quit [Client Quit]
<bhstalel>
The logic says that bitbake needs to expand BBPATH in order to find conf and .bbclass files, right? I am explicitly creating a new layer with a custom, unique class name, and I only put BBFILES in layer.conf: no BBFILE_COLLECTIONS, no BBFILE_PATTERN, nothing. I only set BBFILES to the layer directory ${LAYERDIR} to force bitbake to go through
<bhstalel>
find_bbfiles(). It finds the recipe and also the .bbclass file under classes/, so if bitbake uses BBPATH to find the bbclass and my BBPATH does not contain the new class, how did it find the class?
<bhstalel>
I am still trying to find the answer in the code, but I thought I'd ask
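For reference, a conventional layer.conf sets all of these variables together; a minimal sketch (the layer name "mylayer" is an illustrative placeholder):
```
# BBPATH is what lets bitbake locate conf/ and classes/ in this layer
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"
BBFILE_COLLECTIONS += "mylayer"
BBFILE_PATTERN_mylayer = "^${LAYERDIR}/"
BBFILE_PRIORITY_mylayer = "6"
LAYERSERIES_COMPAT_mylayer = "kirkstone"
```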
<qschulz>
we're trying to have the commit date of the last commit in a layer be the timestamp for the rootfs, cf. REPRODUCIBLE_TIMESTAMP_ROOTFS
<qschulz>
so we run git log against that layer and assign the value to the variable
<qschulz>
now, I'm pretty sure that this isn't enough because bitbake does not reparse the file setting the variable if it doesn't detect a change, correct?
bhstalel has quit [Ping timeout: 245 seconds]
<qschulz>
what are the options for this? I assume I could just set the nostamp varflag on do_rootfs, for example, but that's kind of overkill
<qschulz>
we could also set BB_DONT_CACHE in the recipe directly, to force reparsing of the recipe on every bitbake execution?
<qschulz>
also, michaelo we should probably document REPRODUCIBLE_TIMESTAMP_ROOTFS and how to properly use it?
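A minimal sketch of the approach qschulz describes (the layer path here, COREBASE, is an illustrative stand-in), with the caveat above that the value is computed at parse time, so a cached parse will not refresh it:
```
# committer date of the last commit becomes the rootfs timestamp;
# BB_DONT_CACHE = "1" in the recipe would force a reparse on each run
REPRODUCIBLE_TIMESTAMP_ROOTFS = "${@bb.process.run('git log -1 --format=%ct', cwd=d.getVar('COREBASE'))[0].strip()}"
```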
<mcfrisk>
sorry, need to ask a stupid question: the yocto autobuilder/buildbot seems to be configured to capture nothing but the bitbake output from each build? Images and extra log files are not captured or archived in case of failures? With qemu boot/login prompt failures, the qemuboot log is not captured; with ptest failures, the do_testimage task logs are not archived, only the bitbake build output.
JerryM has joined #yocto
<mcfrisk>
so I guess RP and others are manually archiving extra logs from workers when builds fail?
l3s8g has joined #yocto
<JerryM>
can anyone tell me what happened on meta-openembedded? kirkstone reports a force push for me...
<landgraf>
mcfrisk: I think so. Richard, Alex or the swat team member on duty does this manually
<landgraf>
at least for core dumps and some additional logs
<RP>
mcfrisk: yes, I'm manually extracting things
<mcfrisk>
RP: can I propose adding some files to be archived automatically in yocto-autobuilder-helper/scripts/publish-artefacts? For example the do_testimage task logs and the qemu boot logs?
<mcfrisk>
though I don't know how those would be visible in the buildbot web UI, https://autobuilder.yoctoproject.org/typhoon/#/builders/127/builds/2199 for example. I'm just lost in the UI and failing to find the details I'd need to help with the bugs. Plain bitbake log output is not enough in many cases.
<RP>
mcfrisk: it would end up as huge amounts of data we don't use :(
mr_nice has joined #yocto
alessioigor has joined #yocto
<mcfrisk>
sigh, sadly bug reports with just the bitbake output are not really useful. The task logs would be needed, and qemuboot too. I don't mind if the files get recycled in a few days. In all the CIs I've set up, the build output and some extra logs are archived and linked from the build job result web page.
Vonter has quit [Ping timeout: 248 seconds]
Vonter has joined #yocto
<RP>
mcfrisk: recycle which pieces in a few days, though? You don't see the issues we already have with data building up and causing problems :(
<RP>
mcfrisk: I do get that we have challenges, but I simply can't do everything we need, and in general these logs haven't been the biggest blocker on issues. We also have a log collection bug open for ptest logs, for example, which was a much more pressing issue
<RP>
Sadly I think I'm blocking that one, despite having someone lined up to work on it, due to lack of time to give the input needed
Schlumpf has quit [Quit: Client closed]
florian_kc has joined #yocto
l3s8g has quit [Ping timeout: 240 seconds]
l3s8g has joined #yocto
florian has joined #yocto
zpfvo has quit [Quit: Leaving.]
prabhakarlad has joined #yocto
zpfvo has joined #yocto
l3s8g has quit [Ping timeout: 240 seconds]
mr_nice has quit [Remote host closed the connection]
zpfvo has quit [Ping timeout: 255 seconds]
l3s8g has joined #yocto
Daanct12 has quit [Read error: Connection reset by peer]
arisut has quit [Quit: install gentoo]
Daanct12 has joined #yocto
arisut has joined #yocto
zpfvo has joined #yocto
arisut has quit [Client Quit]
arisut has joined #yocto
<RP>
rburton, mcfrisk: The bug doesn't want to appear now, builds are passing :/
rob_w has quit [Remote host closed the connection]
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<LetoThe2nd>
any ideas on how to locally cache git repositories? background: my automated builds use cached downloads and sstate, but I'm on a relatively slow connection, so I'm seeing about 5 minutes for a build where nothing changed, of which 4 minutes are just pulling the layers from git.
<landgraf>
LetoThe2nd: see the fetcher tests for inspiration (MIRRORS and PREMIRRORS in particular)
<landgraf>
it lets you speed things up a lot on kernel git fetching etc.
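For reference, the PREMIRRORS mechanism landgraf points at looks roughly like this (the mirror URL is an illustrative placeholder):
```
# try a local mirror before the upstream servers for git and https fetches
PREMIRRORS:prepend = "\
    git://.*/.*   http://local-mirror.example.com/downloads/ \
    https://.*/.* http://local-mirror.example.com/downloads/ \
"
```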
<LetoThe2nd>
landgraf: thanks, but that's one step later, right? my problem is the initial setup of the metadata layers before bitbake is even started.
<LetoThe2nd>
using kas here, for the record.
<rburton>
LetoThe2nd: kas has built in support for 'reference repositories'
<rburton>
LetoThe2nd: the biggest gap is that kas doesn't have an "update all the repos" script, so that has to be done out of band, but even if you only update them once a month that's a huge win, as it only has to grab the missing objects
<rburton>
basically, kas uses the repos you cache as reference repositories, so it will hit the real server for the object lists it needs, and fetch from the reference repository on disk if they're there.
<rburton>
RP: also i've seen a WIP patch to make it use 1M per GB of ram
<rburton>
LetoThe2nd: a `bitbake core-image-sato --runall fetch; time bitbake core-image-sato` for kirkstone would be much appreciated. i've a little table of build times and machines...
<LetoThe2nd>
rburton: sure thing, once the current runs are finished can do so. will take some time, though.
<LetoThe2nd>
(because long and full pipeline)
<rburton>
no rush
<LetoThe2nd>
rburton: a cold rebuild of core-image-minimal with just downloads cached took about 28 minutes, as a first data point, without any optimizations or anything. Not bad for a 650€ box.
<rburton>
that's excellent for a 650 euro box
<LetoThe2nd>
rburton: i'll provide better data once I have it.
<RP>
rburton: that sounds like a better idea for the swiotlb
<rburton>
yeah, expect to see patches being bikeshedded on lkml later today
<RP>
rburton: I've updated master-next, I think I might merge the "switch to 6.5 pieces" but I'll run one more test with your updated patches first
<RP>
LetoThe2nd: the autobuilder does do repository caching FWIW
<RP>
LetoThe2nd: scripts in autobuilder-helper
<LetoThe2nd>
RP: thx
<LetoThe2nd>
rburton: so if I understand kas reference repositories correctly, wouldn't it already cache across builds if I just set the variable to a writable directory?
<LetoThe2nd>
rburton: or is the condition "point kas to the directory && the clone has to be in the directory according to the naming convention"?
Daanct12 has quit [Ping timeout: 255 seconds]
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
lexano has joined #yocto
<rburton>
LetoThe2nd: it won't create new reference clones
<rburton>
LetoThe2nd: this is why i want to write a plugin to create/update the reference directories
<LetoThe2nd>
rburton: okay, I got it. thanks a lot, this should be really helpful!
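A sketch of the out-of-band setup rburton describes, assuming kas's KAS_REPO_REF_DIR environment variable; the directory name kas derives from each repo URL may differ between kas versions, so the path below is illustrative:
```
# create a reference clone once, then refresh it out of band (e.g. monthly);
# kas fetches objects from here instead of the network when they exist
export KAS_REPO_REF_DIR=/srv/kas-references
git clone --mirror https://git.yoctoproject.org/poky \
    "$KAS_REPO_REF_DIR/git.yoctoproject.org.poky"
git -C "$KAS_REPO_REF_DIR/git.yoctoproject.org.poky" fetch
```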
ptsneves has joined #yocto
ptsneves has quit [Ping timeout: 255 seconds]
Minvera2 has joined #yocto
xmn has joined #yocto
dianshi has quit [Quit: WeeChat 3.5]
pabigot has quit [Read error: Connection reset by peer]
<qschulz>
is there a way to do an inline shell function?
<qschulz>
VAR = "${@'plep'}" is python
<qschulz>
and VAR = "${@bb.build.exec_func('shellfunc', d)}" returns None
<qschulz>
because I assume it just executes the shell function and doesn't return the string printed by the function
<qschulz>
but it feels a bit weird to have a python script just for starting a python subprocess for executing some binary :)
<rburton>
no, inline python is expanded trivially because it's bitbake doing the expansion
<rburton>
whereas the shell tasks are just written to disk and run
<qschulz>
ok, so no way to do it with shell functions I guess?
<rburton>
write the whole task in python?
<qschulz>
rburton: i know, was just asking if it was possible to write it in shell
florian_kc has quit [Remote host closed the connection]
<rburton>
"it" is quite vague. no, you can't do inline shell calls.
<qschulz>
thank you, that was the only question I had :)
<qschulz>
inline python it is
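A minimal sketch of the inline-python route qschulz settles on (the helper name and command are illustrative): bb.process.run returns a (stdout, stderr) tuple, so the command's output becomes the variable's value at expansion time:
```
# e.g. at the top of the recipe or in a shared class
def run_cmd(cmd):
    import bb.process
    output, _ = bb.process.run(cmd)  # (stdout, stderr)
    return output.strip()

VAR = "${@run_cmd('uname -r')}"
```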
florian has quit [Ping timeout: 272 seconds]
luc4 has quit [Ping timeout: 272 seconds]
wyre has quit [Excess Flood]
fullstop has quit [Excess Flood]
fullstop_ has joined #yocto
wyre has joined #yocto
ederibaucourt has quit [Quit: ZNC 1.8.2 - https://znc.in]
fullstop_ is now known as fullstop
ederibaucourt has joined #yocto
deribaucourt has joined #yocto
ederibaucourt has quit [Ping timeout: 240 seconds]
Xagen has joined #yocto
Guest9 has quit [Quit: Client closed]
Vonter has quit [Ping timeout: 258 seconds]
Vonter has joined #yocto
goliath has joined #yocto
zpfvo has quit [Ping timeout: 258 seconds]
zpfvo has joined #yocto
l3s8g has quit [Ping timeout: 240 seconds]
Vonter has quit [Ping timeout: 252 seconds]
Vonter has joined #yocto
sakoman has quit [Quit: Leaving.]
sakoman has joined #yocto
sakoman has quit [Remote host closed the connection]
sakoman has joined #yocto
Guest74 has joined #yocto
<Guest74>
figured I'd ask again: I have a recipe that extends gpsd to overwrite the /etc/default/gpsd.default file; I also have a postinst that runs `update-alternatives` on the file. Now, in the image, I have a `/etc/default/gpsd` that is a symlink to the gpsd.default file, but that file is just not there? In my WORKDIR/image, the file is correctly in
<Guest74>
`/etc/default`, but in the actual image, it's missing
JerryM has quit [Quit: Konversation terminated!]
luc4 has joined #yocto
leon-anavi has quit [Quit: Leaving]
<rburton>
RP: zeddii i hear 6.5.6 is due in the next 24 hours, i wonder if any of our backports merged
<mcfrisk>
rburton: meta-arm can drop 0001-arm64-defconfig-remove-CONFIG_COMMON_CLK_NPCM8XX-y.patch after next stable kernel updates since patch is finally applied upstream
<rburton>
yeah
<rburton>
we build -dev in CI so we knew about that one
<rburton>
Guest74: the easiest thing to do is just overwrite the default gpsd.default in a gpsd.bbappend
<rburton>
alternatives and stuff is overcomplicated, unless you really need that
luc4 has quit [Ping timeout: 240 seconds]
<Guest74>
rburton that's what I did. I removed the update-alternatives call in pkg_postinst and it still isn't in `/etc/default`
<rburton>
oh right, i didn't realise gpsd did that
<rburton>
in that case just write a recipe that ships a file, register it as an alternative with a higher priority, and install it
<rburton>
don't wrestle the existing stuff, work with it
<Guest74>
it's more that the file I'm adding is just... gone?
<Guest74>
like `install -m 0644 ${WORKDIR}/gpsd-default ${D}/etc/default/gpsd.default` is in the `image` folder of my workdir, but the actual image doesn't have it
<rburton>
without seeing your recipe i can't say what you're doing, but it's wrong
<rburton>
create a new recipe that creates /etc/default/gpsd.mydefaults. register that as an alternative for gpsd-defaults, but with a higher priority. install gpsd _and your new package_, and alternatives will pick your file as the default.
<Guest74>
and `oe-pkgdata-util list-pkg-files` for gpsd-conf shows the /etc/default/gpsd.default file
<rburton>
that's gpsd.default; your example was gpsd-default
<rburton>
my typing today is terrible
<Guest74>
you're right; why is that gone, then?
<Guest74>
because not even the default gpsd file is there
<rburton>
did you install gpsd-conf?
<Guest74>
it's not in that package anyway; but gpsd RRECOMMENDS gpsd-conf
<rburton>
and do you have recommendations turned off?
<Guest74>
oh weird, I'm not seeing any of my bbappend echoes in `workdir/temp`? Like, the file is in `workdir` but not getting installed. The `do_install:append` is not working in my bbappend, I guess
<rburton>
i really suggest you don't use a bbappend
<Guest74>
and instead just duplicate the recipe?
<rburton>
no
<rburton>
as above: create a new recipe that creates /etc/default/gpsd.mydefaults. register that as an alternative for gpsd-defaults, but with a higher priority. install gpsd _and your new package_, and alternatives will pick your file as the default.
<rburton>
the recipe has gone to the effort of setting up alternatives, so use it
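A sketch of the recipe rburton describes; the file name, alternative name, link path, and priority are illustrative and should be matched against what the gpsd recipe itself registers:
```
SUMMARY = "Site-specific gpsd defaults"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

SRC_URI = "file://gpsd.mydefaults"

inherit update-alternatives

do_install() {
    install -d ${D}${sysconfdir}/default
    install -m 0644 ${WORKDIR}/gpsd.mydefaults ${D}${sysconfdir}/default/gpsd.mydefaults
}

ALTERNATIVE:${PN} = "gpsd-defaults"
ALTERNATIVE_LINK_NAME[gpsd-defaults] = "${sysconfdir}/default/gpsd"
ALTERNATIVE_TARGET[gpsd-defaults] = "${sysconfdir}/default/gpsd.mydefaults"
# higher than the priority gpsd's own package registers, so this file wins
ALTERNATIVE_PRIORITY = "100"
```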
radanter has quit [Remote host closed the connection]
Guest74 has quit [Quit: Client closed]
rfuentess has quit [Remote host closed the connection]
sgw has quit [Ping timeout: 272 seconds]
sgw has joined #yocto
Kubu_work has quit [Quit: Leaving.]
RP has quit [Remote host closed the connection]
Piraty has quit [Quit: -]
roussinm has quit [Ping timeout: 240 seconds]
Piraty has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
leon-anavi has joined #yocto
manuel1985 has quit [Quit: Leaving]
bhstalel has joined #yocto
bhstalel has quit [Quit: Client closed]
florian_kc has joined #yocto
l3s8g has joined #yocto
tokamak has quit [Quit: ZNC 1.8.2+deb2build5 - https://znc.in]
tokamak has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
roussinm has joined #yocto
prabhakarlad has quit [Quit: Client closed]
Estrella___ has joined #yocto
Estrella_ has quit [Ping timeout: 260 seconds]
amitk has quit [Ping timeout: 255 seconds]
leon-anavi has quit [Remote host closed the connection]
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
silbe has joined #yocto
Estrella___ has quit [Remote host closed the connection]
Estrella_ has joined #yocto
bhstalel has joined #yocto
silbe has quit [Ping timeout: 258 seconds]
goliath has joined #yocto
florian_kc has quit [Ping timeout: 255 seconds]
silbe has joined #yocto
alessioigor has quit [Quit: alessioigor]
l3s8g has quit [Quit: l3s8g]
bhstalel has quit [Ping timeout: 245 seconds]
tokamak has quit [Quit: ZNC 1.8.2+deb2build5 - https://znc.in]
tokamak has joined #yocto
dgriego has quit [Quit: Computer going to sleep]
dgriego has joined #yocto
RP has joined #yocto
florian_kc has joined #yocto
mbulut has joined #yocto
mbulut has quit [Client Quit]
pabigot has joined #yocto
goliath has quit [Quit: SIGSEGV]
jholtom has joined #yocto
<jholtom>
Hi, I've been working on updating an image I maintain, and I've made a number of changes to different components, but I'm finally at the end, packing up my image with wic. It is giving me a "no such file or directory" on my cramfs-xip (a symlink to a date-stamped version) that clearly exists and can be read after bitbake completes. Using bitbake 2.0.0 and kirkstone; it would be challenging to upgrade
<RP>
jholtom: if it exists at the end, it sounds like some kind of dependency issue/race ?
<jholtom>
RP: I checked the ordering of tasks, it is correct, my do_image_cramfs occurs before do_image_wic
mvlad has quit [Remote host closed the connection]
<RP>
jholtom: I'd put some debugging in, something like a prefunc with a bb.warn(str(os.listdir(XXX))) to see if what you think is happening really is
<mischief>
khem: is there a way to get lldb on the target from meta-clang without pulling in gplv3? or is there no way around binutils?
<yocton>
jholtom: I've seen this multiple times when adding a rootfs image to a wic image. Each time I had to add a dependency between the task building the rootfs image and the one building the wic image.
<jholtom>
yocton: I have a WKS_FILE_DEPENDS but it's not explicit on the rootfs.
<jholtom>
What's strange is that it had no issue for years, and now here we are with all sorts of orthogonal changes and boom
<jholtom>
I'll give it a whirl
goliath has joined #yocto
<yocton>
or maybe I ended up with 'do_image_wic[depends] += "<rootfs_image>:do_build"'
<jholtom>
yocton: fixed it. Thanks. Wild to think I've been getting away with a race like that for years.
<jholtom>
Appreciate your help too RP
<yocton>
jholtom: which solution did you use?
<jholtom>
do_image_wic[depends] since they are under the same recipe
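In recipe terms, the fix jholtom describes is roughly the following (the task name is illustrative; both tasks live in the same image recipe, hence ${PN}):
```
# make do_image_wic wait for the cramfs artifact it packs up
do_image_wic[depends] += "${PN}:do_image_cramfs"
```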