ChanServ changed the topic of #yocto to: Welcome to the Yocto Project | Learn more: https://www.yoctoproject.org | Join us or Speak at Yocto Project Summit (2021.11) Nov 30 - Dec 2, more: https://yoctoproject.org/summit | Join the community: https://www.yoctoproject.org/community | IRC logs available at https://www.yoctoproject.org/irc/ | Having difficulty on the list or with someone on the list, contact YP community mgr ndec
prabhakarlad has joined #yocto
prabhakarlad has quit [Client Quit]
prabhakarlad has joined #yocto
dacav has quit [Ping timeout: 240 seconds]
codavi has joined #yocto
codavi has quit [Ping timeout: 240 seconds]
Habbie has quit [Ping timeout: 240 seconds]
Habbie has joined #yocto
dev1990 has quit [Quit: Konversation terminated!]
dev1990 has joined #yocto
dev1990 has quit [Remote host closed the connection]
dev1990 has joined #yocto
jmiehe has quit [Quit: jmiehe]
<khem> RP seeing pseudo fetch errors with master-next
<khem> pseudo-native PROVIDES virtual/fakeroot-native but was skipped: Skipping Recipe : Unable to resolve 'df1d1321fb093283485c387e3c933d2d264e509' in upstream git repository in git ls-remote output for git.yoctoproject.org/pseudo
tcdiem has quit [Ping timeout: 252 seconds]
<khem> RP see https://git.yoctoproject.org/poky-contrib/commit/?h=kraj/poky-next&id=3a1d7992c079a727fc4d3340c5372a0467b8723c
<khem> :)
<khem> sielicki given the situation you explained, I would suggest starting with kirkstone and making your intention clear to vendors that you are using this LTS. That is the very reason we started doing LTS releases: so that a large number of layers can be compatible with each other. dunfell was our first LTS, and I could give some layers a pass for not having a release against it, but I hope they have got their act together this time around with kirkstone
<khem> Xilinx does not support dunfell? That's a bummer. I was not expecting that, but then I don't expect too much these days
<khem> I think realistically there are two options: use the LTS or stay on master. Staying on master might be troublesome too, since some BSP layers do not support master as well as others. Perhaps your procurement should set a policy in the MSA and ask explicitly for LTS support before you buy these chips
<sielicki> nah, you can see them justify it here: https://lists.yoctoproject.org/g/meta-xilinx/topic/is_xilinx_working_on_a/77383753?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,77383753
<khem> yes thats perhaps fine as dunfell was first LTS, I hope they are on board with kirkstone
<sielicki> what's disappointing to me is to see him say, "We suspect that gatesgarth will work with dunfell, but we've not tested it." Kind of why I bring up the idea of giving users the ability to try it and see with a special flag. Vendors won't stick their neck out to throw a line in there, because they don't want to be on the hook for supporting it.
<sielicki> Here's hoping. I'm really hoping to catch the LTS train on kirkstone and help my company stay on the LTS train for as long as i'm here.
<khem> I think if a SoC support layer or any other layer does not support the LTS, I would consider that a big negative for choosing that SoC, especially if you have multi-SoC products
<sielicki> the cold hard reality, much to my dismay, is that we don't actually value having up to date or secure systems. We still cook dora images.
<khem> oh wow, get your act together I must say
<sielicki> yeah
tcdiem has joined #yocto
sakoman has quit [Quit: Leaving.]
camus has joined #yocto
dacav has joined #yocto
starblue has quit [Ping timeout: 256 seconds]
starblue has joined #yocto
jclsn73 has quit [Ping timeout: 250 seconds]
jclsn73 has joined #yocto
jclsn73 has quit [Ping timeout: 252 seconds]
sakoman has joined #yocto
RobertBerger has joined #yocto
rber|res has quit [Ping timeout: 252 seconds]
jclsn73 has joined #yocto
jclsn73 has quit [Ping timeout: 268 seconds]
camus has quit [Quit: camus]
camus has joined #yocto
jclsn73 has joined #yocto
jclsn73 has quit [Ping timeout: 240 seconds]
jclsn73 has joined #yocto
jclsn73 has quit [Ping timeout: 252 seconds]
tcdiem has quit [Ping timeout: 240 seconds]
jclsn73 has joined #yocto
michalkotyla_ has joined #yocto
otavio has joined #yocto
dv_ has joined #yocto
otavio_ has quit [*.net *.split]
michalkotyla has quit [*.net *.split]
sgw has quit [*.net *.split]
dv__ has quit [*.net *.split]
sgw has joined #yocto
jclsn73 has quit [Ping timeout: 250 seconds]
jclsn73 has joined #yocto
jclsn73 has quit [Ping timeout: 256 seconds]
jclsn73 has joined #yocto
marka has quit [Ping timeout: 240 seconds]
jclsn73 has quit [Ping timeout: 256 seconds]
starblue has quit [Ping timeout: 252 seconds]
starblue has joined #yocto
jclsn73 has joined #yocto
amitk has joined #yocto
sakoman has quit [Quit: Leaving.]
marka has joined #yocto
marka has quit [Ping timeout: 240 seconds]
marka has joined #yocto
marka has quit [Ping timeout: 240 seconds]
marka has joined #yocto
frieder has joined #yocto
alessioigor has joined #yocto
alessioigor has quit [Client Quit]
alessioigor has joined #yocto
tgamblin has quit [Ping timeout: 240 seconds]
alessioigor has quit [Quit: alessioigor]
tgamblin has joined #yocto
GNUmoon has quit [Ping timeout: 240 seconds]
florian__ has joined #yocto
goliath has joined #yocto
rob_w has joined #yocto
jclsn73 is now known as jclsn
<jclsn> clear
<jclsn> Oops, no terminal haha
<jclsn> Morning
<jclsn> qschulz: I did not have any success diffing the build folders btw. The diff on linux-fslc-imx takes ages. I also tried a diff on kernel-imx-gpu-viv and I don't see anything suspicious https://pastebin.com/DC53BDVH
<jclsn> If I don't find the cause of this, I will just wipe my laptop. The other machine I have installed produces a bootable kernel with the same Ubuntu version, tools and zsh
<cb5r> What's the way to enable LTO for only one recipe?
GNUmoon has joined #yocto
mvlad has joined #yocto
<jclsn> cb5r: LTO?
<cb5r> LinkTimeOptimization
florian__ has quit [Ping timeout: 240 seconds]
<jclsn> Hmm no idea
<cb5r> I am aware that DISTRO_FEATURES:append = " lto" exists, but apparently it will fail for some recipes (https://git.yoctoproject.org/poky/plain/meta/conf/distro/include/lto.inc). I tried the settings from the link in a dunfell build but it failed (qemu-native). Anyhow - I think LTO for my entire build might be a bit overkill considering the increased build time
<cb5r> I think I should be fine optimizing only a few components
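A per-recipe alternative to the global DISTRO_FEATURES switch is a bbappend that adds the LTO flags for just that one recipe. A minimal sketch, assuming a hypothetical recipe `myapp` and a GCC toolchain (the flag list loosely follows lto.inc; dunfell still uses the `_append` override syntax shown here):

```
# myapp_%.bbappend - enable LTO for this one recipe (recipe name is hypothetical)
CFLAGS_append = " -flto -ffat-lto-objects"
CXXFLAGS_append = " -flto -ffat-lto-objects"
LDFLAGS_append = " -flto"
```

Not every recipe links cleanly with LTO, so keeping it scoped to a bbappend like this avoids the distro-wide failures cb5r hit with qemu-native.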
mckoan|away is now known as mckoan
<mckoan> good morning
davidinux has quit [Ping timeout: 252 seconds]
davidinux has joined #yocto
xmn has quit [Ping timeout: 268 seconds]
rperier has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
rperier has joined #yocto
florian__ has joined #yocto
GillesM has joined #yocto
rfuentess has joined #yocto
camus1 has joined #yocto
camus has quit [Ping timeout: 268 seconds]
camus has joined #yocto
tnovotny has joined #yocto
vladest has quit [Quit: vladest]
camus1 has quit [Ping timeout: 256 seconds]
vladest has joined #yocto
camus1 has joined #yocto
camus has quit [Read error: Connection reset by peer]
camus has joined #yocto
camus1 has quit [Ping timeout: 240 seconds]
camus1 has joined #yocto
camus has quit [Read error: Connection reset by peer]
camus has joined #yocto
<qschulz> mornin
camus1 has quit [Ping timeout: 240 seconds]
Schiller has joined #yocto
<Schiller> Hello there. I am new to this channel; Michael Halstead suggested it to me for some quick questions. Furthermore, I am not a native speaker, so I hope it is all understandable. I have some problems with setting up a standalone YP Autobuilder. Can you help me or suggest someone for further information? (Also, I am from Germany and contacted you because of your username.)
Schiller has quit [Quit: Client closed]
Schiller has joined #yocto
<Schiller> Hello everyone. I am having trouble setting up the YP Autobuilder. It's also my first time in this chatroom, so I guess if someone has the time and knowledge you can contact me privately. Thank you in advance for the help.
florian__ has quit [Read error: Connection reset by peer]
florian_kc has joined #yocto
florian_kc has quit [Read error: Connection reset by peer]
florian__ has joined #yocto
<rburton> Schiller: there's a few people here who know about it, but it's best if you explain what your problem is
<Schiller> Thanks for the reply. I have some questions on the hash equivalence server and some general questions about the build factory steps.
tcdiem has joined #yocto
camus1 has joined #yocto
camus has quit [Ping timeout: 240 seconds]
camus1 is now known as camus
<qschulz> Schiller: we're all ears, please ask your questions
camus1 has joined #yocto
leon-anavi has joined #yocto
camus has quit [Ping timeout: 240 seconds]
camus1 is now known as camus
<Schiller> When I run a build - let's say beaglebone - in my setup, everything seems to work properly. All 10 build steps mentioned in builders.py finish correctly. When I compare it to the build processes in https://autobuilder.yoctoproject.org/typhoon/#/builders/65/builds/4955 I see way more build steps. For instance, the setup I run doesn't even run a full build with bitbake. Is that a build step I have to add manually, or should it already be implemented in the minimal setup suggested in the guide https://git.yoctoproject.org/yocto-autobuilder2/tree/README-Guide.md
<GillesM> Hello, can I runqemu without network? I tried runqemu qemux86 qemuparams="-nic none" without success
mckoan has quit [Ping timeout: 252 seconds]
pasherring has joined #yocto
<landgraf> GillesM: adding -nodefaults might help.
<Schiller> I'm new to this whole framework. Is my question understandable? I am also not a native speaker.
<RP> qschulz: morning. I tried to reply yesterday, hope I made sense! :)
mckoan has joined #yocto
<RP> Schiller: I think that the code has changed a bit since the README you refer to was written
<RP> Schiller: The code now supports "dynamic" build steps: the JSON config in yocto-autobuilder-helper can dynamically set how many steps are needed
<RP> Schiller: for the beaglebone you mentioned, it is declared here: https://git.yoctoproject.org/yocto-autobuilder-helper/tree/config.json#n275
<RP> (which uses the arch-hw template defined earlier in the file)
mckoan has quit [Ping timeout: 256 seconds]
starblue has quit [Ping timeout: 256 seconds]
mckoan has joined #yocto
<qschulz> RP: it did, thank you very much for sending a v2 of all patches in one thread :) I'll get to the review this afternoon
starblue has joined #yocto
<rburton> RP: do you have an opinion on the timeout question in JaMa's reply to my bitbake patch?
<RP> rburton: happy to have it be 10 mins
<RP> rburton: I worry that for long running AB tests we'll see spam but it probably is more useful than not
<rburton> lets see how spammy it is with a 10 minute cycle
<RP> qschulz: I thought it might help make things clearer (the autobuilder-helper and transition branch patches aren't there)
gsalazar has joined #yocto
mabnhdev2 has joined #yocto
<mabnhdev2> Hi, I'm trying to add several python recipes to my system.  Several of these packages don't include a license text file.  Instead, they just refer to the license in documentation or project metadata.  How do I handle this in Yocto?
<rburton> mabnhdev2: start by filing a bug so they ship a license statement in the source. Worst case, I've set the license checksum to the specific line in the setup.py that says license="mit" or whatever.
<rburton> the checksum is all about noticing if the license changes, so e.g. the setup.py license assignment is a good thing to track anyway as it's canonical (it gets used on pypi for example)
<mabnhdev2> rburton That war has already been fought and lost - https://github.com/jaraco/skeleton/issues/1
<rburton> :facepalm:
<rburton> in that case just set the license checksum to eg specifically line 11 of setup.cfg
<rburton> by setting LICENSE to MIT you get the full canonical license text in the license data we generate already
<mabnhdev2> rburton I'm not sure I understand about setting the checksum to a specific line. Is there an example you can point me to?
<rburton> meta/recipes-core/musl/bsd-headers.bb:LIC_FILES_CHKSUM = "file://sys-queue.h;beginline=1;endline=32;md5=c6352b0f03bb448600456547d334b56f"
<mabnhdev2> rburton Cool.  Thanks!
<rburton> or more relevantly for python code
<rburton> meta/recipes-devtools/python/python3-setuptools-scm_6.4.2.bb:LIC_FILES_CHKSUM = "file://PKG-INFO;beginline=8;endline=8;md5=8227180126797a0148f94f483f3e1489"
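Putting rburton's suggestion together, a sketch of what such a recipe fragment might look like (the line numbers and the md5 are placeholders, not real values - compute the actual checksum against the line in your own source, e.g. by letting bitbake report the mismatch on first build):

```
LICENSE = "MIT"
# Track only the license assignment line in the project metadata;
# beginline/endline and the md5 below are illustrative placeholders.
LIC_FILES_CHKSUM = "file://setup.cfg;beginline=11;endline=11;md5=<md5-of-that-line>"
```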
camus has quit [Remote host closed the connection]
camus has joined #yocto
<rburton> RP: hm. did a full build with the public sstate and three tasks failed with this message. obviously that log file is long gone, which is a pain. Maybe download problems and that's a bad way of saying timeout? https://www.irccloud.com/pastebin/b06WBykI/
Alban[m] has joined #yocto
<RP> rburton: not sure. That is annoying
<rburton> three out of nearly 10k fetches
<rburton> final aggregate results are Wanted 45932, found 19791 locally, 9773 remote, missed 16368. Hit rate 64%.
pgowda_ has joined #yocto
<RP> rburton: shame we don't know how it failed :/
<rburton> i think i just got our CI to archive the build logs
<rburton> so will see if it happens again
<cb5r> If I set DL_DIR to the download/ dir of a different build dir (with slightly older layers), should this normally work? Or is there a chance that BB will use the wrong sources then?
<rburton> cb5r: it will always work
<rburton> the only way it can use the "wrong" source is if eg bash-4.0.tar.gz has changed contents
<rburton> if that has happened, you've bigger problems
<cb5r> rburton: ...which basically should only happen if I manually modify it?
<rburton> you're absolutely encouraged to share DL_DIR and SSTATE_DIR between all builds and all versions
<rburton> well upstream could change the content without changing the version
<cb5r> True...
<rburton> again, bigger problems: is that a sneaky fix, or is that a backdoor
<rburton> the checksums will fail
<cb5r> OK great - I shall share DL_DIR and SSTATE_DIR in the future then - thanks!
<rburton> when in doubt ask what the yocto autobuilder does: every build across every supported release and every build host uses the same NFS mount for DL_DIR and SSTATE_DIR
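The sharing rburton describes comes down to two variables in a site.conf or local.conf common to every build directory. A sketch, where the paths are assumptions for illustration:

```
# Shared across all builds, releases and build hosts,
# as on the Yocto autobuilder (paths are placeholders)
DL_DIR = "/srv/yocto/downloads"
SSTATE_DIR = "/srv/yocto/sstate-cache"
```

Putting these in site.conf rather than each local.conf means new build directories pick them up automatically.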
<cb5r> Are there also temp dirs that I can/should put on a tmpfs to speed up build time and reduce SSD wear? Or does BB use /tmp anyway?
<cb5r> "NFS mount for DL_DIR and SSTATE_DIR" << OK that's good to know - so those 2 dirs do not rely on high speed then?
<rburton> they're fetched occasionally so high speed isn't critical
<rburton> you can put the build dir in a tmpfs if you have enough RAM, yes
<rburton> you'll want rm_work to keep usage low
<cb5r> I am trying to optimize my partitions/mounts/shares/whatever here currently, since I am working with different VMs etc. So sharing would be pretty cool
<cb5r> what's rm_work?
<rburton> yes
<rburton> no point keeping 10gb of build tree in ram when its not needed anymore
pabigot has quit [Remote host closed the connection]
pabigot has joined #yocto
camus has quit [Quit: camus]
<cb5r> That's true. - But if the build fails, my "cache" is gone, so everything needs to be compiled from scratch - right?
<marc3> nope, there's a difference between the 'work' and the 'shared-state cache'
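For reference, rm_work is a stock OE-Core class: enabling it is one line in local.conf, and individual recipes can be exempted while you are actively working on them (the recipe name below is hypothetical):

```
# local.conf: delete each recipe's work directory once its tasks have run
INHERIT += "rm_work"
# Keep the work tree for a recipe you are currently debugging
RM_WORK_EXCLUDE += "myapp"
```

As marc3 notes, this only prunes the work directories; the shared-state cache survives, so a failed build still resumes from sstate rather than from scratch.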
mabnhdev2 has quit [Quit: Client closed]
BobPungartnik has joined #yocto
BobPungartnik has quit [Remote host closed the connection]
mckoan has quit [Ping timeout: 256 seconds]
<cb5r> OK, thanks!
<hmw[m]> Hi, I'm trying to build an application with Qt and MySQL, but when using a Qt call db = QSqlDatabase::addDatabase("QMYSQL", name); I get a SIGILL
<rburton> illegal instruction: your compiler tune/etc doesn't actually match the hardware
<rburton> dmesg will tell you what the instruction is, but tell us what MACHINE and what hardware and it might be obvious
mckoan has joined #yocto
<hmw[m]> rburton: ok thanks, but I use the SDK generated from the same thing as the running rootfs
<rburton> is the target x86 and you're running the sdk on x86?
<hmw[m]> target is arm and the compiler in qt is arm-oe-linux-gnueabi-gcc
<rburton> <shrugs>. you're getting illegal instruction, so your compiler tune flags are wrong
<hmw[m]> ( running the sdk on x86
<rburton> dmesg will tell you what the instruction is, or run it in gdb and let that catch it to tell you where
<rburton> could be some dumb code making assumptions like 'yes of course i have <some instruction>' which you don't have
<rburton> like how last month I upgraded uboot and it assumes that CRC instructions are present, but they're optional.
<rburton> suddenly, it didn't boot
<hmw[m]> hm, thanks - seems like I don't have the qtsql package
<hmw[m]> on rootfs
<hmw[m]> that is not a package :(
<RP> rburton: nice to have that sorted :)
<rburton> one more down!
<mckoan> .
florian__ has quit [Read error: Connection reset by peer]
florian__ has joined #yocto
<qschulz> hmw[m]: oe-pkgdata-util find-path '*sql*' to find which package to install
<qschulz> hmw[m]: if the recipe building the package has been baked
<Schiller> When I set up the YP Autobuilder like in this guide https://git.yoctoproject.org/yocto-autobuilder2/tree/README-Guide.md, do I need to configure .../yocto-auto-helper/config.json for additional steps to accomplish a full build (for example beaglebone), or should they already be set by default? My step 9 (Check run-config steps to use) seems to finish in less than a second, which surely isn't correct.
<hmw[m]> <qschulz> "hmw: oe-pkgdata-util find-path '..." <- it created sqldrivers/mysql/.moc/moc_qsql_mysql_p.cpp
<hmw[m]> so it should be in ?
<qschulz> hmw[m]: didn't understand your message sorry
sakoman has joined #yocto
<hmw[m]> qschulz: if oe-pkgdata-util find-path '*sql*' | grep qt # returns qtbase-src: /usr/src/debug/qtbase/5.14.2+gitAUTOINC+3a6d8df521-r0.arago17/git/src/sql/kernel/qsqldriver.h
<hmw[m]> then that "package" is installed?
<qschulz> no
<qschulz> it returns which file belongs to which package
<qschulz> here /usr/src/debug/qtbase/5.14.2+gitAUTOINC+3a6d8df521-r0.arago17/git/src/sql/kernel/qsqldriver.h belongs to qtbase-src package
marka has quit [Quit: ZNC 1.8.2 - https://znc.in]
<qschulz> hmw[m]: also, don't grep for qt, since I think the file you're looking for is probably qsql and not qtsql
marka has joined #yocto
codavi has joined #yocto
xmn has joined #yocto
<RP> qschulz: To change activereleases I think I'll have to do another test cycle so it will take a while :/
ar__ has joined #yocto
codavi has quit [Ping timeout: 252 seconds]
codavi has joined #yocto
<moto-timo> hmw[m]: MySQL support is not included by default
yolo has joined #yocto
<yolo> is usrmerge enabled by default in new yocto or oe-core these days
<hmw[m]> PACKAGECONFIG[qtbase] += "sql-mysql" ?
ar__ has quit [Ping timeout: 256 seconds]
<moto-timo> hmw[m]: no look at local.conf for example, such as https://git.yoctoproject.org/poky/tree/meta-poky/conf/local.conf.sample#n247
<RP> yolo: no
<qschulz> hmw[m]: create a bbappend for qtbase in your own layer and add PACKAGECONFIG += "sql-mysql" (or PACKAGECONFIG_append = " sql-mysql" or PACKAGECONFIG:append = " sql-mysql")
<sgw> Morning all
<qschulz> o/
Schiller has quit [Quit: Client closed]
* sgw has a historical question about kernel package naming: anyone know why most qemu MACHINE types install a kernel-<ver> package, but Intel MACHINE types (including genericx86) install both a kernel and a kernel-<ver> package? Yes, I have been digging around conf/machine; nothing jumps out at me yet.
<RP> sgw: no idea FWIW
<sgw> RP: more digging required. This is partly related to the depmod issue: any Intel HW-specific (not qemu) MACHINE type seems to add the kernel package, which causes the kernel-dbg package to be installed as well, but other MACHINE types (including qemu) with kernel-<ver> don't have a kernel-<ver>-dbg and thus don't see this issue.
<LetoThe2nd> yo dudX
<mckoan> LetoThe2nd: hear! hear! the jester is back with us!
<LetoThe2nd> mckoan: yeah... i would have loved to not be "away"
<mckoan> LetoThe2nd: I am pleased to note that you have to work now, LOL
<yolo> RP: how do I enable that? Google did not show anything and the mega-manual does not mention it either. Is it 'ready' for use?
<yolo> DISTRO_FEATURES_append = " usrmerge" -- is this it
<hmw[m]> <qschulz> "hmw: create a bbappend for..." <- thanks, apparently I did that already. Still find the SIGILL strange
<qschulz> yolo: seems about right (might be :append if you are on honister or master branch)
<qschulz> hmw[m]: are you sure the package with the appropriate file is installed in your image?
<yolo> qschulz: thanks!
<RP> yolo: probably DISTRO_FEATURES:append = " usrmerge"
<RP> heh, qschulz beat me to it
<yolo> cool, building and testing now
<hmw[m]> qschulz: no, but if I ask opkg info qtbase it shows that it is installed
* yolo is trying to suggest FHS to add /opt/local/{bin,sbin,lib,man,etc} to replace /usr/local so /usr can be read-only
<yolo> all scripts default to install to /usr/local but to have /usr read-only is worth that change IMHO
rob_w has quit [Remote host closed the connection]
<qschulz> RP: adding tags? but that's cheating :D
frieder has quit [Remote host closed the connection]
Tokamak has quit [Ping timeout: 240 seconds]
Tokamak has joined #yocto
<hmw[m]> <qschulz> "hmw: are you sure the package..." <- find /usr/ -iname "*qsql*"
<hmw[m]> /usr/lib/plugins/sqldrivers/libqsqlmysql.so
<hmw[m]> /usr/lib/plugins/sqldrivers/libqsqlite.so
Wouter0100 has quit [Read error: Connection reset by peer]
Wouter0100 has joined #yocto
Wouter0100 has quit [Remote host closed the connection]
Wouter0100 has joined #yocto
marka has quit [Ping timeout: 240 seconds]
creich_ has joined #yocto
Tokamak_ has joined #yocto
Schiller has joined #yocto
creich has quit [Ping timeout: 240 seconds]
Tokamak has quit [Ping timeout: 240 seconds]
marka has joined #yocto
<Schiller> In the YP Autobuilder setup you will need a user with root rights. Is it acceptable to edit the UID and GID in /etc/passwd to 0, or will this run into some conflicts because I am not pokybuild3 anymore?
<rburton> RP: good news! my little patch to openssl has exploded their ci. by good i mean terrible.
<RP> Schiller: you don't need root. Just run runqemu-gen-tapdevs to setup the tap devices on the worker
marka has quit [Ping timeout: 240 seconds]
Tokamak has joined #yocto
Tokamak_ has quit [Ping timeout: 240 seconds]
marka has joined #yocto
<Schiller> do you mean this shell-script https://github.com/openembedded/openembedded-core/blob/master/scripts/runqemu-gen-tapdevs within the yocto-worker?
<RP> rburton: oops, or do I mean congrats :)
<RP> Schiller: yes
<RP> Schiller: that script allows the tap devices to be set up in advance, removing the need for root access anywhere else
ar__ has joined #yocto
codavi has quit [Ping timeout: 252 seconds]
<Schiller> ok, stupid question, but what exactly is a tap device (because I have to input a number)?
lucaceresoli has joined #yocto
<RP> Schiller: it is basically like a network interface. How many images might you run in parallel?
<Schiller> k thx
tcdiem has quit [Quit: Connection closed]
<Schiller> is that script supposed to live somewhere and get called during the build process, or is it supposed to be run manually before I start the buildbot controller?
<Alban[m]> Hi! Is there a good alternative to NFS for sharing sstate, something that would work well over the public internet?
<RP> Alban[m]: http if you're ok with read only
<Alban[m]> RP: That would be using SSTATE_MIRRORS?
xmn has quit [Ping timeout: 252 seconds]
<Schiller> I am not quite sure how to use the runqemu-gen-tapdevs script. Does it get called from some other script? Where in the YP Autobuilder project does this script have to live? Which parameter does native-sysroot-basedir expect (what path)?
starblue has quit [Ping timeout: 256 seconds]
starblue has joined #yocto
amitk has quit [Ping timeout: 240 seconds]
<Alban[m]> Builds don't seem to re-use sstate properly; looking at some sig diffs I see the following:... (full message at https://libera.ems.host/_matrix/media/r0/download/libera.chat/75a9bfc53e59f2c2896b34c6c1f2757b366b7c9d)
<Alban[m]> Any idea what could be causing this re-ordering?
<Alban[m]> Could it be that some dictionary is in play somewhere? I have Python 3.6, and dicts are only ordered by default from 3.7 on.
<RP> Schiller: on the autobuilder we run it once at boot up to setup those devices
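A sketch of that one-time invocation, run as root on each worker. The build user, tap count and sysroot path are assumptions for illustration; check the script's usage output for the exact argument order on your branch:

```
# Pre-create 8 tap devices owned by the build user so runqemu needs no root.
# The native-sysroot-basedir must contain tunctl, e.g. after
# 'bitbake qemu-helper-native' in a build directory.
sudo ./scripts/runqemu-gen-tapdevs \
    $(id -u pokybuild3) $(id -g pokybuild3) 8 \
    tmp/sysroots-components/x86_64/qemu-helper-native/usr/bin
```

The tap count should be at least the number of images you expect to boot in parallel, which is the number RP asks about below.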
pgowda_ has quit [Quit: Connection closed for inactivity]
<RP> Alban[m]: That looks like a display issue, it isn't the real difference
<rburton> RP: my ci job has hung, if only bitbake would tell me what jobs are still running after five minutes of no output
<RP> rburton: you'll soon know ;-)
<Alban[m]> I see, then there is only differences in the "Hash for dependent task" 😕
rfuentess has quit [Remote host closed the connection]
<RP> Alban[m]: have a look at the difference in the dependent task then?
mckoan is now known as mckoan|away
<Alban[m]> I'm looking at that, but wouldn't those be tasks that depend on the one I was looking at?
prabhakarlad has quit [Quit: Client closed]
<RP> Alban[m]: no, it works the other way
<Alban[m]> the wording is quite confusing
<RP> Alban[m]: now you mention that, the wording is not great :/
<RP> Alban[m]: I've never noticed that before
<Alban[m]> it's like tx/rx - it often gets difficult to understand which direction is really meant
wmat[m] has joined #yocto
kevinrowland has joined #yocto
dev1990 has quit [Read error: Connection reset by peer]
dev1990 has joined #yocto
<Alban[m]> That would be much better 🙂
goliath has quit [Quit: SIGSEGV]
<Alban[m]> I'm hitting some empty files in tmp/stamps, is that to be expected?
tcdiem has joined #yocto
<RP> Alban[m]: if they're the stamp files themselves, yes
Schiller has quit [Quit: Client closed]
kevinrowland has quit [Ping timeout: 256 seconds]
kevinrowland has joined #yocto
ecdhe has quit [Read error: Connection reset by peer]
ecdhe has joined #yocto
<kevinrowland> What happens internally if I use `_append` and `+=` in the same assignment? E.g. `IMAGE_INSTALL_append += "python3"`. That feels like a slick way to follow the recommendation to use `IMAGE_INSTALL_append = " python3"`, where the leading space in the value is mandatory.
Tokamak has quit [Ping timeout: 240 seconds]
<qschulz> kevinrowland: this is not allowed anymore and will fail checks
<qschulz> but yes, that was the observed behavior
<qschulz> basically the += applies to _append operator
Tokamak has joined #yocto
<kevinrowland> Ha, shoot, ok. Any idea why it was nixed? It seems to pass checks in hardknott; is that a restriction in newer releases?
<qschulz> kevinrowland: restriction in kirkstone and later
<qschulz> so just master branch for now
<qschulz> it was removed because _append += was never intended to be supported and it's also super confusing
<qschulz> because it implies you can "add" to existing _append but that is absolutely not how it works
Tokamak_ has joined #yocto
<qschulz> each _append is isolated, and you can't do operations on an _append or its content (except doing a _remove)
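A sketch of the semantics qschulz is describing, in the pre-kirkstone override syntax kevinrowland asked about:

```
# Recommended form: the leading space is part of the appended value
IMAGE_INSTALL_append = " python3"

# The questionable form: the += does not modify any earlier _append;
# it is simply folded into this one deferred append operation
# (and kirkstone's parser now rejects mixing the two operators outright)
IMAGE_INSTALL_append += "python3"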
Tokamak has quit [Ping timeout: 240 seconds]
pasherring has quit [Remote host closed the connection]
tnovotny has quit [Quit: Leaving]
<Alban[m]> I have 2 workspaces side by side, with a shared sstate dir, the same layers and the same local.conf. After building in the first one, I would expect the second workspace to be able to pull everything from sstate, right?
<landgraf> RP: BB_SERVER_TIMEOUT issue... if devtool.DevtoolUpgradeTests.test_devtool_upgrade_git is triggered after devtool.DevtoolUpgradeTests.test_devtool_upgrade, it fails (do_fetch/unpack/patch are not triggered for some reason); if test_devtool_upgrade_git is triggered *without* the previous test_devtool_upgrade, it works just fine. I guess the server "thinks"
<landgraf> do_fetch for test_devtool_upgrade_git does not need to be re-executed. Do you have some hints on where this logic lives?
<landgraf> RP: without TIMEOUT server restarts between tests and everything works
lucaceresoli has quit [Ping timeout: 268 seconds]
Schiller has joined #yocto
Schiller has quit [Quit: Client closed]
<rburton> Alban[m]: yes
Schiller has joined #yocto
Schiller has quit [Quit: Client closed]
<sakoman> Looking for a little debug help! I've added a recipe for an out of tree kernel module (patterned after the sample in meta-skeleton) and added kernel-module-foo to MACHINE_ESSENTIAL_EXTRA_RDEPENDS
<sakoman> On machine A (Ubuntu 20.04) the recipe builds without error, the kernel-module-foo package is added to the image and it works as expected
<sakoman> On machine B (also Ubuntu 20.04) the recipe builds without error, but do_rootfs fails with:
<sakoman> The following packages have unmet dependencies:
<sakoman> packagegroup-core-boot : Depends: kernel-module-foo
<sakoman> E: Unable to correct problems, you have held broken packages.
<sakoman> Using debian packaging, checking in deploy/deb on both machines shows the same .deb packages for the module, with identical sizes
<sakoman> Any hints on how to debug this? Not seeing any clues in the logs :-(
<sakoman> And to add to the mystery, trying a different MACHINE works as expected when building on either A or B
dev1990 has quit [Remote host closed the connection]
mvlad has quit [Remote host closed the connection]
<zeddii> sakoman, I think sgw was mentioning something like this earlier. something with a kernel-image depend.
florian__ has quit [Ping timeout: 256 seconds]
Guest79 has joined #yocto
florian__ has joined #yocto
<RP> landgraf: tracking it down to a specific two test cases like that is extremely helpful. I don't have any specific hints about what may be wrong. There is a lot of caching in bitbake so I suspect one of these caches must not be being reset properly
prabhakarlad has joined #yocto
<RP> sakoman: are both sharing sstate or anything like that?
<sakoman> RP: no, nothing shared between machines A and B
tcdiem has quit [Quit: Ping timeout (120 seconds)]
<sakoman> RP: also rm'd the tmp dir on both and rebuilt -- same result
<sgw> sakoman: this sounds different than what I was finding. Mine was Intel-specific: after further investigation, MACHINEs with efi in MACHINE_FEATURES get an RDEPENDS on "kernel", while other MACHINEs without EFI get some default dependency (which I have not yet found the source of) on "kernel-<ver>".
<sakoman> sgw: yeah, sounds completely different!
<sgw> sakoman: another thing to look for is -dbg packages being installed with .debug .ko files
<sgw> but again it sounds different, unless you have more details error logs
<sakoman> sgw: the dbg.deb's in deploy/deb on both machines are the same
<sgw> are they being installed?
<sakoman> sgw: no, I don't see any evidence of any dbg.deb's being installed
<sgw> Then probably different issue
<sakoman> sgw: the only obvious difference between the A and B build machines is that A has an Intel processor and B an AMD processor ;-)
<sakoman> sgw: I may do a build from scratch on B just to make sure it isn't a sstate corruption issue
<RP> sakoman: which release?
<sakoman> RP: dunfell
<RP> sakoman: it is odd and would be nice to get to the bottom of. Can you do a diffoscope between the two sets of debs, see where any difference is arising ?
<sakoman> RP: sure I can try that
<RP> sakoman: or just compare them. Something has to be different
tcdiem has joined #yocto
goliath has joined #yocto
<sakoman> RP: there is a difference - on the passing machine the git tag in the kernel-module.deb filename matches all of the other kernel-module.debs. On the failing machine the git tag in the filename for the new module is different than all the others, so that is why it is failing
florian__ has quit [Ping timeout: 240 seconds]
<sakoman> Gives me a thread to pull on. I just realized that there was one other difference -- build machine B inherits rm_work
florian__ has joined #yocto
<RP> sakoman: may or may not be related to rm_work, but I think you should follow that filename thread
<sakoman> RP: yup, that is a good clue
<sakoman> RP: starting with no sstate just to see if it will reproduce
florian__ has quit [Ping timeout: 240 seconds]
kevinrowland has quit [Quit: Client closed]
goliath has quit [Quit: SIGSEGV]
<Alban[m]> Regarding my sstate problem, I was able to narrow it down in my first workspace: it seems that the sstate hash computed before running the task doesn't match what it is later. With some debug in sstate_checkhashes() I see the "missing" sstate, for example:
<Alban[m]> Missed /builder/workdir/openembedded-core/meta/recipes-support/gnutls/gnutls_3.7.2.bb:do_populate_lic: /builder/sstate-cache/e7/5b/sstate:gnutls::3.7.2:r0::3:e75b40c862a6d42f517f7abf9c770194c14f272fb55a6463f5da80776773c30a_populate_lic.tgz
<RP> Alban[m]: do you have hash equivalence enabled?
<Alban[m]> But when I look in this file after the build it contains another hash:
<Alban[m]> bitbake-dumpsig /builder/sstate-cache/b7/1e/sstate:gnutls:core2-64-aerq-linux:3.7.2:r0:core2-64:3:b71e34f368685ceada267c5c426d3a1e03b4c2fa205edc0b9ee28a588613b47b_packagedata.tgz.siginfo | tail -n 1
<Alban[m]> Computed task hash is 597bfe8be9a4cbd71aa8a607afc96f849c3f74657c234a453991d8e99883bc7d
<Alban[m]> I think so
<Alban[m]> Is it that broken?
<RP> Alban[m]: if you share sstate between builds you also have to share the hash equivalence data
<Alban[m]> 597bfe8b... is the hash that the second workspace searches for and then doesn't find ☹️
<Alban[m]> but the sstate is written in the wrong place, the file b71e34f3... contains the data for 597bfe8b...
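The mismatch Alban found by hand can be checked mechanically over a whole sstate cache: compare the hash embedded in each siginfo filename against the "Computed task hash" line that bitbake-dumpsig prints (the line his `tail -n 1` showed above). A hedged sketch — `check_siginfo` is a hypothetical helper, and the `DUMPSIG` override only exists so the bitbake-dumpsig call can be substituted; the filename layout follows the paths pasted above:

```shell
#!/usr/bin/env bash
# DUMPSIG can be overridden; normally it is bitbake-dumpsig
# from bitbake's bin directory.
DUMPSIG="${DUMPSIG:-bitbake-dumpsig}"

# Walk an sstate-cache dir (layout: <cache>/xx/yy/sstate:...siginfo)
# and flag files whose filename hash disagrees with the computed
# task hash recorded inside them.
check_siginfo() {
    local f name_hash computed
    for f in "$1"/*/*/*.siginfo; do
        [ -e "$f" ] || continue
        # filename form: sstate:<pn>:...:<hash>_<task>.tgz.siginfo
        name_hash=$(basename "$f" | sed -n 's/.*:\([0-9a-f]\{64\}\)_.*/\1/p')
        computed=$("$DUMPSIG" "$f" | sed -n 's/^Computed task hash is \(.*\)$/\1/p')
        [ "$name_hash" = "$computed" ] || echo "MISMATCH $f ($computed)"
    done
}
```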
<RP> Alban[m]: it is hash equivalence making a mess
<RP> or I suspect so anyway. Either try disabling hash equivalence, or sharing the hash equivalence between the builds
<Alban[m]> My distro has BB_HASHSERVE ??= "auto"
<RP> which will be local to each build
<RP> we need to make this issue more discoverable somehow :/
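RP's advice amounts to pointing every builder at one hash equivalence server instead of the per-build "auto" one. A hedged local.conf sketch — the host and port are placeholders for a server started with bitbake-hashserv somewhere all builders can reach; the SSTATE_DIR path matches the one in Alban's paste above:

```conf
# Placeholder host/port -- run bitbake-hashserv on a machine all
# builders can reach, instead of letting each build spawn its own
# local server via BB_HASHSERVE ??= "auto".
BB_HASHSERVE = "hashserv.example.com:8686"

# All builds must also share the sstate cache the hashes refer to:
SSTATE_DIR = "/builder/sstate-cache"
```

Alternatively, setting BB_HASHSERVE = "" disables hash equivalence entirely, which is the other option RP mentions.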
<sakoman> RP: building with no sstate and no rm_work on build machine B succeeds, now to try no sstate and with rm_work
<Alban[m]> still, the sstate is broken, as some files contain different data than the hash in the path indicates
<RP> sakoman: I suspect rm_work could remove something which breaks the module version used somehow :/
<Alban[m]> even if they are supposed to be equivalent it is still badly inconsistent data
<sakoman> RP: I'll keep trying to reproduce
<RP> Alban[m]: it gets complicated and isn't as simple as you think :(
<Alban[m]> well my second workspace looks up a hash that exists but is stored under another one
<Alban[m]> and my first workspace also returns this hash after the build
<RP> Alban[m]: I'm tired, stressed, overworked and really don't want to get into a discussion about whether it is right or wrong
<Alban[m]> just before the build the first workspace somehow sees a different hash
<RP> Alban[m]: I've tried to explain what I suspect is causing your problem and I'm sorry you've hit it, I think we do need to better document and expose it somehow
<Alban[m]> right, thank you for the help
<RP> trying to add new features to a well established system like sstate is hard and you're finding a glitch in the periphery of it. If we had a team of people to work on it, we could likely do better but we have me and someone else doing our best
<Alban[m]> don't worry, I do appreciate it. btw I disabled hash equivalence and now my first workspace uses the right hash from the start
GillesM has quit [Quit: Leaving]
GillesM has joined #yocto
GillesM has quit [Client Quit]
<armpit> smurray: you have anything to do with CVE-2022-24595?
AustrianCurrent has joined #yocto
<smurray> armpit: heh, first I'm hearing of it. There's a good chance it could become my problem if someone from IoT.bzh doesn't pop up to fix it.
<smurray> armpit: all the afb* stuff has been dropped from the upcoming release of AGL, and it's quite unclear if anyone uses it in a product (I wouldn't ;) )
<AustrianCurrent> Question: Is this an appropriate place to ask a question about Yocto as a user, or is this an internal development chat? Sorry for the possible spam.
<moto-timo> AustrianCurrent: just ask. It’s free for all. Although we do hope you will do your own homework/footwork and not expect us to be unpaid consultants.