ndec changed the topic of #yocto to: "Welcome to the Yocto Project | Learn more: https://www.yoctoproject.org | Join us or Speak at Yocto Project Summit (2022.11) Nov 29-Dec 1, more: https://yoctoproject.org/summit | Join the community: https://www.yoctoproject.org/community | IRC logs available at https://www.yoctoproject.org/irc/ | Having difficulty on the list or with someone on the list, contact YP community mgr ndec"
psj has joined #yocto
goliath has quit [Quit: SIGSEGV]
psj has quit [Remote host closed the connection]
florian has quit [Ping timeout: 260 seconds]
kscherer has quit [Quit: Konversation terminated!]
davidinux has quit [Ping timeout: 256 seconds]
davidinux has joined #yocto
m4ho has quit [Ping timeout: 260 seconds]
Tokamak has joined #yocto
Tokamak_ has quit [Ping timeout: 256 seconds]
sakoman has quit [Quit: Leaving.]
sakoman has joined #yocto
starblue has quit [Ping timeout: 260 seconds]
starblue has joined #yocto
m4ho has joined #yocto
ptsneves has joined #yocto
amitk has joined #yocto
jclsn has quit [Ping timeout: 252 seconds]
ptsneves has quit [Ping timeout: 260 seconds]
jclsn has joined #yocto
xmn has quit [Ping timeout: 260 seconds]
m4ho has quit [Ping timeout: 256 seconds]
Tokamak_ has joined #yocto
Tokamak has quit [Ping timeout: 260 seconds]
m4ho has joined #yocto
sakoman has quit [Quit: Leaving.]
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
Wouter0100 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100 has joined #yocto
amitk_ has joined #yocto
alessioigor has joined #yocto
alessioigor has quit [Client Quit]
tor has joined #yocto
rob_w has joined #yocto
tor has quit [Quit: Leaving]
tor has joined #yocto
tomzy_0 has joined #yocto
leon-anavi has joined #yocto
manuel has quit [*.net *.split]
manuel has joined #yocto
risca has quit [Ping timeout: 248 seconds]
Circuitsoft has quit [Quit: Connection closed for inactivity]
mckoan|away has quit [Ping timeout: 256 seconds]
iokill has quit [*.net *.split]
georgem has quit [*.net *.split]
ernstp has quit [*.net *.split]
KanjiMonster has quit [*.net *.split]
Zappan has quit [*.net *.split]
polprog has quit [*.net *.split]
iokill has joined #yocto
georgem has joined #yocto
ernstp has joined #yocto
Zappan has joined #yocto
polprog has joined #yocto
KanjiMonster has joined #yocto
<hmw[m]> how do I override the default do_configure?
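A minimal sketch of what such an override looks like in a recipe (the configure invocation is only an example, not tied to any particular package):

    do_configure() {
        # this definition fully replaces the default provided by the inherited class
        ${S}/configure --prefix=${prefix}
    }

To extend rather than replace the default, do_configure:append() or do_configure:prepend() can be used instead.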
manuel1985 has joined #yocto
mvlad has joined #yocto
zpfvo has joined #yocto
gho has joined #yocto
Payam has quit [Remote host closed the connection]
vladest has joined #yocto
derRichard has joined #yocto
<derRichard> hey!
<derRichard> is there a reason why this fix is not in dunfell yet? https://git.yoctoproject.org/yocto-kernel-tools/commit/?id=64bdfc4ca221cf181af7790e862d26f87a9ea881
<tomzy_0> Hi
<tomzy_0> is there someone who successfully compiled xorg-xserver 21.1.3 provided since kirkstone?
d-fens has joined #yocto
<jclsn> Morning
<jclsn> How can I get the return value of a bash function in a Bitbake script? I tried automatically setting PARALLEL_MAKE = "-j ${nproc}", but that throws an error
kiwi_29_[m] has quit [Quit: You have been kicked for being idle]
<LetoThe2nd> yo dudX
kiwi_29_[m] has joined #yocto
kiwi_29_[m] has left #yocto [#yocto]
<jclsn> It seems you can't define Bitbake functions in local.conf either
florian has joined #yocto
<LetoThe2nd> jclsn: yeah AFAIK functions cannot go into .conf files?
<jclsn> LetoThe2nd: Pity, so only fixed values are possible for PARALLEL_MAKE
zpfvo has quit [Ping timeout: 268 seconds]
<jclsn> Well, I guess the default is nproc, so I could just work with that and divide it by two or something
<LetoThe2nd> jclsn: well you can always override those in a specific recipe or class
<jclsn> LetoThe2nd: I want to globally override them though. I want to see if keeping the load average lower will improve performance. So something like PARALLEL_MAKE = "${PARALLEL_MAKE}/${BB_NUMBER_THREADS}"
<jclsn> Per recipe doesn't make sense
d-fens has quit [Quit: Client closed]
<LetoThe2nd> jclsn: well globally should not be that much of a problem. just look at how they are assigned in the first place and then patch/modify
<jclsn> But you can't do arithmetic calculations inside the variables
<jclsn> Typed variables would come in handy here
<jclsn> Ah well I guess I will just leave it
<qschulz> jclsn: you could set the variable in the local environment and use the environment variable in your local.conf
<qschulz> otherwise, it supports in-line python
<qschulz> so something like PARALLEL_MAKE = "${@os.cpucount()}" ?
manuel1985 has quit [Ping timeout: 260 seconds]
zpfvo has joined #yocto
<jclsn> qschulz: cpucount is in multiprocessing. How can I do an import statement before that?
<jclsn> I tried "${@import multiprocessi}${@multiprocessing.cpucount()}
<jclsn> ah os.cpu_count() works it seems
mckoan|away has joined #yocto
<jclsn> Ah but now I have PARALLEL_MAKE="24/2" ^^
<qschulz> jclsn: why?
<qschulz> do PARALLEL_MAKE = "${@os.cpucount() / 2}"
<qschulz> and I think you need -j in front?
<qschulz> so PARALLEL_MAKE = "-j ${@os.cpucount() / 2}" ?
<jclsn> If I put "${@os.cpu_count/${BB_NUMBER_THREADS}}" I get 12.0
<jclsn> Ah yes
<jclsn> Let met try
<qschulz> jclsn: then //
<qschulz> that's just python
<mcfrisk> what about enabling kernel module signing by default in poky master branch? maybe via config snippet and PACKAGECONFIG. it's not the default in x86_64 in upstream but I think it should be. and it would show the stripping failure earlier too (still investigating, disabling stripping fixes this)
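A config fragment for that would look roughly like the following (a sketch; the option names are standard upstream kernel symbols, while the fragment name and how it is wired into a PACKAGECONFIG are assumptions):

    # module-signing.cfg (hypothetical fragment)
    CONFIG_MODULE_SIG=y
    CONFIG_MODULE_SIG_ALL=y
    CONFIG_MODULE_SIG_FORCE=y
    CONFIG_MODULE_SIG_SHA256=y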
<jclsn> Yeah but make don't take floats bra
<qschulz> jclsn: hence the //
<jclsn> ok
<jclsn> Yep
<jclsn> But it's not maxing out the core now ^^
<jclsn> Guess going with the default is the best
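For reference, the working form of the idea discussed above would be something like this in local.conf (a sketch; halving the CPU count is just the ratio tried in the conversation, not a recommendation):

    PARALLEL_MAKE = "-j ${@max(1, os.cpu_count() // 2)}"
    BB_NUMBER_THREADS = "${@max(1, os.cpu_count() // 2)}"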
<qschulz> jclsn: what are you trying to do
<jclsn> qschulz: Our devops guy was complaining that our load average is too high. So like 4 times the number of physical cores. He said to optimize efficiency I should adjust those settings properly
<jclsn> Bitbake seems to have a lot of processes waiting for IO
<LetoThe2nd> why is that a problem?
<jclsn> Because our Devops guy is a nerd
<jclsn> He said we would waste performance
shoragan has quit [Read error: Connection reset by peer]
<LetoThe2nd> the build will eventually finish, why does $DEVOPS guy have to tell you how to run the build? if it affects other stuff, he shall give you a quota. rest is not of his concern, IMHO
<jclsn> Actually the kernel should handle all of these things imo
OnkelUlla has quit [Read error: Connection reset by peer]
mckoan|away has quit [Ping timeout: 264 seconds]
<qschulz> the only worry you should have is not running out of memory
<qschulz> I remember reading somewhere that anything above 20 threads isn't going to help anyway so you can limit it to that
<LetoThe2nd> qschulz: yup. give or take a few, but somewhere above 24-32 you're not gaining anything anymore.
<jclsn> LetoThe2nd: Yeah true, as long as we don't have problems why fix it. I am just curious to make a few experiments too. I didn't even know what the load average meant before that. So at least I learnt something
shoragan has joined #yocto
<Saur[m]> jclsn: If you are using Kirkstone or later, you might want to look into the new BB_PRESSURE_* variables too.
davidinux has quit [Ping timeout: 248 seconds]
<jclsn> Saur[m]: I wouldn't know how to find the right pressure values
<jclsn> Ah will just let it be for now. It is not so important really
<jclsn> Thanks for showing me those variables though. Maybe in the future I will make use of them
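For reference, the pressure-based throttling mentioned above is configured roughly like this in local.conf (a sketch; the thresholds are arbitrary example values, not tuned recommendations):

    BB_PRESSURE_MAX_CPU = "15000"
    BB_PRESSURE_MAX_IO = "15000"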
mckoan|away has joined #yocto
mckoan|away is now known as mckoan
d-fens has joined #yocto
<jclsn> If BB_SRCREV_POLICY = "cache" will that not break AUTOREV?
amitk_ has quit [Ping timeout: 268 seconds]
<jclsn> If it doesn't check the repo every time it builds, it can't be up-to-date really
OnkelUlla has joined #yocto
<jclsn> Seems like it does unfortunately
gsalazar has quit [Ping timeout: 248 seconds]
<RP> jclsn: devops people don't always think about builds in the right way. bitbake tends to queue up everything it can and then leaves it to the kernel to work out what it can do with the resources available. The kernel tends to be much better at that than userspace can be
<d-fens> hi, fetching some python deps on kirkstone i was looking for  bitbake -s | grep ^python3-git which resulted in 3.1.27-r0 and then tried  https://layers.openembedded.org/layerindex/branch/master/recipes/?q=python+git and was given https://layers.openembedded.org/layerindex/recipe/51298/ but the latest version is 3.1.29  (not 27) - question: why
<d-fens> don't i see the 29 file that does live in http://cgit.openembedded.org/openembedded-core/tree/meta/recipes-devtools/python/python3-git_3.1.29.bb while 27 doesn't xist ? is the layerindex not updated regularly and what am i doing wrong?
<LetoThe2nd> d-fens: let's say the layerindex is not exactly well maintained at the moment.
gsalazar has joined #yocto
Saur has quit [Ping timeout: 260 seconds]
Wouter0100 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100 has joined #yocto
<JaMa> jclsn: you can see some benchmarks in https://github.com/shr-project/test-oe-build-time
starblue has quit [Ping timeout: 260 seconds]
Notgnoshi has quit [Ping timeout: 255 seconds]
starblue has joined #yocto
<d-fens> LetoThe2nd i see, so the poky layer is my oe-core layer, where only .27 exists for kirkstone; how should i handle the .29 requirement while staying on kirkstone?
<d-fens> copy the python3-git_3.1.29.bb to my distro layer?
Saur has joined #yocto
gsalazar has quit [Ping timeout: 265 seconds]
<mcfrisk> linux-yocto in poky has some neat config stuff, but sadly I haven't seen any BSP layer using it...
<LetoThe2nd> d-fens: rather the application layer, but basically yes.
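Assuming the 3.1.29 recipe is copied verbatim into that layer, pinning it in the distro or local config makes the choice explicit (a sketch):

    PREFERRED_VERSION_python3-git = "3.1.29"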
gsalazar has joined #yocto
<jclsn> RP: Yeah I also think that thinking that you are able to handle things better than the kernel is a bit presumptuous
<rburton> mcfrisk: meta-arm does for some BSPs
<mcfrisk> rburton: cool, sadly vendor kernels weren't using them..
<RP> jclsn: I've tried before in bitbake and usually we just made performance worse
<jclsn> Same for me
<RP> It is a balance, which is why we have PARALLEL_MAKE and BB_NUMBER_THREADS but anything finer grained doesn't usually help
d-fens has quit [Ping timeout: 260 seconds]
<jclsn> RP: I am just looking at this bug https://bugzilla.yoctoproject.org/show_bug.cgi?id=14918 which was caused by one of your patches
<RP> jclsn: right, I fixed several problems and caused some others :(
<jclsn> I tried to workaround by using BB_SRCREV_POLICY = "cache", but that breaks AUTOREV it seems
<jclsn> RP: Happens to the best of us ;)
<jclsn> Well, I guess you have no idea how to fix it or did you just have no time to look at it?
<jclsn> I am trying to understand the code
<RP> jclsn: I haven't had the time to dig in and understand what is going wrong
<jclsn> Alright
<jclsn> No worries. I will use fixed SRCREV for now
<jclsn> Maybe I will have a look at it myself
<RP> help in fixing it would be very welcome, I just don't have the bandwidth to do it :(
<Saur[m]> jclsn: If you use BB_SRCREV_POLICY = "cache", which we do, then as you have noticed, you cannot use ${AUTOREV}, which means that devtool not supporting ${AUTOREV} is no longer a problem. ;)
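For context, the two recipe patterns being weighed here look roughly like this (a sketch; the commit id is a placeholder, not a real revision):

    # floating: re-resolved on each build, subject to BB_SRCREV_POLICY
    SRCREV = "${AUTOREV}"
    PV = "1.0+git${SRCPV}"

    # pinned: reproducible, bumped by hand
    #SRCREV = "<commit-sha>"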
d-fens has joined #yocto
<mcfrisk> for CPU usage the kernel is good, same for virtual memory/RAM, but for IO, things are different. By default, background writing of things in tmp will happen way too fast. Tuning of VM parameters is needed to keep things in RAM so that rm_work can remove them before the flush to disk happens.
<mcfrisk> building on full tmpfs is nice but it's rare to have so much RAM available for full build/tmp
* RP really doesn't like rm_work :(
<jclsn> Saur[m]: Great workaround :D
<jclsn> I got used to AUTOREV though. So convenient
<RP> Saur[m]: can't you manually clear the cache with BB_SRCREV_POLICY = "cache" to trigger autorev?
<mcfrisk> RP: any non-trivial build env must use rm_work, otherwise tmp/work* will be too big
<RP> mcfrisk: thanks, clearly I only ever do trivial stuff
<mcfrisk> I only had 3 TB disks/nvme/ssd's on the big machines, that was enough for 3-4 project builds with download and sstate caches shared. A single build with rm_work was easily 150 GB on disk. Can't even imagine how big those would have been without rm_work...
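For reference, enabling it is a one-liner in local.conf; keeping selected workdirs around for debugging is the usual refinement (a sketch; the excluded recipe is just an example):

    INHERIT += "rm_work"
    RM_WORK_EXCLUDE += "linux-yocto"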
* RP notes the autobuilders are also trivial
<mcfrisk> only core-image-minimal? without systemd?
<RP> what worries me about rm_work is that it was hacked on top of everything, not designed in properly and is extremely fragile. It integrates with the task graph particularly poorly and has a poor story around debugging failures.
<RP> Despite it being used heavily, nobody actually cares about fixing any of that, just patching it to make it mostly work
<mcfrisk> for me it works, can't remember ever having problems with it
<RP> mcfrisk: you don't get the bug reports when it breaks :(
<mcfrisk> true, sorry about that
<RP> mcfrisk: trying to change other parts of the system to improve them whilst ensuring it keeps working is also frustrating given everything else
<qschulz> mcfrisk: I remove everything between CI builds except sstate cache and dl dir, no need for rm_work in that scenario?
<mcfrisk> but it does help a lot to keep the amount of IO lower during builds and to keep fs buffers in RAM
<qschulz> (though yes, I only build very small images)
<RP> anyway, like I said, I just personally really don't like it. I know people use it and why which is why it does continue to exist
<mcfrisk> qschulz: check how many writes your build does...
<mcfrisk> i used pcp to monitor CPU, RAM, IO read/write bytes, network bytes
<RP> qschulz: you might see some speed up if rm_work can trigger before the data makes it out the cache and onto disk
<RP> if your build isn't io bound, it won't actually matter
<mcfrisk> there I could see the effect of Linux VM background writes going to real IO, and once those were disabled, running low on fs buffers in RAM also triggered useless writes, which rm_work helped to get rid of
<mcfrisk> it's a good idea to monitor full cache rebuilds to see how developers broke download and sstate caching: nothing should download anything from the network if the caches are full, and nothing should get recompiled if the sstate cache is up to date and there are no changes...
<qschulz> RP: mcfrisk: thanks, will try to remember this the day we need to go for bigger builds (maybe never? only BSP products on Yocto right now)
<mcfrisk> some of the breakage I saw came from BSPs...
<qschulz> mcfrisk: I don't get your last sentence
<mcfrisk> RP: +KERNEL_FEATURES:append = " features/module-signing/force-signing.scc" in meta/recipes-kernel/linux/linux-yocto.inc results in the runtime failure "Loading of unsigned module is rejected" on yocto master: kernel modules are getting stripped of their signing data
<mcfrisk> qschulz: I saw BSP layers breaking download and sstate caching
<qschulz> mcfrisk: ah yes, a layer is a layer, people can do many kinds of crazy things in there :)
<qschulz> what I meant is as a BSP vendor, I have very little to build right now to test my layer/boards, so no use yet for rm_work
<mcfrisk> yea, that's good for you, but checking that download and sstate caching isn't broken is a good idea. For download caching, debuild without network access with a full download cache
<mcfrisk> /debuild/rebuild/
<mcfrisk> for sstate cache test, rebuild with full cache and check that no do_compile tasks got executed
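A rough way to script those two checks (a sketch; the image name is a placeholder, and grepping the console log is only one way to spot executed tasks):

    # download-cache check: full DL_DIR, network forbidden (set in local.conf, then rebuild from scratch)
    BB_NO_NETWORK = "1"

    # sstate check: wipe tmp, keep sstate-cache, and confirm nothing recompiles
    bitbake my-image 2>&1 | tee build.log
    grep do_compile build.log    # should come up empty on a warm sstate cache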
<RP> mcfrisk: I'm not surprised, I thought there was an open bug around this
<mcfrisk> would be nice to devise a test for this, it's so easy for various layers and recipes and developers to accidentally break things
amitk_ has joined #yocto
davidinux has joined #yocto
manuel1985 has joined #yocto
<RP> mcfrisk: The maze of kernel options does need more tests, one would be very welcome
alessioigor has joined #yocto
<RP> mcfrisk: we are slowly improving as we notice cases like this, document and add tests
Payam has joined #yocto
xmn has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
mckoan has quit [Ping timeout: 264 seconds]
manuel1985 has quit [Ping timeout: 264 seconds]
amitk_ has quit [Ping timeout: 265 seconds]
<ldericher> how do I define a multiline string in a bitbake recipe?
<rburton> backslash-escape the newline
<ldericher> So I want a string like BAZ = "foo\nbar" with a literal newline in between. Can I `BAZ = \[NL]"foo\[NL]bar"` where [NL] is a newline in my bbfile?
Payam has quit [Remote host closed the connection]
<rburton> oh you want a literal newline? i guess \\n might do it
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<ldericher> rburton, well I think so … maybe it'd be clearer to use an array of lines though. Possible?
<rburton> no
<rburton> bitbake variables are strings
<ldericher> so finally, I'm writing a ROOTFS_POSTPROCESS_COMMAND in my image recipe that generates an /etc/issue file for me.
<ldericher> in there, I want (multiline) ASCII art, exposed as a bitbake var
goliath has joined #yocto
<rburton> is it literally just copying the file from the var to the rootfs?
<rburton> or do you process it in some way
<ldericher> it is processed - "issue" needs escaped backslashes
<ldericher> also I want to add some stuff like distro name
<rburton> if thats all your doing just create a bbappend for base-files and put an issue file alongside
<rburton> base-files will install it for you
<rburton> well it's a bit more complex than that for this specific file, but a postprocess hook is overcomplicating things
<ldericher> rburton, well yes but actually no, I need it to pull in the correct ASCII art for the image that's to be built.
<rburton> so there is processing: you have per-image artwork
<ldericher> yes
<rburton> i'd still write a recipe that emits lots of packages for each of your images
<rburton> easier to write the ascii art in a text editor instead of faffing with it being in a variable
<ldericher> I'm really rusty tbh, can you help with the "emit lots of packages" part?
<rburton> something like write a recipe which installs /etc/issue.(name) files for each of your image types and then symlink the right one to /etc/issue
manuel1985 has joined #yocto
xmn has quit [Ping timeout: 264 seconds]
<ldericher> oooh that's clever!
davidinux has quit [Quit: WeeChat 3.5]
xmn has joined #yocto
<rburton> if you're feeling really clever, use alternatives to provide /etc/issue and then you just install the right issue-foo package and the symlink is made for you
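A rough sketch of the recipe being described, assuming two hypothetical image flavours "foo" and "bar" (the recipe name, file names and layout are all made up for illustration):

    # issue-artwork_1.0.bb (hypothetical)
    SUMMARY = "Per-image /etc/issue banners"
    LICENSE = "MIT"
    LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

    SRC_URI = "file://issue.foo \
               file://issue.bar"
    S = "${WORKDIR}"

    inherit allarch

    # one sub-package per image flavour
    PACKAGES = "${PN}-foo ${PN}-bar"

    do_install() {
        install -d ${D}${sysconfdir}
        install -m 0644 ${WORKDIR}/issue.foo ${D}${sysconfdir}/issue.foo
        install -m 0644 ${WORKDIR}/issue.bar ${D}${sysconfdir}/issue.bar
    }

    FILES:${PN}-foo = "${sysconfdir}/issue.foo"
    FILES:${PN}-bar = "${sysconfdir}/issue.bar"

Each image then installs its matching package and either symlinks the file to /etc/issue in a postprocess step or, as suggested above, lets update-alternatives create the link.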
rob_w has quit [Ping timeout: 260 seconds]
manuel1985 has quit [Ping timeout: 268 seconds]
manuel1985 has joined #yocto
sakoman has joined #yocto
rob_w has joined #yocto
<ldericher> when the image is built, where can I find the resulting sysroot, again?
davidinux has joined #yocto
d-fens has quit [Ping timeout: 260 seconds]
Net147 has quit [Read error: Connection reset by peer]
<OnkelUlla> otavio: Hi Otavio! Is there a chance that you perhaps could have a look at https://lists.openembedded.org/g/openembedded-devel/message/99426 ("[meta-java][PATCH] layer.conf: Mark as compatible with langdale")?
<OnkelUlla> I sent it together with https://lists.openembedded.org/g/openembedded-devel/message/99425 ("[meta-java][PATCH] openjdk-8: refresh patches") some weeks ago and did not get any feedback up to now.
Net147 has joined #yocto
Net147 has joined #yocto
Net147 has quit [Changing host]
d-fens has joined #yocto
manuel1985 has quit [Remote host closed the connection]
mckoan has joined #yocto
florian_kc has joined #yocto
zpfvo has quit [Ping timeout: 265 seconds]
risca has joined #yocto
zpfvo has joined #yocto
rob_w has quit [Ping timeout: 260 seconds]
<d-fens> it seems to be a stupid idea to delete the contents of build/tmp-glibc/deploy/images ... how can i trigger a rebuild of the boot files and dtbs?
<qschulz> d-fens: removing everything in your build/tmp-glibc and rebuilding is the easiest
<qschulz> otherwise I know rburton has some bitbake magic command somewhere up his sleeve
<d-fens> thx!
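For the record, one way to force those artifacts to be regenerated without a full wipe (an assumption about the kind of command meant above, not a quote):

    bitbake virtual/kernel -C deploy    # invalidate the do_deploy stamp and re-run from there
    bitbake my-image -C image           # likewise for the image artifacts (image name is a placeholder)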
<ldericher> ROOTFS_POSTPROCESS_COMMAND must be in an image recipe?
Notgnoshi has joined #yocto
rob_w has joined #yocto
rob_w has quit [Remote host closed the connection]
manuel1985 has joined #yocto
<vvn> ldericher: yes. If you want to share some code between image recipes, write a foo.bbclass, add your ROOTFS_POSTPROCESS_COMMAND += "foo;" in there with a foo () { } function and make your images inherit foo
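A minimal sketch of that pattern (the class name, function name and ISSUE_ASCII_ART variable are all made up for illustration):

    # classes/issue-banner.bbclass (hypothetical)
    ROOTFS_POSTPROCESS_COMMAND += "add_issue_banner;"

    add_issue_banner() {
        # runs against the image rootfs after packages are installed
        printf '%b\n' "${ISSUE_ASCII_ART}" > ${IMAGE_ROOTFS}${sysconfdir}/issue
    }

Each image recipe then does inherit issue-banner and sets ISSUE_ASCII_ART.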
<vvn> qschulz: btw do you need to remove TOPDIR as well, or is removing TMPDIR enough to trigger a "fresh" build?
<vvn> in other words, is the content of TOPDIR (except TMPDIR) reusable/sharable?
nemik has quit [Ping timeout: 248 seconds]
nemik has joined #yocto
mthenault has joined #yocto
nemik has quit [Ping timeout: 265 seconds]
nemik has joined #yocto
<rburton> d-fens: just delete tmp
<rburton> vvn: the cache/ is not relocatable
<rburton> sstate and downloads are
<d-fens> rburton did that and it worked, it's actually tmp-glibc , no idea why the output path changed from tmp a few days ago
<rburton> tmp/ is a pokyism
<rburton> default is tmp-(libc name) for... reasons
michaelo[m] has quit [Quit: You have been kicked for being idle]
alessioigor has quit [Quit: alessioigor]
mkorpershoek has left #yocto [#yocto]
<vvn> rburton: this means that TOPDIR is specific to the build machine, but you can reuse it for other fresh builds as long as you don't move it to another location, am I correct?
<rburton> vvn: well, you can always share sstate and downloads. tmp and cache just wipe and they'll regenerate. so then all that is left is conf.
<vvn> and bitbake.lock and bitbake.sock (as well as a daemon log)
<rburton> the former are specific to a running bitbake and will be deleted once it quits
<vvn> I should've said the content of TOPDIR, except TMPDIR, DL_DIR and SSTATE_DIR
d-fens has quit [Ping timeout: 260 seconds]
<vvn> rburton: bitbake.conf says that PERSISTENT_DIR (${TOPDIR}/cache) "should be shared by all builds"
<otavio> OnkelUlla: I can look, for sure. Can you send an email so I don't forget? otavio.salvador@ossystems.com.br
<vvn> So one only needs to wipe TMPDIR for a fresh build I guess, no need to wip cache
<vvn> wipe*
<rburton> vvn: we talked about that earlier in the week here. the bitbake parser cache goes into tmp/cache explicitly, and that very much shouldn't be shared
<rburton> RP: i say bitbake's caches should go into tmp/cache not topdir/cache
tor has quit [Remote host closed the connection]
<JaMa> rburton: PRSERV is in PERSISTENT_DIR, right? so that needs to stay in topdir like hashserve
<rburton> yeah
<rburton> i should be able to put persistent_dir alongside my shared sstate and downloads, right? the parser cache being in there means i can't do that.
<OnkelUlla> otavio: Thanks, I'll resend them! You've been on CC, but perhaps the address found in meta-java's README (Otavio Salvador <otavio@ossystems.com.br>) is not in active use anymore.
<otavio> it is the same; just resend.
<RP> rburton: depends which cache you mean
<rburton> codeparser.dat?
<RP> rburton: that is from codeparser rather than recipe parsing. If you had two TMPDIRs, you could share the code parser between them
<JaMa> RP: was this a NAK? https://lists.openembedded.org/g/bitbake-devel/message/14056 or should I send v2 accepting only '1' as value?
<rburton> RP: so it could sit in a central location alongside a shared sstate and different builds of different branches wouldn't argue over it?
<RP> JaMa: Oddly enough I was staring at the export and unexport flags today with the same issue and was thinking about the network flag
<RP> JaMa: we should probably put a bb.utils.to_boolean() around it (and merge the local patch I have to make bb.utils.to_boolean() handle int values
<JaMa> it's a bit unfortunate that inheriting icecc.bbclass now enables network everywhere
<RP> rburton: depends what central location means. It isn't as sharable as sstate/dl_dir but does offer a speed up to builds and is about as safe as the bb persist data there
<RP> JaMa: yes :(
<JaMa> ok, I'll resend it with bb.utils.to_boolean once the local patch is merged, I didn't want to add more than bare-minimum processing as you said earlier that this is in hot path
<RP> JaMa: I'm worried that should we change the network flag, the export/unexport flags work differently
<RP> JaMa: If I change export/unexport to match we take a 4% parsing speed hit
<RP> perhaps I should just stop caring
<RP> rburton: codeparser basically says "python fragment X has dependencies on functions Y and variables Z"
<RP> rburton: it is versioned and the version could change depending on the version of bitbake
<RP> rburton: it isn't as simple as "this is sharable"
mthenault has quit [Ping timeout: 265 seconds]
<JaMa> FWIW: in rare cases I've seen codeparser cache getting corrupted when bitbake was killed in some unfortunate moment
amitk_ has joined #yocto
<RP> JaMa: I'm not entirely surprised since each parser thread writes out a copy of its cache and then a thread merges them all together in the background
<RP> JaMa: in theory it should just throw it all away in case of problems and start again
<JaMa> I think in some rare cases it didn't throw it away automatically and I had to delete it, but I never found any reproducer and it was very rare (and possibly fixed since then)
kscherer has joined #yocto
olani has joined #yocto
<JPEW> RP: I had an epiphany about how to efficiently transfer the strings, but won't be able to do anything to implement it until next week
BrziCo has joined #yocto
<RP> JaMa: if you have a broken file sometime I'd be interested to see what it looks like and the error bitbake gives
<RP> JPEW: any hints on what you're thinking or you're going to keep me in suspense?
* RP suspects he will be getting told off by the conference organisers for distracting JPEW soon
<BrziCo> Hello everyone,
<BrziCo> I'm currently learning about the Yocto Project, and I'm trying to create a Linux image for the Raspberry Pi 3. I cloned the BSP layer (https://meta-raspberrypi.readthedocs.io) and added the desired variables in local.conf (e.g. ENABLE_UART = "1" ...) and it works just fine. Now I'm trying to move those variables out of local.conf into my own layer, so I
<BrziCo> created my machine (raspberrypi3-my.conf) that just contains the line include conf/machine/raspberrypi3.conf, but when I run bitbake I get an error saying: "Could not locate BSP definition for raspberrypi3-my/standard and no defconfig was provided". Any help? Thanks :D
florian has quit [Quit: Ex-Chat]
<RP> BrziCo: the MACHINE value is probably being used as an OVERRIDE in places. MACHINEOVERRIDES =. "raspberrypi3:" might help
<JaMa> RP: haven't found any in my current TOPDIRs, if I see it again I'll save it (but I'm not doing so many builds nowadays, so the probability to hit it again is even lower)
<rburton> BrziCo: it might be better to create a new distro that uses the rpi machine, instead of a whole new machine
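For reference, the resulting machine config ends up looking roughly like this (a sketch combining BrziCo's file with RP's hint; exactly where the override is added may vary per BSP):

    # conf/machine/raspberrypi3-my.conf
    MACHINEOVERRIDES =. "raspberrypi3:"
    include conf/machine/raspberrypi3.conf

    ENABLE_UART = "1"
    # ...other settings moved out of local.conf...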
<RP> JaMa: fair enough. Just mentioning what we'd need to try and improve things in case you do run into it :)
leon-anavi has quit [Quit: Leaving]
florian_kc has quit [Ping timeout: 260 seconds]
gho has quit [Quit: Leaving.]
zpfvo has quit [Quit: Leaving.]
<JPEW> RP: getstate returns a 2-tuple: the first element is the list of tuples ((os.getpid(), len(cache)), "new string" or frozenset) for every new object added to the cache; the second is the actual dictionary, where the values are replaced with the (os.getpid(), len(cache)) tuple.
<JPEW> setstate populates the cache with all the new objects seen, then restores the dicts by replacing the tuples via lookup in the cache
<JPEW> It could even dedup efficiently across multiple worker processes
<JPEW> And neatly, it's all wrapped up in the getstate, setstate, so nothing else needs to know it's happening
xmn has quit [Ping timeout: 265 seconds]
<JPEW> Anyway, the main idea is that the list of new objects makes sure each is only sent once the first time it's seen without needing special cases in the actual value dictionaries
<RP> JPEW: right, that makes sense. It was along the lines I was thinking but I was just doing to hack the state dictionary :)
<RP> going
mckoan is now known as mckoan|away
manuel1985 has quit [Ping timeout: 264 seconds]
Wouter0100 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100 has joined #yocto
Minvera has joined #yocto
Tokamak_ has quit [Ping timeout: 256 seconds]
Tokamak has joined #yocto
<RP> do we need to move to #quecto?
<JaMa> let's discuss trivial builds in #quecto and non-trivial in #quetta to make things simpler :)
* JaMa just ran out of 2+2TB nvme again
nemik has quit [Ping timeout: 260 seconds]
nemik has joined #yocto
alessioigor has joined #yocto
alessioigor has quit [Remote host closed the connection]
<BrziCo> RP thanks MACHINEOVERRIDES = "raspberrypi3:${MACHINE}" helped
tokamak[m] has joined #yocto
BrziCo has quit [Quit: Client closed]
Tokamak_ has joined #yocto
Tokamak has quit [Ping timeout: 248 seconds]
florian_kc has joined #yocto
prabhakarlad has quit [Ping timeout: 260 seconds]
qschulz has quit [Remote host closed the connection]
invalidopcode has joined #yocto
qschulz has joined #yocto
gsalazar_ has joined #yocto
gsalazar has quit [Ping timeout: 264 seconds]
amitk_ has quit [Remote host closed the connection]
prabhakarlad has joined #yocto
amitk has quit [Ping timeout: 268 seconds]
florian_kc has quit [Ping timeout: 260 seconds]
gsalazar_ has quit [Ping timeout: 265 seconds]
Tamis has joined #yocto
gsalazar_ has joined #yocto
Tamis17 has joined #yocto
Tamis has quit [Ping timeout: 260 seconds]
gsalazar_ has quit [Remote host closed the connection]
gsalazar_ has joined #yocto
jlf` has joined #yocto
Wouter0100 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100 has joined #yocto
mvlad has quit [Remote host closed the connection]
sakoman has quit [Quit: Leaving.]
Tamis17 has quit [Ping timeout: 260 seconds]
Minvera has quit [Remote host closed the connection]
gsalazar_ has quit [Remote host closed the connection]
gsalazar_ has joined #yocto
florian_kc has joined #yocto
Tokamak has joined #yocto
Tokamak_ has quit [Ping timeout: 260 seconds]
olani has quit [Ping timeout: 260 seconds]
gsalazar_ has quit [Ping timeout: 265 seconds]