<RP>
JPEW: I mailed a table of performance metrics to bitbake-devel and cc'd you. That frozenset patch works well, shame it makes the other change look so much worse!
florian_kc has quit [Ping timeout: 256 seconds]
psj has joined #yocto
dlan_ is now known as dlan
psj has left #yocto [#yocto]
psj has joined #yocto
<DvorkinDmitry>
is it possible to add a dependency like do_image_MYTYPE[depends] = "mc:otherarch:somerecipe:do_populate_sysroot" ?
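For reference, cross-multiconfig dependencies use the mcdepends varflag rather than depends; a minimal sketch, assuming the image builds in the default multiconfig and "otherarch" is declared in BBMULTICONFIG:

    # syntax is mc:<from-multiconfig>:<to-multiconfig>:<recipe>:<task>
    do_image_mytype[mcdepends] = "mc::otherarch:somerecipe:do_populate_sysroot"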
psj has left #yocto [#yocto]
dlan has quit [Changing host]
dlan has joined #yocto
psj has joined #yocto
<psj>
this might be a trivial thing, but I'm still learning a lot of the nuances in Yocto -- I'm looking to grab a patch and apply it. Specifically, this patch seems to be addressing the problem I'm experiencing:
<psj>
how do I go about actually snagging this patch and applying it locally?
<vmeson>
psj: I don't think it was ever merged so you'd have to git clone meta-mingw, checkout sumo and snag the patch from your link if you want to use it yourself.
<jlf`>
hi #yocto - hoping someone can point the way toward installing a set of packages to an alternate location in the rootfs - for example rooted at ${D}/opt instead of ${D}, with those binaries showing up in /opt/usr/bin on the target instead of /usr/bin, etc.
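No answer follows in the channel, but the usual knobs for this kind of relocation are the install-path variables from bitbake.conf; a sketch, assuming the packages' build systems honour the standard prefix variables:

    # in the affected recipes (or a shared .inc they all require)
    prefix = "/opt/usr"
    exec_prefix = "/opt/usr"
    # derived paths such as ${bindir} = ${exec_prefix}/bin then become /opt/usr/bin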
kscherer has quit [Quit: Konversation terminated!]
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
Tokamak has joined #yocto
Tokamak_ has quit [Ping timeout: 256 seconds]
sakoman has quit [Quit: Leaving.]
qschulz has quit [Quit: qschulz]
qschulz has joined #yocto
davidinux has quit [Ping timeout: 256 seconds]
davidinux has joined #yocto
starblue has quit [Ping timeout: 252 seconds]
starblue has joined #yocto
sakoman has joined #yocto
Tokamak_ has joined #yocto
Tokamak__ has joined #yocto
Tokamak has quit [Ping timeout: 248 seconds]
Tokamak_ has quit [Ping timeout: 256 seconds]
Tokamak has joined #yocto
Tokamak__ has quit [Ping timeout: 248 seconds]
Tokamak_ has joined #yocto
<psj>
vmeson: it only clicked when you sent your suggestion that sumo is literally the branch that particular patch exists in. That makes sense and I feel like a dope. THANK YOU!
chep has quit [Read error: Connection reset by peer]
chep` is now known as chep
alessioigor has joined #yocto
tor has joined #yocto
camus has joined #yocto
GNUmoon has joined #yocto
xmn has quit [Quit: ZZZzzz…]
xmn has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
camus has quit [Quit: camus]
Payam has quit [Quit: Leaving]
camus has joined #yocto
leon-anavi has joined #yocto
rob_w has joined #yocto
mckoan|away is now known as mckoan
<mckoan>
good morning
manuel1985 has joined #yocto
smooge has quit [Ping timeout: 260 seconds]
xmn has quit [Ping timeout: 256 seconds]
gho has joined #yocto
manuel1985 has quit [Ping timeout: 248 seconds]
smooge has joined #yocto
Payam has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
manuel1985 has joined #yocto
goliath has joined #yocto
Herrie has quit [Ping timeout: 248 seconds]
amelius has joined #yocto
smooge has quit [Ping timeout: 260 seconds]
smooge has joined #yocto
d-fens has joined #yocto
mvlad has joined #yocto
vladest has joined #yocto
grma has quit [Remote host closed the connection]
Herrie has joined #yocto
phako[m] has joined #yocto
<phako[m]>
Hello! So I won the task of providing our software stack for a yocto-based product. Are there any best practices documents for setting up and maintaining software layers?
<landgraf>
phako[m]: was it tough competition? :D
tomzy_0 has joined #yocto
<phako[m]>
" I need a volounteer - you!"
janvermaete[m] has quit [Quit: You have been kicked for being idle]
<jclsn>
Why is Bitbake not accurately distributing the load?
<jclsn>
We have a load average of 28.27, 22.60, 13.38 with 12 cores
<jclsn>
I would assume that without any settings Bitbake would try to make use of the cores as best as possible and not overload them
<mcfrisk>
jclsn: the kernel does that, not bitbake. bitbake just fires off as many parallel processes as configured (by default, the number of threads on the system).
<mcfrisk>
those load numbers are normal, sometimes bitbake saturates the CPU and scheduler with tasks which can also be waiting for IO
<landgraf>
jclsn: bitbake does nothing about underlying compilation in the recipes
<landgraf>
jclsn: bitbake (well, it's not even bitbake but the underlying app, depending on package manager/configuration) distributes building of the recipes depending on the number of cores, and the builder itself may spawn its own processes, and so on and so forth
d-s-e has joined #yocto
grma has joined #yocto
rokm_ has quit [Read error: Software caused connection abort]
rokm_ has joined #yocto
<jclsn>
mcfrisk: Yeah, but it seems to fire n processes times n parallel build processes
<jclsn>
So with 12 cores it has like 144
<jclsn>
I would have assumed that it does more to make optimal use of the hardware, but I guess you have to tune it manually on your build server
<d-fens>
yocto doesn't have the rolling-release style that my gentoo install, or rather portage, has - so how do you guys decide when to switch to a new LTS release, and how does one handle packages that need to be pinned to their versions?
<mcfrisk>
jclsn: yes, bitbake does that. 12 bitbake tasks, 12 parallel compile threads within each bitbake task (or even more as needed, as with ninja). This isn't too bad unless the build system runs out of RAM.
amelius has joined #yocto
amelius is now known as aduda
aduda is now known as amelius
amelius has quit [Client Quit]
<jclsn>
mcfrisk: Our devops colleague sees that differently
amelius has joined #yocto
camus has quit [Remote host closed the connection]
<JaMa>
jclsn: adjusting at least BB_NUMBER_THREADS is often useful, also look at the BB_PRESSURE_MAX_CPU
vm1 has joined #yocto
<JaMa>
most of my builds on various HW use BB_NUMBER_THREADS lowered to 8, while PARALLEL_MAKE is set based on available cores, e.g. PARALLEL_MAKE = "-j 70 -l 140" on 64t 3970x (with PARALLEL_MAKE:pn-webruntime = "-j 40" to avoid OOMKs)
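Putting those knobs together, an illustrative local.conf fragment for a 12-core builder (the values and the per-recipe override are examples, not recommendations):

    BB_NUMBER_THREADS = "8"                      # concurrent bitbake tasks
    PARALLEL_MAKE = "-j 12 -l 24"                # make/ninja jobs per task, plus a load cap
    BB_PRESSURE_MAX_CPU = "15000"                # throttle new tasks under CPU pressure (newer releases)
    PARALLEL_MAKE:pn-some-big-recipe = "-j 4"    # hypothetical per-recipe cap to avoid OOM kills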
<mcfrisk>
jclsn: all those tasks need to run anyway for a build to pass. If the linux kernel and its resources are fully loaded up, then in theory the kernel knows best how to handle the situation. Reality is different, but it heavily depends on what you are building and what the CPU, IO etc. needs are.
grma has joined #yocto
Saur has quit [Ping timeout: 260 seconds]
jclsn[m] has joined #yocto
<jclsn>
Well, I will do some testing now and see
<mcfrisk>
the plain CPU load is usually not a problem; running out of RAM is a show stopper; doing too much IO to disk is bad, especially if there is RAM available and rm_work would delete the files anyway...
Herrie|2 has joined #yocto
Herrie has quit [Read error: Connection reset by peer]
Herrie|2 is now known as Herrie
arielmrmx has quit [Remote host closed the connection]
<jclsn>
I am reading "Understanding the Linux Kernel" atm. Hopefully I will be able to form my own opinion about this soon :)
arielmrmx has joined #yocto
Saur has joined #yocto
rber|res has joined #yocto
<rber|res>
it looks like images contain LICENSE="MIT", which does not make sense to me, since it depends on the packages installed and image.bbclass contains LICENSE ?= "MIT" anyways.
<rburton>
d-fens: new release every six months, ~april and ~october. every two years the april release is an LTS.
<d-fens>
rburton thanks, is it best to stay on the LTS branch and do a weekly git pull or only run the cve check and react to specific alerts?
<rburton>
d-fens: if I were releasing products I'd use the LTS point releases, cherry-picking commits from the branch early as needed, but those will land in the next point release anyway.
<rburton>
the point releases are fairly frequent
<d-fens>
rburton i think i'll go for a stable build and just catch CVEs as needed, and do the big bump to the next LTS when available
<rburton>
i'd definitely keep up with the LTS point releases
<rburton>
they pull in a pile of cve fixes and other improvements
<rburton>
but they'll be safe: no unexpected upgrades or big changes, it's all bug fixes and security work
<d-fens>
ok, that means just git pull on the same branch (e.g. kirkstone)
<rburton>
yes, there are tags and proper releases too
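For example, tracking the kirkstone point releases by tag rather than by raw branch pulls (kirkstone tags follow the yocto-4.0.x pattern):

    $ git -C poky fetch --tags origin
    $ git tag -l 'yocto-4.0*'      # list kirkstone point releases
    $ git checkout yocto-4.0.1     # pin to one of them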
<hmw[m]>
hi, my do_configure is not working
<hmw[m]>
it tries to do a cp **//.config instead of **/.config (but I can't find where the do_configure is declared)
Guest82 has joined #yocto
alinucs has quit [Read error: Software caused connection abort]
florian_kc has joined #yocto
alinucs has joined #yocto
<rburton>
what recipe?
<hmw[m]>
trying to get old bitbake recipes working, to get a U-Boot 2017 version building on dunfell
<Guest82>
Hi All, I'm trying to figure out what it takes and what is best practice to isolate a CPU for a particular process at boot time. It seems either isolcpus/affinity or cgroups/cpuset are candidates for this. What would your recommendations be regarding Yocto setup to take care of this for me?
florian_kc has quit [Ping timeout: 248 seconds]
starblue has quit [Ping timeout: 268 seconds]
starblue has joined #yocto
florian_kc has joined #yocto
nemik has quit [Ping timeout: 260 seconds]
nemik has joined #yocto
<rburton>
Guest82: if it's a daemon, then use systemd and just set up the affinity in the service file
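A minimal sketch of that approach (unit and binary names hypothetical; CPUAffinity is the relevant systemd.exec directive):

    # /etc/systemd/system/mydaemon.service
    [Unit]
    Description=Daemon pinned to CPU 3

    [Service]
    ExecStart=/usr/bin/mydaemon
    CPUAffinity=3

    [Install]
    WantedBy=multi-user.target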
Guest13 has joined #yocto
nemik has quit [Ping timeout: 268 seconds]
nemik has joined #yocto
<Guest13>
hi, everyone.
<Guest13>
I have a custom meta-layer (meta-mymeta) and the fsl-community-bsp platform (kirkstone). I have a custom machine config for an i.MX SoM (coral-dev.conf) which is not included in meta-freescale/conf/machine. Also, I'm using the fslc-xwayland.conf distro.
<Guest13>
I'm trying to specify the opencv version like this: PREFERRED_VERSION_opencv = "4.5.2.imx"
<Guest13>
but I'm getting these warnings:
<Guest13>
"WARNING: preferred version 4.5.2.imx of opencv not available (for item opencv)
<Guest13>
WARNING: versions of opencv available: 4.5.5
<Guest13>
NOTE: Resolving any missing task queue dependencies
<Guest13>
WARNING: preferred version 4.5.2.imx of opencv not available (for item opencv)
<Guest13>
WARNING: versions of opencv available: 4.5.5
<Guest13>
WARNING: preferred version 4.5.2.imx of opencv not available (for item opencv-dev)
<Guest13>
WARNING: versions of opencv available: 4.5.5"
<rburton>
so it doesn't see the 4.5.2.imx release
<rburton>
where is that in the layers, and did you actually add that layer to bblayers.conf?
<rburton>
the lines above will tell you if the version got skipped for some reason
<Guest13>
it's in "meta-freescale/recipes-support/opencv" and i added "${BSPDIR}/sources/meta-freescale" to bblayers.conf.
<rburton>
custom machine? that opencv recipe does COMPATIBLE_MACHINE = "(mx8-nxp-bsp)"
<rburton>
so unless your machine matches that regex, then it won't work
<Guest13>
yep, custom machine.
<rburton>
meta-freescale has some really rough edges, like this...
<rburton>
the policy appears to be use their machine or don't use their layer
<rburton>
you can't pick and choose bits, it's not written generically
<rburton>
an easy hack would be to just delete that COMPATIBLE_MACHINE line from the recipe
<Guest13>
rburton thank you so much for the help.
<rburton>
if you're a paying customer i'd complain to them
<Guest13>
I'm a student and trying to improve my yocto knowledge :D
<rburton>
lesson learnt: why hardcoding machines isn't great
<rburton>
i wonder if a bbappend would work
<rburton>
you might be able to create opencv_4.5.2.imx.bbappend and add a new COMPATIBLE_MACHINE line in there for your machine
<rburton>
you'll still have the rest of the machine overrides in that file to sort out, but that's probably the right thing to do anyway
mvlad has quit [Remote host closed the connection]
<Guest13>
rburton thank uuuu
mvlad has joined #yocto
manuel_ has joined #yocto
manuel1985 has quit [Ping timeout: 256 seconds]
<Guest13>
rburton as you said, I created "opencv_4.5.2.imx.bbappend" in my layer and just added COMPATIBLE_MACHINE = "coral-dev"; problem gone, thank you again.
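A variant that keeps the layer's original NXP machines working too: since COMPATIBLE_MACHINE is a regex, the custom machine can be OR-ed onto the existing value instead of replacing it (a sketch, path illustrative):

    # meta-mymeta/recipes-support/opencv/opencv_4.5.2.imx.bbappend
    COMPATIBLE_MACHINE:append = "|coral-dev"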
<phako[m]>
is there a deadline for the summit registration?
<amelius>
phako[m]: I haven't seen any deadline, only that the CFP closes on the 28th
<LetoThe2nd>
phako[m]: you can literally still register when the event is already running, AFAIK
rob_w has quit [Ping timeout: 256 seconds]
davidinux has quit [Ping timeout: 256 seconds]
davidinux has joined #yocto
Guest13 has quit [Quit: Client closed]
<jclsn>
I am experiencing issues with devtool modify since kirkstone when AUTOREV is set in the recipe
<jclsn>
Exception: bb.fetch2.FetchError: Fetcher failure: Recipe uses a floating tag/branch without a fixed SRCREV yet doesn't call bb.fetch2.get_srcrev() (use SRCPV in PV for OE).
<jclsn>
Nothing in the kirkstone changelog about it. Is this a bug?
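For context, the error points at the usual OE convention for floating revisions: when SRCREV is AUTOREV, PV is expected to embed SRCPV so that bb.fetch2.get_srcrev() gets called. A sketch of the expected recipe shape (version string illustrative):

    SRCREV = "${AUTOREV}"
    PV = "1.0+git${SRCPV}"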
rob_w has joined #yocto
GNUmoon has quit [Ping timeout: 255 seconds]
BobPungartnik has joined #yocto
GNUmoon has joined #yocto
BobPungartnik has quit [Client Quit]
Guest13 has joined #yocto
<Guest13>
Let's say I have the test.bb recipe. In do_install:append() {} there is a line like this:
<Guest13>
and this line is wrong. I'm getting an error in the build because of this line. Can I fix this line somehow by creating test.bbappend, or do I need to fix test.bb?
<rburton>
the bbappend would need to :remove the broken bits, which may be tricky
<jclsn>
So I can confirm: When I check the source out with a fixed commit hash, building from the devtool workspace works. When I check out from a recipe set to AUTOREV, the build fails with the above-mentioned failure
<Guest13>
rburton I don't know how to use it. Can you show an example, or what can I find by searching? I mean, how to use ":remove" for my situation
<rburton>
Guest13: hard to give a concrete example without knowing what you actually want to do
<rburton>
Guest13: fixing the recipe is the best thing to do
<Guest13>
I want to change "cp -f bin/example ${D}${datadir}/samples/bin/", which is in "do_install:append(){}", to "cp -f ${S}/example ${D}${datadir}/samples/bin/" in a bbappend somehow.
<Guest13>
I don't want to change the .bb recipe for just one line
vm1 has quit [Quit: Client closed]
<LetoThe2nd>
Guest13: i don't think that this is possible.
<Guest13>
Log : "| cp: cannot stat 'bin/example_*': No such file or directory"
<Guest13>
when I comment out line 291, it works perfectly.
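Since a bbappend cannot surgically edit one line inside a function body, the usual fallback is to carry a fixed copy of the recipe in your own layer and let layer priority select it; a sketch, with the collection name and priority value illustrative:

    # meta-mymeta/conf/layer.conf: outrank the layer shipping the broken test.bb
    BBFILE_PRIORITY_mymeta = "10"
    # then copy test.bb into meta-mymeta (same PN/PV) and fix the cp line there:
    #   cp -f ${S}/example ${D}${datadir}/samples/bin/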
<qschulz>
jclsn: check whether we already have a bug entry for this in our Bugzilla, otherwise please open a ticket (also check it still happens in master)
<RP>
mcfrisk: I think there are patches around which change the stripping options which may be related to that?
<aduda>
hey, I'm trying to build u-boot with my SDK and getting the error '...libfdt_wrap.c:154:11: fatal error: Python.h: No such file or directory'; the environment is sourced from the SDK
aduda is now known as amelius
<mcfrisk>
RP: I was checking that; kirkstone has the same patches as master. Digging deeper...
xmn has joined #yocto
rob_w has quit [Ping timeout: 260 seconds]
tomzy_0 has quit [Quit: Client closed]
Lumpi has joined #yocto
Lumpi has quit [Client Quit]
hcg has quit [Quit: Client closed]
prabhakarlad has quit [Quit: Client closed]
manuel_ has joined #yocto
<phako[m]>
does it make sense to have a branch per yocto version if the only thing that ties me to that version is the patch I have to provide for that specific boost version?
<JPEW>
Ya, that's close to what I had. I used d.items() to prevent the extra lookup, and also started the function with 'store = self.store' to prevent the self lookup on every loop
prabhakarlad has joined #yocto
<JPEW>
Also no need to return the array when it's modified in place
<JPEW>
Sorry, no need to return the deps *dict*
<RP>
JPEW: right, it just makes it more convenient in the calling code
<RP>
by using __setstate__ it reduces the separate add() call that was there a lot
manuel_ has joined #yocto
<JPEW>
Also, I didn't see any performance difference using a single dict for the cache as opposed to a split string/set one, which simplifies it even more
<JPEW>
Especially since you eliminated the frozenset conversion
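A sketch of the two micro-optimisations described above (function and attribute names hypothetical; the pattern is binding a hot attribute to a local and iterating items() to avoid per-key lookups):

    def intern_deps(self, deps):
        store = self.store                # one attribute lookup instead of one per iteration
        for task, dep in deps.items():    # items() avoids a second deps[task] lookup
            deps[task] = store.setdefault(dep, dep)   # reuse the canonical object
        # no return needed: deps is modified in place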
amitk_ has joined #yocto
manuel_ has quit [Ping timeout: 256 seconds]
<JPEW>
My last takeaway yesterday was that any deduplication in the server process will be slow and single threaded, and there's not much we can do about that
amitk_ has quit [Ping timeout: 260 seconds]
<JPEW>
But... we might be able to help with that. If we can make the parsing threads send all the recipe data at once in one big pickle, then the deduplication on that side will transfer across to the server, which wouldn't need to deduplicate (probably).
<JPEW>
It means there still would be duplicates in the final cache, but only up to the number of parser processes
<JPEW>
It would mean completely reworking the way progress is reported though, since we wouldn't be getting one recipe at a time from the workers anymore
amitk_ has joined #yocto
* JPEW
was completely nerd-sniped by RP yesterday. Hopefully I can still finish my dev day talk in time :)
<vmeson>
Consider an infinite N-dimensional grid of 1 ohm resistors, plot the resistance between adjacent points as a function of increasing dimensionality ?
d-fens has quit [Ping timeout: 260 seconds]
<JPEW>
As an engineer, 1 ohm is too little for me to worry about, so "0"
<vmeson>
JPEW: lol
<LetoThe2nd>
vmeson: "radio guy to power guy: your 50/60Hz, thats basically DC anyways. power guy to radio guy: now please I touch your wires, then you touch mine"
<RP>
JPEW: I'm wondering if we could/should create a caching pickler. Subclass pickle to keep a cache of frozensets/strings at each end, such that it would send a reference on later occurrences rather than the object
<JPEW>
I think each pickled object it sends back has to be self-contained, and I'm not sure how you would work around that
<RP>
JPEW: you can hook pickle so you'd just put your own reference in?
<JPEW>
Right, python 3.8 added out of band data transfer, so we could use that
<JPEW>
Maybe
<RP>
JPEW: I don't think you would even need that. First reference to a set, you store it in a dict and give it a number. Next time you send the number. The receiving end keeps a similar dict recording where the object ends up
<RP>
you persist the dict on both ends and as long as you don't change the objects, it should work
<JPEW>
Ya, I don't know how much you can subclass pickle though. I think it's mostly in C
<JPEW>
But what you are talking about is the out-of-band buffer thing; it just doesn't do the actual transmit on first occurrence
<RP>
JPEW: well, you can control the __getstate__ on the class
<JPEW>
Ah, ya maybe that would work
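A minimal sketch of the caching-pickler idea, hooked in via pickle's persistent_id/persistent_load (one possible hooking point; __getstate__ as mentioned above would be another). The cache dicts persist across dumps/loads on each end, so a repeated frozenset travels as a small integer reference:

    import io
    import pickle

    class DedupPickler(pickle.Pickler):
        def __init__(self, f, cache):
            super().__init__(f, protocol=pickle.HIGHEST_PROTOCOL)
            self.cache = cache                  # frozenset -> index, kept across dumps

        def persistent_id(self, obj):
            if not isinstance(obj, frozenset):
                return None                     # everything else pickles normally
            ref = self.cache.get(obj)
            if ref is not None:
                return ("ref", ref)             # seen before: ship only the index
            self.cache[obj] = ref = len(self.cache)
            # ship the contents as a tuple so persistent_id doesn't recurse on it
            return ("new", ref, tuple(obj))

    class DedupUnpickler(pickle.Unpickler):
        def __init__(self, f, cache):
            super().__init__(f)
            self.cache = cache                  # index -> frozenset, kept across loads

        def persistent_load(self, pid):
            if pid[0] == "new":
                self.cache[pid[1]] = frozenset(pid[2])
            return self.cache[pid[1]]

    # the second message carries a tiny reference instead of the whole set
    send, recv = {}, {}
    for msg in ({"deps": frozenset({"a", "b"})}, {"deps": frozenset({"a", "b"})}):
        buf = io.BytesIO()
        DedupPickler(buf, send).dump(msg)
        buf.seek(0)
        print(DedupUnpickler(buf, recv).load())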
lexano has quit [Ping timeout: 252 seconds]
<PhoenixMage>
tlwoerner: Looks like someone from Radxa is working on a mainline u-boot for Rock3a, already has it working on CM3 SODIMM and E25 (they are rk3566 though)
lexano has joined #yocto
<JPEW>
RP: ya I think that might work since we have a limited set of pickleable things. You'd need a cache per worker process though otherwise the indices would get crossed?
<RP>
JPEW: right, there would be a some juggling needed but in principle it could work
<RP>
JPEW: the cache would only need to be a lookup against the main cache too