tlwoerner has quit [Read error: Connection reset by peer]
tlwoerner has joined #yocto
<moto-timo>
I'll note that anyone can submit a layer. You do not have to be a layer maintainer.
<moto-timo>
You might unearth problems with a given layer, and perhaps that will lead to the layerindex maintainers (me) sending the layer maintainers helpful emails.
smokey has quit [Quit: Ping timeout (120 seconds)]
nerdboy has quit [Remote host closed the connection]
vlrk has quit [Client Quit]
vlrk has joined #yocto
nerdboy has joined #yocto
nerdboy has joined #yocto
nerdboy has quit [Changing host]
davidinux has quit [Ping timeout: 255 seconds]
davidinux has joined #yocto
vlrk has quit [Quit: Client closed]
Saur75 has quit [Quit: Client closed]
Saur75 has joined #yocto
joekale has joined #yocto
starblue has quit [Ping timeout: 256 seconds]
starblue has joined #yocto
jclsn has quit [Ping timeout: 272 seconds]
jclsn has joined #yocto
mulk has quit [Ping timeout: 256 seconds]
mulk has joined #yocto
jmd has joined #yocto
pedrowiski has quit [Read error: Connection reset by peer]
sakoman has joined #yocto
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
ablu has quit [Ping timeout: 260 seconds]
ablu has joined #yocto
<mischief>
is it possible to somehow generate the '-native' sstate artifacts that a different host arch would need? say, if we only push to our sstate mirror from x86_64 hosts, but we want to provide sstate for people on aarch64 laptops.
joekale has quit [Ping timeout: 264 seconds]
kpo has joined #yocto
kpo has quit [Ping timeout: 272 seconds]
xmn has quit [Ping timeout: 255 seconds]
alperak has joined #yocto
alimon has quit [Ping timeout: 260 seconds]
vlrk has joined #yocto
vlrk has quit [Quit: Client closed]
Saur75 has quit [Quit: Client closed]
Saur75 has joined #yocto
paulg has quit [Ping timeout: 246 seconds]
paulg has joined #yocto
Chaser has joined #yocto
jmd has quit [Remote host closed the connection]
jmd has joined #yocto
Saur75 has quit [Quit: Client closed]
Saur75 has joined #yocto
jmd has quit [Remote host closed the connection]
gvmeson has joined #yocto
vmeson has quit [Ping timeout: 260 seconds]
alperak has quit [Quit: Client closed]
shivamurthy has quit [Ping timeout: 256 seconds]
shivamurthy has joined #yocto
Daanct12 has quit [Ping timeout: 252 seconds]
amitk_ has joined #yocto
amitk has quit [Ping timeout: 268 seconds]
alessioigor has joined #yocto
mckoan|away is now known as mckoan
vladest has quit [Ping timeout: 264 seconds]
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
Daanct12 has joined #yocto
rob_w has joined #yocto
alperak has joined #yocto
Guest94 has joined #yocto
pretec has joined #yocto
Guest94 has quit [Client Quit]
luc4 has joined #yocto
frieder has joined #yocto
vladest has joined #yocto
zpfvo has joined #yocto
ptsneves has joined #yocto
ptsneves has quit [Ping timeout: 255 seconds]
mbulut has joined #yocto
mbulut has quit [Client Quit]
prabhakalad has joined #yocto
prabhakalad has quit [Quit: Konversation terminated!]
prabhakalad has joined #yocto
leon-anavi has joined #yocto
creich has joined #yocto
Saur75 has quit [Quit: Client closed]
Saur75 has joined #yocto
<creich>
hey guys, we've encountered very low download speeds (~40KiB/s) when cloning the linux-yocto repo from git.yoctoproject.org. we've been watching this for a few weeks now, and it is always only that one repository. is this a known issue? or is there a specific reason for it?
<RP>
creich: no specific reason. Could you drop helpdesk@yoctoproject.org a note mentioning the issue along with which IP address git.yoctoproject.org is resolving to and a traceroute please?
<RP>
creich: we think there is a problematic network route but we're struggling to resolve it since we don't own the network involved
<creich>
RP: thanks for the hint. i'll write a mail
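For reference, the information RP asked for can be gathered with standard tools; a minimal sketch, assuming host and traceroute are installed:

    host git.yoctoproject.org          # note which IP address the name resolves to
    traceroute git.yoctoproject.org    # capture the network route for the helpdesk mail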
<RP>
rburton: how does the screensaver work on our qemu images?
<RP>
rburton: you might wonder why I ask that...
Saur75 has quit [Quit: Client closed]
Saur75 has joined #yocto
pretec_ has joined #yocto
<mckoan>
clever: same here, and it results in a do_fetch failure
pretec has quit [Ping timeout: 256 seconds]
<mckoan>
clever: consider that in the EU we have problems due to undersea cables broken in the Red Sea
<RP>
rburton: xorg.conf manuals say 10 mins then 20-40 mins for DPMS
pretec_ is now known as pretec
Vonter_ has quit [Ping timeout: 264 seconds]
Vonter has joined #yocto
pretec_ has joined #yocto
pretec has quit [Ping timeout: 256 seconds]
ptsneves has joined #yocto
Guest9743 has joined #yocto
starblue has quit [Ping timeout: 264 seconds]
starblue has joined #yocto
Saur75 has quit [Quit: Client closed]
Saur75 has joined #yocto
Noor has joined #yocto
<Noor>
Hello guys, I have a question. is there a way to tell a recipe that it should not generate sstate and should build from scratch every time?
<creich>
Noor: SSTATE_SKIP_CREATION = "1"
<creich>
Noor: you can put this into your recipe, this way it should avoid using sstate
<mckoan>
Noor: or do_compile[nostamp] = "1"
<Noor>
aahhh thanks. so nice of you guys
<creich>
mckoan: wouldn't that force a rebuild every time?
<creich>
just asking out of curiosity, since I was also just thinking about it
<Noor>
I think so. I think image recipes have this
<creich>
Noor: so actually it depends on what you need explicitly
<creich>
the second option would rebuild that recipe every time you run bitbake
<creich>
the other option should only avoid using the sstate cache, but i think it'll still use stamps and try to avoid a rebuild when there is no obvious change
<rburton>
the question is do you actually want it to run every time (in which case use nostamp) or do you just want to force it in testing (in which case use bitbake -C)
<creich>
exactly
<rburton>
or is the problem "my recipe makes so much sstate it's actually faster to rebuild than pull from sstate"
pretec__ has joined #yocto
<Noor>
I am moving towards SSTATE_SKIP_CREATION as I don't want it rebuilding every time. I just don't want to use sstate. I remember nostamp is used in images, which get rebuilt every time
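Summarizing the options discussed above as a sketch; the recipe name foo is hypothetical:

    # In foo.bb: don't write this recipe's output into the sstate cache
    SSTATE_SKIP_CREATION = "1"

    # Alternative: ignore stamps so do_compile reruns on every bitbake invocation
    do_compile[nostamp] = "1"

    # Or, as a one-off from the command line, force do_compile (and everything after it):
    #   bitbake -C compile foo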
pretec_ has quit [Ping timeout: 264 seconds]
<Noor>
Actually I want gdb to be usable outside of the build folder. Currently we use it from its image folder, so when it is built from sstate the image folder is not there and we can't find a working binary
<wmills_>
I use screen /dev/ttyUSBX for a serial terminal. With recent builds from master my text cursor disappears. This has not happened to me before. Does anyone know why?
<rburton>
Noor: so you're battling sstate because of something to do with gdb?
<rburton>
if you want to run gdb-native then running it from inside the build tree is just the wrong thing to do
<rburton>
because, yeah, if it came from sstate then the build tree does not exist
<rburton>
Noor: oe-run-native is the tool you want for running an arbitrary binary from an arbitrary native recipe. it will handle setting up paths and dependencies so that linkage actually works.
<rburton>
classic example of 'explain your problem not what you think the solution is' to be honest :)
Saur75 has quit [Ping timeout: 250 seconds]
pretec_ has joined #yocto
pretec__ has quit [Ping timeout: 255 seconds]
goliath has quit [Quit: SIGSEGV]
pretec_ has quit [Quit: Leaving]
lexano has joined #yocto
Daanct12 has quit [Quit: WeeChat 4.2.1]
Saur75 has joined #yocto
<Noor>
:)
<Noor>
we need the gdb binary for debugging purposes. the yocto docs recommended building gdb-cross-<architecture>. We found that "tmp/work/x86_64-linux/gdb-cross-aarch64/11.2-r0/image/home/ahsann/mel/releases/builds/ginkgo/hycon/build_hycon/tmp/work/x86_64-linux/gdb-cross-aarch64/11.2-r0/recipe-sysroot-native/usr/bin/aarch64-oe-linux/aarch64-oe-linux-gdb" can be used. But when gdb-cross-<architecture> is built from sstate the image folder is not created, as the install task is not executed. So I want to build gdb-cross-<architecture> from scratch (not use sstate) so that we can get a working binary in the image folder.
<Noor>
I hope I explained both the problem and my solution above
<rburton>
_never_ run stuff from the build tree directly
<rburton>
oe-run-native will tell you to run bitbake gdb-cross-aarch64 -caddto_recipe_sysroot, which will populate the sysroot for you
<rburton>
you don't want to build it from scratch, you want a sysroot
<rburton>
as i said: explain the problem not what you think the solution is
<landgraf>
rburton: "_never_ run stuff from the build tree directly" - unless you want to debug some weird issue with built binaries :)
<rburton>
Noor: problem: i need to run a binary from a native recipe. your solution: i want to turn off sstate so when i build gdb it always rebuilds. actual solution: populate the sysroot for the native recipe.
* landgraf
ran opkg from the build tree a few times
<rburton>
landgraf: shush :)
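Putting the advice above into commands, a sketch based on the gdb-cross example discussed here (not an authoritative procedure):

    # populate the recipe sysroot for the cross gdb
    bitbake gdb-cross-aarch64 -c addto_recipe_sysroot

    # locate that sysroot rather than digging through tmp/work by hand
    bitbake -e gdb-cross-aarch64 | grep ^RECIPE_SYSROOT_NATIVE=

    # the cross gdb then lives under .../usr/bin/aarch64-oe-linux/aarch64-oe-linux-gdb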
xmn has joined #yocto
Chaser has quit [Ping timeout: 252 seconds]
Chaser has joined #yocto
ctraven_ has quit [Ping timeout: 255 seconds]
ctraven has quit [Ping timeout: 256 seconds]
rsalveti has quit [Quit: Connection closed for inactivity]
<Noor>
rburton: (y)
<Noor>
now the next step. I have my solution, I will not share it :). I want to have this gdb binary available when we build the virtual kernel, so that our customers don't have to build gdb-cross-aarch64 themselves. They should just be able to find the binary at the mentioned place when the kernel is built
<Noor>
now how do I add a gdb-cross-aarch64 -> addto_recipe_sysroot dependency to the virtual kernel, so that when the kernel is built the binary is available for the customer?
rob_w has quit [Remote host closed the connection]
Guest2 has joined #yocto
Xagen has joined #yocto
mvlad has joined #yocto
gvmeson is now known as vmeson
joekale has joined #yocto
Guest2 has quit [Quit: Client closed]
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<rburton>
that's just a [depends]
rsalveti has joined #yocto
Chach_Deenu has joined #yocto
rfuentess has joined #yocto
Noor has quit [Quit: Client closed]
Noor has joined #yocto
<Noor>
rburton: but in that case we will not have the gdb binary available to the customer, or am I missing something?
<rburton>
Noor: you're not explaining what the customer builds or is given
<rburton>
Noor: if the customer builds the kernel and you have a dependency on gdb-cross in the kernel then they'll also build gdb-cross
Michael_Guest has joined #yocto
Michael_Guest has quit [Client Quit]
Chaser has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<Noor>
rburton: Our customer has our BSP. So we want to provide them a facility to debug the kernel. We don't want them to do it via the SDK. So when they build the BSP we want to provide a working gdb-cross binary so that they can debug the kernel. We also want that, if the customer removes the tmp folder and the build is executed from sstate, they are still able to get the gdb-cross binary.
<rburton>
Noor: as i said, use addto_recipe_sysroot and it doesn't matter if sstate is used or not.
<rburton>
if you want to provide tools that work without a tmpdir then that's called a SDK
Chaser has joined #yocto
<Noor>
ah now I get it. You mean something like do_compile[depends] += "gdb-cross-aarch64:do_addto_recipe_sysroot" in the kernel recipe
<rburton>
do_build[depends] but yeah
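As a sketch, in a kernel bbappend (the file name is hypothetical):

    # linux-yocto_%.bbappend
    # have the kernel build also populate gdb-cross-aarch64's recipe sysroot,
    # so the cross gdb binary exists even when everything comes from sstate
    do_build[depends] += "gdb-cross-aarch64:do_addto_recipe_sysroot"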
<Noor>
one last question :). is there a way to use an override in the do_build[depends] syntax, so that it is only effective when that override is set?
belsirk has joined #yocto
<rburton>
no
rfuentess has quit [Ping timeout: 240 seconds]
Xagen has joined #yocto
roussinm has joined #yocto
Chaser has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<alperak>
let's say I need to add a recipe (foo.bb) to meta-oe, but there is a meta-foo layer just for this recipe (foo.bb) and that layer depends on meta-oe. what should i do in this case? should i use RCONFLICTS or PROVIDES?
Saur75 has quit [Quit: Client closed]
Saur75 has joined #yocto
vladest has quit [Quit: vladest]
<roussinm>
My recipe depends on `tbb`, but `tbb` isn't installed in my image. From my current knowledge of yocto, if my recipe DEPENDS on a recipe, most of the time it should be found in the resulting image. Anyone depending on `tbb` having to do something special?
<rburton>
roussinm: that's not true though
<rburton>
DEPENDS is *only* about what is in the sysroot at build time
<roussinm>
I think the current version of my recipe doesn't link against tbb yet, but a future version will, and the TBB dependency is already present. Is it because my current recipe doesn't link against it that bitbake doesn't bother adding the RDEPENDS on it?
<rburton>
what does happen is that recipe foo will DEPEND on libbar, and then produce a binary foo that links to libbar.so. Bitbake knows that libbar.so is provided by libbar and adds the RDEPENDS for you. This is why you don't need to specify RDEPENDS=libc for every recipe, but also why no recipes pull gcc into the target (which would be the case if DEPENDS=RDEPENDS)
<roussinm>
If that binary doesn't link against it, bitbake will omit that RDEPENDS, correct?
<rburton>
if there's no linkage there's nothing to add, right
<rburton>
it literally finds binaries in the packages, identifies what they link to, and adds RDEPENDS as needed
<rburton>
so your C helloworld will automatically depend on libc
<roussinm>
Ok, yeah, it all makes sense now. thanks!
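A sketch of what rburton describes; the recipe and library names are hypothetical:

    # foo.bb
    DEPENDS = "libbar"   # build-time only: puts libbar's headers/libs in the sysroot

    # No explicit RDEPENDS needed: if the packaged foo binary links against
    # libbar.so, packaging detects the linkage and adds the runtime
    # dependency on libbar automatically.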
sakoman has quit [Quit: Leaving.]
sakoman has joined #yocto
sev99 has joined #yocto
Vonter has quit [Ping timeout: 268 seconds]
vladest has joined #yocto
Vonter has joined #yocto
belsirk has quit [Remote host closed the connection]
Saur75 has quit [Quit: Client closed]
Saur75 has joined #yocto
luc4 has quit [Ping timeout: 260 seconds]
Chaser has joined #yocto
<qschulz>
can someone tell me the purpose of the stable/<release>-nut branches for meta-openembedded ?
<qschulz>
it's way behind <release> and <release>-next branches, so wondering what they are for
<rburton>
-nut is next-under-test, so they're just staging branches that the maintainer was using
<rburton>
ignore them :)
<qschulz>
thanks :) (the stable prefix is very misleading though ;) )
Noor has quit [Ping timeout: 250 seconds]
jmd has joined #yocto
Chach_Deenu has quit [Quit: Leaving]
Saur75 has quit [Quit: Client closed]
Saur75 has joined #yocto
goliath has joined #yocto
<moto-timo>
qschulz: would stable-next make more sense somehow?
<moto-timo>
naming is hard
<rburton>
blame sgw
<rburton>
he called his staging branches mut from master-under-test
<rburton>
that turned into next-under-test for some of the branches that are not even next
<khem>
RP: seeing `| chown: cannot access '/mnt/b/yoe/master/build/tmp/work/riscv64-yoe-linux/systemd/255.4/image/var/log/journal': No such file or directory`
<khem>
I wonder if its related to version bump staged in master-next
mckoan is now known as mckoan|away
<qschulz>
moto-timo: what's wrong with <release>-next?
<moto-timo>
qschulz: well, -nut implies more (it is now a release candidate)
<moto-timo>
<release>-next just means it is a candidate for getting merged into the branch, but not a TAG
<qschulz>
moto-timo: not getting it sorry, can you try rephrasing?
<moto-timo>
qschulz: I was trying to figure out if a different naming of the branch would not have confused you.
<moto-timo>
qschulz: we're now beating a dead horse I think ;)
<qschulz>
didn't get the "-nut implies more (it is now a release candidate)" part :)
<qschulz>
moto-timo: i know which branch i should be using now, but i don't know why :)
<moto-timo>
qschulz: -nut is only used when that build is being targeted as a release candidate. One that will be released and tagged as a release.
<RP>
khem: I haven't seen that issue :/
Chaser has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
leon-anavi has quit [Remote host closed the connection]
<vvn>
oe-init-build-env is meant to be idempotent, correct?
<rburton>
yes
<rburton>
i think there was a bug in older releases where it wasn't quite
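i.e. sourcing it repeatedly from the same shell should be safe; a minimal sketch:

    source oe-init-build-env build    # first call: creates and enters the build dir
    source oe-init-build-env build    # second call: just re-enters it, no harm done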
<qschulz>
moto-timo: why is stable/kirkstone-nut last commit 4 months ago then?
<moto-timo>
humans
<rburton>
qschulz: point at the policy that says that branch will 1) exist 2) be continually updated
<rburton>
:)
<rburton>
personally i'd always push next and testing branches to the contrib repo, but i'm not a meta-oe maintainer so that's moot
Chaser has joined #yocto
<qschulz>
moto-timo: a mistake it is then, fine with this explanation :) thx
zpfvo has quit [Read error: Connection reset by peer]
<qschulz>
moto-timo: just to be sure: stable/<release>-nut (-rc0, -rc1, etc... basically) until <release> is released, then commits go to <release>-next first and are merged into <release> every now and then
<moto-timo>
qschulz: probably better answered by sakoman, armpit or khem
<moto-timo>
I am also not a stable release maintainer
<moto-timo>
and for layers like meta-java, I do not personally use -nut nor do I tag releases.
<moto-timo>
that would imply things in that layer work
<moto-timo>
"patches welcome"
<rburton>
qschulz: i'd ignore those branches, they're just staging branches used by the maintainer. the contents might be stale, or broken, or anything else.
<rburton>
if you actually watch oe-core master-next you'll see all sort of things pop in and out
<sakoman>
I use stable/xxx-nut for initial patch testing; patches will come and go in this branch, so you shouldn't use it for anything other than to check whether a new patch on the list has entered testing
<sakoman>
after I have a patch set that passes testing, I will send a review request to the mailing list, and move the patchset to stable/xxx-next
<sakoman>
If there are no comments I will pull the patches from stable/xxx-next into xxx
DvorkinDmitry has joined #yocto
<DvorkinDmitry>
ryzen 7700 vs 7900 - how big a difference will I see in OE/Yocto build speed with the same SSD/memory?
<rburton>
if you have the clout, ask your hardware provider for test machines
<rburton>
best way to answer that question
<qschulz>
sakoman: so -nut branches are personal playground, got it. -next may be rebased, <release> is the real deal
florian has quit [Quit: Ex-Chat]
<khem>
RP: and the patch is in now sadly, I would suggest reverting it
<khem>
da9db878a15 systemd: fix dead link /var/log/README
<khem>
RP: this happens when VOLATILE_LOG_DIR = "no" is set
<khem>
I think the poky defaults are perhaps different?
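For context, the setting khem refers to; a sketch, typically set in local.conf or a distro config:

    # keep /var/log on persistent storage instead of a volatile tmpfs
    VOLATILE_LOG_DIR = "no"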
sev99 has quit [Quit: Client closed]
<DvorkinDmitry>
rburton, I haven't, of course. the question is how big the improvement will be if I have 16 cores on the 7700 or 24 cores on the 7900?
<DvorkinDmitry>
or are the RAM speed/size and SSD speed more significant? Right now I'm running my builds on 24GB RAM, 12 cores and a RAID of two old hdds... and I feel it is very slow. The complete build takes about 12 hours.
Saur75 has quit [Quit: Client closed]
<DvorkinDmitry>
I'm buying a new machine for my builds: SSD instead of HDD, 64GB DDR5 RAM instead of 24GB DDR3, ryzen 7700 (or 7900) instead of a core i7 (12 threads)
Saur75 has joined #yocto
Chaser has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
Chaser has joined #yocto
<DvorkinDmitry>
so I'm wondering if the CPU will be the bottleneck in my config? Do I have to buy the 7900?
<gmorell>
DvorkinDmitry: the 7900 won't hurt, but I think the ssd/ram will definitely be far more significant for builds
<DvorkinDmitry>
gmorell, ok! If I'm building my current image in 12 hours, how fast do you think I'll build it with the new config on the 7700 or the 7900?
<DvorkinDmitry>
The price of the 7900 is 1.5x that of the 7700 for me.
florian_kc has joined #yocto
<gmorell>
I can't tell you about time, too many variables
<gmorell>
my personal philosophy is to max these out because you'll likely be running these for years on end
<DvorkinDmitry>
gmorell, the hdd will be 10 times faster, the RAM 2.5x the size and 2 times faster. bogomips for the 7700 is *3, for the 7900 is *4.
<DvorkinDmitry>
so I'm wondering whether cpu *3 or *4 will give the same performance improvement, or whether it will be less significant?
<gmorell>
they're within spitting distance
<gmorell>
you have 50% more cores, but at a lower clock because of cooling factors
<khem>
RP: sent a patch to fix systemd fallout
<Xogium>
DvorkinDmitry: by the way, bogomips are worthless and don't reflect the actual performance of a machine
<Xogium>
just saying
<gmorell>
this too
<gmorell>
i wonder if anyone has benched yocto with x3d units vs non x3d units
<gmorell>
if the extra pile of L3 actually helps at all
<Xogium>
tbh I personally stick to amd these days, because it feels like intel sucks more and more
<Xogium>
and not just in terms of tdp ;)
florian_kc has quit [Ping timeout: 264 seconds]
frieder has quit [Remote host closed the connection]
Chaser has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
vladest has quit [Remote host closed the connection]
prabhakalad has quit [Ping timeout: 246 seconds]
prabhakalad has joined #yocto
ptsneves has quit [Ping timeout: 255 seconds]
<LetoThe2nd>
zeddii: the suggested incantation "k3s agent -t /var/lib/rancher/k3s/server/token --server http://localhost:6443/" errors out because it is not https, and changing it to "k3s agent -t /var/lib/rancher/k3s/server/token --server https://localhost:6443/" complains "failed to get CA certs: Get "https://127.0.0.1:6444/cacerts": read tcp 127.0.0.1:40214->127.0.0.1:6444: read: connection reset by peer"
joekale_ has joined #yocto
joekale has quit [Ping timeout: 268 seconds]
joekale has joined #yocto
joekale_ has quit [Ping timeout: 260 seconds]
florian has joined #yocto
alessioigor_ has joined #yocto
alessioigor has quit [Ping timeout: 268 seconds]
alessioigor_ is now known as alessioigor
mvlad has quit [Remote host closed the connection]
* RP
notes that swat isn't doing the work, so he does it himself and sends the reminders
Xagen has joined #yocto
Xagen has quit [Client Quit]
<sakoman>
zeddii: your patch seems to have fixed the parted ptest issues, but now {'util-linux': ['fdisk:_gpt-resize']} ptest is failing :-(
<zeddii>
I'm out of time trying to update the old kernel. I'll pick it up in a few weeks again.
<sakoman>
zeddii: should I revert that linux-yocto 5.15 version bump series then?
<sakoman>
I really screwed up in merging them :-(
<zeddii>
trying the ubuntu patch might be another option, but it needs to have its context fixed for our tree versus theirs.
Xagen has joined #yocto
<zeddii>
sakoman: that's probably your only option (revert), I have to turn my attention back to meta-virt and some internal things that are due very shortly.
<sakoman>
zeddi: OK, will do
|Xagen has joined #yocto
<zeddii>
the wind river guys are still using 5.15, so asking them what they are doing for this might also be useful.
<zeddii>
Kexin is sanity testing the -stable updates for their SDK based BSPs, so presumably they are keeping up to date with the tip of my tree and might not even know they have this problem.