leon-anavi has quit [Remote host closed the connection]
<Entei[m]>
Hey guys, I have a project to build a RISC-V 64-bit (with rpm support) image for qemu. I am starting with x86_64 to learn Yocto for now. I wish to learn how to configure all the layers and packages that would be installed. I have watched many YouTube videos on the topic, but nothing much comes up regarding configuration.
<Entei[m]>
So far I just did `bitbake core-image-full-cmdline`, and got a very barebones image. I'd like it to have packages like `rpmbuild`, `systemd`, `gcc`, etc.
<Entei[m]>
Could you guys help with some links or tutorials? This is my first time developing a Linux distro.
<khem>
Entei: build something like core-image-weston-sdk or core-image-sato-sdk if you want X11
sakoman has quit [Quit: Leaving.]
amitk has joined #yocto
prabhakarlad has quit [Ping timeout: 260 seconds]
advi[1] has quit [Ping timeout: 260 seconds]
davidinux has quit [Ping timeout: 255 seconds]
davidinux has joined #yocto
rob_w has joined #yocto
Thorn has quit [Ping timeout: 255 seconds]
<JaMa>
abelloni: "e9f4bff7d1 insane.bbclass: make patch-fuzz a warning again" should be dropped from your contrib/abelloni/master-next as you added "54cf52189c devtool: ignore patch-fuzz errors when extracting source" which replaces it, right?
amitk_ has joined #yocto
kanavin has quit [Quit: Leaving]
goliath has joined #yocto
Thorn has joined #yocto
ecdhe has quit [Ping timeout: 252 seconds]
ecdhe_ has joined #yocto
<landgraf>
What's the "best/proper" way of building a configuration file (boot.scr, to be precise) and placing it into IMAGE_BOOT_FILES? It depends on a few other recipes (xen, qemu-helper, and a few native recipes). I've done it with a bbclass and with a -config-native.bb recipe; both approaches work, but neither looks good :-/
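For reference, one common shape for this (a sketch only; the recipe name, boot.cmd source and mkimage arch flag are illustrative assumptions, not landgraf's actual setup) is a small recipe that inherits deploy, generates boot.scr with mkimage from u-boot-tools-native, and lets the machine or image config pick it up via IMAGE_BOOT_FILES:
```
# boot-scr_1.0.bb (hypothetical)
SUMMARY = "Generate the boot.scr boot script"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

SRC_URI = "file://boot.cmd"
DEPENDS = "u-boot-tools-native"

inherit deploy

do_compile() {
    # wrap the plain-text script into a U-Boot script image (-A should match your target)
    mkimage -A arm64 -T script -C none -d ${WORKDIR}/boot.cmd ${B}/boot.scr
}

do_deploy() {
    install -m 0644 ${B}/boot.scr ${DEPLOYDIR}/
}
addtask deploy after do_compile before do_build
```
plus, in the machine or image configuration:
```
IMAGE_BOOT_FILES += "boot.scr"
```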
Michael23 has joined #yocto
leon-anavi has joined #yocto
<Entei[m]>
<khem> "Entei: build something like..." <- That's still a reference image. I need to customize as per my needs.
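For what it's worth, the usual first step for that customization is local.conf, then a custom image recipe in your own layer once things stabilize. A minimal sketch, assuming rpm packaging and that tools-sdk/package-management are the wanted features (the exact package split for rpmbuild varies by release):
```
# conf/local.conf (quick experiments)
INIT_MANAGER = "systemd"                  # use systemd as the init system
PACKAGE_CLASSES = "package_rpm"           # build rpm packages (the poky default)
IMAGE_INSTALL:append = " rpm"             # rpm tooling on the target
EXTRA_IMAGE_FEATURES += "tools-sdk package-management"   # gcc etc. plus keep the package db
```
Longer term the same settings normally move into a custom distro config and an image recipe, e.g. one that does `require recipes-extended/images/core-image-full-cmdline.bb` and appends to IMAGE_INSTALL / IMAGE_FEATURES.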
<LetoThe2nd>
yo dudX
amitk_ has quit [Remote host closed the connection]
Guest76 has joined #yocto
goliath has quit [Quit: SIGSEGV]
Guest76 is now known as kaari
kaari is now known as _kaari
<_kaari>
Hey guys, I've started playing with yocto. Steep learning curve! Anyway, I know this has been asked a lot of times before, but why the hell does the kernel take so long to fetch?? DL speed is 1Mbps and it's been 30 hours, still at 50%. Any ideas to optimize this? What is it downloading exactly? I hope not the whole git tree.
<LetoThe2nd>
_kaari: well the kernel is HUGE! ;-) and this seems to be a connection problem then; most kernel mirrors can provide much more than that.
<LetoThe2nd>
_kaari: are you using some board that maybe points to a non-standard or custom repository?
<landgraf>
_kaari: you can try changing the recipe to use a shallow clone, but it depends on the recipe/kernel vendor/etc. 30 hours is way too much
<_kaari>
Thx for the response guys :D I'm merely building a standard poky with systemd, on my dev machine in a docker container. I'd like to try the shallow clone / depth=1 but I'm not sure how, could you please point me in the right direction?
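The knobs for that live in bitbake's git fetcher and can be set globally in local.conf; a minimal sketch (depending on the bitbake version this mainly governs the shallow mirror tarballs stored in DL_DIR rather than guaranteeing a shallow first clone from upstream):
```
# conf/local.conf
BB_GIT_SHALLOW = "1"                 # allow shallow git checkouts/tarballs
BB_GIT_SHALLOW_DEPTH = "1"           # keep only the latest revision
BB_GENERATE_SHALLOW_TARBALLS = "1"   # store shallow tarballs in DL_DIR
```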
Schlumpf has joined #yocto
zpfvo has joined #yocto
<RP>
khem: interesting. I'd just worry about the overhead of ptrace in our workloads
<JaMa>
wrt the question about using ECC (so that it's not completely OT), damn, everything should have a checksum; I'm synchronizing some files between local disk and NAS and was surprised that some files are slightly different, 1 family video has 2 bit-flips and another has 4, but both seem to play the same, so I don't know which one is the "original"
<JaMa>
I know ECC doesn't prevent this either, but it at least lowers the probability of the issue being caused by RAM rather than disk or network
<rburton>
huh
<JaMa>
I guess I'll add md5sums files to every directory on NAS, so that at least next time I'll know if it changed since today
amitk__ has joined #yocto
amitk_ has quit [Ping timeout: 260 seconds]
Schlumpf has quit [Ping timeout: 260 seconds]
_kaari has quit [Quit: Client closed]
<RP>
p34nuts[m]: Wouldn't you want this by download rather than by subdir?
invalidopcode1 has quit [Remote host closed the connection]
<linex[m]>
I'm getting weird behavior: if I specify a preferred version of a package, bitbake tells me that version is not available, and when I switch, the same happens again
<linex[m]>
WARNING: preferred version v1.2.14+git% of containerd-opencontainers not available (for item virtual-containerd)
<linex[m]>
WARNING: versions of containerd-opencontainers available: v1.5.8+gitAUTOINC+1e5ef943eb
<linex[m]>
the difference in the error messages is virtual-containerd vs virtual/containerd
<RP>
linex[m]: it sounds like the PROVIDES/DEPENDS of some recipe is incorrect/mismatched
seninha has quit [Quit: Leaving]
Michael23 has quit [Quit: Client closed]
vvmeson has quit [Quit: Konversation terminated!]
<linex[m]>
yeah I'm not sure why there is virtual-containerd and virtual/containerd
sakoman has joined #yocto
vvmeson has joined #yocto
AKN has joined #yocto
rob_w has quit [Remote host closed the connection]
davidinux has quit [Ping timeout: 248 seconds]
seninha has joined #yocto
davidinux has joined #yocto
AKN has quit [Read error: Connection reset by peer]
davidinux has quit [Ping timeout: 255 seconds]
davidinux has joined #yocto
Thorn has quit [Quit: Oh, so they have Internet on computers now!]
ThomasAnderson[m has joined #yocto
Guest49 has quit [Ping timeout: 260 seconds]
otavio has joined #yocto
DvorkinDmitry has quit [Remote host closed the connection]
<zeddii>
one is depends, and one is rdepends
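In other words, virtual/containerd is the build-time (PROVIDES/DEPENDS) name and virtual-containerd the runtime (RPROVIDES/RDEPENDS) one, while PREFERRED_VERSION is keyed on the recipe name. A sketch of how the pieces line up (the PROVIDES/RPROVIDES lines are an assumption about how the providing recipe is wired up, not quoted from meta-virtualization):
```
# providing recipe (assumed wiring)
PROVIDES += "virtual/containerd"           # build-time name, matched by DEPENDS
RPROVIDES:${PN} += "virtual-containerd"    # runtime name, matched by RDEPENDS

# local.conf / distro config: selection is keyed on the recipe name
PREFERRED_PROVIDER_virtual/containerd = "containerd-opencontainers"
PREFERRED_VERSION_containerd-opencontainers = "v1.5.8+git%"
```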
td72 has joined #yocto
<td72>
Hi,
<td72>
is there some reason why pseudo abort was missed during the introduction of inclusive language to Yocto?
<p34nuts[m]>
<RP> "p34nuts: Wouldn't you want..." <- Actually I'm playing with different use cases and I'm realizing that grouping by subdir isn't a good idea anyway, because many `file://` src uris wouldn't fit into that scheme. I'm still thinking...
td72 has quit [Quit: Client closed]
td26 has joined #yocto
<p34nuts[m]>
p34nuts[m]: the final goal is to get and represent this information (which would require some further steps beyond unpack): upstream source file => workdir source file => debug source file => binary file
<p34nuts[m]>
* would require tracing some further
kscherer has joined #yocto
<marex>
ERROR: Nothing RPROVIDES 'tk' (but /wd/meta-openembedded/meta-python/recipes-devtools/python/python3-pillow_9.4.0.bb RDEPENDS on or otherwise requires it)
<marex>
tk was skipped: missing required distro feature 'x11' (not in DISTRO_FEATURES)
<marex>
is this known ?
<marex>
this started popping up since meta-openembedded kirkstone ffe9a543e ("python3-pillow: add ptest support")
<marex>
khem: ^
goliath has quit [Quit: SIGSEGV]
<marex>
and yes, I don't have x11 in DISTRO_FEATURES
<JaMa>
armpit: ^^
<JaMa>
armpit: marex: it's fixed in master with 6e8c90560e0 which wasn't backported to kirkstone
<JaMa>
armpit: khem: while kirkstone and langdale have "9bb8195c84 python3-pillow: Add distutils, unixadmin for ptest" which isn't in master
<marex>
JaMa: ah, thanks !
<RP>
p34nuts[m]: right, you likely want to be easily able to translate a file back to a uri
<p34nuts[m]>
RP: ideally, some graph representation language would be the easiest option to put all the information together... but AFAIK standard Python doesn't support any such languages
roussinm has quit [Quit: WeeChat 3.0]
roussinm has joined #yocto
invalidopcode1 has quit [Remote host closed the connection]
invalidopcode1 has joined #yocto
<p34nuts[m]>
<p34nuts[m]> "ideally, some graph representati..." <- would it be acceptable to create some simple custom data class in python and store data in pickle files?
kanavin has joined #yocto
<marex>
JaMa: shall I just send it to sakoman or is the backport already underway ?
<JaMa>
marex: it's in meta-oe so armpit is maintainer
<JaMa>
marex: I don't even use pillow, so if you can send the missing change to master and that tk backport to langdale and kirkstone then go ahead please
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
<JaMa>
marex: are you going to send the other one as well?
<marex>
JaMa: the other one ?
<JaMa>
16:42 < JaMa> armpit: khem: while kirkstone and langdale have "9bb8195c84 python3-pillow: Add distutils, unixadmin for ptest" which isn't in master
Haxxa has quit [Quit: Haxxa flies away.]
<JaMa>
and langdale should be tagged as well in the one you sent already
<marex>
JaMa: I can send it, but I cannot test it beyond building it
<JaMa>
it's the same version and a simple patch, let's keep it in sync in all 3 branches
Haxxa has joined #yocto
<JaMa>
vmeson will see it as well, when it's sent to ML
<khem>
thanks for finding that, JaMa; I have applied it to master-next with the needed rebase
pabigot has joined #yocto
<marex>
khem: thanks
rsalveti has quit [Quit: Connection closed for inactivity]
<khem>
I usually ignore release-tagged patches and let armpit sort them out
<khem>
unless they have [master/kirkstone] or some such
<marex>
khem: while I have your attention ... do you mind a question about meta-browser ?
<marex>
is that still something you maintain ?
<khem>
sometimes same patch applies to multiple branches
<JaMa>
khem: thanks, you're the best! :)
<khem>
when some patch contains [releasename] I assume it's always for that release and the fix is not needed in master
<khem>
marex: sure go for it
<marex>
I need a tweak like this for chromium
<marex>
do_configure:prepend() { sed -i "/kMainBrowserContentsMinimumWidth/ s@500@480@" ${S}/chrome/browser/ui/views/frame/browser_view_layout.h
<marex>
}
<marex>
khem: to support panels which are 480px wide; chromium defaults to a 500px minimum width
<marex>
so when I run chromium --fullscreen --kiosk, it fails on those small panels
<marex>
khem: I am now trying to get the change into chromium itself, let's see how it goes, but if it doesn't go, would you be open to including this, most likely in patch form, in meta-browser?
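If it doesn't land upstream, one way to carry it in patch form (a sketch; the bbappend and patch names are illustrative, and whether meta-browser takes it is of course khem's call):
```
# chromium-ozone-wayland_%.bbappend (or chromium-x11_%.bbappend)
FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://0001-lower-minimum-browser-width-to-480px.patch"
```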
Chocobo has joined #yocto
<JaMa>
khem: small nitpick to keep the files in sync, the tk dependency should be last (after ${PYTHON_PN}-unixadmin)
<khem>
hmm can we turn that into a knob ?
<JaMa>
but there is also a missing space before the / on unixadmin, which would be another difference from the already-applied langdale and kirkstone versions; my OCD is triggered easily ..
<otavio>
marex: maybe adding a PACKAGECONFIG to set this minimum width? Did you consider using cog?
<marex>
otavio: does cog launch chromium now ?
<marex>
otavio: or is it still webkit only ?
<otavio>
marex: no. As webview.
<marex>
so, no, chromium is a requirement
<otavio>
marex: webkit (wpe in fact)
<Chocobo>
Hi all. Have a frustrating problem that I am hoping you might be able to help with - I am generating a btrfs rootfs image and I would like it to contain the image.ub (kernel + dtb). Unfortunately I think there is a circular dependency. I created a ROOTFS_POSTPROCESS_COMMAND_append function that does "install -m 0644 ${DEPLOY_DIR_IMAGE}/fitImage ${IMAGE_ROOTFS}/boot/image.ub
<Chocobo>
but image.ub seems to grow unbounded.
<khem>
what advantages do you have with cog that chrome driver does not offer
<otavio>
khem: it is lighter
<otavio>
khem: faster to build
<marex>
and the JS interpreter is slower
<otavio>
khem: and smaller
<marex>
and users test with chromium on their desktop, so they want chromium on their target
<otavio>
marex: this too
<khem>
JaMa: right, I got carried away with 'tk' sorting before 'unix' but it's actually 'python3-un...
<marex>
for small/slow/old systems, wpe is fantastic
<marex>
for anything new-ish, chromium it is
<marex>
otavio: re packageconfig, I would much rather have this land in chromium itself rather than have it in OE in some way
<otavio>
marex: it would be good indeed
<marex>
Chocobo: isn't the kernel automatically deployed into /boot/ of the rootfs?
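For reference, the stock mechanism for that (a sketch; the package names assume KERNEL_IMAGETYPE = "fitImage", and petalinux may layer its own handling on top) is to install the kernel's own packages instead of copying from DEPLOY_DIR_IMAGE in a postprocess hook, which avoids the circular dependency:
```
# image recipe or local.conf sketch
KERNEL_IMAGETYPE = "fitImage"
IMAGE_INSTALL:append = " kernel-image-fitimage kernel-devicetree"
```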
<JaMa>
khem: I was just checking git diff origin/kirkstone origin/master-next -- meta-python/recipes-devtools/python/python3-pillow_9.4.0.bb (not alphabetical order)
<marex>
Chocobo: which OE release is that ?
<khem>
marex: in the grand scheme of things, when you are ready to use chrome, optimizing to run it using a cog driver vs a chrome driver could be a drop in the ocean
frieder has quit [Remote host closed the connection]
<khem>
JaMa: I fixed it
<Chocobo>
marex: That may be part of my problem... this is petalinux, so it has some Xilinx crud around it. I haven't had much luck asking them so I thought I might try here.
<marex>
khem: I just run it directly , without cog
<khem>
I get the argument on WPE vs Chromium
<JaMa>
khem: thanks
<marex>
Chocobo: oh ... yeah ...
<marex>
petalinux ...
<marex>
zeddii: ^
<marex>
Chocobo: I just avoid petalinux like the plague
<Chocobo>
I wish I could
<marex>
you actually can, it is just a weird wrapper around meta-xilinx* layers
<smurray>
khem: would you be open to backporting your mpd updates in meta-openembedded to kirkstone? There are PipeWire output fixes in 0.23.8 that I'd like to get so I can drop carrying patches
<Chocobo>
well... politically I might not be able to :)
<khem>
smurray: I will leave that to armpit to decide
<khem>
but they look fine to get backported
<smurray>
khem: ah, right. I can pull them out and submit them for kirkstone (and langdale, I guess?)
<vmeson>
where/how does one find recipes for node packages for nodejs? For example, trying to add more tests for node-18, we need: https://www.npmjs.com/package/tap
<vmeson>
Maybe we should NOT bother to run all these tests after all. We've added the basics to nodejs/master and the rest could be more trouble than they are worth... ;-)
* vmeson
nudges moto-timo
<moto-timo>
vmeson: most likely you do not want a recipe per npm package. This is why I mentioned the npm:// fetcher the other day. We want the npm dependencies staged in a cache like what the crate:// fetcher does.
brazuca has quit [Quit: Client closed]
<vmeson>
moto-timo: ok, thanks. Is that a WIP (or work looking for developers), or is there a good example of a recipe that uses the npm fetcher?
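For the fetcher side, a minimal sketch of what such a recipe body looks like (normally generated rather than hand-written, e.g. with `devtool add "npm://registry.npmjs.org;package=tap;version=<version>"`; the shrinkwrap path here is illustrative):
```
SRC_URI = "npm://registry.npmjs.org;package=tap;version=${PV} \
           npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json"
S = "${WORKDIR}/npm"
inherit npm
```
The npmsw:// entry is what stages the whole dependency tree into the download cache, roughly analogous to what crate:// does for Rust.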
<vmeson>
I'll spend some time learning about what's involved and decide if we're going to add the next round of ptests in the coming days.
<vmeson>
moto-timo: thanks, I'll take a look tomorrow once I get the chromium off me!
prabhakarlad has quit [Quit: Client closed]
<Chocobo>
Is there a way to completely scrub a package's state and all of its dependencies? I keep getting myself into situations where it seems like I need to go all the way to mrproper and that takes forever to build.
<vmeson>
moto-timo: well, the graves for MIPS and 32-bit OSes have been dug but I don't think they're dead yet!
<Chocobo>
It is like... once I get it in a bad state there is nothing I can do to recover it.
roussinm has quit [Quit: WeeChat 3.0]
roussinm has joined #yocto
DvorkinDmitry has joined #yocto
kevinrowland has joined #yocto
<kevinrowland>
How exactly do the files in tmp/deploy/images/*/ get removed when I clean the recipe that puts them there? I'm trying to write my own BBCLASS that deploys some special test results to tmp/deploy/tests/. So far I mimicked deploy.bbclass so that my_task[sstate-inputdirs] is a directory relative to WORKDIR, where I place the results, and then
<kevinrowland>
my_task[sstate-outputdirs] is basically ${DEPLOY_DIR}/tests/. That actually works, and my results magically show up in ${DEPLOY_DIR}/tests/. Now I want those results to be removed when I clean the recipe, but it's not working.
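For context, a sketch of the class shape being described, modelled on deploy.bbclass (the class/task names and the tests subdirectory come from this discussion or are invented, not an existing OE-Core class):
```
# my-test-results.bbclass (sketch)
TESTRESULTSDIR = "${WORKDIR}/deploy-test-results"

SSTATETASKS += "do_deploy_test_results"
do_deploy_test_results[sstate-inputdirs]  = "${TESTRESULTSDIR}"
do_deploy_test_results[sstate-outputdirs] = "${DEPLOY_DIR}/tests"
do_deploy_test_results[cleandirs] = "${TESTRESULTSDIR}"

do_deploy_test_results() {
    install -d ${TESTRESULTSDIR}
    # copy the special test results into ${TESTRESULTSDIR} here
}

python do_deploy_test_results_setscene() {
    sstate_setscene(d)
}
addtask deploy_test_results after do_compile before do_build
addtask deploy_test_results_setscene
```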
<kevinrowland>
I discovered the manifest file in tmp/sstate-control/ -- and I see the sstate_clean_manifest function in sstate.bbclass... still, when I run :do_cleanall, the files in the manifest aren't removed
<kevinrowland>
I also see that do_clean should call all of the functions in CLEANFUNCS, which includes sstate_cleanall, which calls sstate_clean_manifest.
<kevinrowland>
But again, after a :do_clean, the files are still there! I can start to instrument the python with print()s, but was hoping someone here had a clue
<kevinrowland>
Ok, it looks like SSTATETASKS is empty when sstate_cleanall runs, although when I do `bitbake -e` I do see that my task is in SSTATETASKS along with the usual suspects
<kevinrowland>
Nevermind, SSTATETASKS is not empty, just operator error