<LetoThe2nd>
RP: added patches and status yesterday, yes. Did something go wrong?
starblue has quit [Ping timeout: 264 seconds]
starblue has joined #yocto
prabhakarlad has quit [Ping timeout: 250 seconds]
mvlad has joined #yocto
thomas_34 has joined #yocto
Daanct12 has quit [Quit: WeeChat 4.2.1]
<RP>
LetoThe2nd: I can't see them there? :/
<LetoThe2nd>
RP: Strange, I’ll check it in a few.
<RP>
LetoThe2nd: there were some other tweaks I was going to mention
enok has joined #yocto
Daanct12 has joined #yocto
<thomas_34>
I've got some issues with the deploy-task of my recipe. I specify a directory which should get used (DEPLOYDIR): https://pastebin.com/n1GBju9L
<thomas_34>
Can someone explain to me why DEPLOYDIR for my main machine is NOT set to "${CUSTOM_PATH_DEPLOY_DIR_IMAGE}"?
<thomas_34>
For the k3r5-machine it is working, and the variable got set correctly
<thomas_34>
I don't understand line 11 in particular. Why is there a pre-expansion value?
<RP>
thomas_34: you're setting the variable with an override so it would only apply if the override is active?
<thomas_34>
RP, I set DEPLOYDIR for the default machine config (line 5), and for k3r5 machine (line 9)
<thomas_34>
Both times, I set to the same variable. I don't understand why the "result" differs. (Line 13 and 18)
ehussain has joined #yocto
<RP>
thomas_34: line 18 is telling you the value of the variable "DEPLOYDIR:k3r5"
<thomas_34>
Shouldn't DEPLOYDIR and DEPLOYDIR:k3r5 both be equal if I execute bitbake once with the default machine config, and then with the k3r5 config?
<RP>
at line 5 it is set in your recipe; then the recipe probably does "inherit deploy" and it is set again at line 7 by deploy.bbclass to something else?
<RP>
the override version works since override expansion happens later again
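A sketch of the ordering RP is describing, using hypothetical recipe lines (the deploy.bbclass assignment shown in the comment is what openembedded-core's class does, but verify against your revision):

```bitbake
# In the recipe, parsed first:
DEPLOYDIR = "${CUSTOM_PATH_DEPLOY_DIR_IMAGE}"

# "inherit deploy" is parsed next, and deploy.bbclass contains roughly:
#   DEPLOYDIR = "${WORKDIR}/deploy-${PN}"
# Plain assignments apply in parse order, so the class silently wins.
inherit deploy

# The override-qualified assignment survives: overrides are resolved
# after all parsing is done, so it is applied on top of the class value.
DEPLOYDIR:k3r5 = "${CUSTOM_PATH_DEPLOY_DIR_IMAGE}"
```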
chep has joined #yocto
<thomas_34>
So wait, it is because of the order? Yes, I first set DEPLOYDIR in my recipe, and then do the inherit deploy.bbclass.
<thomas_34>
ahhhhh okay, now I understand...
ehussain has quit [Ping timeout: 256 seconds]
<thomas_34>
Okay - And what does "pre-expansion value" exactly mean? The value of the variable before bitbake applies the override expansion?
Michael_Guest has quit [Quit: Client closed]
chep has quit [Ping timeout: 256 seconds]
Daanct12 has quit [Quit: WeeChat 4.2.1]
<RP>
thomas_34: pre expansion means it hasn't expanded any ${}
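In other words (a hedged illustration; the variable name is taken from the pastebin, the path is made up):

```bitbake
CUSTOM_PATH_DEPLOY_DIR_IMAGE = "/deploy/custom"
DEPLOYDIR = "${CUSTOM_PATH_DEPLOY_DIR_IMAGE}"

# pre-expansion value of DEPLOYDIR (the literal right-hand side as written):
#   "${CUSTOM_PATH_DEPLOY_DIR_IMAGE}"
# expanded value, i.e. what d.getVar("DEPLOYDIR") returns:
#   "/deploy/custom"
```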
Daanct12 has joined #yocto
<thomas_34>
Okay, thank you - It's still a bit of a mystery to me how the variables actually get set in detail.
lexano has joined #yocto
Saur_Home85 has quit [Quit: Client closed]
Saur_Home85 has joined #yocto
thomas_24 has joined #yocto
thomas_34 has quit [Ping timeout: 250 seconds]
Saur_Home85 has quit [Quit: Client closed]
Saur_Home85 has joined #yocto
enok has quit [Quit: enok]
enok71 has joined #yocto
prabhakarlad has joined #yocto
enok71 is now known as enok
simonew has quit [Quit: Client closed]
simonew has joined #yocto
thomas_24 has quit [Quit: Client closed]
chep has joined #yocto
Fahad has quit [Quit: Client closed]
Guest61 has joined #yocto
<Guest61>
Hi, I got a project on hardknott. I want to update. As I see that Scarthgap is not out yet (and the layers I use take longer to catch up), is it better to (1) update to kirkstone and later to scarthgap, or (2) update to nanbield and later to scarthgap?
<Guest61>
goal is to avoid double work as much as possible
<JaMa>
Guest61: doing it in multiple steps is usually easier, and you can maintain a branch for each release, so whatever you fix for kirkstone will also be needed for nanbield and scarthgap; it won't be double work (except the testing)
<Guest61>
JaMa Ok thanks, so probably going through kirkstone makes more sense
<JaMa>
I'm maintaining a branch for each release (even when the intermediate releases between LTS are just build-tested from time to time or while "bisecting" some failure), and it's often useful to find out that something got broken e.g. between mickledore and langdale
prabhakarlad has quit [Quit: Client closed]
<Guest61>
My initial plan was to go for the unreleased scarthgap, but I see that NXP does not yet have a meta-qoriq for it
sev99 has joined #yocto
prabhakalad has quit [Ping timeout: 252 seconds]
Net147 has quit [Quit: Quit]
Net147 has joined #yocto
Daanct12 has quit [Quit: WeeChat 4.2.1]
Net147 has joined #yocto
Net147 has quit [Changing host]
thomas_34 has joined #yocto
<RP>
Guest61: FWIW scarthgap is pretty much ready for release now, it won't be changing much more
<ldywicki>
Hello, does anyone have a working reference of basic build which has qemu x86-64, efi/grub and systemd working? I started assembling this myself, but I think I've lost track of it and now I'm stuck in kernel panics due to /dev/vda mounts (`/dev/vda: Can't open blockdev`; `switch_root: can't execute '/sbin/init': No such file or directory`)
splatch has joined #yocto
<Guest61>
RP: My issue is that meta-qoriq does not yet have a scarthgap branch. But i could also try to work with their nanbield branch..
ldywicki has quit [Quit: Client closed]
<RP>
Guest61: right, I can't really help with that I'm afraid
<moto-timo>
Guest61: I would approach the meta-qoriq layer maintainers.
BCMM has joined #yocto
<Guest61>
thanks
Saur_Home85 has quit [Quit: Client closed]
Saur_Home85 has joined #yocto
goliath has quit [Quit: SIGSEGV]
Saur_Home85 has quit [Quit: Client closed]
Saur_Home85 has joined #yocto
vladest has quit [Ping timeout: 268 seconds]
afong_ has quit [Remote host closed the connection]
<simonew>
Hi, is the NVD situation on the sync agenda today?
flom84 has quit [Remote host closed the connection]
zpfvo has quit [Remote host closed the connection]
dankm has quit [Ping timeout: 255 seconds]
dankm has joined #yocto
npcomp has quit [Ping timeout: 252 seconds]
npcomp has joined #yocto
goliath has joined #yocto
<splatch>
hello :)
tangofoxtrot has quit [Remote host closed the connection]
<splatch>
coming back to ldywicki's earlier question: does anyone know of a publicly available project which assembles EFI and systemd onto qemu x86-64 without making it too complex? When I look for examples there are some, but the level of complexity exceeds my current knowledge (and need).
<splatch>
Looking e.g. at meta-mender I see quite complex stuff involved there; the same applies to eclipse-leda. Their builds pull in multiple classes, processors and include files, making it rather hard to find the core pieces of the config.
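For what it's worth, a minimal local.conf sketch for this on a plain poky/oe-core checkout might look like the following; it is an untested sketch, and the feature and .wks names are assumptions to verify against your release:

```bitbake
MACHINE = "qemux86-64"

# Use systemd as the init system (this sets the related DISTRO_FEATURES
# and VIRTUAL-RUNTIME_* variables for you):
INIT_MANAGER = "systemd"

# Boot via GRUB on EFI and produce a partitioned disk image:
MACHINE_FEATURES:append = " efi"
EFI_PROVIDER = "grub-efi"
IMAGE_FSTYPES:append = " wic"
WKS_FILE = "efi-bootdisk.wks.in"
```

Then something like `bitbake core-image-minimal` followed by `runqemu qemux86-64 wic ovmf` should boot it under an EFI firmware (ovmf needs to have been built as well).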
tangofoxtrot has joined #yocto
Vonter has quit [Ping timeout: 240 seconds]
<thomas_34>
I know, chances are low - But does anyone have an idea why the kernel artifacts get deployed into "e3g-e3mc/default-image"?
<thomas_34>
DEBUG: Staging files from /home/dock/oe/arago/build/tmp/arago-tmp-default-glibc/work/e3g_e3mc-oe-linux/linux-ti-staging/6.1.y+gitAUTOINC+637a6a8103-b.arago5_tisdk_1_edgeai_0_edgeai_7/deploy-linux-ti-staging to /home/dock/oe/arago/build/deploy-ti/e3g-e3mc/default-image
<thomas_34>
If I check with bitbake -e the environment, DEPLOY_DIR_IMAGE is set like this: DEPLOY_DIR_IMAGE="/home/dock/oe/arago/build/deploy-ti/e3g-e3mc/e3g-dev-e3mc-image"
<thomas_34>
I thought, do_deploy will put the files into $DEPLOY_DIR_IMAGE. Is that assumption wrong?
prabhakalad has joined #yocto
florian has joined #yocto
tgamblin has quit [Remote host closed the connection]
tgamblin has joined #yocto
<denix>
thomas_34: looks like e3g-e3mc is your MACHINE and e3g-dev-e3mc-image is your image. But check the environment to see where "default-image" comes from. I see you are using tisdk and edgeai, and for the record, TI uses "tisdk-default-image" naming, not "default-image"...
<thomas_34>
denix, thank you for your attention. I have this configuration: 1. e3g-e3mc MACHINE config sets DEPLOY_DIR_IMAGE to "e3g-e3mc/default-image". 2. Image config (e3g-dev-e3mc-image) overrides DEPLOY_DIR_IMAGE to "e3g-e3mc/e3g-dev-e3mc-image".
<thomas_34>
I assumed, when I'm building e3g-dev-e3mc-image with the MACHINE e3g-e3mc, DEPLOY_DIR_IMAGE gets set to "e3g-e3mc/e3g-dev-e3mc-image" and is also used for the kernel which is part of that image.
<thomas_34>
If that assumption is wrong, then it makes no sense to search somewhere else for the error...
<denix>
thomas_34: you shouldn't be adjusting DEPLOY_DIR_IMAGE from the recipe - it's a global variable usually set in one of the conf files, e.g. bitbake.conf, layer.conf or in case of meta-ti, it also adjusts it in multiconfig config
BCMM has quit [Ping timeout: 264 seconds]
<thomas_34>
Okay, so machine configuration != image recipes in that case?
<denix>
thomas_34: configurations are in .conf files, recipes are in .bb files
nerdboy has quit [Ping timeout: 272 seconds]
<thomas_34>
Okay that's bad.... I wanted to deploy the necessary boot-files (u-boot, kernel, etc..) for a particular image into a defined path
<thomas_34>
So, what I've done is this:
<thomas_34>
1. machine.conf
<thomas_34>
DEPLOY_PATH ?= "default-image"
<denix>
thomas_34: in very simple terms - variables defined in .conf files are global and visible to all recipes, but changing variables in any .bb recipe only has effect on that recipe, nothing else
<thomas_34>
DEPLOY_DIR_IMAGE = "${DEPLOY_PATH}"
<thomas_34>
2. image.bb
<thomas_34>
DEPLOY_PATH = "specific-image-path"
<thomas_34>
3. image2.bb
<thomas_34>
DEPLOY_PATH = "specific-image2-path"
<thomas_34>
denix, okay. So my example above does not work?
<denix>
thomas_34: so, those images will deploy their stuff where you want them, but any other recipes like the kernel won't see the change, will still use "default-image"
<thomas_34>
When I tested this with MACHINE=machine bitbake -e image2, DEPLOY_PATH seemed to be set correctly to specific-image2-path
<thomas_34>
denix, okay..... damn it. Is the reason that bitbake starts a "new bitbake process" when it builds the kernel for the image, and therefore the DEPLOY_PATH variable from image2.bb is "not visible" anymore?
<denix>
thomas_34: no, it's not about a separate bitbake process, it's about recipes having own local variable namespace, so to speak. or think about them as containers - they cannot affect each other
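A sketch of the scoping denix describes, reusing the hypothetical names from above: each recipe gets its own copy of the configuration-level datastore, so an assignment in image2.bb never reaches the kernel recipe.

```bitbake
# machine.conf -- configuration, global, seen by every recipe:
DEPLOY_PATH ?= "default-image"
DEPLOY_DIR_IMAGE = "${DEPLOY_DIR}/${MACHINE}/${DEPLOY_PATH}"

# image2.bb -- recipe, local to image2 only:
DEPLOY_PATH = "specific-image2-path"
# "bitbake -e image2" shows the new value, but the kernel, u-boot, etc.
# recipes still see the conf default and deploy to ".../default-image".
```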
nerdboy has joined #yocto
nerdboy has joined #yocto
nerdboy has quit [Changing host]
jmd has quit [Remote host closed the connection]
alperak has quit [Quit: Connection closed for inactivity]
mattsm3 has joined #yocto
goliath has quit [Quit: SIGSEGV]
mattsm has quit [Ping timeout: 246 seconds]
mattsm3 is now known as mattsm
jmd has joined #yocto
<thomas_34>
denix, does this also apply to $OVERRIDES? If I were to do something like this:
<thomas_34>
That's my last hope: that overrides are a "special" case for bitbake and would allow this
<denix>
again, changing OVERRIDES list in the recipe will be local to that recipe - kernel won't see those changes
mvlad has quit [Remote host closed the connection]
<denix>
BTW, where do you want the kernel artifacts to be deployed? it won't deploy in both of the locations
<thomas_34>
thank you very much denix for clarification. I hope your work at ti is fine :)
mattsm8 has joined #yocto
<thomas_34>
Well, I wanted to mimic the default behaviour of TI's arago, so that it deploys all boot artefacts into the directory where the image also gets deployed
<thomas_34>
Especially with the k3r5 multiconfig, which builds stuff for the R5 of TDA4
mattsm has quit [Ping timeout: 260 seconds]
mattsm8 is now known as mattsm
<denix>
I don't work at TI... and Arago does not deploy boot artifacts on per-image basis
florian has quit [Ping timeout: 240 seconds]
<thomas_34>
Oh sorry - I assumed so because I thought I had seen your nick a couple of times in their meta-repositories.
alperak has joined #yocto
ptsneves has joined #yocto
florian has joined #yocto
vladest has quit [Remote host closed the connection]
Hazza has quit [Quit: Haxxa flies away.]
Haxxa has joined #yocto
vladest has joined #yocto
vladest has quit [Client Quit]
simonew has joined #yocto
roussinm has joined #yocto
jmd has quit [Remote host closed the connection]
vladest has joined #yocto
thomas_34 has quit [Quit: Client closed]
goliath has joined #yocto
florian has quit [Ping timeout: 240 seconds]
rfuentess has quit [Remote host closed the connection]