mattes-bru has quit [Remote host closed the connection]
mattes-bru has joined #yocto
mattes-bru has quit [Remote host closed the connection]
mattes-bru has joined #yocto
invalidopcode has quit [Quit: Ping timeout (120 seconds)]
invalidopcode has joined #yocto
mckoan|away is now known as mckoan
<mckoan>
good morning
tomzy_0 has joined #yocto
zpfvo has joined #yocto
rfuentess has joined #yocto
goliath has quit [Quit: SIGSEGV]
manuel1985 has joined #yocto
mattes-b_ has joined #yocto
thomasd13 has joined #yocto
mckoan_ has joined #yocto
mattes-bru has quit [Ping timeout: 264 seconds]
mckoan has quit [Ping timeout: 248 seconds]
zpfvo has quit [Quit: Leaving.]
<qschulz>
o/
mckoan_ has quit [Ping timeout: 256 seconds]
leon-anavi has joined #yocto
rob_w_ has quit [Quit: Leaving]
zpfvo has joined #yocto
florian has joined #yocto
<LetoThe2nd>
yo dudX
<tomzy_0>
Hello
goliath has joined #yocto
<RP>
morning!
prabhakarlad has joined #yocto
mckoan has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<JaMa>
morning
seninha has joined #yocto
seninha has quit [Remote host closed the connection]
seninha has joined #yocto
ptsneves has joined #yocto
ptsneves has quit [Ping timeout: 260 seconds]
frieder has joined #yocto
pbergin has joined #yocto
haroon-m[m] has joined #yocto
<thomasd13>
Do I generate the poky SDK with bitbake core-image-minimal-sdk?
Kleist has joined #yocto
Kleist has quit [Client Quit]
nemik_ has quit [Ping timeout: 268 seconds]
<thomasd13>
ahhh. I do <image> -c populate_sdk. TI workflow spoiled me...
nemik_ has joined #yocto
nemik_ has quit [Ping timeout: 260 seconds]
nemik_ has joined #yocto
amsobr has joined #yocto
d-fens has quit [Read error: Connection reset by peer]
starblue has quit [Ping timeout: 256 seconds]
starblue has joined #yocto
<rburton>
thomasd13: yeah, ideally you build a sdk for a specific image. you *can* build a dedicated SDK recipe but there's no point when every image can build its own SDK.
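For reference, the command under discussion looks like this (the image name assumes the stock core-image-minimal; the generated SDK installer typically lands under tmp/deploy/sdk/):

```
bitbake core-image-minimal -c populate_sdk
```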
florian_kc has joined #yocto
yann has quit [Ping timeout: 256 seconds]
jmk1 has joined #yocto
jmk1 has left #yocto [#yocto]
jmk1 has joined #yocto
yann has joined #yocto
Frank33 has joined #yocto
amitk_ has joined #yocto
seninha has quit [Ping timeout: 260 seconds]
<phako[m]>
what is the easiest way to build an image for virtualbox (i.e. skipping qemu-native for example) - reason: I need to hook something with connman into a rather weird virtual network I have set up based on virtualbox...
<rburton>
you need qemu-native to build some recipes
<rburton>
you can build a virtualbox image by setting the image fstype
<phako[m]>
ah. ok, then just adding wic.vdi is the most minimal thing I can do
<rburton>
yeah
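A minimal sketch of that, e.g. in local.conf (assuming the machine already has a wks file so wic images can be built):

```
IMAGE_FSTYPES += "wic.vdi"
```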
<phako[m]>
right
<rburton>
you could make a new machine which doesn't need qemu-system-native, but that's only really useful if you want to tune the compiler flags or do further tweaks
<rburton>
if you also had some virtualbox kernel modules or userspace tools that could be the right thing to do
<rburton>
hm, i wonder if we should package the tools into qemu-native and not qemu-system-native.
<phako[m]>
actually, now that I have built it once, I probably don't have to care anymore anyway
<kanavin>
" I'm sure the intentions were good, but it only made me anxious about possible misuses of this technology."
<rburton>
absolutely
<rburton>
its equally impressive and terrifying
<rburton>
i asked it to produce a haiku arguing that Alien is a christmas film
<rburton>
never seen it sit for 20 seconds before writing, but it did produce one
jmk1 has left #yocto [#yocto]
<kanavin>
rburton, it's not just text. AI can nowadays isolate individual instruments from a stereo track. Which is how the latest Revolver reissue was re-mixed.
<kanavin>
rburton, and technology to age or de-age actors convincingly is coming soon as well.
<rburton>
already has, disney iirc had a demo last week
<kanavin>
not yet in an actual movie, but soon :)
<kanavin>
and I'm definitely going to that abba show :)
<kanavin>
that article made some people very, very angry. Those poor souls who believe in the existence of a 'soul'.
xmn has joined #yocto
<matthias__>
Hi everyone. I have a question regarding bitbake vs devtool. When running devtool build <recipe> the cache is loaded twice: first time very fast and then again, but slow. My output reads as:
<matthias__>
"Loading cache: 100% (This is the fast one - just a second)
<matthias__>
Loaded 3591 entries from dependency cache.
<matthias__>
Parsing recipes: 100%
<matthias__>
Parsing of 2218 .bb files complete ...
<matthias__>
Loading cache 100% (This one takes 16seconds)
<matthias__>
Loaded 3591 entries from dependency cache."
<matthias__>
Any idea why I have two passes of the loading cache step?
<matthias__>
It does not happen if I build the exact same recipe with bitbake.
<kanavin>
matthias__, sadly we do not have a devtool maintainer, and it's not likely someone can give a quick answer but if you can investigate and propose a fix that would certainly be most welcome.
<kanavin>
devtool should not be doing things that subvert parse times
<matthias__>
Have you ever heard of that behavior before (i.e. is this maybe a regression?)
<kanavin>
matthias__, I never run 'devtool build', rather always bitbake directly
<matthias__>
Normally me too, but I am now on the ext SDK and there seems to be no way to run bitbake standalone.
<kanavin>
matthias__, I use other devtool commands a lot (like modify, finish etc.) and didn't notice it to the point it would really get in the way and become annoying
<kanavin>
matthias__, I can only suspect devtool modifies the build in a way that forces bitbake into full reparse
<kanavin>
e.g. something goes into global config
<matthias__>
nah - i checked this.
<matthias__>
And even if you are not using the SDK: try a build of a recipe that you have "checked out" with devtool modify, once with devtool build and once with pure bitbake. For me, I see the annoying behavior described above. Can you maybe check if you can reproduce?
<kanavin>
even if I can, I'm not going to look into it now
<kanavin>
you can clone plain poky master, and try it there, and if it's clearly visible, then there is no need for someone else to see it
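A reproduction sketch along those lines (recipe name hypothetical; the point is comparing the parse/cache-load phases of the two commands):

```
devtool modify busybox
time devtool build busybox
time bitbake busybox
```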
<matthias__>
ok. i will try that.
seninha has joined #yocto
thomasd13 has quit [Ping timeout: 256 seconds]
<phako[m]>
interesting. I cannot get that image to let me log
ArgaKhan___ has quit [Remote host closed the connection]
ArgaKhan___ has joined #yocto
vvn has joined #yocto
zhmylove has joined #yocto
<paulbarker>
I'm in a place where I could benefit from using multiconfigs, but I've always avoided them as I can't see how to use them cleanly
<barath>
I've tried figuring out the various use-cases of multiconfigs. is there a general advantage to building multiple images "concurrently" or does it mostly make sense when building an image which depends on another image?
<d-fens>
how can i see what overrides the IMAGE_INSTALL from core-image.bbclass in my image?
<paulbarker>
barath: In my case I want to be able to build for SD card, SPI flash or both. SPI flash images have a different u-boot config, stripped down kernel config and a different partition layout
<paulbarker>
I'm trying to avoid defining an entirely separate machine so that I can maximise reuse of build artifacts
pabigot has quit [Client Quit]
<qschulz>
paulbarker: how do you do the different defconfig pick without a machine configuration file?
<qschulz>
because a distro is even worse isn't it
pabigot has joined #yocto
<barath>
right... so reusing artifacts. But aren't those reused as long as one makes sure that they can be, by making sure they're arch compatible? If that makes sense.
<qschulz>
barath: the issue here is how to build two u-boot/kernel recipes I believe
<paulbarker>
barath: There's several MACHINE-specific packages that shouldn't need to change between the SD card and SPI flash cases
<paulbarker>
qschulz: I'm still in the process of figuring out what I can do with multiconfigs here
<paulbarker>
Even if I do need a separate MACHINE, I'd want multiconfigs so I can build both at once and so I can include the SPI flash image into an installer SD card image in the future
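A minimal sketch of such a setup (file and image names hypothetical): per-variant settings go in conf/multiconfig/*.conf files, which are enabled from local.conf:

```
# local.conf
BBMULTICONFIG = "sd spi"

# conf/multiconfig/sd.conf and conf/multiconfig/spi.conf then carry the
# variant-specific settings (e.g. a different kernel/u-boot configuration).
```

Both variants can then be built in one invocation with `bitbake mc:sd:my-image mc:spi:my-spi-image`.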
<qschulz>
paulbarker: aaaaa true, since multiconfigs are configuration files, maybe it'd be possible to have a UBOOT_MACHINE in there... mmmm
<qschulz>
paulbarker: I'm actually wondering if you can't have an image build two u-boot/kernel recipes? I guess not because of the virtual package?
<paulbarker>
qschulz: It may not work... I need to think what happens with sstate if I build two kernels for the same MACHINE but with different defconfigs
<qschulz>
but having a second recipe without the PROVIDES?? mmm, hacks hacks hacks :)
<barath>
Hm, I get the case of one image including another, but I don't immediately get the need for multiconfig. Doesn't the sstate cache work the same either way?
<paulbarker>
I'm going to experiment a little and see which path minimises hacks
<qschulz>
paulbarker: also, if you figure out a way to only have one image built but with those two kernel and bootloader configurations, then it's just a matter of adding a "build multiple images with wic" feature
<qschulz>
(for the partition layout)
<paulbarker>
qschulz: I need to extend wic anyway
<qschulz>
don't be fooled by its name: WKS_FILES is not what you're after
<qschulz>
(I haven't followed closely, but it wasn't supported months ago when I looked at it)
<qschulz>
(or was it already years ago? time flies)
<qschulz>
paulbarker: let us know how it goes!
<paulbarker>
qschulz: The SPI flash image will have less in the rootfs, so at the least I have different image recipes
<paulbarker>
And wic needs extending to support writing a "bare" image that doesn't start with an msdos/gpt header
<paulbarker>
By "header" I mean partition table
<qschulz>
paulbarker: makes sense, but like barath not entirely sure multiconfig is beneficial here?
<qschulz>
paulbarker: I guess you want to avoid this partition table on the SPI flash to save some precious space?
<paulbarker>
qschulz: No, for SPI flash the u-boot SPL needs to be written to sector 0 so there's no space for the partition table
<paulbarker>
The partition layout can be set by the device tree
<RP>
paulbarker: the annoying thing with multiconfig in that scenario will be the parse time
<qschulz>
paulbarker: ah true, I forgot about the load offset by the BOOTROM for U-Boot SPL
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
<barath>
hm
<barath>
but let's say I want something similar (?) like having two identical images, except one with a regular kernel and one with a "debug" kernel with a bunch of debug-related config fragments. would it make sense to use multiconfig then? in my mind, I could build sequentially and the cache from the first image would/could be reused in the second?
<barath>
I must be missing some fundamental thing about multiconfigs so far
amsobr has quit [Quit: Client closed]
<barath>
the most obvious use-case seems if your aim is to build one image which depends directly on another image, such that you need to build both anyways. it seems to me like all other cases should be equivalent to building images sequentially
<paulbarker>
barath: With multiconfigs you should also see parallelisation when you build both images, so it should be quicker overall than a sequential build
Tyaku has joined #yocto
<barath>
Mhm, that's worth testing either way
<qschulz>
paulbarker: I assume bitbake recipe-1 recipe-2 should do this parallelism just fine?
<paulbarker>
qschulz: Not if they're different MACHINEs
<paulbarker>
Or other conf file changes
<barath>
right, so the thinking is that recipes which can't be re-used across images can grab idle threads when doing multiconfig builds
<barath>
whereas when building sequentially/in general, there might be bottlenecks where threads are idle
invalidopcode has quit [Remote host closed the connection]
invalidopcode has joined #yocto
nemik_ has joined #yocto
falk0n[m] has joined #yocto
prabhakarlad has quit [Ping timeout: 260 seconds]
matthias__ has quit [Quit: Client closed]
zpfvo has quit [Quit: Leaving.]
<mischief>
when i tried multiconfig, parsing was extremely slow
florian has quit [Quit: Ex-Chat]
florian_kc has quit [Ping timeout: 252 seconds]
prabhakarlad has joined #yocto
frieder has quit [Remote host closed the connection]
rfuentess has quit [Remote host closed the connection]
<RP>
mischief: it will add an extra parsing time for each config that is added. Not much we can do about that
<JPEW>
mischief: It should be roughly linear with each multiconfig you add
<JPEW>
IIRC
mckoan is now known as mckoan|away
gsalazar has quit [Ping timeout: 252 seconds]
leon-anavi has quit [Remote host closed the connection]
goliath has joined #yocto
alessioigor has quit [Quit: alessioigor]
manuel1985 has quit [Ping timeout: 265 seconds]
manuel1985 has joined #yocto
rstreif has joined #yocto
gsalazar has joined #yocto
gsalazar_ has joined #yocto
gsalazar_ has quit [Client Quit]
Frank33 has quit [Ping timeout: 260 seconds]
<mischief>
it's a lot of time when there's ~10 configs in the multiconfig :-)
florian_kc has joined #yocto
Haxxa has quit [Quit: Haxxa flies away.]
<JPEW>
mischief: Ya, that's a lot. Why so many?
<mischief>
because that's how many models of hardware we have, and thus $MACHINEs
<JPEW>
And you need all of them at once?
Haxxa has joined #yocto
<mischief>
sometimes, yes
<mischief>
we don't use multiconfig right now though, instead we just launch parallel bitbakes
<JPEW>
mischief: Fair enough. If you can come up with a way for users to reasonably set BBMULTICONFIG to select only what they need, that will help. I suspect for the case where you need everything though, even with the long parse times it will be faster than parallel bitbake
<JPEW>
(or at a minimum, require less wrapping script if that's how you are doing it)
<JPEW>
The parsing process is highly parallel, so it should be able to peg your CPUs while parsing
manuel1985 has quit [Ping timeout: 246 seconds]
matthias__ has joined #yocto
<matthias__>
I'd like to add a SAS token to the SSTATE_MIRRORS URL. In local.conf I have SSTATE_MIRRORS = "file://.* az://localhost:8000/sstate/PATH" and AZ_SAS="HELLO". However, the AZ_SAS variable is not picked up. I have narrowed it down to lib/bb/fetch2/az.py. I added console logs like so:
<matthias__>
    az_sas = d.getVar('AZ_SAS')
    if az_sas and az_sas not in ud.url:
        ud.url += az_sas
    else:
        bb.plain("AZ_SAS is not defined")
    bb.plain("trying with: " + ud.url)
<matthias__>
In the console I see "AZ_SAS is not defined". Can anybody give me some pointer why the variable is not there?
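A simplified, self-contained model of what that fetcher snippet does (the class and values here are hypothetical stand-ins; the real datastore is bitbake's). It only illustrates why a token that never reaches the fetcher's datastore leaves the URL untouched:

```python
class FakeDataStore:
    """Stand-in for bitbake's datastore (illustration only)."""
    def __init__(self, variables):
        self._vars = variables

    def getVar(self, name):
        # Returns None when the variable was never set in this datastore,
        # which is exactly the "AZ_SAS is not defined" branch above.
        return self._vars.get(name)


def apply_sas(url, d):
    # Mirrors the az.py logic: append the SAS token only if it is
    # present in the datastore and not already part of the URL.
    az_sas = d.getVar('AZ_SAS')
    if az_sas and az_sas not in url:
        return url + az_sas
    return url


print(apply_sas('az://localhost:8000/sstate/x', FakeDataStore({'AZ_SAS': '?sig=abc'})))
# → az://localhost:8000/sstate/x?sig=abc
print(apply_sas('az://localhost:8000/sstate/x', FakeDataStore({})))
# → az://localhost:8000/sstate/x  (token missing, URL untouched)
```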
gsalazar has quit [Remote host closed the connection]
gsalazar has joined #yocto
manuel1985 has joined #yocto
prabhakarlad has quit [Ping timeout: 260 seconds]
gsalazar has quit [Remote host closed the connection]
gsalazar has joined #yocto
manuel1985 has quit [Ping timeout: 264 seconds]
malsyned has joined #yocto
<malsyned>
I'm trying to get systemd-timesyncd to use a specific fallback time in the event that it can't get a correct time from the RTC.
<malsyned>
systemd-timesyncd(8) says it gets this value from /var/lib/systemd/timesync/clock and if that doesn't exist, "At the minimum, it will be set to the systemd build date"
<malsyned>
I dug through systemd's build process, and found that it will get that build date from either the timestamp of the NEWS file or, if it's set, the environment variable SOURCE_DATE_EPOCH
invalidopcode has quit [Remote host closed the connection]
invalidopcode has joined #yocto
<malsyned>
So, great, that means this is already accessible through Yocto's reproducible-build infrastructure. But the catch is, that variable is listed in BB_BASEHASH_IGNORE_VARS, meaning that just setting SOURCE_DATE_EPOCH from a bbappend or SOURCE_DATE_EPOCH:pn-systemd from a conf file doesn't actually cause the recipe to be re-run when I change the SOURCE_BUILD_EPOCH.
<malsyned>
I don't think I want to remove SOURCE_BUILD_EPOCH from the BB_BASEHASH_IGNORE_VARS globally, but removing it from a systemd_%.bbappend appears to have no effect (even though `bitbake -e systemd` shows that my :remove is being processed correctly)
<malsyned>
Anybody have any advice on how to get Yocto to do what I want it to?
<JPEW>
malsyned: Maybe something like do_compile:prepend() { export SOURCE_DATE_EPOCH=123 } would work?
<malsyned>
You think that would override the one that comes from Yocto's reproducible build infrastructure?
<JPEW>
It might?
jmk1 has joined #yocto
<malsyned>
I'll give it a try. My current hack is PR .= ".1.${SOURCE_DATE_EPOCH}" but I don't love it.
<JPEW>
malsyned: Ya you probably don't want that
<malsyned>
Oh I sure don't.
<malsyned>
I think it would be do_configure, not do_compile, though.
<JPEW>
malsyned: Ya, I wasn't sure
<malsyned>
I need meson to pick it up and dump it into config.h I believe
sakoman has joined #yocto
jclsn has quit [Quit: WeeChat 3.7.1]
jclsn has joined #yocto
jclsn has quit [Client Quit]
jclsn has joined #yocto
jclsn has quit [Client Quit]
<malsyned>
JPEW you're on to something, but it doesn't work quite as you've written. The systemd recipe generates a -Dtime-epoch= from the bitbake SOURCE_DATE_EPOCH variable, causing meson to ignore the SOURCE_DATE_EPOCH. But it does cause the recipe to rebuild, so I think I can figure something out that will work.
jclsn has joined #yocto
<JPEW>
malsyned: Ah, nice
jclsn has quit [Client Quit]
jclsn has joined #yocto
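Given that the recipe derives -Dtime-epoch= from SOURCE_DATE_EPOCH, another option to try (an untested sketch; the epoch value here is hypothetical) is overriding the meson option itself from a systemd_%.bbappend:

```
EXTRA_OEMESON:append = " -Dtime-epoch=1650000000"
```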
<hsv>
Is there a way to find out which version of yocto is on a target?
<malsyned>
JPEW this appears to be working, you see any pitfalls to it that I am missing?
<malsyned>
Anybody know why Yocto downloads a completely fresh Linux git repository every time I change SRCREV in my recipe? Seems to me it should be possible to reuse the one already downloaded and just fetch the few new commits.
<malsyned>
It's adding like 30 minutes to a build that would otherwise take just a couple.
kscherer has quit [Quit: Konversation terminated!]
matthias__ has quit [Quit: Client closed]
<JPEW>
malsyned: That SOURCE_DATE_EPOCH seems good enough I think; you could also maybe just set the global SDE to the value you want :)
<JPEW>
But, ya, that wouldn't cause systemd to rebuild