ndec changed the topic of #yocto to: "Welcome to the Yocto Project | Learn more: https://www.yoctoproject.org | Join us or Speak at Yocto Project Summit (2022.11) Nov 29-Dec 1, more: https://yoctoproject.org/summit | Join the community: https://www.yoctoproject.org/community | IRC logs available at https://www.yoctoproject.org/irc/ | Having difficulty on the list or with someone on the list, contact YP community mgr ndec"
mattes-bru has joined #yocto
florian has quit [Ping timeout: 260 seconds]
kpo has quit [Remote host closed the connection]
kpo has joined #yocto
amsobr has joined #yocto
yann has joined #yocto
amsobr is now known as aoliveira
aoliveira is now known as to
to is now known as amsobr
sakoman has quit [Quit: Leaving.]
mattes-bru has quit [Ping timeout: 256 seconds]
goliath has quit [Quit: SIGSEGV]
GNUmoon has quit [Ping timeout: 255 seconds]
GNUmoon has joined #yocto
nemik_ has quit [Ping timeout: 260 seconds]
nemik_ has joined #yocto
nemik_ has quit [Ping timeout: 268 seconds]
nemik_ has joined #yocto
nemik_ has quit [Ping timeout: 260 seconds]
nemik_ has joined #yocto
nemik_ has quit [Ping timeout: 268 seconds]
nemik_ has joined #yocto
amsobr has quit [Quit: Client closed]
Tokamak__ has joined #yocto
Tokamak_ has quit [Ping timeout: 255 seconds]
Tokamak__ has quit [Client Quit]
Tokamak_ has joined #yocto
starblue has quit [Ping timeout: 248 seconds]
starblue has joined #yocto
mattes-bru has joined #yocto
vvn has quit [Quit: WeeChat 3.7.1]
sakoman has joined #yocto
GNUmoon has quit [Ping timeout: 255 seconds]
GNUmoon has joined #yocto
Estrella has joined #yocto
jclsn has quit [Ping timeout: 256 seconds]
jclsn has joined #yocto
money_ has joined #yocto
money_ has quit [Client Quit]
seninha has quit [Remote host closed the connection]
nemik_ has quit [Ping timeout: 268 seconds]
mattes-bru has quit [Ping timeout: 264 seconds]
xmn has quit [Ping timeout: 256 seconds]
kpo has quit [Read error: Connection reset by peer]
nemik_ has joined #yocto
kpo has joined #yocto
mattes-bru has joined #yocto
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
mattes-bru has quit [Ping timeout: 246 seconds]
Tokamak_ has quit [Ping timeout: 260 seconds]
argonautx[m] has joined #yocto
Tokamak_ has joined #yocto
PhoenixMage has quit [Ping timeout: 256 seconds]
PhoenixMage has joined #yocto
Tokamak_ has quit [Quit: Tokamak_]
PhoenixMage has quit [Ping timeout: 252 seconds]
PhoenixMage has joined #yocto
amitk has joined #yocto
sakoman has quit [Quit: Leaving.]
Wouter01006 has joined #yocto
Wouter0100 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter01006 is now known as Wouter0100
thomasd13 has joined #yocto
mattes-bru has joined #yocto
thomasd13 has quit [Quit: Leaving]
olani_ has quit [Ping timeout: 268 seconds]
rstreif has quit [Ping timeout: 255 seconds]
mattes-bru has quit [Ping timeout: 252 seconds]
Wouter0100 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter01006 has joined #yocto
camus has joined #yocto
goliath has joined #yocto
rob_w_ has joined #yocto
mvlad has joined #yocto
alessioigor has joined #yocto
mattes-bru has joined #yocto
mattes-bru has quit [Remote host closed the connection]
mattes-bru has joined #yocto
mattes-bru has quit [Remote host closed the connection]
mattes-bru has joined #yocto
invalidopcode has quit [Quit: Ping timeout (120 seconds)]
invalidopcode has joined #yocto
mckoan|away is now known as mckoan
<mckoan> good morning
tomzy_0 has joined #yocto
zpfvo has joined #yocto
rfuentess has joined #yocto
goliath has quit [Quit: SIGSEGV]
manuel1985 has joined #yocto
mattes-b_ has joined #yocto
thomasd13 has joined #yocto
mckoan_ has joined #yocto
mattes-bru has quit [Ping timeout: 264 seconds]
mckoan has quit [Ping timeout: 248 seconds]
zpfvo has quit [Quit: Leaving.]
<qschulz> o/
mckoan_ has quit [Ping timeout: 256 seconds]
leon-anavi has joined #yocto
rob_w_ has quit [Quit: Leaving]
zpfvo has joined #yocto
florian has joined #yocto
<LetoThe2nd> yo dudX
<tomzy_0> Hello
goliath has joined #yocto
<RP> morning!
prabhakarlad has joined #yocto
mckoan has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<JaMa> morning
seninha has joined #yocto
seninha has quit [Remote host closed the connection]
seninha has joined #yocto
ptsneves has joined #yocto
ptsneves has quit [Ping timeout: 260 seconds]
frieder has joined #yocto
pbergin has joined #yocto
haroon-m[m] has joined #yocto
<thomasd13> Do I generate the poky SDK with bitbake core-image-minimal-sdk?
Kleist has joined #yocto
Kleist has quit [Client Quit]
nemik_ has quit [Ping timeout: 268 seconds]
<thomasd13> ahhh. I do <image> -c populate_sdk. TI workflow spoiled me...
nemik_ has joined #yocto
nemik_ has quit [Ping timeout: 260 seconds]
nemik_ has joined #yocto
amsobr has joined #yocto
d-fens has quit [Read error: Connection reset by peer]
starblue has quit [Ping timeout: 256 seconds]
starblue has joined #yocto
<rburton> thomasd13: yeah, ideally you build an SDK for a specific image. you *can* build a dedicated SDK recipe but there's no point when every image can build its own SDK.
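For reference, a minimal sketch of the image-based SDK workflow described above, using core-image-minimal purely as an example image:

    # build the standard SDK installer for a given image
    bitbake core-image-minimal -c populate_sdk
    # the self-extracting installer ends up under tmp/deploy/sdk/ in the build directory

The same -c populate_sdk task works against any image recipe, which is why a dedicated SDK recipe is rarely needed.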
florian_kc has joined #yocto
yann has quit [Ping timeout: 256 seconds]
jmk1 has joined #yocto
jmk1 has left #yocto [#yocto]
jmk1 has joined #yocto
yann has joined #yocto
Frank33 has joined #yocto
amitk_ has joined #yocto
seninha has quit [Ping timeout: 260 seconds]
<phako[m]> what is the easiest way to build an image for virtualbox (i.e. skipping qemu-native for example) - reason: I need to hook something with connman into a rather weird virtual network I have set up based on virtualbox...
<rburton> you need qemu-native to build some recipes
<rburton> you can build a virtualbox image by setting the image fstype
<phako[m]> ah. ok, then just adding wic.vdi is the most minimal thing I can do
<rburton> yeah
<phako[m]> right
<rburton> you could make a new machine which doesn't need qemu-system-native, but that's only really useful if you want to tune the compiler flags or do further tweaks
<rburton> if you also had some virtualbox kernel modules or userspace tools that could be the right thing to do
<rburton> hm, i wonder if we should package the tools into qemu-native and not qemu-system-native.
<phako[m]> actually, now that I have built it once, I probably don't have to care anymore anyway
<rburton> yeah exactly
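A minimal sketch of the fstype approach rburton describes, assuming the machine already produces a wic image; only the conversion suffix matters here:

    # local.conf: additionally emit a VirtualBox VDI converted from the wic image
    IMAGE_FSTYPES:append = " wic.vdi"

The resulting .wic.vdi lands next to the other artifacts in tmp/deploy/images/<machine>/.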
<kanavin> rburton, I had a rather different reaction on second thought https://www.linkedin.com/feed/update/urn:li:activity:7005453205488685057/
<kanavin> " I'm sure the intentions were good, but it only made me anxious about possible misuses of this technology."
<rburton> absolutely
<rburton> it's equally impressive and terrifying
<rburton> i asked it to produce a haiku arguing that Alien is a christmas film
<rburton> never seen it sit for 20 seconds before writing, but it did produce one
jmk1 has left #yocto [#yocto]
<kanavin> rburton, it's not just text. AI can nowadays isolate individual instruments from a stereo track. Which is how the latest Revolver reissue was re-mixed.
<kanavin> rburton, and technology to age or de-age actors convincingly is coming soon as well.
<rburton> already has, disney iirc had a demo last week
<kanavin> not yet in an actual movie, but soon :)
<kanavin> and I'm definitely going to that abba show :)
pbergin has quit [Quit: Leaving]
<rburton> kanavin: you might like https://www.amazon.co.uk/Deep-Fakes-Infocalypse-What-Urgently/dp/1913183521/, i've been meaning to read that for a while now. though the book is now a whole two years out of date so it would be interesting to see how fast tech has moved.
dmoseley has quit [Quit: ZNC 1.8.2 - https://znc.in]
dmoseley has joined #yocto
Guest5713 has joined #yocto
Guest5713 has quit [Client Quit]
matthias__ has joined #yocto
<kanavin> rburton, I think https://www.bloomberg.com/opinion/articles/2017-05-03/the-mozart-in-the-machine covers the subject succinctly.
<kanavin> I read the whole book a couple years ago https://www.ynharari.com/book/21-lessons-book/
<kanavin> that article made some people very, very angry. Those poor souls who believe in the existence of a 'soul'.
xmn has joined #yocto
<matthias__> Hi everyone. I have a question regarding bitbake vs devtool. When running devtool build <recipe> the cache is loaded twice: the first time very fast, and then again, slowly. My output reads:
<matthias__> "Loading cache: 100% (This is the fast one - just a second)
<matthias__> Loaded 3591 entries from dependency cache.
<matthias__> Parsing recipes: 100%
<matthias__> Parsing of 2218 .bb files complete ...
<matthias__> Loading cache 100% (This one takes 16 seconds)
<matthias__> Loaded 3591 entries from dependency cache."
<matthias__> Any idea why i have two passes of the loading cache step?
<matthias__> It does not happen if I build the exact same recipe with bitbake.
<kanavin> matthias__, sadly we do not have a devtool maintainer, and it's not likely someone can give a quick answer but if you can investigate and propose a fix that would certainly be most welcome.
<kanavin> devtool should not be doing things that subvert parse times
<matthias__> Have you ever heard of that behavior before (i.e. is this maybe a regression?)
<kanavin> matthias__, I never run 'devtool build', rather always bitbake directly
<matthias__> Me too normally, but I am now on the ext SDK and it seems there is no way to run bitbake standalone.
<kanavin> matthias__, I use other devtool commands a lot (like modify, finish etc.) and didn't notice it to the point it would really get in the way and become annoying
<kanavin> matthias__, I can only suspect devtool modifies the build in a way that forces bitbake into full reparse
<kanavin> e.g. something goes into global config
<matthias__> nah - i checked this.
<matthias__> And even if you are not using the SDK: try a build of a recipe that you have "checked out" with devtool modify, once with devtool build and once with pure bitbake. For me I get the annoying behavior described above. Can you maybe check if you can reproduce?
<kanavin> even if I can, I'm not going to look into it now
<kanavin> you can clone plain poky master, and try it there, and if it's clearly visible, then there is no need for someone else to see it
<matthias__> ok. i will try that.
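A sketch of the comparison kanavin suggests, on a plain poky checkout; busybox is just an arbitrary example recipe:

    # put a recipe into a devtool workspace, then time both build paths
    devtool modify busybox
    time devtool build busybox
    time bitbake busybox
    # if the second "Loading cache" pass only shows up in the devtool run,
    # the extra reparse is being triggered by devtool itself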
seninha has joined #yocto
thomasd13 has quit [Ping timeout: 256 seconds]
<phako[m]> interesting. I cannot get that image to let me log in
ArgaKhan___ has quit [Remote host closed the connection]
ArgaKhan___ has joined #yocto
vvn has joined #yocto
zhmylove has joined #yocto
<paulbarker> I'm in a place where I could benefit from using multiconfigs, but I've always avoided them as I can't see how to use them cleanly
<paulbarker> https://docs.yoctoproject.org/dev/dev-manual/building.html#building-images-for-multiple-targets-using-multiple-configurations, re multiconfig conf files, says: "They must reside in the current Build Directory in a sub-directory of conf named multiconfig or within a layer’s conf directory under a directory named multiconfig."
<paulbarker> Is that still true?
<LetoThe2nd> paulbarker: i think so, what is your concern about that?
<paulbarker> Being able to put them in a layer's conf/multiconfig directory is good, though I'm a bit concerned about name clashes
<paulbarker> I guess prefixing the names would help
<LetoThe2nd> yup, for example
<paulbarker> Ah I see my confusion, dunfell & kirkstone have different documentation for this
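Going by the dev-manual text quoted above, a sketch of the layout with prefixed names to sidestep the clash concern (all names here are made up):

    # build/conf/local.conf
    BBMULTICONFIG = "acme-sd acme-spi"
    # the configuration files then live either in the build directory:
    #   build/conf/multiconfig/acme-sd.conf
    # or in a layer:
    #   meta-acme/conf/multiconfig/acme-spi.conf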
sakoman has joined #yocto
amitk_ has quit [Ping timeout: 260 seconds]
rsalveti has joined #yocto
kscherer has joined #yocto
camus has quit [Ping timeout: 264 seconds]
<paulbarker> Right, and the dunfell documentation doesn't match the bitbake behaviour in dunfell
<paulbarker> Is that worth a bug at this stage? Or shall we just leave dunfell docs as-is?
<qschulz> paulbarker: needs to be fixed, dunfell docs is still supported
<qschulz> s/supported/maintained/
kpo has quit [Read error: Connection reset by peer]
* paulbarker goes off to file a bug
<qschulz> thx!
<qschulz> paulbarker: I think michaelo should get a notification if filed in the docs section
d-fens has joined #yocto
kpo has joined #yocto
<qschulz> thx for filing it
<qschulz> a patch is welcome too :)
<qschulz> paulbarker: I assume cb35f75bfc98c9098f9af64b9dd040b25779da36 should be backported?
<qschulz> michaelo: ^
<paulbarker> qschulz: That looks right to me
Wouter01006 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter01006 has joined #yocto
pabigot has quit [Quit: Leaving.]
pabigot has joined #yocto
<barath> I've tried figuring out the various use-cases of multiconfigs. is there a general advantage to building multiple images "concurrently" or does it mostly make sense when building an image which depends on another image?
<d-fens> how can i see what overrides the IMAGE_INSTALL from core-image.bbclass in my image?
<LetoThe2nd> d-fens: bitbake-getvar -r your-image IMAGE_INSTALL
<d-fens> thx!
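For illustration, with core-image-minimal standing in for the image in question:

    # final value of IMAGE_INSTALL plus the history of files that set or appended it
    bitbake-getvar -r core-image-minimal IMAGE_INSTALL
    # add --value to print just the expanded value without the history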
<paulbarker> barath: In my case I want to be able to build for SD card, SPI flash or both. SPI flash images have a different u-boot config, stripped down kernel config and a different partition layout
<paulbarker> I'm trying to avoid defining an entirely separate machine so that I can maximise reuse of build artifacts
pabigot has quit [Client Quit]
<qschulz> paulbarker: how do you do the different defconfig pick without a machine configuration file?
<qschulz> because a distro is even worse isn't it
pabigot has joined #yocto
<barath> right... so reusing artifacts. but aren't those reused as long as one makes sure that they can be, by making sure they're arch compatible? if that makes sense
<qschulz> barath: the issue here is how to build two u-boot/kernel recipes I believe
<paulbarker> barath: There's several MACHINE-specific packages that shouldn't need to change between the SD card and SPI flash cases
<paulbarker> qschulz: I'm still in the process of figuring out what I can do with multiconfigs here
<paulbarker> Even if I do need a separate MACHINE, I'd want multiconfigs so I can build both at once and so I can include the SPI flash image into an installer SD card image in the future
<qschulz> paulbarker: aaaaa true, since multiconfigs are configuration files, maybe it'd be possible to have a UBOOT_MACHINE in there... mmmm
<qschulz> paulbarker: I'm actually wondering if you can't have an image build two u-boot/kernel recipes? I guess not because of the virtual package?
<paulbarker> qschulz: It may not work... I need to think what happens with sstate if I build two kernels for the same MACHINE but with different defconfigs
<qschulz> but having a second recipe without the PROVIDES?? mmm, hacks hacks hacks :)
<barath> Hm I get the case of one image including another, but I don't immediately get the need for multiconfig. Doesn't the sstate cache work the same either way?
<paulbarker> I'm going to experiment a little and see which path minimises hacks
<qschulz> paulbarker: also, if you figure out a way to have only one image built but with those two kernel and bootloader configurations, then it's just a matter of adding a "build multiple images with wic" feature
<qschulz> (for the partition layout)
<paulbarker> qschulz: I need to extend wic anyway
<qschulz> don't be fooled by its name WKS_FILES is not what you're after
<qschulz> (I haven't followed closely but it wasn't supported months ago when I looked at it)
<qschulz> (or was it already years ago? time flies)
<qschulz> paulbarker: let us know how it goes!
<paulbarker> qschulz: The SPI flash image will have less in the rootfs, so at the least I have different image recipes
<paulbarker> And wic needs extending to support writing a "bare" image that doesn't start with an msdos/gpt header
<paulbarker> By "header" I mean partition table
<qschulz> paulbarker: makes sense, but like barath I'm not entirely sure multiconfig is beneficial here?
<qschulz> paulbarker: I guess you want to avoid this partition table on the SPI flash to save some precious space?
<paulbarker> qschulz: No, for SPI flash the u-boot SPL needs to be written to sector 0 so there's no space for the partition table
<paulbarker> The partition layout can be set by the device tree
<RP> paulbarker: the annoying thing with multiconfig in that scenario will be the parse time
<qschulz> paulbarker: ah true, I forgot about the load offset by the BOOTROM for U-Boot SPL
GNUmoon has quit [Remote host closed the connection]
GNUmoon has joined #yocto
<barath> hm
<barath> but let's say I want something similar (?) like having two identical images, except one with a regular kernel and one with a "debug" kernel with a bunch of debug-related config fragments. would it make sense to use multiconfig then? in my mind, I could build sequentially and the cache from the first image would/could be reused in the second?
<barath> I must be missing some fundamental thing about multiconfigs so far
amsobr has quit [Quit: Client closed]
<barath> the most obvious use-case seems to be when your aim is to build one image which depends directly on another image, such that you need to build both anyway. it seems to me like all other cases should be equivalent to building images sequentially
<paulbarker> barath: With multiconfigs you should also see parallelisation when you build both images, so it should be quicker overall than a sequential build
Tyaku has joined #yocto
<barath> Mhm, that's worth testing either way
<qschulz> paulbarker: I assume bitbake recipe-1 recipe-2 should do this parallelism just fine?
<paulbarker> qschulz: Not if they're different MACHINEs
<paulbarker> Or other conf file changes
<barath> right, so the thinking is that recipes which can't be re-used across images can grab idle threads when doing multiconfig builds
<barath> whereas when building sequentially/in general, there might be bottlenecks where threads are idle
Tyaku has quit [Quit: Lost terminal]
goliath has quit [Quit: SIGSEGV]
<barath> yeah seems like that's what's described here https://youtu.be/YvtlGjWrL-M?t=2695
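The build command that goes with that, sketched with hypothetical multiconfig names; both targets run under one bitbake server, so their tasks can interleave:

    # requires BBMULTICONFIG = "acme-sd acme-spi" in the build configuration
    bitbake mc:acme-sd:core-image-minimal mc:acme-spi:core-image-minimal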
paulg has joined #yocto
Tokamak_ has joined #yocto
nemik_ has quit [Ping timeout: 264 seconds]
nemik_ has joined #yocto
nemik_ has quit [Ping timeout: 246 seconds]
zhmylove has quit [Quit: Leaving]
invalidopcode has quit [Remote host closed the connection]
invalidopcode has joined #yocto
nemik_ has joined #yocto
falk0n[m] has joined #yocto
prabhakarlad has quit [Ping timeout: 260 seconds]
matthias__ has quit [Quit: Client closed]
zpfvo has quit [Quit: Leaving.]
<mischief> when i tried multiconfig, parsing was extremely slow
florian has quit [Quit: Ex-Chat]
florian_kc has quit [Ping timeout: 252 seconds]
prabhakarlad has joined #yocto
frieder has quit [Remote host closed the connection]
rfuentess has quit [Remote host closed the connection]
<RP> mischief: it will add an extra parsing time for each config that is added. Not much we can do about that
<JPEW> mischief: It should be roughly linear with each multiconfig you add
<JPEW> IIRC
mckoan is now known as mckoan|away
gsalazar has quit [Ping timeout: 252 seconds]
leon-anavi has quit [Remote host closed the connection]
goliath has joined #yocto
alessioigor has quit [Quit: alessioigor]
manuel1985 has quit [Ping timeout: 265 seconds]
manuel1985 has joined #yocto
rstreif has joined #yocto
gsalazar has joined #yocto
gsalazar_ has joined #yocto
gsalazar_ has quit [Client Quit]
Frank33 has quit [Ping timeout: 260 seconds]
<mischief> it's a lot of time when there's ~10 configs in the multiconfig :-)
florian_kc has joined #yocto
Haxxa has quit [Quit: Haxxa flies away.]
<JPEW> mischief: Ya, that's a lot. Why so many?
<mischief> because that's how many models of hardware we have, and thus $MACHINEs
<JPEW> And you need all of them at once?
Haxxa has joined #yocto
<mischief> sometimes, yes
<mischief> we don't use multiconfig right now though, instead we just launch parallel bitbakes
<JPEW> mischief: Fair enough. If you can come up with a way for users to reasonably set BBMULTICONFIG to select only what they need, that will help. I suspect for the case where you need everything though, even with the long parse times it will be faster than parallel bitbake
<JPEW> (or at a minimum, require less wrapping script if that's how you are doing it)
<JPEW> The parsing process is highly parallel, so it should be able to peg your CPUs while parsing
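Following JPEW's suggestion, one low-tech way to keep parse time down is to let each developer enable only the configurations they need (placeholder names):

    # build/conf/local.conf -- enable just the hardware models being worked on today
    BBMULTICONFIG = "modelA modelC"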
manuel1985 has quit [Ping timeout: 246 seconds]
matthias__ has joined #yocto
<matthias__> I'd like to add a SAS token to the SSTATE_MIRRORS URL. In local.conf I have SSTATE_MIRRORS = "file://.* az://localhost:8000/sstate/PATH" and AZ_SAS="HELLO". However the AZ_SAS variable is not picked up. I have narrowed it down to lib/bb/fetch2/az.py. I added console logs like so:
<matthias__>         az_sas = d.getVar('AZ_SAS')
<matthias__>         if az_sas and az_sas not in ud.url:
<matthias__>             ud.url += az_sas
<matthias__>         else:
<matthias__>             bb.plain("AZ_SAS is not defined")
<matthias__>         bb.plain("trying with:"+ud.url)
<matthias__> In the console I see "AZ_SAS is not defined". Can anybody give me a pointer as to why the variable is not there?
gsalazar has quit [Remote host closed the connection]
gsalazar has joined #yocto
manuel1985 has joined #yocto
prabhakarlad has quit [Ping timeout: 260 seconds]
gsalazar has quit [Remote host closed the connection]
gsalazar has joined #yocto
manuel1985 has quit [Ping timeout: 264 seconds]
malsyned has joined #yocto
<malsyned> I'm trying to get systemd-timesyncd to use a specific fallback time in the event that it can't get a correct time from the RTC.
<malsyned> systemd-timesyncd(8) says it gets this value from /var/lib/systemd/timesync/clock and if that doesn't exist, "At the minimum, it will be set to the systemd build date"
<malsyned> I dug through systemd's build process, and found that it will get that build date from either the timestamp of the NEWS file or, if it's set, the environment variable SOURCE_DATE_EPOCH
invalidopcode has quit [Remote host closed the connection]
invalidopcode has joined #yocto
<malsyned> Go, great, that means this is already accessible through Yocto's reproducible build infrastructure. But the catch is, that variable is listed in BB_BASEHASH_IGNORE_VARS, meaning that just setting SOURCE_DATE_EPOCH from a bbappend or SOURCE_DATE_EPOCH:pn-systemd from a conf file doesn't actually
<malsyned> cause the recipe to be re-run when I change the SOURCE_BUILD_EPOCH.
<malsyned> I don't think I want to remove SOURCE_BUILD_EPOCH from the BB_BASEHASH_IGNORE_VARS globally, but removing it from a systemd_%.bbappend appears to have no effect (even though `bitbake -e systemd` shows that my :remove is being processed correctly)
<malsyned> Anybody have any advice on how to get Yocto to do what I want it to?
manuel1985 has joined #yocto
<malsyned> erm, s/SOURCE_BUILD_EPOCH/SOURCE_DATE_EPOCH/g
florian_kc is now known as florian
manuel1985 has quit [Ping timeout: 255 seconds]
sakoman has quit [Quit: Leaving.]
gsalazar has quit [Ping timeout: 256 seconds]
jmk1 has joined #yocto
jmk1 has left #yocto [#yocto]
<JPEW> malsyned: Maybe something like do_compile:prepend() { export SOURCE_DATE_EPOCH=123 } would work?
<malsyned> You think that would override the one that comes from Yocto's reproducible build infrastructure?
<JPEW> It might?
jmk1 has joined #yocto
<malsyned> I'll give it a try. My current hack is PR .= ".1.${SOURCE_DATE_EPOCH}" but I don't love it.
<JPEW> malsyned: Ya you probably don't want that
<malsyned> Oh I sure don't.
<malsyned> I think it would be do_configure, not do_compile, though.
<JPEW> malsyned: Ya, I wasn't sure
<malsyned> I need meson to pick it up and dump it into config.h I believe
sakoman has joined #yocto
jclsn has quit [Quit: WeeChat 3.7.1]
jclsn has joined #yocto
jclsn has quit [Client Quit]
jclsn has joined #yocto
jclsn has quit [Client Quit]
<malsyned> JPEW you're on to something, but it doesn't work quite as you've written. The systemd recipe generates a -Dtime-epoch= from the bitbake SOURCE_DATE_EPOCH variable, causing meson to ignore the SOURCE_DATE_EPOCH. But it does cause the recipe to rebuild, so I think I can figure something out that will
jclsn has joined #yocto
<JPEW> malsyned: Ah, nice
jclsn has quit [Client Quit]
jclsn has joined #yocto
<malsyned> work.
<hsv> Is there a way to find out which version of yocto is on a target?
<malsyned> JPEW this appears to be working, you see any pitfalls to it that I am missing?
<malsyned> SOURCE_DATE_EPOCH_OVERRIDE="1670025606"
<malsyned> SOURCE_DATE_EPOCH = "${SOURCE_DATE_EPOCH_OVERRIDE}"
<malsyned> do_configure[vardeps] += "SOURCE_DATE_EPOCH_OVERRIDE"
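Put together as a bbappend, malsyned's snippet would look roughly like this; the epoch value is just the example from the log:

    # systemd_%.bbappend -- pin the timestamp that ends up in meson's -Dtime-epoch=
    SOURCE_DATE_EPOCH_OVERRIDE = "1670025606"
    SOURCE_DATE_EPOCH = "${SOURCE_DATE_EPOCH_OVERRIDE}"
    # SOURCE_DATE_EPOCH itself sits in BB_BASEHASH_IGNORE_VARS, so hash the helper variable instead
    do_configure[vardeps] += "SOURCE_DATE_EPOCH_OVERRIDE"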
<malsyned> hsv cat /etc/os-release works for me
<hsv> No such file or directory
<hsv> it's a minimal image, maybe that's why (?)
jclsn has quit [Quit: WeeChat 3.7.1]
jclsn has joined #yocto
nemik_ has quit [Ping timeout: 252 seconds]
nemik_ has joined #yocto
<mischief> sigh. dealing with qualcomm code sucks
<malsyned> hsv I have a /etc/issue that also mentions kirkstone. Maybe you have that?
<mischief> ethernet driver works in our dunfell image but not kirkstone :(
<hsv> malsyned: thanks
<hsv> Poky (Yocto Project Reference Distro) 4.0.5 \n \l
nemik_ has quit [Ping timeout: 264 seconds]
nemik_ has joined #yocto
jclsn has quit [Quit: WeeChat 3.7.1]
jclsn has joined #yocto
mvlad has quit [Remote host closed the connection]
<malsyned> hsv looks like a recent kirkstone image https://wiki.yoctoproject.org/wiki/Releases
jclsn has quit [Client Quit]
jclsn has joined #yocto
rstreif has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
<malsyned> Anybody know why Yocto downloads a completely fresh Linux git repository every time I change SRCREV in my recipe? Seems to me it should be possible to reuse the one already downloaded and just fetch the few new commits.
<malsyned> It's adding like 30 minutes to a build that would otherwise take just a couple.
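For what it's worth, bitbake normally keeps a bare clone under ${DL_DIR}/git2/ and only fetches new commits on an SRCREV bump, so a full re-download usually points at the download cache not being persisted between builds; a sketch of the usual setup (the path is just an example):

    # local.conf -- keep downloads outside the build tree so clones survive between builds
    DL_DIR = "/srv/yocto/downloads"
    # optional: also pack clones into tarballs that can feed a shared PREMIRROR
    BB_GENERATE_MIRROR_TARBALLS = "1"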
kscherer has quit [Quit: Konversation terminated!]
matthias__ has quit [Quit: Client closed]
<JPEW> malsyned: That SOURCE_DATE_EPOCH seems good enough I think; you could also maybe just set the global SDE to the value you want :)
<JPEW> But, ya, that wouldn't cause systemd to rebuild
Wouter01006 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter01006 has joined #yocto
malsyned has quit [Quit: First shalt thou take out the Holy Pin. Then, shalt thou count to three. No more. No less.]
nemik_ has quit [Ping timeout: 260 seconds]
nemik_ has joined #yocto
nemik_ has quit [Ping timeout: 268 seconds]
nemik_ has joined #yocto
Habbie has quit [Ping timeout: 256 seconds]
otavio has quit [Ping timeout: 260 seconds]
otavio has joined #yocto
money_ has joined #yocto
d4rkn0d3z has quit [Ping timeout: 256 seconds]
d4rkn0d3z has joined #yocto