LetoThe2nd changed the topic of #yocto to: Welcome to the Yocto Project | Learn more: https://www.yoctoproject.org | Community: https://www.yoctoproject.org/community | IRC logs: http://irc.yoctoproject.org/irc/ | Having difficulty on the list, with someone on the list or on IRC, contact Yocto Project Community Manager Letothe2nd | CoC: https://www.yoctoproject.org/community/code-of-conduct
qschulz has quit [Read error: Connection reset by peer]
qschulz has joined #yocto
davidinux has quit [Ping timeout: 246 seconds]
davidinux has joined #yocto
Ablu has quit [Ping timeout: 255 seconds]
Ablu has joined #yocto
jclsn has quit [Ping timeout: 260 seconds]
jclsn has joined #yocto
Vonter has joined #yocto
otavio has quit [Ping timeout: 246 seconds]
otavio has joined #yocto
prabhakarlad has quit [Quit: Client closed]
pabigot has quit [Ping timeout: 240 seconds]
Wouter0100670440 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100670440 has joined #yocto
pabigot has joined #yocto
Thorn has joined #yocto
Thorn_ has quit [Ping timeout: 246 seconds]
Schlumpf has joined #yocto
kpo has quit [Ping timeout: 245 seconds]
rob_w has joined #yocto
alessioigor has joined #yocto
kayterina has joined #yocto
Kubu_work has quit [Quit: Leaving.]
manuel1985 has quit [Ping timeout: 245 seconds]
rfuentess has joined #yocto
frieder has joined #yocto
goliath has joined #yocto
_lore_ has joined #yocto
brrm has quit [Ping timeout: 245 seconds]
brrm has joined #yocto
davidinux has quit [Ping timeout: 245 seconds]
davidinux has joined #yocto
mckoan|away is now known as mckoan
<mckoan> good morning
<alessioigor> morning to you
Wouter0100670440 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100670440 has joined #yocto
zpfvo has joined #yocto
kayterina has quit [Ping timeout: 246 seconds]
Vonter has quit [Ping timeout: 248 seconds]
Vonter has joined #yocto
ptsneves has joined #yocto
ptsneves has quit [Ping timeout: 246 seconds]
varjag has joined #yocto
manuel1985 has joined #yocto
valdemaras has joined #yocto
olani- has quit [Ping timeout: 240 seconds]
<RP> khem: between fb51e196a978d452e6a14a8343832659da97fdc7 and 90801cd8cb23719031aaaba1578a8446e1824cad now. I'm running more builds; 8 possible commits if my tests are right
nedko has quit [Remote host closed the connection]
valdemaras has quit [Remote host closed the connection]
olani- has joined #yocto
Kubu_work has joined #yocto
Guest39 has joined #yocto
Guest23 has joined #yocto
xmn has quit [Ping timeout: 248 seconds]
florian has joined #yocto
gsalazar has joined #yocto
mrnuke_ has quit [Ping timeout: 246 seconds]
mbulut has joined #yocto
prabhakarlad has joined #yocto
Guest24 has joined #yocto
Guest24 has left #yocto [#yocto]
zpfvo has quit [Ping timeout: 245 seconds]
Schlumpf has quit [Quit: Client closed]
Guest39 has quit [Ping timeout: 246 seconds]
mrnuke has joined #yocto
ptsneves has joined #yocto
amelius__ has joined #yocto
zpfvo has joined #yocto
amelius__ has quit [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.]
zpfvo has quit [Ping timeout: 245 seconds]
zpfvo has joined #yocto
<__ad> hi, any sample of how to use a different kernel defconfig based on the machine?
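(A common pattern, sketched here with hypothetical machine names: file:// entries in SRC_URI are resolved through FILESOVERRIDES, which includes the machine name, so per-machine defconfigs can live in machine-named subdirectories of a kernel bbappend.)

    # linux-%.bbappend (sketch; "machine-a"/"machine-b" are placeholders)
    FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
    SRC_URI += "file://defconfig"
    # layer layout:
    #   files/machine-a/defconfig
    #   files/machine-b/defconfig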
dmoseley has quit [Ping timeout: 260 seconds]
<RP> looking like 12d9280c3de24c1c2b835e80fa1b8be72e9bc63a or earlier, which is getting a bit strange
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
Guest39 has joined #yocto
<Guest39> Hello, can you please help me resolve a Yocto build error while building AGL software? It happens during rootfs packaging with one of the AGL-supported images:
<Guest39> configuration error - unknown item 'SYSLOG_SU_ENAB' (notify administrator)
<Guest39> configuration error - unknown item 'SYSLOG_SG_ENAB' (notify administrator)
<Guest39> performing groupadd with [--root /home/yocto/yocto/build-domd/tmp/work/board-agl-linux/agl-demo-platform/1.0-r0/rootfs --gid 989 --system systemd-journal]
<Guest39> configuration error - unknown item 'SYSLOG_SU_ENAB' (notify administrator)
<Guest39> configuration error - unknown item 'SYSLOG_SG_ENAB' (notify administrator)
leon-anavi has joined #yocto
<JaMa> Guest39: does it fail the same way every time? I'm seeing this randomly in some builds (not AGL) and haven't found what triggers it, so if you have a reliable reproducer then I would be interested to see what was causing it
<khem> RP: libarchive is used by rpm cmake and elfutils so it could also be in picture
leon-anavi has quit [Remote host closed the connection]
<RP> khem: seems to be prior to that
<LetoThe2nd> yo dudX
kayterina has joined #yocto
river has joined #yocto
<river> when you build a cross compiler toolchain are the files in a sysroot used by the host or the target?
<rburton> well, yocto has host and target sysroots
<rburton> but typically, target
<river> i see. thank you
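(For context: each recipe gets its own pair of sysroots under its workdir, one for each side of the cross build; the paths below are illustrative.)

    tmp/work/<tune>/<recipe>/<version>/recipe-sysroot         # target sysroot: cross-compiled libs and headers
    tmp/work/<tune>/<recipe>/<version>/recipe-sysroot-native  # host sysroot: native tools the build runs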
<RP> rburton: at least it seems roughly reproducible
<rburton> are the TCG warnings new?
<RP> rburton: not sure
<rburton> they feel new
<RP> I was wondering. I've struggled just to narrow it down to this so far
<RP> this is the second build I've seen this pattern in
<rburton> and it says x2apic which was introduced in 2008
<RP> rburton: trouble is on a successful build they're probably just hidden
<rburton> or maybe qemu is being a bit more precise
<RP> I did wonder if it was worker specific but looking at the failing hosts, no pattern jumps out
Guest39 has quit [Ping timeout: 246 seconds]
varjag has quit [Quit: ERC (IRC client for Emacs 27.1)]
varjag has joined #yocto
leon-anavi has joined #yocto
kayterina has quit [Quit: Client closed]
Marian67 has joined #yocto
<Marian67> Hi, I'm building a wic image that I dd onto /dev/sda, then parted -s $DESTINATION_DEV "resizepart 3 -1" to grow the last partition up to the disk size. I'm trying to integrate resize2fs to extend /dev/root up to the partition size. It's exactly the problem described here:
<rburton> Marian67: if you use systemd then it can auto-expand on first boot
<Marian67> yes, I'm using systemd; how can it auto-expand on first boot??
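(What rburton is most likely referring to: systemd can grow the root filesystem to fill its partition at boot via the x-systemd.growfs mount option, and systemd-repart can grow the partition itself. A minimal sketch, assuming an ext4 root on partition 3:)

    # /etc/fstab
    /dev/sda3  /  ext4  defaults,x-systemd.growfs  0  1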
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<Marian67> I will also look into this, but my installer runs from an initramfs and dd's the wic, then does the parted resize; I also want to run resize2fs. Do you have any input on this?
<rburton> sure you could do that from your installer
<rburton> as you say, resize the partition then grow the file system
<rburton> RP: old qemu doesn't produce those warnings
<Marian67> dd if=$INSTALLER_FILE of=$DESTINATION_DEV bs=1M status=progress && sgdisk $DESTINATION_DEV -e && parted -s $DESTINATION_DEV "resizepart 3 -1"
<Marian67> this is what I'm doing
Guest23 has quit [Quit: Client closed]
<Marian67>     DESTINATION_DEV="/dev/sda"
<Marian67>     INSTALLER_FILE=$(ls /boot/*wic)
<rburton> you haven't actually asked a question or explained a problem yet
<rburton> you're missing a filesystem resize call there, of course
<RP> rburton: hmm, so what changed? :/
<Marian67> yes, I'm missing resize2fs /dev/sda3
<RP> rburton: I was just getting to a position to try and debug it...
<rburton> RP: just building qemu 8.1 now to verify. i suspect our qemu flags are not quite right and new qemu is being pedantic.
<RP> khem: qemuppc issue seems to be coming down to https://git.yoctoproject.org/poky/commit/?id=ffd73bef9b9bb5c94c050387941eee29719ca697 - the uninative upgrade
<Marian67> I'm trying to run resize2fs exactly as described here and it's not working: https://stackoverflow.com/questions/65200349/how-to-set-dev-root-filesystem-size-to-the-partition-size
<RP> rburton: why only those two tests though?
<rburton> god knows
<Marian67> IMAGE_INSTALL:append = " e2fsprogs e2fsprogs-resize2fs"
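(Putting the exchange together, a sketch of the complete installer sequence; it assumes the rootfs is partition 3 and that e2fsck/resize2fs are available in the initramfs:)

    dd if=$INSTALLER_FILE of=$DESTINATION_DEV bs=1M status=progress
    sgdisk $DESTINATION_DEV -e                     # move the backup GPT to the end of the disk
    parted -s $DESTINATION_DEV "resizepart 3 -1"   # grow partition 3 to the end of the disk
    e2fsck -f ${DESTINATION_DEV}3                  # offline resize wants a clean filesystem check
    resize2fs ${DESTINATION_DEV}3                  # grow the filesystem to fill the partition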
dmoseley has joined #yocto
<rburton> good news is that i can replicate the tcg thing trivially
dmoseley has quit [Ping timeout: 240 seconds]
<RP> rburton: I just confirmed the warnings aren't there previously
Piraty has quit [Quit: -]
<RP> rburton: well, it doesn't show them in the log but it doesn't hang on boot and dump stdout so we can't be sure :(
<rburton> runqemu nographic here has a working boot with master and tcg aborts with 8.1
Piraty has joined #yocto
<RP> rburton: interesting. I wonder why everything else worked :/
<rburton> yeah
<rburton> i'll dig
<RP> I guess this means I can't avoid qemuppc
<rburton> that's basically my plan yes
<RP> rburton: damnit ;-)
<rburton> RP: 'qemu has always emitted those warnings, we suggest you bisect'
<RP> rburton: that was my worry
<RP> I think runqemu is only dumping stdout upon failure
pidge has joined #yocto
<RP> rburton: it only fails if kvm isn't on the commandline
zpfvo has quit [Ping timeout: 246 seconds]
zpfvo has joined #yocto
Kubu_work has quit [Read error: Connection reset by peer]
Kubu_work1 has joined #yocto
vladest has quit [Ping timeout: 245 seconds]
Schlumpf has joined #yocto
Wouter0100670440 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100670440 has joined #yocto
amitk has joined #yocto
amitk has quit [Ping timeout: 246 seconds]
<RP> rburton: and without nographic it gets really funky :/
<rburton> stop looking at the qemu one
<RP> rburton: I was mainly curious to see if reverting uninative helped, which it doesn't (since that was implicated for qemuppc)
zpfvo has quit [Ping timeout: 245 seconds]
zpfvo has joined #yocto
<RP> rburton: there is something in common here in that non-kvm timing seems messed up :/
<rburton> interesting
<rburton> the x86 one is qemu segfaulting
<Ablu> Is there some support for meson wrap files (https://mesonbuild.com/Using-wraptool.html) available? QEMU seems to have replaced submodules with it.
varjag has quit [Quit: ERC (IRC client for Emacs 27.1)]
Kubu_work1 has quit [Ping timeout: 260 seconds]
Xagen has joined #yocto
dmoseley has joined #yocto
<rburton> Ablu: there's a qemu 8.1 recipe already on the list, but we generally dislike wrap because it means fetching outside of do_fetch
rob_w has quit [Quit: Leaving]
<rburton> wraps are typically a fallback for when libraries are not present: the yocto way has you writing recipes
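(Background on why wraps don't fire in OE builds: meson.bbclass in oe-core passes a no-download wrap mode to meson, so configure-time fetches are disabled and dependencies are expected to come from recipes via DEPENDS. Paraphrased from the class:)

    # meson.bbclass (paraphrased) - wraps must not download anything
    MESONOPTS = " ... --wrap-mode nodownload ..."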
mvlad has joined #yocto
dmoseley has quit [Ping timeout: 245 seconds]
<JaMa> interesting: if I use SKIP_RECIPE[efivar] = "foo" in local.conf, then it blacklists lib32-efivar as well, but if I add this to the DISTRO config then I have to blacklist lib32-efivar as well as efivar; strange
xmn has joined #yocto
<Ablu> rburton: Sure, I understand that something may sneak in that should be its own recipe. But for the downloading, couldn't the download just be done as part of do_fetch (similarly to how rust crates are loaded)?
<rburton> not trivially, but sure if you want to do that, you can work on it
<rburton> RP: got a fix for qemu
<rburton> might be worth trying with the ppc thing as it talks about a race...
amitk has joined #yocto
<rburton> RP: see ross/tcg
<Ablu> ACK. Sure. Thanks for the pointer to the update.
<RP> rburton: talks about RCUs in common code too which makes me wonder. I'll try and queue a test
<rburton> yeah
<rburton> two birds with one stone would be good
<RP> rburton: They felt somehow connected...
<marex> hey, if things like do_rootfs or do_populate_sdk take too long, is there any way to help those out? Like building in tmpfs maybe?
<rburton> marex: sdk generation takes ages as it has to compress a huge filesystem
<rburton> fwiw i have SDK_XZ_COMPRESSION_LEVEL = "-5" in my local.conf to speed that up
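(A local.conf sketch for this knob; the values are illustrative, and lower levels or --fast trade compression ratio for speed:)

    # local.conf
    SDK_XZ_COMPRESSION_LEVEL = "-5"       # rburton's setting above
    #SDK_XZ_COMPRESSION_LEVEL = "--fast"  # what marex uses below
    #XZ_COMPRESSION_LEVEL = "-5"          # same idea for xz-compressed images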
<marex> rburton: yep
<marex> I use --fast
<rburton> rootfs is just constructing the rootfs, so arguably not using rpm/dnf might make it noticeably faster. worth a test :)
<marex> rburton: would tmpfs have any impact ? I suspect it might
<marex> (I'm just testing that)
<RP> I have wondered if we should try a non-pkgmanager rootfs build...
<rburton> marginal
<marex> rburton: it should at least avoid the sync() which I assume happens
<RP> rburton: test running
<RP> marex: pseudo intercepts and drops sync calls ;-)
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<marex> oh :)
<marex> I didn't know that
Kubu_work has joined #yocto
Xagen has joined #yocto
sakoman1 has joined #yocto
amitk has quit [Ping timeout: 255 seconds]
prabhakarlad has quit [Quit: Client closed]
sakoman has quit [Quit: Leaving.]
dmoseley has joined #yocto
dmoseley has quit [Ping timeout: 246 seconds]
<khem> RP: maybe build uninative glibc without --enable-fortify-source
<khem> I doubt it will be helpful though
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<RP> khem: we're going to try the tcg fix ross pointed at
Haxxa has quit [Remote host closed the connection]
Haxxa has joined #yocto
prabhakarlad has joined #yocto
Schlumpf has quit [Quit: Client closed]
leon-anavi has quit [Remote host closed the connection]
rfuentess has quit [Remote host closed the connection]
mckoan is now known as mckoan|away
Kubu_work has quit [Quit: Leaving.]
frieder has quit [Remote host closed the connection]
dmoseley has joined #yocto
zpfvo has quit [Remote host closed the connection]
manuel1985 has quit [Remote host closed the connection]
manuel1985 has joined #yocto
<RP> JPEW: link is live and working :)
manuel1985 has quit [Ping timeout: 250 seconds]
kayterina has joined #yocto
ptsneves has quit [Ping timeout: 245 seconds]
<JPEW> \o/
alessioigor has quit [Quit: alessioigor]
gsalazar has quit [Ping timeout: 255 seconds]
Vonter has quit [Ping timeout: 250 seconds]
Vonter has joined #yocto
Kubu_work has joined #yocto
<yates_work> i'm having trouble with SRC_URI configuration: i have some files in the standard files/ folder, but i also want to specify all files in a file/images/ folder. how do i do that?
<yates_work> sorry, meant files/images subfolder
<yates_work> do i have to specify them one-by-one? e.g., file://images/a.png? there are a bunch and i was hoping to just be able to specify the subfolder, but file://images/ didn't work
<yates_work> wait!
<yates_work> it does work. i had a missing space in the previous continuation line. bleh!
kpo has joined #yocto
Vonter has quit [Ping timeout: 246 seconds]
Vonter has joined #yocto
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Thorn_ has joined #yocto
Xagen has joined #yocto
<LetoThe2nd> ok slackers, that's it - i'm finally out for some time, see you all in about 2 weeks.🤘🏽
Thorn has quit [Ping timeout: 244 seconds]
Vonter has quit [Ping timeout: 246 seconds]
kayterina has quit [Ping timeout: 246 seconds]
sudip has quit [Quit: ZNC - http://znc.in]
sudip has joined #yocto
mbulut has quit [Ping timeout: 246 seconds]
Wenqing has quit [Remote host closed the connection]
Wenqing has joined #yocto
old_boy has joined #yocto
dmoseley has quit [Quit: ZNC 1.8.2 - https://znc.in]
<RP> rburton: didn't fix qemuppc and seems to break qemumips/qemumips64: https://autobuilder.yoctoproject.org/typhoon/#/builders/154/builds/431 :(
<RP> khem: didn't help qemuppc so we still need to track that down
dmoseley has joined #yocto
<RP> LetoThe2nd: enjoy :)
flom84 has joined #yocto
<RP> JPEW: around and have a minute to sanity check something?
<JPEW> Ya
<RP> JPEW: I've logs showing a bitbake client/server timeout and I'm trying to work out what can be going wrong
<rburton> RP: damnit!
<RP> JPEW: the client has sent the "ping" command to the server. After two minutes, the server is tracebacking in self.command_channel_reply.send() for the ping reply: BrokenPipeError: [Errno 32] Broken pipe
<RP> JPEW: that means the UI probably gave up waiting for the ping reply after 30s
<RP> er. 60s
flom84 has quit [Quit: Leaving]
<JPEW> Ya makes sense
<RP> JPEW: what I'm puzzled about is why the UI couldn't have read anything and got the reply
flom84 has joined #yocto
<RP> JPEW: I was going to say, the UI as I understand it has a separate python thread queuing the events it receives until it can process them. Of course this reply isn't an event, it's a reply on the "command" channel :/
<RP> can we somehow deadlock the command channel pipe?
flom84 has quit [Read error: Connection reset by peer]
<JPEW> RP: It should be full duplex
<RP> We do have three pipes in play and two of them could in theory deadlock?
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<JPEW> Ya maybe
<RP> connectProcessServer() in process.py
<JPEW> If it's all blocking writes, then yes
<JPEW> (if the pipes are full)
amitk has joined #yocto
<JPEW> You may be relying on the good graces of the pipe's buffer
<RP> JPEW: I don't know how the pipes could be full :/
<JPEW> In my experience, using more threads is never as good as robust non-blocking sockets
vladest has joined #yocto
<JPEW> or non-blocking pipes
<RP> our event code and the thread it uses is fine. I think these are nonblocking pipes
<JPEW> They are created blocking, I haven't found where they are set to non-blocking (yet?) and send_bytes() basically means blocking
<RP> JPEW: I think I'm confusing with the runqueue ones
<RP> so yes, these are blocking
<RP> but we're sending something and waiting for a reply, which should work?
belgianguy has joined #yocto
flom84 has joined #yocto
<belgianguy> Does anyone have a good suggestion for Embedded Linux books they've read themselves? For someone who has quite some Linux experience, just not much embedded... :$
<belgianguy> (Or courses, sites, ...)
<RP> JPEW: I'm trying to debug this with Crofton, https://git.openembedded.org/bitbake/commit/?h=master-next&id=836ff1c2c5a05e104d130791d6e857be272405a3 is what I have so far. Would any other logging be helpful?
<JPEW> RP: Ya, I was going to suggest reporting the time spent blocking on pipes
<JPEW> The timestamps
<JPEW> If the server is waiting for ~30 seconds before dying with EPIPE, it almost surely a deadlock
alessioigor has joined #yocto
Xagen has joined #yocto
<Crofton> I switched bitbake to master-next
Xagen has quit [Client Quit]
flom84 has quit [Ping timeout: 245 seconds]
<RP> JPEW: I'm struggling to know what else this might be :/
<JPEW> Deadlock seems likely to me TBH
<JPEW> It might be a very complicated deadlock with a lot of threads all circularly locked on each other.... but deadlock all the same
<Crofton> running
<RP> JPEW: that is my worry. I was going to ask you to sanity check the threading on the event pipe but I don't think that is where any issue is now I've talked it out loud
<Crofton> do you mind if I overwrite filenames in dropbox?
Xagen has joined #yocto
<Crofton> when it comes time
<RP> Crofton: no
<RP> rburton: too good to be true really
amitk has quit [Ping timeout: 248 seconds]
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<JPEW> RP: I'm having trouble finding where the UI even sends events
<JPEW> I have the sneaking suspicion that some of the "Server" classes run on the actual server side, and some are proxies for the server on the client side
<RP> JPEW: the UI doesn't, the server sends events to the UI
<JPEW> Sorry commands
<RP> ServerCommunicator.runCommand in process.py
Haxxa has quit [Quit: Haxxa flies away.]
Xagen has joined #yocto
<RP> We really need to merge BitBakeProcessServerConnection, ServerCommunicator and BBUIEventQueue. They used to do a lot of different things but the separation is probably obsolete now
<RP> certainly BitBakeProcessServerConnection and ServerCommunicator
Haxxa has joined #yocto
alessioigor has quit [Quit: alessioigor]
Wouter0100670440 has quit [Quit: The Lounge - https://thelounge.chat]
Wouter0100670440 has joined #yocto
alessioigor has joined #yocto
starblue has quit [Quit: WeeChat 3.8]
<Crofton> Failed!
<JaMa> FWIW: I'm seeing this timeout as well. I've increased the timeout to 60s/120s and it seems to be triggered as often as when it was 30s/60s, so when it gets stuck, it's stuck for real. Interestingly, when I tried an even longer timeout (300s/600s), it was usually the jenkins job hitting its timeout instead (instead of the Timeout message from bitbake I get a java exception from jenkins):
<JaMa> + # NOTE: recipe g-camera-pipeline-1.0.0-gav.40-r14: task do_populate_sysroot: Succeeded
<JaMa> + # FATAL: command execution failed
<JaMa> + # java.net.SocketException: Socket closed
<JaMa> + # java.base/java.net.SocketInputStream.socketRead0(Native Method)
<JaMa> + # Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@6acbac8c:c04a-009wgnozd2kyr": Remote call on c04a-009wgnozd2kyr failed. The channel is closing down or has closed down
<JaMa> and it's more likely to happen on overloaded systems it seems
<JPEW> Crofton: No "Sending reply" messages?
florian_kc has joined #yocto
<JPEW> Or maybe they are in the bitbake server log file
<Crofton> give me a bit, machine being stupid
<Crofton> and need to pack some stuff
<JaMa> interestingly it happens shortly after "2586 12:57:11.483486 Parse cache invalidated"
<JaMa> at least in my case above, not sure about Crofton's
<Crofton> there you go and afk until Happy hour
<RP> 538490 16:22:34.007760 Running command ['ping']
<RP> 538490 16:24:04.273232 Sending reply ('Still alive!', None)
<RP> why on earth did that take two minutes?!
<RP> that shows it isn't the pipes anyway
<JPEW> Ya
<RP> looking at runCommand in command.py, I can see what it might be doing :/
<RP> I think we special case ping
<JPEW> RP: stuck in process_inotify_updates_apply()?
<JPEW> That's the only place I can see where it might block
<RP> JPEW: self.cooker.updateCacheSync and self.cooker.init_configdata might hit IO and block
<Crofton> git pull --force?
<JPEW> Ah right
<Crofton> lol no force
* JaMa steals it as well
<RP> JPEW: inotify is the most likely but I'm covering all options :)
<RP> vmeson: you might like this one ;-)
<Crofton> you are so lucky I just didn't update the machine .....
<RP> Crofton: yes, having someone reproducing this at will is helpful!
<JPEW> Crofton: Spinny hard drive?
<Crofton> yes
<JPEW> Ya, that makes sense
<Crofton> noisy as fuck
<RP> so old fashioned :)
florian_kc has quit [Ping timeout: 255 seconds]
<Crofton> the old ways are the best
<Crofton> does the auto builder need some spinny hard drives?
* vmeson reads
<Crofton> for better test coverage
<JPEW> Crofton: Probably better to use cgroups to limit the I/O for a particular test run..... leaves the rest of the capacity for other tests running at the same time
<RP> vmeson: a patch which might fix your ping timeouts
<Crofton> lol the machine I do interactive master builds on has nvme
<RP> JPEW: thanks for giving me someone to talk to, I really did think this was going to be a pipe/thread issue
<JPEW> It did quack like that duck
<RP> JaMa: since you see cache invalidation which is almost certainly from inotify processing, it should help your case
<vmeson> RP: Nice and just in time for happy hour!
* vmeson pours a glass
* JaMa gets beer as well
alessioigor has quit [Quit: alessioigor]
<Crofton> hmmm
<Crofton> Happy hour, sgw is pouring beer
<sgw> Yup, Homebrew Smoked Porter
<JaMa> even with builds on NVME I get some IO pressure :) NOTE: Pressure status changed to CPU: True, IO: None, Mem: None (CPU: 721545.8/200000.0, IO: 11467.0/None, Mem: 1076.6/None)
mvlad has quit [Remote host closed the connection]
florian_kc has joined #yocto
* RP sent the patch to the bitbake list
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Kubu_work has quit [Quit: Leaving.]
Guest61 has joined #yocto
Guest61 has quit [Quit: Client closed]
<yates_work> i have found that using a file:// SRC_URI causes the source code to be placed in ${WORKDIR}, whereas the OpenEmbedded build system defaults S to ${WORKDIR}/${BPN}-${PV}. thus to use file:// SRC_URIs I must set S in my recipe.
<yates_work> why doesn't the fetcher put file:// files in ${WORKDIR}/${BPN}-${PV}?
<yates_work> then it would Just Work(TM)
goliath has quit [Quit: SIGSEGV]
<JaMa> yates_work: do_unpack unpacks the tarball in ${WORKDIR}, ${BP} is just the typical default for S
<JaMa> yates_work: so file:// works consistently with it
<yates_work> when using a tarball for SRC_URI, what is responsible for moving the files from ${WORKDIR} to ${WORKDIR}/${BPN}-${PV} after do_unpack?
<yates_work> oh wait, you're saying you have to do that same "S" definition in the recipe when using tarballs?
<yates_work> i sorta get you, but why not design these oe components such that a redefinition of S is never needed? that would make yocto easier to use, as far as i can tell
<yates_work> perhaps it's historical.
<JaMa> there was a discussion about that on the ML, because there are still some issues with recipes where S = "${WORKDIR}"; the idea was that instead of unpacking archives in ${WORKDIR} it would unpack them in ${WORKDIR}/source or some directory like that, and optionally strip the first directory
<JaMa> "what is responsible for moving the files from ${WORKDIR} to ${WORKDIR}/${BPN}-${PV}" nothing is moving anything, they are _UNPACKED_ in ${WORKDIR}
<JaMa> if the tarball contains directory "foo", then you need to set S = "${WORKDIR}/foo" to run the tasks inside this directory
<JaMa> a quite common convention is that tarballs have a top-level directory which happens to be called ${BPN}-${PV} == ${BP}, so that is the default ${S} value in OE
<JaMa> but e.g. for git fetcher you always need to change it to ${WORKDIR}/git
<JaMa> and with file:// entries which aren't archives, there isn't anything to unpack, so they are added to ${WORKDIR} as they are
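(JaMa's explanation, condensed into recipe sketches; URLs and names are placeholders:)

    # tarball with a top-level hello-1.0/ dir: the default S = "${WORKDIR}/${BP}" just works
    SRC_URI = "https://example.com/hello-1.0.tar.gz"

    # git checks out into ${WORKDIR}/git, so S must be set
    SRC_URI = "git://example.com/hello.git;protocol=https;branch=main"
    S = "${WORKDIR}/git"

    # plain file:// sources are copied straight into ${WORKDIR}
    SRC_URI = "file://hello.c"
    S = "${WORKDIR}"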
dgriego has quit [Quit: Bye]
<DvorkinDmitry> I need an additional file installed with my kernel module into /etc/modules-load.d/ (to load it faster). How do I do it correctly in Mickledore? The kernel module recipe doesn't accept additional FILES +=
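(One likely answer, as an untested sketch with a placeholder module name: recipes inheriting module.bbclass support KERNEL_MODULE_AUTOLOAD, and kernel-module-split generates and packages the /etc/modules-load.d/ entry for you, so no FILES changes are needed:)

    # in the out-of-tree kernel module recipe (inherit module)
    KERNEL_MODULE_AUTOLOAD += "mymodule"
    # produces /etc/modules-load.d/mymodule.conf in the
    # kernel-module-mymodule package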