<RP>
khem: between fb51e196a978d452e6a14a8343832659da97fdc7 and 90801cd8cb23719031aaaba1578a8446e1824cad now. I'm running more builds; 8 possible commits if my tests are right
nedko has quit [Remote host closed the connection]
valdemaras has quit [Remote host closed the connection]
<Guest39>
Hello, can you please help me resolve a Yocto build error while building AGL software? It happens during rootfs packaging time with one of the AGL-supported images. agl-linux/agl-demo-platform/1.0-r0/rootfs --gid 989 --system systemd-journal]
<JaMa>
Guest39: does it fail the same all the time? I'm seeing this randomly in some builds (not AGL) and haven't found what triggers it, so if you have reliable reproducer then I would be interested to see what was causing it
<khem>
RP: libarchive is used by rpm, cmake and elfutils, so it could also be in the picture
leon-anavi has quit [Remote host closed the connection]
<RP>
khem: seems to be prior to that
<LetoThe2nd>
yo dudX
kayterina has joined #yocto
river has joined #yocto
<river>
when you build a cross compiler toolchain are the files in a sysroot used by the host or the target?
<rburton>
well, yocto has host and target sysroots
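Roughly speaking, each recipe in an OE/Yocto build gets both: a target sysroot that the cross-compiler compiles and links against, and a native sysroot of tools that run on the build host. A hedged illustration of the typical layout (exact paths vary by MACHINE and recipe):

    tmp/work/<arch>/<recipe>/<version>/recipe-sysroot/         # target sysroot: cross headers and libraries
    tmp/work/<arch>/<recipe>/<version>/recipe-sysroot-native/  # host sysroot: native tools used during the build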
<RP>
rburton: at least it seems roughly reproducible
<rburton>
are the TCG warnings new?
<RP>
rburton: not sure
<rburton>
they feel new
<RP>
I was wondering. I've struggled just to narrow it down to this so far
<RP>
this is the second build I've seen this pattern in
<rburton>
and it says x2apic which was introduced in 2008
<RP>
rburton: trouble is on a successful build they're probably just hidden
<rburton>
or maybe qemu is being a bit more precise
<RP>
I did wonder if it was worker specific but looking at the failing hosts, no pattern jumps out
Guest39 has quit [Ping timeout: 246 seconds]
varjag has quit [Quit: ERC (IRC client for Emacs 27.1)]
varjag has joined #yocto
leon-anavi has joined #yocto
kayterina has quit [Quit: Client closed]
Marian67 has joined #yocto
<Marian67>
Hi, I'm building a wic image that I dd onto /dev/sda, then parted -s $DESTINATION_DEV "resizepart 3 -1" to grow the last partition up to the disk size. I'm trying to integrate resize2fs to extend /dev/root up to the partition size. It's exactly the problem described here:
<Marian67>
I will also look at this, but my installer runs from initramfs and does the dd of the wic and the parted resize; I also want to run resize2fs. Do you have any input on this?
<rburton>
sure you could do that from your installer
<rburton>
as you say, resize the partition then grow the file system
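A minimal sketch of that installer sequence, assuming an ext4 rootfs on the third partition of /dev/sda (device names, partition number and image name are illustrative):

    dd if=image.wic of=/dev/sda bs=4M conv=fsync
    parted -s /dev/sda "resizepart 3 -1"   # grow the last partition to the end of the disk
    partprobe /dev/sda                     # re-read the partition table
    e2fsck -f /dev/sda3                    # resize2fs wants a clean, unmounted filesystem
    resize2fs /dev/sda3                    # grow the ext4 filesystem to fill the partition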
<rburton>
RP: old qemu doesn't produce those warnings
varjag has quit [Quit: ERC (IRC client for Emacs 27.1)]
Kubu_work1 has quit [Ping timeout: 260 seconds]
Xagen has joined #yocto
dmoseley has joined #yocto
<rburton>
Ablu: there's a qemu 8.1 recipe already on the list, but we generally dislike wrap because it means fetching outside of do_fetch
rob_w has quit [Quit: Leaving]
<rburton>
wraps are typically a fallback for when libraries are not present: the yocto way has you writing recipes
mvlad has joined #yocto
dmoseley has quit [Ping timeout: 245 seconds]
<JaMa>
interesting: if I use SKIP_RECIPE[efivar] = "foo" in local.conf, then it blacklists lib32-efivar as well, but if I add this to the DISTRO config then I have to blacklist lib32-efivar as well as efivar, strange
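For reference, a sketch of what JaMa describes; SKIP_RECIPE takes the skip reason as its value, and "foo" here is just a placeholder:

    # local.conf: this alone skipped both efivar and lib32-efivar
    SKIP_RECIPE[efivar] = "foo"

    # distro config: apparently both variants need to be listed explicitly
    SKIP_RECIPE[efivar] = "foo"
    SKIP_RECIPE[lib32-efivar] = "foo"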
xmn has joined #yocto
<Ablu>
rburton: Sure, I understand that something may sneak in that should be its own recipe. But for the downloading, couldn't the download just be done as part of do_fetch (similarly to how rust crates are loaded)?
<rburton>
not trivially, but sure if you want to do that, you can work on it
<rburton>
RP: got a fix for qemu
<rburton>
might be worth trying with the ppc thing as it talks about a race...
amitk has joined #yocto
<rburton>
RP: see ross/tcg
<Ablu>
ACK. Sure. Thanks for the pointer to the update.
<RP>
rburton: talks about RCUs in common code too which makes me wonder. I'll try and queue a test
<rburton>
yeah
<rburton>
two birds with one stone would be good
<RP>
rburton: They felt somehow connected...
<marex>
hey, if things like do_rootfs or do_populate_sdk take too long, is there any way to help those out ? Like build in tmpfs maybe ?
<rburton>
marex: sdk generation takes ages as it has to compress a huge filesystem
<rburton>
fwiw i have SDK_XZ_COMPRESSION_LEVEL = "-5" in my local.conf to speed that up
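A hedged local.conf sketch along those lines (values are illustrative; lower xz levels compress faster but produce a larger SDK archive):

    # Trade a somewhat larger SDK archive for a much faster do_populate_sdk
    SDK_XZ_COMPRESSION_LEVEL = "-5"
    # xz threading is already derived from the CPU count, but it can be pinned if needed
    XZ_THREADS = "8"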
<marex>
rburton: yep
<marex>
I use --fast
<rburton>
rootfs is just constructing the rootfs, so arguably not using rpm/dnf might make it noticeably faster. worth a test :)
<marex>
rburton: would tmpfs have any impact ? I suspect it might
<marex>
(I'm just testing that)
<RP>
I have wondered if we should try a non-pkgmanager rootfs build...
<rburton>
marginal
<marex>
rburton: it should at least avoid the sync() which I assume happens
<RP>
rburton: test running
<RP>
marex: pseudo intercepts and drops sync calls ;-)
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<marex>
oh :)
<marex>
I didn't know that
Kubu_work has joined #yocto
Xagen has joined #yocto
sakoman1 has joined #yocto
amitk has quit [Ping timeout: 255 seconds]
prabhakarlad has quit [Quit: Client closed]
sakoman has quit [Quit: Leaving.]
dmoseley has joined #yocto
dmoseley has quit [Ping timeout: 246 seconds]
<khem>
RP: maybe build uninative glibc without --enable-fortify-source
<khem>
I doubt it will be helpful though
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<RP>
khem: we're going to try the tcg fix ross pointed at
Haxxa has quit [Remote host closed the connection]
Haxxa has joined #yocto
prabhakarlad has joined #yocto
Schlumpf has quit [Quit: Client closed]
leon-anavi has quit [Remote host closed the connection]
rfuentess has quit [Remote host closed the connection]
mckoan is now known as mckoan|away
Kubu_work has quit [Quit: Leaving.]
frieder has quit [Remote host closed the connection]
dmoseley has joined #yocto
zpfvo has quit [Remote host closed the connection]
manuel1985 has quit [Remote host closed the connection]
manuel1985 has joined #yocto
<RP>
JPEW: link is live and working :)
manuel1985 has quit [Ping timeout: 250 seconds]
kayterina has joined #yocto
ptsneves has quit [Ping timeout: 245 seconds]
<JPEW>
\o/
alessioigor has quit [Quit: alessioigor]
gsalazar has quit [Ping timeout: 255 seconds]
Vonter has quit [Ping timeout: 250 seconds]
Vonter has joined #yocto
Kubu_work has joined #yocto
<yates_work>
i'm having trouble with SRC_URI configuration: i have some files in the standard files/ folder, but i also want to specify all files in a file/images/ folder. how do i do that?
<yates_work>
sorry, meant files/images subfolder
<yates_work>
do i have to specify them one-by-one? e.g., file://images/a.png? there are a bunch and i was hoping to just be able to specify the subfolder, but file://images/ didn't work
<yates_work>
wait!
<yates_work>
it does work. i had a missing space in the previous continuation line. bleh!
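For anyone hitting the same thing, a hedged sketch of what yates_work describes (the file names are made up); the space before each continuation backslash matters, otherwise adjacent entries get glued together:

    SRC_URI = "file://defaults.conf \
               file://images/ \
              "
    # file://images/ pulls in the whole files/images/ subdirectory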
kpo has joined #yocto
Vonter has quit [Ping timeout: 246 seconds]
Vonter has joined #yocto
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Thorn_ has joined #yocto
Xagen has joined #yocto
<LetoThe2nd>
ok slackers, that's it - i'm finally out for some time, see you all in about 2 weeks.🤘🏽
<RP>
khem: didn't help qemuppc so we still need to track that down
dmoseley has joined #yocto
<RP>
LetoThe2nd: enjoy :)
flom84 has joined #yocto
<RP>
JPEW: around and have a minute to sanity check something?
<JPEW>
Ya
<RP>
JPEW: I've logs showing a bitbake client/server timeout and I'm trying to work out what can be going wrong
<rburton>
RP: damnit!
<RP>
JPEW: the client has sent the "ping" command to the server. After two minutes, the server is tracebacking in self.command_channel_reply.send() for the ping reply with BrokenPipeError: [Errno 32] Broken pipe
<RP>
JPEW: that means the UI probably gave up waiting for the ping reply after 30s
<RP>
er. 60s
flom84 has quit [Quit: Leaving]
<JPEW>
Ya makes sense
<RP>
JPEW: what I'm puzzled about is why the UI couldn't have read anything and got the reply
flom84 has joined #yocto
<RP>
JPEW: I was going to say, the UI as I understand it has a separate python thread queuing the events it receives until it can process them. Of course this reply isn't an event, it's a reply on the "command" channel :/
<RP>
can we somehow deadlock the command channel pipe?
flom84 has quit [Read error: Connection reset by peer]
<JPEW>
RP: It should be full duplex
<RP>
We do have three pipes in play and two of them could in theory deadlock?
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<JPEW>
Ya maybe
<RP>
connectProcessServer() in process.py
<JPEW>
If it's all blocking writes, then yes
<JPEW>
(if the pipes are full)
amitk has joined #yocto
<JPEW>
You may be relying on the good graces of the pipe's buffer
<RP>
JPEW: I don't know how the pipes could be full :/
<JPEW>
In my experience, using more threads is never as good as robust non-blocking sockets
vladest has joined #yocto
<JPEW>
or non-blocking pipes
<RP>
our event code and the thread it uses is fine. I think these are nonblocking pipes
<JPEW>
They are created blocking, I haven't found where they are set to non-blocking (yet?), and send_bytes() basically means blocking
<RP>
JPEW: I think I'm confusing with the runqueue ones
<RP>
so yes, these are blocking
<RP>
but we're sending something and waiting for a reply, which should work?
belgianguy has joined #yocto
flom84 has joined #yocto
<belgianguy>
Does anyone have a good suggestion on Embedded Linux books they've read themselves? For someone who has quite some Linux experience, just not much embedded... :$
<JPEW>
RP: Ya, I was going to suggest reporting the time spent blocking on pipes
<JPEW>
The timestamps
<JPEW>
If the server is waiting for ~30 seconds before dying with EPIPE, it's almost surely a deadlock
alessioigor has joined #yocto
Xagen has joined #yocto
<Crofton>
I switched bitbake to master-next
Xagen has quit [Client Quit]
flom84 has quit [Ping timeout: 245 seconds]
<RP>
JPEW: I'm struggling to know what else this might be :/
<JPEW>
Deadlock seems likely TBH
<JPEW>
It might be a very complicated deadlock with a lot of threads all circularly locked on each other.... but deadlock all the same
<Crofton>
running
<RP>
JPEW: that is my worry. I was going to ask you to sanity check the threading on the event pipe but I don't think that is where any issue is now I've talked it out loud
<Crofton>
do you mind if I overwrite filenames in dropbox?
Xagen has joined #yocto
<Crofton>
when it comes time
<RP>
Crofton: no
<RP>
rburton: too good to be true really
amitk has quit [Ping timeout: 248 seconds]
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<JPEW>
RP: I'm having trouble finding where the UI even sends events
<JPEW>
I have the sneaking suspicion that some of the "Server" classes run on the actual server side, and some are proxies for the server on the client side
<RP>
JPEW: the UI doesn't, the server sends events to the UI
<JPEW>
Sorry commands
<RP>
ServerCommunicator.runCommand in process.py
Haxxa has quit [Quit: Haxxa flies away.]
Xagen has joined #yocto
<RP>
We really need to merge BitBakeProcessServerConnection, ServerCommunicator and BBUIEventQueue. They used to do a lot of different things but the separation is probably obsolete now
<RP>
certainly BitBakeProcessServerConnection and ServerCommunicator
<JaMa>
FWIW: I'm seeing this timeout as well. I've increased the timeout to 60s/120s and it seems to be triggered as often as when it was 30s/60s, so when it gets stuck, it's stuck for real. Interestingly, when I tried an even longer timeout of 300s/600s, it was usually the jenkins job hitting its own timeout (instead of a Timeout message from bitbake I get a java exception from jenkins:
<JaMa>
+ # Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@6acbac8c:c04a-009wgnozd2kyr": Remote call on c04a-009wgnozd2kyr failed. The channel is closing down or has closed down
<JaMa>
and it's more likely to happen on overloaded systems it seems
<JPEW>
Crofton: No "Sending reply" messages?
florian_kc has joined #yocto
<JPEW>
Or maybe they are in the bitbake server log file
<Crofton>
give me a bit, machine being stupid
<RP>
JPEW: inotify is the most likely but I'm covering all options :)
<RP>
vmeson: you might like this one ;-)
<Crofton>
you are so lucky I just didn't update the machine .....
<RP>
Crofton: yes, having someone reproducing this at will is helpful!
<JPEW>
Crofton: Spinny hard drive?
<Crofton>
yes
<JPEW>
Ya, that makes sense
<Crofton>
noisy as fuck
<RP>
so old fashioned :)
florian_kc has quit [Ping timeout: 255 seconds]
<Crofton>
the old ways are the best
<Crofton>
does the auto builder need some spinny hard drives?
* vmeson
reads
<Crofton>
for better test coverage
<JPEW>
Crofton: Probably better to use cgroups to limit the I/O for a particular test run..... leaves the rest of the capacity for other tests running at the same time
<RP>
vmeson: a patch which might fix your ping timeouts
<Crofton>
lol the machine I do interactive master buids on has nvme
<RP>
JPEW: thanks for giving me someone to talk to, I really did think this was going to be a pipe/thread issue
<JPEW>
It did quack like that duck
<RP>
JaMa: since you see cache invalidation which is almost certainly from inotify processing, it should help your case
<vmeson>
RP: Nice and just in time for happy hour!
* vmeson
pours a glass
* JaMa
gets beer as well
alessioigor has quit [Quit: alessioigor]
<Crofton>
hmmm
<Crofton>
Happy hour, sgw is pouring beer
<sgw>
Yup, Homebrew Smoked Porter
<JaMa>
even with builds on NVME I get some IO pressure :) NOTE: Pressure status changed to CPU: True, IO: None, Mem: None (CPU: 721545.8/200000.0, IO: 11467.0/None, Mem: 1076.6/None)
mvlad has quit [Remote host closed the connection]
florian_kc has joined #yocto
* RP
sent the patch to the bitbake list
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Kubu_work has quit [Quit: Leaving.]
Guest61 has joined #yocto
Guest61 has quit [Quit: Client closed]
<yates_work>
i have found that using a file:// SRC_URI causes the source code to be placed in ${WORKDIR}, whereas the OpenEmbedded build system defaults S to ${WORKDIR}/${BPN}-${PV}. thus to use file:// SRC_URIs I must set S in my recipe.
<yates_work>
why doesn't the fetcher put file:// files in ${WORKDIR}/${BPN}-${PV}?
<yates_work>
then it would Just Work(TM)
goliath has quit [Quit: SIGSEGV]
<JaMa>
yates_work: do_unpack unpacks the tarball in ${WORKDIR}, ${BP} is just the typical default for S
<JaMa>
yates_work: so file:// works consistently with it
<yates_work>
when using a tarball for SRC_URI, what is responsible for moving the files from ${WORKDIR} to ${WORKDIR}/${BPN}-${PV} after do_unpack?
<yates_work>
oh wait, you're saying you have to do that same "S" definition in the recipe when using tarballs?
<yates_work>
i sorta get you, but why not design these oe components such that a redefinition of S is never needed? that would make yocto easier to use, as far as i can tell
<yates_work>
perhaps it's historical.
<JaMa>
there was a discussion about that on the ML, because there are still some issues with recipes where S = "${WORKDIR}"; so instead of unpacking archives in ${WORKDIR} it would unpack them in ${WORKDIR}/source or some directory like that, and optionally strip the first directory
<JaMa>
"what is responsible for moving the files from ${WORKDIR} to ${WORKDIR}/${BPN}-${PV}" nothing is moving anything, they are _UNPACKED_ in ${WORKDIR}
<JaMa>
if the tarball contains directory "foo", then you need to set S = "${WORKDIR}/foo" to run the tasks inside this directory
<JaMa>
quite common convention is that tarballs do have top level directory which happens to be called ${BPN}-${PV} == ${BP}, so that is the default ${S} value in OE
<JaMa>
but e.g. for git fetcher you always need to change it to ${WORKDIR}/git
<JaMa>
and with file:// entries which aren't archives there isn't anything to unpack, so they are copied into ${WORKDIR} as they are
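Putting JaMa's explanation into recipe form, a rough sketch (names and URLs are illustrative):

    # Tarball whose top-level directory is ${BPN}-${PV}: the default S = "${WORKDIR}/${BP}" just works
    SRC_URI = "https://example.com/foo-${PV}.tar.gz"

    # Tarball that instead unpacks into a directory simply called "foo"
    S = "${WORKDIR}/foo"

    # Git fetcher checks out into ${WORKDIR}/git
    SRC_URI = "git://example.com/foo.git;protocol=https;branch=main"
    S = "${WORKDIR}/git"

    # Plain file:// sources are copied straight into ${WORKDIR}
    SRC_URI = "file://foo.c"
    S = "${WORKDIR}"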
dgriego has quit [Quit: Bye]
<DvorkinDmitry>
I need an additional file to install with a kernel module into /etc/modules-load.d/ (to load it faster). How do I do it correctly in Mickledore? The kernel module recipe doesn't accept an additional FILES +=
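One hedged note on that last question: with kernel-module-split.bbclass (inherited by module.bbclass), the modules-load.d conf can usually be generated rather than installed by hand; "my_module" below is an illustrative module name:

    # In the out-of-tree module recipe (or a bbappend)
    KERNEL_MODULE_AUTOLOAD += "my_module"
    # This generates a modules-load.d conf entry for the module and packages it
    # into kernel-module-my-module, so no extra FILES entry should be needed.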