kpo has quit [Read error: Connection reset by peer]
kpo has joined #yocto
kpo has quit [Remote host closed the connection]
kpo has joined #yocto
sakoman has quit [Quit: Leaving.]
mpb27 has quit [Ping timeout: 246 seconds]
Estrella has quit [Ping timeout: 258 seconds]
zelgomer has quit [Ping timeout: 246 seconds]
Chaser has joined #yocto
zelgomer has joined #yocto
Chaser has quit [Quit: Chaser]
Nixkernal has joined #yocto
Nixkernal has quit [Remote host closed the connection]
kpo has quit [Quit: Konversation terminated!]
Nixkernal has joined #yocto
Schlumpf has joined #yocto
<LetoThe2nd>
Marian78: no, the first step would be to rearrange the partitions so / actually is the last one. After that, you can use systemd-growfs, for example (always talking about in-system operations)
<Marian78>
Yes, I'm first trying to rearrange the partitions. I thought that WKS_FILE could also be put in the .bb, but it seems to work only in local.conf, and the .wks.in needs to be at my_layer/wic/myfile.wks.in to work
<adrianf>
Marian78: It's simple if you do it after dd. Doing it at the first boot of your target device is more complicated. Therefore you probably need an initramfs. It's usually not possible to grow a mounted partition and a mounted file-system.
<Marian78>
yes, I'm booting an initramfs and I have the wic in /boot
Chaser has joined #yocto
<Marian78>
I want to dd and grow the last partition
<Marian78>
the problem is that I'm not able to change the partition order using WKS_FILE
rfuentess has joined #yocto
Chaser has quit [Client Quit]
Nixkernal has quit [Remote host closed the connection]
Nixkernal has joined #yocto
<LetoThe2nd>
adrianf: systemd-growfs does exactly that, it's commonly used to create one-size-fits-all SD card images for the Raspberry Pi and the like.
<LetoThe2nd>
Marian78: why are you not able to change the order?
amitk_ has quit [Quit: leaving]
<Marian78>
I finally managed it; I didn't expect that the .wks.in needs to be added to SRC_URI to be transferred to WORKDIR
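A minimal sketch of the arrangement Marian78 describes, with hypothetical recipe and file names (the template would live next to the recipe, e.g. in a files/ subdirectory):

    # my-image.bb -- image recipe (names are examples)
    WKS_FILE = "my-image.wks.in"
    # the template must be fetched into WORKDIR before wic can expand it
    SRC_URI += "file://my-image.wks.in"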
goliath has joined #yocto
varjag has joined #yocto
Guest29 has joined #yocto
<Marian78>
+LetoThe2nd can you please point me to the "systemd-growfs"?
<adrianf>
+LetoThe2nd: Yes, I know about systemd-growfs. If there is an initramfs and a recent version of systemd, it just works. But that's not always the case.
<LetoThe2nd>
adrianf: yeah systemd being around and working is not always a given. but at least on ARM stuff, it definitely works without an initramfs too.
<LetoThe2nd>
ok, not sure about a live partition. it might be limited to non-mounted ones.
<LetoThe2nd>
e.g. an additional data partition or such.
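The usual hook for this, as far as I know, is the x-systemd.growfs mount option in fstab, which makes systemd's fstab generator run systemd-growfs@.service for that mount at boot; the device and mount point below are hypothetical. Note it grows only the filesystem, not the partition table entry:

    # /etc/fstab -- grow the data partition's filesystem on every boot
    /dev/mmcblk0p3  /data  ext4  defaults,x-systemd.growfs  0  2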
Guest29 has quit [Ping timeout: 246 seconds]
Guest89 has joined #yocto
<adrianf>
+LetoThe2nd: Good to know. I remember there was an issue with growing the GPT partition and with growing the mounted ext4 when I tried that some time ago. I will think about your advice if I need to improve our implementation at some point.
<Guest89>
trying to find a GNU parallel recipe for the dunfell release, but I couldn't find it there. Where can I find it?
<LetoThe2nd>
adrianf: np. As we're talking about it, I realize that we're using it just for the relatively limited use case of growing the data partition located at the end, so it might indeed be too limited for a number of use cases.
<kanavin>
RP: if there's some nasty bisect job for the ppc failures I could do, I have a few days. Or something else that brings it closer to being fixed.
<RP>
kanavin: The only way I'm having any "progress" with it is to load the autobuilder and run tests to rule out different combinations. If I run 2-3 qemuppc-alt and one fails, I know that the issue is still present
<RP>
kanavin: I'm kind of running out of "likely suspects" though
leon-anavi has quit [Remote host closed the connection]
leon-anavi has joined #yocto
florian_kc has joined #yocto
rob_w has joined #yocto
<LetoThe2nd>
does oe-pkgdata-util help with finding out what puts something into the deploy dir, even if it is not in the rootfs? specifically, a closed-source bootloader
Schlumpf has quit [Quit: Client closed]
xmn has quit [Remote host closed the connection]
<ptsneves>
LetoThe2nd: I know of no reason it would do so
<LetoThe2nd>
ptsneves: that's my guess too :-(
<ptsneves>
I guess you could search the work directories of all the recipes for the file. Before something goes to the deploy dir, it needs to be in the recipe's WORKDIR.
<ptsneves>
DEPLOYDIR = "${WORKDIR}/deploy-${PN}" is the recipe-related input directory
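A sketch of ptsneves's suggestion as a shell one-liner, run from the build directory (the artifact name is hypothetical):

    # search every recipe's deploy staging directory for the file
    find tmp/work -path '*/deploy-*' -name 'bootloader.bin'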
<kanavin>
mcfrisk, they absolutely are. A lot of ptests pipe output into sed, which obscures the return value, but we're slowly fixing that
<kanavin>
quite often sed doesn't catch all possible fails, or the output looks fine but the return value is not
<kanavin>
mcfrisk, *they are supposed to catch* :) English rules of negation are a pita, sorry
prabhakarlad has quit [Quit: Client closed]
<mcfrisk>
ok, then I'll try to add some "set -eu" or similar. It's also hard to capture additional traces and log files when a test fails if errors are not captured systematically and lots of wrapper scripts are used...
<kanavin>
mcfrisk, which ptest are we talking about?
<mcfrisk>
"set euxo pipefail" would help there, and IMO a lot of other places too
<kanavin>
i.e. add || echo "FAIL: python3" prior to piping into sed
<kanavin>
you need to ensure the whole thing fails, not just any particular command in the script
<mcfrisk>
but the return value is always zero there too
<kanavin>
that's ok; having a FAIL: token ensures the failure will be reported
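One way to read kanavin's suggestion, sketched as a run-ptest fragment (the suite command and sed expressions are hypothetical):

    #!/bin/sh
    # the || sits on the left side of the pipeline, so a crashed suite
    # still emits a FAIL: token for the ptest runner, while sed keeps
    # doing the per-test PASS/FAIL rewriting
    { python3 -m test || echo "FAIL: python3"; } \
        | sed -e 's/^ok /PASS: /' -e 's/^not ok /FAIL: /'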
<mcfrisk>
yes, but if you can't tell which test failed and when, other than from stdout later on, then you can't do extra things like capturing log and temp files
<mcfrisk>
IMO it should be || (echo "FAIL: python3" ; exit 1)
<kanavin>
mcfrisk, we considered that, but then you lose per-test split reporting done by sed
<mcfrisk>
but even that fails without -o pipefail when the output is piped to sed
<mcfrisk>
this doesn't work now, and I can't inspect what is going on in the system, the kernel, the test logs, and syslog: while true; do run-ptest openssh || break; done
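mcfrisk's pipefail point in runnable form; note that pipefail is a bash/ksh feature, not POSIX sh, which is part of why ptest scripts can't always rely on it (suite name hypothetical):

    #!/bin/bash
    set -euo pipefail
    # with pipefail the pipeline reports the failing left-hand command,
    # so the handler actually fires and the script exits non-zero
    python3 -m test | sed -e 's/^/python3: /' \
        || { echo "FAIL: python3"; exit 1; }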
<RP>
rburton: thanks, looks good. Need to get the qemu upgrade in!
<rburton>
yeah i'd send backports but the upgrade is queued in next so i won't bother :)
<RP>
rburton: fair enough. The update is horrible :/
<rburton>
urgh
<rburton>
if it's that bad I can post backports and we can hold off on 8.1
<rburton>
i worried they were going to do crazy things when i asked if the goal was to be able to build without using the configure script, but no, they wanted to explicitly wrap meson
<RP>
rburton: they require a Python venv which doesn't work. I found a way to hack around it
<rburton>
venvs with pynative should work though, so that needs to be fixed too :/
<RP>
rburton: it sees meson is up to date in the native sysroot and says "don't need to do anything"
<RP>
rburton: qemu wants the meson binary in the venv bin/
<rburton>
oh they're venving meson? wtf.
<RP>
kanavin: FWIW I've now ruled out the binutils upgrade
* RP
isn't entirely sure what to test next for qemuppc
<kanavin>
RP: maybe roll back weeks and weeks of commits and test that to at least establish a clean point in the past
<RP>
kanavin: some good work there and scary finds!
Chaser has joined #yocto
<kanavin>
RP: pretty much all of this was found from running ptests in a simulated year 2040. I kinda have to roll back what I said previously about them :)
<RP>
kanavin: they do have some uses :)
<kanavin>
the scary part is that if a component has no ptests, there's no other way to confirm it will not break in post-2038
<kanavin>
there's overall system testing via testimage of course, but it only goes so far
* mcfrisk
applied some patches to openssh ptest and left it looping, beer o'clock! cheers!
nedko has quit [Remote host closed the connection]
zelgomer has quit [Remote host closed the connection]
GNUmoon has quit [Remote host closed the connection]
nedko has joined #yocto
zelgomer has joined #yocto
GNUmoon has joined #yocto
<ptsneves>
Has anybody ever seen task hash mismatches happening only in tinfoil.build_targets? Is there anything special about it? I cannot even see any stamps generated to debug with
florian_kc has quit [Ping timeout: 245 seconds]
varjag has quit [Quit: ERC (IRC client for Emacs 27.1)]
DvorkinDmitry has joined #yocto
<DvorkinDmitry>
I need to have NTP enabled with systemd (to sync the local time from the servers), plus an NTP server that will serve time to an internal subnet, in Yocto. What recipe would you recommend? ntp, chrony, or something else?
* kanavin
wonders if systemd nowadays provides all of it :D
<kanavin>
if it has a bootloader, then why not ntp server *and* client too
<DvorkinDmitry>
kanavin, yes. It is only the client :(
<adrianf>
just found my bug in the devtool ide test. sorry for the noise.
<rburton>
DvorkinDmitry: if you want to run a proper server then ntpd seems like the safest bet
<DvorkinDmitry>
rburton, yes. thank you. the classics are the best choice if unsure
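A sketch of pulling the server in; the ntp recipe lives in meta-openembedded's meta-networking layer, so that layer is assumed to be present:

    # local.conf or image recipe
    IMAGE_INSTALL:append = " ntp ntp-utils"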
<ptsneves>
Am I right that if a download of an sstate cache archive fails halfway and the artifact exists in DL_DIR, then the next run will use the half-downloaded sstate cache artifact? That is the only explanation for what I am seeing
valdemaras has joined #yocto
Chaser has quit [Quit: Chaser]
<rburton>
ptsneves: that would be a bug if so, but i could believe it happens
Chaser has joined #yocto
Chaser has quit [Client Quit]
<ptsneves>
rburton: Oh, I see that wget.py had some attempts at avoiding similar situations by checking whether the file exists and has a non-zero size
Chaser has joined #yocto
Chaser has quit [Client Quit]
<DvorkinDmitry>
may I set DEFAULT_TIMEZONE in the image recipe? It feels like it is not accepted there
<ptsneves>
I think it is tricky because, unlike normal downloads, for sstate cache files we do not have a hash of the file. Or do we?
<rburton>
the download should just write to a temporary file and rename when it's actually complete
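The pattern rburton describes, sketched in shell (file names are hypothetical):

    # download to a temp name and rename only on success, so an
    # interrupted transfer never leaves a plausible-looking artifact
    wget -O "$DL_DIR/sstate.tgz.part" "$URL" \
        && mv "$DL_DIR/sstate.tgz.part" "$DL_DIR/sstate.tgz"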
<rburton>
DvorkinDmitry: as discussed yesterday, you can't do that
<rburton>
you need to use local.conf, your distro.conf, or a bbappend to set DEFAULT_TIMEZONE
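DEFAULT_TIMEZONE is consumed by the tzdata recipe, which is why setting it in an image recipe never reaches it; a minimal sketch (the timezone value is an example):

    # local.conf or your distro.conf
    DEFAULT_TIMEZONE = "Europe/Berlin"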
<ptsneves>
rburton: Yes, but for wget that's not the case. It just goes straight to DL_DIR
leon-anavi has quit [Quit: Leaving]
Perflosopher has quit [Quit: Ping timeout (120 seconds)]
Perflosopher has joined #yocto
fray has quit [Remote host closed the connection]
Chaser has joined #yocto
Chaser has quit [Client Quit]
Nixkernal has quit [Ping timeout: 245 seconds]
ecdhe has quit [Read error: Connection reset by peer]
ecdhe has joined #yocto
vladest has quit [Ping timeout: 240 seconds]
valdemaras has quit [Quit: valdemaras]
ptsneves has quit [Ping timeout: 256 seconds]
rty has quit [Quit: Client closed]
florian has quit [Quit: Ex-Chat]
valdemaras has joined #yocto
Chaser has joined #yocto
mpb27 has joined #yocto
amitk has joined #yocto
lexano has joined #yocto
amitk has quit [Remote host closed the connection]
amitk has joined #yocto
florian_kc has joined #yocto
mpb27 has quit [Ping timeout: 246 seconds]
flom84 has joined #yocto
<khem>
RP: any breakthroughs on the qemuppc timeouts?
<khem>
how many taps should be created? I created the same number as ncpu and it still runs out of taps when building core-image-ptest-all
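For pre-creating taps there is scripts/runqemu-gen-tapdevs; the exact arguments vary between releases, so check the script's usage text first. A sketch assuming a count of 16 covers the parallel testimage tasks:

    # create 16 persistent tap devices owned by the current group
    sudo ./scripts/runqemu-gen-tapdevs $(id -g) 16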