<kanavin>
RP: I'm not exactly sure what to do. I could add code to ignore such zombie files, but that would mask the real issue, and it may result in a completely different failure elsewhere if the calling code relies on the information about the file.
<kanavin>
but if you're okay with that...
<kanavin>
I was also hoping there would be investigation from halstead first
kpo has quit [Ping timeout: 256 seconds]
florian_kc has quit [Ping timeout: 252 seconds]
rber|res has joined #yocto
florian_kc has joined #yocto
mrpelotazo has quit [Quit: Hasta la vista!]
mrpelotazo has joined #yocto
RP has joined #yocto
florian_kc has quit [Read error: Connection reset by peer]
<kanavin>
RP: I'll write a patch that ignores the files, just so we have it available if needed
florian has joined #yocto
<RP>
kanavin: thanks. There is also the question of the CDN issues. I think we're going to have to run the test twice: once to populate the caches and then a second time to verify them.
<RP>
kanavin: we think there is a bandwidth issue between the CDN nodes and the data centre. We're in the middle of planning a data centre move so resolving that is probably going to have to wait until after we've moved (May)
<kanavin>
CDN has been returning sporadic 5xx errors for bigger objects, I don't think that'll help
<kanavin>
?
<RP>
kanavin: it is only on the first fetch, then it should work
<RP>
kanavin: it is because we can't get the data to the CDN nodes fast enough and the first one times out
<marex>
halstead: could be isolated to Bavaria though
<marex>
halstead: I am getting reports of 36 KiB/s , I can also reproduce it at 73 KiB/s
<marex>
it goes into the AMS datacenter per traceroute
<marex>
I am starting to suspect it is this f-ed12-i.F.DE.NET.DTAG.DE
Saur has joined #yocto
<kanavin>
marex, I have the same issue, except the machine is in Baden-Württemberg. No issues in Berlin.
<rburton>
mcfrisk_: re "oeqa parselogs.py: load ignore files from sys.path", is there a problem with adding 'addpylib' to the layer that provides the tests? It _does_ have Python library code in it, as test cases. I'm actively trying to move oeqa away from all the crazy, and loading python code by hand is exactly the sort of thing i want to remove.
<rburton>
(i've an unsent patch to change the loader to use standard unittest discovery)
<mcfrisk_>
rburton: in my case there is no Python library code to load, just oeqa tests. I can add this for sure, but so far the only need was to load the extra files needed by the oeqa test, which were loaded differently.
<rburton>
well oeqa tests _are_ python code
<rburton>
so you will need the change in the future when i rip out the next bit :)
<mcfrisk_>
yes, but so far they have been loaded without addpylib. if you remove that, then they are python code and need it. I don't mind. debugging this was "interesting"
<rburton>
yeah, for sure
<rburton>
writing it was 'fun' too as py 3.7/3.8/3.9/3.10 all behave slightly differently
<mcfrisk_>
yes, saw that. the APIs have changed a lot for file loading too.
lexano has joined #yocto
<RP>
mcfrisk_: I've held off merging that patch as I think we do need to move over to addpylib being the way to say "use these python bits"
<mcfrisk_>
RP: I don't mind. would be nice to have the paths somehow visible in "bitbake -e" output if users need to investigate why files are not found.
<mcfrisk_>
would be nice to re-use upstream poky etc tests but override some details like ignore list resource files
<RP>
mcfrisk_: I'd be happy to see some way to reflect the python path in a variable
<RP>
mcfrisk_: wasn't some of rburton's patches related to allowing layer changes to config like that?
* RP
fully supports rburton wanting to simplify the oeqa mess
dsfgasdf has joined #yocto
<mcfrisk_>
I don't have strong opinions, just want things to work. currently addpylib details are not visible in bitbake -e output, making errors in test and test resource loading a pain to debug
rm5248 has joined #yocto
Daanct12 has quit [Quit: WeeChat 4.2.1]
<marex>
kanavin: I believe the problem is DT
<marex>
kanavin: telecom
<marex>
kanavin: mnet is fine from what I hear
florian_kc has quit [Ping timeout: 252 seconds]
Xagen has joined #yocto
Xagen has quit [Client Quit]
dsfgasdf has quit [Quit: Leaving]
joekale has joined #yocto
Ram-Z has quit [Ping timeout: 255 seconds]
Ram-Z has joined #yocto
<kanavin>
marex, yes, and I think I saw that on the Daimler intranet years ago too, which was connected via Deutsche Telekom as well, if memory serves
<kanavin>
not sure what to do about it
<kanavin>
they're overpriced and arrogant anyway :)
<kanavin>
every country seems to have one of those former government monopolists
florian_kc has joined #yocto
<RP>
mcfrisk_: that is relatively easy to fix
<marex>
kanavin: heh
xmn has joined #yocto
jclsn has quit [Quit: WeeChat 4.2.1]
jclsn has joined #yocto
leon-anavi has quit [Quit: Leaving]
Ram-Z has quit [Ping timeout: 264 seconds]
Ram-Z has joined #yocto
Chaser has joined #yocto
Xagen has joined #yocto
Xagen has quit [Client Quit]
Xagen has joined #yocto
alperak has joined #yocto
sev99 has joined #yocto
<Ad0>
home + end and norwegian letters worked fine in dunfell, but not in kirkstone
rber|res has quit [Remote host closed the connection]
Ram-Z has joined #yocto
rfuentess has quit [Remote host closed the connection]
rob_w has quit [Remote host closed the connection]
jmiehe has joined #yocto
goliath has quit [Quit: SIGSEGV]
Guest18 has joined #yocto
<Guest18>
Hi all
<Guest18>
I want to move the sshd@.service file from its default install path, which is /lib/systemd/system, to some other location, /home/root/testdir, to disable loading of the service. Can someone provide me an example of how to do that (maybe by using pkg_postinst_ontarget_${PN} or pkg_postinst_${PN}?)
luc4 has quit [Ping timeout: 255 seconds]
<rburton>
Guest18: if you want to disable a service set SYSTEMD_AUTO_ENABLE="disable" in the recipe
ptsneves has quit [Ping timeout: 252 seconds]
<Guest18>
As far as I know that will just stop the service from launching, but not stop systemd from loading it during boot
<rburton>
you can use a do_install:append() to just delete the service file
<Guest18>
Can you tell me how I can use do_install:append() to delete it, if you don't mind?
<Guest18>
i am really not sure how to do so
<rburton>
do_install:append() { rm -f ${D}${systemd_system_unitdir}/ssh*.service; } will delete all of the service files
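In recipe form, the same append looks roughly like this (a sketch; the bbappend file name is illustrative, and ${systemd_system_unitdir} normally expands to /lib/systemd/system):

```sh
# openssh_%.bbappend (illustrative name)
do_install:append() {
    # ${D} is the install staging directory; removing the units here
    # keeps them out of the packaged image entirely
    rm -f ${D}${systemd_system_unitdir}/ssh*.service
}
```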
jmiehe has quit [Quit: jmiehe]
amitk_ has joined #yocto
<Guest18>
ok, and how about copying the sshd@.service file from its default install path, /lib/systemd/system, to some other location, /home/root/testdir?
simonew has joined #yocto
<rburton>
why would you want them in some other directory?
<rburton>
trying to understand what you're _actually_ trying to do
<Guest18>
My boot time is getting affected by this, so I will have to launch that service manually later, maybe from another script once my firmware is up
sev99 has quit [Quit: Client closed]
<rburton>
leave the units where they are (as otherwise systemd won't find them), don't start them on boot, and use a timer or some other unit to start it instead
jmd has joined #yocto
<Guest18>
but having sshd@.service in that path is itself creating a problem for me, as systemd is loading it even though it is not getting launched
<Guest18>
Which is why I am thinking of copying these files to some other path and, once the device is booted, launching them from the firmware itself
<Xogium>
tbh I'd use something like socket activation for this
zpfvo has quit [Quit: Leaving.]
mckoan is now known as mckoan|away
<rburton>
systemd loading the file won't be taking any time
<Xogium>
it parses the units once during boot, that is what may take the most time
<Xogium>
but other than that
<Xogium>
this took at most one second on a relatively slow stm32mp157c
<rburton>
arguably if systemd parsing units is taking too much time, don't use systemd?
<Xogium>
that too
<rburton>
we have socket activation for sshd, the big pause is the 'generate host keys on first boot' bit
<Guest18>
I haven't been able to find out why it is taking longer for me, but when I removed sshd I noticed it boots within the expected time
<Xogium>
I mean, if you do like I did for an experiment and build a minimal systemd with as few services as possible, then rely on ubifs/ubi because of a 128 MB SPI flash, then yeah it will take time to boot ;) it will work, but it will be slow (64 MB of RAM)
<Xogium>
first boot took 2 minutes :D
<Xogium>
subsequent boots were down to 40 seconds, surprisingly enough. I expected way worse for spi, not even qspi
<Guest18>
rburton: yes, I understand the first boot, but it's the same story on every boot (I am storing the keys in a preserved path so it won't generate keys again)
<rburton>
i'd be digging out a system-wide profiler at this point
<rburton>
no point guessing what the problem is
<Xogium>
anyway yeah socket activation would do something a la xinetd and would launch a tiny unit that listens on port 22 or whatever port you want to use for ssh, then once something actually tries to connect to this port systemd would launch the actual sshd
<Xogium>
huh, was it xinetd or just inetd ? I always forget
<Xogium>
no need to move units with this method, and the actual sshd daemon will not launch at boot. Solves your problem in one go
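A minimal sketch of the socket-activation pair Xogium describes (unit contents are illustrative; oe-core's openssh already ships comparable sshd.socket/sshd@.service units):

```ini
# sshd.socket -- systemd listens on port 22 itself, so no sshd runs at boot
[Socket]
ListenStream=22
Accept=yes

[Install]
WantedBy=sockets.target

# sshd@.service -- one instance is spawned per incoming connection
[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket
```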
<Guest18>
Xogium How ?
<rburton>
we already do socket activation
<rburton>
_but_ the socket unit depends on the key generation
<rburton>
so yeah either 1) your key preservation isn't working or 2) this isn't the problem
<Guest18>
rburton 1) I have tried with diff and timestamp so verified preservation was fine
<Guest18>
2) If this is not the problem, then why does removing ssh reduce my boot time so drastically?
<Xogium>
rburton: hm, my bad, sorry! I didn't realize you already did
<Guest18>
Now my service is just loaded and not started, but boot is still taking too long
<rburton>
erm, why is that saying it's an init script? ssh has its own units
<Xogium>
rburton: I smell a fork from some vendor
<Guest18>
rburton: I am not sure why that is so, but I am able to see sshd@.service in /lib/systemd/system, which I believe is responsible for loading it?
<rburton>
i'd try adding DISTRO_FEATURES_BACKFILL_CONSIDERED += "sysvinit" to your local.conf and seeing if that changes things
tlhonmey has joined #yocto
<Guest18>
sure, thanks
<halstead>
marex: thanks, let's take another look.
<Guest18>
But I'd still request you to tell me if there is any way we can copy this file from /lib/systemd/system to testdir, as a last option
zkrx has quit [Ping timeout: 264 seconds]
<rburton>
well what we've discovered here is that the units are not being used anyway
<rburton>
you can use a do_install to copy the files, just like i showed you how to delete a file
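A sketch of the copy variant (paths taken from the question above; note that, as discussed, systemd will not find units moved out of /lib/systemd/system):

```sh
do_install:append() {
    # illustrative only: relocate the unit instead of deleting it
    install -d ${D}/home/root/testdir
    mv ${D}${systemd_system_unitdir}/sshd@.service ${D}/home/root/testdir/
}
```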
<Guest18>
ok, thanks
zkrx has joined #yocto
zkrx has quit []
florian has quit [Quit: Ex-Chat]
linfax has quit [Ping timeout: 240 seconds]
goliath has joined #yocto
florian_kc has quit [Ping timeout: 246 seconds]
<simonew>
marex: Do you also face issues with sstate fetching? That has also been abnormally slow for me since the weekend...
alperak has quit [Quit: Client closed]
goliath has quit [Quit: SIGSEGV]
zkrx has joined #yocto
vladest has quit [Quit: vladest]
<marex>
simonew: I have my own sstate locally, so I cannot tell
<marex>
simonew: also in DE and going to AMS via DTAG ?
<marex>
halstead: maybe there is just borked peering or DT does something to the traffic ?
pdlloyd has joined #yocto
<simonew>
marex: DE yes, DTAG no via kabel
<halstead>
marex: perhaps. It doesn't look like the host is overloaded at all.
vladest has joined #yocto
<pdlloyd>
Hi everyone. I am currently working on a Yocto-based project at my job that I inherited, and I'm finding it pretty difficult to keep organized -- specifically with features and packages that get added, then removed, then added again, etc. in different layers, especially by vendors whose layers we don't have control over. Are there any established
<pdlloyd>
ways of managing problematic upstream layers, or inspecting an equivalent "final" layer with all the overrides in one place?
Vonter has quit [Ping timeout: 272 seconds]
Vonter has joined #yocto
xmn has quit [Read error: Connection reset by peer]
Chaser has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<Saur>
zeddii: I see you have an update to 1.1.11 staged for runc-opencontainers in meta-virtualization. Do you think 1.1.12 will make it in as well in time for Scarthgap? It solves CVE-2024-21626, which is announced as a high-severity vulnerability. I can send a patch, but since you already have 1.1.11 staged I thought I'd ask first.
<zeddii>
I'm updating them all right now, so I'll pick it up in the next couple of days
<Saur>
zeddii: Thank you, that sounds good.
kpo has joined #yocto
<pdlloyd>
marex, vmeson: I never knew about `bitbake-layers flatten`. This is something I can use to merge vendor changes into our company layers so we don't have to keep appending to their garbage
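For reference, a sketch of the invocation (run from an initialized build environment; the output directory name is illustrative):

```sh
# writes a single merged layer, with all bbappends applied, into ./flattened-layer
bitbake-layers flatten flattened-layer
```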
<pdlloyd>
In a related vein, I also want to encourage the company to more aggressively try to push changes upstream so we don't have to maintain mostly parallel forks of certain recipes forever.