<RP>
khem: "linux-yocto: Enable l2tp drivers when ptest featuee is on" breaks efibootpartition.GenericEFITest.test_boot_efi in oe-selftest :(
zpfvo has quit [Ping timeout: 265 seconds]
zpfvo has joined #yocto
<mcfrisk_>
are there some tricks that could make selftest execution faster, e.g. reusing the download and sstate caches better? I feel like I'm downloading and building too much all the time
rfuentess has quit [Remote host closed the connection]
rob_w has joined #yocto
rfuentess has joined #yocto
tnovotny has joined #yocto
florian has joined #yocto
MrTatillon has joined #yocto
frieder has quit [Ping timeout: 248 seconds]
<mcfrisk_>
setting SSTATE_DIR and DL_DIR in the base local.conf before running poky/scripts/oe-selftest seems to produce a lot of cache misses. Even if I just compiled the same selftest config manually, then removed the config bits and ran the same build via selftest, everything seems to be recompiled as if there were nothing in the sstate cache.
<RP>
mcfrisk_: are you sharing a hashequivalence server to the selftest build?
<RP>
mcfrisk_: out of the box they will each start their own. I've been meaning to try to teach selftest how to improve that
aduskett has joined #yocto
frieder has joined #yocto
sa7mfo has joined #yocto
<sa7mfo>
Hello, what is the best way to disable getty? I still want the service to be installed, just not enabled by default
<mcfrisk_>
RP: no I'm not. I'll have a look at how to configure this
<RP>
mcfrisk_: I start a system wide local hash server and then point to that in all my builds (and selftest)
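A minimal sketch of that setup (bitbake-hashserv and BB_HASHSERVE are the real tool/variable names; the bind address and cache paths are only examples):

    # start a persistent hash equivalence server once, system-wide
    bitbake-hashserv --bind 127.0.0.1:8686 &

    # then in each build's local.conf, including the one selftest starts from:
    BB_HASHSERVE = "127.0.0.1:8686"
    SSTATE_DIR = "/srv/yocto/sstate-cache"
    DL_DIR = "/srv/yocto/downloads"

With every build reporting to the same server, they agree on equivalent output hashes, so the shared sstate cache actually gets hit.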
<sa7mfo>
Creating a systemd-serialgetty.bbappend with a do_install:append that removes the link is the best I can come up with, but is there any better way?
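A minimal sketch of that bbappend (assuming the enablement symlink lives under ${sysconfdir}/systemd/system/getty.target.wants, as in oe-core's systemd-serialgetty recipe):

    # systemd-serialgetty_%.bbappend
    do_install:append() {
        # keep serial-getty@.service installed, but drop the symlink that enables it
        rm -f ${D}${sysconfdir}/systemd/system/getty.target.wants/serial-getty@*.service
    }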
prabhakalad has quit [Quit: Konversation terminated!]
prabhakalad has joined #yocto
bhstalel has joined #yocto
<bhstalel>
Hello, when was recipe-sysroot[-native] introduced in Yocto, or has it been there since the first release?
<bhstalel>
RP: a client is stuck at 2.0 (Jethro), for example, and I want to propose an upgrade to dunfell, kirkstone, etc. The risk for the client is that system recipes will upgrade too and the system may not behave the same way. How would you approach this situation?
<RP>
bhstalel: I'd explain the security implications of using something that old and that there were significant advantages to upgrading
vthor_ has quit [Excess Flood]
<bhstalel>
RP: Exactly, I did the same, but I am thinking of a way to show that the upgrade is technically feasible, by, for example, getting the full list of old packages and the new list with the new versions, and testing on the system that nothing will break
<RP>
bhstalel: A proof of concept could help if you can afford to do that work
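One standard mechanism that helps build such a proof of concept is buildhistory, which records per-image package lists and versions so the old and new builds can be diffed:

    # local.conf, on both the old and the upgraded build
    INHERIT += "buildhistory"
    BUILDHISTORY_COMMIT = "1"

    # then compare the two build histories
    buildhistory-diff

INHERIT, BUILDHISTORY_COMMIT and the buildhistory-diff script are all part of oe-core/poky; whether they behave identically as far back as Jethro should be verified.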
<bhstalel>
RP: I am thinking of proposing an internship for that idea; an intern can do the job (basically learning Yocto the hard way in this situation hh)
<mckoan>
bhstalel: that's too difficult for an intern
vthor_ has joined #yocto
<RP>
there is a lot for an intern to learn there
<bhstalel>
mckoan: for sure it's difficult, but in Tunisia I will gift the community 50+ hours of free Yocto training, which will help the intern learn Yocto, and then I will provide support along the way
aduskett has quit [Read error: Connection reset by peer]
<bhstalel>
A lot to learn
aduskett has joined #yocto
aduskett has quit [Remote host closed the connection]
<bhstalel>
I am thinking of giving a new "Back to basics" talk again this year, maybe "Back to Basics | Yocto toolchain" or "Back to basics | BitBake Fetcher"
aduskett has joined #yocto
frieder has quit [Ping timeout: 246 seconds]
fabatera has joined #yocto
pbiel has joined #yocto
<pbiel>
hi
<pbiel>
I would like to create a recipe that downloads a tar.gz archive, then unpacks it and installs files into the rootfs. The archive consists of several lua scripts and prebuilt shared libraries. Is there a way that bitbake automatically detects what should be installed where, or should I manually list all the required files in do_install?
<RP>
pbiel: that depends on the layout of your archive I guess
<RP>
pbiel: you'd probably have to give bitbake some info about where to put the files unless the archive matches the target
<RP>
bitbake is good but not psychic :)
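A minimal sketch of such a recipe (all names, paths and the archive layout here are hypothetical; adjust to the real archive):

    SUMMARY = "Prebuilt lua scripts and shared libraries"
    LICENSE = "CLOSED"

    SRC_URI = "https://example.com/myapp-${PV}.tar.gz"
    SRC_URI[sha256sum] = "<checksum of the archive>"

    S = "${WORKDIR}/myapp-${PV}"

    # prebuilt binaries trip QA checks meant for locally compiled objects
    INSANE_SKIP:${PN} += "already-stripped ldflags"

    do_install() {
        install -d ${D}${datadir}/myapp ${D}${libdir}
        cp -r ${S}/scripts/. ${D}${datadir}/myapp/
        # cp -a keeps the usual libfoo.so.1 -> libfoo.so.1.2.3 symlink chains
        cp -a ${S}/lib/*.so* ${D}${libdir}/
    }

    FILES:${PN} += "${datadir}/myapp"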
<bhstalel>
RP: your opinion on the "Back to basics" idea?
<RP>
bhstalel: I'm not the target audience so hard for me to say!
<bhstalel>
RP: usually, back to basics videos gain a lot of attention, and remove the "Yocto is complicated" idea for beginners
<RP>
bhstalel: that is always nice to do. I just don't know how many beginners need info about the toolchain for example
<bhstalel>
RP: I mean the idea is to present the compilation process, what a toolchain is, how BitBake determines the type of compilation, sets up build, host and target variables, how it creates the toolchain, and finally how you can use that toolchain (extract the SDK), ...
<bhstalel>
While typing all of this, I realised that yes, this can be quite difficult for beginners.
<RP>
bhstalel: it can. I think part of the challenge is that "toolchain" means different things to different people
* RP
curses the ssh test. Probably best jonmason is asleep
<mcfrisk_>
entropy problem, or are the ssh keys static...
Ad0 has quit [Ping timeout: 245 seconds]
<RP>
mcfrisk_: maybe. I thought we used pregen host keys
<bhstalel>
I am confused, but I know there is an automatic runtime deps check; by any chance, is there a way for BitBake to know what compile-time recipes to use automatically (adding to DEPENDS automatically)? I don't think so
<RP>
no, it doesn't do that
<Saur_Home84>
bitbake being psychic would have helped...
<RP>
Saur_Home84: I could have sworn we had a bug opened for that at one point!
ThomasRoos has joined #yocto
<ThomasRoos>
Hi, what is the easiest way to build a 4.19 kernel in scarthgap? Strategies?
<RP>
ThomasRoos: what is the issue you're running into? I'd find a 4.19 recipe and see what happened...
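A sketch of the usual approach (the recipe name depends on which recipe you carry over; 4.19 is long gone from oe-core, so the recipe has to live in your own layer):

    # local.conf or the machine conf
    PREFERRED_PROVIDER_virtual/kernel = "linux-yocto"
    PREFERRED_VERSION_linux-yocto = "4.19%"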
<mcfrisk_>
RP: pregen keys used, some other load issue then
fabatera has quit [Quit: Client closed]
testtttt has joined #yocto
mckoan is now known as mckoan|away
Ad0 has joined #yocto
testtttt has quit [Quit: Client closed]
goliath has joined #yocto
jmd has joined #yocto
bhstalel has quit [Quit: Client closed]
davidinux has quit [Quit: WeeChat 4.1.1]
davidinux has joined #yocto
florian has quit [Quit: Ex-Chat]
frieder has joined #yocto
MrTatillon has quit [Quit: Client closed]
frieder has quit [Remote host closed the connection]
<ThomasRoos>
RP yes, compiling after pointing to the latest version in the branch works. But it is not booting with systemd... guess there are config options missing.
<ThomasRoos>
took recipe from zeus
ray-san has quit [Ping timeout: 276 seconds]
ray-san has joined #yocto
<RP>
ThomasRoos: systemd tends to rely on the latest and greatest from the kernel too so it may actually be missing functionality
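For the systemd side, a config fragment with the options systemd's README lists as required is a reasonable first check (a sketch; verify the list against the systemd version in scarthgap):

    # systemd.cfg, added to the kernel recipe via SRC_URI += "file://systemd.cfg"
    CONFIG_DEVTMPFS=y
    CONFIG_CGROUPS=y
    CONFIG_INOTIFY_USER=y
    CONFIG_SIGNALFD=y
    CONFIG_TIMERFD=y
    CONFIG_EPOLL=y
    CONFIG_UNIX=y
    CONFIG_SYSFS=y
    CONFIG_PROC_FS=y
    CONFIG_FHANDLE=y

Config fragments merge automatically for linux-yocto style recipes; a recipe copied from zeus may need these merged into its defconfig instead.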
Chaser has quit [Quit: My Unrecognized Mac has gone to sleep. ZZZzzz…]
Chaser has joined #yocto
<mathieudb>
RP: OK, I have updated the sqlite ticket, I will create a new one for the pyc FileExistsError one
<RP>
mathieudb: thanks
Xagen has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
Chaser has quit [Quit: My Unrecognized Mac has gone to sleep. ZZZzzz…]
bhstalel has joined #yocto
luc4 has joined #yocto
Daanct12 has quit [Quit: WeeChat 4.4.2]
MrTatillon has joined #yocto
<luc4>
Hello! I'm in a weird situation: I built a new image using scarthgap. The build process is now at the end, but apparently bitbake fails saying that do_rootfs failed. The log file, however, does not seem to report any error at all. At the end, I see "DEBUG: Python function do_rootfs finished", after a long sequence of "downloading/installing". Any idea what I could do to investigate the problem? I also tried to add -v and -D, but
<luc4>
I can't see an error log.
<luc4>
The log reports that the command "..opkg --volatile-cache -f..." is returning 255, but the remaining portion of the log does not seem to show a specific error.
<mcfrisk_>
luc4: check the task log for details, in tmp/work/*/$IMAGE/1.0/temp/log.do_image*. bitbake output does not contain all of the details
<luc4>
mcfrisk_: yes, that is what I read
<mcfrisk_>
for example when rootfs is generated using opkg, the error messages are in the do_rootfs log file there. and the output is a bit cryptic, for example when two packages try to install the same files/paths
<mcfrisk_>
or a post-install script fails. the output may even be missing completely. adding "set -x" to the post-install script in the recipe helps
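A sketch of what that looks like in a recipe (the body is hypothetical; the point is the set -x at the top):

    pkg_postinst:${PN}() {
        set -x   # trace each command into the do_rootfs log
        # ... actual post-install steps ...
    }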
Chaser has joined #yocto
<luc4>
mcfrisk_: I really cannot see an error here. However, I ran out of disk space during the build. I therefore freed some space and ran the procedure again. Maybe something broke during that interruption? I tried cleansstate, but nothing changed.
aduskett has quit [Read error: Connection reset by peer]
<mcfrisk_>
luc4: bitbake -c clean $IMAGE, there can be stale data
<mcfrisk_>
do_rootfs task uses the selected package manager to create the rootfs. the package manager's error output is cryptic at times, but the information about what went wrong and with which binary package is there in the log
Wouter01002 has joined #yocto
florian has joined #yocto
<luc4>
mcfrisk_: weird, just before the error, I see this line "+ exit 0", then "do_rootfs: Python function do_rootfs finished". Which sounds good. But next, the error log is presented, saying "do_rootfs) failed with exit code '1'". cleansstate and clean do not seem to help.
<rburton>
RP: seriously i'm so close to sending a patch to opkg to let it have a control tree out of the source directory
Chaser has quit [Quit: My Unrecognized Mac has gone to sleep. ZZZzzz…]
<rburton>
ooooh i know what i did wrong, damnit
<rburton>
silly me
<mcfrisk_>
luc4: you can wipe the tmp directory and try again, but I'd try to understand what went wrong. you can try force running the rootfs task, bitbake -f -c do_rootfs $IMAGE. but if clean did not help, then there is a lot of corrupt data somewhere in tmp. I hope not in sstate
<rburton>
RP: were all the fails from insane that you looked at also due to DEBIAN/
<rburton>
because yes, logic error, and i never tested dpkg locally
Minvera has joined #yocto
Xagen has joined #yocto
aduskett has joined #yocto
<luc4>
mcfrisk_: that command seems to give the same result. The failing command seems to be opkg: https://pastebin.com/3PfZQ3Pq. The last line is probably this "Configuring kernel-v7-module-wishbone-serial-6.6.22-v7". Maybe I should clean the kernel?
Saur_Home84 has quit [Quit: Client closed]
Saur_Home84 has joined #yocto
ak77 has quit [Read error: Connection reset by peer]
<mcfrisk_>
luc4: do_rootfs error message is not clear, look for errors/warnings earlier in the task log
<mcfrisk_>
unable to install package(s), basically which package and why? are binary packages trying to overwrite files from other packages, or are post-install tasks failing
ak77 has joined #yocto
<luc4>
mcfrisk_: I really cannot see anything like that. Well, I guess I'll have to try to rebuild everything :-( thanks for your help!
wojci has quit [Ping timeout: 252 seconds]
<RP>
rburton: I didn't look in detail
<RP>
rburton: I saw do_package_qa and thought "ross" ;-)
<rburton>
to be honest, that's fair
<mcfrisk_>
luc4: a full disk can cause annoying errors, hope your sstate is still ok and only the tmp build directory was affected
<mcfrisk_>
RP: sigh, yes. I ran the test on genericarm64 and it passed. I need to try qemuarm64 too. will need to wait for x86_64 wic and uki selftests to complete first, they are still slow :/
<mcfrisk_>
is it better to just drop the tests? they cause too many issues
roussinm has joined #yocto
ray-san has quit [Ping timeout: 252 seconds]
<roussinm>
We are using clang-tidy from the SDK, built from meta-clang, and we have to add a bunch of -extra-arg options that point to the SDK path. Does anyone else use clang-tidy from yocto to check their application code? Those extra-args look like this: `-extra-arg=--sysroot=${TARGETSDK_DIR}` `-extra-arg="-I${TARGETSDK_DIR}/usr/include/"`
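One way to cut down the repeated flags (a sketch, assuming a CMake project and the standard SDK environment script, which exports CC, CXX and SDKTARGETSYSROOT): generate a compile_commands.json with the SDK compiler and point clang-tidy at it, so most include paths come from the compilation database:

    . /opt/poky/5.0/environment-setup-*   # hypothetical SDK install path
    cmake -S . -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
    clang-tidy -p build src/main.cpp \
        -extra-arg=--sysroot=${SDKTARGETSYSROOT}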
ThomasRoos has joined #yocto
ThomasRoos has quit [Client Quit]
xmn has joined #yocto
<mcfrisk_>
RP: if that was the only failure from uki, can you revert the failing test? I don't think the failure is from the uki or wic changes, it's just that the test has some dependency in the genericarm64 machine config which is not in the qemuarm64 one. I will fix this ASAP. so sorry for breaking tests all the time
Chaser has joined #yocto
<khem>
RP: that seems a kernel issue isn't it ?
rfuentess has quit [Remote host closed the connection]
luc4 has quit [Ping timeout: 252 seconds]
druppy has joined #yocto
<mcfrisk_>
khem: I added that test for systemd-boot and its config generated by wic, something is wrong in the boot setup. it works on genericarm64 but fails on qemuarm64. could be u-boot efi firmware related
hexbrex has joined #yocto
MrTatillon has quit [Quit: Client closed]
Saur_Home98 has joined #yocto
goliath has quit [Quit: SIGSEGV]
<RP>
khem: I think I got to the bottom of the link I sent earlier, it was a space issue in the image
Chaser has quit [Quit: My Unrecognized Mac has gone to sleep. ZZZzzz…]
Saur_Home84 has quit [Ping timeout: 256 seconds]
MrTatillon has joined #yocto
<khem>
RP: yeah that makes sense
<RP>
mcfrisk_: you mean drop the tests part of the series?
<mcfrisk_>
yes, or just the failing test. I guess the x86 variant and the wic and uki tests are passing
<RP>
mcfrisk_: that would imply marking the test as x86 specific :/
<khem>
RP: I was wondering if I should throw glibc git into one of AB builds
<RP>
khem: could do
<khem>
yeah
<mcfrisk_>
RP: the tests currently are x86 only, no aarch64 atm. my patch adds one aarch64 compatible test
<mcfrisk_>
and that one is working on genericarm64 but failing on qemuarm64. I'll sort it out. you can drop the whole series or disable the test. I'll either send a new revision or a fix for the test.
jmiehe has joined #yocto
MrTatillon has quit [Ping timeout: 256 seconds]
<RP>
mcfrisk_: thanks, I've just dropped it again for now (as well as most of rburton's patches, it isn't just yours!)
<mcfrisk_>
RP: no problem, really sorry for the breakage. trying to get to the same test matrix as your CI
vthor_ has quit [Quit: kill -9 $pid]
<RP>
mcfrisk_: it happens, sometimes things are just awkward. The new CI pieces on our side are speeding things up which helps
Chaser has joined #yocto
vthor has joined #yocto
<moto-timo>
Has anyone ever used wic/bmaptool to install on more than one drive?
<moto-timo>
I would think it's theoretically possible, but likely not a tested use-case for either tool.
zpfvo has quit [Remote host closed the connection]
aduskett has quit [Ping timeout: 252 seconds]
jmiehe has quit [Quit: jmiehe]
<rburton>
moto-timo: i think wic is limited to a single output file
<moto-timo>
rburton: good point
<moto-timo>
rburton: I did have a wic image with /dev/sda and /dev/sdb, but I don't think it writes the second disk's partition table. And it tried to install to /dev/sdb8 (it was the 8th partition in the wks file)
<moto-timo>
better to create two images and investigate bmaptool installer changes to handle both.
<moto-timo>
(installer shell script would run twice is my hunch)
<rburton>
if you're talking about two actual drives then yeah run the copy twice
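A sketch of that installer step (image and device names are hypothetical):

    # one wic image per physical drive, flashed back to back
    bmaptool copy disk-a.wic.gz --bmap disk-a.wic.bmap /dev/sda
    bmaptool copy disk-b.wic.gz --bmap disk-b.wic.bmap /dev/sdb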
leon-anavi has quit [Quit: Leaving]
florian has quit [Ping timeout: 276 seconds]
tnovotny has quit [Quit: Leaving]
Chaser has quit [Quit: My Unrecognized Mac has gone to sleep. ZZZzzz…]