ablu has quit [Read error: Connection reset by peer]
ablu has joined #yocto
lthadeus has joined #yocto
DaveNuge has joined #yocto
Vonter has quit [Ping timeout: 268 seconds]
Vonter has joined #yocto
davenuge1 has joined #yocto
DaveNuge has quit [Ping timeout: 268 seconds]
amitk has joined #yocto
wooosaiiii1 has joined #yocto
wooosaiiii has quit [Ping timeout: 268 seconds]
wooosaiiii1 is now known as wooosaiiii
DaveNuge has joined #yocto
davenuge1 has quit [Ping timeout: 268 seconds]
davidinux has quit [Ping timeout: 260 seconds]
davidinux has joined #yocto
jmd has joined #yocto
jmd has quit [Remote host closed the connection]
DaveNuge has quit [Ping timeout: 245 seconds]
alessioigor has joined #yocto
lthadeus has quit [Ping timeout: 240 seconds]
ray-san has quit [Remote host closed the connection]
ray-san has joined #yocto
lthadeus has joined #yocto
jmd has joined #yocto
ray-san has quit [Ping timeout: 256 seconds]
manuel_ has quit [Ping timeout: 240 seconds]
jmiehe has joined #yocto
ray-san has joined #yocto
rob_w has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
mckoan|away is now known as mckoan
Dartmoness has joined #yocto
Dartmoness has quit [Remote host closed the connection]
goliath has joined #yocto
alperak has joined #yocto
manuel_ has joined #yocto
vladest has quit [Ping timeout: 260 seconds]
jmd has quit [Remote host closed the connection]
Kubu_work has joined #yocto
jmd has joined #yocto
jmd has quit [Remote host closed the connection]
rfuentess has joined #yocto
frieder has joined #yocto
leon-anavi has joined #yocto
zpfvo has joined #yocto
<LetoThe2nd>
yo dudX
* alessioigor
waves all
<landgraf>
(^_^)/
<mckoan>
good morning
vladest has joined #yocto
belsirk has joined #yocto
rfuentess has quit [Ping timeout: 264 seconds]
Guest97 has joined #yocto
Guest97 is now known as Rich_1234
lthadeus has quit [Ping timeout: 256 seconds]
barath has joined #yocto
egueli-AV has quit [Read error: Connection reset by peer]
prabhakarlad has joined #yocto
<yocton>
Hello :)
barath has quit [Ping timeout: 250 seconds]
barath has joined #yocto
<barath>
hey all! has anyone experienced issues around zfs, sync and poky's copyfile usage? we build a lot of images at the same time via multiconfig, and *sometimes* we get failed builds due to "truncated" ipk files, either when they're parsed in the oe-rootfs-repo or when parsing the ipks in the opkg volatile cache. In both cases the files look empty to the python / opkg code at the time they're read, but when I inspect them afterwards they look fine. enabling zfs sync (which forces synchronous writes) seems to prevent this from happening...
linfax has joined #yocto
<barath>
symptoms/errors are things like "Failed to read archive header: Truncated input file (needed 16384 bytes, only 0 available) (errno=-1)" or "Subprocess output: extracting control.tar.gz from <path to ipk in oe-rootfs-repo> failed!"
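A minimal sketch of the kind of mitigation hinted at above, assuming the problem is a copied ipk becoming visible to readers before its data is durably written; the helper below is hypothetical, not the actual poky copyfile or opkg code, just the write/fsync/rename-into-place pattern in Python:

    import os
    import shutil

    def copy_then_publish(src: str, dst: str) -> None:
        # Copy to a temporary name, force the data to stable storage,
        # then rename into place so readers never see a half-written file.
        tmp = dst + ".tmp"
        with open(src, "rb") as fsrc, open(tmp, "wb") as fdst:
            shutil.copyfileobj(fsrc, fdst)
            fdst.flush()
            os.fsync(fdst.fileno())
        os.rename(tmp, dst)  # atomic on POSIX filesystems
        # fsync the containing directory so the rename itself is durable
        dirfd = os.open(os.path.dirname(dst) or ".", os.O_DIRECTORY)
        try:
            os.fsync(dirfd)
        finally:
            os.close(dirfd)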
zpfvo has quit [Ping timeout: 252 seconds]
vladest has quit [Quit: vladest]
radanter has joined #yocto
barath86 has joined #yocto
barath has quit [Quit: Client closed]
prabhakarlad has quit [Ping timeout: 250 seconds]
prabhakar has quit [Ping timeout: 255 seconds]
paulg has quit [Remote host closed the connection]
zpfvo has joined #yocto
lthadeus has joined #yocto
florian has joined #yocto
xmn has quit [Quit: ZZZzzz…]
vladest has joined #yocto
mvlad has joined #yocto
barath has joined #yocto
DaveNuge has joined #yocto
schtobia has quit [Quit: Bye!]
schtobia has joined #yocto
barath86 has quit [Ping timeout: 250 seconds]
Daanct12 has joined #yocto
mvlad has quit [Quit: Leaving]
mvlad has joined #yocto
DaveNuge has quit [Ping timeout: 276 seconds]
<rburton>
barath: that's weird. the ipk files are written by a separate process (opkg-build) so it's not like there could be a left-open file that hasn't flushed.
paulg has joined #yocto
PaulB has joined #yocto
PaulB has quit [Client Quit]
PaulBuxton has joined #yocto
mbulut has joined #yocto
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<PaulBuxton>
Hi, all. Not sure if this is the right place to ask: I am trying to build https://github.com/beagleboard/xuantie-yocto for my BeagleV-Ahead. I can build fine on my Ubuntu 20 installation using WSL, but I have a new system with Ubuntu 23.04 and it is failing to build. Four recipes fail: llvm-native, rust-llvm-native, harfbuzz and apt.
<PaulBuxton>
The errors look like the sort of thing you get when compiling with a newer compiler than the code was developed with. Am I right in my belief that OpenEmbedded/Yocto includes its own compilers? One strange behaviour: I ran devtool modify harfbuzz with the intention of fixing the issues, and when I run bitbake harfbuzz it builds fine, but running the bitbake command for the whole tree fails again.
<PaulBuxton>
Am I missing something obvious?
vladest has quit [Ping timeout: 255 seconds]
lthadeus has quit [Remote host closed the connection]
<rburton>
PaulBuxton: we don't build a native compiler, the host gcc is used to build native tools. were the harfbuzz and apt recipes native too?
<PaulBuxton>
They are in the devtool recipes, so I guess so!
<rburton>
not sure what you mean by 'devtool recipes'
<rburton>
what's the eg harfbuzz error log?
leonanavi has joined #yocto
bq has quit [Ping timeout: 264 seconds]
leon-anavi has quit [Ping timeout: 256 seconds]
<PaulBuxton>
Sorry, the paths to the rust and llvm recipes have devtools in them e.g. /home/paul-buxton/beagle/xuantie-yocto/openembedded-core/meta/recipes-devtools/rust/rust-llvm_1.63.0.bb which is what I meant by them being devtools recipes.
<PaulBuxton>
The harfbuzz error is :-
<PaulBuxton>
../harfbuzz-5.1.0/test/threads/hb-subset-threads.cc: In function ‘void test_operation(operation_t, const char*, const test_input_t&)’:
<PaulBuxton>
| ../harfbuzz-5.1.0/test/threads/hb-subset-threads.cc:127:3: error: ‘printf’ was not declared in this scope
<PaulBuxton>
| ../harfbuzz-5.1.0/test/threads/hb-subset-threads.cc:12:1: note: ‘printf’ is defined in header ‘<cstdio>’; did you forget to ‘#include <cstdio>’?
sgw has quit [Ping timeout: 246 seconds]
sgw has joined #yocto
<rburton>
so, for reasons, that xuantie repo uses the langdale release
<rburton>
which has been EOL since may and hasn't been updated to work with latest gcc releases
<rburton>
oh god, it even embeds its own copy of oe-core, so even if langdale was fixed it wouldn't pick it up
<rburton>
that's horrible
<rburton>
if you want to build that then you'll need to build in a container with older gcc releases in it. search Docker Hub for 'crops' and you'll find some containers which are ideal for that sort of thing. alternatively...
<rburton>
moan at them :)
<PaulBuxton>
Ah. I will ping the beagle folks to see if there are plans to move. I suspect they are kind of stuck as they get the yocto base image from the SoC manufacturer. I can use docker easily enough I guess.
<rburton>
the SoC bsp shouldn't provide a copy of oe-core but refer to a branch
bq has joined #yocto
<barath>
rburton: hm, maybe not... I'm not sure when those are flushed though. It looks like opkg makes symlinks in the volatile directory if it can, and those resolve just fine when I check after the build, but during the build opkg complains that the files are empty. So if the original file hasn't been flushed/synced yet, that might mean opkg itself isn't to blame.
<rburton>
yeah i'm not blaming opkg at all here, the files are written by a subprocess so there's no way it can be at fault.
<rburton>
(afaict)
<jkridner>
gotcha. yikes. yeah, it is just what the soc guys gave us. we'll see if we can get them to update or make our own. right now, our focus has been on mainline kernel support.
<rburton>
i've seen enough terrible bsps that i kind of think they should be treated as inspiration :)
prabhakar has joined #yocto
prabhakarlad has joined #yocto
<barath>
mhm. I assume that there are enough people out there building loads of images at once that if I hit this issue of "empty" ipk files and can't find anyone else who does, then it's probably an issue with my server setup...
<rburton>
kanavin: what's the describe key for in setup-layers?
<kanavin>
rburton, output of 'git describe'
<kanavin>
so you get an idea of where the layer's at (tag directly, or tag+N commits)
<kanavin>
I hate having to manually look up raw commit ids
<rburton>
but i can put anything in there and it's not validated, right?
<kanavin>
you can, yes
<rburton>
cool
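For reference, a minimal sketch of how a setup tool could populate such a purely informational 'describe' field next to the pinned revision of each layer repository; the function and field names are assumptions for illustration, not the actual scripts/oe-setup-layers code:

    import json
    import subprocess

    def snapshot_repo(path):
        def git(*args):
            return subprocess.check_output(("git", "-C", path) + args, text=True).strip()
        return {
            "rev": git("rev-parse", "HEAD"),
            # 'git describe' yields "tag" or "tag-N-gHASH", which is easier to
            # read than a raw commit id; it is informational only and is not
            # validated when the layers are restored.
            "describe": git("describe", "--tags", "--always"),
        }

    print(json.dumps({"meta-example": snapshot_repo("/path/to/meta-example")}, indent=2))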
PaulBuxton has quit [Quit: Client closed]
starblue has quit [Ping timeout: 276 seconds]
starblue has joined #yocto
<rburton>
kanavin: i'm trying to grab meta-bitbake-hashserver but it appears the setup-layers --destdir value needs to be hardcoded in the bblayers template, or am I missing something (or is josh doing something wrong)?
<kanavin>
rburton, I am not seeing anything wrong in the template
<rburton>
i used destdir=. because i didn't want setup-layers stamping over my existing clones
<rburton>
and now the paths are wrong
<kanavin>
are the layers checked out the way they're laid out in bblayers?
neofutur has joined #yocto
<rburton>
no, the meta-bitbake-hashserver path is wrong
<rburton>
that's the top-level repo; the other repos go underneath it
belsirk has quit [Remote host closed the connection]
<kanavin>
rburton, then destdir isn't going to work, as the rest of the layers need to be correctly placed relative to meta-bitbake-hashserver
florian_kc has joined #yocto
<kanavin>
setup-layers doesn't check that out a second time
<rburton>
so i basically need another directory above this layer to contain the layers it clones
<kanavin>
not necessarily
<kanavin>
setup-layers has a switch, --force-bootstraplayer-checkout
<rburton>
yeah that's worse :)
vladest has joined #yocto
barath has quit [Ping timeout: 250 seconds]
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<kanavin>
rburton, yeah so treat setup-layers driven setups as 'projects' needing their own top level directory. but if you can think of improvements, I'm all ears.
barath has joined #yocto
prabhakarlad has quit [Ping timeout: 250 seconds]
prabhakar has quit [Ping timeout: 268 seconds]
<rburton>
i like how kas will auto-write a bblayers.conf as it knows what layers were cloned and where they were put
<kanavin>
it also auto-writes a local.conf, which is perhaps less welcome. Doing one but not the other would be confusing.
alessioigor has quit [Quit: alessioigor]
alessioigor has joined #yocto
<kanavin>
setup-layers sets up the layers, but doesn't try to set up a build (this was explicitly agreed on)
huseyinkozan has joined #yocto
barath has quit [Ping timeout: 250 seconds]
prabhakar has joined #yocto
prabhakarlad has joined #yocto
barath has joined #yocto
alperak has quit [Quit: Client closed]
alperak has joined #yocto
neofutur has quit [Ping timeout: 256 seconds]
neofutur has joined #yocto
amsobr has joined #yocto
lexano has quit [Ping timeout: 276 seconds]
joekale has joined #yocto
barath has quit [Ping timeout: 250 seconds]
lexano has joined #yocto
frieder has quit [Ping timeout: 252 seconds]
<LetoThe2nd>
jkridner: sorry missed the backlog, what is falling over?
frieder has joined #yocto
Daanct12 has quit [Quit: WeeChat 4.1.2]
frieder has quit [Ping timeout: 264 seconds]
<jkridner>
seems the big issue is just that xuantie-yocto is on an EoL release.
<LetoThe2nd>
jkridner: ah the beaglev-fire? yeah i've dabbled a bit with it and let's say, "it is in questionable shape"
<jkridner>
beaglev-ahead.
frieder has joined #yocto
<LetoThe2nd>
jkridner: ah yeah. sorry my bad. yes, beaglev-ahead.
<jkridner>
i haven’t tried yocto for fire yet, but with icicle, i expect it is in better shape.
<jkridner>
whatever is there for icicle should move to fire with minimal dt patches, etc.
xmn has joined #yocto
jmiehe has quit [Quit: jmiehe]
linfax has quit [Ping timeout: 256 seconds]
barath has joined #yocto
HaRRo has joined #yocto
HaRRo has left #yocto [#yocto]
<JPEW>
kanavin, rburton: Don't use --force-bootstraplayer-checkout in meta-bitbake-hashserver. It will mess things up, because it's annoying to make sure the layer that contains the setup-layers.json is itself up to date (and not an invalid commit!), so I've just been not updating it.
<Saur>
RP: I never wrote a real selftest for it, no. We have been using the solution in my patch ever since so I sort of dropped that issue.
simone13 has joined #yocto
speeder has joined #yocto
simone13 has quit [Client Quit]
simone64 has joined #yocto
<Saur>
RP: Btw, is there any particular reason that the tests run by do_recipes_qa() are hardcoded rather than using a solution similar to do_package_qa()? Otherwise I am planning on rewriting it so that tests can be added more easily.
speeder_ has joined #yocto
<jstephan>
RP: just talked to pidge about the race condition with the base-files bbappend in the oe-selftest tests
<jstephan>
it is still unclear to me why that test is causing it
<jstephan>
and thus I am unsure how to fix it :(
speeder has quit [Ping timeout: 256 seconds]
<rburton>
Saur: do it. i refactored the invocation code that package_qa uses a while ago so it should be relatively easy to add with a bit more refactoring.
<rburton>
Saur: huh actually there's already QARECIPETEST so that should just be moved into the recipe qa task
<Saur>
rburton: Yeah, I noticed...
<RP>
Saur: no reason that I recall
<Saur>
Good, then I'll proceed.
<RP>
Saur: kanavin was the original author
<Saur>
ok.
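For illustration, a minimal sketch of the name-driven dispatch pattern that do_package_qa already uses and that the recipe QA task could adopt; every name below is hypothetical, the real code lives in oe-core's insane.bbclass:

    def qa_check_missing_summary(d):
        if not (d.getVar("SUMMARY") or "").strip():
            return "Recipe has no SUMMARY set"

    def qa_check_missing_homepage(d):
        if not (d.getVar("HOMEPAGE") or "").strip():
            return "Recipe has no HOMEPAGE set"

    RECIPE_QA_CHECKS = {
        "missing-summary": qa_check_missing_summary,
        "missing-homepage": qa_check_missing_homepage,
    }

    def run_recipe_qa(d, enabled_tests):
        # enabled_tests would come from a space-separated variable (analogous
        # to WARN_QA/ERROR_QA), so layers can add checks without editing this.
        issues = []
        for name in enabled_tests.split():
            check = RECIPE_QA_CHECKS.get(name)
            if check:
                msg = check(d)
                if msg:
                    issues.append("%s: %s" % (name, msg))
        return issues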
<RP>
jstephan: with oe-selftest, tests can run in parallel. The tests run in separate build directories and have separate copies of meta-selftest/ but share a copy of meta/. I think one of your changes writes to meta, which means other build processes using that directory can be corrupted
prabhakarlad has quit [Quit: Client closed]
<RP>
jstephan: I haven't identified which test is at fault, just that the change appears to be to base-files and you recently added/changed tests which use base-files.
<RP>
jstephan: if you could test against a recipe in meta-selftest, for example, the issue would be solved
goliath has joined #yocto
DaveNuge has joined #yocto
<jstephan>
RP: I can use a recipe from meta-selftest, but how would this be different?
<RP>
jstephan: meta-selftest is copied per parallel process in oe-selftest
<RP>
so writes to meta-selftest will go to a copy, not the real one
<RP>
well, not a shared one
speeder__ has joined #yocto
<kanavin>
Saur, when recipe_qa was added, there were only two tests (presence of maintainer entry, presence of SUMMARY/HOMEPAGE), so I didn't bother with extensibility infrastructure.
<jstephan>
RP: oh! got it! did not notice it :)
<Saur>
kanavin: Fair enough. I'll see if I can improve it a bit then.
speeder_ has quit [Ping timeout: 276 seconds]
<kanavin>
Saur, you can also consider how to register the test results into sstate, and present them when restored from sstate in a way identical to running the tests, as that is very much needed
<jstephan>
RP: also I think I understand the confusion here: your comment on the series is on the wrong test. I have another test that modifies the base-files recipe itself... I think this is the one causing the metadata corruption
<kanavin>
package_qa suffers from the same issue
<Saur>
kanavin: That ... is a bit more than I was planning to do at this stage...
<kanavin>
Saur, welcome to yocto upstream :)
<kanavin>
'consider' is easier than 'implement' though
<RP>
kanavin: if we fix the "restore warning messages" from sstate bug, that would fix this too
<RP>
jstephan: ah, right, sorry. I took a guess at a possible patch rather than saying it was definitely that one
<RP>
jstephan: there shouldn't be any writes into meta/ from the tests
<jstephan>
RP: noted!
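A minimal sketch of the safe pattern being described: keep test-driven recipe tweaks inside meta-selftest, which oe-selftest copies per parallel process, and never write into the shared meta/. The helper names and the 'selftest-ed' recipe are assumptions based on oeqa.selftest.case, not a verified test:

    from oeqa.selftest.case import OESelftestTestCase
    from oeqa.utils.commands import bitbake

    class ExampleRecipeTweak(OESelftestTestCase):
        def test_tweaked_recipe_builds(self):
            # Written into this build's own copy of meta-selftest, so other
            # oe-selftest processes sharing meta/ are unaffected.
            self.write_recipeinc("selftest-ed", 'SUMMARY = "tweaked for the test"')
            try:
                bitbake("selftest-ed")
            finally:
                self.delete_recipeinc("selftest-ed")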
speeder_ has joined #yocto
Saur_Home32 has quit [Quit: Client closed]
Saur_Home32 has joined #yocto
speeder__ has quit [Ping timeout: 246 seconds]
Vonter has quit [Ping timeout: 246 seconds]
Vonter has joined #yocto
yudjinn has joined #yocto
speeder__ has joined #yocto
alperak has quit [Quit: Client closed]
neofutur has quit [Ping timeout: 255 seconds]
speeder_ has quit [Ping timeout: 252 seconds]
neofutur_ has joined #yocto
speeder_ has joined #yocto
speeder__ has quit [Ping timeout: 255 seconds]
rob_w has quit [Quit: Leaving]
speeder_ has quit [Ping timeout: 256 seconds]
<jstephan>
RP: just sent a patch for this, hopefully it should be good, let me know if more work is required!
<RP>
jstephan: without going through your patches carefully it will be hard for me to tell. As long as you're not modifying things in meta/ we should be good though