<alessioigor>
JaMa: Your patch fixes the issue for me (Dunfell and Kirkstone on Ubuntu 18.04). So thank you!
<barath>
anyone seen instances of "filename './control.tar.gz' not found" during do_rootfs before? In the one instance where I've been able to inspect the tmp dir state when it happens (it was during a CI job, so a bit hard to inspect), the ipk file it tries to unpack is empty/corrupt
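A quick way to sanity-check a suspect package, as a rough sketch: this assumes the outer .ipk is a gzipped tar (the format opkg-build normally produces), and the package path is hypothetical:

    import tarfile

    IPK = "tmp/deploy/ipk/core2-64/somepkg_1.0-r0_core2-64.ipk"  # hypothetical path

    try:
        with tarfile.open(IPK, "r:gz") as outer:
            names = outer.getnames()
            # a healthy ipk contains ./debian-binary, ./control.tar.gz, ./data.tar.gz
            if "./control.tar.gz" not in names:
                print("control.tar.gz missing:", names)
    except tarfile.ReadError:
        print("not a valid gzipped tar; the package is empty or corrupt")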
<RP>
barath: I've not seen that, no. You'd have to track down the build the bad ipk was created in to get further :/
<JPEW>
The thread wasn't really the reason. The problem with the original code was that it didn't read anything until the process exited, which is bad
<RP>
JPEW: ok, cool. That sounds like this could help a few different things then :)
<JPEW>
You have to read the data while the process is running (until you get EOF), otherwise the pipe fills up and the kernel suspends the child process
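A minimal sketch of the pattern JPEW describes, using a generic subprocess rather than the actual bitbake code:

    import subprocess

    # Broken pattern: nothing is read until the process exits. Once the
    # child writes more than the pipe buffer (~64 KiB on Linux), its
    # write() blocks, it never exits, and wait() never returns:
    #
    #   proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    #   proc.wait()
    #   data = proc.stdout.read()

    # Working pattern: drain the pipe while the child runs, until EOF.
    def run_and_stream(cmd):
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        chunks = []
        while True:
            chunk = proc.stdout.read(4096)
            if not chunk:  # EOF: the child closed its end of the pipe
                break
            chunks.append(chunk)
        proc.wait()
        return b"".join(chunks)

    print(len(run_and_stream(["seq", "1", "200000"])))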
<JPEW>
Hopefully!
<RP>
JPEW: right, that makes sense. I didn't look at what it was doing back then
* RP
wonders about the best way to test this
* RP
tests on the autobuilder
<tgamblin>
RP: the python3 ptest inconsistency is a bit messier than I thought. Adding gcc still didn't solve it, as it was complaining about not having cc1plus; I put g++ in, and now it seems to be complaining that the test requires distutils, which is about to be removed in 3.12. I'm wondering if we should maybe just disable that test?
<RP>
tgamblin: it is sounding a bit crazy. Did upstream remove that test?
<tgamblin>
wonder if it'll apply cleanly on 3.11.4
<RP>
tgamblin: if they get rid of the test, I'm happy to just patch it out for now
<RP>
ah, they removed part of the test?
<tgamblin>
RP: they replaced distutils with setuptools and added some other tweaks in that commit. Checking to see if it happens to apply cleanly now (doubt it)
<RP>
tgamblin: fair enough, sounds good
<tgamblin>
RP: It does not. Should I patch it out until we get 3.12?
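For reference, one way to patch a test out is to guard it with a skip rather than carrying the full upstream setuptools conversion; a hedged sketch, where the test and method names are hypothetical, not the actual CPython test:

    import unittest

    try:
        import distutils  # removed from the stdlib in Python 3.12
        HAVE_DISTUTILS = True
    except ImportError:
        HAVE_DISTUTILS = False

    class TestBuildExtension(unittest.TestCase):  # hypothetical test case
        @unittest.skipUnless(HAVE_DISTUTILS, "requires distutils (removed in 3.12)")
        def test_compiler_available(self):
            from distutils.ccompiler import new_compiler
            self.assertIsNotNone(new_compiler())

    if __name__ == "__main__":
        unittest.main()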
* tgamblin
checks to see how many patches we might have to carry to make it work
* tgamblin
might be getting somewhere
<mischief>
why would the non-native one be in the native workdir though?
<RP>
mischief: given that context, it isn't that :/
<RP>
mischief: left over from a previous build? Had the task executed previously? task.order in WORKDIR/tmp would tell you
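For reference: the per-recipe task log RP refers to is typically ${WORKDIR}/temp/log.task_order in current bitbake, recording one line per executed task. A small sketch for checking it, with a hypothetical recipe workdir:

    from pathlib import Path

    # hypothetical workdir; substitute your MACHINE/recipe/version
    workdir = Path("tmp/work/core2-64-poky-linux/acl/2.3.1-r0")

    log = workdir / "temp" / "log.task_order"
    if log.exists():
        print(log.read_text())  # one executed task per line, oldest first
    else:
        print("no task order log: no task has executed in this workdir")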
<mischief>
RP: this runs in a container with an empty build dir every run. We do have an sstate cache mounted from the host and an sstate mirror in S3, though
<mischief>
RP: we run two bitbake invocations: bitbake --setscene-only image && bitbake --skip-setscene image. This is to avoid a race we had with sstate. It fails in the --skip-setscene phase. Could it be an sstate issue?
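The two-phase pattern mischief describes, as a minimal sketch of the CI step (the image name is hypothetical):

    import subprocess

    IMAGE = "core-image-minimal"  # hypothetical target

    # Phase 1: run only the setscene (sstate-restoring) tasks.
    subprocess.run(["bitbake", "--setscene-only", IMAGE], check=True)
    # Phase 2: run the remaining real tasks; never fall back to setscene.
    subprocess.run(["bitbake", "--skip-setscene", IMAGE], check=True)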
<RP>
mischief: I'm wondering if each phase triggers the task...